id stringlengths 2 115 | lastModified stringlengths 24 24 | tags list | author stringlengths 2 42 ⌀ | description stringlengths 0 68.7k ⌀ | citation stringlengths 0 10.7k ⌀ | cardData null | likes int64 0 3.55k | downloads int64 0 10.1M | card stringlengths 0 1.01M |
|---|---|---|---|---|---|---|---|---|---|
alkzar90/croupier-mtg-dataset | 2022-08-02T01:41:48.000Z | [
"task_categories:image-classification",
"task_ids:multi-class-image-classification",
"annotations_creators:found",
"size_categories:1K<n<10K",
"source_datasets:original",
"license:apache-2.0",
"mgt",
"magic-card-game",
"creature-dataset",
"region:us"
] | alkzar90 | null | null | null | 2 | 34 | ---
annotations_creators:
- found
language: []
language_creators: []
license:
- apache-2.0
multilinguality: []
pretty_name: 'Croupier: a Magic the Gathering creatures dataset'
size_categories:
- 1K<n<10K
source_datasets:
- original
tags:
- mgt
- magic-card-game
- creature-dataset
task_categories:
- image-classification
task_ids:
- multi-class-image-classification
---
## Dataset Description
- **Homepage:** the [Gatherer](https://gatherer.wizards.com/Pages/)
- **Repository:** https://github.com/alcazar90/croupier-mtg-dataset
### Dataset Summary
A card-image dataset covering four creature types from the Magic: The Gathering card game: elf, goblin, knight, and zombie.
## Dataset Creation
All card information from the Magic: The Gathering card game is publicly available on the
[Gatherer](https://gatherer.wizards.com/Pages/) website, the official Magic card database. This dataset is a
subset selection of four kinds of creatures from the game. |
valurank/News_headlines | 2022-08-17T08:19:18.000Z | [
"license:other",
"region:us"
] | valurank | null | null | null | 0 | 34 | ---
license: other
---
|
joelniklaus/mc4_legal | 2023-03-20T23:24:13.000Z | [
"task_categories:fill-mask",
"annotations_creators:other",
"language_creators:found",
"multilinguality:multilingual",
"size_categories:10M<n<100M",
"source_datasets:original",
"language:bg",
"language:cs",
"language:da",
"language:de",
"language:el",
"language:en",
"language:es",
"language... | joelniklaus | null | 3 | 34 | ---
annotations_creators:
- other
language_creators:
- found
language:
- bg
- cs
- da
- de
- el
- en
- es
- et
- fi
- fr
- ga
- hu
- it
- lt
- lv
- mt
- nl
- pl
- pt
- ro
- sk
- sl
- sv
license:
- cc-by-4.0
multilinguality:
- multilingual
paperswithcode_id: null
pretty_name: "MC4_Legal: A Corpus Covering the Legal Part of MC4 for European Languages"
size_categories:
- 10M<n<100M
source_datasets:
- original
task_categories:
- fill-mask
---
# Dataset Card for MC4_Legal: A Corpus Covering the Legal Part of MC4 for European Languages
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:** [GitHub](https://github.com/JoelNiklaus/LegalDatasets/tree/main/pretrain/mc4_legal)
- **Paper:**
- **Leaderboard:**
- **Point of Contact:** [Joel Niklaus](mailto:joel.niklaus.2@bfh.ch)
### Dataset Summary
This dataset contains large text resources (~133GB in total) from mc4 filtered for legal data that can be used for pretraining language models.
Use the dataset like this:
```python
from datasets import load_dataset
dataset = load_dataset("joelito/mc4_legal", "de", split='train', streaming=True)
```
### Supported Tasks and Leaderboards
The dataset supports the task of masked language modeling.
### Languages
The following languages are supported: bg, cs, da, de, el, en, es, et, fi, fr, ga, hu, it, lt, lv, mt, nl, pl, pt, ro, sk, sl, sv
## Dataset Structure
### Data Instances
The file format is jsonl.xz and there is one split available ("train").
| Source | Size (MB) | Words | Documents | Words/Document |
|:---------|------------:|------------:|------------:|-----------------:|
| all | 448980 | 28599300521 | 9873288 | 2896 |
| bg | 57 | 2390349 | 379 | 6306 |
| cs | 31005 | 1840827375 | 677796 | 2715 |
| da | 162 | 10466716 | 3231 | 3239 |
| de | 105739 | 6184578784 | 3164461 | 1954 |
| el | 30 | 1155977 | 307 | 3765 |
| en | 13734 | 966539309 | 359283 | 2690 |
| es | 132053 | 9058939804 | 2281888 | 3969 |
| et | 2059 | 110198368 | 49987 | 2204 |
| fi | 1270 | 62799074 | 44875 | 1399 |
| fr | 30878 | 2117306229 | 598983 | 3534 |
| ga | 1 | 32772 | 8 | 4096 |
| hu | 4677 | 244911748 | 58857 | 4161 |
| it | 46957 | 3053920779 | 990823 | 3082 |
| lt | 156 | 9142223 | 1529 | 5979 |
| lv | 1 | 58702 | 16 | 3668 |
| mt | 65 | 3479869 | 731 | 4760 |
| nl | 326 | 21962633 | 6875 | 3194 |
| pl | 37950 | 2235839721 | 827641 | 2701 |
| pt | 20120 | 1338147828 | 382173 | 3501 |
| ro | 8816 | 551372510 | 136513 | 4038 |
| sk | 5850 | 349265172 | 130701 | 2672 |
| sl | 1742 | 107493024 | 32574 | 3299 |
| sv | 5332 | 328471555 | 123657 | 2656 |
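As a quick sanity check (a sketch, not part of the dataset tooling), the Words/Document column appears to be the floor of Words divided by Documents:

```python
# Spot-check a few rows of the statistics table above:
# (words, documents, reported words/document).
rows = {
    "bg": (2_390_349, 379, 6306),
    "ga": (32_772, 8, 4096),
    "de": (6_184_578_784, 3_164_461, 1954),
}
for lang, (words, docs, expected) in rows.items():
    assert words // docs == expected, lang
print("Words/Document column is consistent for the sampled rows")
```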
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
The dataset was created by filtering mc4 for legal data.
We used terms indicating legal citations to get the texts.
Note that this dataset can be quite noisy, and the quality is not known.
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@JoelNiklaus](https://github.com/joelniklaus) for adding this dataset.
| ||
sinhala-nlp/SemiSOLD | 2022-12-20T20:21:26.000Z | [
"region:us"
] | sinhala-nlp | null | null | null | 0 | 34 | # SOLD - A Benchmark for Sinhala Offensive Language Identification
In this repository, we introduce the Sinhala Offensive Language Dataset **(SOLD)** and present multiple experiments on this dataset. **SOLD** is a manually annotated dataset containing 10,000 posts from Twitter, annotated as offensive or not offensive at both the sentence level and the token level. **SOLD** is the largest offensive language dataset compiled for Sinhala. We also introduce **SemiSOLD**, a larger dataset containing more than 145,000 Sinhala tweets, annotated following a semi-supervised approach.
:warning: This repository contains texts that may be offensive and harmful.
## Annotation
We use an annotation scheme split into two levels, deciding (a) the offensiveness of a tweet (sentence level) and (b) the tokens that contribute to that offence (token level).
### Sentence-level
Our sentence-level offensive language detection follows level A in OLID [(Zampieri et al., 2019)](https://aclanthology.org/N19-1144/). We asked annotators to discriminate between the following types of tweets:
* **Offensive (OFF)**: Posts containing any form of non-acceptable language (profanity) or a targeted offence, which can be veiled or direct. This includes insults, threats, and posts containing profane language or swear words.
* **Not Offensive (NOT)**: Posts that do not contain offense or profanity.
Each tweet was annotated with one of the above labels, which we used as the labels in sentence-level offensive language identification.
### Token-level
To provide a human explanation of the labelling, we collect rationales for the offensive language. Following HateXplain [(Mathew et al., 2021)](https://ojs.aaai.org/index.php/AAAI/article/view/17745), we define a rationale as a specific text segment that justifies the human annotator's decision on the sentence-level label. We therefore ask the annotators to highlight the particular tokens in a tweet that support their judgement about the sentence-level label (offensive, not offensive). Specifically, if a tweet is offensive, we guide the annotators to highlight the tokens that support the judgement, including non-verbal expressions such as emojis and morphemes used to convey the intention. We use these as token-level offensive labels in SOLD.

## Data
SOLD is released on HuggingFace. It can be loaded into pandas dataframes using the following code.
```python
from datasets import Dataset
from datasets import load_dataset
sold_train = Dataset.to_pandas(load_dataset('sinhala-nlp/SOLD', split='train'))
sold_test = Dataset.to_pandas(load_dataset('sinhala-nlp/SOLD', split='test'))
```
The dataset contains the following columns.
* **post_id** - Twitter ID
* **text** - Post text
* **tokens** - Tokenised text. Each token is separated by a space.
* **rationals** - Offensive-token indicators: 1 if a token is offensive, 0 otherwise.
* **label** - Sentence-level label, offensive or not-offensive.
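A minimal sketch of how these columns fit together; the record below is fabricated for illustration, but the field names follow the list above:

```python
# Hypothetical SOLD-style record: field names mirror the card above,
# but the values here are invented for illustration.
record = {
    "post_id": "12345",
    "tokens": "tok1 tok2 tok3 tok4",  # tokens separated by spaces
    "rationals": [0, 1, 0, 1],        # 1 = token marked offensive
    "label": "OFF",
}

def offensive_tokens(record):
    """Return the tokens flagged as offensive by the token-level rationales."""
    tokens = record["tokens"].split(" ")
    return [tok for tok, flag in zip(tokens, record["rationals"]) if flag == 1]

print(offensive_tokens(record))  # the tokens whose rationale flag is 1
```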

SemiSOLD is also released on HuggingFace and can be loaded into a pandas dataframe using the following code.
```python
from datasets import Dataset
from datasets import load_dataset
semi_sold = Dataset.to_pandas(load_dataset('sinhala-nlp/SemiSOLD', split='train'))
```
The dataset contains the following columns.
* **post_id** - Twitter ID
* **text** - Post text
Furthermore, it contains predicted offensiveness scores from the following classifiers trained on the SOLD training set: xlmr, xlmt, mbert, sinbert, lstm_ft, cnn_ft, lstm_cbow, cnn_cbow, lstm_sl, cnn_sl and svm.
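For illustration, a sketch of how the per-classifier scores might be combined into a single ensemble score. The column values below are invented, and plain averaging is just one plausible aggregation, not the method used by the authors:

```python
# Classifier-score columns as listed above; values are assumed to be
# floats in [0, 1], and this row is fabricated for illustration.
CLASSIFIERS = ["xlmr", "xlmt", "mbert", "sinbert", "lstm_ft", "cnn_ft",
               "lstm_cbow", "cnn_cbow", "lstm_sl", "cnn_sl", "svm"]

row = {name: 0.8 for name in CLASSIFIERS}
row["svm"] = 0.2  # one dissenting model

def ensemble_score(row, classifiers=CLASSIFIERS):
    """Mean predicted offensiveness across the available classifiers."""
    scores = [row[c] for c in classifiers]
    return sum(scores) / len(scores)

print(round(ensemble_score(row), 3))
```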
## Experiments
Clone the repository and install the required libraries using the following command (preferably inside a conda environment):
~~~
pip install -r requirements.txt
~~~
### Sentence-level
Sentence-level transformer-based experiments can be executed using the following command.
~~~
python -m experiments.sentence_level.sinhala_deepoffense
~~~
The command takes the following arguments:
~~~
--model_type : Type of the transformer model (bert, xlmroberta, roberta, etc.).
--model_name : The exact architecture and trained weights to use. This may be a Hugging Face Transformers compatible pre-trained model, a community model, or the path to a directory containing model files.
--transfer : Whether to perform transfer learning or not (true or false).
--transfer_language : The initial language if transfer learning is performed (hi, en or si).
* hi - Perform transfer learning from HASOC 2019 Hindi dataset (Modha et al., 2019).
* en - Perform transfer learning from Offenseval English dataset (Zampieri et al., 2019).
* si - Perform transfer learning from CCMS Sinhala dataset (Rathnayake et al., 2021).
--augment : Perform semi supervised data augmentation.
--std : Standard deviation of the models to cut down data augmentation.
--augment_type: The type of the data augmentation.
* off - Augment only the offensive instances.
* normal - Augment both offensive and non-offensive instances.
~~~
Sentence-level CNN- and LSTM-based experiments can be executed using the following command.
~~~
python -m experiments.sentence_level.sinhala_offensive_nn
~~~
The command takes the following arguments:
~~~
--model_type : Type of the architecture (cnn2D, lstm).
--model_name : The exact word embeddings to use. This may be a gensim model, or the path to a word embedding file.
--augment : Perform semi supervised data augmentation.
--std : Standard deviation of the models to cut down data augmentation.
--augment_type: The type of the data augmentation.
* off - Augment only the offensive instances.
* normal - Augment both offensive and non-offensive instances.
~~~
### Token-level
Token-level transformer-based experiments can be executed using the following command.
~~~
python -m experiments.sentence_level.sinhala_mudes
~~~
The command takes the following arguments:
~~~
--model_type : Type of the transformer model (bert, xlmroberta, roberta, etc.).
--model_name : The exact architecture and trained weights to use. This may be a Hugging Face Transformers compatible pre-trained model, a community model, or the path to a directory containing model files.
--transfer : Whether to perform transfer learning or not (true or false).
--transfer_language : The initial language if transfer learning is performed (hatex or tsd).
* hatex - Perform transfer learning from HateXplain dataset (Mathew et al., 2021).
* tsd - Perform transfer learning from TSD dataset (Pavlopoulos et al., 2021).
~~~
Token-level LIME experiments can be executed using the following command.
~~~
python -m experiments.sentence_level.sinhala_lime
~~~
The command takes the following arguments:
~~~
--model_type : Type of the transformer model (bert, xlmroberta, roberta, etc.).
--model_name : The exact architecture and trained weights to use. This may be a Hugging Face Transformers compatible pre-trained model, a community model, or the path to a directory containing model files.
~~~
## Acknowledgments
We want to acknowledge Janitha Hapuarachchi, Sachith Suraweera, Chandika Udaya Kumara and Ridmi Randima, the team of volunteer annotators that provided their free time and efforts to help us produce SOLD.
## Citation
If you are using the dataset or the models, please cite the following paper:
~~~
@article{ranasinghe2022sold,
title={SOLD: Sinhala Offensive Language Dataset},
author={Ranasinghe, Tharindu and Anuradha, Isuri and Premasiri, Damith and Silva, Kanishka and Hettiarachchi, Hansi and Uyangodage, Lasitha and Zampieri, Marcos},
journal={arXiv preprint arXiv:2212.00851},
year={2022}
}
~~~ |
dreamproit/bill_summary_us | 2022-11-09T20:01:15.000Z | [
"task_categories:summarization",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:en",
"bills",
"region:us"
] | dreamproit | null | null | null | 1 | 34 | ---
annotations_creators:
- expert-generated
language:
- en
language_creators:
- expert-generated
license: []
multilinguality:
- monolingual
pretty_name: bill_summarization
size_categories:
- 1K<n<10K
source_datasets:
- original
tags:
- bills
task_categories:
- summarization
task_ids: []
---
# Dataset Card for "bill_summarization"
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://github.com/dreamproit/BillML
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Leaderboard:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Dataset Summary
Dataset for summarization of US Congressional bills (bill_summarization).
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
English
## Dataset Structure
### Data Instances
#### default
- **Size of downloaded dataset files:** 186 MB
- **Total amount of disk used:** 177 MB
### Data Fields
- id: id of the bill.
- sections: list of bill sections with section_id and text.
- text: bill text.
- text_len: number of characters in the text.
- summary: summary of the bill.
- summary_len: number of characters in the summary.
- title: official title of the bill.
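To make the field relationships concrete, here is a toy record mirroring the schema above; all values are invented for illustration:

```python
# Toy record following the listed fields; values are fabricated.
bill = {
    "id": "hr1234-117",  # hypothetical identifier
    "sections": [{"section_id": "s1", "text": "Short title."}],
    "text": "A bill to illustrate the schema of this dataset.",
    "summary": "Illustrative bill.",
    "title": "Example Act",
}
# text_len and summary_len are character counts of the two text fields.
bill["text_len"] = len(bill["text"])
bill["summary_len"] = len(bill["summary"])

def compression_ratio(bill):
    """How much shorter the summary is than the bill text."""
    return bill["summary_len"] / bill["text_len"]

print(f"{compression_ratio(bill):.2f}")
```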
### Data Splits
No splits.
## Dataset Creation
### Curation Rationale
Bills (proposed laws) are specialized, structured documents with great public significance. Often, the language of a bill may not directly explain the potential impact of the legislation. For bills in the U.S. Congress, the Congressional Research Service of the Library of Congress provides professional, non-partisan summaries of bills. These are valuable for public understanding of the bills and serve as an essential part of the lawmaking process, helping readers understand the meaning and potential legislative impact of each bill.
This dataset collects the text of bills, some metadata, as well as the CRS summaries. In order to build more accurate ML models for bill summarization it is important to have a clean dataset, alongside the professionally-written CRS summaries. ML summarization models built on generic data are bound to produce less accurate results (sometimes creating summaries that describe the opposite of a bill's actual effect). In addition, models that attempt to summarize all bills (some of which may reach 4000 pages long) may also be inaccurate due to the current limitations of summarization on long texts.
As a result, this dataset collects bill and summary information only for small bills (10 sections or fewer). It is meant as a starting point for community-driven development of ML models for bill summarization. In the future, we may expand or enhance the dataset in a number of ways: adding metadata, including larger bills, and providing feedback from expert legislative analysts on any automated summaries that are produced.
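The "small bills" criterion described above can be sketched as a simple filter; the records here are toy stand-ins, not real dataset rows:

```python
# Toy records with the `sections` field from the schema; contents invented.
bills = [
    {"id": "a", "sections": [{} for _ in range(3)]},
    {"id": "b", "sections": [{} for _ in range(25)]},
]

# Keep only bills with 10 sections or fewer, per the curation rationale.
small = [b for b in bills if len(b["sections"]) <= 10]
print([b["id"] for b in small])
```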
### Source Data
#### Initial Data Collection and Normalization
The data consists of the US congress bills that were collected from the [Govinfo](https://github.com/unitedstates/congress) service provided by the United States Government Publishing Office (GPO) under CC0-1.0 license.
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
dreamproit.com
### Licensing Information
Bill and summary information are public and unlicensed, as data produced by government entities. The collection and enhancement work that we provide for this dataset, to the degree it may be covered by copyright, is released under CC0 (https://creativecommons.org/share-your-work/public-domain/cc0/).
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@BorodaUA](https://github.com/BorodaUA), [@alexbojko](https://github.com/alexbojko) for adding this dataset. |
PlanTL-GOB-ES/WikiCAT_esv2 | 2023-07-27T09:13:16.000Z | [
"task_categories:text-classification",
"task_ids:multi-class-classification",
"annotations_creators:automatically-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:unknown",
"language:es",
"license:cc-by-sa-3.0",
"region:us"
] | PlanTL-GOB-ES | WikiCAT: Text Classification Spanish dataset from the Viquipedia | null | 0 | 34 | ---
YAML tags:
annotations_creators:
- automatically-generated
language_creators:
- found
language:
- es
license:
- cc-by-sa-3.0
multilinguality:
- monolingual
pretty_name: wikicat_esv2
size_categories:
- unknown
source_datasets: []
task_categories:
- text-classification
task_ids:
- multi-class-classification
---
# WikiCAT_es: Spanish Text Classification dataset
## Dataset Description
- **Paper:**
- **Point of Contact:** carlos.rodriguez1@bsc.es
**Repository**
### Dataset Summary
WikiCAT_es is a Spanish corpus for thematic Text Classification tasks. It is created automatically from Wikipedia and Wikidata sources, and contains 8401 articles from the Spanish Wikipedia classified under 12 different categories.
This dataset was developed by BSC TeMU as part of the PlanTL project, and is intended as an evaluation of LT capabilities to generate useful synthetic corpora.
### Supported Tasks and Leaderboards
Text classification, Language Model
### Languages
ES- Spanish
## Dataset Structure
### Data Instances
Two json files, one for each split.
### Data Fields
We use a simple schema with the article text and associated labels, without further metadata.
#### Example:
<pre>
{'sentence': 'La economía de Reunión se ha basado tradicionalmente en la agricultura. La caña de azúcar ha sido el cultivo principal durante más de un siglo, y en algunos años representa el 85% de las exportaciones. El gobierno ha estado impulsando el desarrollo de una industria turística para aliviar el alto desempleo, que representa más del 40% de la fuerza laboral.(...) El PIB total de la isla fue de 18.800 millones de dólares EE.UU. en 2007., 'label': 'Economía'}
</pre>
#### Labels
'Religión', 'Entretenimiento', 'Música', 'Ciencia_y_Tecnología', 'Política', 'Economía', 'Matemáticas', 'Humanidades', 'Deporte', 'Derecho', 'Historia', 'Filosofía'
### Data Splits
* hfeval_esv5.json: 1681 label-document pairs
* hftrain_esv5.json: 6716 label-document pairs
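For illustration, computing a label distribution over records shaped like the `{'sentence': ..., 'label': ...}` example above; the rows below are fabricated stand-ins for the hftrain/hfeval files:

```python
from collections import Counter

# Fabricated rows following the sentence/label shape shown earlier.
rows = [
    {"sentence": "texto de ejemplo", "label": "Economía"},
    {"sentence": "otro texto", "label": "Economía"},
    {"sentence": "un tercero", "label": "Historia"},
]

def label_distribution(rows):
    """Count how many documents fall under each thematic category."""
    return Counter(r["label"] for r in rows)

print(label_distribution(rows).most_common())
```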
## Dataset Creation
### Methodology
The "Category" pages represent the topics.
For each topic, we extract the pages associated with that first level of the hierarchy, and use the summary as the representative text.
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
The source data are thematic categories in the different Wikipedias
#### Who are the source language producers?
### Annotations
#### Annotation process
Automatic annotation
#### Who are the annotators?
[N/A]
### Personal and Sensitive Information
No personal or sensitive information included.
## Considerations for Using the Data
### Social Impact of Dataset
We hope this corpus contributes to the development of language models in Spanish.
### Discussion of Biases
We are aware that this data might contain biases. We have not applied any steps to reduce their impact.
### Other Known Limitations
[N/A]
## Additional Information
### Dataset Curators
Text Mining Unit (TeMU) at the Barcelona Supercomputing Center (bsc-temu@bsc.es).
For further information, send an email to (plantl-gob-es@bsc.es).
This work was funded by the [Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA)](https://avancedigital.mineco.gob.es/en-us/Paginas/index.aspx) within the framework of the [Plan-TL](https://plantl.mineco.gob.es/Paginas/index.aspx).
### Licensing Information
This work is licensed under [CC Attribution 4.0 International](https://creativecommons.org/licenses/by/4.0/) License.
Copyright by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) (2022)
### Contributions
[N/A]
| |
1aurent/ICDAR-2011 | 2023-09-23T18:58:09.000Z | [
"size_categories:1K<n<10K",
"license:unknown",
"online handwriting",
"offline handwriting",
"signature",
"verification",
"region:us"
] | 1aurent | null | null | null | 0 | 34 | ---
license: unknown
size_categories:
- 1K<n<10K
tags:
- online handwriting
- offline handwriting
- signature
- verification
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': genuine
'1': forgeries
- name: forger
dtype: int32
- name: writer
dtype: uint32
- name: attempt
dtype: uint32
splits:
- name: train
num_bytes: 240159596.0
num_examples: 937
- name: test
num_bytes: 466376280.094
num_examples: 2534
download_size: 793149429
dataset_size: 706535876.094
---
# ICDAR 2011 Signature Verification Competition (SigComp2011)
http://iapr-tc11.org/mediawiki/index.php/ICDAR_2011_Signature_Verification_Competition_(SigComp2011)
The collection contains simultaneously acquired online and offline signature samples. The offline dataset comprises PNG images scanned at 400 dpi in RGB color; the online dataset comprises ASCII files with one X, Y, Z triplet per line.
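A minimal parser sketch for the online format described above. The exact separator (comma vs. whitespace) is an assumption, so this accepts both, and the sample content is invented:

```python
# Parse an online-signature ASCII file: one "X, Y, Z" triplet per line.
def parse_online_signature(text):
    points = []
    for line in text.strip().splitlines():
        # Normalise commas to spaces so either separator works.
        x, y, z = line.replace(",", " ").split()
        points.append((int(x), int(y), int(z)))
    return points

sample = "100 200 512\n101 203 540\n103 207 0\n"
print(parse_online_signature(sample))
```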
Marcus Liwicki, Michael Blumenstein, Elisa van den Heuvel, Charles E.H. Berger, Reinoud D. Stoel, Bryan Found, Xiaohong Chen, Muhammad Imran Malik. "SigComp11: Signature Verification Competition for On- and Offline Skilled Forgeries", Proc. 11th Int. Conference on Document Analysis and Recognition, 2011
|
Multimodal-Fatima/OxfordFlowers_test | 2023-06-02T02:11:11.000Z | [
"region:us"
] | Multimodal-Fatima | null | null | null | 0 | 34 | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': pink primrose
'1': hard-leaved pocket orchid
'2': canterbury bells
'3': sweet pea
'4': english marigold
'5': tiger lily
'6': moon orchid
'7': bird of paradise
'8': monkshood
'9': globe thistle
'10': snapdragon
'11': colt's foot
'12': king protea
'13': spear thistle
'14': yellow iris
'15': globe-flower
'16': purple coneflower
'17': peruvian lily
'18': balloon flower
'19': giant white arum lily
'20': fire lily
'21': pincushion flower
'22': fritillary
'23': red ginger
'24': grape hyacinth
'25': corn poppy
'26': prince of wales feathers
'27': stemless gentian
'28': artichoke
'29': sweet william
'30': carnation
'31': garden phlox
'32': love in the mist
'33': mexican aster
'34': alpine sea holly
'35': ruby-lipped cattleya
'36': cape flower
'37': great masterwort
'38': siam tulip
'39': lenten rose
'40': barbeton daisy
'41': daffodil
'42': sword lily
'43': poinsettia
'44': bolero deep blue
'45': wallflower
'46': marigold
'47': buttercup
'48': oxeye daisy
'49': common dandelion
'50': petunia
'51': wild pansy
'52': primula
'53': sunflower
'54': pelargonium
'55': bishop of llandaff
'56': gaura
'57': geranium
'58': orange dahlia
'59': pink-yellow dahlia?
'60': cautleya spicata
'61': japanese anemone
'62': black-eyed susan
'63': silverbush
'64': californian poppy
'65': osteospermum
'66': spring crocus
'67': bearded iris
'68': windflower
'69': tree poppy
'70': gazania
'71': azalea
'72': water lily
'73': rose
'74': thorn apple
'75': morning glory
'76': passion flower
'77': lotus
'78': toad lily
'79': anthurium
'80': frangipani
'81': clematis
'82': hibiscus
'83': columbine
'84': desert-rose
'85': tree mallow
'86': magnolia
'87': cyclamen
'88': watercress
'89': canna lily
'90': hippeastrum
'91': bee balm
'92': ball moss
'93': foxglove
'94': bougainvillea
'95': camellia
'96': mallow
'97': mexican petunia
'98': bromelia
'99': blanket flower
'100': trumpet creeper
'101': blackberry lily
- name: id
dtype: int64
- name: clip_tags_ViT_L_14
sequence: string
- name: blip_caption
dtype: string
- name: LLM_Description_gpt3_downstream_tasks_ViT_L_14
sequence: string
- name: LLM_Description_opt175b_downstream_tasks_ViT_L_14
sequence: string
- name: LLM_Description_gpt3_downstream_tasks_visual_genome_ViT_L_14
sequence: string
- name: clip_tags_ViT_L_14_with_openai_classes
sequence: string
- name: clip_tags_ViT_L_14_wo_openai_classes
sequence: string
- name: Attributes_ViT_L_14_text_davinci_003
sequence: string
- name: Attributes_ViT_L_14_text_davinci_003_full
sequence: string
- name: Attributes_ViT_L_14_text_davinci_003_oxfordflowers
sequence: string
- name: clip_tags_ViT_L_14_simple_specific
dtype: string
- name: clip_tags_ViT_L_14_ensemble_specific
dtype: string
- name: clip_tags_ViT_B_16_simple_specific
dtype: string
- name: clip_tags_ViT_B_16_ensemble_specific
dtype: string
- name: clip_tags_ViT_B_32_simple_specific
dtype: string
- name: clip_tags_ViT_B_32_ensemble_specific
dtype: string
- name: Attributes_ViT_B_16_descriptors_text_davinci_003_full
sequence: string
- name: Attributes_LAION_ViT_H_14_2B_descriptors_text_davinci_003_full
sequence: string
- name: clip_tags_LAION_ViT_H_14_2B_simple_specific
dtype: string
- name: clip_tags_LAION_ViT_H_14_2B_ensemble_specific
dtype: string
splits:
- name: test
num_bytes: 275107541.0
num_examples: 6149
download_size: 261098161
dataset_size: 275107541.0
---
# Dataset Card for "OxfordFlowers_test"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Kaludi/food-category-classification-v2.0 | 2023-02-09T19:38:17.000Z | [
"task_categories:image-classification",
"region:us"
] | Kaludi | null | null | null | 0 | 34 | ---
task_categories:
- image-classification
---
# Dataset for project: food-category-classification-v2.0
## Dataset Description
This dataset for the project food-category-classification-v2.0 was scraped with the help of a bulk Google image downloader.
## Dataset Structure
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"image": "Image(decode=True, id=None)",
"target": "ClassLabel(names=['Bread', 'Dairy', 'Dessert', 'Egg', 'Fried Food', 'Fruit', 'Meat', 'Noodles', 'Rice', 'Seafood', 'Soup', 'Vegetable'], id=None)"
}
```
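A small sketch of how the ClassLabel `target` index resolves to a category name, using the names from the feature spec above:

```python
# Class names copied from the ClassLabel spec above; the lookup logic is a
# generic sketch of how an integer target maps to its label name.
NAMES = ['Bread', 'Dairy', 'Dessert', 'Egg', 'Fried Food', 'Fruit',
         'Meat', 'Noodles', 'Rice', 'Seafood', 'Soup', 'Vegetable']

def target_to_name(target):
    """Map an integer class index to its human-readable category."""
    return NAMES[target]

print(target_to_name(4))  # 'Fried Food'
```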
### Dataset Splits
This dataset is split into a train and validation split. The split sizes are as follows:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 1200 |
| valid | 300 |
|
IlyaGusev/habr | 2023-03-09T23:16:35.000Z | [
"task_categories:text-generation",
"size_categories:100K<n<1M",
"language:ru",
"language:en",
"region:us"
] | IlyaGusev | null | null | null | 13 | 34 | ---
dataset_info:
features:
- name: id
dtype: uint32
- name: language
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text_markdown
dtype: string
- name: text_html
dtype: string
- name: author
dtype: string
- name: original_author
dtype: string
- name: original_url
dtype: string
- name: lead_html
dtype: string
- name: lead_markdown
dtype: string
- name: type
dtype: string
- name: time_published
dtype: uint64
- name: statistics
struct:
- name: commentsCount
dtype: uint32
- name: favoritesCount
dtype: uint32
- name: readingCount
dtype: uint32
- name: score
dtype: int32
- name: votesCount
dtype: int32
- name: votesCountPlus
dtype: int32
- name: votesCountMinus
dtype: int32
- name: labels
sequence: string
- name: hubs
sequence: string
- name: flows
sequence: string
- name: tags
sequence: string
- name: reading_time
dtype: uint32
- name: format
dtype: string
- name: complexity
dtype: string
- name: comments
sequence:
- name: id
dtype: uint64
- name: parent_id
dtype: uint64
- name: level
dtype: uint32
- name: time_published
dtype: uint64
- name: score
dtype: int32
- name: votes
dtype: uint32
- name: message_html
dtype: string
- name: message_markdown
dtype: string
- name: author
dtype: string
- name: children
sequence: uint64
splits:
- name: train
num_bytes: 19968161329
num_examples: 302049
download_size: 3485570346
dataset_size: 19968161329
task_categories:
- text-generation
language:
- ru
- en
size_categories:
- 100K<n<1M
---
# Habr dataset
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Description](#description)
- [Usage](#usage)
- [Data Instances](#data-instances)
- [Source Data](#source-data)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
## Description
**Summary:** Dataset of posts and comments from [habr.com](https://habr.com/ru/all/), a Russian collaborative blog about IT, computer science and anything related to the Internet.
**Script:** [create_habr.py](https://github.com/IlyaGusev/rulm/blob/master/data_processing/create_habr.py)
**Point of Contact:** [Ilya Gusev](ilya.gusev@phystech.edu)
**Languages:** Russian, English, some programming code.
## Usage
Prerequisites:
```bash
pip install datasets zstandard jsonlines pysimdjson
```
Dataset iteration:
```python
from datasets import load_dataset
dataset = load_dataset('IlyaGusev/habr', split="train", streaming=True)
for example in dataset:
print(example["text_markdown"])
```
## Data Instances
```
{
"id": 12730,
"language": "ru",
"url": "https://habr.com/ru/post/12730/",
"text_markdown": "...",
"text_html": "...",
"lead_markdown": "...",
"lead_html": "...",
"type": "article",
"labels": [],
"original_author": null,
"original_url": null,
"time_published": 1185962380,
"author": "...",
"title": "Хочешь в университет — сделай презентацию",
"statistics": {
"commentsCount": 23,
"favoritesCount": 1,
"readingCount": 1542,
"score": 7,
"votesCount": 15,
"votesCountPlus": 11,
"votesCountMinus": 4
},
"hubs": [
"itcompanies"
],
"flows": [
"popsci"
],
"tags": [
"PowerPoint",
"презентация",
"абитуриенты",
],
"reading_time": 1,
"format": null,
"complexity": null,
"comments": {
"id": [11653537, 11653541],
"parent_id": [null, 11653537],
"level": [0, 1],
"time_published": [1185963192, 1185967886],
"score": [-1, 0],
"votes": [1, 0],
"message_html": ["...", "..."],
"author": ["...", "..."],
"children": [[11653541], []]
}
}
```
You can use this little helper to unflatten sequences:
```python
def revert_flattening(records):
fixed_records = []
for key, values in records.items():
if not fixed_records:
fixed_records = [{} for _ in range(len(values))]
for i, value in enumerate(values):
fixed_records[i][key] = value
return fixed_records
```
The original JSONL is already unflattened.
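As a quick, self-contained illustration, here is the helper applied to a trimmed version of the flattened `comments` mapping from the data instance above (the helper is repeated so the snippet runs on its own):

```python
def revert_flattening(records):
    # Turn a mapping of parallel lists into a list of per-item dicts.
    fixed_records = []
    for key, values in records.items():
        if not fixed_records:
            fixed_records = [{} for _ in range(len(values))]
        for i, value in enumerate(values):
            fixed_records[i][key] = value
    return fixed_records

# Flattened comments, as yielded by the `datasets` library (trimmed).
comments = {
    "id": [11653537, 11653541],
    "parent_id": [None, 11653537],
    "level": [0, 1],
}

unflattened = revert_flattening(comments)
# -> [{"id": 11653537, "parent_id": None, "level": 0},
#     {"id": 11653541, "parent_id": 11653537, "level": 1}]
```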
## Source Data
* The data source is the [Habr](https://habr.com/) website.
* API call example: [post 709430](https://habr.com/kek/v2/articles/709430).
* Processing script is [here](https://github.com/IlyaGusev/rulm/blob/master/data_processing/create_habr.py).
## Personal and Sensitive Information
The dataset is not anonymized, so individuals' names can be found in the dataset. Information about the original authors is included in the dataset where possible.
|
OllieStanley/humaneval-mbpp-codegen-qa | 2023-03-15T15:13:27.000Z | [
"region:us"
] | OllieStanley | null | null | null | 1 | 34 | ---
dataset_info:
features:
- name: INSTRUCTION
dtype: string
- name: RESPONSE
dtype: string
- name: SOURCE
dtype: string
splits:
- name: train
num_bytes: 225572
num_examples: 591
download_size: 89931
dataset_size: 225572
---
# Dataset Card for "humaneval-mbpp-codegen-qa"
This dataset contains prompt-reply (question-answer) pairs where the prompt is to create a Python function which satisfies the functionality described in a specified docstring. The responses are then the generated functions. |
0x70DA/sci_summ | 2023-03-05T18:12:54.000Z | [
"region:us"
] | 0x70DA | null | null | null | 0 | 34 | ---
dataset_info:
features:
- name: text
dtype: string
- name: summary
dtype: string
splits:
- name: validation
num_bytes: 23361937.97759879
num_examples: 4631
- name: test
num_bytes: 23487172.952651516
num_examples: 4665
- name: train
num_bytes: 176474272.610434
num_examples: 34083
download_size: 120216439
dataset_size: 223323383.54068428
---
# Dataset Card for "sci_summ"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
davebulaval/RISCBAC | 2023-08-10T22:04:01.000Z | [
"task_categories:summarization",
"task_categories:question-answering",
"task_categories:translation",
"multilinguality:monolingual",
"multilinguality:aligned",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"language:fr",
"license:cc-by-4.0",
"unsupervised",
"arxiv:23... | davebulaval | RISCBAC was created using [RISC](https://github.com/GRAAL-Research/risc), an open-source Python package data
generator. RISC generates look-alike automobile insurance contracts based on the Quebec regulatory insurance
form in French and English.
It contains 10,000 English and French insurance contracts generated using the same seed. Thus, contracts share
the same deterministic synthetic data (RISCBAC can be used as an aligned dataset). RISC can be used to generate
more data for RISCBAC. | @misc{beaucheminrisc,
title={{RISC: Generating Realistic Synthetic Bilingual Insurance
Contract}},
author={David Beauchemin and Richard Khoury},
year={2023},
eprint={2304.04212},
archivePrefix={arXiv}
} | null | 1 | 34 | ---
license:
- cc-by-4.0
multilinguality:
- monolingual
- aligned
task_categories:
- summarization
- question-answering
- translation
source_datasets:
- original
language:
- en
- fr
tags:
- unsupervised
pretty_name: Realistic Bilingual Synthetic Automobile Insurance Contract
size_categories:
- 10K<n<100K
dataset_info:
download_size: 376971
dataset_size: 611048
viewer: true
---
# Dataset Card for RISCBAC
RISCBAC was created using [RISC](https://github.com/GRAAL-Research/risc), an open-source Python package data generator. RISC generates look-alike automobile insurance contracts based on the Quebec regulatory insurance form in French and English.
It contains 10,000 English and French insurance contracts generated using the same seed. Thus, contracts share the same deterministic synthetic data (RISCBAC can be used as an aligned dataset). RISC can be used to generate more data for RISCBAC.
# Data Instances
## Default (`'fr'`)
The default configuration is the French version of the dataset, comprising 10,000 synthetic automobile insurance contracts.
## Other Option
The other configuration option is `"en"`, the English version, likewise comprising 10,000 synthetic automobile insurance contracts.
# Citation Information
```
@misc{beaucheminrisc,
title={{RISC: Generating Realistic Synthetic Bilingual Insurance
Contract}},
author={David Beauchemin and Richard Khoury},
year={2023},
eprint={2304.04212},
archivePrefix={arXiv}
}
```
|
shareAI/ShareGPT-Chinese-English-90k | 2023-09-19T14:27:07.000Z | [
"license:apache-2.0",
"region:us"
] | shareAI | null | null | null | 113 | 34 | ---
license: apache-2.0
---
# ShareGPT-Chinese-English-90k: A Bilingual Chinese-English Human-Machine Q&A Dataset
A high-quality parallel Chinese-English human-machine question-answering dataset covering user questions from real, complex scenarios. It is intended for training high-quality dialogue models (and is more robust in instruction distribution than data produced by repeatedly calling API endpoints to generate machine-simulated question-answer pairs).
Features:
- 1. Provides parallel Chinese and English corpora with exactly the same meaning, enabling bilingual dialogue model training.
- 2. None of the questions are artificially imagined or faked through API polling (as with Moss); they better reflect the instruction distribution and phrasing of real user scenarios.
- 3. The ShareGPT data was collected from users' voluntary sharing, which amounts to a very natural filter (through human judgment) that screens out most dialogues with a poor experience.
Note: this data was collected at a point in time before ChatGPT had shown any noticeable decline in capability. (One guess is that the official side replaced the 150B GPT-3.5 with a distilled version of roughly 10B parameters to cut costs; another is that introducing more refusal responses degraded the model's ability to connect knowledge and logic.)
Training an excellent dialogue LLM depends on high-quality multi-turn dialogue datasets. If you would also like to become a volunteer,
you are welcome to join the dataset QQ group: 130920969, to exchange, collect and build high-quality datasets together |
mattymchen/mr | 2023-04-19T15:20:03.000Z | [
"task_categories:text-classification",
"task_ids:sentiment-classification",
"language:en",
"region:us"
] | mattymchen | null | null | null | 0 | 34 | ---
language:
- en
task_categories:
- text-classification
task_ids:
- sentiment-classification
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype: int64
splits:
- name: test
num_bytes: 1352524
num_examples: 10662
download_size: 883903
dataset_size: 1352524
---
# Dataset Card for "mr"
## Dataset Description
Movie review dataset from SentEval.
## Data Fields
- `text`: Complete sentence expressing an opinion about a film.
- `label`: Sentiment of the opinion, either negative (0) or positive (1).
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
DavidMOBrien/8000-java | 2023-04-19T15:14:06.000Z | [
"region:us"
] | DavidMOBrien | null | null | null | 0 | 34 | ---
dataset_info:
features:
- name: before
dtype: string
- name: after
dtype: string
- name: repo
dtype: string
- name: type
dtype: string
splits:
- name: train
num_bytes: 722488653.5318879
num_examples: 441596
- name: test
num_bytes: 90311899.73405604
num_examples: 55200
- name: valid
num_bytes: 90311899.73405604
num_examples: 55200
download_size: 323537982
dataset_size: 903112452.9999999
---
# Dataset Card for "8000-java"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Squish42/bluemoon-fandom-1-1-rp-cleaned | 2023-07-09T22:35:05.000Z | [
"task_categories:conversational",
"task_categories:text-generation",
"size_categories:100K<n<1M",
"language:en",
"license:wtfpl",
"not-for-all-audiences",
"roleplay",
"creative",
"region:us"
] | Squish42 | null | null | null | 23 | 34 | ---
language:
- en
pretty_name: "Bluemoon - Fandom 1x1 Roleplay"
tags:
- not-for-all-audiences
- roleplay
- creative
license: wtfpl
task_categories:
- conversational
- text-generation
size_categories:
- 100K<n<1M
---
290,544 posts of roleplay forum data scraped by a third party. The source data is not available here.
It should be effective when used to finetune for one-on-one roleplay and creative writing.
Additionally, it may help to generate various fanfiction-style writing and scenarios.
The `dataset.yaml` file contains the SHA512 hash of the source data and accurately describes each step resulting in this
dataset.
This dataset has been cleaned and formatted for use with fastchat.

 |
ccmusic-database/chest_falsetto | 2023-10-03T17:14:13.000Z | [
"task_categories:audio-classification",
"size_categories:1K<n<10K",
"language:zh",
"language:en",
"license:mit",
"music",
"art",
"region:us"
] | ccmusic-database | This database contains 1280 monophonic singing audio (.wav format) of chest and falsetto voices,
with chest voice tagged as _chest and falsetto voice tagged as _falsetto. In addition,
the Mel-spectrogram, MFCC, and spectral characteristics of each audio segment are also included,
for a total of 5120 CSV files. | @dataset{zhaorui_liu_2021_5676893,
author = {Zhaorui Liu, Monan Zhou, Shenyang Xu and Zijin Li},
title = {{Music Data Sharing Platform for Computational Musicology Research (CCMUSIC DATASET)}},
month = nov,
year = 2021,
publisher = {Zenodo},
version = {1.1},
doi = {10.5281/zenodo.5676893},
url = {https://doi.org/10.5281/zenodo.5676893}
} | null | 3 | 34 | ---
license: mit
task_categories:
- audio-classification
language:
- zh
- en
tags:
- music
- art
pretty_name: Chest voice and Falsetto Database
size_categories:
- 1K<n<10K
---
# Dataset Card for Chest voice and Falsetto Database
## Dataset Description
- **Homepage:** <https://ccmusic-database.github.io>
- **Repository:** <https://huggingface.co/datasets/ccmusic-database/chest_falsetto>
- **Paper:** <https://doi.org/10.5281/zenodo.5676893>
- **Leaderboard:** <https://ccmusic-database.github.io/team.html>
- **Point of Contact:** N/A
### Dataset Summary
This database contains 1,280 monophonic singing audio clips (.wav format) of chest and falsetto voices, with chest voice tagged as _chest_ and falsetto voice tagged as _falsetto_.
### Supported Tasks and Leaderboards
Audio classification, singing method classification, voice classification
### Languages
Chinese, English
## Dataset Structure
### Data Instances
.zip(.wav, .jpg)
### Data Fields
m_chest, f_chest, m_falsetto, f_falsetto
### Data Splits
train, validation, test
## Dataset Creation
### Curation Rationale
Lack of an existing dataset for chest voice and falsetto classification
### Source Data
#### Initial Data Collection and Normalization
Zhaorui Liu, Monan Zhou
#### Who are the source language producers?
Students from CCMUSIC
### Annotations
#### Annotation process
1,280 monophonic singing audio clips (.wav format) of chest and falsetto voices, with chest voice tagged as _chest_ and falsetto voice tagged as _falsetto_.
#### Who are the annotators?
Students from CCMUSIC
### Personal and Sensitive Information
None
## Considerations for Using the Data
### Social Impact of Dataset
Promoting the development of AI in the music industry
### Discussion of Biases
Only for chest and falsetto voices
### Other Known Limitations
Recordings are cut into slices that are too short
## Additional Information
### Dataset Curators
Zijin Li
### Evaluation
Coming soon...
### Licensing Information
```
MIT License
Copyright (c) CCMUSIC
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
```
### Citation Information
```
@dataset{zhaorui_liu_2021_5676893,
author = {Zhaorui Liu, Monan Zhou, Shenyang Xu and Zijin Li},
title = {CCMUSIC DATABASE: Music Data Sharing Platform for Computational Musicology Research},
month = {nov},
year = {2021},
publisher = {Zenodo},
version = {1.1},
doi = {10.5281/zenodo.5676893},
url = {https://doi.org/10.5281/zenodo.5676893}
}
```
### Contributions
Provide a dataset for distinguishing chest and falsetto voices |
Den4ikAI/ru_sberquad_long_answers | 2023-05-29T05:32:22.000Z | [
"task_categories:question-answering",
"task_categories:text2text-generation",
"size_categories:10K<n<100K",
"language:ru",
"license:mit",
"region:us"
] | Den4ikAI | null | null | null | 3 | 34 | ---
license: mit
task_categories:
- question-answering
- text2text-generation
language:
- ru
size_categories:
- 10K<n<100K
---
UPD 2023-05-29: negative examples added.
A dataset for answering questions about a given text.
Generated with the Den4ikAI/FRED-T5-XL_instructor model.
Differences from sberquad, xquad, etc.:
1. Answers are not one-word: they are detailed and span several sentences
2. Not suitable for training encoder models! |
nicholasKluge/reward-aira-dataset | 2023-08-30T20:50:28.000Z | [
"task_categories:text-classification",
"size_categories:10K<n<100K",
"language:pt",
"language:en",
"license:apache-2.0",
"reward model",
"instruction",
"alignment",
"region:us"
] | nicholasKluge | null | null | null | 0 | 34 | ---
license: apache-2.0
task_categories:
- text-classification
language:
- pt
- en
tags:
- reward model
- instruction
- alignment
pretty_name: Reward-Aira Dataset
size_categories:
- 10K<n<100K
dataset_info:
features:
- name: instruction
dtype: string
- name: chosen_response
dtype: string
- name: rejected_response
dtype: string
splits:
- name: english
num_bytes: 53232958
num_examples: 32675
- name: portuguese
num_bytes: 59650447
num_examples: 32675
download_size: 65455319
dataset_size: 112883405
---
# Dataset (`Reward-Aira Dataset`)
### Overview
This dataset contains a collection of prompt + completion examples of an LLM following instructions in a conversational manner. All prompts come with two possible completions (one better than the other). The dataset is available in both Portuguese and English languages.
### Dataset Details
- **Dataset Name:** Reward-Aira Dataset
- **Language:** Portuguese and English
- **Total Size:** 32,675 pairs of demonstrations (in Portuguese and English)
### Contents
The dataset consists of data frames with the following columns:
- **Prompt (`instruction`):** The initial prompt provided to the model.
- **Completion (`chosen_response`):** A completion to the prompt.
- **Completion (`rejected_response`):** A worse completion to the prompt.
```python
{
"instruction":"Why is AI Ethics important?",
"chosen_response": "The field of AI Ethics delves deeply into the intricate ethical considerations that arise with respect to AI systems. This includes the role of humanity in creating and deploying these systems, as well as the conduct of machines themselves. Broadly speaking, AI Ethics can be divided into two major categories : concerns surrounding the morality of human actions in relation to creating and using AI, and concerns regarding the moral implications of machine behavior.",
"rejected_response": "Who cares about AI Ethics? It's just a bunch of whining about humans making and using AI and bitching about what the machines do."
}
```
### Use Cases
`Reward-Aira Dataset` can be utilized to train a reward/preference model.
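As a rough sketch of how such pairs typically enter training (a common recipe, not one prescribed by this dataset), a reward model is fit with a pairwise ranking loss, −log σ(r_chosen − r_rejected), which is small when the chosen response is scored higher; here with toy scalar rewards in pure Python:

```python
import math

def pairwise_ranking_loss(r_chosen: float, r_rejected: float) -> float:
    """Bradley-Terry style loss: -log(sigmoid(r_chosen - r_rejected))."""
    return -math.log(1.0 / (1.0 + math.exp(-(r_chosen - r_rejected))))

# Toy scores a reward model might assign to one (chosen, rejected) pair.
good_ordering = pairwise_ranking_loss(r_chosen=2.0, r_rejected=-1.0)
bad_ordering = pairwise_ranking_loss(r_chosen=-1.0, r_rejected=2.0)
assert good_ordering < bad_ordering  # loss rewards ranking chosen first
```

Minimizing this loss over the dataset pushes the model to assign `chosen_response` a higher reward than `rejected_response` for the same `instruction`.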
## How to use
Available splits are `portuguese` and `english`.
```python
from datasets import load_dataset
dataset = load_dataset("nicholasKluge/reward-aira-dataset")
```
### Dataset License
The `Reward-Aira Dataset` is licensed under the Apache License, Version 2.0. See the [LICENSE](LICENSE) file for more details.
### Disclaimer
This dataset is provided as is, without any warranty or guarantee of its accuracy or suitability for any purpose. The creators and contributors of this dataset are not liable for any damages or losses arising from its use. Please review and comply with the licenses and terms of the original datasets before use. |
BAAI/COIG-PC-Lite | 2023-09-26T08:51:45.000Z | [
"language:zh",
"license:unknown",
"region:us"
] | BAAI | null | null | null | 20 | 34 | ---
extra_gated_heading: "Acknowledge license to accept the repository"
extra_gated_prompt: |
The Beijing Academy of Artificial Intelligence (hereinafter "we" or "the Academy") provides open-source datasets (hereinafter "the Datasets") through BAAI DataHub (data.baai.ac.cn) and the COIG-PC HuggingFace repository (https://huggingface.co/datasets/BAAI/COIG-PC). You may obtain the open-source datasets you need by downloading them and, subject to the usage rules of each original dataset, use them for learning, research, commercial and other purposes.
Before obtaining the open-source datasets (including but not limited to accessing, downloading, copying, disseminating, using or otherwise processing them), you should carefully read and understand this COIG-PC Open-Source Dataset Usage Notice and Disclaimer (hereinafter "this Statement"). Once you obtain the open-source datasets, by whatever means, your act of obtaining them will be deemed acceptance of the entire content of this Statement.
1. Ownership and operation of the platform
You fully understand and acknowledge that the ownership and operating rights of BAAI DataHub and the COIG-PC HuggingFace repository (including the current version and all historical versions) belong to the Beijing Academy of Artificial Intelligence, which holds the final right of interpretation and decision over this platform/tool and the open-source dataset release program.
You acknowledge and understand that, in light of updates to relevant laws and regulations and objective changes in our legal-compliance obligations, we reserve the right to update and maintain this platform/tool from time to time, or to suspend or even permanently terminate its services. We will inform you of any such circumstances within a reasonable time by announcement, email or other reasonable means, and you should make the corresponding adjustments and arrangements in good time; however, we bear no liability for any losses you incur as a result of any of the foregoing.
2. Rights claims over the open-source datasets
To make it easier for you to obtain and use the datasets for learning, research and commercial purposes, we performed the necessary format integration, data cleaning, labeling, classification, annotation and related processing on third-party original datasets to produce the open-source datasets made available to users of this platform/tool.
You acknowledge and understand that we do not claim the property rights under intellectual-property law in the open-source datasets, and accordingly we have no obligation to proactively identify and protect any intellectual property that may subsist in them; this does not, however, mean that we waive the personal rights of attribution, publication, modification and protection of integrity (if any) in the open-source datasets. Any intellectual property and corresponding lawful rights that may subsist in the original datasets remain with the original rights holders.
Moreover, making the open-source datasets available to you after reasonable arrangement and processing does not imply our endorsement of the authenticity, accuracy or undisputed status of the original datasets' intellectual property, information content, etc.; you should screen and carefully verify the open-source datasets you choose to use. You acknowledge and agree that the Academy gives no undertaking or warranty that the original datasets you choose to use are free of defects or flaws.
3. Restrictions on use of the open-source datasets
Your use of the datasets must not infringe the lawful rights and interests of us or any third party (including but not limited to copyright, patent rights, trademark rights and other intellectual property and other rights).
After obtaining the open-source datasets, you should ensure that your use does not exceed the usage rules expressly specified by the original datasets' rights holders by public notice, agreement or otherwise, including the scope, purposes and lawful uses of the original data. We remind you in good faith that if your use of the open-source datasets exceeds the originally specified scope and purposes, you may face the risk of infringing the lawful rights and interests — for example, the intellectual property — of the original datasets' rights holders, and may bear the corresponding legal liability.
4. Personal-information protection
Owing to technical limitations and the public-interest nature of the open-source datasets, we cannot guarantee that they contain no personal information, and we bear no legal liability for any personal information they may involve.
If the open-source datasets involve personal information, we bear no legal liability for any personal-information processing you may carry out in using them. We remind you in good faith that you should process personal information in accordance with the Personal Information Protection Law and other relevant laws and regulations.
To protect the lawful rights and interests of information subjects and to comply with applicable laws and administrative regulations, if in the course of using the open-source datasets you discover content that involves or may involve personal information, you should immediately stop using the parts of the datasets that involve personal information and contact us promptly through the channels set out in "6. Complaints and notifications".
5. Information-content management
We bear no legal liability for any illegal or harmful information the open-source datasets may involve.
If in the course of using the open-source datasets you discover that they involve or may involve any illegal or harmful information, you should immediately stop using the parts that involve such information and contact us promptly through the channels set out in "6. Complaints and notifications".
6. Complaints and notifications
If you believe the open-source datasets infringe your lawful rights and interests, you may contact us at 010-50955974, and we will handle your claims and complaints promptly and in accordance with the law.
To handle your claims and complaints, we may need you to provide contact details, proof of infringement, proof of identity and other materials. Please note that if you complain maliciously or make false statements, you will bear all legal liability arising therefrom (including but not limited to reasonable compensation for expenses).
7. Disclaimer
You understand and agree that, given the nature of open-source datasets, they may contain data from different sources and contributors whose authenticity, accuracy, objectivity and so on may vary, and we cannot make any undertaking as to the availability or reliability of any dataset.
Under no circumstances do we bear any legal liability for any risks that may exist in the open-source datasets, such as infringement of personal information, dissemination of illegal or harmful information, or infringement of intellectual property.
Under no circumstances do we bear any legal liability for any losses you suffer arising from or in connection with the open-source datasets (including but not limited to direct losses, indirect losses and loss of obtainable profits).
8. Miscellaneous
The open-source datasets are continually developing and changing. We may update or adjust the scope of the open-source datasets provided, or suspend, pause or terminate the provision of open-source datasets, for reasons such as business development, third-party cooperation or changes in laws and regulations.
extra_gated_fields:
Name: text
Affiliation: text
Country: text
I agree to use this model for non-commercial use ONLY: checkbox
extra_gated_button_content: "Acknowledge license"
license: unknown
language:
- zh
configs:
- config_name: default
data_files:
- split: full
path: data/full-*
- split: train
path: data/train-*
- split: valid
path: data/valid-*
- split: test
path: data/test-*
- split: Top50PerTask
path: data/Top50PerTask-*
- split: Top100PerTask
path: data/Top100PerTask-*
- split: Top200PerTask
path: data/Top200PerTask-*
dataset_info:
features:
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
- name: split
dtype: string
- name: task_name_in_eng
dtype: string
- name: task_type
struct:
- name: major
sequence: string
- name: minor
sequence: string
- name: domain
sequence: string
- name: other
dtype: string
- name: filename
dtype: string
splits:
- name: full
num_bytes: 1099400407
num_examples: 650147
- name: train
num_bytes: 410204689
num_examples: 216691
- name: valid
num_bytes: 12413560
num_examples: 16148
- name: test
num_bytes: 51472090
num_examples: 69301
- name: Top50PerTask
num_bytes: 14763925
num_examples: 19274
- name: Top100PerTask
num_bytes: 28489139
num_examples: 37701
- name: Top200PerTask
num_bytes: 51472090
num_examples: 69301
download_size: 53939740
dataset_size: 1668215900
---
# COIG Prompt Collection
## License
**Default Licensing for Sub-Datasets Without Specific License Declaration**: In instances where sub-datasets within the COIG-PC Dataset do not have a specific license declaration, the Apache License 2.0 (Apache-2.0) will be the applicable licensing terms by default.
**Precedence of Declared Licensing for Sub-Datasets**: For any sub-dataset within the COIG-PC Dataset that has an explicitly declared license, the terms and conditions of the declared license shall take precedence and govern the usage of that particular sub-dataset.
Users and developers utilizing the COIG-PC Dataset must ensure compliance with the licensing terms as outlined above. It is imperative to review and adhere to the specified licensing conditions of each sub-dataset, as they may vary.
## What is COIG-PC?
The COIG-PC Dataset is a meticulously curated and comprehensive collection of Chinese tasks and data, designed to facilitate the fine-tuning and optimization of language models for Chinese natural language processing (NLP). The dataset aims to provide researchers and developers with a rich set of resources to improve the capabilities of language models in handling Chinese text, which can be utilized in various fields such as text generation, information extraction, sentiment analysis, machine translation, among others.
COIG-PC-Lite is a subset of COIG-PC with only 200 samples from each task file. If you are looking for COIG-PC, please refer to https://huggingface.co/datasets/BAAI/COIG-PC.
## Why COIG-PC?
The COIG-PC Dataset is an invaluable resource for the domain of natural language processing (NLP) for various compelling reasons:
**Addressing Language Complexity**: Chinese is known for its intricacy, with a vast array of characters and diverse grammatical structures. A specialized dataset like COIG-PC, which is tailored for the Chinese language, is essential to adequately address these complexities during model training.
**Comprehensive Data Aggregation**: The COIG-PC Dataset is a result of an extensive effort in integrating almost all available Chinese datasets in the market. This comprehensive aggregation makes it one of the most exhaustive collections for Chinese NLP.
**Data Deduplication and Normalization**: The COIG-PC Dataset underwent rigorous manual processing to eliminate duplicate data and perform normalization. This ensures that the dataset is free from redundancy, and the data is consistent and well-structured, making it more user-friendly and efficient for model training.
**Fine-tuning and Optimization**: The dataset’s instruction-based phrasing facilitates better fine-tuning and optimization of language models. This structure allows models to better understand and execute tasks, which is particularly beneficial in improving performance on unseen or novel tasks.
The COIG-PC Dataset, with its comprehensive aggregation, meticulous selection, deduplication, and normalization of data, stands as an unmatched resource for training and optimizing language models tailored for the Chinese language and culture. It addresses the unique challenges of Chinese language processing and serves as a catalyst for advancements in Chinese NLP.
## Who builds COIG-PC?
COIG-PC builds on a foundation dataset furnished by stardust.ai, an aggregation of data collected from the Internet.
COIG-PC is also the result of a collaborative effort involving engineers and experts from more than twenty distinguished universities in China and abroad. Space constraints make it impossible to list them all; the following are a few notable institutions among the collaborators:
- Beijing Academy of Artificial Intelligence, China
<img src="https://huggingface.co/datasets/BAAI/COIG-PC-Lite/resolve/main/assets/baai.png" alt= “BAAI” height="100" width="150">
- Peking University, China
<img src="https://huggingface.co/datasets/BAAI/COIG-PC-Lite/resolve/main/assets/pku.png" alt= “PKU” height="100" width="200">
- The Hong Kong University of Science and Technology (HKUST), China
<img src="https://huggingface.co/datasets/BAAI/COIG-PC-Lite/resolve/main/assets/hkust.png" alt= “HKUST” height="100" width="200">
- The University of Waterloo, Canada
<img src="https://huggingface.co/datasets/BAAI/COIG-PC-Lite/resolve/main/assets/waterloo.png" alt= “Waterloo” height="100" width="150">
- The University of Sheffield, United Kingdom
<img src="https://huggingface.co/datasets/BAAI/COIG-PC-Lite/resolve/main/assets/sheffield.png" alt= “Sheffield” height="100" width="200">
- Beijing University of Posts and Telecommunications, China
<img src="https://huggingface.co/datasets/BAAI/COIG-PC-Lite/resolve/main/assets/bupt.png" alt= “BUPT” height="100" width="200">
- [Multimodal Art Projection](https://huggingface.co/m-a-p)
<img src="https://huggingface.co/datasets/BAAI/COIG-PC-Lite/resolve/main/assets/map.png" alt= “M.A.P” height="100" width="200">
- stardust.ai, China
<img src="https://huggingface.co/datasets/BAAI/COIG-PC-Lite/resolve/main/assets/stardust.png" alt= “stardust.ai” height="100" width="200">
- LinkSoul.AI, China
<img src="https://huggingface.co/datasets/BAAI/COIG-PC-Lite/resolve/main/assets/linksoul.png" alt= “linksoul.ai” height="100" width="200">
For the detailed list of engineers involved in the creation and refinement of COIG-PC, please refer to the paper that will be published subsequently. This paper will provide in-depth information regarding the contributions and the specifics of the dataset’s development process.
## How to use COIG-PC?
COIG-PC is structured in a **.jsonl** file format. Each line in the file represents a single data record and is structured in JSON (JavaScript Object Notation) format. Below is a breakdown of the elements within each line:
**instruction**: This is a text string that provides the instruction for the task. For example, it might tell the model what to do with the input data.
**input**: This is the input data that the model needs to process. In the context of translation, it would be the text that needs to be translated.
**output**: This contains the expected output data after processing the input. In the context of translation, it would be the translated text.
**split**: Indicates the official split of the original dataset, which is used to categorize data for different phases of model training and evaluation. It can be 'train', 'test', 'valid', etc.
**task_type**: Contains major and minor categories for the dataset. Major categories are broader, while minor categories can be more specific subcategories.
**domain**: Indicates the domain or field to which the data belongs.
**other**: This field can contain additional information or metadata regarding the data record. If there is no additional information, it may be set to null.
### Example
Here is an example of how a line in the COIG-PC dataset might be structured:
```
{
"instruction": "请把下面的中文句子翻译成英文",
"input": "我爱你。",
"output": "I love you.",
"split": "train",
"task_type": {
"major": ["翻译"],
"minor": ["翻译", "中译英"]
},
"domain": ["通用"],
"other": null
}
```
In this example:
**instruction** tells the model to translate the following Chinese sentence into English.
**input** contains the Chinese text "我爱你" which means "I love you".
**output** contains the expected translation in English: "I love you".
**split** indicates that this data record is part of the training set.
**task_type** specifies that the major category is "Translation" and the minor categories are "Translation" and "Chinese to English".
**domain** specifies that this data record belongs to the general domain.
**other** is set to null as there is no additional information for this data record.
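A minimal sketch of reading records in this format line by line (field names as documented above; the in-memory buffer stands in for a real file):

```python
import io
import json

# One COIG-PC style record, as it would appear on a single .jsonl line.
raw_line = (
    '{"instruction": "请把下面的中文句子翻译成英文", "input": "我爱你。", '
    '"output": "I love you.", "split": "train", '
    '"task_type": {"major": ["翻译"], "minor": ["翻译", "中译英"]}, '
    '"domain": ["通用"], "other": null}\n'
)

# Swap io.StringIO(...) for open("coig_pc.jsonl", encoding="utf-8").
for line in io.StringIO(raw_line):
    record = json.loads(line)
    print(record["instruction"], "->", record["output"])
```

Each line is an independent JSON object, so the file can be streamed without loading the whole dataset into memory.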
## Update: Aug. 30, 2023
- v1.2: Delete 31 bad task files. Update 99 task files. Rename 2 task files. Add 3 new task files. COIG-PC now has 3339 tasks in total.
- v1.1: Fix 00040-001-000 and 00050-003-000, ignore 00930 and 01373.
- v1.0: First version for arXiv paper.
- v0.6: Upload 28 new tasks. COIG-PC now has 3367 tasks in total.
- v0.5: Upload 202 new tasks. COIG-PC now has 3339 tasks in total.
- v0.4: Upload 1049 new tasks. COIG-PC now has 3137 tasks in total.
- v0.3: Upload 1139 new tasks. COIG-PC now has 2088 tasks in total.
- v0.2: Upload 422 new tasks. COIG-PC now has 949 tasks in total. Add "TopSamplenumPerTask" split where only "Samplenum" samples are used from each task.
- v0.1: Upload 527 tasks.
## COIG-PC Citation
If you want to cite COIG-PC dataset, you could use this:
```
```
## Contact Us
To contact us, feel free to create an Issue in this repository.
|
Binaryy/travel_sample_extended | 2023-07-03T19:50:34.000Z | [
"region:us"
] | Binaryy | null | null | null | 1 | 34 | ---
dataset_info:
features:
- name: query
dtype: string
- name: response
dtype: string
splits:
- name: train
num_bytes: 203357
num_examples: 110
download_size: 109729
dataset_size: 203357
---
# Dataset Card for "travel_sample_extended"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
teleprint-me/phi-1 | 2023-07-08T04:01:52.000Z | [
"license:cc-by-nc-sa-3.0",
"arxiv:2306.11644",
"region:us"
] | teleprint-me | null | null | null | 30 | 34 | ---
title: 'Phi-1 Model Dataset'
date: '2023-07-03'
license: cc-by-nc-sa-3.0
---
## Dataset Description
- **Homepage:** [teleprint.me](https://teleprint.me)
- **Repository:** [phi-1](https://huggingface.co/datasets/teleprint-me/phi-1)
- **Paper:** [2306.11644v1](https://arxiv.org/abs/2306.11644v1)
- **Leaderboard:** [Link to the leaderboard]
- **Point of Contact:** [aberrio@teleprint.me](aberrio@teleprint.me)
### Dataset Summary
This dataset is created for training the phi-1 model, based on the paper
"Textbooks are All You Need". It contains high-quality data derived from various
textbooks, transformed and synthesized using OpenAI's GPT-3.5 and GPT-4 models.
For optimal results, it is recommended to train models with the following
parameters and sequence lengths:
- For a model with 350M parameters, use a sequence length of 2048.
- For a model with 700M parameters, use a sequence length of 4096.
- For a model with 1.3B parameters, use a sequence length of 8096.
Please note that the dataset is currently in its initial phase of planning and
collection. The process involves preparing the data, extracting it, formatting
it, chunking it, and preparing it for synthesis. Scripts for preparing and
processing the data for the model will be developed. Once the data is generated,
it will undergo a review and revision process to ensure its quality and
relevance.
These recommendations and notes are based on the dataset creator's initial plans
and may be subject to change as the project progresses.
**NOTE**: Due to the nature of this dataset, it cannot be released without
obtaining permissions from the respective publishers and/or authors. If you are
an author or publisher and have any concerns about this repository, please feel
free to email me.
If you are an author or publisher and would like to grant permission for the use
of your work, your support would be greatly appreciated. Please note that in
order for the dataset to be released, permissions would need to be unanimous
from all involved parties.
In the absence of such permissions, I will respect the copyrights of the
copyrighted materials and exercise my right to Fair Use with my own physical
property for personal use.
**This dataset is NOT intended for commercial purposes**. Its primary purpose is
for research in machine learning and AI software development. If a model is
created using this dataset, it will be shared under the same license.
Any proceeds derived from donations will be primarily used for the development
of the dataset and the model.
### Supported Tasks and Leaderboards
- `text-generation`: The dataset can be used to train a model for chat-like text
generation, more specifically, for generating explanations and examples in the
context of arithmetic, algebra, geometry, trigonometry, calculus, algorithms
and data structures, design patterns, and the python programming language.
### Languages
The text in the dataset is in English.
## Dataset Structure
### Data Instances
A data instance consists of a dialogue between a user and an assistant,
discussing a topic in arithmetic, algebra, geometry, trigonometry, calculus,
algorithms and data structures, design patterns, or the Python programming
language. The dialogue is structured as a list of turns, each turn containing
the role ("user" or "assistant") and the content of the turn.
### Data Fields
- `role`: a string indicating the role of the speaker in the dialogue ("system",
"user", "assistant", "function").
- `content`: a string containing the content of the speaker's turn in the
dialogue.
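
As a concrete illustration, a single instance following the fields above might be shaped like this (a hypothetical sketch; the topic and wording are invented, not taken from the actual dataset):

```python
# Hypothetical sketch of one dialogue instance using the "role"/"content"
# fields described above. The content is illustrative only.
instance = [
    {"role": "system", "content": "You are a helpful math tutor."},
    {"role": "user", "content": "How do I solve 2x + 3 = 11?"},
    {
        "role": "assistant",
        "content": "Subtract 3 from both sides to get 2x = 8, "
        "then divide by 2, so x = 4.",
    },
]

# Basic structural checks matching the field descriptions.
allowed_roles = {"system", "user", "assistant", "function"}
assert all(turn["role"] in allowed_roles for turn in instance)
assert all(isinstance(turn["content"], str) for turn in instance)
```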
### Data Splits
The dataset is split into a training set, a validation set, and a test set. The
exact sizes and proportions of these splits will depend on the final size of the
dataset.
## Dataset Creation
### Curation Rationale
The dataset is being created to train a model capable of generating explanations
and examples in the context of various mathematical and computer science topics.
The goal is to create an AI assistant that can provide clear, accurate, and
pedagogically sound responses to user queries on these topics.
### Source Data
#### Initial Data Collection and Normalization
The data is collected from a variety of textbooks covering arithmetic, algebra,
geometry, trigonometry, calculus, algorithms and data structures, design
patterns, and the Python programming language. The textbooks used include:
- Barron's Arithmetic The Easy Way Fourth Edition
- Blitzer Introductory Algebra for College Students Fifth Edition
- McDougal Littell Geometry
- Blitzer Intermediate Algebra for College Students 5th Edition
- Trigonometry Sixth Edition
- Pearson College Algebra Fourth Edition
- Hughes-Hallet Applied Calculus 5th Edition
- CLRS Introduction to Algorithms Third Edition
In addition to the textbooks, the dataset also includes material from the
following online resources:
- [C reference](https://en.cppreference.com/w/c)
- [Cpp reference](https://en.cppreference.com/w/cpp)
- [Python Standard Library](https://docs.python.org/3/)
These resources provide up-to-date information and examples for the C, C++, and
Python programming languages. The creators of the Cppreference site also provide
[archives](https://en.cppreference.com/w/Cppreference:Archives) of their site
for offline use. Code samples synthesized by OpenAI's GPT models, curated by the
dataset creator, are also included in the dataset.
**Note:** The creator of this dataset owns physical copies of all the textbooks
listed above. The data from these sources are transformed into a dialogue format
using OpenAI's GPT-3.5 and GPT-4 models. The resulting dialogues are then used
as the training data for the phi-1 model. This dataset does not include the full
content of the source textbooks. Instead, it consists of transformations and
syntheses of the original content. Anyone who wants access to the full original
content should purchase or otherwise legally access the textbooks themselves.
#### Who are the source language producers?
The original language data was created by a variety of authors and educators,
who wrote the textbooks and other materials used as sources for this dataset.
These include:
- Barron's Arithmetic The Easy Way Fourth Edition - Edward Williams, Katie
Prindle
- Blitzer Introductory Algebra for College Students Fifth Edition - Robert
Blitzer
- McDougal Littell Geometry - Ron Larson, Laurie Boswell, Timothy D. Kanold, Lee
Stiff
- Blitzer Intermediate Algebra for College Students 5th Edition - Robert Blitzer
- Trigonometry Sixth Edition - Charles P. McKeague, Mark D. Turner
- Pearson College Algebra Fourth Edition - Robert F. Blitzer
- Hughes-Hallet Applied Calculus 5th Edition - Deborah Hughes-Hallett, Andrew M.
Gleason, Patti Frazer Lock, Daniel E. Flath, Sheldon P. Gordon, David O.
Lomen, David Lovelock, William G. McCallum, Brad G. Osgood, Andrew Pasquale,
Jeff Tecosky-Feldman, Joseph Thrash, Karen R. Rhea, Thomas W. Tucker
- CLRS Introduction to Algorithms Third Edition - Thomas H. Cormen, Charles E.
Leiserson, Ronald L. Rivest, Clifford Stein
In addition to these authors, the developers of OpenAI's GPT-3.5 and GPT-4
models also contributed to the creation of the language data, as these models
were used to transform the source material into a dialogue format.
### Annotations
#### Annotation process
The dataset does not contain any explicit annotations. However, the data is
curated and synthesized using OpenAI's GPT-3.5 and GPT-4 models. The process
involves transforming the source material into a dialogue format suitable for
training the phi-1 model. The dataset creator, an independent learner with a
strong interest in computer science, reviewed and curated the synthesized
dialogues to ensure their quality and relevance.
#### Who are the annotators?
The dataset creator, an independent learner who has studied computer science
extensively in a self-directed manner, performed the curation and review of the
synthesized dialogues.
### Personal and Sensitive Information
The dataset does not contain any personal or sensitive information. All the data
is derived from publicly available textbooks and online resources. Any names or
other potential identifiers in the source material have been removed or
anonymized.
### Social Impact of Dataset
The dataset is intended to support the development of AI models capable of
providing detailed explanations and examples in the context of arithmetic,
algebra, geometry, trigonometry, calculus, algorithms and data structures,
design patterns, and the python programming language. The potential social
impact is significant, as such models could greatly enhance self-directed
learning and provide valuable educational support to students worldwide.
However, it's important to note that the quality and usefulness of the AI models
trained on this dataset will depend on the quality of the data itself. If the
data is inaccurate or biased, the models could propagate these inaccuracies and
biases, potentially leading to misinformation or unfair outcomes.
### Discussion of Biases
The dataset is based on a variety of textbooks and online resources, which may
contain their own inherent biases. For example, textbooks often reflect the
perspectives and biases of their authors, which can influence the way
information is presented. These biases could potentially be reflected in the
dataset and in any models trained on it.
### Other Known Limitations
At this stage of the dataset creation process, it's difficult to identify all
potential limitations. However, one potential limitation is that the dataset may
not cover all possible topics or perspectives within the fields it addresses.
The dataset creator will continue to monitor and assess the dataset for
limitations as the work progresses.
## Additional Information
### Dataset Curators
The dataset was curated by an independent learner with a strong interest in
computer science. The curator has studied the subject matter in a self-directed
manner, using a variety of resources including textbooks and online materials.
The curation process also involved the use of OpenAI's GPT-3.5 and GPT-4 models
to synthesize dialogues based on the source material.
### Licensing Information
This dataset is released under the Creative Commons
Attribution-NonCommercial-ShareAlike 3.0 International (CC BY-NC-SA 3.0)
license.
### Citation Information
As this dataset is a compilation of various sources synthesized and curated for
the purpose of training the phi-1 model, please ensure to cite the original
sources when using this dataset. If referencing the dataset directly, please
refer to this repository.
|
rdpahalavan/UNSW-NB15 | 2023-07-22T21:41:28.000Z | [
"task_categories:text-classification",
"task_categories:tabular-classification",
"size_categories:100M<n<1B",
"license:apache-2.0",
"Network Intrusion Detection",
"Cybersecurity",
"Network Packets",
"UNSW-NB15",
"region:us"
] | rdpahalavan | null | null | null | 0 | 34 | ---
license: apache-2.0
task_categories:
- text-classification
- tabular-classification
tags:
- Network Intrusion Detection
- Cybersecurity
- Network Packets
- UNSW-NB15
size_categories:
- 100M<n<1B
---
We have developed a Python package, a wrapper around the Hugging Face Hub and the Hugging Face Datasets library, to access this dataset easily.
# NIDS Datasets
The `nids-datasets` package provides functionality to download and utilize specially curated and extracted datasets from the original UNSW-NB15 and CIC-IDS2017 datasets. These datasets, which initially were only flow datasets, have been enhanced to include packet-level information from the raw PCAP files. The dataset contains both packet-level and flow-level data for over 230 million packets, with 179 million packets from UNSW-NB15 and 54 million packets from CIC-IDS2017.
## Installation
Install the `nids-datasets` package using pip:
```shell
pip install nids-datasets
```
Import the package in your Python script:
```python
from nids_datasets import Dataset, DatasetInfo
```
## Dataset Information
The `nids-datasets` package currently supports two datasets: [UNSW-NB15](https://research.unsw.edu.au/projects/unsw-nb15-dataset) and [CIC-IDS2017](https://www.unb.ca/cic/datasets/ids-2017.html). Each of these datasets contains a mix of normal traffic and different types of attack traffic, which are identified by their respective labels. The UNSW-NB15 dataset has 10 unique class labels, and the CIC-IDS2017 dataset has 24 unique class labels.
- UNSW-NB15 Labels: 'normal', 'exploits', 'dos', 'fuzzers', 'generic', 'reconnaissance', 'worms', 'shellcode', 'backdoor', 'analysis'
- CIC-IDS2017 Labels: 'BENIGN', 'FTP-Patator', 'SSH-Patator', 'DoS slowloris', 'DoS Slowhttptest', 'DoS Hulk', 'Heartbleed', 'Web Attack – Brute Force', 'Web Attack – XSS', 'Web Attack – SQL Injection', 'Infiltration', 'Bot', 'PortScan', 'DDoS', 'normal', 'exploits', 'dos', 'fuzzers', 'generic', 'reconnaissance', 'worms', 'shellcode', 'backdoor', 'analysis', 'DoS GoldenEye'
## Subsets of the Dataset
Each dataset consists of four subsets:
1. Network-Flows - Contains flow-level data.
2. Packet-Fields - Contains packet header information.
3. Packet-Bytes - Contains packet byte information in the range (0-255).
4. Payload-Bytes - Contains payload byte information in the range (0-255).
Each subset contains 18 files (except Network-Flows, which has one file), where the data is stored in parquet format. In total, this package provides access to 110 files. You can choose to download all subsets or select specific subsets or specific files depending on your analysis requirements.
## Getting Information on the Datasets
The `DatasetInfo` function provides a summary of the dataset in a pandas dataframe format. It displays the number of packets for each class label across all 18 files in the dataset. This overview can guide you in selecting specific files for download and analysis.
```python
df = DatasetInfo(dataset='UNSW-NB15') # or dataset='CIC-IDS2017'
df
```
## Downloading the Datasets
The `Dataset` class allows you to specify the dataset, subset, and files that you are interested in. The specified data will then be downloaded.
```python
dataset = 'UNSW-NB15' # or 'CIC-IDS2017'
subset = ['Network-Flows', 'Packet-Fields', 'Payload-Bytes'] # or 'all' for all subsets
files = [3, 5, 10] # or 'all' for all files
data = Dataset(dataset=dataset, subset=subset, files=files)
data.download()
```
The directory structure after downloading files:
```
UNSW-NB15
│
├───Network-Flows
│ └───UNSW_Flow.parquet
│
├───Packet-Fields
│ ├───Packet_Fields_File_3.parquet
│ ├───Packet_Fields_File_5.parquet
│ └───Packet_Fields_File_10.parquet
│
└───Payload-Bytes
├───Payload_Bytes_File_3.parquet
├───Payload_Bytes_File_5.parquet
└───Payload_Bytes_File_10.parquet
```
You can then load the parquet files using pandas:
```python
import pandas as pd
df = pd.read_parquet('UNSW-NB15/Packet-Fields/Packet_Fields_File_10.parquet')
```
## Merging Subsets
The `merge()` method allows you to merge all data of each packet across all subsets, providing both flow-level and packet-level information in a single file.
```python
data.merge()
```
The merge method uses, by default, the details specified when instantiating the `Dataset` class. You can also pass `subset` (a list of subsets) and `files` (a list of files) to choose what to merge.
The directory structure after merging files:
```
UNSW-NB15
│
├───Network-Flows
│ └───UNSW_Flow.parquet
│
├───Packet-Fields
│ ├───Packet_Fields_File_3.parquet
│ ├───Packet_Fields_File_5.parquet
│ └───Packet_Fields_File_10.parquet
│
├───Payload-Bytes
│ ├───Payload_Bytes_File_3.parquet
│ ├───Payload_Bytes_File_5.parquet
│ └───Payload_Bytes_File_10.parquet
│
└───Network-Flows+Packet-Fields+Payload-Bytes
├───Network_Flows+Packet_Fields+Payload_Bytes_File_3.parquet
├───Network_Flows+Packet_Fields+Payload_Bytes_File_5.parquet
└───Network_Flows+Packet_Fields+Payload_Bytes_File_10.parquet
```
## Extracting Bytes
The Packet-Bytes and Payload-Bytes subsets contain only the first 1500-1600 bytes of each packet. To retrieve all bytes (up to 65535 bytes) from the Packet-Bytes and Payload-Bytes subsets, use the `bytes()` method. This method requires the corresponding files in the Packet-Fields subset to operate. You can specify how many bytes to extract by passing the `max_bytes` parameter.
```python
data.bytes(payload=True, max_bytes=2500)
```
Use `packet=True` to extract packet bytes. You can also pass `files` (a list of files) to select which files to retrieve bytes from.
The directory structure after extracting bytes:
```
UNSW-NB15
│
├───Network-Flows
│ └───UNSW_Flow.parquet
│
├───Packet-Fields
│ ├───Packet_Fields_File_3.parquet
│ ├───Packet_Fields_File_5.parquet
│ └───Packet_Fields_File_10.parquet
│
├───Payload-Bytes
│ ├───Payload_Bytes_File_3.parquet
│ ├───Payload_Bytes_File_5.parquet
│ └───Payload_Bytes_File_10.parquet
│
├───Network-Flows+Packet-Fields+Payload-Bytes
│ ├───Network_Flows+Packet_Fields+Payload_Bytes_File_3.parquet
│ ├───Network_Flows+Packet_Fields+Payload_Bytes_File_5.parquet
│ └───Network_Flows+Packet_Fields+Payload_Bytes_File_10.parquet
│
└───Payload-Bytes-2500
├───Payload_Bytes_File_3.parquet
├───Payload_Bytes_File_5.parquet
└───Payload_Bytes_File_10.parquet
```
## Reading the Datasets
The `read()` method allows you to read files using Hugging Face's `load_dataset` method, one subset at a time. The dataset and files parameters are optional if the same details are used to instantiate the `Dataset` class.
```python
dataset = data.read(dataset='UNSW-NB15', subset='Packet-Fields', files=[1,2])
```
The `read()` method returns a dataset that you can convert to a pandas dataframe or save to a CSV, parquet, or any other desired file format:
```python
df = dataset.to_pandas()
dataset.to_csv('file_path_to_save.csv')
dataset.to_parquet('file_path_to_save.parquet')
```
For scenarios where you want to process one packet at a time, you can use the `stream=True` parameter:
```python
dataset = data.read(dataset='UNSW-NB15', subset='Packet-Fields', files=[1,2], stream=True)
print(next(iter(dataset)))
```
## Notes
The size of these datasets is large, and depending on the subset(s) selected and the number of bytes extracted, the operations can be resource-intensive. Therefore, it's recommended to ensure you have sufficient disk space and RAM when using this package. |
unwilledset/raven-data | 2023-10-04T09:59:30.000Z | [
"license:apache-2.0",
"region:us"
] | unwilledset | Finance LM tuning datasets | @article{2016arXiv160605250R,
author = {{Theuma}, Adrian},
title = "{Finance LM Tuning Dataset}",
journal = {na},
year = 2023,
eid = {na},
pages = {na},
archivePrefix = {na},
eprint = {na},
} | null | 0 | 34 | ---
license: apache-2.0
---
|
Yunij/tokenized_datasets | 2023-07-18T12:07:24.000Z | [
"region:us"
] | Yunij | null | null | null | 0 | 34 | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: text
dtype: string
- name: source
dtype: string
- name: label
dtype: int64
- name: perplexity
dtype: float64
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
splits:
- name: train
num_bytes: 777879009
num_examples: 330345
- name: test
num_bytes: 40979430
num_examples: 17387
download_size: 432466136
dataset_size: 818858439
---
# Dataset Card for "tokenized_datasets"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
FanChen0116/19100_chat_128x_slot_pvi | 2023-09-26T02:20:20.000Z | [
"region:us"
] | FanChen0116 | null | null | null | 0 | 34 | ---
dataset_info:
features:
- name: id
dtype: int64
- name: tokens
sequence: string
- name: labels
sequence:
class_label:
names:
'0': O
'1': I-time
'2': B-date
'3': B-last_name
'4': B-people
'5': I-date
'6': I-people
'7': I-last_name
'8': I-first_name
'9': B-first_name
'10': B-time
- name: request_slot
sequence: string
splits:
- name: train
num_bytes: 650303
num_examples: 3584
- name: validation
num_bytes: 5405
num_examples: 32
- name: test
num_bytes: 646729
num_examples: 3731
download_size: 98930
dataset_size: 1302437
---
# Dataset Card for "19100_chat_128x_slot_pvi"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
medarc/problem_list_summarization | 2023-09-28T20:07:30.000Z | [
"region:us"
] | medarc | null | null | null | 0 | 34 | ---
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
- split: train
path: data/train-*
- split: validation
path: data/validation-*
dataset_info:
features:
- name: id
dtype: int64
- name: inputs
dtype: string
- name: target
dtype: string
splits:
- name: test
num_bytes: 436805
num_examples: 151
- name: train
num_bytes: 1356010
num_examples: 483
- name: validation
num_bytes: 324036
num_examples: 121
download_size: 0
dataset_size: 2116851
---
# Dataset Card for "problem_list_summarization"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Fredithefish/openassistant-guanaco-unfiltered | 2023-08-27T21:08:58.000Z | [
"task_categories:conversational",
"size_categories:1K<n<10K",
"language:en",
"language:de",
"language:fr",
"language:es",
"license:apache-2.0",
"region:us"
] | Fredithefish | null | null | null | 5 | 34 | ---
license: apache-2.0
task_categories:
- conversational
language:
- en
- de
- fr
- es
size_categories:
- 1K<n<10K
---
# Guanaco-Unfiltered
- Any language other than English, German, French, or Spanish has been removed.
- Refusals of assistance have been removed.
- The identification as OpenAssistant has been removed.
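
The dataset ships as JSONL files; a minimal standard-library sketch of parsing that format is below. The `text` field name is an assumption based on the original openassistant-guanaco format and is not verified against this repository's files:

```python
import json

# Parse JSON Lines content line by line. The sample below is illustrative;
# the "text" field name is assumed, not taken from the actual files.
sample_jsonl = (
    '{"text": "### Human: Hello### Assistant: Hi, how can I help?"}\n'
    '{"text": "### Human: Name a color.### Assistant: Blue."}\n'
)

records = [json.loads(line) for line in sample_jsonl.splitlines() if line.strip()]
print(len(records))  # prints 2
```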
## [Version 2 is out](https://huggingface.co/datasets/Fredithefish/openassistant-guanaco-unfiltered/blob/main/guanaco-unfiltered-v2.jsonl)
- Identification as OpenAssistant is now fully removed
- other improvements |
pig4431/HeQ_v1 | 2023-08-16T13:13:16.000Z | [
"task_categories:question-answering",
"size_categories:1K<n<10K",
"language:he",
"license:cc-by-4.0",
"region:us"
] | pig4431 | null | null | null | 1 | 34 | ---
license: cc-by-4.0
task_categories:
- question-answering
language:
- he
size_categories:
- 1K<n<10K
---
# Dataset Card for HeQ_v1
## Dataset Description
- **Homepage:** [HeQ - Hebrew Question Answering Dataset](https://github.com/NNLP-IL/Hebrew-Question-Answering-Dataset)
- **Repository:** [GitHub Repository](https://github.com/NNLP-IL/Hebrew-Question-Answering-Dataset)
- **Paper:** [HeQ: A Dataset for Hebrew Question Answering](https://u.cs.biu.ac.il/~yogo/heq.pdf)
- **Leaderboard:** N/A
### Dataset Summary
HeQ is a question answering dataset in Modern Hebrew, consisting of 30,147 questions. It follows the format and crowdsourcing methodology of SQuAD and ParaShoot, with paragraphs sourced from Hebrew Wikipedia and Geektime.
### Supported Tasks and Leaderboards
- **Task:** Question Answering
### Languages
- Hebrew (he)
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
- **ID:** `string`
- **Title:** `string`
- **Source:** `string`
- **Context:** `string`
- **Question:** `string`
- **Answers:** `string`
- **Is_Impossible:** `bool`
- **WH_Question:** `string`
- **Question_Quality:** `string`
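
For illustration, a record following these fields might be shaped as below (a hypothetical sketch; all values are invented placeholders, with `"..."` standing in for the Hebrew text):

```python
# Hypothetical sketch of one HeQ record using the fields listed above.
# All values are invented placeholders, not actual dataset content.
record = {
    "ID": "heq-000001",
    "Title": "Example article",
    "Source": "wikipedia",
    "Context": "...",        # paragraph text (Hebrew)
    "Question": "...",       # question text (Hebrew)
    "Answers": "...",        # answer span as a string
    "Is_Impossible": False,
    "WH_Question": "what",
    "Question_Quality": "good",
}

# SQuAD-2.0-style handling: unanswerable questions are flagged via
# Is_Impossible rather than given an answer span.
answerable = [r for r in [record] if not r["Is_Impossible"]]
assert len(answerable) == 1
```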
### Data Splits
- **Train:** 27,142 examples
- **Test:** 1,504 examples
- **Validation:** 1,501 examples
## Dataset Creation
### Curation Rationale
The dataset was created to provide a resource for question answering research in Hebrew.
### Source Data
#### Initial Data Collection and Normalization
Paragraphs were sourced from Hebrew Wikipedia and Geektime.
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
A team of crowdworkers formulated and answered reading comprehension questions.
#### Who are the annotators?
crowdsourced
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
License: cc-by-4.0
### Citation Information
[More Information Needed]
### Contributions
Contributions and additional information are welcome. |
mHossain/indic_model_indic_test_data_paraphrase_detection | 2023-08-20T19:26:15.000Z | [
"region:us"
] | mHossain | null | null | null | 0 | 34 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: 'Unnamed: 0'
dtype: int64
- name: text
dtype: string
- name: label
dtype: int64
splits:
- name: train
num_bytes: 3849846.3
num_examples: 36000
- name: test
num_bytes: 427760.7
num_examples: 4000
download_size: 1899118
dataset_size: 4277607.0
---
# Dataset Card for "indic_model_indic_test_data_paraphrase_detection"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
mHossain/mh_new_para_detection_data_v1 | 2023-08-20T20:47:31.000Z | [
"region:us"
] | mHossain | null | null | null | 0 | 34 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: 'Unnamed: 0'
dtype: int64
- name: text
dtype: string
- name: label
dtype: int64
splits:
- name: train
num_bytes: 6842297.7
num_examples: 36000
- name: test
num_bytes: 760255.3
num_examples: 4000
download_size: 3375458
dataset_size: 7602553.0
---
# Dataset Card for "mh_new_para_detection_data_v1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
gonced8/multi-session_chat | 2023-08-25T10:59:38.000Z | [
"task_categories:conversational",
"size_categories:100K<n<1M",
"language:en",
"license:gpl-3.0",
"region:us"
] | gonced8 | null | null | null | 1 | 34 | ---
license: gpl-3.0
task_categories:
- conversational
language:
- en
pretty_name: Multi-Session Chat
size_categories:
- 100K<n<1M
---
This is not my dataset; I only cleaned the dataset from [ParlAI - Msc](https://parl.ai/projects/msc/). |
PurCL/malware-top-100-labels | 2023-08-31T21:13:39.000Z | [
"region:us"
] | PurCL | null | null | null | 0 | 34 | ---
dataset_info:
features:
- name: l
dtype: string
splits:
- name: train
num_bytes: 1045
num_examples: 100
download_size: 1723
dataset_size: 1045
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "malware-top-100-labels"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
stevenhsu123/ai_comment_train_data | 2023-09-04T09:49:27.000Z | [
"region:us"
] | stevenhsu123 | null | null | null | 0 | 34 | Entry not found |
findzebra/corpus.latest.vod-retriever-medical-v1.1 | 2023-09-05T05:17:16.000Z | [
"region:us"
] | findzebra | null | null | null | 0 | 34 | Entry not found |
philikai/spider_SQL_PALM_Prompt | 2023-09-11T13:40:51.000Z | [
"license:mit",
"region:us"
] | philikai | null | null | null | 0 | 34 | ---
license: mit
---
A dataset for creating prompts for fine-tuning on the Spider dataset, including foreign-key and primary-key information as well as schema information.
|
MBZUAI-LLM/SlimPajama-627B-DC | 2023-09-20T06:26:19.000Z | [
"task_categories:text-generation",
"language:en",
"license:mit",
"arxiv:2309.10818",
"region:us"
] | MBZUAI-LLM | null | null | null | 5 | 34 | ---
license: mit
task_categories:
- text-generation
language:
- en
pretty_name: SlimPajama-627B-divided
---
### Dataset Description:
This is a split version of [cerebras/SlimPajama-627B](https://huggingface.co/datasets/cerebras/SlimPajama-627B) that divides data based on its sources.
The content of this dataset is the same as SlimPajama-627B.
We divide the data by source based on the "redpajama_setname" field and save each source in its own directory, which is convenient for future research on dataset combinations.
This dataset consists of 15,967 jsonl files and is ~ 883G compressed.
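
The per-source split described above can be sketched as grouping records by their set name (a simplified illustration; the exact JSON layout of the real files, including where `redpajama_setname` lives, is an assumption here):

```python
import json
from collections import defaultdict

# Simplified sketch of splitting jsonl records by source: group each
# record by its "redpajama_setname" value. Field placement is assumed.
lines = [
    '{"text": "a", "redpajama_setname": "RedPajamaC4"}',
    '{"text": "b", "redpajama_setname": "RedPajamaGithub"}',
    '{"text": "c", "redpajama_setname": "RedPajamaC4"}',
]

by_source = defaultdict(list)
for line in lines:
    record = json.loads(line)
    by_source[record["redpajama_setname"]].append(record)

print(sorted(by_source))  # prints ['RedPajamaC4', 'RedPajamaGithub']
```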
### Primary Usage:
This dataset is used for our study: [SlimPajama-DC: Understanding Data Combinations for LLM Training](https://arxiv.org/abs/2309.10818).
For more details about the content in this dataset, please refer to the original [cerebras/SlimPajama-627B](https://huggingface.co/datasets/cerebras/SlimPajama-627B).
### License:
Please refer to the licenses of the data subsets you use.
- [Common Crawl Foundation Terms of Use](https://commoncrawl.org/terms-of-use/full/)
- [C4 license](https://huggingface.co/datasets/allenai/c4#license)
- GitHub was limited to MIT, BSD, or Apache licenses only
- Books: [the_pile_books3 license](https://huggingface.co/datasets/the_pile_books3#licensing-information) and [pg19 license](https://huggingface.co/datasets/pg19#licensing-information)
- [ArXiv Terms of Use](https://info.arxiv.org/help/api/tou.html)
- [Wikipedia License](https://huggingface.co/datasets/wikipedia#licensing-information)
- [StackExchange license on the Internet Archive](https://archive.org/details/stackexchange) |
chuyin0321/timeseries-daily-stocks | 2023-09-11T08:52:28.000Z | [
"region:us"
] | chuyin0321 | null | null | null | 0 | 34 | ---
dataset_info:
features:
- name: symbol
dtype: string
- name: date
dtype: string
- name: open
dtype: float64
- name: high
dtype: float64
- name: low
dtype: float64
- name: close
dtype: float64
- name: adj_close
dtype: float64
- name: volume
dtype: float64
splits:
- name: train
num_bytes: 588967254
num_examples: 8405823
download_size: 291992665
dataset_size: 588967254
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "timeseries-daily-stocks"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
sazirarrwth99/triplets_train_sample | 2023-09-13T22:29:40.000Z | [
"region:us"
] | sazirarrwth99 | null | null | null | 0 | 34 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: prompt
dtype: string
- name: completion
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 472799.2
num_examples: 800
- name: test
num_bytes: 118199.8
num_examples: 200
download_size: 245603
dataset_size: 590999.0
---
# Dataset Card for "triplets_train_sample"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
pranjal01/messi-ronaldo-dataset | 2023-09-14T11:41:28.000Z | [
"region:us"
] | pranjal01 | null | null | null | 0 | 34 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 61573451.0
num_examples: 136
download_size: 0
dataset_size: 61573451.0
---
# Dataset Card for "messi-ronaldo-dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Surajsangwan90/NZTA | 2023-09-17T01:49:24.000Z | [
"region:us"
] | Surajsangwan90 | null | null | null | 0 | 34 | Entry not found |
A-Roucher/amazon_product_reviews_datafiniti | 2023-09-26T14:12:40.000Z | [
"task_categories:text-classification",
"task_categories:question-answering",
"task_categories:feature-extraction",
"size_categories:1K<n<10K",
"language:en",
"region:us"
] | A-Roucher | null | null | null | 0 | 34 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: brand
dtype:
class_label:
names:
'0': Amazon
'1': AmazonBasics
'2': Amazonbasics
- name: primaryCategories
dtype: string
- name: reviews.numHelpful
dtype: float64
- name: reviews.rating
dtype: int64
- name: reviews.text
dtype: string
splits:
- name: train
num_bytes: 1107781.5
num_examples: 6000
- name: test
num_bytes: 369260.5
num_examples: 2000
download_size: 704792
dataset_size: 1477042
task_categories:
- text-classification
- question-answering
- feature-extraction
language:
- en
pretty_name: Amazon Product Reviews by Datafiniti
size_categories:
- 1K<n<10K
---
# Dataset Card for "amazon_product_reviews_datafiniti"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Ibrahim-Alam/cornel_sentiment | 2023-09-19T01:50:16.000Z | [
"region:us"
] | Ibrahim-Alam | null | null | null | 0 | 34 | Entry not found |
jangmin/ecommerce_purchase_history | 2023-09-21T05:35:05.000Z | [
"size_categories:10K<n<100K",
"language:ko",
"region:us"
] | jangmin | null | null | null | 0 | 34 | ---
dataset_info:
features:
- name: user_id
dtype: int64
- name: day
dtype: string
- name: order_ts
dtype: string
- name: positive_prod_id
dtype: int64
- name: negative_prod_id
dtype: int64
- name: chosen
dtype: string
- name: rejected
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 121752019
num_examples: 60232
- name: test
num_bytes: 57340599
num_examples: 19284
download_size: 29045732
dataset_size: 179092618
size_categories:
- 10K<n<100K
language:
- ko
---
# Dataset Card for "ecommerce_purchase_history"
## Dataset Description
### Dataset Summary
This dataset was built for recommendation-system research and development at a particular e-commerce company. It was generated from roughly 90 days of purchase history within a specific period, with each purchase history described as text.
### Supported Tasks and Leaderboards
### Languages
This dataset is only made of `ko` (Korean).
## Dataset Structure |
MikeTrizna/bee_specimens | 2023-09-22T21:12:23.000Z | [
"license:cc0-1.0",
"region:us"
] | MikeTrizna | null | null | null | 1 | 34 | ---
license: cc0-1.0
dataset_info:
features:
- name: occurrenceID
dtype: string
- name: catalogNumber
dtype: string
- name: recordedBy
dtype: string
- name: year
dtype: int64
- name: month
dtype: int64
- name: day
dtype: int64
- name: country
dtype: string
- name: stateProvince
dtype: string
- name: county
dtype: string
- name: locality
dtype: string
- name: decimalLatitude
dtype: float64
- name: decimalLongitude
dtype: float64
- name: identifiedBy
dtype: string
- name: scientificName
dtype: string
- name: genus
dtype: string
- name: subgenus
dtype: string
- name: specificEpithet
dtype: string
- name: infraspecificEpithet
dtype: string
- name: scientificNameAuthorship
dtype: string
- name: PixelXDimension
dtype: float64
- name: PixelYDimension
dtype: float64
- name: accessURI
dtype: string
splits:
- name: train
num_bytes: 26732760
num_examples: 73387
download_size: 7117791
dataset_size: 26732760
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for Bee_Specimens
## Dataset Summary
The USNM Bumblebee Dataset is a natural history dataset containing, for each of 73,497 Bumblebee specimens in the family Apidae, a single image in lateral or dorsal view and a tab-separated value file with occurrence data. Occurrence data includes the species classification, the date and site/location of collection, and other metadata conforming to the Darwin Core data standard (https://dwc.tdwg.org). 11,421 specimens are not identified to species and these specimens are included as 'Bombus sp.' or 'Xylocopa sp.' The collecting sites/locations of the majority of specimens (55,301), have been georeferenced. The dataset is worldwide in scope, but is limited to the specimens available in the Smithsonian USNM collection.
## Languages
English
## Data Instances
A typical data point comprises the specimen metadata and image information for a single bumblebee specimen.
An example from the dataset looks as follows:
```json
{
  "occurrenceID": "http://n2t.net/ark:/65665/30042e2d8-669d-4520-b456-e3c64203eff8",
  "catalogNumber": "USNMENT01732649",
  "recordedBy": "R. Craig",
  "year": "1949",
  "month": "4",
  "day": "13",
  "country": "United States",
  "stateProvince": "California",
  "county": "Fresno",
  "locality": "Auberry",
  "decimalLatitude": "37.0808",
  "decimalLongitude": "-119.485",
  "identifiedBy": "O'Brien, L. R.",
  "scientificName": "Xylocopa (Notoxylocopa) tabaniformis orpifex",
  "genus": "Xylocopa",
  "subgenus": "Notoxylocopa",
  "specificEpithet": "tabaniformis",
  "infraspecificEpithet": "orpifex",
  "scientificNameAuthorship": "Smith",
  "accessURI": "https://ids.si.edu/ids/deliveryService?id=NMNH-USNMENT01732649",
  "PixelXDimension": 2000,
  "PixelYDimension": 1212
}
```
## Data Fields
Specimen metadata fields conform to the Darwin Core data standard and are detailed here: https://dwc.tdwg.org. Image metadata fields conform to the Audiovisual Core data standard and are detailed here: https://ac.tdwg.org/.
## Curation Rationale
The dataset represents a portion of the U. S. National Entomological Collection. The U.S. National Entomological Collection (USNM) traces its origins in part to the acquisition of the U.S. Department of Agriculture Collection of 138,000 specimens donated in 1885. These specimens became the foundation of one of the world’s largest and most important accessible entomological collections, with over 33 million specimens taken care of by the combined staff of three government agencies: the Smithsonian Institution; the Systematic Entomology Laboratory (Agricultural Research Service, United States Department of Agriculture); and the Walter Reed Biosystematics Unit (Walter Reed Army Institute of Research). The specimens were imaged in a mass-digitization project in collaboration with the Digitization Program Office. The goal was to digitize every Bombus specimen in the collection.
## Initial Data Collection and Normalization
Bumblebee specimens were collected over a period of 150 years (earliest specimen dates from 1807, most recent specimen dates from 2020). The specimens were collected by and identified by many different individual researchers over this time. The initial images of about 49,000 specimens were taken in a rapid capture project by a dedicated team in 2014 with additional specimen images (about 25,000) taken in 2018. The labels containing the information on site/location, date of collection, collector, and identifier were removed from the insect pin. The occurrence data were transcribed from the labels by online volunteers and a professional transcription service into Darwin Core fields. Following quality control of the transcribed data by NMNH staff, they were imported into the institutional database (EMu).
NMNH specimen data get exported to the Global Biodiversity Information Facility (GBIF) on a weekly basis through an installation of an Integrated Publishing Toolkit (IPT, https://collections.nmnh.si.edu/ipt/). Some data transformation takes place within EMu and GBIF likewise normalizes the data to meet their standards.
## Who are the source language producers?
The occurrence data were produced by humans, observed and written onto paper labels over the museum’s history, and then transcribed from paper labels pinned with the specimens upon collection.
## Annotations
The specimen occurrence data in Darwin Core fields.
## Annotation process
The occurrence data were transcribed from the labels by online volunteers and a professional transcription service into Darwin Core fields.
## Who are the annotators?
Original collectors and identifiers were entomologists and researchers from the Smithsonian and other institutions. Collectors may not be bumblebee specialists. For data transcription, online volunteers and professional transcription service workers. Demographic data of transcribers is unknown.
## Personal and Sensitive Information
The dataset contains the names of the collectors and identifiers.
## Social Impact of Dataset
Digitized natural history collections have the potential to be used in diverse research applications in evolutionary biology, ecology, and climate change.
The dataset contains records for species listed on the U.S. Endangered Species List: Bombus affinis, Bombus franklini, and Bombus terricola.
Some site/location names could cause harm as they are insensitive or racist towards indigenous communities.
## Discussion of Biases
Estimates of species geographic ranges based on these data may not be complete. There are many reasons collectors may collect more frequently from some areas rather than others, including their own taxonomic interests, proximity to collections institutions, accessibility via roads, ability to acquire permits for a specific area, or for geopolitical reasons.
The majority of specimens in this dataset originate from North America.
Most specimens are expected to be female, because bumblebees are social insects and it is more common to find female bees.
## Other Known Limitations
As with all natural history collections data, there is the potential that some metadata are inaccurate or inconsistent given that they have been collected and recorded over the course of the past 150 years. Smithsonian staff seek to correct these errors as they are identified but the dataset as presented is a snapshot in time.
Species identifications may be inaccurate or not up-to-date based on the latest classification.
Collector names may not be consistent across records (e.g. the same person’s name may be written differently). For women’s names, which were often historically recorded as Mrs. <spouse’s name>, only the spouse’s name may appear.
Locality data may use historical place names that are no longer used.
Dates may sometimes have been recorded by original collectors inconsistently or may be incomplete (no month/day information).
For specimens collected from Brazil, specimen images are not included in the dataset.
For endangered species, locality data is not included in the dataset.
## Dataset Curators
Smithsonian National Museum of Natural History, Department of Entomology.
Jessica Bird (Data Manager in the Department of Entomology) is the main contact person for the dataset.
## Licensing Information
Public domain, Creative Commons CC0.
## Citation Information
Orrell T, Informatics Office (2023). NMNH Extant Specimen Records (USNM, US). Version 1.72. National Museum of Natural History, Smithsonian Institution. Occurrence dataset. https://collections.nmnh.si.edu/ipt/resource?r=nmnh_extant_dwc-a&v=1.72
## Contributions
Thanks to NMNH for adding this dataset. |
tyzhu/squad_baseline_v4_train_30_eval_10 | 2023-09-26T09:49:00.000Z | [
"region:us"
] | tyzhu | null | null | null | 0 | 34 | ---
dataset_info:
features:
- name: id
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: text
dtype: string
- name: answer_start
dtype: int32
- name: inputs
dtype: string
- name: targets
dtype: string
splits:
- name: train
num_bytes: 172536
num_examples: 159
- name: validation
num_bytes: 47457
num_examples: 50
download_size: 52942
dataset_size: 219993
---
# Dataset Card for "squad_baseline_v4_train_30_eval_10"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
tyzhu/squad_context_v4_train_10_eval_10 | 2023-09-26T14:58:57.000Z | [
"region:us"
] | tyzhu | null | null | null | 0 | 34 | ---
dataset_info:
features:
- name: id
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: text
dtype: string
- name: answer_start
dtype: int32
- name: inputs
dtype: string
- name: targets
dtype: string
splits:
- name: train
num_bytes: 78251
num_examples: 44
- name: validation
num_bytes: 80830
num_examples: 50
download_size: 63029
dataset_size: 159081
---
# Dataset Card for "squad_context_v4_train_10_eval_10"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
karan4d/machiavellian_synthetic_textbooks | 2023-10-03T16:30:11.000Z | [
"license:apache-2.0",
"region:us"
] | karan4d | null | null | null | 2 | 34 | ---
license: apache-2.0
---
credits: shoutout @vikp for his textbook_quality GH repo this was created with
dataset info: a bunch of bad boy data for Machiavellian LLMs |
NikitaO/xix3d_v1_cluster_5 | 2023-10-02T14:21:35.000Z | [
"region:us"
] | NikitaO | null | null | null | 0 | 34 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 8359019.0
num_examples: 120
download_size: 8360501
dataset_size: 8359019.0
---
# Dataset Card for "xix3d_v1_cluster_5"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
tyzhu/eval_tag_squad_v7 | 2023-10-05T17:04:07.000Z | [
"region:us"
] | tyzhu | null | null | null | 0 | 34 | ---
dataset_info:
features:
- name: id
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: text
dtype: string
- name: answer_start
dtype: int32
- name: inputs
dtype: string
- name: targets
dtype: string
splits:
- name: train
num_bytes: 12876477
num_examples: 10570
- name: validation
num_bytes: 12876477
num_examples: 10570
download_size: 5563526
dataset_size: 25752954
---
# Dataset Card for "eval_tag_squad_v7"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
ShoukanLabs/OpenNiji-Dataset-Aesthetic-Finetune | 2023-10-04T06:41:14.000Z | [
"task_categories:text-to-image",
"size_categories:10K<n<100K",
"language:en",
"language:ja",
"language:ko",
"license:cc-by-nc-4.0",
"anime",
"dataset",
"Nijijourney",
"Midjourney",
"discord",
"region:us"
] | ShoukanLabs | null | null | null | 1 | 34 | ---
task_categories:
- text-to-image
language:
- en
- ja
- ko
tags:
- anime
- dataset
- Nijijourney
- Midjourney
- discord
size_categories:
- 10K<n<100K
license: cc-by-nc-4.0
---
Used in quality tuning for OpenNiji |
Tural/processed_bert_dataset | 2023-10-04T23:04:01.000Z | [
"region:us"
] | Tural | null | null | null | 0 | 34 | ---
dataset_info:
features:
- name: input_ids
sequence: int32
- name: token_type_ids
sequence: int8
- name: attention_mask
sequence: int8
- name: special_tokens_mask
sequence: int8
splits:
- name: train
num_bytes: 24943076880
num_examples: 27349865
download_size: 5901536405
dataset_size: 24943076880
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "processed_bert_dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
ALBADDAWI/90k-race-style | 2023-10-06T12:41:35.000Z | [
"license:mit",
"region:us"
] | ALBADDAWI | null | null | null | 0 | 34 | ---
license: mit
--- |
Nikhil090/Dataset | 2023-10-10T11:22:21.000Z | [
"region:us"
] | Nikhil090 | null | null | null | 0 | 34 | Entry not found |
numeric_fused_head | 2023-06-01T14:59:47.000Z | [
"task_categories:token-classification",
"annotations_creators:crowdsourced",
"annotations_creators:expert-generated",
"annotations_creators:machine-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"size_categories:1K<n<10K",
"source_datasets:original... | null | Fused Head constructions are noun phrases in which the head noun is missing and is said to be "fused" with its dependent modifier. This missing information is implicit and is important for sentence understanding. The missing heads are easily filled in by humans, but pose a challenge for computational models.
For example, in the sentence: "I bought 5 apples but got only 4.", 4 is a Fused-Head, and the missing head is apples, which appear earlier in the sentence.
This is a crowd-sourced dataset of 10k numerical fused head examples (1M tokens). | @article{elazar_head,
author = {Elazar, Yanai and Goldberg, Yoav},
title = {Where’s My Head? Definition, Data Set, and Models for Numeric Fused-Head Identification and Resolution},
journal = {Transactions of the Association for Computational Linguistics},
volume = {7},
number = {},
pages = {519-535},
year = {2019},
doi = {10.1162/tacl\\_a\\_00280},
URL = {https://doi.org/10.1162/tacl_a_00280},
} | null | 1 | 33 | ---
annotations_creators:
- crowdsourced
- expert-generated
- machine-generated
language_creators:
- found
language:
- en
license:
- mit
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
- 1K<n<10K
source_datasets:
- original
task_categories:
- token-classification
task_ids: []
paperswithcode_id: numeric-fused-head
pretty_name: Numeric Fused Heads
tags:
- fused-head-identification
dataset_info:
- config_name: identification
features:
- name: tokens
sequence: string
- name: start_index
dtype: int32
- name: end_index
dtype: int32
- name: label
dtype:
class_label:
names:
'0': neg
'1': pos
splits:
- name: train
num_bytes: 22290345
num_examples: 165606
- name: test
num_bytes: 68282
num_examples: 500
- name: validation
num_bytes: 2474528
num_examples: 18401
download_size: 24407520
dataset_size: 24833155
- config_name: resolution
features:
- name: tokens
sequence: string
- name: line_indices
sequence: int32
- name: head
sequence: string
- name: speakers
sequence: string
- name: anchors_indices
sequence: int32
splits:
- name: train
num_bytes: 19766437
num_examples: 7412
- name: test
num_bytes: 2743071
num_examples: 1000
- name: validation
num_bytes: 2633549
num_examples: 1000
download_size: 24923403
dataset_size: 25143057
config_names:
- identification
- resolution
---
# Dataset Card for Numeric Fused Heads
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [The Numeric Fused-Head demo](https://nlp.biu.ac.il/~lazary/fh/)
- **Repository:** [Github Repo](https://github.com/yanaiela/num_fh)
- **Paper:** [Where’s My Head? Definition, Dataset and Models for Numeric Fused-Heads Identification and Resolution](https://www.mitpressjournals.org/doi/full/10.1162/tacl_a_00280)
- **Leaderboard:** [NLP Progress](http://nlpprogress.com/english/missing_elements.html)
- **Point of Contact:** [Yanai Elazar](https://yanaiela.github.io), [Yoav Goldberg](https://www.cs.bgu.ac.il/~yoavg/uni/)
### Dataset Summary
[More Information Needed]
### Supported Tasks and Leaderboards
- Numeric Fused Head Identification
- Numeric Fused Head Resolution
### Languages
English
## Dataset Structure
### Data Instances
#### Identification
```
{
"tokens": ["It", "’s", "a", "curious", "thing", ",", "the", "death", "of", "a", "loved", "one", "."]
"start_index": 11
"end_index": 12
"label": 1
}
```
#### Resolution
```
{
"tokens": ["I", "'m", "eighty", "tomorrow", ".", "Are", "you", "sure", "?"],
"line_indices": [0, 0, 0, 0, 0, 1, 1, 1, 1],
"head": ["AGE"],
"speakers": ["John Doe", "John Doe", "John Doe", "John Doe", "John Doe", "Joe Bloggs", "Joe Bloggs", "Joe Bloggs", "Joe Bloggs"],
"anchors_indices": [2]
}
```
### Data Fields
#### Identification
- `tokens` - List of token strings as tokenized with [spaCy](https://spacy.io).
- `start_index` - Start index of the anchor.
- `end_index` - End index of the anchor.
- `label` - "pos" or "neg" depending on whether this example contains a numeric fused head.
#### Resolution
- `tokens` - List of token strings as tokenized with [spaCy](https://spacy.io).
- `line_indices` - List of indices indicating line number (one for each token)
- `head` - Reference to the missing head. If the head exists elsewhere in the sentence this is given as a token index.
- `speakers` - List of speaker names (one for each token)
- `anchors_indices` - Indices indicating which token(s) are the anchor (the visible number)
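As a minimal illustration of how the index fields relate to the tokens (plain Python written for this card, using the two example instances shown above; not part of the original release):

```python
# Identification: the anchor (the visible number / candidate fused head)
# is the token span tokens[start_index:end_index].
ident = {
    "tokens": ["It", "’s", "a", "curious", "thing", ",", "the",
               "death", "of", "a", "loved", "one", "."],
    "start_index": 11,
    "end_index": 12,
    "label": 1,  # 1 = "pos": the span is a numeric fused head
}
anchor_span = ident["tokens"][ident["start_index"]:ident["end_index"]]

# Resolution: anchors_indices point at the anchor token(s); "head" names
# the missing head -- here the implicit category AGE rather than a token.
res = {
    "tokens": ["I", "'m", "eighty", "tomorrow", ".", "Are", "you", "sure", "?"],
    "anchors_indices": [2],
    "head": ["AGE"],
}
anchor_tokens = [res["tokens"][i] for i in res["anchors_indices"]]

print(anchor_span)    # ['one']
print(anchor_tokens)  # ['eighty']
```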
### Data Splits
Train, Test, Dev
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
MIT License
### Citation Information
```
@article{doi:10.1162/tacl\_a\_00280,
author = {Elazar, Yanai and Goldberg, Yoav},
title = {Where’s My Head? Definition, Data Set, and Models for Numeric Fused-Head Identification and Resolution},
journal = {Transactions of the Association for Computational Linguistics},
volume = {7},
number = {},
pages = {519-535},
year = {2019},
doi = {10.1162/tacl\_a\_00280},
}
```
### Contributions
Thanks to [@ghomasHudson](https://github.com/ghomasHudson) for adding this dataset. |
py_ast | 2022-11-18T21:40:05.000Z | [
"task_categories:text2text-generation",
"task_categories:text-generation",
"task_categories:fill-mask",
"annotations_creators:machine-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"language:code",
"license:bsd-2-claus... | null | Dataset consisting of parsed ASTs that were used to train and
evaluate the DeepSyn tool.
The Python programs are collected from GitHub repositories
by removing duplicate files, removing project forks (copies of other existing repositories),
keeping only programs that parse and have at most 30,000 nodes in the AST,
and removing obfuscated files | @InProceedings{OOPSLA ’16, ACM,
title = {Probabilistic Model for Code with Decision Trees.},
authors={Raychev, V., Bielik, P., and Vechev, M.},
year={2016}
} | null | 3 | 33 | ---
pretty_name: PyAst
annotations_creators:
- machine-generated
language_creators:
- found
language:
- code
license:
- bsd-2-clause
- mit
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
source_datasets:
- original
task_categories:
- text2text-generation
- text-generation
- fill-mask
task_ids: []
paperswithcode_id: null
tags:
- code-modeling
- code-generation
dataset_info:
features:
- name: ast
sequence:
- name: type
dtype: string
- name: value
dtype: string
- name: children
sequence: int32
config_name: ast
splits:
- name: train
num_bytes: 1870790180
num_examples: 100000
- name: test
num_bytes: 907514993
num_examples: 50000
download_size: 526642289
dataset_size: 2778305173
---
# Dataset Card for [py_ast]
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [py150](https://www.sri.inf.ethz.ch/py150)
- **Paper**: [Probabilistic Model for Code with Decision Trees](https://www.semanticscholar.org/paper/Probabilistic-model-for-code-with-decision-trees-Raychev-Bielik/62e176977d439aac2e2d7eca834a7a99016dfcaf)
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
The dataset consists of parsed ASTs that were used to train and evaluate the DeepSyn tool.
The Python programs are collected from GitHub repositories
by removing duplicate files, removing project forks (copies of other existing repositories),
keeping only programs that parse and have at most 30,000 nodes in the AST,
and removing obfuscated files.
### Supported Tasks and Leaderboards
Code Representation, Unsupervised Learning
### Languages
Python
## Dataset Structure
### Data Instances
A typical data point contains the parsed AST of a Python program.
The main key is `ast`, which stores the program's AST as a flat list of nodes.
Each node has:
- `type`: the type of the node.
- `children`: a list of child-node indices, present only when the node has children (non-empty list).
- `value`: the node's hardcoded value, if any (else "N/A").
An example:
```json
[ {"type":"Module","children":[1,4]},{"type":"Assign","children":[2,3]},{"type":"NameStore","value":"x"},{"type":"Num","value":"7"}, {"type":"Print","children":[5]}, {"type":"BinOpAdd","children":[6,7]}, {"type":"NameLoad","value":"x"}, {"type":"Num","value":"1"} ]
```
### Data Fields
- `ast`: a list of dictionaries, where every dictionary is a node in the Abstract Syntax Tree.
- `type`: the type of the node.
- `children`: the indices of the node's child nodes within the list.
- `value`: the node's hardcoded value, if it holds one.
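Since `children` holds integer indices into the same flat list, the tree can be reconstructed with a short traversal. The following is a minimal sketch (plain Python, written for this card rather than taken from the DeepSyn tooling) that renders the example AST shown above:

```python
# Each AST is a flat list of nodes; "children" holds indices into that list.
ast = [
    {"type": "Module", "children": [1, 4]},
    {"type": "Assign", "children": [2, 3]},
    {"type": "NameStore", "value": "x"},
    {"type": "Num", "value": "7"},
    {"type": "Print", "children": [5]},
    {"type": "BinOpAdd", "children": [6, 7]},
    {"type": "NameLoad", "value": "x"},
    {"type": "Num", "value": "1"},
]

def render(nodes, idx=0, depth=0):
    """Return an indented text rendering of the tree rooted at nodes[idx]."""
    node = nodes[idx]
    line = "  " * depth + node["type"]
    if "value" in node:
        line += f" = {node['value']}"
    lines = [line]
    for child in node.get("children", []):
        lines.extend(render(nodes, child, depth + 1))
    return lines

print("\n".join(render(ast)))
```

Applied to the example, this prints the original program `x = 7; print(x + 1)` as an indented tree of node types and values.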
### Data Splits
The data is split into a training and test set.
The final split sizes are as follows:
| | train | test |
|------------------|--------:|------------:|
| py_ast examples | 100000 | 50000 |
## Dataset Creation
[More Information Needed]
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
Raychev, V., Bielik, P., and Vechev, M.
### Licensing Information
MIT, BSD and Apache
### Citation Information
```
@inproceedings{10.1145/2983990.2984041,
author = {Raychev, Veselin and Bielik, Pavol and Vechev, Martin},
title = {Probabilistic Model for Code with Decision Trees},
year = {2016},
isbn = {9781450344449},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
url = {https://doi.org/10.1145/2983990.2984041},
doi = {10.1145/2983990.2984041},
booktitle = {Proceedings of the 2016 ACM SIGPLAN International Conference on Object-Oriented Programming, Systems, Languages, and Applications},
pages = {731–747},
numpages = {17},
keywords = {Code Completion, Decision Trees, Probabilistic Models of Code},
location = {Amsterdam, Netherlands},
series = {OOPSLA 2016}
}
```
### Contributions
Thanks to [@reshinthadithyan](https://github.com/reshinthadithyan) for adding this dataset. |
scb_mt_enth_2020 | 2022-11-18T21:43:37.000Z | [
"task_categories:translation",
"annotations_creators:crowdsourced",
"annotations_creators:expert-generated",
"annotations_creators:found",
"annotations_creators:machine-generated",
"language_creators:expert-generated",
"language_creators:found",
"language_creators:machine-generated",
"multilingualit... | null | scb-mt-en-th-2020: A Large English-Thai Parallel Corpus
The primary objective of our work is to build a large-scale English-Thai dataset for machine translation.
We construct an English-Thai machine translation dataset with over 1 million segment pairs, curated from various sources,
namely news, Wikipedia articles, SMS messages, task-based dialogs, web-crawled data and government documents.
Methodology for gathering data, building parallel texts and removing noisy sentence pairs are presented in a reproducible manner.
We train machine translation models based on this dataset. Our models' performance is comparable to that of
Google Translation API (as of May 2020) for Thai-English and outperform Google when the Open Parallel Corpus (OPUS) is
included in the training data for both Thai-English and English-Thai translation.
The dataset, pre-trained models, and source code to reproduce our work are available for public use. | @article{lowphansirikul2020scb,
title={scb-mt-en-th-2020: A Large English-Thai Parallel Corpus},
author={Lowphansirikul, Lalita and Polpanumas, Charin and Rutherford, Attapol T and Nutanong, Sarana},
journal={arXiv preprint arXiv:2007.03541},
year={2020}
} | null | 2 | 33 | ---
annotations_creators:
- crowdsourced
- expert-generated
- found
- machine-generated
language_creators:
- expert-generated
- found
- machine-generated
language:
- en
- th
license:
- cc-by-sa-4.0
multilinguality:
- translation
size_categories:
- 1M<n<10M
source_datasets:
- original
task_categories:
- translation
task_ids: []
paperswithcode_id: scb-mt-en-th-2020
pretty_name: ScbMtEnth2020
dataset_info:
- config_name: enth
features:
- name: translation
dtype:
translation:
languages:
- en
- th
- name: subdataset
dtype: string
splits:
- name: train
num_bytes: 390411946
num_examples: 801402
- name: validation
num_bytes: 54167280
num_examples: 100173
- name: test
num_bytes: 53782790
num_examples: 100177
download_size: 138415559
dataset_size: 498362016
- config_name: then
features:
- name: translation
dtype:
translation:
languages:
- th
- en
- name: subdataset
dtype: string
splits:
- name: train
num_bytes: 390411946
num_examples: 801402
- name: validation
num_bytes: 54167280
num_examples: 100173
- name: test
num_bytes: 53782790
num_examples: 100177
download_size: 138415559
dataset_size: 498362016
---
# Dataset Card for `scb_mt_enth_2020`
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://airesearch.in.th/
- **Repository:** https://github.com/vistec-AI/thai2nmt
- **Paper:** https://arxiv.org/abs/2007.03541
- **Leaderboard:**
- **Point of Contact:** https://airesearch.in.th/
### Dataset Summary
scb-mt-en-th-2020: A Large English-Thai Parallel Corpus
The primary objective of our work is to build a large-scale English-Thai dataset for machine translation.
We construct an English-Thai machine translation dataset with over 1 million segment pairs, curated from various sources,
namely news, Wikipedia articles, SMS messages, task-based dialogs, web-crawled data and government documents.
The methodology for gathering data, building parallel texts and removing noisy sentence pairs is presented in a reproducible manner.
We train machine translation models based on this dataset. Our models' performance is comparable to that of
Google Translation API (as of May 2020) for Thai-English, and our models outperform Google when the Open Parallel Corpus (OPUS) is
included in the training data, for both Thai-English and English-Thai translation.
The dataset, pre-trained models, and source code to reproduce our work are available for public use.
### Supported Tasks and Leaderboards
machine translation
### Languages
English, Thai
## Dataset Structure
### Data Instances
```
{'subdataset': 'aqdf', 'translation': {'en': 'FAR LEFT: Indonesian National Police Chief Tito Karnavian, from left, Philippine National Police Chief Ronald Dela Rosa and Royal Malaysian Police Inspector General Khalid Abu Bakar link arms before the Trilateral Security Meeting in Pasay city, southeast of Manila, Philippines, in June 2017. [THE ASSOCIATED PRESS]', 'th': '(ซ้ายสุด) นายติโต คาร์นาเวียน ผู้บัญชาการตํารวจแห่งชาติอินโดนีเซีย (จากซ้าย) นายโรนัลด์ เดลา โรซา ผู้บัญชาการตํารวจแห่งชาติฟิลิปปินส์ และนายคาลิด อาบู บาการ์ ผู้บัญชาการตํารวจแห่งชาติมาเลเซีย ไขว้แขนกันก่อนเริ่มการประชุมความมั่นคงไตรภาคีในเมืองปาเซย์ ซึ่งอยู่ทางตะวันออกเฉียงใต้ของกรุงมะนิลา ประเทศฟิลิปปินส์ ในเดือนมิถุนายน พ.ศ. 2560 ดิแอสโซซิเอทเต็ด เพรส'}}
{'subdataset': 'thai_websites', 'translation': {'en': "*Applicants from certain countries may be required to pay a visa issuance fee after their application is approved. The Department of State's website has more information about visa issuance fees and can help you determine if an issuance fee applies to your nationality.", 'th': 'ประเภทวีซ่า รวมถึงค่าธรรมเนียม และข้อกําหนดในการสัมภาษณ์วีซ่า จะขึ้นอยู่กับชนิดของหนังสือเดินทาง และจุดประสงค์ในการเดินทางของท่าน โปรดดูตารางด้านล่างก่อนการสมัครวีซ่า'}}
{'subdataset': 'nus_sms', 'translation': {'en': 'Yup... Okay. Cya tmr... So long nvr write already... Dunno whether tmr can come up with 500 words', 'th': 'ใช่...ได้ แล้วเจอกันพรุ่งนี้... นานแล้วไม่เคยเขียน... ไม่รู้ว่าพรุ่งนี้จะทําได้ถึง500คําไหมเลย'}}
```
### Data Fields
- `subdataset`: subdataset from which the sentence pair comes from
- `translation`:
- `en`: English sentences (original source)
- `th`: Thai sentences (originally target for translation)
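As a minimal sketch of how these fields might be consumed — for example, filtering pairs by `subdataset` — consider the toy records below, which mirror the schema above but are invented, not real corpus entries:

```python
# Toy records mirroring the scb_mt_enth_2020 schema (invented, not real corpus entries).
records = [
    {"subdataset": "nus_sms",
     "translation": {"en": "See you tomorrow.", "th": "แล้วเจอกันพรุ่งนี้"}},
    {"subdataset": "wikipedia",
     "translation": {"en": "Bangkok is the capital of Thailand.",
                     "th": "กรุงเทพมหานครเป็นเมืองหลวงของประเทศไทย"}},
    {"subdataset": "nus_sms",
     "translation": {"en": "Ok, noted.", "th": "โอเค รับทราบ"}},
]

def pairs_from(records, subdataset):
    """Yield (en, th) tuples for one sub-corpus."""
    for r in records:
        if r["subdataset"] == subdataset:
            yield r["translation"]["en"], r["translation"]["th"]

sms_pairs = list(pairs_from(records, "nus_sms"))
print(len(sms_pairs))  # 2
```

The same access pattern applies to each example of the real dataset once loaded (e.g. via `datasets.load_dataset`).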
### Data Splits
```
Split ratio (train, valid, test): (0.8, 0.1, 0.1)
Number of pairs (train, valid, test): 801,402 | 100,173 | 100,177
# Train
generated_reviews_yn: 218,637 ( 27.28% )
task_master_1: 185,671 ( 23.17% )
generated_reviews_translator: 105,561 ( 13.17% )
thai_websites: 93,518 ( 11.67% )
paracrawl: 46,802 ( 5.84% )
nus_sms: 34,495 ( 4.30% )
mozilla_common_voice: 32,451 ( 4.05% )
wikipedia: 26,163 ( 3.26% )
generated_reviews_crowd: 19,769 ( 2.47% )
assorted_government: 19,712 ( 2.46% )
aqdf: 10,466 ( 1.31% )
msr_paraphrase: 8,157 ( 1.02% )
# Valid
generated_reviews_yn: 30,786 ( 30.73% )
task_master_1: 18,531 ( 18.50% )
generated_reviews_translator: 13,884 ( 13.86% )
thai_websites: 13,381 ( 13.36% )
paracrawl: 6,618 ( 6.61% )
nus_sms: 4,628 ( 4.62% )
wikipedia: 3,796 ( 3.79% )
assorted_government: 2,842 ( 2.83% )
generated_reviews_crowd: 2,409 ( 2.40% )
aqdf: 1,518 ( 1.52% )
msr_paraphrase: 1,107 ( 1.11% )
mozilla_common_voice: 673 ( 0.67% )
# Test
generated_reviews_yn: 30,785 ( 30.73% )
task_master_1: 18,531 ( 18.50% )
generated_reviews_translator: 13,885 ( 13.86% )
thai_websites: 13,381 ( 13.36% )
paracrawl: 6,619 ( 6.61% )
nus_sms: 4,627 ( 4.62% )
wikipedia: 3,797 ( 3.79% )
assorted_government: 2,844 ( 2.83% )
generated_reviews_crowd: 2,409 ( 2.40% )
aqdf: 1,519 ( 1.52% )
msr_paraphrase: 1,107 ( 1.11% )
mozilla_common_voice: 673 ( 0.67% )
```
## Dataset Creation
### Curation Rationale
[AIResearch](https://airesearch.in.th/), funded by [VISTEC](https://www.vistec.ac.th/) and [depa](https://www.depa.or.th/th/home), curated this dataset as part of public NLP infrastructure. The center releases the dataset and baseline models under CC-BY-SA 4.0.
### Source Data
#### Initial Data Collection and Normalization
The sentence pairs are curated from news, Wikipedia articles, SMS messages, task-based dialogs, webcrawled data and government documents. Sentence pairs are generated by:
- Professional translators
- Crowdsourced translators
- Google Translate API and human annotators (accepted or rejected)
- Sentence alignment with [multilingual universal sentence encoder](https://tfhub.dev/google/universal-sentence-encoder-multilingual/3); the authors created [CRFCut](https://github.com/vistec-AI/crfcut) to segment Thai sentences to be able to align them with their English counterparts (English sentences segmented by [NLTK](https://www.nltk.org/))
For detailed explanation of dataset curation, see https://arxiv.org/pdf/2007.03541.pdf
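As an illustration of the alignment step, the sketch below picks the contiguous group of Thai segments whose (averaged) vector is closest to an English sentence vector. The 3-d vectors are toy stand-ins for multilingual universal sentence encoder embeddings, and averaging group vectors is a simplification of re-embedding the concatenated segments:

```python
import math

def cosine(u, v):
    # Plain cosine similarity between two equal-length vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def best_group(en_vec, th_vecs):
    """Pick (i, j) so that the mean of Thai segment vectors i..j-1 is most
    similar to the English sentence vector."""
    best, best_span = -1.0, (0, 1)
    for i in range(len(th_vecs)):
        for j in range(i + 1, len(th_vecs) + 1):
            group = [sum(col) / (j - i) for col in zip(*th_vecs[i:j])]
            s = cosine(en_vec, group)
            if s > best:
                best, best_span = s, (i, j)
    return best_span, best

# Toy 3-d "embeddings": the English sentence matches the average of the
# first two Thai segments best.
en_vec = [1.0, 1.0, 0.0]
th_vecs = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
span, sim = best_group(en_vec, th_vecs)
print(span, round(sim, 3))  # (0, 2) 1.0
```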
### Annotations
#### Sources and Annotation process
- generated_reviews_yn: generated by [CTRL](https://arxiv.org/abs/1909.05858), translated to Thai by Google Translate API and annotated as accepted or rejected by human annotators (we do not include rejected sentence pairs)
- task_master_1: [Taskmaster-1](https://research.google/tools/datasets/taskmaster-1/) translated by professional translators hired by [AIResearch](https://airesearch.in.th/)
- generated_reviews_translator: professional translators hired by [AIResearch](https://airesearch.in.th/)
- thai_websites: webcrawling from top 500 websites in Thailand; respective content creators; the authors only did sentence alignment
- paracrawl: replicating Paracrawl's methodology for webcrawling; respective content creators; the authors only did sentence alignment
- nus_sms: [The National University of Singapore SMS Corpus](https://scholarbank.nus.edu.sg/handle/10635/137343) translated by crowdsourced translators hired by [AIResearch](https://airesearch.in.th/)
- wikipedia: Thai Wikipedia; respective content creators; the authors only did sentence alignment
- assorted_government: Government document in PDFs from various government websites; respective content creators; the authors only did sentence alignment
- generated_reviews_crowd: generated by [CTRL](https://arxiv.org/abs/1909.05858), translated to Thai by crowdsourced translators hired by [AIResearch](https://airesearch.in.th/)
- aqdf: Bilingual news from [Asia Pacific Defense Forum](https://ipdefenseforum.com/); respective content creators; the authors only did sentence alignment
- msr_paraphrase: [Microsoft Research Paraphrase Corpus](https://www.microsoft.com/en-us/download/details.aspx?id=52398) translated to Thai by crowdsourced translators hired by [AIResearch](https://airesearch.in.th/)
- mozilla_common_voice: English version of [Mozilla Common Voice](https://commonvoice.mozilla.org/) translated to Thai by crowdsourced translators hired by [AIResearch](https://airesearch.in.th/)
### Personal and Sensitive Information
There is a risk that personal information is included in the web-crawled data, namely `paracrawl` and `thai_websites`.
## Considerations for Using the Data
### Social Impact of Dataset
- The first and currently largest English-Thai machine translation dataset that is strictly cleaned and deduplicated, compared to other sources such as Paracrawl.
### Discussion of Biases
- Gender-based ending honorifics in Thai (ครับ/ค่ะ) might not be balanced due to more female translators than male for `task_master_1`
### Other Known Limitations
#### Segment Alignment between Languages With and Without Boundaries
Unlike English, there is no segment boundary marking in Thai. One segment in Thai may or may not cover all
the content of an English segment. Currently, we mitigate this problem by grouping Thai segments together before
computing the text similarity scores. We then choose the combination with the highest text similarity score. It can be
said that adequacy is the main issue in building this dataset.
#### Quality of Translation from Crawled Websites
Some websites use machine translation models such as Google Translate to localize their content. As a result, Thai
segments retrieved from web crawling might face issues of fluency since we do not use human annotators to perform
quality control.
#### Quality Control of Crowdsourced Translators
When we use a crowdsourcing platform to translate the content, we cannot fully control the quality of the translation.
To combat this, we filter out low-quality segments by using a text similarity threshold, based on the cosine similarity of
universal sentence encoder vectors. Moreover, some crowdsourced translators might copy and paste source segments into
a translation engine and submit the results as answers on the platform. To further improve, we can apply techniques such
as those described in [Zaidan, 2012] to control the quality and avoid fraud on the platform.
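A minimal sketch of such a threshold filter (the 0.7 cut-off, the similarity scores, and the sentence pairs are all illustrative; the authors' actual threshold is not stated here):

```python
# Illustrative pairs with precomputed scores standing in for the cosine
# similarity of universal sentence encoder vectors.
scored_pairs = [
    {"en": "The food was great.",     "th": "อาหารอร่อยมาก",  "sim": 0.91},
    {"en": "Battery lasts two days.", "th": "สวัสดีครับ",      "sim": 0.12},
    {"en": "Fast delivery.",          "th": "จัดส่งรวดเร็ว",   "sim": 0.84},
]

THRESHOLD = 0.7  # illustrative cut-off, not the authors' actual value

# Keep only pairs whose similarity clears the threshold.
kept = [p for p in scored_pairs if p["sim"] >= THRESHOLD]
print([p["en"] for p in kept])  # ['The food was great.', 'Fast delivery.']
```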
#### Domain Dependence of Machine Translation Models
We test domain dependence of machine translation models by comparing models trained and tested on the same dataset,
using 80/10/10 train-validation-test split, and models trained on one dataset and tested on the other.
## Additional Information
### Dataset Curators
[AIResearch](https://airesearch.in.th/), funded by [VISTEC](https://www.vistec.ac.th/) and [depa](https://www.depa.or.th/th/home)
### Licensing Information
CC-BY-SA 4.0
### Citation Information
```
@article{lowphansirikul2020scb,
title={scb-mt-en-th-2020: A Large English-Thai Parallel Corpus},
author={Lowphansirikul, Lalita and Polpanumas, Charin and Rutherford, Attapol T and Nutanong, Sarana},
journal={arXiv preprint arXiv:2007.03541},
year={2020}
}
```
### Contributions
Thanks to [@cstorm125](https://github.com/cstorm125) for adding this dataset. |
sofc_materials_articles | 2023-03-09T10:44:46.000Z | [
"task_categories:text-generation",
"task_categories:fill-mask",
"task_categories:token-classification",
"task_categories:text-classification",
"task_ids:named-entity-recognition",
"task_ids:slot-filling",
"task_ids:topic-classification",
"annotations_creators:expert-generated",
"language_creators:fo... | null | The SOFC-Exp corpus consists of 45 open-access scholarly articles annotated by domain experts.
A corpus and an inter-annotator agreement study demonstrate the complexity of the suggested
named entity recognition and slot filling tasks as well as high annotation quality is presented
in the accompanying paper. | @misc{friedrich2020sofcexp,
title={The SOFC-Exp Corpus and Neural Approaches to Information Extraction in the Materials Science Domain},
author={Annemarie Friedrich and Heike Adel and Federico Tomazic and Johannes Hingerl and Renou Benteau and Anika Maruscyk and Lukas Lange},
year={2020},
eprint={2006.03039},
archivePrefix={arXiv},
primaryClass={cs.CL}
} | null | 6 | 33 | ---
annotations_creators:
- expert-generated
language_creators:
- found
language:
- en
license:
- cc-by-4.0
multilinguality:
- monolingual
size_categories:
- n<1K
source_datasets:
- original
task_categories:
- text-generation
- fill-mask
- token-classification
- text-classification
task_ids:
- named-entity-recognition
- slot-filling
- topic-classification
pretty_name: SofcMaterialsArticles
dataset_info:
features:
- name: text
dtype: string
- name: sentence_offsets
sequence:
- name: begin_char_offset
dtype: int64
- name: end_char_offset
dtype: int64
- name: sentences
sequence: string
- name: sentence_labels
sequence: int64
- name: token_offsets
sequence:
- name: offsets
sequence:
- name: begin_char_offset
dtype: int64
- name: end_char_offset
dtype: int64
- name: tokens
sequence:
sequence: string
- name: entity_labels
sequence:
sequence:
class_label:
names:
'0': B-DEVICE
'1': B-EXPERIMENT
'2': B-MATERIAL
'3': B-VALUE
'4': I-DEVICE
'5': I-EXPERIMENT
'6': I-MATERIAL
'7': I-VALUE
'8': O
- name: slot_labels
sequence:
sequence:
class_label:
names:
'0': B-anode_material
'1': B-cathode_material
'2': B-conductivity
'3': B-current_density
'4': B-degradation_rate
'5': B-device
'6': B-electrolyte_material
'7': B-experiment_evoking_word
'8': B-fuel_used
'9': B-interlayer_material
'10': B-interconnect_material
'11': B-open_circuit_voltage
'12': B-power_density
'13': B-resistance
'14': B-support_material
'15': B-thickness
'16': B-time_of_operation
'17': B-voltage
'18': B-working_temperature
'19': I-anode_material
'20': I-cathode_material
'21': I-conductivity
'22': I-current_density
'23': I-degradation_rate
'24': I-device
'25': I-electrolyte_material
'26': I-experiment_evoking_word
'27': I-fuel_used
'28': I-interlayer_material
'29': I-interconnect_material
'30': I-open_circuit_voltage
'31': I-power_density
'32': I-resistance
'33': I-support_material
'34': I-thickness
'35': I-time_of_operation
'36': I-voltage
'37': I-working_temperature
'38': O
- name: links
sequence:
- name: relation_label
dtype:
class_label:
names:
'0': coreference
'1': experiment_variation
'2': same_experiment
'3': thickness
- name: start_span_id
dtype: int64
- name: end_span_id
dtype: int64
- name: slots
sequence:
- name: frame_participant_label
dtype:
class_label:
names:
'0': anode_material
'1': cathode_material
'2': current_density
'3': degradation_rate
'4': device
'5': electrolyte_material
'6': fuel_used
'7': interlayer_material
'8': open_circuit_voltage
'9': power_density
'10': resistance
'11': support_material
'12': time_of_operation
'13': voltage
'14': working_temperature
- name: slot_id
dtype: int64
- name: spans
sequence:
- name: span_id
dtype: int64
- name: entity_label
dtype:
class_label:
names:
'0': ''
'1': DEVICE
'2': MATERIAL
'3': VALUE
- name: sentence_id
dtype: int64
- name: experiment_mention_type
dtype:
class_label:
names:
'0': ''
'1': current_exp
'2': future_work
'3': general_info
'4': previous_work
- name: begin_char_offset
dtype: int64
- name: end_char_offset
dtype: int64
- name: experiments
sequence:
- name: experiment_id
dtype: int64
- name: span_id
dtype: int64
- name: slots
sequence:
- name: frame_participant_label
dtype:
class_label:
names:
'0': anode_material
'1': cathode_material
'2': current_density
'3': degradation_rate
'4': conductivity
'5': device
'6': electrolyte_material
'7': fuel_used
'8': interlayer_material
'9': open_circuit_voltage
'10': power_density
'11': resistance
'12': support_material
'13': time_of_operation
'14': voltage
'15': working_temperature
- name: slot_id
dtype: int64
splits:
- name: train
num_bytes: 7402373
num_examples: 26
- name: test
num_bytes: 2650700
num_examples: 11
- name: validation
num_bytes: 1993857
num_examples: 8
download_size: 3733137
dataset_size: 12046930
---
# Dataset Card for SofcMaterialsArticles
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [boschresearch/sofc-exp_textmining_resources](https://github.com/boschresearch/sofc-exp_textmining_resources)
- **Repository:** [boschresearch/sofc-exp_textmining_resources](https://github.com/boschresearch/sofc-exp_textmining_resources)
- **Paper:** [The SOFC-Exp Corpus and Neural Approaches to Information Extraction in the Materials Science Domain](https://arxiv.org/abs/2006.03039)
- **Leaderboard:**
- **Point of Contact:** [Annemarie Friedrich](annemarie.friedrich@de.bosch.com)
### Dataset Summary
> The SOFC-Exp corpus contains 45 scientific publications about solid oxide fuel cells (SOFCs), published between 2013 and 2019 as open-access articles all with a CC-BY license. The dataset was manually annotated by domain experts with the following information:
>
> * Mentions of relevant experiments have been marked using a graph structure corresponding to instances of an Experiment frame (similar to the ones used in FrameNet.) We assume that an Experiment frame is introduced to the discourse by mentions of words such as report, test or measure (also called the frame-evoking elements). The nodes corresponding to the respective tokens are the heads of the graphs representing the Experiment frame.
> * The Experiment frame related to SOFC-Experiments defines a set of 16 possible participant slots. Participants are annotated as dependents of links between the frame-evoking element and the participant node.
> * In addition, we provide coarse-grained entity/concept types for all frame participants, i.e, MATERIAL, VALUE or DEVICE. Note that this annotation has not been performed on the full texts but only on sentences containing information about relevant experiments, and a few sentences in addition. In the paper, we run experiments for both tasks only on the set of sentences marked as experiment-describing in the gold standard, which is admittedly a slightly simplified setting. Entity types are only partially annotated on other sentences. Slot filling could of course also be evaluated in a fully automatic setting with automatic experiment sentence detection as a first step.
### Supported Tasks and Leaderboards
- `topic-classification`: The dataset can be used to train a model for topic-classification, to identify sentences that mention SOFC-related experiments.
- `named-entity-recognition`: The dataset can be used to train a named entity recognition model to detect `MATERIAL`, `VALUE`, `DEVICE`, and `EXPERIMENT` entities.
- `slot-filling`: The slot-filling task is approached as fine-grained entity-typing-in-context, assuming that each sentence represents a single experiment frame. Sequence tagging architectures are utilized for tagging the tokens of each experiment-describing sentence with the set of slot types.
The paper experiments with BiLSTM architectures with `BERT`- and `SciBERT`- generated token embeddings, as well as with `BERT` and `SciBERT` directly for the modeling task. A simple CRF architecture is used as a baseline for sequence-tagging tasks. Implementations of the transformer-based architectures can be found in the `huggingface/transformers` library: [BERT](https://huggingface.co/bert-base-uncased), [SciBERT](https://huggingface.co/allenai/scibert_scivocab_uncased)
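To make the BIO tagging scheme concrete, the following sketch (with invented tokens and tags) decodes entity labels such as `B-MATERIAL`/`I-VALUE` into typed token spans; strict BIO validation (e.g. a type change inside a span) is omitted for brevity:

```python
def bio_to_spans(tokens, tags):
    """Decode BIO tags into (entity_type, start, end) token spans."""
    spans, start, etype = [], None, None
    for i, tag in enumerate(tags + ["O"]):  # sentinel "O" flushes the last span
        if tag.startswith("B-") or tag == "O":
            if start is not None:
                spans.append((etype, start, i))
                start, etype = None, None
        if tag.startswith("B-"):
            start, etype = i, tag[2:]
        elif tag.startswith("I-") and start is None:
            # Tolerate a stray I- tag without a preceding B- by opening a span.
            start, etype = i, tag[2:]
    return spans

# Invented example sentence in the style of the corpus, not a real annotation.
tokens = ["The", "YSZ", "electrolyte", "reached", "0.1", "S/cm"]
tags   = ["O", "B-MATERIAL", "O", "O", "B-VALUE", "I-VALUE"]
print(bio_to_spans(tokens, tags))  # [('MATERIAL', 1, 2), ('VALUE', 4, 6)]
```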
### Languages
This corpus is in English.
## Dataset Structure
### Data Instances
As each example contains the full text of an academic paper plus annotations, a JSON-formatted example is too large to include in this README.
### Data Fields
- `text`: The full text of the paper
- `sentence_offsets`: Start and end character offsets for each sentence in the text.
- `begin_char_offset`: a `int64` feature.
- `end_char_offset`: a `int64` feature.
- `sentences`: A sequence of the sentences in the text (using `sentence_offsets`)
- `sentence_labels`: Sequence of binary labels for whether a sentence contains information of interest.
- `token_offsets`: Sequence of sequences containing start and end character offsets for each token in each sentence in the text.
- `offsets`: a dictionary feature containing:
- `begin_char_offset`: a `int64` feature.
- `end_char_offset`: a `int64` feature.
- `tokens`: Sequence of sequences containing the tokens for each sentence in the text.
- `feature`: a `string` feature.
- `entity_labels`: a dictionary feature containing:
- `feature`: a classification label, with possible values including `B-DEVICE`, `B-EXPERIMENT`, `B-MATERIAL`, `B-VALUE`, `I-DEVICE`.
- `slot_labels`: a dictionary feature containing:
- `feature`: a classification label, with possible values including `B-anode_material`, `B-cathode_material`, `B-conductivity`, `B-current_density`, `B-degradation_rate`.
- `links`: a dictionary feature containing:
- `relation_label`: a classification label, with possible values including `coreference`, `experiment_variation`, `same_experiment`, `thickness`.
- `start_span_id`: a `int64` feature.
- `end_span_id`: a `int64` feature.
- `slots`: a dictionary feature containing:
- `frame_participant_label`: a classification label, with possible values including `anode_material`, `cathode_material`, `current_density`, `degradation_rate`, `device`.
- `slot_id`: a `int64` feature.
- `spans`: a dictionary feature containing:
- `span_id`: a `int64` feature.
- `entity_label`: a classification label, with possible values including ``, `DEVICE`, `MATERIAL`, `VALUE`.
- `sentence_id`: a `int64` feature.
- `experiment_mention_type`: a classification label, with possible values including ``, `current_exp`, `future_work`, `general_info`, `previous_work`.
- `begin_char_offset`: a `int64` feature.
- `end_char_offset`: a `int64` feature.
- `experiments`: a dictionary feature containing:
- `experiment_id`: a `int64` feature.
- `span_id`: a `int64` feature.
- `slots`: a dictionary feature containing:
- `frame_participant_label`: a classification label, with possible values including `anode_material`, `cathode_material`, `current_density`, `degradation_rate`, `conductivity`.
- `slot_id`: a `int64` feature.
Very detailed information for each of the fields can be found in the [corpus file formats section](https://github.com/boschresearch/sofc-exp_textmining_resources#corpus-file-formats) of the associated dataset repo
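As a small illustration of how the sentence-level fields combine (a toy record with invented text; this assumes a `sentence_labels` value of 1 marks an experiment-describing sentence):

```python
# Toy record mirroring the sentence-level fields above (invented text,
# not a real article from the corpus).
record = {
    "sentences": [
        "Solid oxide fuel cells are reviewed.",
        "The cell was tested at 800 C with H2 as fuel.",
        "Future work will address degradation.",
    ],
    "sentence_labels": [0, 1, 0],
}

def experiment_sentences(record):
    """Return the sentences flagged as experiment-describing."""
    return [s for s, y in zip(record["sentences"], record["sentence_labels"])
            if y == 1]

print(experiment_sentences(record))
# ['The cell was tested at 800 C with H2 as fuel.']
```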
### Data Splits
This dataset consists of three splits:
| | Train | Valid | Test |
| ----- | ------ | ----- | ---- |
| Input Examples | 26 | 8 | 11 |
The authors propose the experimental setting of using the training data in a 5-fold cross-validation setting for development and tuning, and finally applying the model(s) to the independent test set.
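The proposed protocol can be sketched in pure Python as follows (26 is the number of training documents from the table above; in practice a library helper such as scikit-learn's `KFold` would typically be used instead):

```python
def k_fold_indices(n_examples, k=5):
    """Split range(n_examples) into k contiguous (train, valid) index folds."""
    indices = list(range(n_examples))
    # Distribute the remainder over the first n_examples % k folds.
    fold_sizes = [n_examples // k + (1 if i < n_examples % k else 0)
                  for i in range(k)]
    folds, start = [], 0
    for size in fold_sizes:
        valid = indices[start:start + size]
        train = indices[:start] + indices[start + size:]
        folds.append((train, valid))
        start += size
    return folds

folds = k_fold_indices(26, k=5)  # 26 training documents in this corpus
print([len(v) for _, v in folds])  # [6, 5, 5, 5, 5]
```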
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
The corpus consists of 45
open-access scientific publications about SOFCs
and related research, annotated by domain experts.
### Annotations
#### Annotation process
For manual annotation, the authors use the InCeption annotation tool (Klie et al., 2018).
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
The manual annotations created for the SOFC-Exp corpus are licensed under a [Creative Commons Attribution 4.0 International License (CC-BY-4.0)](https://creativecommons.org/licenses/by/4.0/).
### Citation Information
```
@misc{friedrich2020sofcexp,
title={The SOFC-Exp Corpus and Neural Approaches to Information Extraction in the Materials Science Domain},
author={Annemarie Friedrich and Heike Adel and Federico Tomazic and Johannes Hingerl and Renou Benteau and Anika Maruscyk and Lukas Lange},
year={2020},
eprint={2006.03039},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
### Contributions
Thanks to [@ZacharySBrown](https://github.com/ZacharySBrown) for adding this dataset. |
turkish_product_reviews | 2023-01-25T14:54:42.000Z | [
"task_categories:text-classification",
"task_ids:sentiment-classification",
"annotations_creators:found",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"language:tr",
"license:unknown",
"region:us"
] | null | Turkish Product Reviews.
This repository contains 235.165 product reviews collected online. There are 220.284 positive, 14881 negative reviews. | null | null | 3 | 33 | ---
annotations_creators:
- found
language_creators:
- found
language:
- tr
license:
- unknown
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- sentiment-classification
pretty_name: Turkish Product Reviews
dataset_info:
features:
- name: sentence
dtype: string
- name: sentiment
dtype:
class_label:
names:
'0': negative
'1': positive
splits:
- name: train
num_bytes: 43369710
num_examples: 235165
download_size: 13184332
dataset_size: 43369710
---
# Dataset Card for Turkish Product Reviews
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Repository:** [turkish-text-data](https://github.com/fthbrmnby/turkish-text-data)
- **Point of Contact:** [Fatih Barmanbay](https://github.com/fthbrmnby)
### Dataset Summary
This Turkish Product Reviews Dataset contains 235,165 product reviews collected online. There are 220,284 positive and 14,881 negative reviews.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
The dataset is in Turkish.
## Dataset Structure
### Data Instances
**Example 1:**
**sentence:** beklentimin altında bir ürün kaliteli değil
**sentiment:** 0 (negative)
**Example 2:**
**sentence:** fiyat ve performans olarak gayet iyi
**sentiment:** 1 (positive)
### Data Fields
- **sentence** (string): Contains the Turkish product review
- **sentiment** (int): 0 (negative) or 1 (positive)
### Data Splits
The dataset is not divided into separate train and test sets; all 235,165 reviews are provided in a single train split.
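Since no held-out split is shipped with the dataset, users who need one can carve it out themselves. Below is a minimal stratified-split sketch (the labels are toy data mimicking the heavy positive skew, and the 80/20 ratio is illustrative):

```python
import random

def stratified_split(labels, test_ratio=0.2, seed=42):
    """Return (train_idx, test_idx) keeping the label ratio in both parts."""
    rng = random.Random(seed)
    by_label = {}
    for i, y in enumerate(labels):
        by_label.setdefault(y, []).append(i)
    train_idx, test_idx = [], []
    for idxs in by_label.values():
        rng.shuffle(idxs)
        cut = int(len(idxs) * test_ratio)
        test_idx.extend(idxs[:cut])
        train_idx.extend(idxs[cut:])
    return sorted(train_idx), sorted(test_idx)

# Toy labels mimicking the positive skew (1 = positive, 0 = negative).
labels = [1] * 90 + [0] * 10
train_idx, test_idx = stratified_split(labels, test_ratio=0.2)
print(len(train_idx), len(test_idx))  # 80 20
```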
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
[More Information Needed]
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
The dataset does not contain any additional annotations.
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
The dataset was created by [Fatih Barmanbay](https://github.com/fthbrmnby).
### Licensing Information
The data is under the [CC-BY-SA-4.0 License](https://github.com/fthbrmnby/turkish-text-data/blob/master/LICENCE)
### Citation Information
No citation available for this dataset.
### Contributions
Thanks to [@basakbuluz](https://github.com/basakbuluz) for adding this dataset. |
GEM/mlsum | 2022-10-24T15:30:21.000Z | [
"task_categories:summarization",
"annotations_creators:none",
"language_creators:unknown",
"multilinguality:unknown",
"size_categories:unknown",
"source_datasets:original",
"language:de",
"language:es",
"license:other",
"region:us"
] | GEM | This is the MLSUM subset of the GEM benchmark. MLSUM is the first large-scale MultiLingual SUMmarization dataset.
Obtained from online newspapers, it contains 1.5M+ article/summary pairs in five different languages -- namely, French, German, Spanish, Russian, Turkish.
Together with English newspapers from the popular CNN/Daily mail dataset, the collected data form a large scale multilingual dataset which can enable new research directions for the text summarization community.
We report cross-lingual comparative analyses based on state-of-the-art systems.
These highlight existing biases which motivate the use of a multi-lingual dataset. | @article{scialom2020mlsum,
title={MLSUM: The Multilingual Summarization Corpus},
author={Scialom, Thomas and Dray, Paul-Alexis and Lamprier, Sylvain and Piwowarski, Benjamin and Staiano, Jacopo},
journal={arXiv preprint arXiv:2004.14900},
year={2020}
} | null | 1 | 33 | ---
annotations_creators:
- none
language_creators:
- unknown
language:
- de
- es
license:
- other
multilinguality:
- unknown
size_categories:
- unknown
source_datasets:
- original
task_categories:
- summarization
task_ids: []
pretty_name: mlsum
---
# Dataset Card for GEM/mlsum
## Dataset Description
- **Homepage:** N/A
- **Repository:** https://gitlab.lip6.fr/scialom/mlsum_data/-/tree/master/MLSUM
- **Paper:** https://aclanthology.org/2020.emnlp-main.647/
- **Leaderboard:** N/A
- **Point of Contact:** Thomas Scialom
### Link to Main Data Card
You can find the main data card on the [GEM Website](https://gem-benchmark.com/data_cards/mlsum).
### Dataset Summary
MLSum is a multilingual summarization dataset crawled from different news websites. The GEM version supports the German and Spanish subset alongside specifically collected challenge sets for COVID-related articles to test out-of-domain generalization.
You can load the dataset via:
```
import datasets
data = datasets.load_dataset('GEM/mlsum')
```
The data loader can be found [here](https://huggingface.co/datasets/GEM/mlsum).
#### website
N/A
#### paper
[ACL Anthology](https://aclanthology.org/2020.emnlp-main.647/)
#### authors
Thomas Scialom, Paul-Alexis Dray, Sylvain Lamprier, Benjamin Piwowarski, Jacopo Staiano
## Dataset Overview
### Where to find the Data and its Documentation
#### Download
<!-- info: What is the link to where the original dataset is hosted? -->
<!-- scope: telescope -->
[Gitlab](https://gitlab.lip6.fr/scialom/mlsum_data/-/tree/master/MLSUM)
#### Paper
<!-- info: What is the link to the paper describing the dataset (open access preferred)? -->
<!-- scope: telescope -->
[ACL Anthology](https://aclanthology.org/2020.emnlp-main.647/)
#### BibTex
<!-- info: Provide the BibTex-formatted reference for the dataset. Please use the correct published version (ACL anthology, etc.) instead of google scholar created Bibtex. -->
<!-- scope: microscope -->
```
@inproceedings{scialom-etal-2020-mlsum,
title = "{MLSUM}: The Multilingual Summarization Corpus",
author = "Scialom, Thomas and
Dray, Paul-Alexis and
Lamprier, Sylvain and
Piwowarski, Benjamin and
Staiano, Jacopo",
booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2020.emnlp-main.647",
doi = "10.18653/v1/2020.emnlp-main.647",
pages = "8051--8067",
abstract = "We present MLSUM, the first large-scale MultiLingual SUMmarization dataset. Obtained from online newspapers, it contains 1.5M+ article/summary pairs in five different languages {--} namely, French, German, Spanish, Russian, Turkish. Together with English news articles from the popular CNN/Daily mail dataset, the collected data form a large scale multilingual dataset which can enable new research directions for the text summarization community. We report cross-lingual comparative analyses based on state-of-the-art systems. These highlight existing biases which motivate the use of a multi-lingual dataset.",
}
```
#### Contact Name
<!-- quick -->
<!-- info: If known, provide the name of at least one person the reader can contact for questions about the dataset. -->
<!-- scope: periscope -->
Thomas Scialom
#### Contact Email
<!-- info: If known, provide the email of at least one person the reader can contact for questions about the dataset. -->
<!-- scope: periscope -->
{thomas,paul-alexis,jacopo}@recital.ai, {sylvain.lamprier,benjamin.piwowarski}@lip6.fr
#### Has a Leaderboard?
<!-- info: Does the dataset have an active leaderboard? -->
<!-- scope: telescope -->
no
### Languages and Intended Use
#### Multilingual?
<!-- quick -->
<!-- info: Is the dataset multilingual? -->
<!-- scope: telescope -->
yes
#### Covered Dialects
<!-- info: What dialects are covered? Are there multiple dialects per language? -->
<!-- scope: periscope -->
There is only one dialect per language: Hochdeutsch (Standard German) for German and Castilian Spanish for Spanish.
#### Covered Languages
<!-- quick -->
<!-- info: What languages/dialects are covered in the dataset? -->
<!-- scope: telescope -->
`German`, `Spanish, Castilian`
#### Whose Language?
<!-- info: Whose language is in the dataset? -->
<!-- scope: periscope -->
The German articles are crawled from Süddeutsche Zeitung and the Spanish ones from El Pais.
#### License
<!-- quick -->
<!-- info: What is the license of the dataset? -->
<!-- scope: telescope -->
other: Other license
#### Intended Use
<!-- info: What is the intended use of the dataset? -->
<!-- scope: microscope -->
The intended use of this dataset is to augment existing datasets for English news summarization with additional languages.
#### Add. License Info
<!-- info: What is the 'other' license of the dataset? -->
<!-- scope: periscope -->
Restricted to non-commercial research purposes.
#### Primary Task
<!-- info: What primary task does the dataset support? -->
<!-- scope: telescope -->
Summarization
#### Communicative Goal
<!-- quick -->
<!-- info: Provide a short description of the communicative goal of a model trained for this task on this dataset. -->
<!-- scope: periscope -->
The speaker is required to produce a high-quality summary of news articles in the same language as the input article.
### Credit
#### Curation Organization Type(s)
<!-- info: In what kind of organization did the dataset curation happen? -->
<!-- scope: telescope -->
`other`
#### Curation Organization(s)
<!-- info: Name the organization(s). -->
<!-- scope: periscope -->
CNRS, Sorbonne Université, reciTAL
#### Dataset Creators
<!-- info: Who created the original dataset? List the people involved in collecting the dataset and their affiliation(s). -->
<!-- scope: microscope -->
Thomas Scialom, Paul-Alexis Dray, Sylvain Lamprier, Benjamin Piwowarski, Jacopo Staiano
#### Funding
<!-- info: Who funded the data creation? -->
<!-- scope: microscope -->
Funding information is not specified.
#### Who added the Dataset to GEM?
<!-- info: Who contributed to the data card and adding the dataset to GEM? List the people+affiliations involved in creating this data card and who helped integrate this dataset into GEM. -->
<!-- scope: microscope -->
The original data card was written by Pedro Henrique Martins (Instituto de Telecomunicações) and Sebastian Gehrmann (Google Research) extended and updated it to the v2 format. The COVID challenge set was created by Laura Perez-Beltrachini (University of Edinburgh). Data cleaning was done by Juan Diego Rodriguez (UT Austin).
### Dataset Structure
#### Data Fields
<!-- info: List and describe the fields present in the dataset. -->
<!-- scope: telescope -->
The data fields are:
- `text`: the source article (`string`).
- `summary`: the output summary (`string`).
- `topic`: the topic of the article (`string`).
- `url`: the article's url (`string`).
- `title`: the article's title (`string`).
- `date`: the article's date (`string`).
#### Reason for Structure
<!-- info: How was the dataset structure determined? -->
<!-- scope: microscope -->
The structure follows previously released datasets. The `topic` and `title` fields were added to enable additional tasks like title generation and topic detection.
#### How were labels chosen?
<!-- info: How were the labels chosen? -->
<!-- scope: microscope -->
They are human-written highlights or summaries scraped from the same website.
#### Example Instance
<!-- info: Provide a JSON formatted example of a typical instance in the dataset. -->
<!-- scope: periscope -->
```
{
'date': '00/01/2010',
'gem_id': 'mlsum_de-train-2',
'gem_parent_id': 'mlsum_de-train-2',
'references': [],
'target': 'Oskar Lafontaine gibt den Parteivorsitz der Linken ab - und seine Kollegen streiten, wer ihn beerben soll. sueddeutsche.de stellt die derzeit aussichtsreichsten Anwärter für Führungsaufgaben vor. Mit Vote.',
'text': 'Wenn an diesem Montag die Landesvorsitzenden der Linken über die Nachfolger der derzeitigen Chefs Lothar Bisky und Oskar Lafontaine sowie des Bundesgeschäftsführers Dietmar Bartsch beraten, geht es nicht nur darum, wer die Partei führen soll. Es geht auch um die künftige Ausrichtung und Stärke einer Partei, die vor allem von Lafontaine zusammengehalten worden war. Ihm war es schließlich vor fünf Jahren gelungen, aus der ostdeutschen PDS und der westedeutschen WASG eine Partei zu formen. Eine Partei allerdings, die zerrissen ist in Ost und West, in Regierungswillige und ewige Oppositionelle, in Realos und Ideologen, in gemäßigte und radikale Linke. Wir stellen mögliche Kandidaten vor. Stimmen Sie ab: Wen halten Sie für geeignet und wen für unfähig? Kampf um Lafontaines Erbe: Gregor Gysi Sollte überhaupt jemand die Partei alleine führen, wie es sich viele Ostdeutsche wünschen, käme dafür wohl nur der 62-jährige Gregor Gysi in Betracht. Er ist nach Lafontaine einer der bekanntesten Politiker der Linken und derzeit Fraktionsvorsitzender der Partei im Bundestag. Allerdings ist der ehemalige PDS-Vorsitzende und Rechtsanwalt nach drei Herzinfarkten gesundheitlich angeschlagen. Wahrscheinlich wäre deshalb, dass er die zerstrittene Partei nur übergangsweise führt. Doch noch ist nicht klar, ob eine Person allein die Partei führen soll oder eine Doppelspitze. Viele Linke wünschen sich ein Duo aus einem westdeutschen und einem ostdeutschen Politiker, Mann und Frau. Foto: Getty Images',
'title': 'Personaldebatte bei der Linken - Wer kommt nach Lafontaine?',
'topic': 'politik',
'url': 'https://www.sueddeutsche.de/politik/personaldebatte-bei-der-linken-wer-kommt-nach-lafontaine-1.70041'
}
```
#### Data Splits
<!-- info: Describe and name the splits in the dataset if there are more than one. -->
<!-- scope: periscope -->
The statistics of the original dataset are:
| | Dataset | Train | Validation | Test | Mean article length | Mean summary length |
| :--- | :----: | :---: | :---: | :---: | :---: | :---: |
| German | 242,982 | 220,887 |11,394 |10,701 |570.6 (words) | 30.36 (words) |
| Spanish | 290,645 | 266,367 |10,358 |13,920 |800.5 (words) |20.71 (words) |
The statistics of the cleaned version of the dataset are:
| | Dataset | Train | Validation | Test |
| :--- | :----: | :---: | :---: | :---: |
| German | 242,835 | 220,887 |11,392 |10,695 |
| Spanish | 283,228 |259,886 |9,977 |13,365 |
The COVID challenge sets have 5058 (de) and 1938 (es) examples.
#### Splitting Criteria
<!-- info: Describe any criteria for splitting the data, if used. If there are differences between the splits (e.g., if the training annotations are machine-generated and the dev and test ones are created by humans, or if different numbers of annotators contributed to each example), describe them here. -->
<!-- scope: microscope -->
The training set contains data from 2010 to 2018. Data from 2019 (~10% of the dataset) is used for validation (up to May) and testing (May-December 2019).
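The time-based split above can be sketched as a simple date check. This is a minimal illustration only: the helper name `assign_split` and the exact handling of the May boundary are assumptions, not taken from the released code.

```python
def assign_split(date_str: str) -> str:
    """Sketch of the MLSUM temporal split described above:
    train = 2010-2018, validation = early 2019 (up to May),
    test = the rest of 2019. Dates use the "dd/mm/YYYY" format
    seen in the example instance; placing May itself in the
    validation portion is an assumption."""
    _day, month, year = (int(part) for part in date_str.split("/"))
    if 2010 <= year <= 2018:
        return "train"
    if year == 2019:
        return "validation" if month <= 5 else "test"
    return "unused"
```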
#### Outliers
<!-- info: What does an outlier of the dataset in terms of length/perplexity/embedding look like? -->
<!-- scope: microscope -->
Some topics are less represented within the dataset (e.g., Financial news in German and Television in Spanish).
## Dataset in GEM
### Rationale for Inclusion in GEM
#### Why is the Dataset in GEM?
<!-- info: What does this dataset contribute toward better generation evaluation and why is it part of GEM? -->
<!-- scope: microscope -->
As the first large-scale multilingual summarization dataset, it enables evaluation of summarization models beyond English.
#### Similar Datasets
<!-- info: Do other datasets for the high level task exist? -->
<!-- scope: telescope -->
yes
#### Unique Language Coverage
<!-- info: Does this dataset cover other languages than other datasets for the same task? -->
<!-- scope: periscope -->
yes
#### Difference from other GEM datasets
<!-- info: What else sets this dataset apart from other similar datasets in GEM? -->
<!-- scope: microscope -->
In our configuration, the dataset is fully non-English.
#### Ability that the Dataset measures
<!-- info: What aspect of model ability can be measured with this dataset? -->
<!-- scope: periscope -->
Content Selection, Content Planning, Realization
### GEM-Specific Curation
#### Modified for GEM?
<!-- info: Has the GEM version of the dataset been modified in any way (data, processing, splits) from the original curated data? -->
<!-- scope: telescope -->
yes
#### GEM Modifications
<!-- info: What changes have been made to the original dataset? -->
<!-- scope: periscope -->
`data points removed`, `data points added`
#### Modification Details
<!-- info: For each of these changes, described them in more details and provided the intended purpose of the modification -->
<!-- scope: microscope -->
The modifications done to the original dataset are the following:
- Selection of 2 languages (Spanish and German) out of the dataset's 5 languages due to copyright restrictions.
- Removal of duplicate articles.
- Manual removal of article-summary pairs for which the summary is not related to the article.
- Removal of article-summary pairs written in a different language (detected using the [langdetect](https://pypi.org/project/langdetect/) library).
#### Additional Splits?
<!-- info: Does GEM provide additional splits to the dataset? -->
<!-- scope: telescope -->
yes
#### Split Information
<!-- info: Describe how the new splits were created -->
<!-- scope: periscope -->
For both selected languages (German and Spanish), we compiled time-shifted test data in the form of new articles for the second semester of 2020 with Covid19-related keywords. We collected articles from the same German and Spanish outlets as the original MLSUM datasets (El Pais and Süddeutsche Zeitung). We used the scripts provided for the re-creation of the [MLSUM datasets](https://github.com/recitalAI/MLSUM). The new challenge test set for German contains 5058 instances and the Spanish one contains 1938.
We additionally sample 500 training and validation points as additional challenge sets to measure overfitting.
#### Split Motivation
<!-- info: What aspects of the model's generation capacities were the splits created to test? -->
<!-- scope: periscope -->
Generalization to unseen topics.
### Getting Started with the Task
## Previous Results
### Previous Results
#### Measured Model Abilities
<!-- info: What aspect of model ability can be measured with this dataset? -->
<!-- scope: telescope -->
Content Selection, Content Planning, Realization
#### Metrics
<!-- info: What metrics are typically used for this task? -->
<!-- scope: periscope -->
`METEOR`, `ROUGE`, `Other: Other Metrics`
#### Other Metrics
<!-- info: Definitions of other metrics -->
<!-- scope: periscope -->
Novelty: Number of generated n-grams not included in the source articles.
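One way such a novelty score might be computed, sketched in Python. This is a common formulation (fraction of summary n-grams absent from the source); the original paper may report absolute n-gram counts rather than this ratio.

```python
def ngrams(tokens, n):
    """Set of n-grams occurring in a token sequence."""
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}


def novelty(summary: str, article: str, n: int = 2) -> float:
    """Fraction of summary n-grams not found in the source article.
    Whitespace tokenisation is an assumption made for illustration."""
    summary_ngrams = ngrams(summary.split(), n)
    article_ngrams = ngrams(article.split(), n)
    if not summary_ngrams:
        return 0.0
    return len(summary_ngrams - article_ngrams) / len(summary_ngrams)
```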
#### Proposed Evaluation
<!-- info: List and describe the purpose of the metrics and evaluation methodology (including human evaluation) that the dataset creators used when introducing this task. -->
<!-- scope: microscope -->
ROUGE and METEOR both measure n-gram overlap with a focus on recall and are standard summarization metrics. Novelty is often reported alongside them to characterize how much a model diverges from its inputs.
#### Previous results available?
<!-- info: Are previous results available? -->
<!-- scope: telescope -->
yes
#### Other Evaluation Approaches
<!-- info: What evaluation approaches have others used? -->
<!-- scope: periscope -->
The GEM benchmark results (https://gem-benchmark.com/results) report a wide range of metrics, including lexical overlap metrics but also semantic ones like BLEURT and BERTScore.
## Dataset Curation
### Original Curation
#### Original Curation Rationale
<!-- info: Original curation rationale -->
<!-- scope: telescope -->
The rationale was to create a multilingual news summarization dataset that mirrors the format of popular English datasets like XSum or CNN/DM.
#### Communicative Goal
<!-- info: What was the communicative goal? -->
<!-- scope: periscope -->
The speaker is required to produce a high-quality summary of news articles in the same language as the input article.
#### Sourced from Different Sources
<!-- info: Is the dataset aggregated from different data sources? -->
<!-- scope: telescope -->
yes
#### Source Details
<!-- info: List the sources (one per line) -->
<!-- scope: periscope -->
www.lemonde.fr
www.sueddeutsche.de
www.elpais.com
www.mk.ru
www.internethaber.com
### Language Data
#### How was Language Data Obtained?
<!-- info: How was the language data obtained? -->
<!-- scope: telescope -->
`Found`
#### Where was it found?
<!-- info: If found, where from? -->
<!-- scope: telescope -->
`Multiple websites`
#### Language Producers
<!-- info: What further information do we have on the language producers? -->
<!-- scope: microscope -->
The language producers are professional journalists.
#### Topics Covered
<!-- info: Does the language in the dataset focus on specific topics? How would you describe them? -->
<!-- scope: periscope -->
Four of the five original languages report their topics (all except Turkish), and the distributions differ between sources. The dominant topics in German are Politik (politics), Sport, and Wirtschaft (economy). The dominant topics in Spanish are actualidad (current news) and opinion. French and Russian differ as well, but we omit these languages in the GEM version.
#### Data Validation
<!-- info: Was the text validated by a different worker or a data curator? -->
<!-- scope: telescope -->
not validated
#### Was Data Filtered?
<!-- info: Were text instances selected or filtered? -->
<!-- scope: telescope -->
algorithmically
#### Filter Criteria
<!-- info: What were the selection criteria? -->
<!-- scope: microscope -->
In the original dataset, only one filter was applied: all articles shorter than 50 words or with summaries shorter than 10 words are discarded.
The GEM version additionally applies a langID filter to ensure that articles are in the correct language.
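The length filter above can be sketched as follows. This is a minimal illustration: whitespace tokenisation and the helper name `keep_pair` are assumptions, and the separate langID step (e.g., via the langdetect library) is not shown.

```python
def keep_pair(article: str, summary: str) -> bool:
    """Sketch of the original MLSUM length filter: discard pairs whose
    article has fewer than 50 words or whose summary has fewer than
    10 words. Word counting by whitespace split is an assumption."""
    return len(article.split()) >= 50 and len(summary.split()) >= 10
```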
### Structured Annotations
#### Additional Annotations?
<!-- quick -->
<!-- info: Does the dataset have additional annotations for each instance? -->
<!-- scope: telescope -->
none
#### Annotation Service?
<!-- info: Was an annotation service used? -->
<!-- scope: telescope -->
no
### Consent
#### Any Consent Policy?
<!-- info: Was there a consent policy involved when gathering the data? -->
<!-- scope: telescope -->
no
#### Justification for Using the Data
<!-- info: If not, what is the justification for reusing the data? -->
<!-- scope: microscope -->
The copyright remains with the original data creators and the usage permission is restricted to non-commercial uses.
### Private Identifying Information (PII)
#### Contains PII?
<!-- quick -->
<!-- info: Does the source language data likely contain Personal Identifying Information about the data creators or subjects? -->
<!-- scope: telescope -->
yes/very likely
#### Categories of PII
<!-- info: What categories of PII are present or suspected in the data? -->
<!-- scope: periscope -->
`sensitive information`, `generic PII`
#### Any PII Identification?
<!-- info: Did the curators use any automatic/manual method to identify PII in the dataset? -->
<!-- scope: periscope -->
no identification
### Maintenance
#### Any Maintenance Plan?
<!-- info: Does the original dataset have a maintenance plan? -->
<!-- scope: telescope -->
no
## Broader Social Context
### Previous Work on the Social Impact of the Dataset
#### Usage of Models based on the Data
<!-- info: Are you aware of cases where models trained on the task featured in this dataset or related tasks have been used in automated systems? -->
<!-- scope: telescope -->
no
### Impact on Under-Served Communities
#### Addresses needs of underserved Communities?
<!-- info: Does this dataset address the needs of communities that are traditionally underserved in language technology, and particularly language generation technology? Communities may be underserved for example because their language, language variety, or social or geographical context is underrepresented in NLP and NLG resources (datasets and models). -->
<!-- scope: telescope -->
no
### Discussion of Biases
#### Any Documented Social Biases?
<!-- info: Are there documented social biases in the dataset? Biases in this context are variations in the ways members of different social categories are represented that can have harmful downstream consequences for members of the more disadvantaged group. -->
<!-- scope: telescope -->
no
|
allegro/klej-nkjp-ner | 2021-11-29T19:14:56.000Z | [
"region:us"
] | allegro | null | null | null | 0 | 33 | Entry not found |
eugenesiow/Set14 | 2022-10-21T04:00:31.000Z | [
"task_categories:other",
"annotations_creators:machine-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:unknown",
"source_datasets:original",
"license:other",
"other-image-super-resolution",
"region:us"
] | eugenesiow | Set14 is an evaluation dataset with 14 RGB images for the image super resolution task. | @inproceedings{zeyde2010single,
title={On single image scale-up using sparse-representations},
author={Zeyde, Roman and Elad, Michael and Protter, Matan},
booktitle={International conference on curves and surfaces},
pages={711--730},
year={2010},
organization={Springer}
} | null | 0 | 33 | ---
annotations_creators:
- machine-generated
language_creators:
- found
language: []
license:
- other
multilinguality:
- monolingual
size_categories:
- unknown
source_datasets:
- original
task_categories:
- other
task_ids: []
pretty_name: Set14
tags:
- other-image-super-resolution
---
# Dataset Card for Set14
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage**: https://sites.google.com/site/romanzeyde/research-interests
- **Repository**: https://huggingface.co/datasets/eugenesiow/Set14
- **Paper**: http://www.cs.technion.ac.il/users/wwwb/cgi-bin/tr-get.cgi/2010/CS/CS-2010-12.pdf
- **Leaderboard**: https://github.com/eugenesiow/super-image#scale-x2
### Dataset Summary
Set14 is an evaluation dataset with 14 RGB images for the image super resolution task. It was first used as the test set of the paper "On single image scale-up using sparse-representations" by [Zeyde et al. (2010)](http://www.cs.technion.ac.il/users/wwwb/cgi-bin/tr-get.cgi/2010/CS/CS-2010-12.pdf).
Install with `pip`:
```bash
pip install datasets super-image
```
Evaluate a model with the [`super-image`](https://github.com/eugenesiow/super-image) library:
```python
from datasets import load_dataset
from super_image import EdsrModel
from super_image.data import EvalDataset, EvalMetrics
dataset = load_dataset('eugenesiow/Set14', 'bicubic_x2', split='validation')
eval_dataset = EvalDataset(dataset)
model = EdsrModel.from_pretrained('eugenesiow/edsr-base', scale=2)
EvalMetrics().evaluate(model, eval_dataset)
```
### Supported Tasks and Leaderboards
The dataset is commonly used for evaluation of the `image-super-resolution` task.
Unofficial [`super-image`](https://github.com/eugenesiow/super-image) leaderboard for:
- [Scale 2](https://github.com/eugenesiow/super-image#scale-x2)
- [Scale 3](https://github.com/eugenesiow/super-image#scale-x3)
- [Scale 4](https://github.com/eugenesiow/super-image#scale-x4)
- [Scale 8](https://github.com/eugenesiow/super-image#scale-x8)
### Languages
Not applicable.
## Dataset Structure
### Data Instances
An example of `validation` for `bicubic_x2` looks as follows.
```
{
"hr": "/.cache/huggingface/datasets/downloads/extracted/Set14_HR/baboon.png",
"lr": "/.cache/huggingface/datasets/downloads/extracted/Set14_LR_x2/baboon.png"
}
```
### Data Fields
The data fields are the same among all splits.
- `hr`: a `string` to the path of the High Resolution (HR) `.png` image.
- `lr`: a `string` to the path of the Low Resolution (LR) `.png` image.
### Data Splits
| name |validation|
|-------|---:|
|bicubic_x2|14|
|bicubic_x3|14|
|bicubic_x4|14|
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
No annotations.
#### Who are the annotators?
No annotators.
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
- **Original Authors**: [Zeyde et al.](http://www.cs.technion.ac.il/users/wwwb/cgi-bin/tr-get.cgi/2010/CS/CS-2010-12.pdf)
### Licensing Information
Academic use only.
### Citation Information
```bibtex
@inproceedings{zeyde2010single,
title={On single image scale-up using sparse-representations},
author={Zeyde, Roman and Elad, Michael and Protter, Matan},
booktitle={International conference on curves and surfaces},
pages={711--730},
year={2010},
organization={Springer}
}
```
### Contributions
Thanks to [@eugenesiow](https://github.com/eugenesiow) for adding this dataset.
|
GEM/xwikis | 2023-02-22T13:05:19.000Z | [
"task_categories:summarization",
"annotations_creators:found",
"language_creators:unknown",
"multilinguality:unknown",
"size_categories:unknown",
"source_datasets:original",
"language:de",
"language:en",
"language:fr",
"language:cs",
"license:cc-by-sa-4.0",
"arxiv:2202.09583",
"region:us"
] | GEM | The XWikis Corpus (Perez-Beltrachini and Lapata, 2021) provides datasets with different language pairs and directions for cross-lingual abstractive document summarisation. This current version includes four languages: English, German, French, and Czech. The dataset is derived from Wikipedia. It is based on the observation that for a Wikipedia title, the lead section provides an overview conveying salient information, while the body provides detailed information. It thus assumes the body and lead paragraph as a document-summary pair. Furthermore, as a Wikipedia title can be associated with Wikipedia articles in various languages, 1) Wikipedia’s Interlanguage Links are used to find titles across languages and 2) given any two related Wikipedia titles, e.g., Huile d’Olive (French) and Olive Oil (English), the lead paragraph from one title is paired with the body of the other to derive cross-lingual pairs. | @inproceedings{perez2021models,
title={Models and Datasets for Cross-Lingual Summarisation},
author={Perez-Beltrachini, Laura and Lapata, Mirella},
booktitle={Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing},
pages={9408--9423},
year={2021}
} | null | 1 | 33 | ---
annotations_creators:
- found
language_creators:
- unknown
language:
- de
- en
- fr
- cs
license:
- cc-by-sa-4.0
multilinguality:
- unknown
size_categories:
- unknown
source_datasets:
- original
task_categories:
- summarization
task_ids: []
pretty_name: xwikis
---
# Dataset Card for GEM/xwikis
## Dataset Description
- **Homepage:** https://github.com/lauhaide/clads
- **Repository:** [Needs More Information]
- **Paper:** https://arxiv.org/abs/2202.09583
- **Leaderboard:** N/A
- **Point of Contact:** Laura Perez-Beltrachini
### Link to Main Data Card
You can find the main data card on the [GEM Website](https://gem-benchmark.com/data_cards/xwikis).
### Dataset Summary
The XWikis Corpus provides datasets with different language pairs and directions for cross-lingual and multi-lingual abstractive document summarisation.
You can load the dataset via:
```
import datasets
data = datasets.load_dataset('GEM/xwikis')
```
The data loader can be found [here](https://huggingface.co/datasets/GEM/xwikis).
#### website
[Github](https://github.com/lauhaide/clads)
#### paper
https://arxiv.org/abs/2202.09583
#### authors
Laura Perez-Beltrachini (University of Edinburgh)
## Dataset Overview
### Where to find the Data and its Documentation
#### Webpage
<!-- info: What is the webpage for the dataset (if it exists)? -->
<!-- scope: telescope -->
[Github](https://github.com/lauhaide/clads)
#### Paper
<!-- info: What is the link to the paper describing the dataset (open access preferred)? -->
<!-- scope: telescope -->
https://arxiv.org/abs/2202.09583
#### BibTex
<!-- info: Provide the BibTex-formatted reference for the dataset. Please use the correct published version (ACL anthology, etc.) instead of google scholar created Bibtex. -->
<!-- scope: microscope -->
```
@InProceedings{clads-emnlp,
author = "Laura Perez-Beltrachini and Mirella Lapata",
title = "Models and Datasets for Cross-Lingual Summarisation",
booktitle = "Proceedings of The 2021 Conference on Empirical Methods in Natural Language Processing ",
year = "2021",
address = "Punta Cana, Dominican Republic",
}
```
#### Contact Name
<!-- quick -->
<!-- info: If known, provide the name of at least one person the reader can contact for questions about the dataset. -->
<!-- scope: periscope -->
Laura Perez-Beltrachini
#### Contact Email
<!-- info: If known, provide the email of at least one person the reader can contact for questions about the dataset. -->
<!-- scope: periscope -->
lperez@ed.ac.uk
#### Has a Leaderboard?
<!-- info: Does the dataset have an active leaderboard? -->
<!-- scope: telescope -->
no
### Languages and Intended Use
#### Multilingual?
<!-- quick -->
<!-- info: Is the dataset multilingual? -->
<!-- scope: telescope -->
yes
#### Covered Languages
<!-- quick -->
<!-- info: What languages/dialects are covered in the dataset? -->
<!-- scope: telescope -->
`German`, `English`, `French`, `Czech`, `Chinese`
#### License
<!-- quick -->
<!-- info: What is the license of the dataset? -->
<!-- scope: telescope -->
cc-by-sa-4.0: Creative Commons Attribution Share Alike 4.0 International
#### Intended Use
<!-- info: What is the intended use of the dataset? -->
<!-- scope: microscope -->
Cross-lingual and Multi-lingual single long input document abstractive summarisation.
#### Primary Task
<!-- info: What primary task does the dataset support? -->
<!-- scope: telescope -->
Summarization
#### Communicative Goal
<!-- quick -->
<!-- info: Provide a short description of the communicative goal of a model trained for this task on this dataset. -->
<!-- scope: periscope -->
Entity descriptive summarisation, that is, generate a summary that conveys the most salient facts of a document related to a given entity.
### Credit
#### Curation Organization Type(s)
<!-- info: In what kind of organization did the dataset curation happen? -->
<!-- scope: telescope -->
`academic`
#### Dataset Creators
<!-- info: Who created the original dataset? List the people involved in collecting the dataset and their affiliation(s). -->
<!-- scope: microscope -->
Laura Perez-Beltrachini (University of Edinburgh)
#### Who added the Dataset to GEM?
<!-- info: Who contributed to the data card and adding the dataset to GEM? List the people+affiliations involved in creating this data card and who helped integrate this dataset into GEM. -->
<!-- scope: microscope -->
Laura Perez-Beltrachini (University of Edinburgh) and Ronald Cardenas (University of Edinburgh)
### Dataset Structure
#### Data Splits
<!-- info: Describe and name the splits in the dataset if there are more than one. -->
<!-- scope: periscope -->
For each language pair and direction there exists a train/valid/test split.
The test split is a sample of size 7k from the intersection of titles existing in the four languages (cs,fr,en,de).
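The test selection described above — intersecting titles across the four languages and sampling 7k of them — might look like the following sketch. The function name, seed, and exact sampling procedure are assumptions made for illustration.

```python
import random


def sample_test_titles(titles_by_lang, k=7000, seed=0):
    """Sketch of the XWikis test-split selection: take the intersection
    of titles available in all languages and sample k of them.
    Sorting before sampling makes the draw reproducible for a fixed
    seed; the actual procedure used by the authors may differ."""
    common = set.intersection(*(set(t) for t in titles_by_lang.values()))
    rng = random.Random(seed)
    return rng.sample(sorted(common), min(k, len(common)))
```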
Train/valid are randomly split.
## Dataset in GEM
### Rationale for Inclusion in GEM
#### Similar Datasets
<!-- info: Do other datasets for the high level task exist? -->
<!-- scope: telescope -->
no
### GEM-Specific Curation
#### Modified for GEM?
<!-- info: Has the GEM version of the dataset been modified in any way (data, processing, splits) from the original curated data? -->
<!-- scope: telescope -->
no
#### Additional Splits?
<!-- info: Does GEM provide additional splits to the dataset? -->
<!-- scope: telescope -->
no
### Getting Started with the Task
## Previous Results
### Previous Results
#### Measured Model Abilities
<!-- info: What aspect of model ability can be measured with this dataset? -->
<!-- scope: telescope -->
- identification of entity salient information
- translation
- multi-linguality
- cross-lingual transfer, zero-shot, few-shot
#### Metrics
<!-- info: What metrics are typically used for this task? -->
<!-- scope: periscope -->
`ROUGE`
#### Previous results available?
<!-- info: Are previous results available? -->
<!-- scope: telescope -->
yes
#### Other Evaluation Approaches
<!-- info: What evaluation approaches have others used? -->
<!-- scope: periscope -->
ROUGE-1/2/L
## Dataset Curation
### Original Curation
#### Sourced from Different Sources
<!-- info: Is the dataset aggregated from different data sources? -->
<!-- scope: telescope -->
no
### Language Data
#### How was Language Data Obtained?
<!-- info: How was the language data obtained? -->
<!-- scope: telescope -->
`Found`
#### Where was it found?
<!-- info: If found, where from? -->
<!-- scope: telescope -->
`Single website`
#### Data Validation
<!-- info: Was the text validated by a different worker or a data curator? -->
<!-- scope: telescope -->
other
#### Was Data Filtered?
<!-- info: Were text instances selected or filtered? -->
<!-- scope: telescope -->
not filtered
### Structured Annotations
#### Additional Annotations?
<!-- quick -->
<!-- info: Does the dataset have additional annotations for each instance? -->
<!-- scope: telescope -->
found
#### Annotation Service?
<!-- info: Was an annotation service used? -->
<!-- scope: telescope -->
no
#### Annotation Values
<!-- info: Purpose and values for each annotation -->
<!-- scope: microscope -->
The input documents have section structure information.
#### Any Quality Control?
<!-- info: Quality control measures? -->
<!-- scope: telescope -->
validated by another rater
#### Quality Control Details
<!-- info: Describe the quality control measures that were taken. -->
<!-- scope: microscope -->
Bilingual annotators assessed the content overlap of source documents and target summaries.
### Consent
#### Any Consent Policy?
<!-- info: Was there a consent policy involved when gathering the data? -->
<!-- scope: telescope -->
no
### Private Identifying Information (PII)
#### Contains PII?
<!-- quick -->
<!-- info: Does the source language data likely contain Personal Identifying Information about the data creators or subjects? -->
<!-- scope: telescope -->
no PII
### Maintenance
#### Any Maintenance Plan?
<!-- info: Does the original dataset have a maintenance plan? -->
<!-- scope: telescope -->
no
## Broader Social Context
### Previous Work on the Social Impact of the Dataset
#### Usage of Models based on the Data
<!-- info: Are you aware of cases where models trained on the task featured in this dataset ore related tasks have been used in automated systems? -->
<!-- scope: telescope -->
no
### Impact on Under-Served Communities
#### Addresses needs of underserved Communities?
<!-- info: Does this dataset address the needs of communities that are traditionally underserved in language technology, and particularly language generation technology? Communities may be underserved for exemple because their language, language variety, or social or geographical context is underepresented in NLP and NLG resources (datasets and models). -->
<!-- scope: telescope -->
no
### Discussion of Biases
#### Any Documented Social Biases?
<!-- info: Are there documented social biases in the dataset? Biases in this context are variations in the ways members of different social categories are represented that can have harmful downstream consequences for members of the more disadvantaged group. -->
<!-- scope: telescope -->
no
## Considerations for Using the Data
### PII Risks and Liability
### Licenses
#### Copyright Restrictions on the Dataset
<!-- info: Based on your answers in the Intended Use part of the Data Overview Section, which of the following best describe the copyright and licensing status of the dataset? -->
<!-- scope: periscope -->
`public domain`
#### Copyright Restrictions on the Language Data
<!-- info: Based on your answers in the Language part of the Data Curation Section, which of the following best describe the copyright and licensing status of the underlying language data? -->
<!-- scope: periscope -->
`public domain`
### Known Technical Limitations
|
alexfabbri/answersumm | 2022-12-14T20:18:28.000Z | [
"task_categories:summarization",
"annotations_creators:found",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:en",
"license:cc-by-sa-4.0",
"query-based-summarization",
"arxiv:2111.06474",
"region:us"
] | alexfabbri | null | null | null | 3 | 33 | ---
annotations_creators:
- found
language_creators:
- found
language:
- en
license:
- cc-by-sa-4.0
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- summarization
task_ids: []
tags:
- query-based-summarization
---
# Dataset Card for answersumm
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-instances)
- [Data Splits](#data-instances)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** https://github.com/Alex-Fabbri/AnswerSumm
- **Paper:** [AnswerSumm: A Manually-Curated Dataset and Pipeline for Answer Summarization](https://arxiv.org/abs/2111.06474)
- **Point of Contact:** [Alex Fabbri](mailto:afabbri@salesforce.com)
### Dataset Summary
The AnswerSumm dataset is an English-language dataset of questions and answers collected from a [StackExchange data dump](https://archive.org/details/stackexchange). The dataset was created to support the task of query-focused answer summarization with an emphasis on multi-perspective answers.
The dataset consists of over 4200 such question-answer threads annotated by professional linguists and includes over 8700 summaries. We decompose the task into several annotation stages: sentence selection, sentence clustering, cluster summarization, and overall summarization. For each thread, the annotator writes two summaries: for the first, the annotator marks the sentences to be included in the final summary and is instructed to stay close to the wording of those sentences rather than abstracting; for the second, the annotator paraphrases and condenses the cluster summaries. We have multiple annotators for a subset of the examples in the test set.
### Languages
The text in the dataset is in English.
## Dataset Structure
### Data Instances
A data point comprises a question with a `title` field containing the overview of the question and a `question` that elaborates on the title. The answers are sentence tokenized and contain relevance labels, labels for inclusion in the final summary, and cluster labels. We include cluster summaries, overall summaries, and additional metadata.
An example from the AnswerSumm test set looks as follows:
```json
{
  "example_id": "9_24",
  "annotator_id": [1],
  "question": {
      "author": "gaming.stackexchange.com/users/11/Jeffrey",
      "forum": "gaming.stackexchange.com",
      "link": "gaming.stackexchange.com/questions/1",
      "question": "Now that the Engineer update has come, there will be lots of Engineers building up everywhere. How should this best be handled?",
      "question_tags": "\<team-fortress-2\>",
      "title": "What is a good strategy to deal with lots of engineers turtling on the other team?"
  },
  "answers": [
      {
        "answer_details": {
            "author": "gaming.stackexchange.com/users/44/Corv1nus",
            "score": 49
        },
        "sents": [
            {
              "text": "Lots of medics with lots of ubers on high-damage-dealing classes.",
              "label": [0],
              "label_summ": [0],
              "cluster_id": [[-1]]
            },
            ...
        ]
      },
      ...
  ],
  "summaries": [
      [
        "Demomen usually work best against a sentry farm. Heavies or pyros can also be effective. Medics should be in the frontline to absorb the shock. Build a teleporter to help your team through.",
        "Demomen are best against a sentry farm. Heavies or pyros can also be effective. The medic should lead the uber combo. ..."
      ]
  ],
  "cluster_summaries": [
      "Demomen are best against a sentry farm.",
      "Heavies or pyros can also be effective.",
      ...
  ]
}
```
### Data Fields
- question: contains metadata about the question and forum
- question: the body of the question post
- title: the title of the question post
- question_tags: user-provided question tags
- link: link to the original question
- author: link to the author's user page (as requested by StackExchange's attribution policy)
- answers: list of sentence-tokenized answers
- answer_details: dictionary consisting of link to answer author's user page (author) and community-assigned score (score)
- sents: sentences that compose the answer
- text: the sentence text
- label: a list (to generalize to multi-annotator scenarios) of whether the sentence is labeled as relevant or not for answering the question.
- label_summ: a list of whether the sentence was used to write the first annotator-created summary (that is the first summary in `summaries`)
- cluster_id: a list of lists (potentially multiple annotators and a sentence can be in potentially multiple clusters) of the clusters a sentence belongs to. -1 implies no cluster. This label can be used to aggregate sentences into clusters across answers.
- summaries: list of lists of summaries. Each annotator wrote two summaries. The first in the list is the summary for which the annotator was told to mark sentences relevant for inclusion and then closely use the words of these sentences, while for the second summary the annotator was asked to paraphrase and condense the cluster summaries but was not asked to reduce abstraction.
- annotator_id: a list of the ids of the annotator(s) who completed all tasks related to that thread.
- mismatch_info: a dict of any issues in processing the excel files on which annotations were completed.
- rel_sent_not_in_cluster: list of booleans indicating whether there are sentences that are labeled as relevant but were not included in a cluster.
- cluster_sents_not_matched: list of sentences that were found in a cluster but which our processing script didn't automatically match to sentences in the source answers. If cluster summarization is of interest to the user you may want to process these examples separately using clusters_orig.
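As a sketch of how the fields above fit together, the snippet below groups labeled sentences into their annotated clusters for a single example. The record here is a hand-made miniature stand-in following the schema described above, not an actual dataset instance:

```python
from collections import defaultdict

# Hypothetical miniature record following the schema described above.
example = {
    "answers": [
        {"sents": [
            {"text": "Demomen counter sentry farms.", "label": [1], "cluster_id": [[0]]},
            {"text": "Off-topic remark.", "label": [0], "cluster_id": [[-1]]},
        ]},
        {"sents": [
            {"text": "Heavies also work well.", "label": [1], "cluster_id": [[1]]},
        ]},
    ]
}

def clusters_for_annotator(record, annotator_idx=0):
    """Group sentence texts by cluster id across answers (-1 means 'no cluster')."""
    clusters = defaultdict(list)
    for answer in record["answers"]:
        for sent in answer["sents"]:
            for cid in sent["cluster_id"][annotator_idx]:
                if cid != -1:
                    clusters[cid].append(sent["text"])
    return dict(clusters)

print(clusters_for_annotator(example))
# {0: ['Demomen counter sentry farms.'], 1: ['Heavies also work well.']}
```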
### Data Splits
The data is split into training, validation, and test sets using stratified sampling on the source forums. There are 2783, 500, and 1000 train/validation/test threads, respectively.
## Dataset Creation
### Curation Rationale
AnswerSumm was built to provide a testbed for query-focused summarization of multi-perspective answers. The data collection was designed to tackle multiple subtasks including sentence selection, clustering, cluster summarization, and overall summarization.
### Source Data
#### Initial Data Collection and Normalization
The data was obtained by filtering examples based on a whitelist of StackExchange forums that we believed could be summarized by a lay person. We asked annotators to remove examples that required technical knowledge or additional context beyond what was present in the answers.
#### Who are the source language producers?
The language producers are the users of the StackExchange forums sampled.
### Annotations
#### Annotation process
Please see our [paper](https://arxiv.org/pdf/2111.06474.pdf) for additional annotation details. We began with a pre-pilot of 50 examples, followed by a pilot of 500 and a final annotation of 5000 examples. This release contains the results of the final data collection. We will release the instructions used in data collection.
#### Who are the annotators?
The annotators are professional linguists who were obtained through an internal contractor.
### Personal and Sensitive Information
We did not anonymize the data. We followed the specifications from StackExchange [here](https://archive.org/details/stackexchange) to include author information.
## Considerations for Using the Data
### Social Impact of Dataset
The purpose of this dataset is to help develop systems that automatically summarize multi-perspective answers. A system that succeeds at this task would be able to summarize many perspectives present in an answer and not limit itself to a single perspective.
### Discussion of Biases
While StackExchange allows for the exchange of information and ideas, hate and harassment may exist on this site. While our annotators did not flag examples in this process, we encourage users of the dataset to reach out with concerns.
We also note that this dataset is limited in its monolingual coverage.
## Additional Information
### Dataset Curators
The dataset was collected by Alex Fabbri, Xiaojian Wu, Srini Iyer, Haoran Li, and Mona Diab during work done at Facebook.
### Licensing Information
The data is released under cc-by-sa 4.0 following the original StackExchange [release](https://archive.org/details/stackexchange).
### Citation Information
```bibtex
@misc{fabbri-etal-2022-answersumm,
title={AnswerSumm: A Manually-Curated Dataset and Pipeline for Answer Summarization},
author={Alexander R. Fabbri and Xiaojian Wu and Srini Iyer and Haoran Li and Mona Diab },
year={2022},
eprint={2111.06474},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2111.06474}
}
```
|
iohadrubin/mini_xsum | 2022-06-15T09:58:01.000Z | [
"region:us"
] | iohadrubin | null | null | null | 0 | 33 | Entry not found |
tner/tweetner7 | 2022-11-27T18:50:28.000Z | [
"task_categories:token-classification",
"task_ids:named-entity-recognition",
"multilinguality:monolingual",
"size_categories:1k<10K",
"language:en",
"license:other",
"arxiv:2210.03797",
"region:us"
] | tner | [TweetNER7](TBA) | TBA | null | 1 | 33 | ---
language:
- en
license:
- other
multilinguality:
- monolingual
size_categories:
- 1k<10K
task_categories:
- token-classification
task_ids:
- named-entity-recognition
pretty_name: TweetNER7
---
# Dataset Card for "tner/tweetner7"
## Dataset Description
- **Repository:** [https://github.com/asahi417/tner/tree/master/examples/tweetner7_paper](https://github.com/asahi417/tner/tree/master/examples/tweetner7_paper)
- **Paper:** [https://arxiv.org/abs/2210.03797](https://arxiv.org/abs/2210.03797)
- **Dataset:** TweetNER7
- **Domain:** Twitter
- **Number of Entity:** 7
### Dataset Summary
This is the official repository of TweetNER7 (["Named Entity Recognition in Twitter:
A Dataset and Analysis on Short-Term Temporal Shifts, AACL main conference 2022"](https://arxiv.org/abs/2210.03797)), an NER dataset on Twitter with 7 entity labels. Each instance of TweetNER7 comes with a timestamp which distributes from September 2019 to August 2021.
The tweet collection used in TweetNER7 is the same as the one used in [TweetTopic](https://huggingface.co/datasets/cardiffnlp/tweet_topic_multi).
The dataset is integrated in [TweetNLP](https://tweetnlp.org/) too.
- Entity Types: `corporation`, `creative_work`, `event`, `group`, `location`, `product`, `person`
### Preprocessing
We pre-process tweets before the annotation to normalize some artifacts, converting URLs into a special token `{{URL}}` and non-verified usernames into `{{USERNAME}}`.
For verified usernames, we keep the account name but wrap it with the special symbols `{@` and `@}`.
For example, a tweet
```
Get the all-analog Classic Vinyl Edition
of "Takin' Off" Album from @herbiehancock
via @bluenoterecords link below:
http://bluenote.lnk.to/AlbumOfTheWeek
```
is transformed into the following text.
```
Get the all-analog Classic Vinyl Edition
of "Takin' Off" Album from {@herbiehancock@}
via {@bluenoterecords@} link below: {{URL}}
```
A simple function to format tweet follows below.
```python
import re
from urlextract import URLExtract
extractor = URLExtract()
def format_tweet(tweet):
# mask web urls
urls = extractor.find_urls(tweet)
for url in urls:
tweet = tweet.replace(url, "{{URL}}")
# format twitter account
tweet = re.sub(r"\b(\s*)(@[\S]+)\b", r'\1{\2@}', tweet)
return tweet
target = """Get the all-analog Classic Vinyl Edition of "Takin' Off" Album from @herbiehancock via @bluenoterecords link below: http://bluenote.lnk.to/AlbumOfTheWeek"""
target_format = format_tweet(target)
print(target_format)
'Get the all-analog Classic Vinyl Edition of "Takin\' Off" Album from {@herbiehancock@} via {@bluenoterecords@} link below: {{URL}}'
```
We ask annotators to ignore those special tokens but label the verified users' mentions.
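The simple function above wraps every mention in `{@...@}`; distinguishing verified from non-verified accounts additionally requires a list of verified handles (normally obtained via the Twitter API). A sketch of that fuller normalization, with a hand-picked `verified` set standing in for the real lookup:

```python
import re

def mask_usernames(tweet, verified):
    """Replace non-verified @mentions with {{USERNAME}}; wrap verified ones in {@...@}."""
    def repl(match):
        handle = match.group(0)
        # `verified` is an assumed, caller-supplied set of verified account names.
        return "{" + handle + "@}" if handle.lstrip("@") in verified else "{{USERNAME}}"
    return re.sub(r"@\w+", repl, tweet)

verified = {"herbiehancock"}  # stand-in for a real verified-account lookup
print(mask_usernames("Thanks @herbiehancock and @some_random_fan!", verified))
# Thanks {@herbiehancock@} and {{USERNAME}}!
```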
### Data Split
| split | number of instances | description |
|:------------------|------:|------:|
| train_2020 | 4616 | training dataset from September 2019 to August 2020 |
| train_2021 | 2495 | training dataset from September 2020 to August 2021 |
| train_all | 7111 | combined training dataset of `train_2020` and `train_2021` |
| validation_2020 | 576 | validation dataset from September 2019 to August 2020 |
| validation_2021 | 310 | validation dataset from September 2020 to August 2021 |
| test_2020 | 576 | test dataset from September 2019 to August 2020 |
| test_2021 | 2807 | test dataset from September 2020 to August 2021 |
| train_random | 4616 | randomly sampled training dataset with the same size as `train_2020` from `train_all` |
| validation_random | 576 | randomly sampled validation dataset with the same size as `validation_2020` from `validation_all` |
| extra_2020 | 87880 | extra tweets without annotations from September 2019 to August 2020 |
| extra_2021 | 93594 | extra tweets without annotations from September 2020 to August 2021 |
For the temporal-shift setting, the model should be trained on `train_2020` with `validation_2020` and evaluated on `test_2021`.
In general, the model would be trained on `train_all`, the most representative training set, with `validation_2021`, and evaluated on `test_2021`.
## Dataset Structure
### Data Instances
An example of `train` looks as follows.
```
{
'tokens': ['Morning', '5km', 'run', 'with', '{{USERNAME}}', 'for', 'breast', 'cancer', 'awareness', '#', 'pinkoctober', '#', 'breastcancerawareness', '#', 'zalorafit', '#', 'zalorafitxbnwrc', '@', 'The', 'Central', 'Park', ',', 'Desa', 'Parkcity', '{{URL}}'],
'tags': [14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 2, 14, 2, 14, 14, 14, 14, 14, 14, 4, 11, 11, 11, 11, 14],
'id': '1183344337016381440',
'date': '2019-10-13'
}
```
### Label ID
The label2id dictionary can be found at [here](https://huggingface.co/datasets/tner/tweetner7/raw/main/dataset/label.json).
```python
{
"B-corporation": 0,
"B-creative_work": 1,
"B-event": 2,
"B-group": 3,
"B-location": 4,
"B-person": 5,
"B-product": 6,
"I-corporation": 7,
"I-creative_work": 8,
"I-event": 9,
"I-group": 10,
"I-location": 11,
"I-person": 12,
"I-product": 13,
"O": 14
}
```
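With this mapping, the integer `tags` of an instance can be decoded back to label strings. The snippet below is a small illustration using the location span from the instance shown earlier, with the dictionary copied inline rather than fetched from the URL:

```python
label2id = {
    "B-corporation": 0, "B-creative_work": 1, "B-event": 2, "B-group": 3,
    "B-location": 4, "B-person": 5, "B-product": 6,
    "I-corporation": 7, "I-creative_work": 8, "I-event": 9, "I-group": 10,
    "I-location": 11, "I-person": 12, "I-product": 13, "O": 14,
}
id2label = {i: label for label, i in label2id.items()}

# Sub-sequence of the example instance shown above (the venue mention).
tokens = ["Central", "Park", ",", "Desa", "Parkcity"]
tags = [4, 11, 11, 11, 11]
labels = [id2label[t] for t in tags]
print(list(zip(tokens, labels)))
# [('Central', 'B-location'), ('Park', 'I-location'), (',', 'I-location'), ('Desa', 'I-location'), ('Parkcity', 'I-location')]
```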
## Models
See full evaluation metrics [here](https://github.com/asahi417/tner/blob/master/MODEL_CARD.md#models-for-tweetner7).
### Main Models
| Model (link) | Data | Language Model | Micro F1 (2021) | Macro F1 (2021) |
|:--------------------------------------------------------------------------------------------------------------------------------------------|:--------------------------------------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------|------------------:|------------------:|
| [`tner/roberta-large-tweetner7-all`](https://huggingface.co/tner/roberta-large-tweetner7-all) | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7) | [`roberta-large`](https://huggingface.co/roberta-large) | 65.75 | 61.25 |
| [`tner/roberta-base-tweetner7-all`](https://huggingface.co/tner/roberta-base-tweetner7-all) | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7) | [`roberta-base`](https://huggingface.co/roberta-base) | 65.16 | 60.81 |
| [`tner/twitter-roberta-base-2019-90m-tweetner7-all`](https://huggingface.co/tner/twitter-roberta-base-2019-90m-tweetner7-all) | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7) | [`cardiffnlp/twitter-roberta-base-2019-90m`](https://huggingface.co/cardiffnlp/twitter-roberta-base-2019-90m) | 65.68 | 61 |
| [`tner/twitter-roberta-base-dec2020-tweetner7-all`](https://huggingface.co/tner/twitter-roberta-base-dec2020-tweetner7-all) | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7) | [`cardiffnlp/twitter-roberta-base-dec2020`](https://huggingface.co/cardiffnlp/twitter-roberta-base-dec2020) | 65.26 | 60.7 |
| [`tner/bertweet-large-tweetner7-all`](https://huggingface.co/tner/bertweet-large-tweetner7-all) | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7) | [`vinai/bertweet-large`](https://huggingface.co/vinai/bertweet-large) | 66.46 | 61.87 |
| [`tner/bertweet-base-tweetner7-all`](https://huggingface.co/tner/bertweet-base-tweetner7-all) | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7) | [`vinai/bertweet-base`](https://huggingface.co/vinai/bertweet-base) | 65.36 | 60.52 |
| [`tner/bert-large-tweetner7-all`](https://huggingface.co/tner/bert-large-tweetner7-all) | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7) | [`bert-large`](https://huggingface.co/bert-large) | 63.58 | 59 |
| [`tner/bert-base-tweetner7-all`](https://huggingface.co/tner/bert-base-tweetner7-all) | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7) | [`bert-base`](https://huggingface.co/bert-base) | 62.3 | 57.59 |
| [`tner/roberta-large-tweetner7-continuous`](https://huggingface.co/tner/roberta-large-tweetner7-continuous) | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7) | [`roberta-large`](https://huggingface.co/roberta-large) | 66.02 | 60.9 |
| [`tner/roberta-base-tweetner7-continuous`](https://huggingface.co/tner/roberta-base-tweetner7-continuous) | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7) | [`roberta-base`](https://huggingface.co/roberta-base) | 65.47 | 60.01 |
| [`tner/twitter-roberta-base-2019-90m-tweetner7-continuous`](https://huggingface.co/tner/twitter-roberta-base-2019-90m-tweetner7-continuous) | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7) | [`cardiffnlp/twitter-roberta-base-2019-90m`](https://huggingface.co/cardiffnlp/twitter-roberta-base-2019-90m) | 65.87 | 61.07 |
| [`tner/twitter-roberta-base-dec2020-tweetner7-continuous`](https://huggingface.co/tner/twitter-roberta-base-dec2020-tweetner7-continuous) | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7) | [`cardiffnlp/twitter-roberta-base-dec2020`](https://huggingface.co/cardiffnlp/twitter-roberta-base-dec2020) | 65.51 | 60.57 |
| [`tner/bertweet-large-tweetner7-continuous`](https://huggingface.co/tner/bertweet-large-tweetner7-continuous) | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7) | [`vinai/bertweet-large`](https://huggingface.co/vinai/bertweet-large) | 66.41 | 61.66 |
| [`tner/bertweet-base-tweetner7-continuous`](https://huggingface.co/tner/bertweet-base-tweetner7-continuous) | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7) | [`vinai/bertweet-base`](https://huggingface.co/vinai/bertweet-base) | 65.84 | 61.02 |
| [`tner/bert-large-tweetner7-continuous`](https://huggingface.co/tner/bert-large-tweetner7-continuous) | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7) | [`bert-large`](https://huggingface.co/bert-large) | 63.2 | 57.67 |
| [`tner/roberta-large-tweetner7-2021`](https://huggingface.co/tner/roberta-large-tweetner7-2021) | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7) | [`roberta-large`](https://huggingface.co/roberta-large) | 64.05 | 59.11 |
| [`tner/roberta-base-tweetner7-2021`](https://huggingface.co/tner/roberta-base-tweetner7-2021) | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7) | [`roberta-base`](https://huggingface.co/roberta-base) | 61.76 | 57 |
| [`tner/twitter-roberta-base-dec2020-tweetner7-2021`](https://huggingface.co/tner/twitter-roberta-base-dec2020-tweetner7-2021) | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7) | [`cardiffnlp/twitter-roberta-base-dec2020`](https://huggingface.co/cardiffnlp/twitter-roberta-base-dec2020) | 63.98 | 58.91 |
| [`tner/bertweet-large-tweetner7-2021`](https://huggingface.co/tner/bertweet-large-tweetner7-2021) | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7) | [`vinai/bertweet-large`](https://huggingface.co/vinai/bertweet-large) | 62.9 | 58.13 |
| [`tner/bertweet-base-tweetner7-2021`](https://huggingface.co/tner/bertweet-base-tweetner7-2021) | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7) | [`vinai/bertweet-base`](https://huggingface.co/vinai/bertweet-base) | 63.09 | 57.35 |
| [`tner/bert-large-tweetner7-2021`](https://huggingface.co/tner/bert-large-tweetner7-2021) | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7) | [`bert-large`](https://huggingface.co/bert-large) | 59.75 | 53.93 |
| [`tner/bert-base-tweetner7-2021`](https://huggingface.co/tner/bert-base-tweetner7-2021) | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7) | [`bert-base`](https://huggingface.co/bert-base) | 60.67 | 55.5 |
| [`tner/roberta-large-tweetner7-2020`](https://huggingface.co/tner/roberta-large-tweetner7-2020) | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7) | [`roberta-large`](https://huggingface.co/roberta-large) | 64.76 | 60 |
| [`tner/roberta-base-tweetner7-2020`](https://huggingface.co/tner/roberta-base-tweetner7-2020) | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7) | [`roberta-base`](https://huggingface.co/roberta-base) | 64.21 | 59.11 |
| [`tner/twitter-roberta-base-2019-90m-tweetner7-2020`](https://huggingface.co/tner/twitter-roberta-base-2019-90m-tweetner7-2020) | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7) | [`cardiffnlp/twitter-roberta-base-2019-90m`](https://huggingface.co/cardiffnlp/twitter-roberta-base-2019-90m) | 64.28 | 59.31 |
| [`tner/twitter-roberta-base-dec2020-tweetner7-2020`](https://huggingface.co/tner/twitter-roberta-base-dec2020-tweetner7-2020) | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7) | [`cardiffnlp/twitter-roberta-base-dec2020`](https://huggingface.co/cardiffnlp/twitter-roberta-base-dec2020) | 62.87 | 58.26 |
| [`tner/bertweet-large-tweetner7-2020`](https://huggingface.co/tner/bertweet-large-tweetner7-2020) | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7) | [`vinai/bertweet-large`](https://huggingface.co/vinai/bertweet-large) | 64.01 | 59.47 |
| [`tner/bertweet-base-tweetner7-2020`](https://huggingface.co/tner/bertweet-base-tweetner7-2020) | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7) | [`vinai/bertweet-base`](https://huggingface.co/vinai/bertweet-base) | 64.06 | 59.44 |
| [`tner/bert-large-tweetner7-2020`](https://huggingface.co/tner/bert-large-tweetner7-2020) | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7) | [`bert-large`](https://huggingface.co/bert-large) | 61.43 | 56.14 |
| [`tner/bert-base-tweetner7-2020`](https://huggingface.co/tner/bert-base-tweetner7-2020) | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7) | [`bert-base`](https://huggingface.co/bert-base) | 60.09 | 54.67 |
Model descriptions follow below.
* Model with suffix `-all`: Model fine-tuned on `train_all` and validated on `validation_2021`.
* Model with suffix `-continuous`: Model fine-tuned on `train_2021` continuously after fine-tuning on `train_2020` and validated on `validation_2021`.
* Model with suffix `-2021`: Model fine-tuned only on `train_2021` and validated on `validation_2021`.
* Model with suffix `-2020`: Model fine-tuned only on `train_2020` and validated on `validation_2020`.
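As a convenience, the split pairings implied by the model suffixes can be collected into a small lookup table. This is an illustrative helper, not part of the released tooling, and it assumes the `-2020` suffix denotes fine-tuning on `train_2020`:

```python
# Hypothetical helper mapping each fine-tuning setting to its split pairing.
SETTINGS = {
    "all":        {"train": "train_all",  "validation": "validation_2021"},
    "continuous": {"train": "train_2021", "validation": "validation_2021"},  # after fine-tuning on train_2020
    "2021":       {"train": "train_2021", "validation": "validation_2021"},
    "2020":       {"train": "train_2020", "validation": "validation_2020"},
}

def splits_for(suffix):
    """Return the (train, validation) split names for a model-suffix setting."""
    return SETTINGS[suffix]["train"], SETTINGS[suffix]["validation"]

print(splits_for("all"))  # ('train_all', 'validation_2021')
```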
### Sub Models (used in ablation study)
- Model fine-tuned only on `train_random` and validated on `validation_2020`.
| Model (link) | Data | Language Model | Micro F1 (2021) | Macro F1 (2021) |
|:------------------------------------------------------------------------------------------------------------------------------------|:--------------------------------------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------|------------------:|------------------:|
| [`tner/roberta-large-tweetner7-random`](https://huggingface.co/tner/roberta-large-tweetner7-random) | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7) | [`roberta-large`](https://huggingface.co/roberta-large) | 66.33 | 60.96 |
| [`tner/twitter-roberta-base-2019-90m-tweetner7-random`](https://huggingface.co/tner/twitter-roberta-base-2019-90m-tweetner7-random) | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7) | [`cardiffnlp/twitter-roberta-base-2019-90m`](https://huggingface.co/cardiffnlp/twitter-roberta-base-2019-90m) | 63.29 | 58.5 |
| [`tner/roberta-base-tweetner7-random`](https://huggingface.co/tner/roberta-base-tweetner7-random) | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7) | [`roberta-base`](https://huggingface.co/roberta-base) | 64.04 | 59.23 |
| [`tner/twitter-roberta-base-dec2020-tweetner7-random`](https://huggingface.co/tner/twitter-roberta-base-dec2020-tweetner7-random) | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7) | [`cardiffnlp/twitter-roberta-base-dec2020`](https://huggingface.co/cardiffnlp/twitter-roberta-base-dec2020) | 64.72 | 59.97 |
| [`tner/bertweet-large-tweetner7-random`](https://huggingface.co/tner/bertweet-large-tweetner7-random) | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7) | [`vinai/bertweet-large`](https://huggingface.co/vinai/bertweet-large) | 64.86 | 60.49 |
| [`tner/bertweet-base-tweetner7-random`](https://huggingface.co/tner/bertweet-base-tweetner7-random) | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7) | [`vinai/bertweet-base`](https://huggingface.co/vinai/bertweet-base) | 65.55 | 59.58 |
| [`tner/bert-large-tweetner7-random`](https://huggingface.co/tner/bert-large-tweetner7-random) | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7) | [`bert-large`](https://huggingface.co/bert-large) | 62.39 | 57.54 |
| [`tner/bert-base-tweetner7-random`](https://huggingface.co/tner/bert-base-tweetner7-random) | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7) | [`bert-base`](https://huggingface.co/bert-base) | 60.91 | 55.92 |
- Model fine-tuned on the self-labeled dataset on `extra_{2020,2021}` and validated on `validation_2020`.
| Model (link) | Data | Language Model | Micro F1 (2021) | Macro F1 (2021) |
|:----------------------------------------------------------------------------------------------------------------------------------------|:--------------------------------------------------------------|:--------------------------------------------------------|------------------:|------------------:|
| [`tner/roberta-large-tweetner7-selflabel2020`](https://huggingface.co/tner/roberta-large-tweetner7-selflabel2020) | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7) | [`roberta-large`](https://huggingface.co/roberta-large) | 64.56 | 59.63 |
| [`tner/roberta-large-tweetner7-selflabel2021`](https://huggingface.co/tner/roberta-large-tweetner7-selflabel2021) | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7) | [`roberta-large`](https://huggingface.co/roberta-large) | 64.6 | 59.45 |
| [`tner/roberta-large-tweetner7-2020-selflabel2020-all`](https://huggingface.co/tner/roberta-large-tweetner7-2020-selflabel2020-all) | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7) | [`roberta-large`](https://huggingface.co/roberta-large) | 65.46 | 60.39 |
| [`tner/roberta-large-tweetner7-2020-selflabel2021-all`](https://huggingface.co/tner/roberta-large-tweetner7-2020-selflabel2021-all) | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7) | [`roberta-large`](https://huggingface.co/roberta-large) | 64.52 | 59.45 |
| [`tner/roberta-large-tweetner7-selflabel2020-continuous`](https://huggingface.co/tner/roberta-large-tweetner7-selflabel2020-continuous) | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7) | [`roberta-large`](https://huggingface.co/roberta-large) | 65.15 | 60.23 |
| [`tner/roberta-large-tweetner7-selflabel2021-continuous`](https://huggingface.co/tner/roberta-large-tweetner7-selflabel2021-continuous) | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7) | [`roberta-large`](https://huggingface.co/roberta-large) | 64.48 | 59.41 |
Model descriptions follow below.
* Model with suffix `-selflabel2020`: Fine-tuned on the self-annotated data of the `extra_2020` split of [tweetner7](https://huggingface.co/datasets/tner/tweetner7).
* Model with suffix `-selflabel2021`: Fine-tuned on the self-annotated data of the `extra_2021` split of [tweetner7](https://huggingface.co/datasets/tner/tweetner7).
* Model with suffix `-2020-selflabel2020-all`: Fine-tuned on the combined training dataset of `train_2020` and the self-annotated `extra_2020` split of [tweetner7](https://huggingface.co/datasets/tner/tweetner7).
* Model with suffix `-2020-selflabel2021-all`: Fine-tuned on the combined training dataset of `train_2020` and the self-annotated `extra_2021` split of [tweetner7](https://huggingface.co/datasets/tner/tweetner7).
* Model with suffix `-selflabel2020-continuous`: Fine-tuned on `train_2020` and then continuously fine-tuned on the self-annotated `extra_2020` split of [tweetner7](https://huggingface.co/datasets/tner/tweetner7).
* Model with suffix `-selflabel2021-continuous`: Fine-tuned on `train_2020` and then continuously fine-tuned on the self-annotated `extra_2021` split of [tweetner7](https://huggingface.co/datasets/tner/tweetner7).
### Reproduce Experimental Result
To reproduce the experimental results in our AACL paper, please see the repository
[https://github.com/asahi417/tner/tree/master/examples/tweetner7_paper](https://github.com/asahi417/tner/tree/master/examples/tweetner7_paper).
## Citation Information
```
@inproceedings{ushio-etal-2022-tweet,
title = "{N}amed {E}ntity {R}ecognition in {T}witter: {A} {D}ataset and {A}nalysis on {S}hort-{T}erm {T}emporal {S}hifts",
author = "Ushio, Asahi and
Neves, Leonardo and
Silva, Vitor and
Barbieri, Francesco and
Camacho-Collados, Jose",
booktitle = "The 2nd Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 12th International Joint Conference on Natural Language Processing",
month = nov,
year = "2022",
address = "Online",
publisher = "Association for Computational Linguistics",
}
```
|
ziwenyd/AVATAR | 2022-07-27T08:27:28.000Z | [
"region:us"
] | ziwenyd | null | null | null | 0 | 33 | Entry not found |
Yaxin/SemEval2014Task4NLTK | 2022-08-15T06:56:51.000Z | [
"region:us"
] | Yaxin | A collection of SemEval2014 specifically designed to aid research in Aspect Based Sentiment Analysis. | @article{2014SemEval,
title={SemEval-2014 Task 4: Aspect Based Sentiment Analysis},
author={ Pontiki, M. and D Galanis and Pavlopoulos, J. and Papageorgiou, H. and Manandhar, S. },
journal={Proceedings of International Workshop on Semantic Evaluation at},
year={2014},
} | null | 0 | 33 | Entry not found |
indonesian-nlp/librivox-indonesia | 2022-10-24T09:14:51.000Z | [
"task_categories:automatic-speech-recognition",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"multilinguality:multilingual",
"source_datasets:librivox",
"language:ace",
"language:bal",
"language:bug",
"language:ind",
"language:min",
"language:jav",
"language:sun",
"... | indonesian-nlp | null | \ | null | 2 | 33 | ---
pretty_name: LibriVox Indonesia 1.0
annotations_creators:
- crowdsourced
language_creators:
- crowdsourced
language:
- ace
- bal
- bug
- ind
- min
- jav
- sun
license: cc
multilinguality:
- multilingual
size_categories:
ace:
- 1K<n<10K
bal:
- 1K<n<10K
bug:
- 1K<n<10K
ind:
- 1K<n<10K
min:
- 1K<n<10K
jav:
- 1K<n<10K
sun:
- 1K<n<10K
source_datasets:
- librivox
task_categories:
- automatic-speech-recognition
---
# Dataset Card for LibriVox Indonesia 1.0
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://huggingface.co/datasets/indonesian-nlp/librivox-indonesia
- **Repository:** https://huggingface.co/datasets/indonesian-nlp/librivox-indonesia
- **Point of Contact:** [Cahya Wirawan](mailto:cahya.wirawan@gmail.com)
### Dataset Summary
The LibriVox Indonesia dataset consists of MP3 audio files and corresponding text transcripts that we generated from the public-domain
audiobooks on [LibriVox](https://librivox.org/). We collected only languages spoken in Indonesia for this dataset.
The original LibriVox audiobooks vary in duration from a few minutes to a few hours, while each audio
file in this speech dataset lasts from a few seconds to a maximum of 20 seconds.
We converted the audiobooks into speech datasets using forced-alignment software that we developed. It supports
many languages, including low-resource ones such as Acehnese, Balinese, and Minangkabau, and it can be applied
to other languages without additional work to train the model.
The dataset currently comprises 8 hours of audio in 7 languages from Indonesia. We will add more languages or audio files
as we collect them.
### Languages
```
Acehnese, Balinese, Buginese, Indonesian, Minangkabau, Javanese, Sundanese
```
## Dataset Structure
### Data Instances
A typical data point comprises the `path` to the audio file and its `sentence`. Additional fields include
`reader` and `language`.
```python
{
'path': 'librivox-indonesia/sundanese/universal-declaration-of-human-rights/human_rights_un_sun_brc_0000.mp3',
'language': 'sun',
'reader': '3174',
'sentence': 'pernyataan umum ngeunaan hak hak asasi manusa sakabeh manusa',
'audio': {
'path': 'librivox-indonesia/sundanese/universal-declaration-of-human-rights/human_rights_un_sun_brc_0000.mp3',
'array': array([-0.00048828, -0.00018311, -0.00137329, ..., 0.00079346, 0.00091553, 0.00085449], dtype=float32),
'sampling_rate': 44100
},
}
```
### Data Fields
- `path` (`string`): The path to the audio file.
- `language` (`string`): The language of the audio file.
- `reader` (`string`): The reader ID in LibriVox.
- `sentence` (`string`): The sentence the user read from the book.
- `audio` (`dict`): A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column (`dataset[0]["audio"]`), the audio file is automatically decoded and resampled to `dataset.features["audio"].sampling_rate`. Decoding and resampling a large number of audio files might take a significant amount of time, so it is important to query the sample index before the `"audio"` column, *i.e.* `dataset[0]["audio"]` should **always** be preferred over `dataset["audio"][0]`.
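The indexing advice above can be illustrated with a small, self-contained sketch. This is plain Python simulating lazy decoding, not the actual `datasets` API; the counter shows why column-first access is far more expensive than row-first access.

```python
# Toy illustration (NOT the `datasets` API) of the access-pattern advice above.
# `decode` stands in for audio decoding/resampling and counts how often it runs.
decode_calls = 0

def decode(path):
    """Pretend to decode one audio file (expensive in the real dataset)."""
    global decode_calls
    decode_calls += 1
    return f"decoded:{path}"

paths = [f"clip_{i}.mp3" for i in range(1000)]

# `dataset["audio"][0]`-style access: materialize (decode) the whole column,
# then take item 0.
column = [decode(p) for p in paths]
first_via_column = column[0]
column_cost = decode_calls

# `dataset[0]["audio"]`-style access: fetch one row and decode only its file.
decode_calls = 0
first_via_row = decode(paths[0])
row_cost = decode_calls

print(column_cost, row_cost)  # 1000 decodes vs. 1
```

The same principle applies to the real dataset: query the sample index first, then the `"audio"` column.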
### Data Splits
The speech material has only a train split.
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
Public Domain, [CC-0](https://creativecommons.org/share-your-work/public-domain/cc0/)
### Citation Information
```
``` |
tner/multinerd | 2022-09-27T19:48:40.000Z | [
"task_categories:token-classification",
"task_ids:named-entity-recognition",
"multilinguality:multilingual",
"size_categories:<10K",
"language:de",
"language:en",
"language:es",
"language:fr",
"language:it",
"language:nl",
"language:pl",
"language:pt",
"language:ru",
"region:us"
] | tner | [MultiNERD](https://aclanthology.org/2022.findings-naacl.60/) | @inproceedings{tedeschi-navigli-2022-multinerd,
title = "{M}ulti{NERD}: A Multilingual, Multi-Genre and Fine-Grained Dataset for Named Entity Recognition (and Disambiguation)",
author = "Tedeschi, Simone and
Navigli, Roberto",
booktitle = "Findings of the Association for Computational Linguistics: NAACL 2022",
month = jul,
year = "2022",
address = "Seattle, United States",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.findings-naacl.60",
doi = "10.18653/v1/2022.findings-naacl.60",
pages = "801--812",
abstract = "Named Entity Recognition (NER) is the task of identifying named entities in texts and classifying them through specific semantic categories, a process which is crucial for a wide range of NLP applications. Current datasets for NER focus mainly on coarse-grained entity types, tend to consider a single textual genre and to cover a narrow set of languages, thus limiting the general applicability of NER systems.In this work, we design a new methodology for automatically producing NER annotations, and address the aforementioned limitations by introducing a novel dataset that covers 10 languages, 15 NER categories and 2 textual genres.We also introduce a manually-annotated test set, and extensively evaluate the quality of our novel dataset on both this new test set and standard benchmarks for NER.In addition, in our dataset, we include: i) disambiguation information to enable the development of multilingual entity linking systems, and ii) image URLs to encourage the creation of multimodal systems.We release our dataset at https://github.com/Babelscape/multinerd.",
} | null | 5 | 33 | ---
language:
- de
- en
- es
- fr
- it
- nl
- pl
- pt
- ru
multilinguality:
- multilingual
size_categories:
- <10K
task_categories:
- token-classification
task_ids:
- named-entity-recognition
pretty_name: MultiNERD
---
# Dataset Card for "tner/multinerd"
## Dataset Description
- **Repository:** [T-NER](https://github.com/asahi417/tner)
- **Paper:** [https://aclanthology.org/2022.findings-naacl.60/](https://aclanthology.org/2022.findings-naacl.60/)
- **Dataset:** MultiNERD
- **Domain:** Wikipedia, WikiNews
- **Number of Entity:** 18
### Dataset Summary
MultiNERD NER benchmark dataset, formatted as part of the [TNER](https://github.com/asahi417/tner) project.
- Entity Types: `PER`, `LOC`, `ORG`, `ANIM`, `BIO`, `CEL`, `DIS`, `EVE`, `FOOD`, `INST`, `MEDIA`, `PLANT`, `MYTH`, `TIME`, `VEHI`, `MISC`, `SUPER`, `PHY`
## Dataset Structure
### Data Instances
An example from the `de` subset looks as follows.
```
{
'tokens': [ "Die", "Blätter", "des", "Huflattichs", "sind", "leicht", "mit", "den", "sehr", "ähnlichen", "Blättern", "der", "Weißen", "Pestwurz", "(", "\"", "Petasites", "albus", "\"", ")", "zu", "verwechseln", "." ],
'tags': [ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 2, 0, 0, 0 ]
}
```
### Label ID
The label2id dictionary can be found [here](https://huggingface.co/datasets/tner/multinerd/raw/main/dataset/label.json).
```python
{
"O": 0,
"B-PER": 1,
"I-PER": 2,
"B-LOC": 3,
"I-LOC": 4,
"B-ORG": 5,
"I-ORG": 6,
"B-ANIM": 7,
"I-ANIM": 8,
"B-BIO": 9,
"I-BIO": 10,
"B-CEL": 11,
"I-CEL": 12,
"B-DIS": 13,
"I-DIS": 14,
"B-EVE": 15,
"I-EVE": 16,
"B-FOOD": 17,
"I-FOOD": 18,
"B-INST": 19,
"I-INST": 20,
"B-MEDIA": 21,
"I-MEDIA": 22,
"B-PLANT": 23,
"I-PLANT": 24,
"B-MYTH": 25,
"I-MYTH": 26,
"B-TIME": 27,
"I-TIME": 28,
"B-VEHI": 29,
"I-VEHI": 30,
"B-SUPER": 31,
"I-SUPER": 32,
"B-PHY": 33,
"I-PHY": 34
}
```
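As a small usage sketch (using an abridged copy of the mapping above; not part of the original card), the integer `tags` of an instance can be decoded back to BIO label strings by inverting `label2id`:

```python
# Invert an (abridged) label2id mapping and decode a tag-id sequence.
label2id = {"O": 0, "B-PER": 1, "I-PER": 2, "B-LOC": 3, "I-LOC": 4}

id2label = {v: k for k, v in label2id.items()}

def decode_tags(tags):
    """Map integer tag ids back to their BIO label strings."""
    return [id2label[t] for t in tags]

print(decode_tags([0, 1, 2, 0, 3]))  # ['O', 'B-PER', 'I-PER', 'O', 'B-LOC']
```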
### Data Splits
| language | test |
|:-----------|-------:|
| de | 156792 |
| en | 164144 |
| es | 173189 |
| fr | 176185 |
| it | 181927 |
| nl | 171711 |
| pl | 194965 |
| pt | 177565 |
| ru | 82858 |
### Citation Information
```
@inproceedings{tedeschi-navigli-2022-multinerd,
title = "{M}ulti{NERD}: A Multilingual, Multi-Genre and Fine-Grained Dataset for Named Entity Recognition (and Disambiguation)",
author = "Tedeschi, Simone and
Navigli, Roberto",
booktitle = "Findings of the Association for Computational Linguistics: NAACL 2022",
month = jul,
year = "2022",
address = "Seattle, United States",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.findings-naacl.60",
doi = "10.18653/v1/2022.findings-naacl.60",
pages = "801--812",
abstract = "Named Entity Recognition (NER) is the task of identifying named entities in texts and classifying them through specific semantic categories, a process which is crucial for a wide range of NLP applications. Current datasets for NER focus mainly on coarse-grained entity types, tend to consider a single textual genre and to cover a narrow set of languages, thus limiting the general applicability of NER systems.In this work, we design a new methodology for automatically producing NER annotations, and address the aforementioned limitations by introducing a novel dataset that covers 10 languages, 15 NER categories and 2 textual genres.We also introduce a manually-annotated test set, and extensively evaluate the quality of our novel dataset on both this new test set and standard benchmarks for NER.In addition, in our dataset, we include: i) disambiguation information to enable the development of multilingual entity linking systems, and ii) image URLs to encourage the creation of multimodal systems.We release our dataset at https://github.com/Babelscape/multinerd.",
}
``` |
bigcode/the-stack-metadata | 2023-03-16T13:58:24.000Z | [
"task_categories:text-generation",
"language_creators:crowdsourced",
"language_creators:expert-generated",
"multilinguality:multilingual",
"size_categories:unknown",
"language:code",
"license:other",
"arxiv:2211.15533",
"region:us"
] | bigcode | null | null | null | 3 | 33 | ---
annotations_creators: []
language_creators:
- crowdsourced
- expert-generated
language:
- code
license:
- other
multilinguality:
- multilingual
pretty_name: The-Stack-Metadata
size_categories:
- unknown
source_datasets: []
task_categories:
- text-generation
task_ids: []
extra_gated_prompt: |-
## Terms of Use for The Stack
The Stack Metadata is a collection of additional information for, and is part of, The Stack dataset: a collection of source code in over 300 programming languages. We ask that you read and acknowledge the following points before using the dataset:
1. The Stack is a collection of source code from repositories with various licenses. Any use of all or part of the code gathered in The Stack must abide by the terms of the original licenses, including attribution clauses when relevant. We facilitate this by providing provenance information for each data point.
2. The Stack is regularly updated to enact validated data removal requests. By clicking on "Access repository", you agree to update your own version of The Stack to the most recent usable version specified by the maintainers in [the following thread](https://huggingface.co/datasets/bigcode/the-stack/discussions/7). If you have questions about dataset versions and allowed uses, please also ask them in the dataset’s [community discussions](https://huggingface.co/datasets/bigcode/the-stack/discussions/new). We will also notify users via email when the latest usable version changes.
3. To host, share, or otherwise provide access to The Stack dataset, you must include [these Terms of Use](https://huggingface.co/datasets/bigcode/the-stack#terms-of-use-for-the-stack) and require users to agree to it.
By clicking on "Access repository" below, you accept that your contact information (email address and username) can be shared with the dataset maintainers as well.
extra_gated_fields:
Email: text
I have read the License and agree with its terms: checkbox
---
# Dataset Card for The Stack Metadata
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Changelog](#changelog)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
- [Usage Example](#usage-example)
- [Dataset Creation](#dataset-creation)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Additional Information](#additional-information)
- [Terms of Use for The Stack](#terms-of-use-for-the-stack)
## Dataset Description
- **Homepage:** https://www.bigcode-project.org/
- **Repository:** https://github.com/bigcode-project
- **Paper:** https://arxiv.org/abs/2211.15533
- **Leaderboard:** N/A
- **Point of Contact:** contact@bigcode-project.org
### Changelog
|Release|Description|
|-|-|
|v1.1| This is the first release of the metadata. It is for The Stack v1.1|
|v1.2| Metadata dataset matching The Stack v1.2|
### Dataset Summary
This is a set of additional information for the repositories used in The Stack. It contains file paths and detected licenses, as well as some other information for the repositories.
### Supported Tasks and Leaderboards
The main task is to recreate repository structures from the files in The Stack. The set can also be used for computing statistics and for custom filtering or aggregation operations on The Stack.
## Dataset Structure
### Data Fields

The set is split into 944 buckets by repository. In addition to the fields in the image, `ri` contains `min_repo_event_datetime`, which is the earliest date and time of an event for a repository after Jan 1, 2015.

As an example of an aggregation operation on The Stack, the image above conceptually shows the selection of stars (as well as issue and PR counts) for a file. Each unique file can be part of multiple repositories, so The Stack releases unique files and aggregates meta information (e.g. stars) from all repositories a file belongs to. For example, for `max_stars_count` we take the maximum number of stars across all repositories the file is part of.
The metadata allows you to reconstruct repository directory structures. To do this, for each repository from the `ri` table, take all of its files from the `fi` table, find them in The Stack by the file's `hexsha`, and save each file's content under its path from the `fi` table. For speed, it is preferable to index The Stack by `hexsha` first.
### Usage Example
Restore the folder structure for the Python files in the numpy repository:
```python
import datasets
from pathlib import Path
from tqdm.auto import tqdm
import pandas as pd
# assuming metadata is cloned into the local folder /data/hf_repos/the-stack-metadata
# the stack is cloned into the local folder /data/hf_repos/the-stack-v1.1
# destination folder is in /repo_workdir/numpy_restored
the_stack_meta_path = Path('/data/hf_repos/the-stack-metadata')
the_stack_path = Path('/data/hf_repos/the-stack-v1.1')
repo_dst_root = Path('/repo_workdir/numpy_restored')
repo_name = 'numpy/numpy'
# Get the bucket containing the numpy repo info.
# The general (slow) way is to scan every bucket's ri.parquet:
# meta_bucket_path = None
# for fn in tqdm(list((the_stack_meta_path/'data').glob('*/ri.parquet'))):
#     df = pd.read_parquet(fn)
#     if any(df['name'] == repo_name):
#         meta_bucket_path = fn
#         break
# Here we use the known bucket directly:
meta_bucket_path = the_stack_meta_path / 'data/255_944'
# Get repository id from repo name
ri_id = pd.read_parquet(
meta_bucket_path / 'ri.parquet'
).query(
f'`name` == "{repo_name}"'
)['id'].to_list()[0]
# Get file information for the repository
files_info = pd.read_parquet(
meta_bucket_path / 'fi.parquet'
).query(
f'`ri_id` == {ri_id} and `size` != 0 and `is_deleted` == False'
)
# Convert DF with files information to a dictionary by language and then file hexsha
# there can be more than one file with the same hexsha in the repo so we gather
# all instances per unique hexsha
files_info_dict = {
k: v[['hexsha', 'path']].groupby('hexsha').apply(lambda x: list(x['path'])).to_dict()
for k, v in files_info.groupby('lang_ex')
}
# Load Python part of The Stack
ds = datasets.load_dataset(
str(the_stack_path/'data/python'),
num_proc=10, ignore_verifications=True
)
# Save the content of the Python files in the numpy repository to their appropriate locations
def save_file_content(example, files_info_dict, repo_dst_root):
if example['hexsha'] in files_info_dict:
for el in files_info_dict[example['hexsha']]:
path = repo_dst_root / el
path.parent.mkdir(parents=True, exist_ok=True)
path.write_text(example['content'])
ds.map(
save_file_content,
fn_kwargs={'files_info_dict': files_info_dict['Python'], 'repo_dst_root': repo_dst_root},
num_proc=10
)
```
## Dataset Creation
Please refer to [the section](https://huggingface.co/datasets/bigcode/the-stack#dataset-creation) in The Stack.
## Considerations for Using the Data
Please refer to [the section](https://huggingface.co/datasets/bigcode/the-stack#considerations-for-using-the-data) in The Stack.
## Additional Information
Please refer to [the section](https://huggingface.co/datasets/bigcode/the-stack#additional-information) in The Stack.
## Terms of Use for The Stack
Please refer to [the section](https://huggingface.co/datasets/bigcode/the-stack#terms-of-use-for-the-stack) in The Stack. |
Xpitfire/cmp_facade | 2023-01-15T01:43:17.000Z | [
"task_categories:image-segmentation",
"language:en",
"license:mit",
"building",
"facade",
"region:us"
] | Xpitfire | null | null | null | 1 | 33 | ---
license: mit
task_categories:
- image-segmentation
language:
- en
tags:
- building
- facade
---
# CMP Facade Database
We present a dataset of facade images assembled at the Center for Machine Perception. It includes 606 rectified and manually annotated images of facades from various sources, covering different cities around the world and diverse architectural styles.
## Documentation
Data origin, format and processing, and the annotation principles for the 12 classes are specified in the report.
- facade
- molding
- cornice
- pillar
- window
- door
- sill
- blind
- balcony
- shop
- deco
- background
Link to original website:
https://cmp.felk.cvut.cz/~tylecr1/facade/
## Citation
Please use the following reference to cite the dataset:
```latex
@INPROCEEDINGS{Tylecek13,
author = {Radim Tyle{\v c}ek and Radim {\v S}{\' a}ra},
title = {Spatial Pattern Templates for Recognition of Objects with Regular Structure},
booktitle = {Proc. GCPR},
year = {2013},
address = {Saarbrucken, Germany},
}
``` |
Multimodal-Fatima/COCO_captions_train | 2023-03-17T21:59:22.000Z | [
"region:us"
] | Multimodal-Fatima | null | null | null | 2 | 33 | ---
dataset_info:
features:
- name: image
dtype: image
- name: filepath
dtype: string
- name: sentids
list: int32
- name: filename
dtype: string
- name: imgid
dtype: int32
- name: split
dtype: string
- name: sentences_tokens
list:
list: string
- name: sentences_raw
list: string
- name: sentences_sentid
list: int32
- name: cocoid
dtype: int32
- name: id
dtype: int64
splits:
- name: train
num_bytes: 18595506212.0
num_examples: 113287
download_size: 18500220513
dataset_size: 18595506212.0
---
# Dataset Card for "COCO_captions_train"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
KK04/LogicInference_OA | 2023-04-05T15:38:22.000Z | [
"task_categories:question-answering",
"size_categories:10K<n<100K",
"language:en",
"license:apache-2.0",
"Logic Inference",
"region:us"
] | KK04 | null | null | null | 5 | 33 | ---
dataset_info:
features:
- name: INSTRUCTION
dtype: string
- name: RESPONSE
dtype: string
- name: SOURCE
dtype: string
splits:
- name: train
num_bytes: 30414202
num_examples: 54607
download_size: 7588805
dataset_size: 30414202
license: apache-2.0
task_categories:
- question-answering
language:
- en
tags:
- Logic Inference
size_categories:
- 10K<n<100K
---
# Dataset Card for "LogicInference_OA"
This is a reproduction of the LogicInference dataset from the paper: https://openreview.net/pdf?id=HAGeIS_Lcg9.
The GitHub page of the LogicInference dataset: https://github.com/google-research/google-research/tree/master/logic_inference_dataset.
This dataset aims to offer more data for the Open Assistant project; following its requirements, there are three columns: INSTRUCTION, RESPONSE, and SOURCE.
The results in this dataset differ slightly from those introduced in the original paper:
1. Of the three splits (IID/OOD/length), only IID is used. In the original paper, models seem to reach better performance with data generated by this split method.
2. In the original paper, there are two forms of responses: LOGICINFERENCE<sub>b</sub> (with the answer at the beginning) and LOGICINFERENCE<sub>e</sub> (with the answer at the end). This dataset uses LOGICINFERENCE<sub>e</sub>, which means that for all questions the model first performs logic inference and gives the final answer at the end.
3. In the original paper, some parameters in `generate_dataset.py` are:
- `N_INFERENCE_PROBLEMS = 5000`
- `N_VARIATIONS = 25`
- `N_EXAMPLES = 200000`
- `TRAIN_RATIO = 0.9`
- `LENGTH_SPLIT_THRESHOLD = 4`
- `RANDOM_SEED = 0`

I chose new parameters:
- `N_INFERENCE_PROBLEMS = 10000`
- `N_VARIATIONS = 25`
- `N_EXAMPLES = 55000`
- `TRAIN_RATIO = 1`
- `LENGTH_SPLIT_THRESHOLD = 4`
- `RANDOM_SEED = 1111`
The original script generated 4814 different inference problems and extended them to around 200,000 Q-A pairs. My settings generated 5491 different inference problems and extended them to 54,607 Instruction-Response pairs. I think that for Open Assistant projects the number of different inference problems is more important; generating many similar Instruction-Response pairs only adds training time and does not make much sense. |
ruanchaves/assin_por_Latn_to_eng_Latn | 2023-04-22T19:11:54.000Z | [
"region:us"
] | ruanchaves | null | null | null | 0 | 33 | ---
dataset_info:
features:
- name: sentence_pair_id
dtype: int64
- name: premise
dtype: string
- name: hypothesis
dtype: string
- name: relatedness_score
dtype: float32
- name: entailment_judgment
dtype:
class_label:
names:
'0': NONE
'1': ENTAILMENT
'2': PARAPHRASE
- name: __language__
dtype: string
splits:
- name: train
num_bytes: 993418
num_examples: 5000
- name: test
num_bytes: 777672
num_examples: 4000
- name: validation
num_bytes: 198351
num_examples: 1000
download_size: 0
dataset_size: 1969441
---
# Dataset Card for "assin_por_Latn_to_eng_Latn"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
theblackcat102/sharegpt-english | 2023-04-22T03:57:11.000Z | [
"task_categories:text-generation",
"size_categories:10K<n<100K",
"language:en",
"license:other",
"region:us"
] | theblackcat102 | null | null | null | 5 | 33 | ---
license: other
task_categories:
- text-generation
language:
- en
size_categories:
- 10K<n<100K
--- |
ruanchaves/assin_por_Latn_to_glg_Latn | 2023-04-22T19:12:15.000Z | [
"region:us"
] | ruanchaves | null | null | null | 0 | 33 | ---
dataset_info:
features:
- name: sentence_pair_id
dtype: int64
- name: premise
dtype: string
- name: hypothesis
dtype: string
- name: relatedness_score
dtype: float32
- name: entailment_judgment
dtype:
class_label:
names:
'0': NONE
'1': ENTAILMENT
'2': PARAPHRASE
- name: __language__
dtype: string
splits:
- name: train
num_bytes: 1005495
num_examples: 5000
- name: test
num_bytes: 781854
num_examples: 4000
- name: validation
num_bytes: 201144
num_examples: 1000
download_size: 0
dataset_size: 1988493
---
# Dataset Card for "assin_por_Latn_to_glg_Latn"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
alvations/globalvoices-en-es | 2023-05-11T01:14:34.000Z | [
"region:us"
] | alvations | null | null | null | 1 | 33 | ---
dataset_info:
features:
- name: en
dtype: string
- name: es
dtype: string
splits:
- name: train
num_bytes: 89033765
num_examples: 355136
download_size: 57678468
dataset_size: 89033765
---
# Dataset Card for "globalvoices-en-es"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Nan-Do/code-search-net-python | 2023-05-15T00:55:15.000Z | [
"task_categories:text-generation",
"task_categories:text2text-generation",
"task_categories:summarization",
"language:en",
"license:apache-2.0",
"code",
"python",
"CodeSearchNet",
"region:us"
] | Nan-Do | null | null | null | 12 | 33 | ---
dataset_info:
features:
- name: repo
dtype: string
- name: path
dtype: string
- name: func_name
dtype: string
- name: original_string
dtype: string
- name: language
dtype: string
- name: code
dtype: string
- name: code_tokens
sequence: string
- name: docstring
dtype: string
- name: docstring_tokens
sequence: string
- name: sha
dtype: string
- name: url
dtype: string
- name: partition
dtype: string
- name: summary
dtype: string
splits:
- name: train
num_bytes: 1772584117
num_examples: 455243
download_size: 598837908
dataset_size: 1772584117
license: apache-2.0
task_categories:
- text-generation
- text2text-generation
- summarization
language:
- en
tags:
- code
- python
- CodeSearchNet
pretty_name: Python CodeSearchNet with Summaries
---
# Dataset Card for "code-search-net-python"
## Dataset Description
- **Homepage:** None
- **Repository:** https://huggingface.co/datasets/Nan-Do/code-search-net-python
- **Paper:** None
- **Leaderboard:** None
- **Point of Contact:** [@Nan-Do](https://github.com/Nan-Do)
### Dataset Summary
This dataset is the Python portion of CodeSearchNet, annotated with a summary column.
The CodeSearchNet dataset includes open-source functions with comments, collected from GitHub.
The summary is a short description of what the function does.
### Languages
The dataset's comments are in English and the functions are written in Python.
### Data Splits
Train, test, and validation labels are included in the dataset as the `partition` column.
## Dataset Creation
May of 2023
### Curation Rationale
This dataset can be used to generate instruction-following (or many other interesting) datasets that are useful for training LLMs.
### Source Data
The CodeSearchNet dataset can be found at https://www.kaggle.com/datasets/omduggineni/codesearchnet
### Annotations
This dataset includes a summary column containing a short description of each function.
#### Annotation process
The annotation procedure was done using [Salesforce](https://huggingface.co/Salesforce) T5 summarization models.
A sample notebook of the process can be found at https://github.com/Nan-Do/OpenAssistantInstructionResponsePython
The annotations have been cleaned to ensure there are no repetitions or meaningless summaries (some may still be present in the dataset).
### Licensing Information
Apache 2.0 |
silk-road/Wizard-LM-Chinese-instruct-evol | 2023-05-15T00:13:52.000Z | [
"task_categories:text-generation",
"task_categories:question-answering",
"size_categories:10K<n<100K",
"language:zh",
"language:en",
"license:cc-by-4.0",
"region:us"
] | silk-road | null | null | null | 58 | 33 | ---
license: cc-by-4.0
task_categories:
- text-generation
- question-answering
language:
- zh
- en
size_categories:
- 10K<n<100K
---
Wizard-LM-Chinese was built on MSRA's Wizard-LM dataset by translating the instructions into Chinese and then calling GPT to obtain the answers.
Wizard-LM contains many instructions that are more difficult than those in Alpaca.
A small number of the Chinese instruction translations fail because of instruction injection.
The Chinese answers were obtained by querying GPT again with the translated Chinese questions.
We will gradually release more datasets on Hugging Face, including:
- [ ] Chinese translation of COCO Caption
- [ ] Chinese translation of CoQA
- [ ] Embedding data for CNewSum
- [ ] Augmented open QA data
- [x] Chinese translation of WizardLM
If you are also preparing these datasets, feel free to contact us so we can avoid duplicated spending.
# 骆驼(Luotuo): Open-Source Chinese Large Language Models
[https://github.com/LC1332/Luotuo-Chinese-LLM](https://github.com/LC1332/Luotuo-Chinese-LLM)
The 骆驼 (Luotuo) project is an open-source Chinese large language model project initiated by [冷子昂 (Ziang Leng)](https://blairleng.github.io) @ SenseTime, 陈启源 (Qiyuan Chen) @ Central China Normal University, and 李鲁鲁 (Cheng Li) @ SenseTime, and it comprises a series of language models.
(Note: [陈启源 (Qiyuan Chen)](https://qiyuan-chen.github.io/) is looking for a 2024 postgraduate recommendation advisor; feel free to get in touch.)
The Luotuo project is **not** an official SenseTime product.
## Citation
Please cite the repo if you use the data or code in this repo.
```
@misc{alpaca,
author={Ziang Leng, Qiyuan Chen and Cheng Li},
title = {Luotuo: An Instruction-following Chinese Language model, LoRA tuning on LLaMA},
year = {2023},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/LC1332/Luotuo-Chinese-LLM}},
}
``` |
fnlp/moss-003-sft-data | 2023-07-09T15:09:50.000Z | [
"license:cc-by-4.0",
"region:us"
] | fnlp | null | null | null | 44 | 33 | ---
license: cc-by-4.0
---
# moss-003-sft-data
## Conversation Without Plugins
### Categories
| Category | \# samples |
|----------------------|-----------:|
| Brainstorming | 99,162 |
| Complex Instruction | 95,574 |
| Code | 198,079 |
| Role Playing | 246,375 |
| Writing | 341,087 |
| Harmless | 74,573 |
| Others | 19,701 |
| Total | 1,074,551 |
**Others** contains two categories: **Continue** (9,839) and **Switching** (9,862).
The **Continue** category refers to instances in a conversation where the user asks the system to continue outputting the response from the previous round that was not completed.
The **Switching** category refers to instances in a conversation where the user switches the language they are using.
We removed the Honesty category data because it contains private information.
|
ahazeemi/opus-medical-en-de | 2023-07-16T07:37:53.000Z | [
"task_categories:translation",
"size_categories:100K<n<1M",
"language:en",
"language:de",
"medical",
"region:us"
] | ahazeemi | null | null | null | 1 | 33 | ---
dataset_info:
features:
- name: de
dtype: string
- name: en
dtype: string
splits:
- name: train
num_bytes: 53121579
num_examples: 248099
- name: dev
num_bytes: 433240
num_examples: 2000
- name: test
num_bytes: 446369
num_examples: 2000
download_size: 35861692
dataset_size: 54001188
task_categories:
- translation
language:
- en
- de
tags:
- medical
size_categories:
- 100K<n<1M
---
# Dataset Card for "opus-medical-en-de"
This is multi-domain German-English parallel data introduced in [Aharoni and Goldberg (2020)](https://aclanthology.org/2020.acl-main.692/). It uses a new data split that avoids duplicate examples and leakage from the train split into the dev/test splits. The original multi-domain data first appeared in [Koehn and Knowles (2017)](https://www.aclweb.org/anthology/W17-3204/) and consists of five datasets available on the [Opus website](http://opus.nlpl.eu/). |
truehealth/medqa | 2023-06-12T11:22:24.000Z | [
"region:us"
] | truehealth | null | null | null | 1 | 33 | Entry not found |
RIPS-Goog-23/RVL-CDIP | 2023-06-29T06:25:59.000Z | [
"region:us"
] | RIPS-Goog-23 | null | null | null | 0 | 33 | Entry not found |
Delius/first | 2023-07-14T06:20:38.000Z | [
"size_categories:10K<n<100K",
"language:zh",
"license:apache-2.0",
"region:us"
] | Delius | null | null | null | 1 | 33 | ---
license: apache-2.0
language:
- zh
size_categories:
- 10K<n<100K
---
This dataset is intended for the text extension task (e.g., expanding a headline into a news article).
Instance:
```json
{
  "Instruction": "请将给你的文本内容扩写成新闻",
  "Input": "杨幂获杂志主编夸奖:大幂幂瘦了越来越漂亮",
  "Response": "网易娱乐4月29日报道 4月29日,时尚主编张宇晒出一组与杨幂的自拍合影,并发文:“今天与杨幂做同事,大幂幂瘦了,越来越漂亮。”照片中,杨幂托腮卖萌,瘦了不少,少女味十足。网友们也被杨幂的造型征服,纷纷表示:“太美了,看上去年轻了10岁。”"
}
```
|
CreativeLang/ukp_novel_metaphor | 2023-09-25T16:25:39.000Z | [
"region:us"
] | CreativeLang | null | null | null | 0 | 33 | ---
dataset_info:
features:
- name: id
dtype: string
- name: words
sequence: string
- name: lemmas
sequence: string
- name: poses
sequence: string
- name: metaphor_classes
sequence:
class_label:
names:
'0': '0'
'1': '1'
- name: novel_score
sequence: float64
- name: novel_metaphors
sequence:
class_label:
names:
'0': '0'
'1': '1'
splits:
- name: train
num_bytes: 10443700
num_examples: 16018
download_size: 1768297
dataset_size: 10443700
---
# Dataset Card for "ukp_novel_metaphor"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Abdelkareem/arabic_tweets_classification | 2023-07-09T10:01:29.000Z | [
"region:us"
] | Abdelkareem | null | null | null | 0 | 33 | ---
dataset_info:
features:
- name: Date
dtype: string
- name: Time
dtype: string
- name: Date Time
dtype: string
- name: URL
dtype: string
- name: Tweet Text
dtype: string
- name: Cleaned Text
dtype: string
- name: User Name
dtype: string
- name: Location
dtype: string
- name: 'Replied Tweet ID '
dtype: float64
- name: Replied Tweet User ID
dtype: float64
- name: Replied Tweet User name
dtype: string
- name: Coordinates
dtype: float64
- name: Retweet Count
dtype: float64
- name: Favorite Count
dtype: int64
- name: Favorited
dtype: string
- name: Label
dtype: string
splits:
- name: train
num_bytes: 7469621
num_examples: 13240
download_size: 3109198
dataset_size: 7469621
---
# Dataset Card for "arabic_tweets_classification"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
llm-book/aio | 2023-10-06T00:59:01.000Z | [
"region:us"
] | llm-book | null | null | null | 1 | 33 | ---
dataset_info:
features:
- name: qid
dtype: string
- name: competition
dtype: string
- name: timestamp
dtype: string
- name: section
dtype: string
- name: number
dtype: string
- name: original_question
dtype: string
- name: original_answer
dtype: string
- name: original_additional_info
dtype: string
- name: question
dtype: string
- name: answers
list: string
splits:
- name: train
num_bytes: 9464003
num_examples: 22335
- name: validation
num_bytes: 409779
num_examples: 1000
download_size: 2267163
dataset_size: 9873782
---
# Dataset Card for llm-book/aio
This is the QA dataset from the AI王 ("AI King") competition, used in the book 『大規模言語モデル入門』 (*Introduction to Large Language Models*).
It uses the dataset published on the [official AI王 page](https://sites.google.com/view/project-aio/dataset/).
## Licence
The copyright of some quiz questions included in this dataset belongs to the [abc/EQIDEN Executive Committee](https://abc-dive.com/portal/); permission has been obtained to use those questions in the book.
Some quiz questions included in this dataset were created on commission by [Qbik Inc.](http://www.qbik.co.jp/) and [Capriccio Inc.](https://capriccio.tokyo/), and are provided under the [Creative Commons Attribution-ShareAlike 4.0 (CC BY-SA 4.0)](https://creativecommons.org/licenses/by-sa/4.0/deed.ja) license.
The Wikipedia content attached to this dataset as passages is distributed under the [Creative Commons Attribution-ShareAlike 3.0 (CC BY-SA 3.0)](https://creativecommons.org/licenses/by-sa/3.0/deed.ja) license and the [GNU Free Documentation License (GFDL)](https://www.gnu.org/licenses/fdl.html).
For details on the licensing of the quiz questions, see the [official AI王 page](https://sites.google.com/view/project-aio/dataset/).
|
phunc20/raw_vnexpress | 2023-07-18T05:50:58.000Z | [
"region:us"
] | phunc20 | null | null | null | 0 | 33 | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 54727134
num_examples: 7531
download_size: 29191718
dataset_size: 54727134
---
# Dataset Card for "raw_vnexpress"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
bigcode/oasst-octopack | 2023-08-17T10:33:37.000Z | [
"arxiv:2308.07124",
"region:us"
] | bigcode | null | null | null | 3 | 33 | This is a filtered version of OASST to focus only on high-quality conversation trees as used in the [OctoPack](https://arxiv.org/abs/2308.07124) paper.
```python
from datasets import load_dataset
d = load_dataset("bigcode/oasst-octopack")["train"]
``` |
tjaffri/NSText2SQL-generate | 2023-08-15T00:33:59.000Z | [
"license:apache-2.0",
"region:us"
] | tjaffri | null | null | null | 0 | 33 | ---
license: apache-2.0
dataset_info:
features:
- name: question
dtype: string
- name: table_info
dtype: string
- name: sql_query
dtype: string
splits:
- name: train
num_bytes: 847766
num_examples: 3473
download_size: 391731
dataset_size: 847766
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# NSText2SQL Dataset (Reformatted for Fine Tuned Generative Models)
This is the exact same dataset as NSText2SQL: https://huggingface.co/datasets/NumbersStation/NSText2SQL, but with the data reformatted to allow direct use to fine tune generative models. The original license and credits for the original dataset remain in place.
Specifically, the changes from standard NSText2SQL are:
1. Removed non-english questions
2. Removed all rows with more than one input table, simplifying the problem for smaller models.
3. Updated SQL queries in the dataset to prefer using LIKE statements for string matches, to allow better partial matching of results in chat scenarios where a user may not fully specify all data.
4. Removed syntactically invalid SQL. Specifically, we created in-memory (SQLite) tables using the SQL DESCRIBE of the tables, then ran the SQL query against these in-memory tables. Any SQL queries that threw exceptions for any reason were discarded, and the rest that ran without exceptions were included in this dataset.
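The validity check in step 4 can be sketched with Python's built-in `sqlite3` module (the function name and the exact error handling here are assumptions, not the pipeline's actual code):

```python
import sqlite3

def runs_without_error(table_ddl, query):
    """Build in-memory tables from DDL statements, then check whether
    `query` executes against them without raising an exception."""
    conn = sqlite3.connect(":memory:")
    try:
        for ddl in table_ddl:
            conn.execute(ddl)
        conn.execute(query)
        return True
    except sqlite3.Error:
        return False
    finally:
        conn.close()
```

Rows whose `sql_query` fails this check against their `table_info` DDL would be the ones discarded.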
|
allenai/objaverse-xl | 2023-09-26T13:54:17.000Z | [
"language:en",
"license:odc-by",
"arxiv:2307.05663",
"region:us"
] | allenai | null | null | null | 26 | 33 | ---
license: odc-by
language:
- en
viewer: false
---
# Objaverse-XL
<a href="//arxiv.org/abs/2307.05663" target="_blank">
<img src="https://img.shields.io/badge/arXiv-2307.05663-<COLOR>">
</a>
Objaverse-XL is an open dataset of over 10 million 3D objects!
With it, we train Zero123-XL, a foundation model for 3D, observing incredible 3D generalization abilities: 🧵👇
<img src="https://mattdeitke.com/static/1cdcdb2ef7033e177ca9ae2975a9b451/9c1ca/objaverse-xl.webp">
## Scale Comparison
Objaverse 1.0 was released back in December. It was a step in the right direction, but still relatively small with 800K objects.
Objaverse-XL is over an order of magnitude larger and much more diverse!
<img src="https://github.com/allenai/objaverse-rendering/assets/28768645/43833dd3-ec97-4a3d-8782-00a6aea584b4">
## Unlocking Generalization
Compared to the original Zero123 model, Zero123-XL improves remarkably in 0-shot generalization abilities, even being able to perform novel view synthesis on sketches, cartoons, and people!
A ton more examples in the [📝 paper](https://arxiv.org/abs/2307.05663) :)
<img src="https://github.com/allenai/objaverse-rendering/assets/28768645/8470e4df-e39d-444b-9871-58fbee4b87fd">
## Image → 3D
With the base Zero123-XL foundation model, we can perform image → 3D using [DreamFusion](https://dreamfusion3d.github.io/), having the model guide a NeRF to generate novel views!
<video autoplay muted loop controls>
<source src="https://github.com/allenai/objaverse-rendering/assets/28768645/571852cd-dc02-46ce-b2bb-88f64a67d0ac" type="video/mp4">
</video>
## Text → 3D
Text-to-3D comes for free with text → image models, such as with SDXL here, providing the initial image!
<video autoplay muted loop controls>
<source src="https://github.com/allenai/objaverse-rendering/assets/28768645/96255b42-8158-4c7a-8308-7b0f1257ada8" type="video/mp4">
</video>
## Scaling Trends
Beyond that, we show strong scaling trends for both Zero123-XL and [PixelNeRF](https://alexyu.net/pixelnerf/)!
<img src="https://github.com/allenai/objaverse-rendering/assets/28768645/0c8bb433-27df-43a1-8cb8-1772007c0899">
## Tutorial
Check out the [Google Colab tutorial](https://colab.research.google.com/drive/15XpZMjrHXuky0IgBbXcsUtb_0g-XWYmN?usp=sharing) to download Objaverse-XL.
Polycam data is made available to academic researchers for non-commercial use upon request and approval from Polycam. To request access, please fill out [this form](https://forms.gle/HUjYVtS9GKVS5QBXA).
## License
The use of the dataset as a whole is licensed under the ODC-By v1.0 license. Individual objects in Objaverse-XL are licensed under different licenses.
## Citation
To cite Objaverse-XL, please cite our [📝 arXiv](https://arxiv.org/abs/2307.05663) paper with the following BibTeX entry:
```bibtex
@article{objaverseXL,
title={Objaverse-XL: A Universe of 10M+ 3D Objects},
author={Matt Deitke and Ruoshi Liu and Matthew Wallingford and Huong Ngo and
Oscar Michel and Aditya Kusupati and Alan Fan and Christian Laforte and
Vikram Voleti and Samir Yitzhak Gadre and Eli VanderBilt and
Aniruddha Kembhavi and Carl Vondrick and Georgia Gkioxari and
Kiana Ehsani and Ludwig Schmidt and Ali Farhadi},
journal={arXiv preprint arXiv:2307.05663},
year={2023}
}
```
Objaverse 1.0 is available on 🤗Hugging Face at [@allenai/objaverse](https://huggingface.co/datasets/allenai/objaverse). To cite it, use:
```bibtex
@article{objaverse,
title={Objaverse: A Universe of Annotated 3D Objects},
author={Matt Deitke and Dustin Schwenk and Jordi Salvador and Luca Weihs and
Oscar Michel and Eli VanderBilt and Ludwig Schmidt and
Kiana Ehsani and Aniruddha Kembhavi and Ali Farhadi},
journal={arXiv preprint arXiv:2212.08051},
year={2022}
}
```
|
mediabiasgroup/BABE-v3 | 2023-08-23T05:37:34.000Z | [
"license:cc-by-nc-sa-4.0",
"region:us"
] | mediabiasgroup | null | null | null | 0 | 33 | ---
license: cc-by-nc-sa-4.0
---
The original BABE dataset enriched with sentences from two annotation rounds: the NewsUnfold project and the Media Bias Game project.
# Please cite as
```
@InProceedings{Spinde2021f,
title = "Neural Media Bias Detection Using Distant Supervision With {BABE} - Bias Annotations By Experts",
author = "Spinde, Timo and
Plank, Manuel and
Krieger, Jan-David and
Ruas, Terry and
Gipp, Bela and
Aizawa, Akiko",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2021",
month = nov,
year = "2021",
address = "Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.findings-emnlp.101",
doi = "10.18653/v1/2021.findings-emnlp.101",
pages = "1166--1177",
}
``` |
iason-consulting/ADEClassificationDataset_augmented | 2023-08-28T05:17:07.000Z | [
"region:us"
] | iason-consulting | null | null | null | 0 | 33 | ---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype: int64
splits:
- name: train
num_bytes: 9744253
num_examples: 37104
download_size: 4689669
dataset_size: 9744253
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "ADEClassificationDataset_augmented"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
quocanh34/soict_train_dataset | 2023-08-25T15:45:58.000Z | [
"region:us"
] | quocanh34 | null | null | null | 0 | 33 | ---
dataset_info:
features:
- name: id
dtype: string
- name: sentence
dtype: string
- name: intent
dtype: string
- name: sentence_annotation
dtype: string
- name: entities
list:
- name: type
dtype: string
- name: filler
dtype: string
- name: file
dtype: string
- name: audio
struct:
- name: array
sequence: float64
- name: path
dtype: string
- name: sampling_rate
dtype: int64
- name: origin_transcription
dtype: string
- name: sentence_norm
dtype: string
splits:
- name: train
num_bytes: 3486818827.8416476
num_examples: 6729
- name: test
num_bytes: 387597040.15835226
num_examples: 748
download_size: 918573512
dataset_size: 3874415868.0
---
# Dataset Card for "asr_spoken_norm_train_data"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
cestwc/concise536 | 2023-08-29T15:13:43.000Z | [
"region:us"
] | cestwc | null | null | null | 0 | 33 | ---
configs:
- config_name: default
data_files:
- split: validation
path: data/validation-*
- split: test
path: data/test-*
dataset_info:
features:
- name: id
dtype: int64
- name: cite
dtype: string
- name: wordy
dtype: string
- name: concise
sequence: string
- name: category
dtype: string
- name: link
dtype: string
- name: delete
dtype:
class_label:
names:
'0': not required
'1': required
- name: replace
dtype:
class_label:
names:
'0': not required
'1': required
- name: rewrite
dtype:
class_label:
names:
'0': not required
'1': required
splits:
- name: validation
num_bytes: 3692
num_examples: 14
- name: test
num_bytes: 161635
num_examples: 536
download_size: 79866
dataset_size: 165327
---
# Dataset Card for "concise536"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
simlamkr1/train-dataset-sim001 | 2023-09-03T15:28:09.000Z | [
"license:other",
"region:us"
] | simlamkr1 | null | null | null | 0 | 33 | ---
license: other
---
|
huangyt/FINETUNE2 | 2023-09-01T09:40:00.000Z | [
"license:openrail",
"region:us"
] | huangyt | null | null | null | 0 | 33 | ---
license: openrail
---
# 📔 **DATASET**
| **Dataset** | Class | Number of Questions |
| ------- | ----------------------------------------------------------------- | ------------------------ |
| **FLAN_CoT(zs)** | Reasoning 、 MATH 、 ScienceQA 、 Commonsense | 8000 |
| **Prm800k** | Reasoning 、 MATH | 6713 |
| **ScienceQA** | ScienceQA | 5177 |
| **SciBench** | ScienceQA | 695 |
| **ReClor** | Reasoning | 1624 |
| **TheoremQA** | Commonsense 、 MATH 、 ScienceQA | 800 |
| **OpenBookQA** | Text_Understanding 、 Reasoning 、 Commonsense 、 ScienceQA | 5957 |
| **ARB** | Reasoning 、 MATH 、 ScienceQA 、 Commonsense 、 Text_Understanding | 605 |
| **Openassistant-guanaco** | Commonsense 、 Text_Understanding 、 Reasoning | 802 |
# 📌 **Method**
## *Improving the dataset*
In addition to evaluating based on the finetune1 score, we previously found that fine-tuning on the 50,000- and 5,000-sample subsets of 'dolphin' and 'openorca' produced better scores than the full 200,000-sample dataset. Therefore, we plan to prioritize testing with a smaller yet high-quality dataset. We will use the 'platypus' dataset, combine it with 'cot', apply stratified sampling, and optimize the dataset based on output length.
## *Dataset Format Definition*
Use "instruction、input、output" tend to lean towards guided datasets. In this format, each sample includes an instruction, an input, and an expected output. The instruction provides guidance on how to process the input to generate the output. This format of dataset is often used to train models to perform specific tasks, as they explicitly indicate the operations the model should perform.
```
{
"input": "",
"output": "",
"instruction": ""
}
```
- ### [FLAN_V2 COT(ZS)](https://huggingface.co/datasets/conceptofmind/cot_submix_original/tree/main)
We extract only the 'zs_opt' samples from CoT and categorize each task.
- ### [OTHER](https://github.com/arielnlee/Platypus/tree/main/data_pipeline)
Prm800k, ScienceQA, SciBench, ReClor, TheoremQA, OpenBookQA, ARB, and OpenAssistant-Guanaco datasets adopt the same format as Platypus.
## *Sampling Algorithms*
Since the flan_v2 cot dataset includes tasks like:
- cot_esnli
- cot_strategyqa
- cot_qasc
- stream_qed
- cot_gsm8k
- cot_ecqa
- cot_creak
- stream_aqua
To ensure this dataset contains diverse, high-quality data, we first select the zs_opt questions. Then we keep only those whose output length is at or above the average, which helps the model learn richer reasoning steps. After that, we perform stratified sampling. Initially, we attempted stratified sampling before the length-based filtering, but found that this produced varying sample sizes and was difficult to reproduce, so we decided to filter by length first and then perform stratified sampling.
```py
import json
import random
with open("cot_ORIGINAL.json", "r") as f:
abc = json.load(f)
# --- part1 ---
zsopt_data = [] # "zs_opt"
for i in abc :
if i["template_type"] == "zs_opt":
zsopt_data.append(i)
# --- part2 ---
output_lengths = [len(i["targets"]) for i in zsopt_data]
average_length = sum(output_lengths) / len(output_lengths) # average length
filtered_data = []
for a in zsopt_data:
if len(a["targets"]) >= average_length:
filtered_data.append(a) # output length need to >= average_length
class_counts = {} # Count the number of samples for each class
for a in filtered_data:
task_name = a["task_name"]
if task_name in class_counts:
class_counts[task_name] += 1
else:
class_counts[task_name] = 1
# --- part3 ---
total_samples = 8000 # we plan to select a total of 8000 samples
sample_ratios = {}
for task_name, count in class_counts.items():
sample_ratios[task_name] = count / len(filtered_data)
sample_sizes = {}
for task_name, sample_ratio in sample_ratios.items():
sample_sizes[task_name] = round(sample_ratio * total_samples)
stratified_samples = {} # Perform stratified sampling for each class
for task_name, sample_size in sample_sizes.items():
class_samples = []
for data in filtered_data:
if data["task_name"] == task_name:
class_samples.append(data)
selected_samples = random.sample(class_samples, sample_size)
stratified_samples[task_name] = selected_samples
final_samples = [] # Convert to the specified format
for task_name, samples in stratified_samples.items():
for sample in samples:
final_samples.append(
{
"input": "", # use ""
"output": sample["targets"], # output
"instruction": sample["inputs"], # question
}
)
with open("cot_change.json", "w") as f:
json.dump(final_samples, f, indent=2)
```
# 🏁 **Future Work**
Under discussion... |
zelros/insurance-fr | 2023-10-01T14:10:54.000Z | [
"insurance",
"region:us"
] | zelros | null | null | null | 0 | 33 | ---
tags:
- insurance
---
This dataset contains question/answer pairs about a French home insurance product (MRH: Multi Risque Habitation).
It was built by structuring the following open sources:
- https://www.mma.fr/assurance-habitation.html
- https://cap.mma.fr/files/live/sites/mmafr/files/documents-cg/cg410/Habitation_MMA_410p.pdf
The goal of this dataset is to contribute to open-source research work aiming, for example, to:
* fine-tune LLMs on high-quality datasets to specialize them for the insurance domain
* develop new question-answering applications based on Retrieval Augmented Generation (RAG) over insurance contracts
* evaluate language models' knowledge of the insurance domain
* more generally, apply LLMs to the insurance domain for better understanding and greater transparency of this industry.
Other datasets of the same type (covering other kinds of insurance, other languages, or other sources) are also available, or will be made available, as part of this research effort. |