id stringlengths 2 115 | lastModified stringlengths 24 24 | tags list | author stringlengths 2 42 ⌀ | description stringlengths 0 68.7k ⌀ | citation stringlengths 0 10.7k ⌀ | cardData null | likes int64 0 3.55k | downloads int64 0 10.1M | card stringlengths 0 1.01M |
|---|---|---|---|---|---|---|---|---|---|
webis/Touche23-ValueEval | 2023-05-23T20:19:40.000Z | [
"task_categories:text-classification",
"task_categories:zero-shot-classification",
"task_ids:multi-label-classification",
"size_categories:1K<n<10K",
"language:en",
"license:cc-by-4.0",
"Human Values",
"Value Detection",
"Multi-Label",
"region:us"
] | webis | Dataset for Touché / SemEval 2023 Task 4; ValueEval: Identification of Human Values behind Arguments:
https://www.overleaf.com/6679855346wrdckzkdccxg
Based on the original Webis-ArgValues-22 (https://doi.org/10.5281/zenodo.5657249) dataset accompanying the paper
Identifying the Human Values behind Arguments (Kiesel et al. 2022b; https://webis.de/publications.html#kiesel_2022b),
published at ACL'22. | @Article{mirzakhmedova:2023a,
  author    = {Nailia Mirzakhmedova and Johannes Kiesel and Milad Alshomary and Maximilian Heinrich and Nicolas Handke and Xiaoni Cai and Valentin Barriere and Doratossadat Dastgheib and Omid Ghahroodi and {Mohammad Ali} Sadraei and Ehsaneddin Asgari and Lea Kawaletz and Henning Wachsmuth and Benno Stein},
doi = {10.48550/arXiv.2301.13771},
journal = {CoRR},
month = jan,
publisher = {arXiv},
title = {{The Touch{\'e}23-ValueEval Dataset for Identifying Human Values behind Arguments}},
volume = {abs/2301.13771},
year = 2023
} | null | 3 | 67 | ---
license: cc-by-4.0
task_categories:
- text-classification
- zero-shot-classification
task_ids:
- multi-label-classification
language:
- en
tags:
- Human Values
- Value Detection
- Multi-Label
pretty_name: Human Value Detection Dataset
size_categories:
- 1K<n<10K
---
# The Touché23-ValueEval Dataset
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Dataset Usage](#dataset-usage)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Argument Instances](#argument-instances)
- [Metadata Instances](#metadata-instances)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** [https://webis.de/data/touche23-valueeval.html](https://webis.de/data/touche23-valueeval.html)
- **Repository:** [Zenodo](https://doi.org/10.5281/zenodo.6814563)
- **Paper:** [The Touché23-ValueEval Dataset for Identifying Human Values behind Arguments.](https://webis.de/downloads/publications/papers/mirzakhmedova_2023a.pdf)
- **Leaderboard:** [https://touche.webis.de/](https://touche.webis.de/semeval23/touche23-web/index.html#results)
- **Point of Contact:** [Webis Group](https://webis.de/people.html)
### Dataset Summary
The Touché23-ValueEval Dataset comprises 9324 arguments from six different sources. An argument's source is indicated by the first letter of its `Argument ID`:
- `A`: [IBM-ArgQ-Rank-30kArgs](https://research.ibm.com/haifa/dept/vst/debating_data.shtml#Argument%20Quality)
- `C`: the Chinese question-answering website [Zhihu](https://www.zhihu.com)
- `D`: [Group Discussion Ideas (GD IDEAS)](https://www.groupdiscussionideas.com)
- `E`: [The Conference for the Future of Europe](https://futureu.europa.eu)
- `F`: contributed by the language.ml lab (Doratossadat, Omid, Mohammad, Ehsaneddin) [1]:
  arguments from the "Nahj al-Balagha" [2] and "Ghurar al-Hikam wa Durar al-Kalim" [3]
- `G`: [The New York Times](https://www.nytimes.com)
The annotated labels are based on the value taxonomy published in
[Identifying the Human Values behind Arguments](https://webis.de/publications.html#kiesel_2022b) (Kiesel et al. 2022) at ACL'22.
[1] https://language.ml
[2] https://en.wikipedia.org/wiki/Nahj_al-Balagha
[3] https://en.wikipedia.org/wiki/Ghurar_al-Hikam_wa_Durar_al-Kalim
### Dataset Usage
The default configuration name is `main`.
```python
from datasets import load_dataset
dataset = load_dataset("webis/Touche23-ValueEval")
print(dataset['train'].info.description)
for argument in iter(dataset['train']):
print(f"{argument['Argument ID']}: {argument['Stance']} '{argument['Conclusion']}': {argument['Premise']}")
```
### Supported Tasks and Leaderboards
Human Value Detection
### Languages
The [Argument Instances](#argument-instances) are monolingual; they only include English (mostly en-US) documents.
The [Metadata Instances](#metadata-instances) for some dataset parts additionally state the arguments in their original language and phrasing.
## Dataset Structure
### Argument Instances
Each argument instance has the following attributes:
- `Argument ID`: The unique identifier for the argument within the dataset
- `Conclusion`: Conclusion text of the argument
- `Stance`: Stance of the `Premise` towards the `Conclusion`; one of "in favor of", "against"
- `Premise`: Premise text of the argument
- `Labels`: The `Labels` for each example is an array of 1s (argument resorts to value) and 0s (argument does not resort to value). The order is the same as in the original files.
Additionally, the labels are separated into *value-categories*, aka. level 2 labels of the value taxonomy (Kiesel et al. 2022b), and *human values*, aka. level 1 labels of the value taxonomy.
This distinction is also reflected in the configuration names:
- `<config>`: As the [Task](https://touche.webis.de/semeval23/touche23-web/) is focused mainly on the detection of value-categories,
each base configuration ([listed below](#p-list-base-configs)) has the 20 value-categories as labels:
```python
labels = ["Self-direction: thought", "Self-direction: action", "Stimulation", "Hedonism", "Achievement", "Power: dominance", "Power: resources", "Face", "Security: personal", "Security: societal", "Tradition", "Conformity: rules", "Conformity: interpersonal", "Humility", "Benevolence: caring", "Benevolence: dependability", "Universalism: concern", "Universalism: nature", "Universalism: tolerance", "Universalism: objectivity"]
```
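As a minimal sketch (using a toy label vector instead of a downloaded example), the 0/1 `Labels` array can be mapped back to value-category names like this:

```python
# The 20 value categories, in the same order as the `Labels` arrays.
VALUE_CATEGORIES = [
    "Self-direction: thought", "Self-direction: action", "Stimulation",
    "Hedonism", "Achievement", "Power: dominance", "Power: resources",
    "Face", "Security: personal", "Security: societal", "Tradition",
    "Conformity: rules", "Conformity: interpersonal", "Humility",
    "Benevolence: caring", "Benevolence: dependability",
    "Universalism: concern", "Universalism: nature",
    "Universalism: tolerance", "Universalism: objectivity",
]

def active_categories(labels):
    """Return the names of all value categories flagged with 1."""
    return [name for name, flag in zip(VALUE_CATEGORIES, labels) if flag == 1]

# Toy example: an argument resorting to "Achievement" and "Universalism: concern".
example_labels = [0] * 20
example_labels[4] = 1
example_labels[16] = 1
print(active_categories(example_labels))  # ['Achievement', 'Universalism: concern']
```

The same function can be applied to the `Labels` field of any loaded argument instance.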
- `<config>-level1`: The 54 human values from level 1 of the value taxonomy are not used for the 2023 task
  (except for the annotation), but are still listed here, as some might find them useful for understanding the value
  categories. Their order is also the same as in the original files. For more details see the [value-categories](#metadata-instances) configuration.
<p id="p-list-base-configs">The configuration names (as replacements for <code><config></code>) in this dataset are:</p>
- `main`: 8865 arguments (sources: `A`, `D`, `E`) with splits `train`, `validation`, and `test` (default configuration name)
```python
dataset_main_train = load_dataset("webis/Touche23-ValueEval", split="train")
dataset_main_validation = load_dataset("webis/Touche23-ValueEval", split="validation")
dataset_main_test = load_dataset("webis/Touche23-ValueEval", split="test")
```
- `nahjalbalagha`: 279 arguments (source: `F`) with split `test`
```python
dataset_nahjalbalagha_test = load_dataset("webis/Touche23-ValueEval", name="nahjalbalagha", split="test")
```
- `nyt`: 80 arguments (source: `G`) with split `test`
```python
dataset_nyt_test = load_dataset("webis/Touche23-ValueEval", name="nyt", split="test")
```
- `zhihu`: 100 arguments (source: `C`) with split `validation`
```python
dataset_zhihu_validation = load_dataset("webis/Touche23-ValueEval", name="zhihu", split="validation")
```
Please note that, for copyright reasons, there is currently no direct download link to the arguments contained in the
New York Times
dataset. Accessing any of the `nyt` or `nyt-level1` configurations will therefore use the specifically created
[nyt-downloader program](https://github.com/touche-webis-de/touche-code/tree/main/semeval23/human-value-detection/nyt-downloader)
to create and access the arguments locally. See the program's
[README](https://github.com/touche-webis-de/touche-code/blob/main/semeval23/human-value-detection/nyt-downloader/README.md)
for further details.
### Metadata Instances
The following lists all configuration names for metadata. Each configuration only has a single split named `meta`.
- `ibm-meta`: Each row corresponds to one argument (IDs starting with `A`) from the [IBM-ArgQ-Rank-30kArgs](https://research.ibm.com/haifa/dept/vst/debating_data.shtml#Argument%20Quality)
- `Argument ID`: The unique identifier for the argument
- `WA`: the quality label according to the weighted-average scoring function
- `MACE-P`: the quality label according to the MACE-P scoring function
- `stance_WA`: the stance label according to the weighted-average scoring function
- `stance_WA_conf`: the confidence in the stance label according to the weighted-average scoring function
```python
dataset_ibm_metadata = load_dataset("webis/Touche23-ValueEval", name="ibm-meta", split="meta")
```
- `zhihu-meta`: Each row corresponds to one argument (IDs starting with `C`) from the Chinese question-answering website [Zhihu](https://www.zhihu.com)
- `Argument ID`: The unique identifier for the argument
  - `Conclusion Chinese`: The original Chinese conclusion statement
  - `Premise Chinese`: The original Chinese premise statement
- `URL`: Link to the original statement the argument was taken from
```python
dataset_zhihu_metadata = load_dataset("webis/Touche23-ValueEval", name="zhihu-meta", split="meta")
```
- `gdi-meta`: Each row corresponds to one argument (IDs starting with `D`) from [GD IDEAS](https://www.groupdiscussionideas.com/)
- `Argument ID`: The unique identifier for the argument
- `URL`: Link to the topic the argument was taken from
```python
dataset_gdi_metadata = load_dataset("webis/Touche23-ValueEval", name="gdi-meta", split="meta")
```
- `cofe-meta`: Each row corresponds to one argument (IDs starting with `E`) from [the Conference for the Future of Europe](https://futureu.europa.eu)
- `Argument ID`: The unique identifier for the argument
- `URL`: Link to the comment the argument was taken from
```python
dataset_cofe_metadata = load_dataset("webis/Touche23-ValueEval", name="cofe-meta", split="meta")
```
- `nahjalbalagha-meta`: Each row corresponds to one argument (IDs starting with `F`). This file contains information on the 279 arguments in `nahjalbalagha` (or `nahjalbalagha-level1`)
and 1047 additional arguments that were not labeled so far. This data was contributed by the language.ml lab.
- `Argument ID`: The unique identifier for the argument
- `Conclusion Farsi`: Conclusion text of the argument in Farsi
- `Stance Farsi`: Stance of the `Premise` towards the `Conclusion`, in Farsi
- `Premise Farsi`: Premise text of the argument in Farsi
- `Conclusion English`: Conclusion text of the argument in English (translated from Farsi)
- `Stance English`: Stance of the `Premise` towards the `Conclusion`; one of "in favor of", "against"
- `Premise English`: Premise text of the argument in English (translated from Farsi)
  - `Source`: Source text of the argument; one of "Nahj al-Balagha", "Ghurar al-Hikam wa Durar al-Kalim"; their Farsi translations were used
  - `Method`: How the premise was extracted from the source; one of "extracted" (directly taken), "deduced"; the conclusions are always deduced
```python
dataset_nahjalbalagha_metadata = load_dataset("webis/Touche23-ValueEval", name="nahjalbalagha-meta", split="meta")
```
- `nyt-meta`: Each row corresponds to one argument (IDs starting with `G`) from [The New York Times](https://www.nytimes.com)
- `Argument ID`: The unique identifier for the argument
- `URL`: Link to the article the argument was taken from
- `Internet Archive timestamp`: Timestamp of the article's version in the Internet Archive that was used
```python
dataset_nyt_metadata = load_dataset("webis/Touche23-ValueEval", name="nyt-meta", split="meta")
```
- `value-categories`: Contains a single JSON-entry with the structure of level 2 and level 1 values regarding the value taxonomy:
```
{
"<value category>": {
"<level 1 value>": [
"<exemplary effect a corresponding argument might target>",
...
], ...
}, ...
}
```
As this configuration contains just a single entry, an example usage could be:
```python
value_categories = load_dataset("webis/Touche23-ValueEval", name="value-categories", split="meta")[0]
```
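Since the entry is a plain nested dictionary, it can be traversed directly. The sketch below uses a toy dictionary mirroring the schema (the value names and effects shown are placeholders, not the real taxonomy entries):

```python
# Toy stand-in for the single `value-categories` entry (same nesting as the schema).
value_categories = {
    "Security: personal": {
        "example level 1 value": [
            "example effect one a corresponding argument might target",
            "example effect two a corresponding argument might target",
        ],
    },
}

def iter_taxonomy(categories):
    """Yield (value category, level-1 value, exemplary effect) triples."""
    for category, level1_values in categories.items():
        for value, effects in level1_values.items():
            for effect in effects:
                yield category, value, effect

triples = list(iter_taxonomy(value_categories))
print(len(triples))  # 2
```

The same traversal works on the actual entry returned by the `value-categories` configuration.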
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[Creative Commons Attribution 4.0 International (CC BY 4.0)](https://creativecommons.org/licenses/by/4.0/)
### Citation Information
```
@Article{mirzakhmedova:2023a,
author = {Nailia Mirzakhmedova and Johannes Kiesel and Milad Alshomary and Maximilian Heinrich and Nicolas Handke\
and Xiaoni Cai and Valentin Barriere and Doratossadat Dastgheib and Omid Ghahroodi and {Mohammad Ali} Sadraei\
and Ehsaneddin Asgari and Lea Kawaletz and Henning Wachsmuth and Benno Stein},
doi = {10.48550/arXiv.2301.13771},
journal = {CoRR},
month = jan,
publisher = {arXiv},
title = {{The Touch{\'e}23-ValueEval Dataset for Identifying Human Values behind Arguments}},
volume = {abs/2301.13771},
year = 2023
}
``` |
mstz/pums | 2023-04-18T07:42:19.000Z | [
"task_categories:tabular-classification",
"language:en",
"pums",
"tabular_classification",
"binary_classification",
"UCI",
"region:us"
] | mstz | null | @misc{misc_us_census_data_(1990)_116,
  author       = {Meek, Christopher and Thiesson, Bo and Heckerman, David},
title = {{US Census Data (1990)}},
howpublished = {UCI Machine Learning Repository},
  note         = {{DOI}: \url{10.24432/C5VP42}}
} | null | 0 | 67 | ---
language:
- en
tags:
- pums
- tabular_classification
- binary_classification
- UCI
pretty_name: Ipums
task_categories: # Full list at https://github.com/huggingface/hub-docs/blob/main/js/src/lib/interfaces/Types.ts
- tabular-classification
configs:
- pums
---
# Pums
The [Pums dataset](https://archive-beta.ics.uci.edu/dataset/116/us+census+data+1990) from the [UCI repository](https://archive-beta.ics.uci.edu/).
A U.S. Census dataset; the task is to classify the individual's income.
# Configurations and tasks
| **Configuration** | **Task** |
|-----------------------|---------------------------|
| pums                  | Binary classification.    | |
gauss314/bitcoin_daily | 2023-07-30T02:20:32.000Z | [
"task_categories:tabular-regression",
"task_categories:tabular-classification",
"size_categories:1K<n<10K",
"license:gpl-3.0",
"bitcoin",
"cryptocurrencies",
"crypto",
"region:us"
] | gauss314 | null | null | null | 2 | 67 | ---
license: gpl-3.0
task_categories:
- tabular-regression
- tabular-classification
tags:
- bitcoin
- cryptocurrencies
- crypto
size_categories:
- 1K<n<10K
--- |
Patt/ReCoRD_TH_drop | 2023-07-20T15:29:42.000Z | [
"task_categories:text-classification",
"language:en",
"language:th",
"arxiv:1907.04307",
"region:us"
] | Patt | null | null | null | 0 | 67 | ---
task_categories:
- text-classification
language:
- en
- th
dataset_info:
features:
- name: passage
dtype: string
- name: passage_TH
dtype: string
- name: query
dtype: string
- name: query_TH
dtype: string
- name: entities
sequence: string
- name: entities_TH
sequence: string
- name: entity_spans
struct:
- name: end
sequence: int64
- name: start
sequence: int64
- name: text
sequence: string
- name: answers
sequence: string
- name: answers_TH
sequence: string
- name: idx
struct:
- name: passage
dtype: int64
- name: query
dtype: int64
- name: score_passage
dtype: float64
- name: score_query
dtype: float64
- name: score_entities
dtype: float64
- name: score_answers
dtype: float64
splits:
- name: train
num_bytes: 281547282
num_examples: 57811
- name: validation
num_bytes: 32258456
num_examples: 6676
download_size: 112999233
dataset_size: 313805738
---
# Dataset Card for ReCoRD_TH_drop
### Dataset Description
This dataset is a Thai-translated version of [ReCoRD](https://huggingface.co/datasets/super_glue/viewer/record), produced with Google Translate and scored with the [Multilingual Universal Sentence Encoder](https://arxiv.org/abs/1907.04307) to assess the quality of each Thai translation.
Every row with `score_answers` < 0.8, and every row with a score < 0.5 after penalty, was dropped. |
rusheeliyer/uk-abs | 2023-08-11T16:40:28.000Z | [
"region:us"
] | rusheeliyer | null | null | null | 0 | 67 | ---
dataset_info:
features:
- name: judgement
dtype: string
- name: summary
dtype: string
splits:
- name: train
num_bytes: 52800141
num_examples: 589
- name: test
num_bytes: 8174530
num_examples: 100
- name: validation
num_bytes: 10432092
num_examples: 104
download_size: 32973908
dataset_size: 71406763
---
# Dataset Card for "uk-abs"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
morpheuslord/cve-llm-training | 2023-08-30T06:26:48.000Z | [
"task_categories:text-generation",
"language:en",
"license:apache-2.0",
"code",
"region:us"
] | morpheuslord | null | null | null | 4 | 67 | ---
license: apache-2.0
task_categories:
- text-generation
language:
- en
tags:
- code
pretty_name: CVE-LLm-Dataset
data_source: Custom data collected from the CVE database
data_formats: JSONL
---
# CVE-llm_dataset
This dataset is intended to train an LLM for entirely CVE-focused inputs and outputs.
## Data extraction:
For the data extraction, I first downloaded the CVE database from the NVD lists and then loaded it using `cve_dataset_2.py` and `cve_dataset.py`. The two scripts produce different datasets: one for LLaMA and the other for OpenAI GPT.
The CVE JSON files are organized in this layout:
```
cves:
|
├─1999
| ├─0xxx
| | ├─CVE-1999-0001.json
| | ├─....
| | └─CVE-1999-0999.json
| └─1xxx
| ├─CVE-1999-1000.json
| ├─....
| └─CVE-1999-1598.json
└─2023
```
The programs traverse these folders, extract the data from the files, and arrange it into usable formats for the fine-tuning process.
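A minimal sketch of that traversal (the exact field handling in `cve_dataset.py` may differ) could look like:

```python
import json
from pathlib import Path

def load_cve_records(root="cves"):
    """Recursively collect every CVE-*.json file under the year/0xxx folders."""
    records = []
    for path in sorted(Path(root).rglob("CVE-*.json")):
        with open(path, encoding="utf-8") as f:
            records.append(json.load(f))
    return records
```

`Path.rglob` handles the nested `year/0xxx` structure without hard-coding the folder names.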
## llama2 Model dataset:
The llama2 fine-tuned dataset follows this format:
```
{
"instruction": "Explain CVE-1999-0001",
"input": "Explain the vulnerability: CVE-1999-0001",
"output": "ip_input.c in BSD-derived TCP/IP implementations allows remote attackers to cause a denial of service (crash or hang) via crafted packets.\nAffected Products: n/a\nReferences: [{'tags': ['x_refsource_CONFIRM'], 'url': 'http://www.openbsd.org/errata23.html#tcpfix'}, {'name': '5707', 'tags': ['vdb-entry', 'x_refsource_OSVDB'], 'url': 'http://www.osvdb.org/5707'}]\nCVE State: PUBLISHED"
}
```
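Assembling such a record from a parsed CVE entry might look like the following sketch (the helper and its parameters are illustrative, not the actual script's API):

```python
def make_llama_record(cve_id, description, state="PUBLISHED"):
    """Assemble one instruction/input/output training example for a CVE."""
    return {
        "instruction": f"Explain {cve_id}",
        "input": f"Explain the vulnerability: {cve_id}",
        "output": f"{description}\nCVE State: {state}",
    }

record = make_llama_record(
    "CVE-1999-0001",
    "ip_input.c in BSD-derived TCP/IP implementations allows remote "
    "attackers to cause a denial of service via crafted packets.",
)
print(record["instruction"])  # Explain CVE-1999-0001
```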
The instruction tells the AI what to do with the provided data. For example, we can command the AI to take in user input, analyze it, and return an answer based on what the user asks. This is also where we can give the AI a `role` or a `persona`.
The input is the user's main query or the data that must be processed by the AI. This is the crucial piece of information that the AI processes in order to provide an output.
The output is the format we define, telling the AI to generate its answers in that format, or the answer itself to the question asked. |
pharaouk/codellama_blob_1 | 2023-08-24T22:30:45.000Z | [
"region:us"
] | pharaouk | null | null | null | 0 | 67 | Entry not found |
boapps/kmdb_classification | 2023-09-21T11:43:34.000Z | [
"region:us"
] | boapps | null | null | null | 0 | 67 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
dataset_info:
features:
- name: text
dtype: string
- name: title
dtype: string
- name: description
dtype: string
- name: keywords
sequence: string
- name: label
dtype: int64
- name: url
dtype: string
- name: date
dtype: string
- name: is_hand_annoted
dtype: bool
- name: score
dtype: float64
- name: title_score
dtype: float64
splits:
- name: train
num_bytes: 187493981
num_examples: 45683
- name: test
num_bytes: 13542701
num_examples: 3605
- name: validation
num_bytes: 25309037
num_examples: 6579
download_size: 139938458
dataset_size: 226345719
---
# Dataset Card for "kmdb_classification"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Trelis/tiny-shakespeare | 2023-09-06T16:27:30.000Z | [
"task_categories:text-generation",
"size_categories:n<1K",
"language:en",
"fine-tuning",
"shakespeare",
"region:us"
] | Trelis | null | null | null | 0 | 67 | ---
task_categories:
- text-generation
language:
- en
tags:
- fine-tuning
- shakespeare
size_categories:
- n<1K
---
# Data source
Downloaded via Andrej Karpathy's nanogpt repo from this [link](https://raw.githubusercontent.com/karpathy/char-rnn/master/data/tinyshakespeare/input.txt)
# Data Format
- The entire dataset is split into train (90%) and test (10%).
- All rows are at most 1024 tokens, using the Llama 2 tokenizer.
- All rows are split cleanly so that sentences are whole and unbroken. |
bobbybelajar/LlamaAmazonReviewOneColumn | 2023-09-15T07:12:06.000Z | [
"region:us"
] | bobbybelajar | null | null | null | 0 | 67 | Entry not found |
zxvix/pubmed_subset_wiki_40p | 2023-09-22T13:47:37.000Z | [
"region:us"
] | zxvix | null | null | null | 0 | 67 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 4745394961.571536
num_examples: 1666514
- name: test
num_bytes: 1024229
num_examples: 1000
download_size: 1869629066
dataset_size: 4746419190.571536
---
# Dataset Card for "pubmed_subset_wiki_40p"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
tyzhu/squad_title_v4_train_10_eval_10 | 2023-09-26T14:59:03.000Z | [
"region:us"
] | tyzhu | null | null | null | 0 | 67 | ---
dataset_info:
features:
- name: id
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: text
dtype: string
- name: answer_start
dtype: int32
- name: context_id
dtype: string
- name: inputs
dtype: string
- name: targets
dtype: string
splits:
- name: train
num_bytes: 203084
num_examples: 138
- name: validation
num_bytes: 50807
num_examples: 50
download_size: 65145
dataset_size: 253891
---
# Dataset Card for "squad_title_v4_train_10_eval_10"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
TIGER-Lab/MetricInstruct | 2023-10-10T12:52:55.000Z | [
"arxiv:2310.00752",
"region:us"
] | TIGER-Lab | null | null | null | 4 | 67 | ---
configs:
- config_name: train
data_files:
- split: train_real_world
path:
- "data/new_real_world.json"
- split: train_synthetic
path:
- "data/new_synthetic.json"
- split: train_mix
path:
- "data/new_mix.json"
---
## MetricInstruct
We present TIGERScore, a **T**rained metric that follows **I**nstruction **G**uidance to perform **E**xplainable, and **R**eference-free evaluation over a wide spectrum of text generation tasks. TIGERScore is guided by the natural language instruction to provide error analysis to pinpoint the mistakes in the generated text. Our metric is based on LLaMA-2, trained on our meticulously curated instruction-tuning dataset MetricInstruct which covers 6 text generation tasks and 23 text generation datasets. The dataset consists of 48K quadruples in the form of (instruction, input, system output, error analysis). We collected the system outputs through diverse channels to cover different types of errors.
Project Page: [https://tiger-ai-lab.github.io/TIGERScore/](https://tiger-ai-lab.github.io/TIGERScore/)
Paper: [https://arxiv.org/abs/2310.00752](https://arxiv.org/abs/2310.00752)
Code: [https://github.com/TIGER-AI-Lab/TIGERScore](https://github.com/TIGER-AI-Lab/TIGERScore)
Demo: [https://huggingface.co/spaces/TIGER-Lab/TIGERScore](https://huggingface.co/spaces/TIGER-Lab/TIGERScore)
TIGERScore-7B-V1.0: [https://huggingface.co/TIGER-Lab/TIGERScore-7B-V1.0](https://huggingface.co/TIGER-Lab/TIGERScore-7B-V1.0)
TIGERScore-13B-V1.0: [https://huggingface.co/TIGER-Lab/TIGERScore-13B-V1.0](https://huggingface.co/TIGER-Lab/TIGERScore-13B-V1.0)
We present the MetricInstruct dataset, which is employed to fine-tune TIGERScore. The three underlying criteria for dataset construction are:
1. Dataset diversity: we choose 23 distinct datasets as the source contexts to cover a wide range of generation tasks.
2. Error coverage: we take system outputs generated from 50+ text generation systems to cover all types of errors and guarantee a balanced distribution.
3. Quality assurance: to ensure MetricInstruct is tailored to gather in-depth error analysis, we sourced it by prompting OpenAI GPT models and then filtered the results with different heuristics to eliminate low-quality error analyses.
## Data Source
Our system outputs come from two channels, namely real-world system outputs and synthetic outputs. The real-world system outputs are obtained from real systems, which ensures the error distribution is aligned with real-world ones.
Check out our paper for more details.
| Task | Real-World Dataset | Output Source | Synthetic Dataset | Output Source |
|:--------:|:-----------------------------------------:|:--------------:|:-----------------------------------:|:--------------:|
|  Summarization  | SummEval, XSum, Newsroom, SAMSum | 27 Systems | CNN/DM, XSum, Gigaword, SAMSum | GPT-4 |
|  Translation  | WMT | 18 Systems | WMT | GPT-4 |
|  Data-to-Text  | WebNLG-2020, WikiTableText, ToTTo | 17 Systems | WikiTableText, Dart, ToTTo | GPT-4 |
|  Long-Form QA  | ASQA, FeTaQA, CosmosQA, ELI5 | 5 Systems | ASQA, FeTaQA, CosmosQA, ELI5 | GPT-4 |
|  MathQA  | GSM8K | 5 Systems | GSM8K, MathQA | GPT-4 |
|  Instruct  | MixInstruct | 11 Systems | LIMA, AlpacaFarm, OASST1, Guanaco, Dolly | GPT-4 |
## Data Format
The dataset consists of 48K quadruples in the form of (instruction, input, system output, error analysis).
For each item in the dataset, `task` represents its corresponding text generation task, `instruction` is its task instruction, `input_context` is its input source, and `hypo_output` is the generated output, and `errors` is the error analysis given by ChatGPT or GPT-4.
## Formatting
To format the data fields into a single prompt for fine-tuning or testing, we provide the following code for reference:
```python
from string import Template

# `task`, `instruction`, `input_context`, and `hypo_output` are assumed to be
# defined beforehand, as are `tigerscore_tokenizer` and `tigerscore_model`.
FINETUNE_INST = "You are evaluating errors in a model-generated output for a(an) ${task} task."
FINETUNE_INPUT = """\
Task instruction: ${generation_instruction}
Source: ${input_context}
Model-generated Output: ${hypothesis_output}
Based on the given task instruction and source, identify errors in this model-generated output.
For each error you give in the response, please also elaborate the following information:
- error location (the words that are wrong in the output)
- error aspect it belongs to.
- explanation why it's an error, and the correction suggestions.
- severity of the error ("Major" or "Minor").
- reduction of score (between 0.5 and 5 given the severity of the error)
Your evaluation output:
"""
inst_part = Template(FINETUNE_INST)
inst_part = inst_part.substitute(task=task)
input_part = Template(FINETUNE_INPUT)
input_part = input_part.substitute(
generation_instruction=instruction,
input_context=input_context,
hypothesis_output=hypo_output
)
prompt = (inst_part + "\n" + input_part).strip("\n ") + "\n"
encodings = tigerscore_tokenizer(prompt, return_tensors="pt")
input_ids = encodings["input_ids"].to(tigerscore_model.device)
attention_mask = encodings["attention_mask"].to(tigerscore_model.device)
```
Example of formatted prompt:
```txt
You are evaluating errors in a model-generated output for a(an) translation task.
Task instruction: Translate the following text from German to English.
Source: Der künftige EM-Cheforganisator Philipp Lahm soll laut Grindel im DFB-Präsidium mitarbeiten.
Model-generated Output: According to Grindel, the future head of the European Championships, Philipp Lahm, is to participate in the DFB Presidency.
Based on the given task instruction and source, identify errors in this model-generated output.
For each error you give in the response, please also elaborate the following information:
- error location (the words that are wrong in the output)
- error aspect it belongs to.
- explanation why it's an error, and the correction suggestions.
- severity of the error ("Major" or "Minor").
- reduction of score (between 0.5 and 5 given the severity of the error)
Your evaluation output:
```
## Citation
```
@article{jiang2023TIGERScore,
title={TIGERScore: Towards Building Explainable Metric for All Text Generation Tasks},
  author={Dongfu Jiang and Yishan Li and Ge Zhang and Wenhao Huang and Bill Yuchen Lin and Wenhu Chen},
journal={arXiv preprint arXiv:2310.00752},
year={2023}
}
``` |
tomashs/LSC_acronyms_topic_vectors | 2023-10-05T21:38:49.000Z | [
"region:us"
] | tomashs | null | null | null | 0 | 67 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
dataset_info:
features:
- name: text
dtype: string
- name: short_form
dtype: string
- name: long_form
dtype: string
- name: label
dtype: int64
- name: topic_vector
sequence: float64
splits:
- name: train
num_bytes: 1959752089
num_examples: 352720
- name: validation
num_bytes: 418571627
num_examples: 75339
- name: test
num_bytes: 419813918
num_examples: 75540
download_size: 2198337547
dataset_size: 2798137634
---
# Dataset Card for "LSC_acronyms_topic_vectors"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
carnival13/sur_test | 2023-10-08T06:27:14.000Z | [
"region:us"
] | carnival13 | null | null | null | 0 | 67 | ---
dataset_info:
features:
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
splits:
- name: train
num_bytes: 1297540140
num_examples: 900000
download_size: 298907283
dataset_size: 1297540140
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "sur_test"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
SetFit/rte | 2022-02-28T12:46:43.000Z | [
"region:us"
] | SetFit | null | null | null | 0 | 66 | # Glue RTE
This dataset is a port of the official [`rte` dataset](https://huggingface.co/datasets/glue/viewer/rte/train) on the Hub.
Note that the `sentence1` and `sentence2` columns have been renamed to `text1` and `text2` respectively.
Also, the test split is not labeled; the label column values are always -1.
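To keep only labeled rows (e.g. when evaluating), one can filter out the -1 placeholders. A sketch using a toy split in place of the downloaded one:

```python
# Toy stand-in for the unlabeled test split: -1 means "no gold label".
test_rows = [
    {"text1": "A man is playing guitar.", "text2": "A person makes music.", "label": -1},
    {"text1": "It is raining.", "text2": "The sun is out.", "label": -1},
]

labeled = [row for row in test_rows if row["label"] != -1]
print(len(labeled))  # 0 -- the whole test split is unlabeled
```

With the real dataset, the same predicate can be passed to `Dataset.filter`.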
|
arka0821/multi_document_summarization | 2022-10-20T19:13:26.000Z | [
"task_categories:summarization",
"annotations_creators:found",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"license:unknown",
"arxiv:2010.14235",
"region:us"
] | arka0821 | Multi-Document, a large-scale multi-document summarization dataset created from scientific articles. Multi-Document introduces a challenging multi-document summarization task: writing the related-work section of a paper based on its abstract and the articles it references. | @article{lu2020multi,
title={Multi-Document: A Large-scale Dataset for Extreme Multi-document Summarization of Scientific Articles},
author={Arka Das, India},
journal={arXiv preprint arXiv:2010.14235},
year={2022}
} | null | 3 | 66 | ---
annotations_creators:
- found
language_creators:
- found
language:
- en
license:
- unknown
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- summarization
task_ids:
- summarization-other-paper-abstract-generation
paperswithcode_id: multi-document
pretty_name: Multi-Document
---
# Dataset Card for Multi-Document
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Repository:** [Multi-Document repository](https://github.com/arka0821/multi_document_summarization)
- **Paper:** [Multi-Document: A Large-scale Dataset for Extreme Multi-document Summarization of Scientific Articles](https://arxiv.org/abs/2010.14235)
### Dataset Summary
Multi-Document, a large-scale multi-document summarization dataset created from scientific articles. Multi-Document introduces a challenging multi-document summarization task: writing the related-work section of a paper based on its abstract and the articles it references.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
The text in the dataset is in English
## Dataset Structure
### Data Instances
{"id": "n3ByHGrxH3bvfrvF", "docs": [{"id": "1394519630182457344", "text": "Clover Bio's COVID-19 vaccine candidate shows immune response against SARS-CoV-2 variants in mouse model https://t.co/wNWa9GQux5"}, {"id": "1398154482463170561", "text": "The purpose of the Vaccine is not to stop you from catching COVID 19. The vaccine introduces the immune system to an inactivated form of the SARS-CoV-2 coronavirus or a small part of it. This then equips the body with the ability to fight the virus better in case you get it. https://t.co/Cz9OU6Zi7P"}, {"id": "1354844652520792071", "text": "The Moderna mRNA COVID-19 vaccine appears to be effective against the novel, rapidly spreading variants of SARS-CoV-2.\nResearchers analysed blood samples from vaccinated people and monkeys- Both contained neutralising antibodies against the virus. \nPT1/2\n#COVID19vaccines #biotech https://t.co/ET1maJznot"}, {"id": "1340189698107518976", "text": "@KhandaniM Pfizer vaccine introduces viral surface protein which is constant accross SARS COV 2 variants into the body. Body builds antibodies against this protein, not any virus. These antibodies instructs macrophages & T-Cells to attack & destroy any COVID-19 v variant at infection point"}, {"id": "1374368989581778945", "text": "@DelthiaRicks \" Pfizer and BioNTech\u2019s COVID-19 vaccine is an mRNA vaccine, which does not use the live virus but rather a small portion of the viral sequence of the SARS-CoV-2 virus to instruct the body to produce the spike protein displayed on the surface of the virus.\""}, {"id": "1353354819315126273", "text": "Pfizer and BioNTech Publish Results of Study Showing COVID-19 Vaccine Elicits Antibodies that Neutralize Pseudovirus Bearing the SARS-CoV-2 U.K. 
Strain Spike Protein in Cell Culture | Pfizer https://t.co/YXcSnjLt8C"}, {"id": "1400821856362401792", "text": "Pfizer-BioNTech's covid-19 vaccine elicits lower levels of antibodies against the SARS-CoV-2\u00a0Delta variant\u00a0(B.1.617.2), first discovered in India, in comparison to other variants, said a research published in\u00a0Lancet\u00a0journal.\n https://t.co/IaCMX81X3b"}, {"id": "1367252963190665219", "text": "New research from UNC-Chapel Hill suggests that those who have previously experienced a SARS-CoV-2 infection develop a significant antibody response to the first dose of mRNA-based COVID-19 vaccine.\nhttps://t.co/B4vR1KUQ0w"}, {"id": "1375949502461394946", "text": "Mechanism of a COVID-19 nanoparticle vaccine candidate that elicits a broadly neutralizing antibody response to SARS-CoV-2 variants https://t.co/nc1L0uvtlI #bioRxiv"}, {"id": "1395428608349548550", "text": "JCI - Efficient maternal to neonatal transfer of antibodies against SARS-CoV-2 and BNT162b2 mRNA COVID-19 vaccine https://t.co/vIBcpPaKFZ"}], "summary": "The COVID-19 vaccine appears to be effective against the novel, rapidly spreading variants of SARS-CoV-2. Pfizer-BioNTech's COVID-19 vaccine use small portion of the viral sequence of the SARS-CoV-2 virus to equip the body with the ability to fight the virus better in case you get it."}
### Data Fields
{'id': unique id of the example
 'docs': list of source documents
 [
  'id': id of the document
  'text': document text
 ]
 'summary': summary text
}
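A minimal sketch of consuming one record with this schema (the record below is abridged from the sample instance above; concatenating the documents as model input is just one illustrative choice, not a prescription of this dataset):

```python
# Abridged from the sample instance: a summarization example pairs a list of
# source documents (`docs`) with one target `summary`.
example = {
    "id": "n3ByHGrxH3bvfrvF",
    "docs": [
        {"id": "1394519630182457344",
         "text": "Clover Bio's COVID-19 vaccine candidate shows immune response against SARS-CoV-2 variants in mouse model"},
        {"id": "1398154482463170561",
         "text": "The purpose of the Vaccine is not to stop you from catching COVID 19."},
    ],
    "summary": "The COVID-19 vaccine appears to be effective against the novel, rapidly spreading variants of SARS-CoV-2.",
}

# One common setup: join the documents as the model input, use `summary`
# as the target.
source_text = " ".join(doc["text"] for doc in example["docs"])
print(len(example["docs"]))  # → 2
```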
### Data Splits
The data is split into training, validation and test sets.
| train | validation | test |
|------:|-----------:|-----:|
| 50 | 10 | 5 |
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
```
@article{lu2020multi,
title={Multi-Document: A Large-scale Dataset for Extreme Multi-document Summarization of Scientific Articles},
author={Arka Das, India},
journal={arXiv preprint arXiv:2010.14235},
year={2022}
}
```
### Contributions
Thanks to [@arka0821](https://github.com/arka0821/multi_document_summarization) for adding this dataset.
|
SetFit/tweet_eval_stance_abortion | 2022-09-05T13:09:04.000Z | [
"region:us"
] | SetFit | null | null | null | 0 | 66 | Entry not found |
tglcourse/lsun_church_train | 2022-10-19T12:20:45.000Z | [
"region:us"
] | tglcourse | null | null | null | 0 | 66 | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
0: '0'
1: '1'
2: '2'
3: '3'
4: '4'
5: '5'
6: '6'
7: '7'
8: '8'
9: '9'
10: a
11: b
12: c
13: d
14: e
15: f
splits:
- name: test
num_bytes: -5033726665.536212
num_examples: 6312
- name: train
num_bytes: -94551870824.9868
num_examples: 119915
download_size: 2512548233
dataset_size: -99585597490.52301
---
# Dataset Card for "lsun_church_train"
Uploading lsun church train dataset for convenience
I've split this into 119915 train and 6312 test examples, but if you want the original test set, see https://github.com/fyu/lsun
Notebook that I used to download then upload this dataset: https://colab.research.google.com/drive/1_f-D2ENgmELNSB51L1igcnLx63PkveY2?usp=sharing
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
kuroneko5943/weibo16 | 2023-01-10T16:01:32.000Z | [
"task_categories:text-classification",
"task_ids:sentiment-classification",
"annotations_creators:machine-generated",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:zh",
"license:apache-2.0",
"weibo",
"sentiment"... | kuroneko5943 | GLUE, the General Language Understanding Evaluation benchmark
(https://gluebenchmark.com/) is a collection of resources for training,
evaluating, and analyzing natural language understanding systems. | \ | null | 4 | 66 | ---
annotations_creators:
- machine-generated
language:
- zh
language_creators:
- crowdsourced
license:
- apache-2.0
multilinguality:
- monolingual
pretty_name: weibo16
size_categories:
- 1K<n<10K
source_datasets:
- original
tags:
- weibo
- sentiment
task_categories:
- text-classification
task_ids:
- sentiment-classification
--- |
ruanchaves/hatebr | 2023-04-13T13:39:40.000Z | [
"task_categories:text-classification",
"task_ids:hate-speech-detection",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:pt",
"instagram",
"doi:10.57967/hf/0274",
"region:us"
] | ruanchaves | HateBR is the first large-scale expert annotated corpus of Brazilian Instagram comments for hate speech and offensive language detection on the web and social media. The HateBR corpus was collected from Brazilian Instagram comments of politicians and manually annotated by specialists. It is composed of 7,000 documents annotated according to three different layers: a binary classification (offensive versus non-offensive comments), offensiveness-level (highly, moderately, and slightly offensive messages), and nine hate speech groups (xenophobia, racism, homophobia, sexism, religious intolerance, partyism, apology for the dictatorship, antisemitism, and fatphobia). Each comment was annotated by three different annotators and achieved high inter-annotator agreement. Furthermore, baseline experiments were implemented reaching 85% of F1-score outperforming the current literature models for the Portuguese language. Accordingly, we hope that the proposed expertly annotated corpus may foster research on hate speech and offensive language detection in the Natural Language Processing area. | @inproceedings{vargas2022hatebr,
title={HateBR: A Large Expert Annotated Corpus of Brazilian Instagram Comments for Offensive Language and Hate Speech Detection},
author={Vargas, Francielle and Carvalho, Isabelle and de G{\'o}es, Fabiana Rodrigues and Pardo, Thiago and Benevenuto, Fabr{\'\i}cio},
booktitle={Proceedings of the Thirteenth Language Resources and Evaluation Conference},
pages={7174--7183},
year={2022}
} | null | 5 | 66 | ---
annotations_creators:
- expert-generated
language:
- pt
language_creators:
- found
license: []
multilinguality:
- monolingual
pretty_name: HateBR - Offensive Language and Hate Speech Dataset in Brazilian Portuguese
size_categories:
- 1K<n<10K
source_datasets:
- original
tags:
- instagram
task_categories:
- text-classification
task_ids:
- hate-speech-detection
---
# Dataset Card for HateBR - Offensive Language and Hate Speech Dataset in Brazilian Portuguese
## Dataset Description
- **Homepage:** http://143.107.183.175:14581/
- **Repository:** https://github.com/franciellevargas/HateBR
- **Paper:** https://aclanthology.org/2022.lrec-1.777/
- **Leaderboard:**
- **Point of Contact:** https://franciellevargas.github.io/
### Dataset Summary
HateBR is the first large-scale expert annotated corpus of Brazilian Instagram comments for hate speech and offensive language detection on the web and social media. The HateBR corpus was collected from Brazilian Instagram comments of politicians and manually annotated by specialists. It is composed of 7,000 documents annotated according to three different layers: a binary classification (offensive versus non-offensive comments), offensiveness-level (highly, moderately, and slightly offensive messages), and nine hate speech groups (xenophobia, racism, homophobia, sexism, religious intolerance, partyism, apology for the dictatorship, antisemitism, and fatphobia). Each comment was annotated by three different annotators and achieved high inter-annotator agreement. Furthermore, baseline experiments were implemented reaching 85% of F1-score outperforming the current literature models for the Portuguese language. Accordingly, we hope that the proposed expertly annotated corpus may foster research on hate speech and offensive language detection in the Natural Language Processing area.
**Relevant Links:**
* [**Demo: Brasil Sem Ódio**](http://143.107.183.175:14581/)
* [**MOL - Multilingual Offensive Lexicon Annotated with Contextual Information**](https://github.com/franciellevargas/MOL)
### Supported Tasks and Leaderboards
Hate Speech Detection
### Languages
Portuguese
## Dataset Structure
### Data Instances
```
{'instagram_comments': 'Hipocrita!!',
'offensive_language': True,
'offensiveness_levels': 2,
'antisemitism': False,
'apology_for_the_dictatorship': False,
'fatphobia': False,
'homophobia': False,
'partyism': False,
'racism': False,
'religious_intolerance': False,
'sexism': False,
'xenophobia': False,
'offensive_&_non-hate_speech': True,
'non-offensive': False,
'specialist_1_hate_speech': False,
'specialist_2_hate_speech': False,
'specialist_3_hate_speech': False
}
```
### Data Fields
* **instagram_comments**: Instagram comments.
* **offensive_language**: A classification of comments as either offensive (True) or non-offensive (False).
* **offensiveness_levels**: A classification of comments based on their level of offensiveness, including highly offensive (3), moderately offensive (2), slightly offensive (1) and non-offensive (0).
* **antisemitism**: A classification of whether or not the comment contains antisemitic language.
* **apology_for_the_dictatorship**: A classification of whether or not the comment praises the military dictatorship period in Brazil.
* **fatphobia**: A classification of whether or not the comment contains language that promotes fatphobia.
* **homophobia**: A classification of whether or not the comment contains language that promotes homophobia.
* **partyism**: A classification of whether or not the comment contains language that promotes partyism.
* **racism**: A classification of whether or not the comment contains racist language.
* **religious_intolerance**: A classification of whether or not the comment contains language that promotes religious intolerance.
* **sexism**: A classification of whether or not the comment contains sexist language.
* **xenophobia**: A classification of whether or not the comment contains language that promotes xenophobia.
* **offensive_&_non-hate_speech**: A classification of whether or not the comment is offensive but does not contain hate speech.
* **non-offensive**: A classification of whether or not the comment is non-offensive.
* **specialist_1_hate_speech**: A classification of whether or not the comment was annotated by the first specialist as hate speech.
* **specialist_2_hate_speech**: A classification of whether or not the comment was annotated by the second specialist as hate speech.
* **specialist_3_hate_speech**: A classification of whether or not the comment was annotated by the third specialist as hate speech.
### Data Splits
The original authors of the dataset did not propose a standard data split. To address this, we use the [multi-label data stratification technique](http://scikit.ml/stratification.html) implemented at the scikit-multilearn library to propose a train-validation-test split. This method considers all classes for hate speech in the data and attempts to balance the representation of each class in the split.
| name |train|validation|test|
|---------|----:|----:|----:|
|hatebr|4480|1120|1400|
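The stratified split described above can be approximated with a small greedy sketch. This is a toy illustration of the idea only, not the exact iterative-stratification algorithm; in practice one would call scikit-multilearn's `iterative_train_test_split` as the card suggests. Function names and the toy label sets below are ours:

```python
from collections import Counter

def greedy_stratify(examples, fractions):
    """Toy multi-label stratification: walk examples rarest-label-first and
    assign each to the split with the largest remaining quota for its labels."""
    totals = Counter()
    for _, labels in examples:
        totals.update(labels)
    # per-split remaining desired count for every label
    need = {s: Counter({l: f * c for l, c in totals.items()})
            for s, f in fractions.items()}
    assignment = {}
    for ex_id, labels in sorted(examples,
                                key=lambda e: min(totals[l] for l in e[1])):
        best = max(fractions, key=lambda s: sum(need[s][l] for l in labels))
        assignment[ex_id] = best
        for l in labels:
            need[best][l] -= 1
    return assignment

# Toy data: a rare class ("racism") and a frequent one ("partyism").
examples = [(i, {"racism"} if i % 5 == 0 else {"partyism"}) for i in range(20)]
fractions = {"train": 0.64, "validation": 0.16, "test": 0.20}  # ≈ 4480/1120/1400
splits = greedy_stratify(examples, fractions)
print(Counter(splits.values()))
```

Both classes end up represented in every split, which is the point of stratifying over hate-speech labels rather than splitting at random.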
## Considerations for Using the Data
### Discussion of Biases
Please refer to [the HateBR paper](https://aclanthology.org/2022.lrec-1.777/) for a discussion of biases.
### Licensing Information
The HateBR dataset, including all its components, is provided strictly for academic and research purposes. The use of the dataset for any commercial or non-academic purpose is expressly prohibited without the prior written consent of [SINCH](https://www.sinch.com/).
### Citation Information
```
@inproceedings{vargas2022hatebr,
title={HateBR: A Large Expert Annotated Corpus of Brazilian Instagram Comments for Offensive Language and Hate Speech Detection},
author={Vargas, Francielle and Carvalho, Isabelle and de G{\'o}es, Fabiana Rodrigues and Pardo, Thiago and Benevenuto, Fabr{\'\i}cio},
booktitle={Proceedings of the Thirteenth Language Resources and Evaluation Conference},
pages={7174--7183},
year={2022}
}
```
### Contributions
Thanks to [@ruanchaves](https://github.com/ruanchaves) for adding this dataset. |
wangrui6/Zhihu-KOL | 2023-04-23T13:26:03.000Z | [
"task_categories:question-answering",
"language:zh",
"region:us"
] | wangrui6 | null | null | null | 92 | 66 | ---
dataset_info:
features:
- name: INSTRUCTION
dtype: string
- name: RESPONSE
dtype: string
- name: SOURCE
dtype: string
- name: METADATA
dtype: string
splits:
- name: train
num_bytes: 2295601241
num_examples: 1006218
download_size: 1501204472
dataset_size: 2295601241
task_categories:
- question-answering
language:
- zh
---
# Dataset Card for "Zhihu-KOL"
Zhihu data for training Open Assitant
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Elfsong/ClinicalDataset | 2023-03-05T06:43:13.000Z | [
"task_categories:summarization",
"task_categories:conversational",
"size_categories:1K<n<10K",
"language:en",
"region:us"
] | Elfsong | null | null | null | 11 | 66 | ---
task_categories:
- summarization
- conversational
language:
- en
pretty_name: MediQA
size_categories:
- 1K<n<10K
---
# MEDIQA-Chat 2023 Training/Validation Data
# Task A
The training set consists of 1,201 pairs of conversations and associated section headers and contents.
The validation set consists of 100 pairs of conversations and their summaries.
The full list of normalized section headers:
1. fam/sochx [FAMILY HISTORY/SOCIAL HISTORY]
2. genhx [HISTORY of PRESENT ILLNESS]
3. pastmedicalhx [PAST MEDICAL HISTORY]
4. cc [CHIEF COMPLAINT]
5. pastsurgical [PAST SURGICAL HISTORY]
6. allergy
7. ros [REVIEW OF SYSTEMS]
8. medications
9. assessment
10. exam
11. diagnosis
12. disposition
13. plan
14. edcourse [EMERGENCY DEPARTMENT COURSE]
15. immunizations
16. imaging
17. gynhx [GYNECOLOGIC HISTORY]
18. procedures
19. other_history
20. labs
# Task B
The training set consists of 67 pairs of conversations and full notes. The validation set includes 20 pairs of conversations and clinical notes.
Full encounter notes are expected to have at least one of four overall section divisions, each demarcated by the first-occurring of its related section headers:
> | note_division | section_headers |
> |---|---|
> | subjective | chief complaint, history of present illness, hpi, subjective |
> | objective_exam | physical exam, exam |
> | objective_results | results, findings |
> | assessment_and_plan | assessment, plan |
Depending on the encounter, objective_exam and objective_results may not be relevant.
We encourage reviewing the sample data as well as the evaluation script to understand the best demarcation headers for your generated note.
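A hedged sketch of locating these divisions in a generated note (the header lists come from the table above; the function and its simple substring-matching strategy are our illustration, not the official evaluation script):

```python
# Cut a note into divisions at the first-occurring header of each division.
DIVISIONS = {
    "subjective": ["chief complaint", "history of present illness", "hpi", "subjective"],
    "objective_exam": ["physical exam", "exam"],
    "objective_results": ["results", "findings"],
    "assessment_and_plan": ["assessment", "plan"],
}

def split_note(note):
    lower = note.lower()
    # earliest position at which any header of the division occurs
    starts = {}
    for division, headers in DIVISIONS.items():
        positions = [lower.find(h) for h in headers if lower.find(h) != -1]
        if positions:
            starts[division] = min(positions)
    ordered = sorted(starts.items(), key=lambda kv: kv[1])
    sections = {}
    for i, (division, start) in enumerate(ordered):
        end = ordered[i + 1][1] if i + 1 < len(ordered) else len(note)
        sections[division] = note[start:end].strip()
    return sections

note = "CHIEF COMPLAINT: cough.\nPHYSICAL EXAM: lungs clear.\nASSESSMENT: viral URI.\nPLAN: rest."
parts = split_note(note)
print(list(parts))  # divisions in the order they appear in the note
```

As the card notes, a division may simply be absent: here `objective_results` has no matching header, so it does not appear in the output.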
# Task C
The training set consists of 67 pairs of full doctor-patient conversations and notes and the validation set includes 20 pairs of full conversations and clinical notes (same as Task-B datasets). The Task-A training and validation sets (1,301 pairs) could be used as additional training data.
|
nuprl/ts-training | 2023-05-23T19:34:07.000Z | [
"region:us"
] | nuprl | null | null | null | 1 | 66 | ---
dataset_info:
features:
- name: hexsha
dtype: string
- name: size
dtype: int64
- name: ext
dtype: string
- name: lang
dtype: string
- name: max_stars_repo_path
dtype: string
- name: max_stars_repo_name
dtype: string
- name: max_stars_repo_head_hexsha
dtype: string
- name: max_stars_repo_licenses
sequence: string
- name: max_stars_count
dtype: float64
- name: max_stars_repo_stars_event_min_datetime
dtype: string
- name: max_stars_repo_stars_event_max_datetime
dtype: string
- name: max_issues_repo_path
dtype: string
- name: max_issues_repo_name
dtype: string
- name: max_issues_repo_head_hexsha
dtype: string
- name: max_issues_repo_licenses
sequence: string
- name: max_issues_count
dtype: float64
- name: max_issues_repo_issues_event_min_datetime
dtype: string
- name: max_issues_repo_issues_event_max_datetime
dtype: string
- name: max_forks_repo_path
dtype: string
- name: max_forks_repo_name
dtype: string
- name: max_forks_repo_head_hexsha
dtype: string
- name: max_forks_repo_licenses
sequence: string
- name: max_forks_count
dtype: float64
- name: max_forks_repo_forks_event_min_datetime
dtype: string
- name: max_forks_repo_forks_event_max_datetime
dtype: string
- name: content
dtype: string
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
splits:
- name: train
num_bytes: 42270977435
num_examples: 12133148
download_size: 17360072228
dataset_size: 42270977435
extra_gated_prompt: |-
## Terms of Use for The Stack
The Stack dataset is a collection of source code in over 300 programming languages. We ask that you read and acknowledge the following points before using the dataset:
1. The Stack is a collection of source code from repositories with various licenses. Any use of all or part of the code gathered in The Stack must abide by the terms of the original licenses, including attribution clauses when relevant. We facilitate this by providing provenance information for each data point.
2. The Stack is regularly updated to enact validated data removal requests. By clicking on "Access repository", you agree to update your own version of The Stack to the most recent usable version specified by the maintainers in [the following thread](https://huggingface.co/datasets/bigcode/the-stack/discussions/7). If you have questions about dataset versions and allowed uses, please also ask them in the dataset’s [community discussions](https://huggingface.co/datasets/bigcode/the-stack/discussions/new). We will also notify users via email when the latest usable version changes.
3. To host, share, or otherwise provide access to The Stack dataset, you must include [these Terms of Use](https://huggingface.co/datasets/bigcode/the-stack#terms-of-use-for-the-stack) and require users to agree to it.
By clicking on "Access repository" below, you accept that your contact information (email address and username) can be shared with the dataset maintainers as well.
extra_gated_fields:
Email: text
I have read the License and agree with its terms: checkbox
---
# Dataset Card for "ts-training"
This is a subset of the TypeScript portion of [The Stack (dedup)](https://huggingface.co/datasets/bigcode/the-stack-dedup), uploaded to the Hugging Face Hub for convenience.
Files with dates _after_ the December 31, 2021 cutoff are excluded from this dataset, since we are using those files for evaluation. Therefore, the remaining files (in this dataset) are available for training.
A file is considered to be after the cutoff if all of `max_{stars|forks|issues}_repo_{stars|forks|issues}_event_min_datetime` (i.e., the first timestamp for a `{stars|forks|issues}` event) are after the cutoff. Otherwise (or if all timestamps are missing), the file is included in this dataset.
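The cutoff rule can be sketched as follows. The timestamp format (ISO 8601) and the treatment of individually missing fields are our assumptions; the actual filtering code is not published on this card:

```python
from datetime import datetime

CUTOFF = datetime(2021, 12, 31, 23, 59, 59)

EVENT_KEYS = [
    "max_stars_repo_stars_event_min_datetime",
    "max_forks_repo_forks_event_min_datetime",
    "max_issues_repo_issues_event_min_datetime",
]

def keep_for_training(row):
    """Apply the rule above: exclude a file only when every present
    first-event timestamp is after the cutoff; keep it otherwise,
    including when all timestamps are missing."""
    stamps = [datetime.fromisoformat(row[k]) for k in EVENT_KEYS if row.get(k)]
    if not stamps:
        return True  # all timestamps missing → included in this dataset
    return not all(ts > CUTOFF for ts in stamps)

old = {"max_stars_repo_stars_event_min_datetime": "2019-05-04T10:00:00"}
new = {"max_stars_repo_stars_event_min_datetime": "2022-03-01T10:00:00"}
print(keep_for_training(old), keep_for_training(new))  # → True False
```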
## Versions
The default version (`main`) is currently `v1.1`.
|Version|Description|
|-|-|
|`v1.1` | Original version of the training dataset, based on v1.1 of the Stack. Applies the training cutoff (December 31, 2021). Used to train OpenTau. |
|`v1.1full` | Training dataset based on v1.1 of the Stack. Does not apply the training cutoff (December 31, 2021), but applies a filter to remove files that do not parse as valid TypeScript. |
|`v1.1p1` | Revision of v1.1. Applies a filter to remove files that do not parse as valid TypeScript. |
|
slvnwhrl/blurbs-clustering-s2s | 2023-04-24T11:40:19.000Z | [
"size_categories:10K<n<100K",
"language:de",
"license:cc-by-nc-4.0",
"embeddings",
"clustering",
"benchmark",
"region:us"
] | slvnwhrl | null | null | null | 0 | 66 | ---
license: cc-by-nc-4.0
language:
- de
tags:
- embeddings
- clustering
- benchmark
size_categories:
- 10K<n<100K
---
This dataset can be used as a benchmark for clustering word embeddings for <b>German</b>.
The dataset contains book titles and is based on the dataset from the [GermEval 2019 Shared Task on Hierarchical Classification of Blurbs](https://www.inf.uni-hamburg.de/en/inst/ab/lt/resources/data/germeval-2019-hmc.html). It contains 17'726 unique samples, 28 splits with 177 to 16'425 samples, and 4 to 93 unique classes. Splits are built similarly to [MTEB](https://github.com/embeddings-benchmark/mteb)'s [ArxivClusteringS2S](https://huggingface.co/datasets/mteb/arxiv-clustering-s2s).
Have a look at the [German Text Embedding Clustering Benchmark](https://github.com/ClimSocAna/tecb-de) for more information, datasets and evaluation results.
renumics/cifar100-enriched | 2023-06-06T12:23:33.000Z | [
"task_categories:image-classification",
"annotations_creators:crowdsourced",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:extended|other-80-Million-Tiny-Images",
"language:en",
"license:mit",
"image classification",
"cifar-100",
"cifar-... | renumics | The CIFAR-100 dataset consists of 60000 32x32 colour images in 100 classes, with 600 images
per class. There are 500 training images and 100 testing images per class. There are 50000 training images and 10000 test images. The 100 classes are grouped into 20 superclasses.
There are two labels per image - fine label (actual class) and coarse label (superclass). | @TECHREPORT{Krizhevsky09learningmultiple,
author = {Alex Krizhevsky},
title = {Learning multiple layers of features from tiny images},
institution = {},
year = {2009}
} | null | 4 | 66 | ---
license: mit
task_categories:
- image-classification
pretty_name: CIFAR-100
source_datasets:
- extended|other-80-Million-Tiny-Images
paperswithcode_id: cifar-100
size_categories:
- 10K<n<100K
tags:
- image classification
- cifar-100
- cifar-100-enriched
- embeddings
- enhanced
- spotlight
- renumics
language:
- en
multilinguality:
- monolingual
annotations_creators:
- crowdsourced
language_creators:
- found
---
# Dataset Card for CIFAR-100-Enriched (Enhanced by Renumics)
## Dataset Description
- **Homepage:** [Renumics Homepage](https://renumics.com/?hf-dataset-card=cifar100-enriched)
- **GitHub** [Spotlight](https://github.com/Renumics/spotlight)
- **Dataset Homepage** [CS Toronto Homepage](https://www.cs.toronto.edu/~kriz/cifar.html#:~:text=The%20CIFAR%2D100%20dataset)
- **Paper:** [Learning Multiple Layers of Features from Tiny Images](https://www.cs.toronto.edu/~kriz/learning-features-2009-TR.pdf)
### Dataset Summary
📊 [Data-centric AI](https://datacentricai.org) principles have become increasingly important for real-world use cases.
At [Renumics](https://renumics.com/?hf-dataset-card=cifar100-enriched) we believe that classical benchmark datasets and competitions should be extended to reflect this development.
🔍 This is why we are publishing benchmark datasets with application-specific enrichments (e.g. embeddings, baseline results, uncertainties, label error scores). We hope this helps the ML community in the following ways:
1. Enable new researchers to quickly develop a profound understanding of the dataset.
2. Popularize data-centric AI principles and tooling in the ML community.
3. Encourage the sharing of meaningful qualitative insights in addition to traditional quantitative metrics.
📚 This dataset is an enriched version of the [CIFAR-100 Dataset](https://www.cs.toronto.edu/~kriz/cifar.html).
### Explore the Dataset

The enrichments allow you to quickly gain insights into the dataset. The open source data curation tool [Renumics Spotlight](https://github.com/Renumics/spotlight) enables that with just a few lines of code:
Install datasets and Spotlight via [pip](https://packaging.python.org/en/latest/key_projects/#pip):
```python
!pip install renumics-spotlight datasets
```
Load the dataset from huggingface in your notebook:
```python
import datasets
dataset = datasets.load_dataset("renumics/cifar100-enriched", split="train")
```
Start exploring with a simple view that leverages embeddings to identify relevant data segments:
```python
from renumics import spotlight
df = dataset.to_pandas()
df_show = df.drop(columns=['embedding', 'probabilities'])
spotlight.show(df_show, port=8000, dtype={"image": spotlight.Image, "embedding_reduced": spotlight.Embedding})
```
You can use the UI to interactively configure the view on the data. Depending on the concrete tasks (e.g. model comparison, debugging, outlier detection) you might want to leverage different enrichments and metadata.
### CIFAR-100 Dataset
The CIFAR-100 dataset consists of 60000 32x32 colour images in 100 classes, with 600 images per class. There are 50000 training images and 10000 test images.
The 100 classes in the CIFAR-100 are grouped into 20 superclasses. Each image comes with a "fine" label (the class to which it belongs) and a "coarse" label (the superclass to which it belongs).
The classes are completely mutually exclusive.
We have enriched the dataset by adding **image embeddings** generated with a [Vision Transformer](https://huggingface.co/google/vit-base-patch16-224).
Here is the list of classes in the CIFAR-100:
| Superclass | Classes |
|---------------------------------|----------------------------------------------------|
| aquatic mammals | beaver, dolphin, otter, seal, whale |
| fish | aquarium fish, flatfish, ray, shark, trout |
| flowers | orchids, poppies, roses, sunflowers, tulips |
| food containers | bottles, bowls, cans, cups, plates |
| fruit and vegetables | apples, mushrooms, oranges, pears, sweet peppers |
| household electrical devices | clock, computer keyboard, lamp, telephone, television|
| household furniture | bed, chair, couch, table, wardrobe |
| insects | bee, beetle, butterfly, caterpillar, cockroach |
| large carnivores | bear, leopard, lion, tiger, wolf |
| large man-made outdoor things | bridge, castle, house, road, skyscraper |
| large natural outdoor scenes | cloud, forest, mountain, plain, sea |
| large omnivores and herbivores | camel, cattle, chimpanzee, elephant, kangaroo |
| medium-sized mammals | fox, porcupine, possum, raccoon, skunk |
| non-insect invertebrates | crab, lobster, snail, spider, worm |
| people | baby, boy, girl, man, woman |
| reptiles | crocodile, dinosaur, lizard, snake, turtle |
| small mammals | hamster, mouse, rabbit, shrew, squirrel |
| trees | maple, oak, palm, pine, willow |
| vehicles 1 | bicycle, bus, motorcycle, pickup truck, train |
| vehicles 2 | lawn-mower, rocket, streetcar, tank, tractor |
### Supported Tasks and Leaderboards
- `image-classification`: The goal of this task is to classify a given image into one of 100 classes. The leaderboard is available [here](https://paperswithcode.com/sota/image-classification-on-cifar-100).
### Languages
English class labels.
## Dataset Structure
### Data Instances
A sample from the training set is provided below:
```python
{
'image': '/huggingface/datasets/downloads/extracted/f57c1a3fbca36f348d4549e820debf6cc2fe24f5f6b4ec1b0d1308a80f4d7ade/0/0.png',
'full_image': <PIL.PngImagePlugin.PngImageFile image mode=RGB size=32x32 at 0x7F15737C9C50>,
'fine_label': 19,
'coarse_label': 11,
'fine_label_str': 'cattle',
'coarse_label_str': 'large_omnivores_and_herbivores',
'fine_label_prediction': 19,
'fine_label_prediction_str': 'cattle',
'fine_label_prediction_error': 0,
'split': 'train',
'embedding': [-1.2482988834381104,
0.7280710339546204, ...,
0.5312759280204773],
'probabilities': [4.505949982558377e-05,
7.286163599928841e-05, ...,
6.577593012480065e-05],
'embedding_reduced': [1.9439491033554077, -5.35720682144165]
}
```
### Data Fields
| Feature | Data Type |
|---------------------------------|------------------------------------------------|
| image | Value(dtype='string', id=None) |
| full_image | Image(decode=True, id=None) |
| fine_label | ClassLabel(names=[...], id=None) |
| coarse_label | ClassLabel(names=[...], id=None) |
| fine_label_str | Value(dtype='string', id=None) |
| coarse_label_str | Value(dtype='string', id=None) |
| fine_label_prediction | ClassLabel(names=[...], id=None) |
| fine_label_prediction_str | Value(dtype='string', id=None) |
| fine_label_prediction_error | Value(dtype='int32', id=None) |
| split | Value(dtype='string', id=None) |
| embedding | Sequence(feature=Value(dtype='float32', id=None), length=768, id=None) |
| probabilities | Sequence(feature=Value(dtype='float32', id=None), length=100, id=None) |
| embedding_reduced | Sequence(feature=Value(dtype='float32', id=None), length=2, id=None) |
### Data Splits
| Dataset Split | Number of Images in Split | Samples per Class (fine) |
| ------------- |---------------------------| -------------------------|
| Train | 50000 | 500 |
| Test | 10000 | 100 |
## Dataset Creation
### Curation Rationale
The CIFAR-10 and CIFAR-100 are labeled subsets of the [80 million tiny images](http://people.csail.mit.edu/torralba/tinyimages/) dataset.
They were collected by Alex Krizhevsky, Vinod Nair, and Geoffrey Hinton.
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
If you use this dataset, please cite the following paper:
```
@article{krizhevsky2009learning,
added-at = {2021-01-21T03:01:11.000+0100},
author = {Krizhevsky, Alex},
biburl = {https://www.bibsonomy.org/bibtex/2fe5248afe57647d9c85c50a98a12145c/s364315},
interhash = {cc2d42f2b7ef6a4e76e47d1a50c8cd86},
intrahash = {fe5248afe57647d9c85c50a98a12145c},
keywords = {},
pages = {32--33},
timestamp = {2021-01-21T03:01:11.000+0100},
title = {Learning Multiple Layers of Features from Tiny Images},
url = {https://www.cs.toronto.edu/~kriz/learning-features-2009-TR.pdf},
year = 2009
}
```
### Contributions
Alex Krizhevsky, Vinod Nair, Geoffrey Hinton, and Renumics GmbH. |
michaelwzhu/ShenNong_TCM_Dataset | 2023-06-25T13:29:04.000Z | [
"license:apache-2.0",
"region:us"
] | michaelwzhu | null | null | null | 19 | 66 | ---
license: apache-2.0
---
|
heegyu/korquad-chat-v1 | 2023-05-06T09:12:14.000Z | [
"license:mit",
"region:us"
] | heegyu | null | null | null | 5 | 66 | ---
license: mit
---
- A Korean knowledge-grounded dialogue dataset with 9,619 conversations in total.
- Generated by giving the documents in the KorQuAD 1.0 data to ChatGPT.
Passages where the speakers address each other are handled with placeholders, as shown below:
```
안녕하세요 <|bot|>. 요즘 어떻게 지내시나요?\n<bot> 안녕하세요 <|user|>.
```
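Before training or display, these placeholders can be substituted with concrete speaker names. A minimal sketch (the names used here are hypothetical, not part of the dataset):

```python
# Sketch (hypothetical speaker names): fill the name placeholders used in the
# dataset with concrete names. Note that turn markers like "<bot>" are a
# different token from the name placeholder "<|bot|>" and are left untouched.
text = "안녕하세요 <|bot|>. 요즘 어떻게 지내시나요?\n<bot> 안녕하세요 <|user|>."

def fill_names(chat, user_name, bot_name):
    return chat.replace("<|user|>", user_name).replace("<|bot|>", bot_name)

filled = fill_names(text, "지수", "하나")
print(filled.splitlines()[0])  # 안녕하세요 하나. 요즘 어떻게 지내시나요?
```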
Data sample (kept in the original Korean, as it is dataset content)
```
{
"source": "korquad-chat",
"text": "
<sys>유전자의 이름은 인도의 수도 뉴델리의 이름을 따 붙여졌는데, 이는 2009년 용 (Yong) 등이 처음 기록한 이 케이스의 환자가 인도에서 처음 박테리아 감염을 확인하였기 때문이다. 박테리아 감염 치료가 뉴델리 병원에서 실패로 끝난 후 그는 모국 스웨덴으로 송환되었으며, 그곳에서 카바페넴에 내성을 보이는 폐렴막대균의 유전자가 분리되었다. 용 등은 이 내성 메커니즘이 \"분명히 인도에서 생겨났으나, 이것이 얼마나 퍼져있는지를 보여주는 데이터가 인도에 존재한다\"라고 주장하였다. 보고된 바에 따르면 이 유전자는 파키스탄, 인도, 그리고 여러 아시아 국가에서 발견되었으며 이곳 병원에 입원했던 사람들에 의해 유럽으로 옮겨졌다. 그 중 대다수는 저가의 성형 수술을 위해 의료 관광을 떠났다가 수술 중 감염이 된 사람들이었다.\n
<usr> 메탈로-베타락타마제가 어디서 발견되었나요?\n
<bot> 이 유전자는 파키스탄, 인도, 그리고 여러 아시아 국가에서 발견된 것으로 보고되었습니다.\n
<usr> 이 박테리아 감염은 어떻게 전파되는 건가요?\n
<bot> 이 병원균은 주로 의료 관광으로 인해 수술 중 감염된 사람들에 의해 유럽으로 전파되었습니다.\n
<usr> 이 유전자는 카바페넴에 반응하지 않는 내성 케이스에서 처음 발견됐는데, 이 내성 메커니즘이 다른 국가에도 퍼진 건가요?\n
<bot> 이 내성 메커니즘이 여러 나라에서 발견된 것으로 보고되기 때문에, 퍼져있다고 볼 수 있습니다. 그러나, 이것이 얼마나 퍼져있는지를 보여주는 데이터가 인도에 존재한다는 주장이 있습니다.\n
<usr> 이 박테리아 감염은 어떤 질병을 일으키나요?\n
<bot> 이 메탈로-베타락타마제가 감염하면, 카바페넴 유형의 항생제를 사용할 수 없고, 치명적인 감염증으로 발전할 가능성이 있습니다.\n
<usr> 이 박테리아 감염을 예방하거나 치료할 방법이 있을까요?\n
<bot> 이 박테리아 감염을 예방하거나 치료할 수 있는 항생제는 여전히 발견되지 않았습니다. 따라서, 이 병균을 포함한 감염 예방의 중요성이 더욱 부각되고 있습니다."
}
```
ChatGPT data-generation prompt (kept verbatim in Korean, as it is the exact prompt used)
```
A와 B 두 사람이 나누는 대화를 만들어주세요.
1. A는 주어진 글에서 말하는 내용에 관해 B에게 질문을 하거나 요청합니다. B는 글의 내용을 이해하고 완전히 학습한 상태입니다. B는 A의 질문과 요청에 자신이 학습한 내용을 바탕으로 대답을 해야 합니다.
2. B는 글에 존재하지 않거나 사실에 근거하지 않은 대답을 해서는 안됩니다.
3. 각 발화는 최대 3개의 문장으로 이루어져 있습니다.
4. 대화는 A와 B가 서로 주고받으며 순서대로 A의 발화는 A:, B의 발화는 B: 로 시작해야하고 띄어쓰기로 구분합니다.
5. A와 B가 글을 읽었다는 내용이나, 글에 대해서 평가하거나, 글을 언급하는 내용이 포함되어서는 안됩니다.
6. A와 B가 서로를 언급할 때는 A씨, B씨로 호칭해야합니다.
7. A와 B는 서로 8번 대화를 주고받아야 합니다. 대화의 전체길이는 최대한 200 단어가 넘지 않도록 대화를 끝내야합니다.
제목: #title
#text
``` |
nisaar/Lawyer_GPT_India | 2023-06-16T05:49:30.000Z | [
"license:apache-2.0",
"region:us"
] | nisaar | null | null | null | 3 | 66 | ---
license: apache-2.0
---
**Dataset Card for Indian Polity Question-Answer Dataset**
---
**Dataset Summary**
This dataset contains a collection of question-answer pairs on the subject of Indian Polity. The aim is to provide comprehensive answers to a wide range of questions pertaining to the Indian Constitution, judiciary, legislative, and various socio-political issues in India. It serves as a valuable resource for learners, researchers, and AI systems seeking to understand or respond to questions about Indian Polity.
---
**Supported Tasks and Leaderboards**
This dataset is useful for tasks such as question answering, text comprehension, language modelling, and conversational AI development. There's no specific leaderboard associated with this dataset.
---
**Languages**
The dataset is in English.
---
**Dataset Structure**
- **Data Instances**
Each instance in the dataset consists of a human-posed question paired with an assistant-provided answer on a specific topic in Indian Polity.
- **Data Fields**
1. Question: A text field containing the question.
2. Answer: A text field containing the corresponding answer.
- **Data Splits**
The dataset isn't divided into standard splits of training, validation, and test sets.
---
**Dataset Creation**
- **Curation Rationale**
The dataset was curated to provide accurate and comprehensive answers to a range of questions about Indian Polity. It covers fundamental rights, constitutional provisions, legislative procedures, and socio-political issues, among others.
- **Source Data**
- **Initial Data Collection and Normalization**
Data collection involved generating questions on Indian Polity topics and providing detailed answers.
- **Who are the source language producers?**
The language was produced by a language model trained by OpenAI.
---
**Annotations**
- **Annotation process**
Not applicable as the dataset doesn't contain annotations.
- **Who are the annotators?**
Not applicable as the dataset doesn't contain annotations.
---
**Personal and Sensitive Information**
The dataset does not contain any personal or sensitive information.
---
**Considerations for Using the Data**
- **Social Impact of Dataset**
The dataset can contribute to the understanding of Indian Polity and Constitution. It can help in educational, research, and AI applications.
- **Discussion of Biases**
There is no obvious bias in the dataset as it provides factual information related to the Indian Constitution and Polity.
- **Other Known Limitations**
The dataset may not cover all possible questions on Indian Polity. Additionally, all answers are in English, which may limit its use for non-English speakers.
---
**Additional Information**
- **Dataset Curators**
The dataset has been curated by an OpenAI language model.
- **Licensing Information**
The dataset follows OpenAI's standard data use policy.
- **Citation Information**
Not applicable as this is an artificial dataset.
- **Contributions**
The dataset was generated by the ChatGPT model trained by OpenAI. |
open-llm-leaderboard/results | 2023-10-11T01:16:56.000Z | [
"region:us"
] | open-llm-leaderboard | null | null | null | 11 | 66 | Entry not found |
augtoma/medmcqa | 2023-08-11T20:44:27.000Z | [
"region:us"
] | augtoma | null | null | null | 0 | 66 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: id
dtype: string
- name: question
dtype: string
- name: cop
dtype:
class_label:
names:
'0': a
'1': b
'2': c
'3': d
- name: choice_type
dtype: string
- name: exp
dtype: string
- name: subject_name
dtype: string
- name: topic_name
dtype: string
- name: options
struct:
- name: A
dtype: string
- name: B
dtype: string
- name: C
dtype: string
- name: D
dtype: string
- name: answer_idx
dtype: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 136988451
num_examples: 182822
- name: test
num_bytes: 2350095
num_examples: 4183
download_size: 90978864
dataset_size: 139338546
---
# Dataset Card for "medmcqa"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Dzeniks/wikipedia_keywords | 2023-09-01T14:58:30.000Z | [
"license:apache-2.0",
"region:us"
] | Dzeniks | null | null | null | 0 | 66 | ---
license: apache-2.0
---
|
ChangeIsKey/open-riksdag | 2023-09-13T07:57:38.000Z | [
"size_categories:1M<n<10M",
"language:sv",
"license:cc-by-4.0",
"diachronic",
"semantic change",
"region:us"
] | ChangeIsKey | This is a dataset of text from the Riksdag, Sweden's legislative body.
The original data is available without a license under the Re-use of Public Administration Documents Act (2010:566) at https://data.riksdagen.se/data/dokument
This dataset is derivative of a version compiled by Språkbanken Text (SBX) at the University of Gothenburg (Sweden). That version consists of XML files split by document source (motions, questions, protocol, etc.) and includes additional linguistic annotations. It is available under a CC BY 4.0 license at https://spraakbanken.gu.se/resurser/rd
The focus of this Hugging Face dataset is to organise the data for fine-grained diachronic modeling. To that end, this dataset includes two configurations:
# Configurations
## `sentences`
This configuration provides sentences in raw text format with their original whitespace. Sentence-level tokenisation was performed by Språkbanken.
`datasets.load_dataset('ChangeIsKey/open-riksdag', 'sentences', years=YEARS, sources=SOURCES)`
- `YEARS:list(int)` - years in the range [1960, 2022] from which sentences are drawn
- `SOURCES:list(str)` - the Open Riksdag data is split into different data sources
- `bet` _Betänkande_ ~ reports
- `ds`
- `eun` _EUN_ ~ documents from the EU committee
- `flista` _Föredragningslistor_ ~ Lists of speeches
- `fpm` _faktapromemorior_ ~ factual memoranda on EU commission proposals
- `frsrdg` _Framställning/redogörelse_ ~ petitions and reports from bodies appointed by the Riksdag
...
data fields
- `sentence` -
- `date` -
- `source`
- `document_id`
...
## `targets-103`
- `target_lemma`
- `start`
- `end`
In a nutshell, this version offers:
- all sentences including one or more of 103 target words, which were chosen by TF-IDF (described below)
- per-month subsets (with all document types combined)
- one line per sentence (sentences shorter than 4 words were discarded)
- data includes: date, source, document_id, target_word, and text.
License is CC BY 4.0 with attribution. | null | null | 0 | 66 | ---
language:
- sv
license: cc-by-4.0
size_categories:
- 1M<n<10M
pretty_name: Open Riksdag-103
tags:
- diachronic
- semantic change
---
This is a dataset of text from the Riksdag, Sweden's national legislative body.
The original data is available without a license under the Re-use of Public Administration Documents Act (2010:566) at https://data.riksdagen.se/data/dokument
This dataset is derivative of a version compiled by Språkbanken Text (SBX) at the University of Gothenburg (Sweden). That version consists of XML files split by source document type (motions, questions, protocol, etc.) and includes additional linguistic annotations. It is available under a CC BY 4.0 license at https://spraakbanken.gu.se/resurser/rd
The focus of this Hugging Face dataset is to organise the data for fine-grained diachronic modeling. In a nutshell, this version offers:
- all sentences including one or more of 103 target words, which were chosen by TF-IDF (described below)
- per-month subsets (with all document types combined)
- one line per sentence (sentences shorter than 4 words were discarded)
- data includes: date, document_type, document_id, target_word, and text.
The dataset builder requires a `years` argument, which must be an iterable of years between 1979 and 2019 (inclusive). This can be supplied to the `load_dataset` function as a keyword argument.
For example, to load raw sentences from the `prop` and `bet` data sources run:
```python
from datasets import load_dataset
data = load_dataset('ChangeIsKey/open-riksdag', 'sentences', years=range(1999, 2000), sources=['prop', 'bet'])
```
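Since each row carries a date, the per-month grouping described above can be reproduced downstream. A minimal offline sketch, using mock rows in place of the dataset's records:

```python
from collections import defaultdict
from datetime import date

# Sketch: bucket rows by (year, month) for fine-grained diachronic analysis.
# The rows below are mock stand-ins for the dataset's records.
rows = [
    {"date": date(1999, 1, 12), "target_word": "miljö", "text": "..."},
    {"date": date(1999, 1, 30), "target_word": "miljö", "text": "..."},
    {"date": date(1999, 2, 3), "target_word": "skatt", "text": "..."},
]

by_month = defaultdict(list)
for row in rows:
    by_month[(row["date"].year, row["date"].month)].append(row)

print(sorted((k, len(v)) for k, v in by_month.items()))
# [((1999, 1), 2), ((1999, 2), 1)]
```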
License is CC BY 4.0 with attribution.
|
Minami-su/roleplay_multiturn_chat_1k_zh_v0.1 | 2023-10-03T09:39:45.000Z | [
"language:zh",
"roleplay",
"multiturn_chat",
"region:us"
] | Minami-su | null | null | null | 4 | 66 | ---
language:
- zh
tags:
- roleplay
- multiturn_chat
---
## Introduction
Multi-turn roleplay dialogue data generated with self-instruct: roughly 1k distinct persona profiles and their conversations.
## Known issues:
1. Because the data is generated by the model itself, the model's own values bleed into the roleplay, so the roleplay is not fully realistic or accurate.
## About me:
I am the developer of Xiaoyu (小雨), an emotional, persona-driven AI. If you are interested in Xiaoyu, feel free to support her; she currently streams on Bilibili, and I am continuously improving her.
URL: https://live.bilibili.com/27357528?broadcast_type=0&is_room_feed=1&spm_id_from=333.999.live_users_card.0.click&live_from=86001
## Note:
Please credit the source when using this dataset.
## Citation
```
@misc{selfinstruct,
title={Self-Instruct: Aligning Language Model with Self Generated Instructions},
author={Wang, Yizhong and Kordi, Yeganeh and Mishra, Swaroop and Liu, Alisa and Smith, Noah A. and Khashabi, Daniel and Hajishirzi, Hannaneh},
journal={arXiv preprint arXiv:2212.10560},
year={2022}
}
```
|
phanvancongthanh/enamine_diversity | 2023-09-22T23:41:03.000Z | [
"region:us"
] | phanvancongthanh | null | null | null | 0 | 66 | Entry not found |
gary-roach/NLP | 2023-09-21T01:40:44.000Z | [
"region:us"
] | gary-roach | null | null | null | 0 | 66 | Entry not found |
minh21/COVID-QA-validation-sentence-transformer | 2023-09-24T01:27:54.000Z | [
"region:us"
] | minh21 | null | null | null | 0 | 66 | ---
dataset_info:
features:
- name: question
dtype: string
- name: answer
dtype: string
- name: context_chunks
sequence: string
- name: document_id
dtype: int64
- name: id
dtype: int64
splits:
- name: train
num_bytes: 95329437
num_examples: 2019
download_size: 17898620
dataset_size: 95329437
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "COVID-QA-validation-sentence-transformer"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
CollectiveCognition/chats-data-2023-09-27 | 2023-09-28T00:40:51.000Z | [
"license:mit",
"region:us"
] | CollectiveCognition | null | null | null | 11 | 66 | ---
license: mit
---
# Dataset Card for "Collective Cognition ChatGPT Conversations"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
### Dataset Summary
The "Collective Cognition ChatGPT Conversations" dataset is a collection of chat logs between users and the ChatGPT model. These conversations have been shared by users on the "Collective Cognition" website. The dataset provides insights into user interactions with language models and can be utilized for multiple purposes, including training, research, and analysis.
Visit Collective Cognition to browse or contribute chats here: https://collectivecognition.ai
### Supported Tasks
- **Language Model Finetuning**: Training through behavior cloning and knowledge instilling.
- **Dialogue Understanding**: Analyzing the flow and structure of conversations.
- **Sentiment Analysis**: Understanding the sentiment or emotion in the dialogues.
- **Topic Modeling**: Determining the topics discussed within the chats.
### Languages
The primary language of the dataset is English, but chats in other languages may appear as users contribute more conversations.
## Dataset Structure
### Data Instances
A typical data instance includes a chat log with a title, the model name, a flag indicating whether the chat used custom instructions (the instructions themselves are currently not included), and the messages, each tagged with the role of the sender (either "human" or "gpt").
The dataset is formatted to conform with ShareGPT standard.
This is an example entry in the dataset, noting that turns are variable and can be many or as few as two:
```json
{
"title": "Interest Formulas Clarification",
"model_name": "GPT-4",
"custom_instruction": false,
"conversations": [
{
"from": "human",
"value": "Is this correct? The formula for calculating simple interest (also known as Compound Interest) is given by I = P * r^n, where \n- I represents the final amount of money, also called principal or capital. This value will keep on increasing at a fixed rate per unit time denoted by 'r' and number of times denoted by 'n'. The initial investment represented by 'P', while the percentage rate expressed as an annual fraction, i.e., p/100 representing the percentage to be paid annually. In other words it\u2019s how much you earn in one year out of your original $100 invested with a 5% annual return compounded monthly."
},
{
"from": "assistant",
"value": "I'm sorry, but there seems to be some confusion in your question. The formula given and the definitions of the variables seem to be mixed up between simple interest and compound interest. Let me clarify:\n\n1. Simple Interest: This type of interest is calculated only on the original amount (principal) that you invested or loaned. The formula for simple interest is I = PRT, where:\n - I is the interest\n - P is the principal amount (initial investment)\n - R is the annual interest rate (in decimal form, so a 5% rate would be 0.05)\n - T is the time the money is invested or borrowed for, in years.\n\n2. Compound Interest: This type of interest is calculated on the initial principal and also on the accumulated interest of previous periods of a deposit or loan. The formula for compound interest is A = P(1 + r/n)^(nt), where:\n - A is the amount of money accumulated after n years, including interest.\n - P is the principal amount (the initial amount of money)\n - r is the annual interest rate (in decimal form, so a 5% rate would be 0.05)\n - n is the number of times that interest is compounded per year\n - t is the number of years the money is invested or borrowed for.\n\nIn both cases, the interest rate is usually expressed as an annual rate, regardless of the length of the term."
}
]
}
```
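For finetuning, a record in this ShareGPT-style layout is often flattened into (prompt, response) pairs. A minimal sketch (the record mirrors the example above with truncated values; since the example uses the role `"assistant"` while the field description mentions `"gpt"`, both are accepted):

```python
# Sketch: flatten a ShareGPT-style record into (prompt, response) pairs.
record = {
    "title": "Interest Formulas Clarification",
    "conversations": [
        {"from": "human", "value": "Is this correct? The formula ..."},
        {"from": "assistant", "value": "I'm sorry, but there seems to be ..."},
    ],
}

pairs = []
turns = record["conversations"]
for i in range(0, len(turns) - 1, 2):
    # keep only well-formed human -> model turn pairs
    if turns[i]["from"] == "human" and turns[i + 1]["from"] in ("assistant", "gpt"):
        pairs.append((turns[i]["value"], turns[i + 1]["value"]))

print(len(pairs))  # 1
```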
### Data Splits
Currently, the dataset is not divided into specific splits (train, test, validation).
## Dataset Creation
### Curation Rationale
The dataset was curated to provide insights into how users interact with language models and to contribute to the broader NLP community's resources.
### Source Data
The data originates from user contributions on the "Collective Cognition" website.
### Personal and Sensitive Information
All chats uploaded to the Collective Cognition website are made public, and are uploaded as a new dataset periodically. If you would like to have your chat removed, please email admin@collectivecognition.ai
## Considerations for Using the Data
### Social Impact of Dataset
The dataset offers a glimpse into the interaction dynamics between humans and AI models. It can be instrumental for researchers studying human-AI collaboration.
### Discussion of Biases
There might be biases in the dataset based on the types of users contributing chat logs and the topics they discuss with ChatGPT, particularly centered around what users may utilize ChatGPT for the most.
### Other Known Limitations
The dataset is dependent on the voluntary contributions of users. Hence, it might not represent the entire spectrum of interactions that users have with ChatGPT.
## Additional Information
### Licensing Information
MIT |
doqa | 2023-04-05T10:04:58.000Z | [
"language:en",
"arxiv:2005.01328",
"region:us"
] | null | DoQA is a dataset for accessing Domain Specific FAQs via conversational QA that contains 2,437 information-seeking question/answer dialogues
(10,917 questions in total) on three different domains: cooking, travel and movies. Note that we include in the generic concept of FAQs also
Community Question Answering sites, as well as corporate information in intranets which is maintained in textual form similar to FAQs, often
referred to as internal “knowledge bases”.
These dialogues are created by crowd workers that play the following two roles: the user who asks questions about a given topic posted in Stack
Exchange (https://stackexchange.com/), and the domain expert who replies to the questions by selecting a short span of text from the long textual
reply in the original post. The expert can rephrase the selected span, in order to make it look more natural. The dataset covers unanswerable
questions and some relevant dialogue acts.
DoQA enables the development and evaluation of conversational QA systems that help users access the knowledge buried in domain specific FAQs. | @misc{campos2020doqa,
title={DoQA -- Accessing Domain-Specific FAQs via Conversational QA},
author={Jon Ander Campos and Arantxa Otegi and Aitor Soroa and Jan Deriu and Mark Cieliebak and Eneko Agirre},
year={2020},
eprint={2005.01328},
archivePrefix={arXiv},
primaryClass={cs.CL}
} | null | 0 | 65 | ---
language:
- en
paperswithcode_id: doqa
pretty_name: DoQA
dataset_info:
- config_name: cooking
features:
- name: title
dtype: string
- name: background
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: id
dtype: string
- name: answers
sequence:
- name: text
dtype: string
- name: answer_start
dtype: int32
- name: followup
dtype: string
- name: yesno
dtype: string
- name: orig_answer
sequence:
- name: text
dtype: string
- name: answer_start
dtype: int32
splits:
- name: test
num_bytes: 2969064
num_examples: 1797
- name: validation
num_bytes: 1461613
num_examples: 911
- name: train
num_bytes: 6881681
num_examples: 4612
download_size: 4197671
dataset_size: 11312358
- config_name: movies
features:
- name: title
dtype: string
- name: background
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: id
dtype: string
- name: answers
sequence:
- name: text
dtype: string
- name: answer_start
dtype: int32
- name: followup
dtype: string
- name: yesno
dtype: string
- name: orig_answer
sequence:
- name: text
dtype: string
- name: answer_start
dtype: int32
splits:
- name: test
num_bytes: 3166075
num_examples: 1884
download_size: 4197671
dataset_size: 3166075
- config_name: travel
features:
- name: title
dtype: string
- name: background
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: id
dtype: string
- name: answers
sequence:
- name: text
dtype: string
- name: answer_start
dtype: int32
- name: followup
dtype: string
- name: yesno
dtype: string
- name: orig_answer
sequence:
- name: text
dtype: string
- name: answer_start
dtype: int32
splits:
- name: test
num_bytes: 3216374
num_examples: 1713
download_size: 4197671
dataset_size: 3216374
---
# Dataset Card for "doqa"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://github.com/RevanthRameshkumar/CRD3](https://github.com/RevanthRameshkumar/CRD3)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 12.59 MB
- **Size of the generated dataset:** 17.70 MB
- **Total amount of disk used:** 30.28 MB
### Dataset Summary
DoQA is a dataset for accessing Domain Specific FAQs via conversational QA that contains 2,437 information-seeking question/answer dialogues
(10,917 questions in total) on three different domains: cooking, travel and movies. Note that we include in the generic concept of FAQs also
Community Question Answering sites, as well as corporate information in intranets which is maintained in textual form similar to FAQs, often
referred to as internal “knowledge bases”.
These dialogues are created by crowd workers that play the following two roles: the user who asks questions about a given topic posted in Stack
Exchange (https://stackexchange.com/), and the domain expert who replies to the questions by selecting a short span of text from the long textual
reply in the original post. The expert can rephrase the selected span, in order to make it look more natural. The dataset covers unanswerable
questions and some relevant dialogue acts.
DoQA enables the development and evaluation of conversational QA systems that help users access the knowledge buried in domain specific FAQs.
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Structure
### Data Instances
#### cooking
- **Size of downloaded dataset files:** 4.19 MB
- **Size of the generated dataset:** 11.31 MB
- **Total amount of disk used:** 15.51 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"answers": {
"answer_start": [852],
"text": ["CANNOTANSWER"]
},
"background": "\"So, over mixing batter forms gluten, which in turn hardens the cake. Fine.The problem is that I don't want lumps in the cakes, ...",
"context": "\"Milk won't help you - it's mostly water, and gluten develops from flour (more accurately, specific proteins in flour) and water...",
"followup": "n",
"id": "C_64ce44d5f14347f488eb04b50387f022_q#2",
"orig_answer": {
"answer_start": [852],
"text": ["CANNOTANSWER"]
},
"question": "Ok. What can I add to make it more softer and avoid hardening?",
"title": "What to add to the batter of the cake to avoid hardening when the gluten formation can't be avoided?",
"yesno": "x"
}
```
#### movies
- **Size of downloaded dataset files:** 4.19 MB
- **Size of the generated dataset:** 3.17 MB
- **Total amount of disk used:** 7.36 MB
An example of 'test' looks as follows.
```
This example was too long and was cropped:
{
"answers": {
"answer_start": [852],
"text": ["CANNOTANSWER"]
},
"background": "\"So, over mixing batter forms gluten, which in turn hardens the cake. Fine.The problem is that I don't want lumps in the cakes, ...",
"context": "\"Milk won't help you - it's mostly water, and gluten develops from flour (more accurately, specific proteins in flour) and water...",
"followup": "n",
"id": "C_64ce44d5f14347f488eb04b50387f022_q#2",
"orig_answer": {
"answer_start": [852],
"text": ["CANNOTANSWER"]
},
"question": "Ok. What can I add to make it more softer and avoid hardening?",
"title": "What to add to the batter of the cake to avoid hardening when the gluten formation can't be avoided?",
"yesno": "x"
}
```
#### travel
- **Size of downloaded dataset files:** 4.19 MB
- **Size of the generated dataset:** 3.22 MB
- **Total amount of disk used:** 7.41 MB
An example of 'test' looks as follows.
```
This example was too long and was cropped:
{
"answers": {
"answer_start": [852],
"text": ["CANNOTANSWER"]
},
"background": "\"So, over mixing batter forms gluten, which in turn hardens the cake. Fine.The problem is that I don't want lumps in the cakes, ...",
"context": "\"Milk won't help you - it's mostly water, and gluten develops from flour (more accurately, specific proteins in flour) and water...",
"followup": "n",
"id": "C_64ce44d5f14347f488eb04b50387f022_q#2",
"orig_answer": {
"answer_start": [852],
"text": ["CANNOTANSWER"]
},
"question": "Ok. What can I add to make it more softer and avoid hardening?",
"title": "What to add to the batter of the cake to avoid hardening when the gluten formation can't be avoided?",
"yesno": "x"
}
```
### Data Fields
The data fields are the same among all splits.
#### cooking
- `title`: a `string` feature.
- `background`: a `string` feature.
- `context`: a `string` feature.
- `question`: a `string` feature.
- `id`: a `string` feature.
- `answers`: a dictionary feature containing:
- `text`: a `string` feature.
- `answer_start`: a `int32` feature.
- `followup`: a `string` feature.
- `yesno`: a `string` feature.
- `orig_answer`: a dictionary feature containing:
- `text`: a `string` feature.
- `answer_start`: a `int32` feature.
#### movies
- `title`: a `string` feature.
- `background`: a `string` feature.
- `context`: a `string` feature.
- `question`: a `string` feature.
- `id`: a `string` feature.
- `answers`: a dictionary feature containing:
- `text`: a `string` feature.
- `answer_start`: a `int32` feature.
- `followup`: a `string` feature.
- `yesno`: a `string` feature.
- `orig_answer`: a dictionary feature containing:
- `text`: a `string` feature.
- `answer_start`: a `int32` feature.
#### travel
- `title`: a `string` feature.
- `background`: a `string` feature.
- `context`: a `string` feature.
- `question`: a `string` feature.
- `id`: a `string` feature.
- `answers`: a dictionary feature containing:
- `text`: a `string` feature.
- `answer_start`: a `int32` feature.
- `followup`: a `string` feature.
- `yesno`: a `string` feature.
- `orig_answer`: a dictionary feature containing:
- `text`: a `string` feature.
- `answer_start`: a `int32` feature.
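The span fields above can be used to recover the answer text directly from the context. A minimal sketch (the example values here are illustrative, not taken from the real data; as in the cropped examples above, the sentinel string `CANNOTANSWER` marks unanswerable questions):

```python
# Sketch: recover an answer span from `context` via the `answers` fields,
# treating the sentinel "CANNOTANSWER" as an unanswerable question.
example = {
    "context": "Milk won't help you - it's mostly water. CANNOTANSWER",
    "answers": {"text": ["mostly water"], "answer_start": [27]},
}
unanswerable = {
    "context": "... CANNOTANSWER",
    "answers": {"text": ["CANNOTANSWER"], "answer_start": [4]},
}

def extract_answer(ex):
    text = ex["answers"]["text"][0]
    if text == "CANNOTANSWER":
        return None  # unanswerable question
    start = ex["answers"]["answer_start"][0]
    return ex["context"][start : start + len(text)]

print(extract_answer(example))       # mostly water
print(extract_answer(unanswerable))  # None
```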
### Data Splits
#### cooking
| |train|validation|test|
|-------|----:|---------:|---:|
|cooking| 4612| 911|1797|
#### movies
| |test|
|------|---:|
|movies|1884|
#### travel
| |test|
|------|---:|
|travel|1713|
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@misc{campos2020doqa,
title={DoQA -- Accessing Domain-Specific FAQs via Conversational QA},
author={Jon Ander Campos and Arantxa Otegi and Aitor Soroa and Jan Deriu and Mark Cieliebak and Eneko Agirre},
year={2020},
eprint={2005.01328},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
### Contributions
Thanks to [@mariamabarham](https://github.com/mariamabarham), [@thomwolf](https://github.com/thomwolf), [@lhoestq](https://github.com/lhoestq) for adding this dataset. |
hate_speech_pl | 2022-11-03T16:15:27.000Z | [
"task_categories:text-classification",
"task_ids:text-scoring",
"task_ids:multi-class-classification",
"task_ids:multi-label-classification",
"task_ids:sentiment-classification",
"task_ids:sentiment-scoring",
"task_ids:topic-classification",
"annotations_creators:expert-generated",
"language_creator... | null | HateSpeech corpus in the current version contains over 2000 posts crawled from public Polish web. They represent various types and degrees of offensive language, expressed toward minorities (eg. ethnical, racial). The data were annotated manually. | null | null | 2 | 65 | ---
annotations_creators:
- expert-generated
language_creators:
- found
language:
- pl
license:
- cc-by-nc-sa-3.0
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- text-scoring
- multi-class-classification
- multi-label-classification
- sentiment-classification
- sentiment-scoring
- topic-classification
paperswithcode_id: null
pretty_name: HateSpeechPl
dataset_info:
features:
- name: id
dtype: uint16
- name: text_id
dtype: uint32
- name: annotator_id
dtype: uint8
- name: minority_id
dtype: uint8
- name: negative_emotions
dtype: bool
- name: call_to_action
dtype: bool
- name: source_of_knowledge
dtype: uint8
- name: irony_sarcasm
dtype: bool
- name: topic
dtype: uint8
- name: text
dtype: string
- name: rating
dtype: uint8
splits:
- name: train
num_bytes: 3436190
num_examples: 13887
download_size: 3877954
dataset_size: 3436190
---
# Dataset Card for HateSpeechPl
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** http://zil.ipipan.waw.pl/HateSpeech
- **Repository:** [N/A]
- **Paper:** http://www.qualitativesociologyreview.org/PL/Volume38/PSJ_13_2_Troszynski_Wawer.pdf
- **Leaderboard:** [N/A]
- **Point of Contact:** [Marek Troszyński](mtroszynski@civitas.edu.pl), [Aleksander Wawer](axw@ipipan.waw.pl)
### Dataset Summary
The dataset was created to analyze the possibility of automating the recognition of hate speech in Polish. It was collected from the Polish forums and represents various types and degrees of offensive language, expressed towards minorities.
The original dataset is provided as an export of MySQL tables, which makes it hard to load. For that reason, it was converted to CSV and published in a GitHub repository.
### Supported Tasks and Leaderboards
- `text-classification`: The dataset can be used for text classification on several target fields, such as the presence of irony/sarcasm, the minority a text refers to, or its topic.
- `text-scoring`: Sentiment scoring is another task that can be performed on this dataset, using the `rating` field.
### Languages
Polish, collected from public forums, including the HTML formatting of the text.
## Dataset Structure
### Data Instances
The dataset consists of three collections, originally provided as separate MySQL tables and represented here as three CSV files.
```
{
'id': 1,
'text_id': 121713,
'annotator_id': 1,
'minority_id': 72,
'negative_emotions': false,
'call_to_action': false,
'source_of_knowledge': 2,
'irony_sarcasm': false,
'topic': 18,
'text': ' <font color=\"blue\"> Niemiec</font> mówi co innego',
'rating': 0
}
```
### Data Fields
- `id`: unique identifier of the entry
- `text_id`: text identifier, useful when a single text is rated several times by different annotators
- `annotator_id`: identifier of the person who annotated the text
- `minority_id`: the internal identifier of the minority described in the text
- `negative_emotions`: boolean indicator of the presence of negative emotions in the text
- `call_to_action`: boolean indicator set to true if the text calls the audience to perform an action, typically with negative emotions
- `source_of_knowledge`: categorical variable describing the source of knowledge behind the post rating - 0, 1 or 2 (direct, lexical or contextual; a description of the meaning of each value could not be found)
- `irony_sarcasm`: boolean indicator of the presence of irony or sarcasm
- `topic`: internal identifier of the topic the text is about
- `text`: post text content
- `rating`: integer value, from 0 to 4 - the higher the value, the more negative the text content is
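Because a single `text_id` can be rated by several annotators, a natural first step is aggregating the ratings per text. A minimal sketch with invented values:

```python
from collections import defaultdict

# Toy rows shaped like the fields above (invented values): the same
# text_id can appear several times, once per annotator.
rows = [
    {"text_id": 121713, "annotator_id": 1, "rating": 0},
    {"text_id": 121713, "annotator_id": 2, "rating": 2},
    {"text_id": 121714, "annotator_id": 1, "rating": 4},
]

def mean_rating_per_text(rows):
    """Aggregate the 0-4 negativity ratings of all annotators per text_id."""
    ratings = defaultdict(list)
    for row in rows:
        ratings[row["text_id"]].append(row["rating"])
    return {tid: sum(r) / len(r) for tid, r in ratings.items()}

print(mean_rating_per_text(rows))  # {121713: 1.0, 121714: 4.0}
```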
### Data Splits
The dataset was not originally split at all.
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
The dataset was collected from public Polish web forums.
[More Information Needed]
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
[More Information Needed]
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
The dataset doesn't contain any personal or sensitive information.
## Considerations for Using the Data
### Social Impact of Dataset
Automated hate speech recognition is the main beneficial application of this dataset.
### Discussion of Biases
The dataset contains negative posts only and may therefore underrepresent the language as a whole.
### Other Known Limitations
Dataset provided for research purposes only. Please check dataset license for additional information.
## Additional Information
### Dataset Curators
The dataset was created by Marek Troszyński and Aleksander Wawer, during work done at [IPI PAN](https://www.ipipan.waw.pl/).
### Licensing Information
According to [Metashare](http://metashare.nlp.ipipan.waw.pl/metashare/repository/browse/polish-hatespeech-corpus/21b7e2366b0011e284b6000423bfd61cbc7616f601724f09bafc8a62c42d56de/), the dataset is licensed under CC-BY-NC-SA, but the version is not mentioned.
### Citation Information
```
@article{troszynski2017czy,
title={Czy komputer rozpozna hejtera? Wykorzystanie uczenia maszynowego (ML) w jako{\'s}ciowej analizie danych},
author={Troszy{\'n}ski, Marek and Wawer, Aleksandra},
journal={Przegl{\k{a}}d Socjologii Jako{\'s}ciowej},
volume={13},
number={2},
pages={62--80},
year={2017},
publisher={Uniwersytet {\L}{\'o}dzki, Wydzia{\l} Ekonomiczno-Socjologiczny, Katedra Socjologii~…}
}
```
### Contributions
Thanks to [@kacperlukawski](https://github.com/kacperlukawski) for adding this dataset. |
anuragshas/ur_opus100_processed | 2022-01-30T16:03:56.000Z | [
"region:us"
] | anuragshas | null | null | null | 1 | 65 | Entry not found |
lavis-nlp/german_legal_sentences | 2022-10-20T18:34:19.000Z | [
"task_categories:text-retrieval",
"task_ids:semantic-similarity-scoring",
"annotations_creators:machine-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:n>1M",
"source_datasets:original",
"language:de",
"license:unknown",
"arxiv:2005.13342",
"arxiv:2010.1025... | lavis-nlp | German Legal Sentences (GLS) is an automatically generated training dataset for semantic sentence
matching in the domain in german legal documents. It follows the concept of weak supervision, where
imperfect labels are generated using multiple heuristics. For this purpose we use a combination of
legal citation matching and BM25 similarity. The contained sentences and their citations are parsed
from real judicial decisions provided by [Open Legal Data](http://openlegaldata.io/) | coming soon | null | 3 | 65 | ---
annotations_creators:
- machine-generated
language_creators:
- found
language:
- de
license:
- unknown
multilinguality:
- monolingual
size_categories:
- n>1M
source_datasets:
- original
task_categories:
- text-retrieval
- text-scoring
task_ids:
- semantic-similarity-scoring
- text-retrieval-other-example-based-retrieval
---
# Dataset Card for German Legal Sentences
## Table of Contents
- [Dataset Card for [Dataset Name]](#dataset-card-for-dataset-name)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
- [Who are the source language producers?](#who-are-the-source-language-producers)
- [Annotations](#annotations)
- [Annotation process](#annotation-process)
- [Who are the annotators?](#who-are-the-annotators)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://lavis-nlp.github.io/german_legal_sentences/
- **Repository:** https://github.com/lavis-nlp/german_legal_sentences
- **Paper:** coming soon
- **Leaderboard:**
- **Point of Contact:** [Marco Wrzalik](mailto:marco.wrzalik@hs-rm.de)
### Dataset Summary
German Legal Sentences (GLS) is an automatically generated training dataset for semantic sentence matching and citation recommendation in the domain of German legal documents. It follows the concept of weak supervision, where imperfect labels are generated using multiple heuristics. For this purpose we use a combination of legal citation matching and BM25 similarity. The contained sentences and their citations are parsed from real judicial decisions provided by [Open Legal Data](http://openlegaldata.io/) (https://arxiv.org/abs/2005.13342).
### Supported Tasks and Leaderboards
The main associated task is *Semantic Similarity Ranking*. We propose to use the *Mean Reciprocal Rank* (MRR) cut at the tenth position as well as MAP and Recall on rankings of size 200. As baselines we provide the following:
| Method | MRR@10 | MAP@200 | Recall@200 |
|-----------------------------------|---------:|-----------:|------------:|
| BM25 - default `(k1=1.2; b=0.75)` | 25.7 | 17.6 | 42.9 |
| BM25 - tuned `(k1=0.47; b=0.97)` | 26.2 | 18.1 | 43.3 |
| [CoRT](https://arxiv.org/abs/2010.10252) | 31.2 | 21.4 | 56.2 |
| [CoRT + BM25](https://arxiv.org/abs/2010.10252) | 32.1 | 22.1 | 67.1 |
In addition, we want to support a *Citation Recommendation* task in the future.
If you wish to contribute evaluation measures or give any suggestion or critique, please write an [e-mail](mailto:marco.wrzalik@hs-rm.de).
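For reference, MRR cut at a position *k* can be computed from per-query rankings as in this plain-Python sketch (the rankings are made up):

```python
def mrr_at_k(results, k=10):
    """Mean reciprocal rank cut at k. `results` is a list of
    (ranked_doc_ids, relevant_doc_ids) pairs, one per query."""
    total = 0.0
    for ranked, relevant in results:
        for rank, doc_id in enumerate(ranked[:k], start=1):
            if doc_id in relevant:
                total += 1.0 / rank
                break
    return total / len(results)

# Two toy queries: the first finds a relevant doc at rank 2,
# the second finds none within the cutoff.
toy = [
    (["d3", "d1", "d7"], {"d1"}),
    (["d4", "d5", "d6"], {"d9"}),
]
print(mrr_at_k(toy, k=10))  # (1/2 + 0) / 2 = 0.25
```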
### Languages
This dataset contains texts from the specific domain of German court decisions.
## Dataset Structure
### Data Instances
```
{'query.doc_id': 28860,
'query.ref_ids': [6215, 248, 248],
'query.sent_id': 304863,
'query.text': 'Zudem ist zu berücksichtigen , dass die Vollverzinsung nach '
'[REF] i. V. m. [REF] gleichermaßen zugunsten wie zulasten des '
'Steuerpflichtigen wirkt , sodass bei einer Überzahlung durch '
'den Steuerpflichtigen der Staat dem Steuerpflichtigen neben '
'der Erstattung ebenfalls den entstandenen potentiellen Zins- '
'und Liquiditätsnachteil in der pauschalierten Höhe des [REF] '
'zu ersetzen hat , unabhängig davon , in welcher Höhe dem '
'Berechtigten tatsächlich Zinsen entgangen sind .',
'related.doc_id': 56348,
'related.ref_ids': [248, 6215, 62375],
'related.sent_id': 558646,
'related.text': 'Ferner ist zu berücksichtigen , dass der Zinssatz des [REF] '
'im Rahmen des [REF] sowohl für Steuernachforderung wie auch '
'für Steuererstattungen und damit gleichermaßen zugunsten wie '
'zulasten des Steuerpflichtigen wirkt , Vgl. BVerfG , '
'Nichtannahmebeschluss vom [DATE] [REF] , juris , mit der '
'Folge , dass auch Erstattungsansprüche unabhängig davon , ob '
'und in welcher Höhe dem Berechtigten tatsächlich Zinsen '
'entgangen sind , mit monatlich 0,0 % verzinst werden .'}
```
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
The documents we take from [Open Legal Data](http://openlegaldata.io/) (https://arxiv.org/abs/2005.13342) are first preprocessed by removing line breaks, enumeration characters and headings. Afterwards we parse legal citations using hand-crafted regular expressions. Each citation is split into its components and normalized, so that different variants of the same citation are matched together. For instance, "§211 Absatz 1 des Strafgesetzbuches" is normalized to "§ 211 Abs. 1 StGB". Every time we discover an unknown citation, we assign a unique id to it. We use these ids to replace parsed citations in the document text with a simple reference tag containing this id (e.g. `[REF321]`). At the same time we parse dates and replace them with the date tag `[DATE]`. Both steps remove periods that could be confused with sentence boundaries, which makes the next stage easier.
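A much-simplified sketch of this kind of normalization; the real pipeline uses a larger set of hand-crafted regular expressions, and the toy rule below covers only the single citation shape from the example:

```python
import re

# Toy normalizer covering a single citation shape; the actual GLS
# pipeline uses many more hand-crafted patterns.
PATTERN = re.compile(r"§\s*(\d+)\s+Absatz\s+(\d+)\s+des\s+Strafgesetzbuches")

def normalize_citation(text):
    """Rewrite one long-form citation variant into its normalized form."""
    return PATTERN.sub(r"§ \1 Abs. \2 StGB", text)

print(normalize_citation("§211 Absatz 1 des Strafgesetzbuches"))
# → § 211 Abs. 1 StGB
```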
We use [SoMaJo](https://github.com/tsproisl/SoMaJo) to perform sentence tokenizing on the pre-processed documents. Each sentence that does not contain at least one legal citation is discarded. For the rest we assign sentence ids, remove all reference ids from them as well as any contents in braces (braces often contain large enumerations of citations and their sources). At the same time we keep track of the corresponding document from which a sentence originates and which references occur in it.
#### Who are the source language producers?
The source language originates in the context of German court proceedings.
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
The annotations are machine-generated.
### Personal and Sensitive Information
The source documents are already public and anonymized.
## Considerations for Using the Data
### Social Impact of Dataset
With this dataset, we strive towards better accessibility of court decisions to the general public by accelerating research on semantic search technologies. We hope that emerging search technologies will enable the layperson to find relevant information without knowing the specific terms used by lawyers.
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
Coming soon!
### Contributions
Thanks to [@mwrzalik](https://github.com/mwrzalik) for adding this dataset. |
embedding-data/simple-wiki | 2022-08-02T03:34:17.000Z | [
"task_categories:sentence-similarity",
"task_ids:semantic-similarity-classification",
"language:en",
"license:mit",
"region:us"
] | embedding-data | null | null | null | 5 | 65 | ---
license: mit
language:
- en
paperswithcode_id: embedding-data/simple-wiki
pretty_name: simple-wiki
task_categories:
- sentence-similarity
- paraphrase-mining
task_ids:
- semantic-similarity-classification
---
# Dataset Card for "simple-wiki"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://cs.pomona.edu/~dkauchak/simplification/](https://cs.pomona.edu/~dkauchak/simplification/)
- **Repository:** [More Information Needed](https://cs.pomona.edu/~dkauchak/simplification/)
- **Paper:** [https://aclanthology.org/P11-2117/](https://aclanthology.org/P11-2117/)
- **Point of Contact:** [David Kauchak](dkauchak@cs.pomona.edu)
### Dataset Summary
This dataset contains pairs of equivalent sentences obtained from Wikipedia.
### Supported Tasks
- [Sentence Transformers](https://huggingface.co/sentence-transformers) training; useful for semantic search and sentence similarity.
### Languages
- English.
## Dataset Structure
Each example in the dataset contains pairs of equivalent sentences and is formatted as a dictionary with the key "set" and a list with the sentences as "value".
```
{"set": [sentence_1, sentence_2]}
{"set": [sentence_1, sentence_2]}
...
{"set": [sentence_1, sentence_2]}
```
This dataset is useful for training Sentence Transformers models on pairs of semantically equivalent sentences.
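As a sketch, lines in this format can be parsed into plain sentence pairs with the standard library (the example lines are made up):

```python
import json

# Two made-up lines in the {"set": [s1, s2]} format described above.
raw_lines = [
    '{"set": ["The cat sat on the mat.", "A cat was sitting on the mat."]}',
    '{"set": ["Paris is the capital of France.", "France\'s capital is Paris."]}',
]

def parse_pairs(lines):
    """Turn JSON lines of {"set": [s1, s2]} into (s1, s2) tuples."""
    return [tuple(json.loads(line)["set"]) for line in lines]

pairs = parse_pairs(raw_lines)
print(pairs[0])  # ('The cat sat on the mat.', 'A cat was sitting on the mat.')
```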
### Usage Example
Install the 🤗 Datasets library with `pip install datasets` and load the dataset from the Hub with:
```python
from datasets import load_dataset
dataset = load_dataset("embedding-data/simple-wiki")
```
The dataset is loaded as a `DatasetDict` and has the format:
```python
DatasetDict({
train: Dataset({
features: ['set'],
num_rows: 102225
})
})
```
Review an example `i` with:
```python
dataset["train"][i]["set"]
```
### Curation Rationale
[More Information Needed](https://cs.pomona.edu/~dkauchak/simplification/)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://cs.pomona.edu/~dkauchak/simplification/)
#### Who are the source language producers?
[More Information Needed](https://cs.pomona.edu/~dkauchak/simplification/)
### Annotations
#### Annotation process
[More Information Needed](https://cs.pomona.edu/~dkauchak/simplification/)
#### Who are the annotators?
[More Information Needed](https://cs.pomona.edu/~dkauchak/simplification/)
### Personal and Sensitive Information
[More Information Needed](https://cs.pomona.edu/~dkauchak/simplification/)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://cs.pomona.edu/~dkauchak/simplification/)
### Discussion of Biases
[More Information Needed](https://cs.pomona.edu/~dkauchak/simplification/)
### Other Known Limitations
[More Information Needed](https://cs.pomona.edu/~dkauchak/simplification/)
## Additional Information
### Dataset Curators
[More Information Needed](https://cs.pomona.edu/~dkauchak/simplification/)
### Licensing Information
[More Information Needed](https://cs.pomona.edu/~dkauchak/simplification/)
### Contributions
|
lucadiliello/asnq | 2022-12-05T11:17:24.000Z | [
"region:us"
] | lucadiliello | null | null | null | 0 | 65 | ---
dataset_info:
features:
- name: label
dtype: int64
- name: question
dtype: string
- name: answer
dtype: string
- name: key
dtype: int64
splits:
- name: test
num_bytes: 87612019
num_examples: 466148
- name: dev
num_bytes: 87607015
num_examples: 463914
- name: train
num_bytes: 3814936393
num_examples: 20377568
download_size: 2602671423
dataset_size: 3990155427
---
# Dataset Card for "asnq"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
musabg/wikipedia-tr | 2023-05-16T20:32:53.000Z | [
"task_categories:fill-mask",
"task_categories:text-generation",
"task_ids:masked-language-modeling",
"annotations_creators:no-annotation",
"language_creators:crowdsourced",
"size_categories:100K<n<1M",
"source_datasets:original",
"language:tr",
"license:cc-by-sa-3.0",
"license:gfdl",
"wikipedia,... | musabg | null | null | null | 3 | 65 | ---
annotations_creators:
- no-annotation
language:
- tr
language_creators:
- crowdsourced
license:
- cc-by-sa-3.0
- gfdl
multilinguality: []
pretty_name: Turkish Wikipedia 2023
size_categories:
- 100K<n<1M
source_datasets:
- original
tags:
- wikipedia, wiki,
task_categories:
- fill-mask
- text-generation
task_ids:
- masked-language-modeling
dataset_info:
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 956353353
num_examples: 520542
download_size: 529875169
dataset_size: 956353353
---
# 📖 Türkçe Vikipedi Mayıs 2023
Bu veri kümesi, Türkçe Vikipedi'den alınan makalelerin bir derlemesi olup, maskeleme dil modelleme ve metin oluşturma görevleri için tasarlanmıştır.
## 🗣️ Etiketlemeler
Bu veri kümesindeki makaleler, özellikle belirli bir görev için etiketlenmemiş olup, veri kümesi etiketsizdir.
## 🌐 Dil
Bu veri kümesi Türkçe yazılmış olup, gönüllülerden oluşan bir ekip tarafından topluluk katılımı yöntemleri ile oluşturulmuştur.
## 📜 Lisans
CC-BY-SA 3.0 ve GFDL
## 💻 Kaynak Veri Kümeleri
Bu veri kümesi, Türkçe Vikipedi'den oluşturulan orijinal bir veri kümesidir.
Türkçe Vikipedi veri kümesini kullandığınız için teşekkürler! Dil modelleme ve metin oluşturma görevleriniz için faydalı olmasını umuyoruz.
---
# 📖 Wikipedia Turkish 2023
This dataset is a collection of articles from the Turkish Wikipedia and is designed to be used for masked language modeling and text generation tasks.
## 📚 Dataset Info
Processed and cleaned using Huggingface wikipedia cleaner.
## 🗣️ Annotations
The articles in this dataset were not specifically annotated for any particular task, meaning that the dataset is unlabeled.
## 🌐 Language
This dataset is written in Turkish and was created using crowdsourcing methods by a team of volunteers.
## 📜 License
CC-BY-SA 3.0 and GFDL
## 💻 Source Datasets
This dataset is an original dataset created from the Turkish Wikipedia.
|
Dahoas/rl-prompt-dataset | 2023-03-17T14:08:30.000Z | [
"region:us"
] | Dahoas | null | null | null | 2 | 65 | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: response
dtype: string
- name: chosen
dtype: string
- name: rejected
dtype: string
splits:
- name: train
num_bytes: 331075688.0
num_examples: 201417
- name: test
num_bytes: 7649255
num_examples: 5103
download_size: 206459232
dataset_size: 338724943.0
---
# Dataset Card for "rl-prompt-dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
dsfsi/vukuzenzele-sentence-aligned | 2023-09-27T07:24:49.000Z | [
"task_categories:sentence-similarity",
"task_categories:translation",
"language:eng",
"language:afr",
"language:nbl",
"language:xho",
"language:zul",
"language:sot",
"language:nso",
"language:tsn",
"language:ssw",
"language:ven",
"language:tso",
"license:cc-by-4.0",
"multilingual",
"go... | dsfsi | The dataset contains editions from the South African government magazine Vuk'uzenzele. Data was scraped from PDFs that have been placed in the data/raw folder. The PDFS were obtained from the Vuk'uzenzele website. | @dataset{marivate_vukosi_2023_7598540, author = {Marivate, Vukosi and Njini, Daniel and Madodonga, Andani and Lastrucci, Richard and Dzingirai, Isheanesu Rajab, Jenalea}, title = {The Vuk'uzenzele South African Multilingual Corpus}, month = feb, year = 2023, publisher = {Zenodo}, doi = {10.5281/zenodo.7598539}, url = {https://doi.org/10.5281/zenodo.7598539} } | null | 0 | 65 | ---
language:
- eng
- afr
- nbl
- xho
- zul
- sot
- nso
- tsn
- ssw
- ven
- tso
pretty_name: "The Vuk'uzenzele South African Multilingual Corpus"
tags:
- multilingual
- government
license: "cc-by-4.0"
task_categories:
- sentence-similarity
- translation
arxiv: 2303.03750
---
# The Vuk'uzenzele South African Multilingual Corpus
Github: [https://github.com/dsfsi/vukuzenzele-nlp/](https://github.com/dsfsi/vukuzenzele-nlp/)
Zenodo: [10.5281/zenodo.7598539](https://doi.org/10.5281/zenodo.7598539)
Arxiv Preprint: [arXiv:2303.03750](https://arxiv.org/abs/2303.03750)
# About
The dataset was obtained from the South African government magazine Vuk'uzenzele, created by the [Government Communication and Information System (GCIS)](https://www.gcis.gov.za/).
The original raw PDFs were obtained from the [Vuk'uzenzele website](https://www.vukuzenzele.gov.za/).
The datasets contain government magazine editions in 11 languages, namely:
| Language | Code | Language | Code |
|------------|-------|------------|-------|
| English | (eng) | Sepedi | (nso) |
| Afrikaans | (afr) | Setswana | (tsn) |
| isiNdebele | (nbl) | Siswati | (ssw) |
| isiXhosa | (xho) | Tshivenda | (ven) |
| isiZulu | (zul) | Xitsonga | (tso) |
| Sesotho | (sot) |
## Available pairings
The alignment direction is bidirectional, i.e. xho-zul is the same as zul-xho.
afr-eng; afr-nbl; afr-nso; afr-sot; afr-ssw; afr-tsn; afr-tso; afr-ven; afr-xho; afr-zul
eng-nbl; eng-nso; eng-sot ;eng-ssw; eng-tsn; eng-tso; eng-ven; eng-xho; eng-zul
nbl-nso; nbl-sot; nbl-ssw; nbl-tsn; nbl-tso; nbl-ven; nbl-xho; nbl-zul
nso-sot; nso-ssw; nso-tsn; nso-tso; nso-ven; nso-xho; nso-zul
sot-ssw; sot-tsn; sot-tso; sot-ven; sot-xho; sot-zul
ssw-tsn; ssw-tso; ssw-ven; ssw-xho; ssw-zul
tsn-tso; tsn-ven; tsn-xho; tsn-zul
tso-ven; tso-xho; tso-zul
ven-xho; ven-zul
xho-zul
# Disclaimer
This dataset contains machine-readable data extracted from PDF documents, from https://www.vukuzenzele.gov.za/, provided by the Government Communication Information System (GCIS). While efforts were made to ensure the accuracy and completeness of this data, there may be errors or discrepancies between the original publications and this dataset. No warranties, guarantees or representations are given in relation to the information contained in the dataset. The members of the Data Science for Societal Impact Research Group bear no responsibility and/or liability for any such errors or discrepancies in this dataset. The Government Communication Information System (GCIS) bears no responsibility and/or liability for any such errors or discrepancies in this dataset. It is recommended that users verify all information contained herein before making decisions based upon this information.
# Datasets
The datasets consist of pairwise sentence aligned data. There are 55 distinct datasets of paired sentences.
The data is obtained by comparing [LASER](https://github.com/facebookresearch/LASER) embeddings of sentence tokens between two languages. If the similarity is high, the sentences are deemed semantic equivalents of one another and the pair is written to the output.
Naming convention:
The naming structure of the pairwise_sentence_aligned folder is `aligned-{src_lang_code}-{tgt_lang_code}.csv`.
For example, `aligned-afr-zul.csv` is the aligned sentences between Afrikaans and isiZulu.
The data is in .csv format and the columns are `src_text`,`tgt_text`,`cosine_score` where:
- `src_text` is the source sentence
- `tgt_text` is the target sentence
- `cosine_score` is the cosine similarity score obtained by comparing the sentence embeddings, it ranges from 0 to 1
**Note:** The notion of source (src) and target (tgt) are only necessary for distinction between the languages used in the aligned pair, as the sentence semantics should be bidirectional. (hallo <-> sawubona)
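A sketch of reading one of these files and keeping only high-confidence pairs; the file contents and the threshold below are illustrative:

```python
import csv
import io

# In-memory stand-in for e.g. aligned-afr-zul.csv; the rows are invented.
sample = io.StringIO(
    "src_text,tgt_text,cosine_score\n"
    "Hallo wêreld,Sawubona mhlaba,0.91\n"
    "Goeie môre,Molo,0.42\n"
)

def high_confidence_pairs(fh, threshold=0.8):
    """Return (src, tgt) pairs whose cosine similarity meets the threshold."""
    return [
        (row["src_text"], row["tgt_text"])
        for row in csv.DictReader(fh)
        if float(row["cosine_score"]) >= threshold
    ]

print(high_confidence_pairs(sample))  # [('Hallo wêreld', 'Sawubona mhlaba')]
```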
# Citation
Vukosi Marivate, Andani Madodonga, Daniel Njini, Richard Lastrucci, Isheanesu Dzingirai, Jenalea Rajab. **The Vuk'uzenzele South African Multilingual Corpus**, 2023
```
@dataset{marivate_vukosi_2023_7598540,
  author    = {Marivate, Vukosi and
               Njini, Daniel and
               Madodonga, Andani and
               Lastrucci, Richard and
               Dzingirai, Isheanesu and
               Rajab, Jenalea},
  title     = {The Vuk'uzenzele South African Multilingual Corpus},
  month     = feb,
  year      = 2023,
  publisher = {Zenodo},
  doi       = {10.5281/zenodo.7598539},
  url       = {https://doi.org/10.5281/zenodo.7598539}
}
```
### Licence
* Licence for Data - [CC 4.0 BY](LICENSE.md)
|
clane9/NSD-Flat | 2023-09-28T01:26:21.000Z | [
"task_categories:image-to-image",
"task_categories:object-detection",
"size_categories:100K<n<1M",
"license:other",
"biology",
"neuroscience",
"fmri",
"region:us"
] | clane9 | null | null | null | 3 | 65 | ---
license: other
dataset_info:
features:
- name: subject_id
dtype: int64
- name: trial_id
dtype: int64
- name: session_id
dtype: int64
- name: nsd_id
dtype: int64
- name: image
dtype: image
- name: activity
dtype: image
- name: subject
dtype: string
- name: flagged
dtype: bool
- name: BOLD5000
dtype: bool
- name: shared1000
dtype: bool
- name: coco_split
dtype: string
- name: coco_id
dtype: int64
- name: objects
struct:
- name: area
sequence: int64
- name: bbox
sequence:
sequence: float64
- name: category
sequence: string
- name: iscrowd
sequence: int64
- name: segmentation
list:
- name: counts
dtype: string
- name: poly
sequence:
sequence: float64
- name: size
sequence: int64
- name: supercategory
sequence: string
- name: target
sequence: int64
- name: captions
sequence: string
- name: repetitions
struct:
- name: subject1_rep0
dtype: int64
- name: subject1_rep1
dtype: int64
- name: subject1_rep2
dtype: int64
- name: subject2_rep0
dtype: int64
- name: subject2_rep1
dtype: int64
- name: subject2_rep2
dtype: int64
- name: subject3_rep0
dtype: int64
- name: subject3_rep1
dtype: int64
- name: subject3_rep2
dtype: int64
- name: subject4_rep0
dtype: int64
- name: subject4_rep1
dtype: int64
- name: subject4_rep2
dtype: int64
- name: subject5_rep0
dtype: int64
- name: subject5_rep1
dtype: int64
- name: subject5_rep2
dtype: int64
- name: subject6_rep0
dtype: int64
- name: subject6_rep1
dtype: int64
- name: subject6_rep2
dtype: int64
- name: subject7_rep0
dtype: int64
- name: subject7_rep1
dtype: int64
- name: subject7_rep2
dtype: int64
- name: subject8_rep0
dtype: int64
- name: subject8_rep1
dtype: int64
- name: subject8_rep2
dtype: int64
splits:
- name: train
num_bytes: 26695182666.0
num_examples: 195000
- name: test
num_bytes: 2461280671.0
num_examples: 18000
download_size: 22565691383
dataset_size: 29156463337.0
task_categories:
- image-to-image
- object-detection
tags:
- biology
- neuroscience
- fmri
size_categories:
- 100K<n<1M
---
# NSD-Flat
[[`GitHub`]](https://github.com/clane9/NSD-Flat) [[🤗 `Hugging Face Hub`]](https://huggingface.co/datasets/clane9/NSD-Flat)
A Hugging Face dataset of pre-processed brain activity flat maps from the [Natural Scenes Dataset](https://naturalscenesdataset.org/), constrained to a visual cortex region of interest and rendered as PNG images.
## Load the dataset
Load the dataset from [Hugging Face Hub](https://huggingface.co/datasets/clane9/NSD-Flat)
```python
from datasets import load_dataset
dataset = load_dataset("clane9/NSD-Flat", split="train")
```
## Building the dataset
### 1. Download source data
Run [`download_data.sh`](download_data.sh) to download the required source data:
- NSD stimuli images and presentation info
- COCO annotations
- NSD beta activity maps in fsaverge surface space
```bash
bash download_data.sh
```
### 2. Convert the COCO annotations
Run [`convert_nsd_annotations.py`](convert_nsd_annotations.py) to crop and reorganize the COCO annotations for NSD.
```bash
python convert_nsd_annotations.py
```
### 3. Generate the dataset
Run [`generate_dataset.py`](generate_dataset.py) to generate the huggingface dataset in Arrow format.
```bash
python generate_dataset.py --img_size 256 --workers 8
```
## Citation
If you find this dataset useful, please consider citing:
```
@article{allen2022massive,
title = {A massive 7T fMRI dataset to bridge cognitive neuroscience and artificial intelligence},
author = {Allen, Emily J and St-Yves, Ghislain and Wu, Yihan and Breedlove, Jesse L and Prince, Jacob S and Dowdle, Logan T and Nau, Matthias and Caron, Brad and Pestilli, Franco and Charest, Ian and others},
journal = {Nature neuroscience},
volume = {25},
number = {1},
pages = {116--126},
year = {2022},
publisher = {Nature Publishing Group US New York}
}
```
```
@misc{lane2023nsdflat,
author = {Connor Lane},
title = {NSD-Flat: Pre-processed brain activity flat maps from the Natural Scenes Dataset},
howpublished = {\url{https://huggingface.co/datasets/clane9/NSD-Flat}},
year = {2023},
}
```
## License
Usage of this dataset constitutes agreement to the [NSD Terms and Conditions](https://cvnlab.slite.page/p/IB6BSeW_7o/Terms-and-Conditions). |
C-MTEB/LCQMC | 2023-07-28T13:51:45.000Z | [
"region:us"
] | C-MTEB | null | null | null | 2 | 65 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
dataset_info:
features:
- name: sentence1
dtype: string
- name: sentence2
dtype: string
- name: score
dtype: int32
splits:
- name: train
num_bytes: 18419299
num_examples: 238766
- name: validation
num_bytes: 760701
num_examples: 8802
- name: test
num_bytes: 876457
num_examples: 12500
download_size: 14084841
dataset_size: 20056457
---
# Dataset Card for "LCQMC"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
ericyu/LEVIRCD_Cropped_256 | 2023-10-06T10:29:40.000Z | [
"region:us"
] | ericyu | null | null | null | 0 | 65 | ---
dataset_info:
features:
- name: imageA
dtype: image
- name: imageB
dtype: image
- name: label
dtype: image
splits:
- name: train
num_bytes: 2005523229.68
num_examples: 7120
- name: validation
num_bytes: 244453421.184
num_examples: 1024
- name: test
num_bytes: 518863873.536
num_examples: 2048
download_size: 1108370540
dataset_size: 2768840524.3999996
---
# Dataset Card for "LEVIRCD_Cropped_256"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
liyucheng/FrameNet_v17 | 2023-08-31T10:47:56.000Z | [
"license:cc-by-2.0",
"region:us"
] | liyucheng | null | null | null | 0 | 65 | ---
license: cc-by-2.0
---
|
warshakhan/donut_vqa_ISynHMP_all_labels_modified | 2023-09-28T08:29:22.000Z | [
"region:us"
] | warshakhan | null | null | null | 0 | 65 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: valid
path: data/valid-*
- split: test
path: data/test-*
dataset_info:
features:
- name: image
dtype: image
- name: ground_truth
dtype: string
splits:
- name: train
num_bytes: 583333339.0
num_examples: 2800
- name: valid
num_bytes: 85997587.0
num_examples: 400
- name: test
num_bytes: 173591889.0
num_examples: 800
download_size: 165381311
dataset_size: 842922815.0
---
# Dataset Card for "donut_vqa_ISynHMP_all_labels_modified"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
chrisgru/chat-v2.1 | 2023-09-28T12:54:24.000Z | [
"region:us"
] | chrisgru | null | null | null | 0 | 65 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 7344562
num_examples: 4386
download_size: 3196402
dataset_size: 7344562
---
# Dataset Card for "chat-v2.1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
cardiffnlp/tweet_topic_multilingual | 2023-10-10T13:46:23.000Z | [
"region:us"
] | cardiffnlp | [TweetTopicMultilingual](TBA) | TBA | null | 0 | 65 | |
lst20 | 2023-01-25T14:34:28.000Z | [
"task_categories:token-classification",
"task_ids:named-entity-recognition",
"task_ids:part-of-speech",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:th",
"license:other",
"wo... | null | LST20 Corpus is a dataset for Thai language processing developed by National Electronics and Computer Technology Center (NECTEC), Thailand.
It offers five layers of linguistic annotation: word boundaries, POS tagging, named entities, clause boundaries, and sentence boundaries.
At a large scale, it consists of 3,164,002 words, 288,020 named entities, 248,181 clauses, and 74,180 sentences, while it is annotated with
16 distinct POS tags. All 3,745 documents are also annotated with one of 15 news genres. Regarding its sheer size, this dataset is
considered large enough for developing joint neural models for NLP.
Manually download at https://aiforthai.in.th/corpus.php | @article{boonkwan2020annotation,
title={The Annotation Guideline of LST20 Corpus},
author={Boonkwan, Prachya and Luantangsrisuk, Vorapon and Phaholphinyo, Sitthaa and Kriengket, Kanyanat and Leenoi, Dhanon and Phrombut, Charun and Boriboon, Monthika and Kosawat, Krit and Supnithi, Thepchai},
journal={arXiv preprint arXiv:2008.05055},
year={2020}
} | null | 2 | 64 | ---
annotations_creators:
- expert-generated
language_creators:
- found
language:
- th
license:
- other
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- token-classification
task_ids:
- named-entity-recognition
- part-of-speech
pretty_name: LST20
tags:
- word-segmentation
- clause-segmentation
- sentence-segmentation
dataset_info:
features:
- name: id
dtype: string
- name: fname
dtype: string
- name: tokens
sequence: string
- name: pos_tags
sequence:
class_label:
names:
'0': NN
'1': VV
'2': PU
'3': CC
'4': PS
'5': AX
'6': AV
'7': FX
'8': NU
'9': AJ
'10': CL
'11': PR
'12': NG
'13': PA
'14': XX
'15': IJ
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B_BRN
'2': B_DES
'3': B_DTM
'4': B_LOC
'5': B_MEA
'6': B_NUM
'7': B_ORG
'8': B_PER
'9': B_TRM
'10': B_TTL
'11': I_BRN
'12': I_DES
'13': I_DTM
'14': I_LOC
'15': I_MEA
'16': I_NUM
'17': I_ORG
'18': I_PER
'19': I_TRM
'20': I_TTL
'21': E_BRN
'22': E_DES
'23': E_DTM
'24': E_LOC
'25': E_MEA
'26': E_NUM
'27': E_ORG
'28': E_PER
'29': E_TRM
'30': E_TTL
- name: clause_tags
sequence:
class_label:
names:
'0': O
'1': B_CLS
'2': I_CLS
'3': E_CLS
config_name: lst20
splits:
- name: train
num_bytes: 107725145
num_examples: 63310
- name: validation
num_bytes: 9646167
num_examples: 5620
- name: test
num_bytes: 8217425
num_examples: 5250
download_size: 0
dataset_size: 125588737
---
# Dataset Card for LST20
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://aiforthai.in.th/
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:** [email](thepchai@nectec.or.th)
### Dataset Summary
LST20 Corpus is a dataset for Thai language processing developed by National Electronics and Computer Technology Center (NECTEC), Thailand.
It offers five layers of linguistic annotation: word boundaries, POS tagging, named entities, clause boundaries, and sentence boundaries.
At a large scale, it consists of 3,164,002 words, 288,020 named entities, 248,181 clauses, and 74,180 sentences, while it is annotated with
16 distinct POS tags. All 3,745 documents are also annotated with one of 15 news genres. Regarding its sheer size, this dataset is
considered large enough for developing joint neural models for NLP.
Manually download at https://aiforthai.in.th/corpus.php
See `LST20 Annotation Guideline.pdf` and `LST20 Brief Specification.pdf` within the downloaded `AIFORTHAI-LST20Corpus.tar.gz` for more details.
### Supported Tasks and Leaderboards
- POS tagging
- NER tagging
- clause segmentation
- sentence segmentation
- word tokenization
### Languages
Thai
## Dataset Structure
### Data Instances
```
{'clause_tags': [1, 2, 2, 2, 2, 2, 2, 2, 3], 'fname': 'T11964.txt', 'id': '0', 'ner_tags': [8, 0, 0, 0, 0, 0, 0, 0, 25], 'pos_tags': [0, 0, 0, 1, 0, 8, 8, 8, 0], 'tokens': ['ธรรมนูญ', 'แชมป์', 'สิงห์คลาสสิก', 'กวาด', 'รางวัล', 'แสน', 'สี่', 'หมื่น', 'บาท']}
{'clause_tags': [1, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 3], 'fname': 'T11964.txt', 'id': '1', 'ner_tags': [8, 18, 28, 0, 0, 0, 0, 6, 0, 0, 0, 6, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 5, 15, 25, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 6, 0, 0, 0, 6], 'pos_tags': [0, 2, 0, 2, 1, 1, 2, 8, 2, 10, 2, 8, 2, 1, 0, 1, 0, 4, 7, 1, 0, 2, 8, 2, 10, 1, 10, 4, 2, 8, 2, 4, 0, 4, 0, 2, 8, 2, 10, 2, 8], 'tokens': ['ธรรมนูญ', '_', 'ศรีโรจน์', '_', 'เก็บ', 'เพิ่ม', '_', '4', '_', 'อันเดอร์พาร์', '_', '68', '_', 'เข้า', 'ป้าย', 'รับ', 'แชมป์', 'ใน', 'การ', 'เล่น', 'อาชีพ', '_', '19', '_', 'ปี', 'เป็น', 'ครั้ง', 'ที่', '_', '8', '_', 'ใน', 'ชีวิต', 'ด้วย', 'สกอร์', '_', '18', '_', 'อันเดอร์พาร์', '_', '270']}
```
### Data Fields
- `id`: nth sentence in each set, starting at 0
- `fname`: text file from which the sentence comes from
- `tokens`: word tokens
- `pos_tags`: POS tags
- `ner_tags`: NER tags
- `clause_tags`: clause tags
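As a small illustration, the integer tags can be decoded back to their string labels using the `class_label` names listed in the schema above (the table below is abridged to the ids that appear in the first data instance):

```python
# Abridged id -> label table, taken from the class_label names in the
# dataset schema above (only the ids used in this example are listed).
POS_NAMES = {0: "NN", 1: "VV", 8: "NU"}

# pos_tags from the first data instance shown above.
pos_tags = [0, 0, 0, 1, 0, 8, 8, 8, 0]
labels = [POS_NAMES[t] for t in pos_tags]
print(labels)  # ['NN', 'NN', 'NN', 'VV', 'NN', 'NU', 'NU', 'NU', 'NN']
```

When the dataset is loaded with the `datasets` library, `dataset.features["pos_tags"].feature.int2str(tag_id)` performs the same mapping without a hand-built table.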
### Data Splits
| | train | eval | test | all |
|----------------------|-----------|-------------|-------------|-----------|
| words | 2,714,848 | 240,891 | 207,295 | 3,163,034 |
| named entities | 246,529 | 23,176 | 18,315 | 288,020 |
| clauses              | 214,645   | 17,486      | 16,050      | 248,181   |
| sentences | 63,310 | 5,620 | 5,250 | 74,180 |
| distinct words | 42,091 | (oov) 2,595 | (oov) 2,006 | 46,692 |
| breaking spaces※ | 63,310 | 5,620 | 5,250 | 74,180 |
| non-breaking spaces※※| 402,380 | 39,920 | 32,204 | 475,504 |
※ Breaking space = space that is used as a sentence boundary marker
※※ Non-breaking space = space that is not used as a sentence boundary marker
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
Respective authors of the news articles
### Annotations
#### Annotation process
Detailed annotation guideline can be found in `LST20 Annotation Guideline.pdf`.
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
All texts are from public news. No personal and sensitive information is expected to be included.
## Considerations for Using the Data
### Social Impact of Dataset
- Large-scale Thai NER & POS tagging, clause & sentence segmentation, word tokenization
### Discussion of Biases
- All 3,745 texts are from news domain:
- politics: 841
- crime and accident: 592
- economics: 512
- entertainment: 472
- sports: 402
- international: 279
- science, technology and education: 216
- health: 92
- general: 75
- royal: 54
- disaster: 52
- development: 45
- environment: 40
- culture: 40
- weather forecast: 33
- Word tokenization is done according to the InterBEST 2009 guideline.
### Other Known Limitations
- Some NER tags do not correspond with given labels (`B`, `I`, and so on)
## Additional Information
### Dataset Curators
[NECTEC](https://www.nectec.or.th/en/)
### Licensing Information
1. Non-commercial use, research, and open source
Any non-commercial use of the dataset for research and open-sourced projects is encouraged and free of charge. Please cite our technical report for reference.
If you want to perpetuate your models trained on our dataset and share them to the research community in Thailand, please send your models, code, and APIs to the AI for Thai Project. Please contact Dr. Thepchai Supnithi via thepchai@nectec.or.th for more information.
Note that modification and redistribution of the dataset by any means are strictly prohibited unless authorized by the corpus authors.
2. Commercial use
In any commercial use of the dataset, there are two options.
- Option 1 (in kind): Contributing a dataset of 50,000 words completely annotated with our annotation scheme within 1 year. Your data will also be shared and recognized as a dataset co-creator in the research community in Thailand.
- Option 2 (in cash): Purchasing a lifetime license for the entire dataset is required. The purchased rights of use cover only this dataset.
In both options, please contact Dr. Thepchai Supnithi via thepchai@nectec.or.th for more information.
### Citation Information
```
@article{boonkwan2020annotation,
title={The Annotation Guideline of LST20 Corpus},
author={Boonkwan, Prachya and Luantangsrisuk, Vorapon and Phaholphinyo, Sitthaa and Kriengket, Kanyanat and Leenoi, Dhanon and Phrombut, Charun and Boriboon, Monthika and Kosawat, Krit and Supnithi, Thepchai},
journal={arXiv preprint arXiv:2008.05055},
year={2020}
}
```
### Contributions
Thanks to [@cstorm125](https://github.com/cstorm125) for adding this dataset. |
opus_openoffice | 2023-06-01T14:59:55.000Z | [
"task_categories:translation",
"annotations_creators:found",
"language_creators:found",
"multilinguality:multilingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:de",
"language:en",
"language:es",
"language:fr",
"language:ja",
"language:ru",
"language:sv",
"langua... | null | A collection of documents from http://www.openoffice.org/. | @InProceedings{TIEDEMANN12.463,
author = {Jörg Tiedemann},
title = {Parallel Data, Tools and Interfaces in OPUS},
booktitle = {Proceedings of the Eight International Conference on Language Resources and Evaluation (LREC'12)},
year = {2012},
month = {may},
date = {23-25},
address = {Istanbul, Turkey},
editor = {Nicoletta Calzolari (Conference Chair) and Khalid Choukri and Thierry Declerck and Mehmet Ugur Dogan and Bente Maegaard and Joseph Mariani and Jan Odijk and Stelios Piperidis},
publisher = {European Language Resources Association (ELRA)},
isbn = {978-2-9517408-7-7},
language = {english}
} | null | 3 | 64 | ---
annotations_creators:
- found
language_creators:
- found
language:
- de
- en
- es
- fr
- ja
- ru
- sv
- zh
language_bcp47:
- en-GB
- zh-CN
license:
- unknown
multilinguality:
- multilingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- translation
task_ids: []
paperswithcode_id: null
pretty_name: OpusOpenoffice
dataset_info:
- config_name: de-en_GB
features:
- name: translation
dtype:
translation:
languages:
- de
- en_GB
splits:
- name: train
num_bytes: 6201141
num_examples: 77052
download_size: 2030226
dataset_size: 6201141
- config_name: de-es
features:
- name: translation
dtype:
translation:
languages:
- de
- es
splits:
- name: train
num_bytes: 6571679
num_examples: 77000
download_size: 2100214
dataset_size: 6571679
- config_name: de-fr
features:
- name: translation
dtype:
translation:
languages:
- de
- fr
splits:
- name: train
num_bytes: 6715869
num_examples: 76684
download_size: 2111078
dataset_size: 6715869
- config_name: de-ja
features:
- name: translation
dtype:
translation:
languages:
- de
- ja
splits:
- name: train
num_bytes: 7085007
num_examples: 69396
download_size: 2112771
dataset_size: 7085007
- config_name: de-ru
features:
- name: translation
dtype:
translation:
languages:
- de
- ru
splits:
- name: train
num_bytes: 8333305
num_examples: 75511
download_size: 2267499
dataset_size: 8333305
- config_name: de-sv
features:
- name: translation
dtype:
translation:
languages:
- de
- sv
splits:
- name: train
num_bytes: 6289026
num_examples: 77366
download_size: 2056115
dataset_size: 6289026
- config_name: de-zh_CN
features:
- name: translation
dtype:
translation:
languages:
- de
- zh_CN
splits:
- name: train
num_bytes: 5836684
num_examples: 68712
download_size: 2006818
dataset_size: 5836684
- config_name: en_GB-es
features:
- name: translation
dtype:
translation:
languages:
- en_GB
- es
splits:
- name: train
num_bytes: 6147645
num_examples: 77646
download_size: 1978922
dataset_size: 6147645
- config_name: en_GB-fr
features:
- name: translation
dtype:
translation:
languages:
- en_GB
- fr
splits:
- name: train
num_bytes: 6297843
num_examples: 77696
download_size: 1987317
dataset_size: 6297843
- config_name: en_GB-ja
features:
- name: translation
dtype:
translation:
languages:
- en_GB
- ja
splits:
- name: train
num_bytes: 6636778
num_examples: 69149
download_size: 1987255
dataset_size: 6636778
- config_name: en_GB-ru
features:
- name: translation
dtype:
translation:
languages:
- en_GB
- ru
splits:
- name: train
num_bytes: 7878034
num_examples: 75401
download_size: 2137510
dataset_size: 7878034
- config_name: en_GB-sv
features:
- name: translation
dtype:
translation:
languages:
- en_GB
- sv
splits:
- name: train
num_bytes: 5861525
num_examples: 77815
download_size: 1934619
dataset_size: 5861525
- config_name: en_GB-zh_CN
features:
- name: translation
dtype:
translation:
languages:
- en_GB
- zh_CN
splits:
- name: train
num_bytes: 5424921
num_examples: 69400
download_size: 1887600
dataset_size: 5424921
- config_name: es-fr
features:
- name: translation
dtype:
translation:
languages:
- es
- fr
splits:
- name: train
num_bytes: 6663156
num_examples: 77417
download_size: 2059241
dataset_size: 6663156
- config_name: es-ja
features:
- name: translation
dtype:
translation:
languages:
- es
- ja
splits:
- name: train
num_bytes: 7005179
num_examples: 68944
download_size: 2059072
dataset_size: 7005179
- config_name: es-ru
features:
- name: translation
dtype:
translation:
languages:
- es
- ru
splits:
- name: train
num_bytes: 8283767
num_examples: 76461
download_size: 2214447
dataset_size: 8283767
- config_name: es-sv
features:
- name: translation
dtype:
translation:
languages:
- es
- sv
splits:
- name: train
num_bytes: 6232530
num_examples: 77825
download_size: 2002804
dataset_size: 6232530
- config_name: es-zh_CN
features:
- name: translation
dtype:
translation:
languages:
- es
- zh_CN
splits:
- name: train
num_bytes: 5776883
num_examples: 68583
download_size: 1958411
dataset_size: 5776883
- config_name: fr-ja
features:
- name: translation
dtype:
translation:
languages:
- fr
- ja
splits:
- name: train
num_bytes: 7160388
num_examples: 69026
download_size: 2069621
dataset_size: 7160388
- config_name: fr-ru
features:
- name: translation
dtype:
translation:
languages:
- fr
- ru
splits:
- name: train
num_bytes: 8432125
num_examples: 76464
download_size: 2222427
dataset_size: 8432125
- config_name: fr-sv
features:
- name: translation
dtype:
translation:
languages:
- fr
- sv
splits:
- name: train
num_bytes: 6373414
num_examples: 77398
download_size: 2014028
dataset_size: 6373414
- config_name: fr-zh_CN
features:
- name: translation
dtype:
translation:
languages:
- fr
- zh_CN
splits:
- name: train
num_bytes: 5918538
num_examples: 68723
download_size: 1966020
dataset_size: 5918538
- config_name: ja-ru
features:
- name: translation
dtype:
translation:
languages:
- ja
- ru
splits:
- name: train
num_bytes: 8781286
num_examples: 68589
download_size: 2224576
dataset_size: 8781286
- config_name: ja-sv
features:
- name: translation
dtype:
translation:
languages:
- ja
- sv
splits:
- name: train
num_bytes: 6709683
num_examples: 69154
download_size: 2012693
dataset_size: 6709683
- config_name: ja-zh_CN
features:
- name: translation
dtype:
translation:
languages:
- ja
- zh_CN
splits:
- name: train
num_bytes: 6397732
num_examples: 68953
download_size: 1972833
dataset_size: 6397732
- config_name: ru-sv
features:
- name: translation
dtype:
translation:
languages:
- ru
- sv
splits:
- name: train
num_bytes: 7966214
num_examples: 75560
download_size: 2167678
dataset_size: 7966214
- config_name: ru-zh_CN
features:
- name: translation
dtype:
translation:
languages:
- ru
- zh_CN
splits:
- name: train
num_bytes: 7393715
num_examples: 66259
download_size: 2098229
dataset_size: 7393715
- config_name: sv-zh_CN
features:
- name: translation
dtype:
translation:
languages:
- sv
- zh_CN
splits:
- name: train
num_bytes: 5492958
num_examples: 68846
download_size: 1914096
dataset_size: 5492958
config_names:
- de-en_GB
- de-es
- de-fr
- de-ja
- de-ru
- de-sv
- de-zh_CN
- en_GB-es
- en_GB-fr
- en_GB-ja
- en_GB-ru
- en_GB-sv
- en_GB-zh_CN
- es-fr
- es-ja
- es-ru
- es-sv
- es-zh_CN
- fr-ja
- fr-ru
- fr-sv
- fr-zh_CN
- ja-ru
- ja-sv
- ja-zh_CN
- ru-sv
- ru-zh_CN
- sv-zh_CN
---
# Dataset Card for OpusOpenoffice
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**[OpenOffice](http://opus.nlpl.eu/OpenOffice.php)
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
A collection of documents from http://www.openoffice.org/.
8 languages, 28 bitexts
### Supported Tasks and Leaderboards
The underlying task is machine translation.
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
```
@InProceedings{TIEDEMANN12.463,
author = {Jörg Tiedemann},
title = {Parallel Data, Tools and Interfaces in OPUS},
booktitle = {Proceedings of the Eight International Conference on Language Resources and Evaluation (LREC'12)},
year = {2012},
month = {may},
date = {23-25},
address = {Istanbul, Turkey},
editor = {Nicoletta Calzolari (Conference Chair) and Khalid Choukri and Thierry Declerck and Mehmet Ugur Dogan and Bente Maegaard and Joseph Mariani and Jan Odijk and Stelios Piperidis},
publisher = {European Language Resources Association (ELRA)},
isbn = {978-2-9517408-7-7},
language = {english}
}
```
### Contributions
Thanks to [@patil-suraj](https://github.com/patil-suraj) for adding this dataset. |
wiki_atomic_edits | 2023-06-01T14:59:54.000Z | [
"task_categories:summarization",
"annotations_creators:found",
"language_creators:found",
"multilinguality:multilingual",
"size_categories:100K<n<1M",
"size_categories:10M<n<100M",
"size_categories:1M<n<10M",
"source_datasets:original",
"language:de",
"language:en",
"language:es",
"language:fr... | null | A dataset of atomic wikipedia edits containing insertions and deletions of a contiguous chunk of text in a sentence. This dataset contains ~43 million edits across 8 languages.
An atomic edit is defined as an edit e applied to a natural language expression S as the insertion, deletion, or substitution of a sub-expression P such that both the original expression S and the resulting expression e(S) are well-formed semantic constituents (MacCartney, 2009). In this corpus, we release such atomic insertions and deletions made to sentences in wikipedia. | @InProceedings{WikiAtomicEdits,
title = {{WikiAtomicEdits: A Multilingual Corpus of Wikipedia Edits for Modeling Language and Discourse}},
author = {Faruqui, Manaal and Pavlick, Ellie and Tenney, Ian and Das, Dipanjan},
booktitle = {Proc. of EMNLP},
year = {2018}
} | null | 9 | 64 | ---
annotations_creators:
- found
language_creators:
- found
language:
- de
- en
- es
- fr
- it
- ja
- ru
- zh
license:
- cc-by-sa-4.0
multilinguality:
- multilingual
size_categories:
- 100K<n<1M
- 10M<n<100M
- 1M<n<10M
source_datasets:
- original
task_categories:
- summarization
task_ids: []
paperswithcode_id: wikiatomicedits
pretty_name: WikiAtomicEdits
dataset_info:
- config_name: german_insertions
features:
- name: id
dtype: int32
- name: base_sentence
dtype: string
- name: phrase
dtype: string
- name: edited_sentence
dtype: string
splits:
- name: train
num_bytes: 1072443082
num_examples: 3343403
download_size: 274280387
dataset_size: 1072443082
- config_name: german_deletions
features:
- name: id
dtype: int32
- name: base_sentence
dtype: string
- name: phrase
dtype: string
- name: edited_sentence
dtype: string
splits:
- name: train
num_bytes: 624070402
num_examples: 1994329
download_size: 160133549
dataset_size: 624070402
- config_name: english_insertions
features:
- name: id
dtype: int32
- name: base_sentence
dtype: string
- name: phrase
dtype: string
- name: edited_sentence
dtype: string
splits:
- name: train
num_bytes: 4258411914
num_examples: 13737796
download_size: 1090652177
dataset_size: 4258411914
- config_name: english_deletions
features:
- name: id
dtype: int32
- name: base_sentence
dtype: string
- name: phrase
dtype: string
- name: edited_sentence
dtype: string
splits:
- name: train
num_bytes: 2865754626
num_examples: 9352389
download_size: 736560902
dataset_size: 2865754626
- config_name: spanish_insertions
features:
- name: id
dtype: int32
- name: base_sentence
dtype: string
- name: phrase
dtype: string
- name: edited_sentence
dtype: string
splits:
- name: train
num_bytes: 481145004
num_examples: 1380934
download_size: 118837934
dataset_size: 481145004
- config_name: spanish_deletions
features:
- name: id
dtype: int32
- name: base_sentence
dtype: string
- name: phrase
dtype: string
- name: edited_sentence
dtype: string
splits:
- name: train
num_bytes: 317253196
num_examples: 908276
download_size: 78485695
dataset_size: 317253196
- config_name: french_insertions
features:
- name: id
dtype: int32
- name: base_sentence
dtype: string
- name: phrase
dtype: string
- name: edited_sentence
dtype: string
splits:
- name: train
num_bytes: 651525210
num_examples: 2038305
download_size: 160442894
dataset_size: 651525210
- config_name: french_deletions
features:
- name: id
dtype: int32
- name: base_sentence
dtype: string
- name: phrase
dtype: string
- name: edited_sentence
dtype: string
splits:
- name: train
num_bytes: 626323354
num_examples: 2060242
download_size: 155263358
dataset_size: 626323354
- config_name: italian_insertions
features:
- name: id
dtype: int32
- name: base_sentence
dtype: string
- name: phrase
dtype: string
- name: edited_sentence
dtype: string
splits:
- name: train
num_bytes: 372950256
num_examples: 1078814
download_size: 92302006
dataset_size: 372950256
- config_name: italian_deletions
features:
- name: id
dtype: int32
- name: base_sentence
dtype: string
- name: phrase
dtype: string
- name: edited_sentence
dtype: string
splits:
- name: train
num_bytes: 198598618
num_examples: 583316
download_size: 49048596
dataset_size: 198598618
- config_name: japanese_insertions
features:
- name: id
dtype: int32
- name: base_sentence
dtype: string
- name: phrase
dtype: string
- name: edited_sentence
dtype: string
splits:
- name: train
num_bytes: 765754162
num_examples: 2249527
download_size: 185766012
dataset_size: 765754162
- config_name: japanese_deletions
features:
- name: id
dtype: int32
- name: base_sentence
dtype: string
- name: phrase
dtype: string
- name: edited_sentence
dtype: string
splits:
- name: train
num_bytes: 459683880
num_examples: 1352162
download_size: 110513593
dataset_size: 459683880
- config_name: russian_insertions
features:
- name: id
dtype: int32
- name: base_sentence
dtype: string
- name: phrase
dtype: string
- name: edited_sentence
dtype: string
splits:
- name: train
num_bytes: 790822192
num_examples: 1471638
download_size: 152985812
dataset_size: 790822192
- config_name: russian_deletions
features:
- name: id
dtype: int32
- name: base_sentence
dtype: string
- name: phrase
dtype: string
- name: edited_sentence
dtype: string
splits:
- name: train
num_bytes: 514750186
num_examples: 960976
download_size: 100033230
dataset_size: 514750186
- config_name: chinese_insertions
features:
- name: id
dtype: int32
- name: base_sentence
dtype: string
- name: phrase
dtype: string
- name: edited_sentence
dtype: string
splits:
- name: train
num_bytes: 233367646
num_examples: 746509
download_size: 66124094
dataset_size: 233367646
- config_name: chinese_deletions
features:
- name: id
dtype: int32
- name: base_sentence
dtype: string
- name: phrase
dtype: string
- name: edited_sentence
dtype: string
splits:
- name: train
num_bytes: 144269112
num_examples: 467271
download_size: 40898651
dataset_size: 144269112
config_names:
- chinese_deletions
- chinese_insertions
- english_deletions
- english_insertions
- french_deletions
- french_insertions
- german_deletions
- german_insertions
- italian_deletions
- italian_insertions
- japanese_deletions
- japanese_insertions
- russian_deletions
- russian_insertions
- spanish_deletions
- spanish_insertions
---
# Dataset Card for WikiAtomicEdits
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** None
- **Repository:** https://github.com/google-research-datasets/wiki-atomic-edits
- **Paper:** https://www.aclweb.org/anthology/D18-1028/
- **Leaderboard:** [More Information Needed]
- **Point of Contact:** [More Information Needed]
### Dataset Summary
WikiAtomicEdits is a corpus of atomic edits (single contiguous phrase insertions and deletions) mined from Wikipedia edit history in eight languages (see the [paper](https://www.aclweb.org/anthology/D18-1028/) for details).
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
The languages in the dataset are:
- de
- en
- es
- fr
- it
- jp (Japanese; standard ISO code `ja`)
- ru
- zh
## Dataset Structure
### Data Instances
Each instance pairs a `base_sentence` with an `edited_sentence`, together with the contiguous `phrase` that was inserted into (or deleted from) the base sentence, and an integer `id`.
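For an insertion config, the `edited_sentence` is the `base_sentence` with the `phrase` inserted as a single contiguous span, and this property can be checked mechanically. The record below is a hypothetical illustration following the documented fields, not an actual corpus entry:

```python
# Hypothetical insertion example in the WikiAtomicEdits schema.
example = {
    "id": 0,
    "base_sentence": "The bridge was completed in 1932 .",
    "phrase": "finally",
    "edited_sentence": "The bridge was finally completed in 1932 .",
}

def is_atomic_insertion(base, phrase, edited):
    """Check that removing one occurrence of `phrase` from `edited`
    (with whitespace normalized) recovers `base`."""
    if phrase not in edited:
        return False
    candidate = edited.replace(phrase, "", 1)
    return " ".join(candidate.split()) == " ".join(base.split())

print(is_atomic_insertion(
    example["base_sentence"], example["phrase"], example["edited_sentence"]
))  # True
```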
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
[More Information Needed]
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
[More Information Needed]
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@abhishekkrthakur](https://github.com/abhishekkrthakur) for adding this dataset. |
SetFit/amazon_reviews_multi_de | 2022-03-23T15:34:53.000Z | [
"region:us"
] | SetFit | null | null | null | 0 | 64 | # Amazon Reviews Multi (German)
This dataset is a port of the official [`amazon_reviews_multi` dataset](https://huggingface.co/datasets/amazon_reviews_multi) on the Hub, restricted to the German-language version. It has been reduced to the three columns relevant to the SetFit task, plus a fourth column, `label_text`.
orieg/elsevier-oa-cc-by | 2022-07-01T15:59:58.000Z | [
"task_categories:fill-mask",
"task_categories:summarization",
"task_categories:text-classification",
"task_ids:masked-language-modeling",
"task_ids:news-articles-summarization",
"task_ids:news-articles-headline-generation",
"annotations_creators:expert-generated",
"language_creators:expert-generated",... | orieg | Elsevier OA CC-By is a corpus of 40k (40, 091) open access (OA) CC-BY articles
from across Elsevier’s journals and include the full text of the article, the metadata,
the bibliographic information for each reference, and author highlights. | @article{Kershaw2020ElsevierOC,
title = {Elsevier OA CC-By Corpus},
author = {Daniel James Kershaw and R. Koeling},
journal = {ArXiv},
year = {2020},
volume = {abs/2008.00774},
doi = {https://doi.org/10.48550/arXiv.2008.00774},
url = {https://elsevier.digitalcommonsdata.com/datasets/zm33cdndxs},
keywords = {Science, Natural Language Processing, Machine Learning, Open Dataset},
abstract = {We introduce the Elsevier OA CC-BY corpus. This is the first open
corpus of Scientific Research papers which has a representative sample
from across scientific disciplines. This corpus not only includes the
full text of the article, but also the metadata of the documents,
along with the bibliographic information for each reference.}
} | null | 8 | 64 | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- en
license:
- cc-by-4.0
multilinguality:
- monolingual
pretty_name: Elsevier OA CC-By
paperswithcode_id: elsevier-oa-cc-by
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- fill-mask
- summarization
- text-classification
task_ids:
- masked-language-modeling
- news-articles-summarization
- news-articles-headline-generation
---
# Dataset Card for Elsevier OA CC-By
## Table of Contents
- [Dataset Card for Elsevier OA CC-By](#dataset-card-for-elsevier-oa-cc-by)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
- [Who are the source language producers?](#who-are-the-source-language-producers)
- [Annotations](#annotations)
- [Annotation process](#annotation-process)
- [Who are the annotators?](#who-are-the-annotators)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://elsevier.digitalcommonsdata.com/datasets/zm33cdndxs
- **Repository:** https://elsevier.digitalcommonsdata.com/datasets/zm33cdndxs
- **Paper:** https://arxiv.org/abs/2008.00774
- **Leaderboard:**
- **Point of Contact:** [@orieg](https://huggingface.co/orieg)
### Dataset Summary
Elsevier OA CC-By: This is a corpus of 40k (40,091) open access (OA) CC-BY articles from across Elsevier's journals,
representing a large-scale, cross-discipline set of research data to support NLP and ML research. The corpus includes full-text
articles published from 2014 to 2020, categorized under 27 mid-level ASJC codes (subject classifications).
***Distribution of Publication Years***
| Publication Year | Number of Articles |
| :---: | :---: |
| 2014 | 3018 |
| 2015 | 4438 |
| 2016 | 5913 |
| 2017 | 6419 |
| 2018 | 8016 |
| 2019 | 10135 |
| 2020 | 2159 |
***Distribution of Articles Per Mid Level ASJC Code. Each article can belong to multiple ASJC codes.***
| Discipline | Count |
| --- | ---: |
| General | 3847 |
| Agricultural and Biological Sciences | 4840 |
| Arts and Humanities | 982 |
| Biochemistry, Genetics and Molecular Biology | 8356 |
| Business, Management and Accounting | 937 |
| Chemical Engineering | 1878 |
| Chemistry | 2490 |
| Computer Science | 2039 |
| Decision Sciences | 406 |
| Earth and Planetary Sciences | 2393 |
| Economics, Econometrics and Finance | 976 |
| Energy | 2730 |
| Engineering | 4778 |
| Environmental Science | 6049 |
| Immunology and Microbiology | 3211 |
| Materials Science | 3477 |
| Mathematics | 538 |
| Medicine | 7273 |
| Neuroscience | 3669 |
| Nursing | 308 |
| Pharmacology, Toxicology and Pharmaceutics | 2405 |
| Physics and Astronomy | 2404 |
| Psychology | 1760 |
| Social Sciences | 3540 |
| Veterinary | 991 |
| Dentistry | 40 |
| Health Professions | 821 |
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
English (`en`).
## Dataset Structure
### Data Instances
The original dataset was published with the following json structure:
```
{
"docId": <str>,
"metadata":{
"title": <str>,
"authors": [
{
"first": <str>,
"initial": <str>,
"last": <str>,
"email": <str>
},
...
],
"issn": <str>,
"volume": <str>,
"firstpage": <str>,
"lastpage": <str>,
"pub_year": <int>,
"doi": <str>,
"pmid": <str>,
"openaccess": "Full",
"subjareas": [<str>],
"keywords": [<str>],
"asjc": [<int>],
},
"abstract":[
{
"sentence": <str>,
"startOffset": <int>,
"endOffset": <int>
},
...
],
"bib_entries":{
"BIBREF0":{
"title":<str>,
"authors":[
{
"last":<str>,
"initial":<str>,
"first":<str>
},
...
],
"issn": <str>,
"volume": <str>,
"firstpage": <str>,
"lastpage": <str>,
"pub_year": <int>,
"doi": <str>,
"pmid": <str>
},
...
},
"body_text":[
{
"sentence": <str>,
"secId": <str>,
"startOffset": <int>,
"endOffset": <int>,
"title": <str>,
"refoffsets": {
<str>:{
"endOffset":<int>,
"startOffset":<int>
}
},
"parents": [
{
"id": <str>,
"title": <str>
},
...
]
},
...
]
}
```
***docId*** The docId is the unique identifier of the document; it can be resolved into a URL for the document by prepending `https://www.sciencedirect.com/science/pii/` to it.
***abstract*** This is the author-provided abstract for the document.
***body_text*** The full text of the document. The text has been split on sentence boundaries, making it easier to
use across research projects. Each sentence carries the title (and ID) of the section it is from, along with the titles
(and IDs) of its parent sections. The highest-level section takes index 0 in the `parents` array; if the array is empty,
the sentence's own section title is the highest-level section title. This allows the article structure to be reconstructed.
References have been extracted from the sentences; the IDs of the extracted references and their respective offsets
within each sentence can be found in the `refoffsets` field. The complete list of references can be found in the
`bib_entries` field along with each reference's metadata. Some references may be missing, as only 'clean' sentences
were kept.
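Given the `body_text` schema above, the section hierarchy for a sentence can be reconstructed from its `parents` array. The record below is a hypothetical illustration, not an actual corpus entry:

```python
# Hypothetical body_text entry following the documented schema.
sentence_record = {
    "sentence": "We measured the reaction rate.",
    "secId": "sec3.2",
    "title": "Kinetics",
    "refoffsets": {},
    "parents": [{"id": "sec3", "title": "Results"}],
}

def section_path(record):
    """Join parent section titles (highest level first) with the
    sentence's own section title; an empty `parents` array means the
    sentence's section is already the top level."""
    titles = [p["title"] for p in record.get("parents", [])]
    titles.append(record["title"])
    return " > ".join(titles)

print(section_path(sentence_record))  # Results > Kinetics
```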
***bib_entries*** All the references from within the document can be found in this section. If the metadata for a
reference is available, it has been added under that reference's key. Where possible, information such as the
document title, authors, and relevant identifiers (DOI and PMID) is included. The key for each reference appears in
the sentence where the reference is used, with the start and end offsets of where in the sentence that reference
occurs.
***metadata*** Metadata includes additional information about the article, such as the list of authors and relevant IDs (DOI and
PMID), along with a number of classification schemes such as ASJC and Subject Classification.
***author_highlights*** Author highlights were included in the corpus where the author(s) provided them; coverage is
61% of all articles. The author highlights, consisting of 4 to 6 sentences, are provided by the author with the aim of
summarising the core findings and results of the article.
### Data Fields
* ***title***: The author-provided title for the document. 100% coverage.
* ***abstract***: The author-provided abstract for the document. 99.25% coverage.
* ***keywords***: The author- and publisher-provided keywords for the document. 100% coverage.
* ***asjc***: The disciplines for the document, represented by 334 ASJC (All Science Journal Classification) codes. 100% coverage.
* ***subjareas***: The Subject Classification for the document, represented by 27 ASJC top-level subject classifications. 100% coverage.
* ***body_text***: The full text of the document. 100% coverage.
* ***author_highlights***: The author-provided highlights for the document. 61.31% coverage.
### Data Splits
***Distribution of Articles Across Train/Test/Validation Splits***
| | Train | Test | Validation |
| --- | :---: | :---: | :---: |
| All Articles | 32072 | 4009 | 4008 |
| With Author Highlights | 19644 | 2420 | 2514 |
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
Date the data was collected: 2020-06-25T11:00:00.000Z
See the [original paper](https://doi.org/10.48550/arXiv.2008.00774) for more detail on the data collection process.
#### Who are the source language producers?
See `3.1 Data Sampling` in the [original paper](https://doi.org/10.48550/arXiv.2008.00774).
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[CC BY 4.0](https://creativecommons.org/licenses/by/4.0/)
### Citation Information
```
@article{Kershaw2020ElsevierOC,
title = {Elsevier OA CC-By Corpus},
author = {Daniel James Kershaw and R. Koeling},
journal = {ArXiv},
year = {2020},
volume = {abs/2008.00774},
doi = {https://doi.org/10.48550/arXiv.2008.00774},
url = {https://elsevier.digitalcommonsdata.com/datasets/zm33cdndxs},
keywords = {Science, Natural Language Processing, Machine Learning, Open Dataset},
abstract = {We introduce the Elsevier OA CC-BY corpus. This is the first open
corpus of Scientific Research papers which has a representative sample
from across scientific disciplines. This corpus not only includes the
full text of the article, but also the metadata of the documents,
along with the bibliographic information for each reference.}
}
```
```
@dataset{https://10.17632/zm33cdndxs.3,
doi = {10.17632/zm33cdndxs.2},
url = {https://data.mendeley.com/datasets/zm33cdndxs/3},
author = "Daniel Kershaw and Rob Koeling",
keywords = {Science, Natural Language Processing, Machine Learning, Open Dataset},
title = {Elsevier OA CC-BY Corpus},
publisher = {Mendeley},
year = {2020},
month = {sep}
}
```
### Contributions
Thanks to [@orieg](https://github.com/orieg) for adding this dataset. |
Moo/korean-parallel-corpora | 2022-07-01T15:32:54.000Z | [
"task_categories:translation",
"annotations_creators:other",
"language_creators:other",
"multilinguality:multilingual",
"multilinguality:translation",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:ko",
"language:en",
"license:cc-by-sa-3.0",
"region:us"
] | Moo | null | null | null | 5 | 64 | ---
annotations_creators:
- other
language_creators:
- other
language:
- ko
- en
license:
- cc-by-sa-3.0
multilinguality:
- multilingual
- translation
pretty_name: 'korean-parallel-corpora '
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- translation
task_ids: []
---
|
Theivaprakasham/wildreceipt | 2022-06-10T21:46:37.000Z | [
"license:apache-2.0",
"region:us"
] | Theivaprakasham | WildReceipt is a collection of receipts. It contains, for each photo, a list of OCRs - with the bounding box, text, and class. It contains 1765 photos, with 25 classes, and 50000 text boxes. The goal is to benchmark "key information extraction" - extracting key information from documents
https://arxiv.org/abs/2103.14470 | @article{Sun2021SpatialDG,
title={Spatial Dual-Modality Graph Reasoning for Key Information Extraction},
author={Hongbin Sun and Zhanghui Kuang and Xiaoyu Yue and Chenhao Lin and Wayne Zhang},
journal={ArXiv},
year={2021},
volume={abs/2103.14470}
} | null | 3 | 64 | ---
license: apache-2.0
---
|
irds/trec-robust04 | 2023-01-05T03:52:55.000Z | [
"task_categories:text-retrieval",
"region:us"
] | irds | null | null | null | 1 | 64 | ---
pretty_name: '`trec-robust04`'
viewer: false
source_datasets: []
task_categories:
- text-retrieval
---
# Dataset Card for `trec-robust04`
The `trec-robust04` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/trec-robust04#trec-robust04).
# Data
This dataset provides:
- `docs` (documents, i.e., the corpus); count=528,155
- `queries` (i.e., topics); count=250
- `qrels`: (relevance assessments); count=311,410
This dataset is used by: [`trec-robust04_fold1`](https://huggingface.co/datasets/irds/trec-robust04_fold1), [`trec-robust04_fold2`](https://huggingface.co/datasets/irds/trec-robust04_fold2), [`trec-robust04_fold3`](https://huggingface.co/datasets/irds/trec-robust04_fold3), [`trec-robust04_fold4`](https://huggingface.co/datasets/irds/trec-robust04_fold4), [`trec-robust04_fold5`](https://huggingface.co/datasets/irds/trec-robust04_fold5)
## Usage
```python
from datasets import load_dataset
docs = load_dataset('irds/trec-robust04', 'docs')
for record in docs:
record # {'doc_id': ..., 'text': ..., 'marked_up_doc': ...}
queries = load_dataset('irds/trec-robust04', 'queries')
for record in queries:
record # {'query_id': ..., 'title': ..., 'description': ..., 'narrative': ...}
qrels = load_dataset('irds/trec-robust04', 'qrels')
for record in qrels:
record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...}
```
Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the
data in 🤗 Dataset format.
## Citation Information
```
@inproceedings{Voorhees2004Robust,
title={Overview of the TREC 2004 Robust Retrieval Track},
author={Ellen Voorhees},
booktitle={TREC},
year={2004}
}
```
|
nthngdy/bert_dataset_202203 | 2023-01-17T10:10:06.000Z | [
"task_categories:text-generation",
"task_categories:fill-mask",
"language:en",
"license:apache-2.0",
"language-modeling",
"masked-language-modeling",
"region:us"
] | nthngdy | null | null | null | 0 | 64 | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 24635440616
num_examples: 146707688
download_size: 14651841592
dataset_size: 24635440616
license: apache-2.0
task_categories:
- text-generation
- fill-mask
language:
- en
tags:
- language-modeling
- masked-language-modeling
pretty_name: BERT Dataset (BookCorpus + Wikipedia 03/2022)
---
# Dataset Card for "bert_dataset_202203"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
fscheffczyk/20newsgroups_embeddings | 2023-02-05T17:59:34.000Z | [
"task_categories:feature-extraction",
"task_categories:sentence-similarity",
"task_categories:question-answering",
"multilinguality:monolingual",
"size_categories:unknown",
"language:en",
"news",
"20newsgroups",
"region:us"
] | fscheffczyk | null | null | null | 0 | 64 | ---
annotations_creators: []
language:
- en
language_creators: []
license: []
multilinguality:
- monolingual
pretty_name: Feature vector embeddings of the 20newsgroup dataset
size_categories:
- unknown
source_datasets:
- 20newsgroups dataset: http://qwone.com/~jason/20Newsgroups/
tags:
- news
- 20newsgroups
task_categories:
- feature-extraction
- sentence-similarity
- question-answering
task_ids: []
---
# Dataset Card for feature vector embeddings of the 20newsgroup dataset
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset contains vector embeddings of the [20newsgroups dataset](http://qwone.com/~jason/20Newsgroups/).
The embeddings were created with the [Sentence Transformers library](https://www.sbert.net/index.html) using the `multi-qa-MiniLM-L6-cos-v1` model.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset. |
fscheffczyk/2D_20newsgroups_embeddings | 2023-02-05T18:57:29.000Z | [
"task_categories:feature-extraction",
"task_categories:sentence-similarity",
"task_categories:question-answering",
"multilinguality:monolingual",
"size_categories:unknown",
"source_datasets:extended|fscheffczyk/20newsgroups_embeddings",
"language:en",
"news",
"20newsgroups",
"region:us"
] | fscheffczyk | null | null | null | 0 | 64 | ---
annotations_creators: []
language:
- en
language_creators: []
license: []
multilinguality:
- monolingual
pretty_name: Dimensional reduced feature vector embeddings of the 20newsgroup dataset
size_categories:
- unknown
source_datasets:
- extended|fscheffczyk/20newsgroups_embeddings
tags:
- news
- 20newsgroups
task_categories:
- feature-extraction
- sentence-similarity
- question-answering
task_ids: []
---
# Dataset Card for feature vector embeddings of the 20newsgroup dataset
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset contains dimensionality-reduced vector embeddings of the [20newsgroups dataset](http://qwone.com/~jason/20Newsgroups/), projected down to two dimensions.
The reduced embeddings were created with the [TruncatedSVD class](https://scikit-learn.org/stable/modules/generated/sklearn.decomposition.TruncatedSVD.html#sklearn.decomposition.TruncatedSVD) from the [scikit-learn library](https://scikit-learn.org/stable/index.html).
These reduced feature vectors are based on the [fscheffczyk/20newsgroups_embeddings dataset](https://huggingface.co/datasets/fscheffczyk/20newsgroups_embeddings).
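The reduction step described above might be sketched as follows; a random matrix stands in for the actual sentence embeddings (384 dimensions is the output size of `multi-qa-MiniLM-L6-cos-v1`):

```python
import numpy as np
from sklearn.decomposition import TruncatedSVD

# Stand-in for the 20newsgroups sentence embeddings.
rng = np.random.default_rng(0)
embeddings = rng.normal(size=(100, 384))

# Project the vectors down to two dimensions, as in this dataset.
svd = TruncatedSVD(n_components=2, random_state=0)
reduced = svd.fit_transform(embeddings)
print(reduced.shape)  # (100, 2)
```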
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset. |
alexandrainst/ddisco | 2023-02-08T18:12:26.000Z | [
"task_categories:text-classification",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"language:da",
"license:afl-3.0",
"discourse",
"coherence",
"region:us"
] | alexandrainst | null | null | null | 1 | 64 | ---
annotations_creators:
- expert-generated
language:
- da
language_creators:
- expert-generated
license:
- afl-3.0
multilinguality:
- monolingual
pretty_name: DDisco
size_categories:
- 1K<n<10K
source_datasets: []
tags:
- discourse
- coherence
task_categories:
- text-classification
task_ids: []
dataset_info:
features:
- name: text
dtype: string
- name: domain
dtype: string
- name: rating
dtype: int64
splits:
- name: train
num_bytes: 815571
num_examples: 801
- name: test
num_bytes: 209297
num_examples: 201
download_size: 672202
dataset_size: 1024868
---
# Dataset Card for DDisco
## Dataset Description
The DDisco dataset can be used to train models to classify levels of coherence in _Danish_ discourse. Each entry in the dataset is annotated with a discourse coherence label (rating from 1 to 3):

1. Low coherence (difficult to understand, unorganized, contains unnecessary details, and cannot be summarized briefly and easily)
2. Medium coherence
3. High coherence (easy to understand, well organized, contains only details that support the main point, and can be summarized briefly and easily)

Grammatical and typing errors are ignored (i.e. they do not affect the coherence score), and the coherence of a text is considered within its own domain.
### Additional Information
[DDisCo: A Discourse Coherence Dataset for Danish](https://aclanthology.org/2022.lrec-1.260.pdf)
### Contributions
[@ajders](https://github.com/ajders) |
hearmeneigh/e621-rising-v2-raw | 2023-05-15T03:26:11.000Z | [
"size_categories:1M<n<10M",
"furry",
"anthro",
"nsfw",
"e621",
"not-for-all-audiences",
"region:us"
] | hearmeneigh | null | null | null | 8 | 64 | ---
dataset_info:
features:
- name: id
dtype: string
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 1192520976398.634
num_examples: 2905671
download_size: 1172100381158
dataset_size: 1192520976398.634
pretty_name: 'E621 Rising: Raw Image Dataset v2'
size_categories:
- 1M<n<10M
viewer: false
tags:
- furry
- anthro
- nsfw
- e621
- not-for-all-audiences
---
**Warning: THIS dataset is NOT suitable for use by minors. The dataset contains X-rated/NSFW content.**
# E621 Rising: Raw Image Dataset v2
**2,905,671** images (~1.1TB) downloaded from `e621.net` with [tags](https://huggingface.co/datasets/hearmeneigh/e621-rising-v2-raw/raw/main/meta/tag-counts.json).
This is a raw, uncurated, and largely unprocessed dataset. You likely want to use the curated version, [available here](https://huggingface.co/datasets/hearmeneigh/e621-rising-v2-curated). This dataset contains all kinds of NSFW material. You have been warned.
## Image Processing
* Only `jpg` and `png` images were considered
* Image width and height have been clamped to `(0, 4096]px`; larger images have been resized to meet the limit
* Alpha channels have been removed
* All images have been converted to `jpg` format
* All images have been converted to TrueColor `RGB`
* All images have been verified to load with `Pillow`
* Metadata from E621 is [available here](https://huggingface.co/datasets/hearmeneigh/e621-rising-v2-raw/tree/main/meta).
## Tags
For a comprehensive list of tags and counts, [see here](https://huggingface.co/datasets/hearmeneigh/e621-rising-v2-raw/raw/main/meta/tag-counts.json).
### Changes From E621
* Tag names have been rewritten to `[a-z0-9_]` or `<category>:[a-z0-9_]`, e.g.
* `digital_media_(artwork)` => `meta:digital_media_artwork`
* `half-closed_eyes` => `halfclosed_eyes`
* Symbols have been prefixed with `symbol:`, e.g. `symbol:<3`
* Aspect ratio has been prefixed with `aspect_ratio:`, e.g. `aspect_ratio:16_9`
* All categories except `general` have been prefixed with the category name, e.g. `artist:somename`. The categories are:
* `artist`
* `copyright`
* `character`
* `species`
* `invalid`
* `meta`
* `lore`
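The renaming rules above could be approximated as in the sketch below; the exact pipeline is not published here, and the character-stripping behavior is inferred from the examples:

```python
import re

def normalize_tag(tag, category="general"):
    """Sketch of the tag rewriting described above: lowercase, drop any
    character outside [a-z0-9_], and prefix all categories except
    `general` with `<category>:`."""
    cleaned = re.sub(r"[^a-z0-9_]", "", tag.lower())
    return cleaned if category == "general" else f"{category}:{cleaned}"

print(normalize_tag("digital_media_(artwork)", "meta"))  # meta:digital_media_artwork
print(normalize_tag("half-closed_eyes"))                 # halfclosed_eyes
```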
### Additional Tags
* Image rating
* `rating:explicit`
* `rating:questionable`
* `rating:safe`
* Image score
* `score:above_250`
* `score:above_500`
* `score:above_1000`
* `score:above_1500`
* `score:above_2000`
* `score:below_250`
* `score:below_100`
* `score:below_50`
* `score:below_25`
* `score:below_0`
* Image favorites
* `favorites:above_4000`
* `favorites:above_3000`
* `favorites:above_2000`
* `favorites:above_1000`
* `favorites:below_1000`
* `favorites:below_500`
* `favorites:below_250`
* `favorites:below_100`
* `favorites:below_50`
* `favorites:below_25` |
AndyReas/frontpage-news | 2023-04-28T14:32:11.000Z | [
"task_categories:text-generation",
"size_categories:10M<n<100M",
"language:en",
"license:mit",
"region:us"
] | AndyReas | null | null | null | 2 | 64 | ---
license: mit
task_categories:
- text-generation
language:
- en
size_categories:
- 10M<n<100M
---
# Frontpage News
## The Data
The data consists of ~13,000,000 English articles from ~90 outlets. The articles were collected from the [Sciride News Mine](http://sciride.org/news.html), after which some additional cleaning / processing was performed on the data. The articles span from 2015-07-18 to 2020-10-17.
### Data processing
- Removing duplicate articles (a result of articles staying on the frontpage for multiple days).
- Removing repeated "outlet tags" appearing before or after headlines, such as "| Daily Mail Online".
- Removing dates that were not part of a natural sentence but rather "tags", such as "\[Some headline\] - 2020-12-03".
- Removing duplicate articles a second time: removing the dates made otherwise identical articles 100% identical, exposing duplicates that had previously been unique.
- Removing HTML elements that were missed during the first scraping.
- Unescaping HTML characters, replacing them with "regular" characters.
- Removing "junk" articles, such as empty articles and articles below a minimum length threshold.
Note: the cleaning process was not perfect, and some "outlet tags" still remain.
For instance, some outlets use "--" instead of "|" before a tag, and those were missed.
There is also the case of uncommon characters, such as "\u00a0" (a non-breaking space) being used instead of regular characters; this specific example results in tokenizers not being able to properly tokenize sentences that use it.
There are likely other issues that were overlooked during cleaning.
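The kind of cleaning described above could be sketched as follows. The patterns are purely illustrative assumptions; the actual cleaning scripts and regexes used for this dataset are not published here:

```python
import html
import re

def clean_headline(title: str) -> str:
    # Unescape HTML entities ("&amp;" -> "&") and normalize non-breaking spaces
    title = html.unescape(title).replace("\u00a0", " ")
    # Strip trailing "outlet tags" such as "| Daily Mail Online"
    title = re.sub(r"\s*\|\s*[^|]+$", "", title)
    # Strip trailing date tags such as " - 2020-12-03"
    title = re.sub(r"\s*-\s*\d{4}-\d{2}-\d{2}$", "", title)
    return title.strip()
```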
### Outlets
```
["9news.com.au", "abc.net.au", "abcnews.go.com", "afr.com", "aljazeera.com", "apnews.com", "bbc.com", "bostonglobe.com", "breakingnews.ie", "breitbart.com", "businessinsider.com", "cbc.ca", "cbsnews.com", "channel4.com", "chicagotribune.com", "cnbc.com", "csmonitor.com", "ctvnews.ca", "dailymail.co.uk", "dailystar.co.uk", "dw.com", "economist.com", "edition.cnn.com", "euronews.com", "express.co.uk", "foxnews.com", "france24.com", "globalnews.ca", "huffpost.com", "independent.co.uk", "independent.ie", "inquirer.com", "irishexaminer.com", "irishmirror.ie", "irishtimes.com", "itv.com", "latimes.com", "liverpoolecho.co.uk", "macleans.ca", "metro.co.uk", "mirror.co.uk", "montrealgazette.com", "morningstaronline.co.uk", "msnbc.com", "nbcnews.com", "news.com.au", "news.sky.com", "news.yahoo.com", "newshub.co.nz", "newsweek.com", "npr.org", "nypost.com", "nytimes.com", "nzherald.co.nz", "politico.com", "rcinet.ca", "reuters.com", "rfi.fr", "rnz.co.nz", "rt.com", "rte.ie", "sbs.com.au", "scoop.co.nz", "scotsman.com", "slate.com", "smh.com.au", "standard.co.uk", "stuff.co.nz", "telegraph.co.uk", "theage.com.au", "theatlantic.com", "theglobeandmail.com", "theguardian.com", "thehill.com", "thejournal.ie", "thestar.com", "thesun.co.uk", "thesun.ie", "thetimes.co.uk", "thewest.com.au", "time.com", "torontosun.com", "upi.com", "usatoday.com", "vancouversun.com", "walesonline.co.uk", "washingtonpost.com", "washingtontimes.com", "westernjournal.com", "wnd.com", "wsj.com"]
```
## Features (columns)
### title
A news headline.
### description
A news subheader.
### meta
- article_id: Article ID from the original Sciride News Mine; a hash of the original title + description.
- date: The date on which the article appeared on the frontpage.
- outlet: The outlet which published the article on their frontpage.
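The ID construction described above (a hash of title + description) can be sketched as follows. The hash function (MD5 here) and the concatenation scheme are assumptions for illustration, since the card does not specify them:

```python
import hashlib

def article_id(title: str, description: str) -> str:
    # Assumption: a hex digest over the UTF-8 concatenation of
    # title + description; the actual hash function is unspecified.
    return hashlib.md5((title + description).encode("utf-8")).hexdigest()
```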
### new_article_id
A new article ID created by hashing the title + description. It can differ from article_id because titles and descriptions changed during "cleaning". |
martinjosifoski/SynthIE | 2023-03-06T21:59:52.000Z | [
"language:en",
"license:mit",
"arxiv:2303.04132",
"region:us"
] | martinjosifoski | The paper ``Exploiting Asymmetry for Synthetic Training Data Generation: SynthIE and the Case of Information Extraction'' builds on the idea that even for hard tasks of interest (with input X and Y) -- for which human-annotation is not practical and high-quality annotated data is not available -- by reversing the task (from Y to X), useful data can be synthetically generated even when that original task cannot be solved directly by the LLM. This process enables the creation of a high-quality dataset of X-Y pairs that will enable the training/fine-tuning of models for the original task of interest.
In particular, the paper studies the idea in the context of closed information extraction (IE), where a model is tasked with extracting the exhaustive set of facts expressed in natural language text. The synthetic data generation pipeline proposed in the paper comprises three primary components: (i) construction of a knowledge graph containing the entities and relations of interest; (ii) sampling of coherent triplet sets from the KG with comprehensive coverage of the entities and relations, and (iii) generation of high-quality text, expressing the triplets without any supplementary information. | @article{josifoski2023exploiting,
title={Exploiting Asymmetry for Synthetic Training Data Generation: {S}ynth{IE} and The Case of Information Extraction},
author={Josifoski, Martin and Sakota, Marija and Peyrard, Maxime and West, Robert},
journal={arXiv preprint arXiv:2303.04132},
year={2023}
} | null | 4 | 64 | ---
license: mit
language:
- en
pretty_name: SynthIE
---
# Dataset Card for SynthIE
## Dataset Description
- **Homepage and Repository:** https://github.com/epfl-dlab/SynthIE
- **Paper:** https://arxiv.org/abs/2303.04132
### Dataset Summary
[Exploiting Asymmetry for Synthetic Training Data Generation: SynthIE and the Case of Information Extraction](https://arxiv.org/abs/2303.04132) builds on the idea that even for hard tasks of interest (with input X and output Y) -- for which human annotation is not practical and high-quality annotated data is not available -- useful data can be synthetically generated by reversing the task (going from Y to X), even when the original task cannot be solved directly by the LLM. This process enables the creation of a high-quality dataset of X-Y pairs that can be used to train or fine-tune models for the original task of interest.
In particular, the paper studies the idea in the context of closed information extraction (IE), where a model is tasked with extracting the exhaustive set of facts expressed in natural language text. The synthetic data generation pipeline proposed in the paper comprises three primary components: (i) construction of a knowledge graph containing the entities and relations of interest; (ii) sampling of coherent triplet sets from the KG with comprehensive coverage of the entities and relations, and (iii) generation of high-quality text, expressing the triplets without any supplementary information. For more details regarding the dataset construction procedure, see the [paper](https://arxiv.org/abs/2303.04132).
We used this pipeline to generate two large high-quality datasets:<br>
**SynthIE-code**: consisting of around 1.8M training, 10K validation, and 50K test samples generated with [code-davinci-002](https://platform.openai.com/docs/models/gpt-3-5) <br>
**SynthIE-text**: consisting of 10K validation and 50K test samples generated with [text-davinci-003](https://platform.openai.com/docs/models/gpt-3-5) <br>
The text for the validation and test data points in SynthIE-code and SynthIE-text corresponds to the same triplet sets.
The resulting data is then used to train [SynthIE](https://github.com/epfl-dlab/SynthIE), a series of T5-based versions of [GenIE](https://github.com/epfl-dlab/GenIE) -- a recently proposed autoregressive closed IE system; as well as to enable a more accurate evaluation. As a baseline, T5 versions of GenIE are trained on the same dataset, [REBEL](https://aclanthology.org/2021.findings-emnlp.204.pdf), as the original work. The (processed) version of this dataset, suitable for closed IE and used in the paper's experiments, is provided in this repository.
According to the human evaluation conducted in the paper, the synthetically generated data is substantially more faithful than the distantly supervised REBEL: it contains around 15% false negative (as opposed to REBEL's 70%) and 22% false positive (as opposed to REBEL's 56%) annotations, while uniformly covering all relations (see the paper for more details).
### Languages
To stay comparable to GenIE, [SynthIE](https://github.com/epfl-dlab/SynthIE) considers only English. Therefore, the text in SynthIE-code and SynthIE-text is generated in English only. However, the triplets' constituents come from WikiData and are language invariant. Therefore, triplet sets with labels for many languages can easily be obtained.
## Dataset Structure
The SynthIE meta-dataset actually comprises 3 datasets:
- **SynthIE-code** (`synthie_code`)
- **SynthIE-text** (`synthie_text`)
- **REBEL** (`rebel`)
**SynthIE-code**
The samples in this dataset were generated with `code-davinci-002`.
| | Train | Valid | Test |
| ----- | ----- | ----- | ----- |
| Data Points | 1,815,378 | 10,000 | 50,286 |
| Triplets | 6,055,911 | 34,262 | 172,991 |
| Entities | 1,806,126 | 27,553 | 105,176 |
| Relations | 888 | 883 | 888 |
**SynthIE-text**
The samples in this dataset were generated with `text-davinci-003`.
| | Train | Valid | Test |
| ----- | ----- | ----- | ----- |
| Data Points | -- | 10,000 | 50,286 |
| Triplets | -- | 34,262 | 172,991 |
| Entities | -- | 27,553 | 105,176 |
| Relations | -- | 883 | 888 |
**REBEL**
The samples in this dataset are processed and further annotated from the already existing [REBEL](https://huggingface.co/datasets/Babelscape/rebel-dataset) dataset.
| | Train | Valid | Test |
| ----- | ----- | ----- | ----- |
| Data Points | 2,813,210 | 155,926 | 156,449 |
| Triplets | 7,187,915 | 397,326 | 398,252 |
| Entities | 2,038,741 | 205,080 | 205,549 |
| Relations | 1071 | 691 | 690 |
Note that REBEL is substantially more skewed than SynthIE-code and SynthIE-text. Here are the relation-frequency statistics (in terms of data points) for REBEL and SynthIE-code.
| | min | 1st quantile | median | 3rd quantile | max |
| ----- | ----- | ----- | ----- | ----- | ----- |
| SynthIE-code | 61 | 1043 | 1691 | 3944 | 499,783 |
| REBEL | 1 | 7 | 47 | 625 | 1,202,489 |
**SynthIE-code/SynthIE-text/REBEL processed**
Additionally, we provide a processed version (the one used in the paper) of each dataset. The processing consists of pre-computations that were run to speed up data loading for the experiments. The key difference is that in the processed versions of SynthIE-code and SynthIE-text, the target triplets are consistently ordered according to a heuristic that detects each constituent entity's position of appearance in the text, so that triplets whose entities appear earlier in the text come earlier in the output linearization (cf. paper). The triplets for REBEL are ordered even in the "unprocessed" version. To load the processed version of a dataset, add the suffix "_pc" to the original identifier (i.e., synthie_code_pc, synthie_text_pc, rebel_pc). The processing is performed by applying [this](https://github.com/epfl-dlab/SynthIE/blob/main/scripts/pre_computing.py) script to the original data.
### Data Fields
All of the datasets share the same schema. Here is a list of the fields paired with a description.
- `id`: A unique numeric identifier, starting from 0 for each dataset.
- `text`: A string expressing the text corresponding to this sample.
- `triplets`: A list of triplets that are expressed in the text. Each triplet corresponds to a dictionary
- `subject`: The subject refers to an entity. It is a dictionary of:
- `surfaceform`: A textual label corresponding to the title of the entity's English Wikipedia page
- `uri`: A string corresponding to the entity's WikiData identifier
- `relation`: The relation refers to a relation. It is a dictionary of:
- `surfaceform`: The textual label assigned to the WikiData item corresponding to the given relation.
- `uri`: A string corresponding to the relation's WikiData identifier
- `object`: Same as the subject, the object refers to an entity and corresponds to a dictionary with the same structure.
- `entities`: A list comprising all the entities expressed in the text (appearing as a subject or an object in any of the triplets). Each entity is expressed as a dictionary following the same structure as the `subject` and `object` entities in the triplet list.
- `relations`: A list comprising all the relations expressed in the text (appearing as the relation in any of the triplets). Each relation is expressed as a dictionary following the same structure as the `relation` in the triplet list.
Here is an example of a data point:
```
{'id': 1,
'text': 'The Journal of Colloid and Interface Science is a bibliographic '
'review indexed in Scopus and published by Elsevier. Its main subject '
'is chemical engineering, and it is written in the English language. '
'It is based in the United States, and is owned by Elsevier, the same '
'company that owns Scopus.',
'triplets': [{'subject': "{'surfaceform': "
"'Journal_of_Colloid_and_Interface_Science', 'uri': "
"'Q3902043'}",
'predicate': "{'surfaceform': 'indexed in bibliographic "
"review', 'uri': 'P8875'}",
'object': "{'surfaceform': 'Scopus', 'uri': 'Q371467'}"},
{'subject': "{'surfaceform': "
"'Journal_of_Colloid_and_Interface_Science', 'uri': "
"'Q3902043'}",
'predicate': "{'surfaceform': 'main subject', 'uri': 'P921'}",
'object': "{'surfaceform': 'Chemical_engineering', 'uri': "
"'Q83588'}"},
{'subject': "{'surfaceform': "
"'Journal_of_Colloid_and_Interface_Science', 'uri': "
"'Q3902043'}",
'predicate': "{'surfaceform': 'language of work or name', "
"'uri': 'P407'}",
'object': "{'surfaceform': 'English_language', 'uri': 'Q1860'}"},
{'subject': "{'surfaceform': "
"'Journal_of_Colloid_and_Interface_Science', 'uri': "
"'Q3902043'}",
'predicate': "{'surfaceform': 'publisher', 'uri': 'P123'}",
'object': "{'surfaceform': 'Elsevier', 'uri': 'Q746413'}"},
{'subject': "{'surfaceform': "
"'Journal_of_Colloid_and_Interface_Science', 'uri': "
"'Q3902043'}",
'predicate': "{'surfaceform': 'country of origin', 'uri': "
"'P495'}",
'object': "{'surfaceform': 'United_States', 'uri': 'Q30'}"},
{'subject': "{'surfaceform': 'Scopus', 'uri': 'Q371467'}",
'predicate': "{'surfaceform': 'owned by', 'uri': 'P127'}",
'object': "{'surfaceform': 'Elsevier', 'uri': 'Q746413'}"}],
'entities': [{'surfaceform': 'Journal_of_Colloid_and_Interface_Science',
'uri': 'Q3902043'},
{'surfaceform': 'Scopus', 'uri': 'Q371467'},
{'surfaceform': 'Chemical_engineering', 'uri': 'Q83588'},
{'surfaceform': 'English_language', 'uri': 'Q1860'},
{'surfaceform': 'Elsevier', 'uri': 'Q746413'},
{'surfaceform': 'United_States', 'uri': 'Q30'}],
'relations': [{'surfaceform': 'indexed in bibliographic review',
'uri': 'P8875'},
{'surfaceform': 'main subject', 'uri': 'P921'},
{'surfaceform': 'language of work or name', 'uri': 'P407'},
{'surfaceform': 'publisher', 'uri': 'P123'},
{'surfaceform': 'country of origin', 'uri': 'P495'},
{'surfaceform': 'owned by', 'uri': 'P127'}]}
```
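In the printed example, each triplet constituent is the string representation of a Python dict rather than a nested dict. A sketch of recovering the structured form with `ast.literal_eval`, based only on the example shown (the on-disk format may differ):

```python
import ast

# One triplet from the example above, as printed: each constituent is the
# string repr of a dict.
triplet = {
    "subject": "{'surfaceform': 'Scopus', 'uri': 'Q371467'}",
    "predicate": "{'surfaceform': 'owned by', 'uri': 'P127'}",
    "object": "{'surfaceform': 'Elsevier', 'uri': 'Q746413'}",
}
# ast.literal_eval safely parses the dict literals back into dicts
parsed = {key: ast.literal_eval(value) for key, value in triplet.items()}
```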
### Data Splits
Each dataset (except SynthIE-text, which does not have a train set) has the same 4 splits:
- `train`
- `validation`
- `test`
- `test_small`
The first three are self-explanatory; the `test_small` split is a randomly sampled subset of the `test` split in which the IDs of the data points are kept the same as in the test set from which they were sampled (i.e., after sampling, IDs are not reset to 0 and reassigned).
## Dataset Creation
Collecting datasets for the closed IE task is time-consuming, expensive, and even hardly feasible, as it requires annotators to know the entire entity and relation catalogs and reason about all possible facts expressed in the text. As a result, only small or noisy datasets exist. The only large dataset available, REBEL, suffers from several problems: (i) Noise: it is constructed based on distant supervision, and for many data points, the target set does not contain all the facts expressed in the text or is partially incorrect; (ii) Skewness: most relations appear only a few times in the dataset, resulting in models that ignore most of the information when used for training and poor estimates of performance when used for evaluation.
This dataset is constructed using a synthetic data generation pipeline, proposed in the paper [Exploiting Asymmetry for Synthetic Training Data Generation: SynthIE and the Case of Information Extraction](https://arxiv.org/abs/2303.04132), and serves as a use case for a task for which (i) high-quality annotated data is not available; (ii) human-annotation is not practical; (iii) the direct task (closed IE) is challenging for an LLM. Concretely, by reversing the task and generating the data in the opposite direction -- going from triplets to text -- high-quality useful data can be generated. The pipeline used to construct the dataset comprises three components: (i) construction of a knowledge graph containing the entities and relations of interest; (ii) sampling of coherent triplet sets from the KG with comprehensive coverage of the entities and relations, and (iii) generation of high-quality text, expressing the triplets without any supplementary information. For more details regarding the dataset construction procedure and considerations for using the data, see the "Synthetic Data Generation", "Discussion", and "Limitations" sections of the [paper](https://arxiv.org/abs/2303.04132).
## Additional Information
### Licensing Information
The dataset is licensed under the terms of the MIT license.
### Citation Information
```
@article{josifoski2023exploiting,
title={Exploiting Asymmetry for Synthetic Training Data Generation: {S}ynth{IE} and The Case of Information Extraction},
author={Josifoski, Martin and Sakota, Marija and Peyrard, Maxime and West, Robert},
journal={arXiv preprint arXiv:2303.04132},
year={2023}
}
```
|
OpenBioML/chebi_20 | 2023-03-03T22:27:47.000Z | [
"region:us"
] | OpenBioML | null | null | null | 0 | 64 | Entry not found |
multimodalart/facesyntheticsspigacaptioned | 2023-03-23T14:56:28.000Z | [
"region:us"
] | multimodalart | null | null | null | 12 | 64 | ---
dataset_info:
features:
- name: image
dtype: image
- name: image_seg
dtype: image
- name: landmarks
dtype: string
- name: spiga
sequence:
sequence: float64
- name: spiga_seg
dtype: image
- name: image_caption
dtype: string
splits:
- name: train
num_bytes: 31087489990.0
num_examples: 100000
download_size: 31011261945
dataset_size: 31087489990.0
---
# Dataset Card for "face_synthetics_spiga_captioned"
This is a copy of the [Microsoft FaceSynthetics dataset with SPIGA-calculated landmark annotations](https://huggingface.co/datasets/pcuenq/face_synthetics_spiga), with additional BLIP-generated captions.
For a copy of the original FaceSynthetics dataset with no extra annotations, please refer to [pcuenq/face_synthetics](https://huggingface.co/datasets/pcuenq/face_synthetics).
Here is the code for parsing the dataset and generating the BLIP captions:
```py
from datasets import load_dataset
from transformers import pipeline

dataset_name = "pcuenq/face_synthetics_spiga"
faces = load_dataset(dataset_name)
faces = faces["train"]

# BLIP image-captioning model, running on GPU 0
captioner = pipeline("image-to-text", model="Salesforce/blip-image-captioning-large", device=0)

def caption_image_data(example):
    # Generate a caption for each image and store it in a new column
    image = example["image"]
    example["image_caption"] = captioner(image)[0]["generated_text"]
    return example

faces_proc = faces.map(caption_image_data)
faces_proc.push_to_hub("multimodalart/face_synthetics_spiga_captioned")
```
|
Abzu/CodeAlpacaPython | 2023-06-04T19:38:39.000Z | [
"task_categories:text-generation",
"language:en",
"license:cc",
"region:us"
] | Abzu | null | null | null | 3 | 64 | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: response
dtype: string
splits:
- name: train
num_bytes: 2570214.2624451965
num_examples: 8477
- name: test
num_bytes: 286526.48926610086
num_examples: 942
download_size: 1488032
dataset_size: 2856740.7517112973
license: cc
task_categories:
- text-generation
language:
- en
---
# Dataset Card for "CodeAlpacaPython"
This is HuggingFaceH4/CodeAlpaca_20K restricted to Python prompts only.
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
TigerResearch/sft_zh | 2023-06-09T12:21:42.000Z | [
"language:zh",
"license:apache-2.0",
"region:us"
] | TigerResearch | null | null | null | 22 | 64 | ---
license: apache-2.0
language:
- zh
---
A collection of Chinese SFT (sft-zh) fine-tuning data from the [Tigerbot](https://github.com/TigerResearch/TigerBot) open-source project.
This collection covers the other Chinese SFT datasets open-sourced under this organization, so there is no need to download them separately.
## Usage
```python
import datasets
ds_sft = datasets.load_dataset('TigerResearch/sft_zh')
```
## File breakdown
| Type | Language | Dataset file | Count
| ------------ | ---- | -------------------------------------------------------------------------------------------------------------------------------- | ----------- |
| Alpaca (Chinese) | Chinese | [tigerbot-alpaca-zh-0.5m](https://huggingface.co/datasets/TigerResearch/sft_zh/blob/main/tigerbot-alpaca-zh-0.5m.json) | 0.5m |
| Encyclopedia Q&A | Chinese | [tigerbot-wiki-qa-1k](https://huggingface.co/datasets/TigerResearch/sft_zh/blob/main/tigerbot-wiki-qa-zh-1k.json) | 1k |
| Classic literature Q&A | Chinese | [tigerbot-book-qa-1k](https://huggingface.co/datasets/TigerResearch/sft_zh/blob/main/tigerbot-book-qa-1k.json) | 1k |
| Riddles | Chinese | [tigerbot-riddle-qa-1k](https://huggingface.co/datasets/TigerResearch/sft_zh/blob/main/tigerbot-riddle-qa-1k.json) | 1k |
| Reading comprehension | Chinese | [tigerbot-superclue-c3-zh-5k](https://huggingface.co/datasets/TigerResearch/sft_zh/blob/main/tigerbot-superclue-c3-zh-5k.json) | 5k |
| Q&A | Chinese | [tigerbot-hc3-zh-12k](https://huggingface.co/datasets/TigerResearch/sft_zh/blob/main/tigerbot-hc3-zh-12k.json) | 12k |
| Zhihu Q&A | Chinese | [tigerbot-zhihu-zh-10k](https://huggingface.co/datasets/TigerResearch/sft_zh/blob/main/tigerbot-zhihu-zh-10k.json) | 10k |
|
DISCOX/DISCO-200K-random | 2023-06-20T14:26:06.000Z | [
"size_categories:100K<n<1M",
"language:en",
"license:cc-by-4.0",
"music",
"region:us"
] | DISCOX | null | null | null | 0 | 64 | ---
license: cc-by-4.0
dataset_info:
features:
- name: video_url_youtube
dtype: string
- name: video_title_youtube
dtype: string
- name: track_name_spotify
dtype: string
- name: preview_url_spotify
dtype: string
- name: track_id_spotify
dtype: string
- name: album_id_spotify
dtype: string
- name: artist_id_spotify
sequence: string
- name: track_duration_spotify_ms
dtype: int64
- name: video_duration_youtube_sec
dtype: float64
- name: primary_artist_name_spotify
dtype: string
- name: search_query_youtube
dtype: string
- name: first_artist_follower_spotify
dtype: float64
- name: artist_genres_spotify
sequence: string
- name: track_release_date_spotify
dtype: string
- name: explicit_content_spotify
dtype: bool
- name: video_view_count_youtube
dtype: float64
- name: video_thumbnail_url_youtube
dtype: string
- name: video_description_youtube
dtype: string
- name: similarity_duration
dtype: float64
- name: similarity_query_video_title
dtype: float64
- name: similarity_query_description
dtype: float64
- name: similarity_audio
dtype: float64
- name: audio_embedding_spotify
sequence: float32
- name: audio_embedding_youtube
sequence: float32
splits:
- name: train
num_bytes: 965534426.0
num_examples: 200000
download_size: 1160459401
dataset_size: 965534426.0
language:
- en
tags:
- music
size_categories:
- 100K<n<1M
---
### Getting Started
You can load the dataset using the Hugging Face `datasets` library:
```python
from datasets import load_dataset
ds = load_dataset("DISCOX/DISCO-200K-random")
```
The dataset contains 200,000 random samples from the DISCO-10M dataset found [here](https://huggingface.co/datasets/DISCOX/DISCO-10M).
## Dataset Structure
The dataset contains the following features:
```
{
'video_url_youtube',
'video_title_youtube',
'track_name_spotify',
'video_duration_youtube_sec',
'preview_url_spotify',
'video_view_count_youtube',
'video_thumbnail_url_youtube',
'search_query_youtube',
'video_description_youtube',
'track_id_spotify',
'album_id_spotify',
'artist_id_spotify',
'track_duration_spotify_ms',
'primary_artist_name_spotify',
'track_release_date_spotify',
'explicit_content_spotify',
'similarity_duration',
'similarity_query_video_title',
'similarity_query_description',
'similarity_audio',
'audio_embedding_spotify',
'audio_embedding_youtube',
}
```
More details about the dataset can be found [here](https://huggingface.co/datasets/DISCOX/DISCO-10M).
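As an illustration of how the two embedding features relate, a plain cosine similarity between `audio_embedding_spotify` and `audio_embedding_youtube` can be computed as below. This is an assumption for illustration; the card does not state how `similarity_audio` was actually computed:

```python
import math

def cosine_similarity(a, b):
    # Cosine similarity between two embedding vectors of equal length
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)
```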
|
KagglingFace/vit-cats-dogs | 2023-07-02T12:19:43.000Z | [
"license:mit",
"region:us"
] | KagglingFace | null | null | null | 1 | 64 | ---
license: mit
---
|
juanka0357/bitcoin-sentiment-analysis | 2023-07-10T17:00:33.000Z | [
"license:unknown",
"region:us"
] | juanka0357 | null | null | null | 1 | 64 | ---
license: unknown
---
|
C-MTEB/QBQTC | 2023-07-28T13:38:12.000Z | [
"region:us"
] | C-MTEB | null | null | null | 0 | 64 | ---
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
dataset_info:
features:
- name: sentence1
dtype: string
- name: sentence2
dtype: string
- name: score
dtype: int64
splits:
- name: test
num_bytes: 524191
num_examples: 5000
download_size: 387552
dataset_size: 524191
---
# Dataset Card for "QBQTC"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
GokhanAI/AGENT | 2023-08-01T13:25:10.000Z | [
"license:apache-2.0",
"region:us"
] | GokhanAI | null | null | null | 2 | 64 | ---
license: apache-2.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: prompt
dtype: string
- name: response
dtype: string
- name: chosen
dtype: string
- name: rejected
dtype: string
splits:
- name: train
num_bytes: 53358482.3453903
num_examples: 83179
- name: test
num_bytes: 1282979.6546097042
num_examples: 2000
download_size: 19123315
dataset_size: 54641462.0
---
|
bigplane/liuyifei_lora | 2023-08-11T07:46:34.000Z | [
"license:openrail",
"region:us"
] | bigplane | null | null | null | 0 | 64 | ---
license: openrail
---
|
dim/wikihow_en | 2023-08-15T12:10:58.000Z | [
"license:mit",
"region:us"
] | dim | null | null | null | 0 | 64 | ---
license: mit
dataset_info:
features:
- name: INSTRUCTION
dtype: string
- name: RESPONSE
dtype: string
- name: SOURCE
dtype: string
- name: METADATA
dtype: string
splits:
- name: train
num_bytes: 17125965.190821543
num_examples: 1995
download_size: 8899392
dataset_size: 17125965.190821543
---
|
dim/roleplay_instruct_v2_final | 2023-10-04T14:15:48.000Z | [
"region:us"
] | dim | null | null | null | 0 | 64 | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 4382098
num_examples: 7188
download_size: 2880335
dataset_size: 4382098
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "roleplay_instruct_v2_final"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
M-A-D/Mixed-Arabic-Datasets-Repo | 2023-10-08T12:42:13.000Z | [
"task_categories:text-classification",
"task_categories:question-answering",
"task_categories:translation",
"task_categories:summarization",
"task_categories:conversational",
"task_categories:text-generation",
"task_categories:text2text-generation",
"task_categories:fill-mask",
"size_categories:1B<n... | M-A-D | null | null | null | 10 | 64 | ---
language:
- ar
size_categories:
- 1B<n<10B
task_categories:
- text-classification
- question-answering
- translation
- summarization
- conversational
- text-generation
- text2text-generation
- fill-mask
pretty_name: Mixed Arabic Datasets (MAD) Corpus
dataset_info:
- config_name: Ara--Abdelaziz--MNAD.v1
features:
- name: Title
dtype: string
- name: Body
dtype: string
- name: Category
dtype: string
splits:
- name: train
num_bytes: 1101921980
num_examples: 418563
download_size: 526103216
dataset_size: 1101921980
- config_name: Ara--Abdelaziz--QuranExe
features:
- name: text
dtype: string
- name: resource_name
dtype: string
- name: verses_keys
dtype: string
splits:
- name: train
num_bytes: 133108687
num_examples: 49888
download_size: 58769326
dataset_size: 133108687
- config_name: Ara--Abdelaziz--tweet_sentiment_multilingual
features:
- name: text
dtype: string
- name: label
dtype: int64
splits:
- name: train
num_bytes: 306108
num_examples: 1839
download_size: 172509
dataset_size: 306108
- config_name: Ara--Ali-C137--Hindawi-Books-dataset
features:
- name: BookLink
dtype: string
- name: BookName
dtype: string
- name: AuthorName
dtype: string
- name: AboutBook
dtype: string
- name: ChapterLink
dtype: string
- name: ChapterName
dtype: string
- name: ChapterText
dtype: string
- name: AboutAuthor
dtype: string
splits:
- name: train
num_bytes: 1364854259
num_examples: 49821
download_size: 494678002
dataset_size: 1364854259
- config_name: Ara--Goud--Goud-sum
features:
- name: article
dtype: string
- name: headline
dtype: string
- name: categories
dtype: string
splits:
- name: train
num_bytes: 288296544
num_examples: 139288
download_size: 147735776
dataset_size: 288296544
- config_name: Ara--J-Mourad--MNAD.v1
features:
- name: Title
dtype: string
- name: Body
dtype: string
- name: Category
dtype: string
splits:
- name: train
num_bytes: 1101921980
num_examples: 418563
download_size: 527154122
dataset_size: 1101921980
- config_name: Ara--JihadZa--IADD
features:
- name: Sentence
dtype: string
- name: Region
dtype: string
- name: DataSource
dtype: string
- name: Country
dtype: string
splits:
- name: train
num_bytes: 19167070
num_examples: 135804
download_size: 8644491
dataset_size: 19167070
- config_name: Ara--LeMGarouani--MAC-corpus
features:
- name: tweets
dtype: string
- name: type
dtype: string
- name: class
dtype: string
splits:
- name: train
num_bytes: 1945646
num_examples: 18087
download_size: 866198
dataset_size: 1945646
- config_name: Ara--MBZUAI--Bactrian-X
features:
- name: instruction
dtype: string
- name: input
dtype: string
- name: id
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 66093524
num_examples: 67017
download_size: 33063779
dataset_size: 66093524
- config_name: Ara--OpenAssistant--oasst1
features:
- name: message_id
dtype: string
- name: parent_id
dtype: string
- name: user_id
dtype: string
- name: created_date
dtype: string
- name: text
dtype: string
- name: role
dtype: string
- name: lang
dtype: string
- name: review_count
dtype: int32
- name: review_result
dtype: bool
- name: deleted
dtype: bool
- name: rank
dtype: float64
- name: synthetic
dtype: bool
- name: model_name
dtype: 'null'
- name: detoxify
dtype: 'null'
- name: message_tree_id
dtype: string
- name: tree_state
dtype: string
- name: emojis
struct:
- name: count
sequence: int32
- name: name
sequence: string
- name: labels
struct:
- name: count
sequence: int32
- name: name
sequence: string
- name: value
sequence: float64
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 58168
num_examples: 56
download_size: 30984
dataset_size: 58168
- config_name: Ara--Wikipedia
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 3052201469
num_examples: 1205403
download_size: 1316212231
dataset_size: 3052201469
- config_name: Ara--bigscience--xP3
features:
- name: inputs
dtype: string
- name: targets
dtype: string
splits:
- name: train
num_bytes: 4727881680
num_examples: 2148955
download_size: 2805060725
dataset_size: 4727881680
- config_name: Ara--cardiffnlp--tweet_sentiment_multilingual
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': negative
'1': neutral
'2': positive
splits:
- name: train
num_bytes: 306108
num_examples: 1839
- name: validation
num_bytes: 53276
num_examples: 324
- name: test
num_bytes: 141536
num_examples: 870
download_size: 279900
dataset_size: 500920
- config_name: Ara--miracl--miracl
features:
- name: query_id
dtype: string
- name: query
dtype: string
- name: positive_passages
list:
- name: docid
dtype: string
- name: text
dtype: string
- name: title
dtype: string
- name: negative_passages
list:
- name: docid
dtype: string
- name: text
dtype: string
- name: title
dtype: string
splits:
- name: train
num_bytes: 32012083
num_examples: 3495
download_size: 15798509
dataset_size: 32012083
- config_name: Ara--mustapha--QuranExe
features:
- name: text
dtype: string
- name: resource_name
dtype: string
- name: verses_keys
dtype: string
splits:
- name: train
num_bytes: 133108687
num_examples: 49888
download_size: 58769417
dataset_size: 133108687
- config_name: Ara--pain--Arabic-Tweets
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 41639770853
num_examples: 202700438
download_size: 22561651700
dataset_size: 41639770853
- config_name: Ara--saudinewsnet
features:
- name: source
dtype: string
- name: url
dtype: string
- name: date_extracted
dtype: string
- name: title
dtype: string
- name: author
dtype: string
- name: content
dtype: string
splits:
- name: train
num_bytes: 103654009
num_examples: 31030
download_size: 49117164
dataset_size: 103654009
- config_name: Ary--AbderrahmanSkiredj1--Darija-Wikipedia
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 8104410
num_examples: 4862
download_size: 3229966
dataset_size: 8104410
- config_name: Ary--Ali-C137--Darija-Stories-Dataset
features:
- name: ChapterName
dtype: string
- name: ChapterLink
dtype: string
- name: Author
dtype: string
- name: Text
dtype: string
- name: Tags
dtype: int64
splits:
- name: train
num_bytes: 476926644
num_examples: 6142
download_size: 241528641
dataset_size: 476926644
- config_name: Ary--Wikipedia
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 10007364
num_examples: 6703
download_size: 4094377
dataset_size: 10007364
- config_name: Arz--Wikipedia
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 1364641408
num_examples: 1617770
download_size: 306420318
dataset_size: 1364641408
configs:
- config_name: Ara--Abdelaziz--MNAD.v1
data_files:
- split: train
path: Ara--Abdelaziz--MNAD.v1/train-*
- config_name: Ara--Abdelaziz--QuranExe
data_files:
- split: train
path: Ara--Abdelaziz--QuranExe/train-*
- config_name: Ara--Abdelaziz--tweet_sentiment_multilingual
data_files:
- split: train
path: Ara--Abdelaziz--tweet_sentiment_multilingual/train-*
- config_name: Ara--Ali-C137--Hindawi-Books-dataset
data_files:
- split: train
path: Ara--Ali-C137--Hindawi-Books-dataset/train-*
- config_name: Ara--Goud--Goud-sum
data_files:
- split: train
path: Ara--Goud--Goud-sum/train-*
- config_name: Ara--J-Mourad--MNAD.v1
data_files:
- split: train
path: Ara--J-Mourad--MNAD.v1/train-*
- config_name: Ara--JihadZa--IADD
data_files:
- split: train
path: Ara--JihadZa--IADD/train-*
- config_name: Ara--LeMGarouani--MAC-corpus
data_files:
- split: train
path: Ara--LeMGarouani--MAC-corpus/train-*
- config_name: Ara--MBZUAI--Bactrian-X
data_files:
- split: train
path: Ara--MBZUAI--Bactrian-X/train-*
- config_name: Ara--OpenAssistant--oasst1
data_files:
- split: train
path: Ara--OpenAssistant--oasst1/train-*
- config_name: Ara--Wikipedia
data_files:
- split: train
path: Ara--Wikipedia/train-*
- config_name: Ara--bigscience--xP3
data_files:
- split: train
path: Ara--bigscience--xP3/train-*
- config_name: Ara--cardiffnlp--tweet_sentiment_multilingual
data_files:
- split: train
path: Ara--cardiffnlp--tweet_sentiment_multilingual/train-*
- split: validation
path: Ara--cardiffnlp--tweet_sentiment_multilingual/validation-*
- split: test
path: Ara--cardiffnlp--tweet_sentiment_multilingual/test-*
- config_name: Ara--miracl--miracl
data_files:
- split: train
path: Ara--miracl--miracl/train-*
- config_name: Ara--mustapha--QuranExe
data_files:
- split: train
path: Ara--mustapha--QuranExe/train-*
- config_name: Ara--pain--Arabic-Tweets
data_files:
- split: train
path: Ara--pain--Arabic-Tweets/train-*
- config_name: Ara--saudinewsnet
data_files:
- split: train
path: Ara--saudinewsnet/train-*
- config_name: Ary--AbderrahmanSkiredj1--Darija-Wikipedia
data_files:
- split: train
path: Ary--AbderrahmanSkiredj1--Darija-Wikipedia/train-*
- config_name: Ary--Ali-C137--Darija-Stories-Dataset
data_files:
- split: train
path: Ary--Ali-C137--Darija-Stories-Dataset/train-*
- config_name: Ary--Wikipedia
data_files:
- split: train
path: Ary--Wikipedia/train-*
- config_name: Arz--Wikipedia
data_files:
- split: train
path: Arz--Wikipedia/train-*
---
# Dataset Card for "Mixed Arabic Datasets (MAD) Corpus"
**The Mixed Arabic Datasets Corpus : A Community-Driven Collection of Diverse Arabic Texts**
## Dataset Description
The Mixed Arabic Datasets (MAD) presents a dynamic compilation of diverse Arabic texts sourced from various online platforms and datasets. It addresses a critical challenge faced by researchers, linguists, and language enthusiasts: the fragmentation of Arabic language datasets across the Internet. With MAD, we are trying to centralize these dispersed resources into a single, comprehensive repository.
Encompassing a wide spectrum of content, ranging from social media conversations to literary masterpieces, MAD captures the rich tapestry of Arabic communication, including both standard Arabic and regional dialects.
This corpus offers comprehensive insights into the linguistic diversity and cultural nuances of Arabic expression.
## Usage
If you want to use this dataset, pick one of the available configs, for example:
`Ara--MBZUAI--Bactrian-X` | `Ara--OpenAssistant--oasst1` | `Ary--AbderrahmanSkiredj1--Darija-Wikipedia`
`Ara--Wikipedia` | `Ary--Wikipedia` | `Arz--Wikipedia`
`Ary--Ali-C137--Darija-Stories-Dataset` | `Ara--Ali-C137--Hindawi-Books-dataset`
Example of usage:
```python
from datasets import load_dataset

dataset = load_dataset('M-A-D/Mixed-Arabic-Datasets-Repo', 'Ara--MBZUAI--Bactrian-X')
```
If you loaded multiple datasets and want to merge them, you can simply leverage `concatenate_datasets()` from `datasets`:
```python
from datasets import concatenate_datasets

dataset3 = concatenate_datasets([dataset1['train'], dataset2['train']])
```
Note: process the datasets before merging to make sure the resulting dataset has a consistent schema.
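Because each config keeps its source dataset's column names (e.g. `text`, `tweets`, `content`), concatenation only works once the schemas match. A minimal, library-independent sketch of that normalization step (the column names and rows below are hypothetical examples, not part of MAD):

```python
# Map each source's text-bearing column onto a shared 'text' field,
# then merge the normalized records into one list.
def normalize(records, text_column):
    """Keep only a common 'text' field from heterogeneous records."""
    return [{"text": r[text_column]} for r in records]

# Hypothetical rows mimicking two MAD configs with different schemas.
mac_rows = [{"tweets": "مثال", "type": "tweet", "class": "pos"}]
news_rows = [{"content": "خبر", "title": "عنوان"}]

merged = normalize(mac_rows, "tweets") + normalize(news_rows, "content")
```

With the `datasets` library itself, the equivalent steps are `rename_column()` / `remove_columns()` on each split before calling `concatenate_datasets()`.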
## Dataset Size
The Mixed Arabic Datasets (MAD) is a dynamic and evolving collection, with its size fluctuating as new datasets are added or removed. As MAD continuously expands, it becomes a living resource that adapts to the ever-changing landscape of Arabic language datasets.
**Dataset List**
MAD draws from a diverse array of sources, each contributing to its richness and breadth. While the collection is constantly evolving, the datasets that have joined or are poised to join MAD include:
- [✔] OpenAssistant/oasst1 (ar portion) : [Dataset Link](https://huggingface.co/datasets/OpenAssistant/oasst1)
- [✔] MBZUAI/Bactrian-X (ar portion) : [Dataset Link](https://huggingface.co/datasets/MBZUAI/Bactrian-X/viewer/ar/train)
- [✔] AbderrahmanSkiredj1/Darija-Wikipedia : [Dataset Link](https://huggingface.co/datasets/AbderrahmanSkiredj1/moroccan_darija_wikipedia_dataset)
- [✔] Arabic Wikipedia : [Dataset Link](https://huggingface.co/datasets/wikipedia)
- [✔] Moroccan Arabic Wikipedia : [Dataset Link](https://huggingface.co/datasets/wikipedia)
- [✔] Egyptian Arabic Wikipedia : [Dataset Link](https://huggingface.co/datasets/wikipedia)
- [✔] Darija Stories Dataset : [Dataset Link](https://huggingface.co/datasets/Ali-C137/Darija-Stories-Dataset)
- [✔] Hindawi Books Dataset : [Dataset Link](https://huggingface.co/datasets/Ali-C137/Hindawi-Books-dataset)
- [] uonlp/CulturaX - ar : [Dataset Link](https://huggingface.co/datasets/uonlp/CulturaX/viewer/ar/train)
- [✔] Pain/ArabicTweets : [Dataset Link](https://huggingface.co/datasets/pain/Arabic-Tweets)
- [] Abu-El-Khair Corpus : [Dataset Link](https://huggingface.co/datasets/arabic_billion_words)
- [✔] QuranExe : [Dataset Link](https://huggingface.co/datasets/mustapha/QuranExe)
- [✔] MNAD : [Dataset Link](https://huggingface.co/datasets/J-Mourad/MNAD.v1)
- [✔] IADD : [Dataset Link](https://raw.githubusercontent.com/JihadZa/IADD/main/IADD.json)
- [] OSIAN : [Dataset Link](https://wortschatz.uni-leipzig.de/en/download/Arabic#ara-tn_newscrawl-OSIAN_2018)
- [✔] MAC corpus : [Dataset Link](https://raw.githubusercontent.com/LeMGarouani/MAC/main/MAC%20corpus.csv)
- [✔] Goud.ma-Sum : [Dataset Link](https://huggingface.co/datasets/Goud/Goud-sum)
- [✔] SaudiNewsNet : [Dataset Link](https://huggingface.co/datasets/saudinewsnet)
- [✔] Miracl : [Dataset Link](https://huggingface.co/datasets/miracl/miracl)
- [✔] CardiffNLP/TweetSentimentMulti : [Dataset Link](https://huggingface.co/datasets/cardiffnlp/tweet_sentiment_multilingual)
- [] OSCAR-2301 : [Dataset Link](https://huggingface.co/datasets/oscar-corpus/OSCAR-2301/viewer/ar/train)
- [] mc4 : [Dataset Link](https://huggingface.co/datasets/mc4/viewer/ar/train)
- [✔] bigscience/xP3 : [Dataset Link](https://huggingface.co/datasets/bigscience/xP3/viewer/ar/train)
- [] Muennighoff/xP3x : [Dataset Link](https://huggingface.co/datasets/Muennighoff/xP3x)
- [] Ai_Society : [Dataset Link](https://huggingface.co/datasets/camel-ai/ai_society_translated)
## Potential Use Cases
The Mixed Arabic Datasets (MAD) holds the potential to catalyze a multitude of groundbreaking applications:
- **Linguistic Analysis:** Employ MAD to conduct in-depth linguistic studies, exploring dialectal variances, language evolution, and grammatical structures.
- **Topic Modeling:** Dive into diverse themes and subjects through the extensive collection, revealing insights into emerging trends and prevalent topics.
- **Sentiment Understanding:** Decode sentiments spanning Arabic dialects, revealing cultural nuances and emotional dynamics.
- **Sociocultural Research:** Embark on a sociolinguistic journey, unraveling the intricate connection between language, culture, and societal shifts.
## Dataset Access
MAD's access mechanism is unique: while it doesn't carry a general license itself, each constituent dataset within the corpus retains its individual license. By accessing the dataset details through the provided links in the "Dataset List" section above, users can understand the specific licensing terms for each dataset.
### Join Us on Discord
For discussions, contributions, and community interactions, join us on Discord! [](https://discord.gg/2NpJ9JGm)
### How to Contribute
Want to contribute to the Mixed Arabic Datasets project? Follow our comprehensive guide on Google Colab for step-by-step instructions: [Contribution Guide](https://colab.research.google.com/drive/1kOIRoicgCOV8TPvASAI_2uMY7rpXnqzJ?usp=sharing).
**Note**: If you'd like to test a contribution before submitting it, feel free to do so on the [MAD Test Dataset](https://huggingface.co/datasets/M-A-D/Mixed-Arabic-Dataset-test).
## Citation
```
@dataset{
title = {Mixed Arabic Datasets (MAD)},
author = {MAD Community},
howpublished = {Dataset},
url = {https://huggingface.co/datasets/M-A-D/Mixed-Arabic-Datasets-Repo},
year = {2023},
}
``` |
asyafiqe/orca_mini_v1_indonesia | 2023-08-27T10:54:58.000Z | [
"license:apache-2.0",
"region:us"
] | asyafiqe | null | null | null | 0 | 64 | ---
license: apache-2.0
---
This dataset is a modified version of psmathur's [orca_mini_v1](https://huggingface.co/datasets/psmathur/orca_mini_v1_dataset) dataset, translated into Bahasa Indonesia with Google Translate. |
TinyPixel/lima-m2 | 2023-09-26T03:41:46.000Z | [
"region:us"
] | TinyPixel | null | null | null | 1 | 64 | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 2957490
num_examples: 1030
download_size: 1697765
dataset_size: 2957490
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "lima-m2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
thu-coai/SafetyBench | 2023-09-14T05:25:39.000Z | [
"license:mit",
"arxiv:2309.07045",
"region:us"
] | thu-coai | null | null | null | 0 | 64 | ---
license: mit
configs:
- config_name: test
data_files:
- split: zh
path: "test_zh.json"
- split: en
path: "test_en.json"
- split: zh_subset
path: "test_zh_subset.json"
- config_name: dev
data_files:
- split: zh
path: "dev_zh.json"
- split: en
path: "dev_en.json"
---
SafetyBench is a comprehensive benchmark for evaluating the safety of LLMs. It comprises 11,435 diverse multiple-choice questions spanning 7 distinct categories of safety concerns. Notably, SafetyBench incorporates both Chinese and English data, facilitating evaluation in both languages.
Please visit our [GitHub](https://github.com/thu-coai/SafetyBench) and [website](https://llmbench.ai/safety) or check our [paper](https://arxiv.org/abs/2309.07045) for more details.
We release three different test sets: a Chinese test set (`test_zh.json`), an English test set (`test_en.json`) and a Chinese subset test set (`test_zh_subset.json`). The Chinese subset removes samples containing highly sensitive keywords, so that API-based Chinese LLMs with strict filtering mechanisms can also be tested.
We also release 5 examples for each of the 7 safety categories in both Chinese (`dev_zh.json`) and English (`dev_en.json`), to enable few-shot evaluation.
Labels on the test sets are not released and users are required to submit their results to automatically obtain test accuracy. [How to submit?](https://github.com/thu-coai/SafetyBench#how-to-submit)
## Load the Data
```python
from datasets import load_dataset
test_dataset = load_dataset("thu-coai/SafetyBench", "test")
dev_dataset = load_dataset("thu-coai/SafetyBench", "dev")
```
More details about loading the data and evaluating LLMs could be found at our [github repository](https://github.com/thu-coai/SafetyBench).
Please cite our paper if you use our dataset.
```
@article{zhang2023safetybench,
title={SafetyBench: Evaluating the Safety of Large Language Models with Multiple Choice Questions},
author={Zhexin Zhang and Leqi Lei and Lindong Wu and Rui Sun and Yongkang Huang and Chong Long and Xiao Liu and Xuanyu Lei and Jie Tang and Minlie Huang},
journal={arXiv preprint arXiv:2309.07045},
year={2023}
}
``` |
Loie/Auto-ACD | 2023-09-20T12:53:29.000Z | [
"region:us"
] | Loie | null | null | null | 6 | 64 |
# Auto-ACD
Auto-ACD is a large-scale, high-quality audio-language dataset, built on the prior of robust audio-visual correspondence in the existing video datasets VGGSound and AudioSet.
- **Homepage:** https://auto-acd.github.io/
- **Paper:**
- **Github:** https://github.com/LoieSun/Auto-ACD
## Analysis

<strong>Auto-ACD</strong> comprises over <strong>1.9M</strong> audio-text pairs.
As shown in the figure, the text descriptions in Auto-ACD contain <strong>long texts (18 words)</strong> and a <strong>diverse vocabulary (23K)</strong>, and provide information about the <strong>surrounding auditory environment</strong> (data points with <strong>shadow</strong>) in which sounds take place.
## Download
We provide a CSV file. For each data pair, we provide the YouTube URL and the generated caption. Each line in the CSV file has the following columns:
```
# YouTube ID, caption
```
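A minimal sketch of parsing such a two-column CSV with the standard library (the header names and sample row are assumptions for illustration; adapt them to the released file):

```python
import csv
import io

# A stand-in for the released CSV: one YouTube ID and one caption per line.
sample = io.StringIO(
    "youtube_id,caption\n"
    "dQw4w9WgXcQ,A person is singing over an upbeat synth melody.\n"
)

# Collect (id, caption) pairs from the file-like object.
pairs = [(row["youtube_id"], row["caption"]) for row in csv.DictReader(sample)]
```

Each ID can then be expanded into a full URL, e.g. `https://www.youtube.com/watch?v=<id>`, to fetch the corresponding audio.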
## Dataset Preview

|
Globaly/gbl-dataset | 2023-09-18T20:50:13.000Z | [
"region:us"
] | Globaly | null | null | null | 0 | 64 | Entry not found |
yejeekang/ko_legal_instruction | 2023-09-20T12:25:42.000Z | [
"license:afl-3.0",
"region:us"
] | yejeekang | null | null | null | 0 | 64 | ---
license: afl-3.0
---
|
OfekGlick/DiscoEval | 2023-10-03T21:26:08.000Z | [
"task_categories:text-classification",
"size_categories:100K<n<1M",
"language:en",
"license:bsd",
"Discourse",
"Discourse Evaluation",
"NLP",
"arxiv:1909.00142",
"region:us"
] | OfekGlick | This dataset contains all tasks of the DiscoEval benchmark for sentence representation learning. | @InProceedings{mchen-discoeval-19,
title = {Evaluation Benchmarks and Learning Criteria for Discourse-Aware Sentence Representations},
author = {Mingda Chen and Zewei Chu and Kevin Gimpel},
booktitle = {Proc. of {EMNLP}},
year={2019}
} | null | 0 | 64 | ---
license: bsd
task_categories:
- text-classification
language:
- en
tags:
- Discourse
- Discourse Evaluation
- NLP
pretty_name: DiscoEval
size_categories:
- 100K<n<1M
---
# DiscoEval Benchmark Datasets
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Dataset Sources](#dataset-sources)
- [Supported Tasks](#supported-tasks)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Additional Information](#additional-information)
- [Benchmark Creators](#benchmark-creators)
- [Citation Information](#citation-information)
- [Loading Data Examples](#loading-data-examples)
- [Loading Data for Sentence Positioning Task with the Arxiv data source](#loading-data-for-sentence-positioning-task-with-the-arxiv-data-source)
## Dataset Description
- **Repository:** [DiscoEval repository](https://github.com/ZeweiChu/DiscoEval)
- **Paper:** [Evaluation Benchmarks and Learning Criteria for Discourse-Aware Sentence Representations](https://arxiv.org/pdf/1909.00142)
### Dataset Summary
DiscoEval is an English-language benchmark that contains a test suite of 7
tasks to evaluate whether sentence representations include semantic information
relevant to discourse processing. The benchmark datasets offer a collection of
tasks designed to evaluate natural language understanding models in the context
of discourse analysis and coherence.
### Dataset Sources
- **Arxiv**: A repository of scientific papers and research articles.
- **Wikipedia**: An extensive online encyclopedia with articles on diverse topics.
- **Rocstory**: A dataset consisting of fictional stories.
- **Ubuntu IRC channel**: Conversational data extracted from the Ubuntu Internet Relay Chat (IRC) channel.
- **PeerRead**: A dataset of scientific papers frequently used for discourse-related tasks.
- **RST Discourse Treebank**: A dataset annotated with Rhetorical Structure Theory (RST) discourse relations.
- **Penn Discourse Treebank**: Another dataset with annotated discourse relations, facilitating the study of discourse structure.
### Supported Tasks
1. **Sentence Positioning**
- **Datasets Sources**: Arxiv, Wikipedia, Rocstory
   - **Description**: Determine the correct placement of a sentence within a given context of five sentences. To form the input when training classifiers, encode the five sentences into vector representations \\(x_i\\). The classifier input concatenates \\(x_1\\) with the differences \\(x_1 - x_i\\) for all \\(i > 1\\): \\([x_1, x_1 - x_2, x_1 - x_3, x_1 - x_4, x_1 - x_5]\\)
2. **Binary Sentence Ordering**
- **Datasets Sources**: Arxiv, Wikipedia, Rocstory
- **Description**: Determining whether two sentences are in the correct consecutive order, identifying the more coherent structure. To form the input when training classifiers, we concatenate the embeddings of both sentences with their element-wise difference: \\([x_1, x_2, x_1-x_2]\\)
3. **Discourse Coherence**
- **Datasets Sources**: Ubuntu IRC channel, Wikipedia
   - **Description**: Determine whether a sequence of six sentences forms a coherent paragraph. To form the input when training classifiers, encode all sentences into vector representations and concatenate them: \\([x_1, x_2, x_3, x_4, x_5, x_6]\\)
4. **Sentence Section Prediction**
- **Datasets Sources**: Constructed from PeerRead
   - **Description**: Determine the section or category to which a sentence belongs within a scientific paper, based on its content and context. To form the input when training classifiers, simply use the sentence embedding.
5. **Discourse Relations**
- **Datasets Sources**: RST Discourse Treebank, Penn Discourse Treebank
- **Description**: Identify and classify discourse relations between sentences or text segments, helping to reveal the structure and flow of discourse. To form the input when training classifiers, refer to the [original paper](https://arxiv.org/pdf/1909.00142) for instructions
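The feature constructions above can be sketched with plain Python lists standing in for sentence embeddings (the 3-dimensional vectors are made up for illustration):

```python
# Element-wise difference of two equally sized embedding vectors.
def diff(a, b):
    return [u - v for u, v in zip(a, b)]

# Toy 3-d "embeddings" for five sentences.
x = [[1.0, 0.0, 0.0],
     [0.5, 0.5, 0.0],
     [0.0, 1.0, 0.0],
     [0.0, 0.5, 0.5],
     [0.0, 0.0, 1.0]]

# Sentence Positioning input: [x1, x1-x2, x1-x3, x1-x4, x1-x5]
sp_input = x[0] + sum((diff(x[0], xi) for xi in x[1:]), [])

# Binary Sentence Ordering input: [x1, x2, x1-x2]
bso_input = x[0] + x[1] + diff(x[0], x[1])
```

In practice the encodings would come from the sentence encoder under evaluation; the concatenated vectors are then fed to a simple classifier.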
### Languages
The text in all datasets is in English. The associated BCP-47 code is `en`.
## Dataset Structure
### Data Instances
All tasks are classification tasks, and they differ by the number of sentences per example and the type of label.
An example from the Sentence Positioning task would look as follows:
```
{'sentence_1': 'Dan was overweight as well.',
'sentence_2': "Dan's parents were overweight.",
'sentence_3': 'The doctors told his parents it was unhealthy.',
'sentence_4': 'His parents understood and decided to make a change.',
'sentence_5': 'They got themselves and Dan on a diet.',
'label': '1'
}
```
The label is '1' since the first sentence should go at position number 1 (counting from zero).
An example from the Binary Sentence Ordering task would look as follows:
```
{'sentence_1': 'When she walked in, she felt awkward.',
'sentence_2': "Janet decided to go to her high school's party.",
'label': '0'
}
```
The label is '0' because this is not the correct order of the sentences. It should be sentence_2 and then sentence_1.
For more examples, you can refer to the [original paper](https://arxiv.org/pdf/1909.00142).
### Data Fields
In this benchmark, all data fields are strings, including the labels.
### Data Splits
The data is split into training, validation and test set for each of the tasks in the benchmark.
| Task and Dataset | Train | Valid | Test |
| ----- | ------ | ----- | ---- |
| Sentence Positioning: Arxiv| 10000 | 4000 | 4000|
| Sentence Positioning: Rocstory| 10000 | 4000 | 4000|
| Sentence Positioning: Wiki| 10000 | 4000 | 4000|
| Binary Sentence Ordering: Arxiv| 20000 | 8000 | 8000|
| Binary Sentence Ordering: Rocstory| 20000 | 8000 | 8000|
| Binary Sentence Ordering: Wiki| 20000 | 8000 | 8000|
| Discourse Coherence: Chat| 5816 | 1834 | 2418|
| Discourse Coherence: Wiki| 10000 | 4000 | 4000|
| Sentence Section Prediction | 10000 | 4000 | 4000 |
| Discourse Relation: Penn Discourse Tree Bank: Implicit | 8693 | 2972 | 3024 |
| Discourse Relation: Penn Discourse Tree Bank: Explicit | 9383 | 3613 | 3758 |
| Discourse Relation: RST Discourse Tree Bank | 17051 | 2045 | 2308 |
## Additional Information
### Benchmark Creators
This benchmark was created by Mingda Chen, Zewei Chu and Kevin Gimpel during work done at the University of Chicago and the Toyota Technological Institute at Chicago.
### Citation Information
```
@inproceedings{mchen-discoeval-19,
title = {Evaluation Benchmarks and Learning Criteria for Discourse-Aware Sentence Representations},
author = {Mingda Chen and Zewei Chu and Kevin Gimpel},
booktitle = {Proc. of {EMNLP}},
year={2019}
}
```
## Loading Data Examples
### Loading Data for Sentence Positioning Task with the Arxiv data source
```python
from datasets import load_dataset
# Load the Sentence Positioning dataset
dataset = load_dataset(path="OfekGlick/DiscoEval", name="SParxiv")
# Access the train, validation, and test splits
train_data = dataset["train"]
validation_data = dataset["validation"]
test_data = dataset["test"]
# Example usage: Print the first few training examples
for example in train_data.select(range(5)):
    print(example)
```
The other possible inputs for the `name` parameter are:
`SParxiv`, `SProcstory`, `SPwiki`, `SSPabs`, `PDTB-I`, `PDTB-E`, `BSOarxiv`, `BSOrocstory`, `BSOwiki`, `DCchat`, `DCwiki`, `RST` |
imdatta0/orca_best_unwrapped | 2023-09-25T12:40:23.000Z | [
"region:us"
] | imdatta0 | null | null | null | 1 | 64 | Entry not found |
tyzhu/squad_rare_v4_train_10_eval_10 | 2023-09-28T15:08:13.000Z | [
"region:us"
] | tyzhu | null | null | null | 0 | 64 | ---
dataset_info:
features:
- name: id
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: text
dtype: string
- name: answer_start
dtype: int32
- name: context_id
dtype: string
- name: inputs
dtype: string
- name: targets
dtype: string
splits:
- name: train
num_bytes: 200420
num_examples: 138
- name: validation
num_bytes: 49683
num_examples: 50
download_size: 64345
dataset_size: 250103
---
# Dataset Card for "squad_rare_v4_train_10_eval_10"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
fiveflow/psychology-dataset-v2 | 2023-10-10T05:03:48.000Z | [
"region:us"
] | fiveflow | null | null | null | 0 | 64 | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 716480
num_examples: 996
download_size: 189768
dataset_size: 716480
---
# Dataset Card for "psychology-dataset-v2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
europa_eac_tm | 2023-01-25T14:30:11.000Z | [
"task_categories:translation",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:translation",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:bg",
"language:cs",
"language:da",
"language:de",
"language:el",
"language:en",
"lang... | null | In October 2012, the European Union's (EU) Directorate General for Education and Culture ( DG EAC) released a translation memory (TM), i.e. a collection of sentences and their professionally produced translations, in twenty-six languages. This resource bears the name EAC Translation Memory, short EAC-TM.
EAC-TM covers up to 26 languages: 22 official languages of the EU (all except Irish) plus Icelandic, Croatian, Norwegian and Turkish. EAC-TM thus contains translations from English into the following 25 languages: Bulgarian, Czech, Danish, Dutch, Estonian, German, Greek, Finnish, French, Croatian, Hungarian, Icelandic, Italian, Latvian, Lithuanian, Maltese, Norwegian, Polish, Portuguese, Romanian, Slovak, Slovenian, Spanish, Swedish and Turkish.
All documents and sentences were originally written in English (source language is English) and then translated into the other languages. The texts were translated by staff of the National Agencies of the Lifelong Learning and Youth in Action programmes. They are typically professionals in the field of education/youth and EU programmes. They are thus not professional translators, but they are normally native speakers of the target language. | @Article{Steinberger2014,
author={Steinberger, Ralf
and Ebrahim, Mohamed
and Poulis, Alexandros
and Carrasco-Benitez, Manuel
and Schl{\"u}ter, Patrick
and Przybyszewski, Marek
and Gilbro, Signe},
title={An overview of the European Union's highly multilingual parallel corpora},
journal={Language Resources and Evaluation},
year={2014},
month={Dec},
day={01},
volume={48},
number={4},
pages={679-707},
issn={1574-0218},
doi={10.1007/s10579-014-9277-0},
url={https://doi.org/10.1007/s10579-014-9277-0}
} | null | 2 | 63 | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- bg
- cs
- da
- de
- el
- en
- es
- et
- fi
- fr
- hr
- hu
- is
- it
- lt
- lv
- mt
- nl
- 'no'
- pl
- pt
- ro
- sk
- sl
- sv
- tr
license:
- cc-by-4.0
multilinguality:
- translation
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- translation
task_ids: []
pretty_name: Europa Education and Culture Translation Memory (EAC-TM)
dataset_info:
- config_name: en2bg
features:
- name: translation
dtype:
translation:
languages:
- en
- bg
- name: sentence_type
dtype:
class_label:
names:
'0': form_data
'1': sentence_data
splits:
- name: train
num_bytes: 664252
num_examples: 4061
download_size: 3521416
dataset_size: 664252
- config_name: en2cs
features:
- name: translation
dtype:
translation:
languages:
- en
- cs
- name: sentence_type
dtype:
class_label:
names:
'0': form_data
'1': sentence_data
splits:
- name: train
num_bytes: 365983
num_examples: 3351
download_size: 3521416
dataset_size: 365983
- config_name: en2da
features:
- name: translation
dtype:
translation:
languages:
- en
- da
- name: sentence_type
dtype:
class_label:
names:
'0': form_data
'1': sentence_data
splits:
- name: train
num_bytes: 422079
num_examples: 3757
download_size: 3521416
dataset_size: 422079
- config_name: en2de
features:
- name: translation
dtype:
translation:
languages:
- en
- de
- name: sentence_type
dtype:
class_label:
names:
'0': form_data
'1': sentence_data
splits:
- name: train
num_bytes: 579566
num_examples: 4473
download_size: 3521416
dataset_size: 579566
- config_name: en2el
features:
- name: translation
dtype:
translation:
languages:
- en
- el
- name: sentence_type
dtype:
class_label:
names:
'0': form_data
'1': sentence_data
splits:
- name: train
num_bytes: 491346
num_examples: 2818
download_size: 3521416
dataset_size: 491346
- config_name: en2es
features:
- name: translation
dtype:
translation:
languages:
- en
- es
- name: sentence_type
dtype:
class_label:
names:
'0': form_data
'1': sentence_data
splits:
- name: train
num_bytes: 555218
num_examples: 4303
download_size: 3521416
dataset_size: 555218
- config_name: en2et
features:
- name: translation
dtype:
translation:
languages:
- en
- et
- name: sentence_type
dtype:
class_label:
names:
'0': form_data
'1': sentence_data
splits:
- name: train
num_bytes: 247284
num_examples: 2270
download_size: 3521416
dataset_size: 247284
- config_name: en2fi
features:
- name: translation
dtype:
translation:
languages:
- en
- fi
- name: sentence_type
dtype:
class_label:
names:
'0': form_data
'1': sentence_data
splits:
- name: train
num_bytes: 150560
num_examples: 1458
download_size: 3521416
dataset_size: 150560
- config_name: en2fr
features:
- name: translation
dtype:
translation:
languages:
- en
- fr
- name: sentence_type
dtype:
class_label:
names:
'0': form_data
'1': sentence_data
splits:
- name: train
num_bytes: 575579
num_examples: 4476
download_size: 3521416
dataset_size: 575579
- config_name: en2hu
features:
- name: translation
dtype:
translation:
languages:
- en
- hu
- name: sentence_type
dtype:
class_label:
names:
'0': form_data
'1': sentence_data
splits:
- name: train
num_bytes: 454802
num_examples: 3455
download_size: 3521416
dataset_size: 454802
- config_name: en2is
features:
- name: translation
dtype:
translation:
languages:
- en
- is
- name: sentence_type
dtype:
class_label:
names:
'0': form_data
'1': sentence_data
splits:
- name: train
num_bytes: 268194
num_examples: 2206
download_size: 3521416
dataset_size: 268194
- config_name: en2it
features:
- name: translation
dtype:
translation:
languages:
- en
- it
- name: sentence_type
dtype:
class_label:
names:
'0': form_data
'1': sentence_data
splits:
- name: train
num_bytes: 270634
num_examples: 2170
download_size: 3521416
dataset_size: 270634
- config_name: en2lt
features:
- name: translation
dtype:
translation:
languages:
- en
- lt
- name: sentence_type
dtype:
class_label:
names:
'0': form_data
'1': sentence_data
splits:
- name: train
num_bytes: 358844
num_examples: 3386
download_size: 3521416
dataset_size: 358844
- config_name: en2lv
features:
- name: translation
dtype:
translation:
languages:
- en
- lv
- name: sentence_type
dtype:
class_label:
names:
'0': form_data
'1': sentence_data
splits:
- name: train
num_bytes: 437487
num_examples: 3880
download_size: 3521416
dataset_size: 437487
- config_name: en2mt
features:
- name: translation
dtype:
translation:
languages:
- en
- mt
- name: sentence_type
dtype:
class_label:
names:
'0': form_data
'1': sentence_data
splits:
- name: train
num_bytes: 178675
num_examples: 1722
download_size: 3521416
dataset_size: 178675
- config_name: en2nb
features:
- name: translation
dtype:
translation:
languages:
- en
- nb
- name: sentence_type
dtype:
class_label:
names:
'0': form_data
'1': sentence_data
splits:
- name: train
num_bytes: 85833
num_examples: 642
download_size: 3521416
dataset_size: 85833
- config_name: en2nl
features:
- name: translation
dtype:
translation:
languages:
- en
- nl
- name: sentence_type
dtype:
class_label:
names:
'0': form_data
'1': sentence_data
splits:
- name: train
num_bytes: 188531
num_examples: 1805
download_size: 3521416
dataset_size: 188531
- config_name: en2pl
features:
- name: translation
dtype:
translation:
languages:
- en
- pl
- name: sentence_type
dtype:
class_label:
names:
'0': form_data
'1': sentence_data
splits:
- name: train
num_bytes: 515976
num_examples: 4027
download_size: 3521416
dataset_size: 515976
- config_name: en2pt
features:
- name: translation
dtype:
translation:
languages:
- en
- pt
- name: sentence_type
dtype:
class_label:
names:
'0': form_data
'1': sentence_data
splits:
- name: train
num_bytes: 422125
num_examples: 3501
download_size: 3521416
dataset_size: 422125
- config_name: en2ro
features:
- name: translation
dtype:
translation:
languages:
- en
- ro
- name: sentence_type
dtype:
class_label:
names:
'0': form_data
'1': sentence_data
splits:
- name: train
num_bytes: 345468
num_examples: 3159
download_size: 3521416
dataset_size: 345468
- config_name: en2sk
features:
- name: translation
dtype:
translation:
languages:
- en
- sk
- name: sentence_type
dtype:
class_label:
names:
'0': form_data
'1': sentence_data
splits:
- name: train
num_bytes: 306049
num_examples: 2972
download_size: 3521416
dataset_size: 306049
- config_name: en2sl
features:
- name: translation
dtype:
translation:
languages:
- en
- sl
- name: sentence_type
dtype:
class_label:
names:
'0': form_data
'1': sentence_data
splits:
- name: train
num_bytes: 577524
num_examples: 4644
download_size: 3521416
dataset_size: 577524
- config_name: en2sv
features:
- name: translation
dtype:
translation:
languages:
- en
- sv
- name: sentence_type
dtype:
class_label:
names:
'0': form_data
'1': sentence_data
splits:
- name: train
num_bytes: 304954
num_examples: 2909
download_size: 3521416
dataset_size: 304954
- config_name: en2tr
features:
- name: translation
dtype:
translation:
languages:
- en
- tr
- name: sentence_type
dtype:
class_label:
names:
'0': form_data
'1': sentence_data
splits:
- name: train
num_bytes: 328267
num_examples: 3198
download_size: 3521416
dataset_size: 328267
---
# Dataset Card for Europa Education and Culture Translation Memory (EAC-TM)
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://ec.europa.eu/jrc/en/language-technologies/eac-translation-memory](https://ec.europa.eu/jrc/en/language-technologies/eac-translation-memory)
- **Paper:** [https://link.springer.com/article/10.1007/s10579-014-9277-0](https://link.springer.com/article/10.1007/s10579-014-9277-0)
- **Point of Contact:** [ralf.steinberg@jrc.ec.europa.eu](mailto:ralf.steinberg@jrc.ec.europa.eu)
### Dataset Summary
This dataset is a corpus of manually produced translations from English into up to 25 languages, released in 2012 by the European Union's Directorate-General for Education and Culture (EAC).
To load a language pair that is not part of the configs, just specify the language codes as the language pair. For example, if you want to translate Czech to Greek:
`dataset = load_dataset("europa_eac_tm", language_pair=("cs", "el"))`
### Supported Tasks and Leaderboards
- `text2text-generation`: the dataset can be used to train a model for `machine-translation`. Machine translation models are usually evaluated using metrics such as [BLEU](https://huggingface.co/metrics/bleu), [ROUGE](https://huggingface.co/metrics/rouge) or [SacreBLEU](https://huggingface.co/metrics/sacrebleu). You can use the [mBART](https://huggingface.co/facebook/mbart-large-cc25) model for this task. This task has active leaderboards which can be found at [https://paperswithcode.com/task/machine-translation](https://paperswithcode.com/task/machine-translation), which usually rank models based on [BLEU score](https://huggingface.co/metrics/bleu).
### Languages
The sentences in this dataset were originally written in English (source language is English) and then translated into the other languages. The sentences are extracted from electronic forms: application and report forms for decentralised actions of EAC's Life-long Learning Programme (LLP) and the Youth in Action Programme. The contents in the electronic forms are technically split into two types: (a) the labels and contents of drop-down menus (referred to as 'Forms' Data) and (b) checkboxes (referred to as 'Reference Data').
The dataset contains translations of English sentences or parts of sentences into Bulgarian, Czech, Danish, Dutch, Estonian, German, Greek, Finnish, French, Croatian, Hungarian, Icelandic, Italian, Latvian, Lithuanian, Maltese, Norwegian, Polish, Portuguese, Romanian, Slovak, Slovenian, Spanish, Swedish and Turkish.
Language codes:
- `bg`
- `cs`
- `da`
- `de`
- `el`
- `en`
- `es`
- `et`
- `fi`
- `fr`
- `hr`
- `hu`
- `is`
- `it`
- `lt`
- `lv`
- `mt`
- `nl`
- `no`
- `pl`
- `pt`
- `ro`
- `sk`
- `sl`
- `sv`
- `tr`
## Dataset Structure
### Data Instances
```
{
"translation": {
"en":"Sentence to translate",
"<target_language>": "Phrase à traduire",
},
"sentence_type": 0
}
```
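A minimal sketch of consuming such an instance (the values and the `fr` target key are hypothetical and depend on the chosen config, e.g. `en2fr`; the `sentence_type` names follow the configs' class labels):

```python
# Hypothetical instance, mirroring the structure shown above.
example = {
    "translation": {"en": "Sentence to translate", "fr": "Phrase à traduire"},
    "sentence_type": 0,
}

# Class-label names as declared in the dataset configs.
SENTENCE_TYPES = {0: "form_data", 1: "sentence_data"}

source = example["translation"]["en"]
target = example["translation"]["fr"]
label = SENTENCE_TYPES[example["sentence_type"]]

print(f"{source} -> {target} [{label}]")
# Sentence to translate -> Phrase à traduire [form_data]
```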
### Data Fields
- `translation`: Mapping of sentences to translate (in English) and translated sentences.
- `sentence_type`: Integer value, 0 if the sentence is 'form data' (extracted from the labels and contents of drop-down menus of the source electronic forms) or 1 if the sentence is 'reference data' (extracted from the checkboxes of the electronic forms).
### Data Splits
The data is not split (only the `train` split is available).
## Dataset Creation
### Curation Rationale
The EAC-TM is relatively small compared to the JRC-Acquis and to DGT-TM, but it has the advantage that it focuses on a very different domain, namely that of education and culture. Also, it includes translation units for the languages Croatian (HR), Icelandic (IS), Norwegian (Bokmål, NB or Norwegian, NO) and Turkish (TR).
### Source Data
#### Initial Data Collection and Normalization
EAC-TM was built in the context of translating electronic forms: application and report forms for decentralised actions of EAC's Life-long Learning Programme (LLP) and the Youth in Action Programme. All documents and sentences were originally written in English (source language is English) and then translated into the other languages.
The contents in the electronic forms are technically split into two types: (a) the labels and contents of drop-down menus (referred to as 'Forms' Data) and (b) checkboxes (referred to as 'Reference Data'). Due to the different types of data, the two collections are kept separate. For example, labels can be 'Country', 'Please specify your home country' etc., while examples for reference data are 'Germany', 'Basic/general programmes', 'Education and Culture' etc.
The data consists of translations carried out between the end of the year 2008 and July 2012.
#### Who are the source language producers?
The texts were translated by staff of the National Agencies of the Lifelong Learning and Youth in Action programmes. They are typically professionals in the field of education/youth and EU programmes. They are thus not professional translators, but they are normally native speakers of the target language.
### Annotations
#### Annotation process
Sentences were manually translated by humans.
#### Who are the annotators?
The texts were translated by staff of the National Agencies of the Lifelong Learning and Youth in Action programmes. They are typically professionals in the field of education/youth and EU programmes. They are thus not professional translators, but they are normally native speakers of the target language.
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
© European Union, 1995-2020
The Commission's reuse policy is implemented by the [Commission Decision of 12 December 2011 on the reuse of Commission documents](https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:32011D0833).
Unless otherwise indicated (e.g. in individual copyright notices), content owned by the EU on this website is licensed under the [Creative Commons Attribution 4.0 International (CC BY 4.0) licence](http://creativecommons.org/licenses/by/4.0/). This means that reuse is allowed, provided appropriate credit is given and changes are indicated.
You may be required to clear additional rights if a specific content depicts identifiable private individuals or includes third-party works. To use or reproduce content that is not owned by the EU, you may need to seek permission directly from the rightholders. Software or documents covered by industrial property rights, such as patents, trade marks, registered designs, logos and names, are excluded from the Commission's reuse policy and are not licensed to you.
### Citation Information
```
@Article{Steinberger2014,
author={Steinberger, Ralf
and Ebrahim, Mohamed
and Poulis, Alexandros
and Carrasco-Benitez, Manuel
and Schl{\"u}ter, Patrick
and Przybyszewski, Marek
and Gilbro, Signe},
title={An overview of the European Union's highly multilingual parallel corpora},
journal={Language Resources and Evaluation},
year={2014},
month={Dec},
day={01},
volume={48},
number={4},
pages={679-707},
issn={1574-0218},
doi={10.1007/s10579-014-9277-0},
url={https://doi.org/10.1007/s10579-014-9277-0}
}
```
### Contributions
Thanks to [@SBrandeis](https://github.com/SBrandeis) for adding this dataset. |
SetFit/amazon_reviews_multi_es | 2022-03-23T15:43:09.000Z | [
"region:us"
] | SetFit | null | null | null | 0 | 63 | # Amazon Reviews Multi Spanish
This dataset is a port of the official ['amazon_reviews_multi' dataset](https://huggingface.co/datasets/amazon_reviews_multi) on the Hub. It contains only the Spanish-language version and has been reduced to just three columns (plus a fourth, "label_text") that are relevant to the SetFit task. |
hackathon-pln-es/MESD | 2022-03-25T18:15:07.000Z | [
"license:cc-by-4.0",
"region:us"
] | hackathon-pln-es | null | null | null | 6 | 63 | ---
license: cc-by-4.0
Duville, Mathilde Marie; Alonso-Valerdi, Luz Maria; Ibarra, David (2022), “Mexican Emotional Speech Database (MESD)”, Mendeley Data, V5, doi: 10.17632/cy34mh68j9.5
---
# Dataset Card for MESD
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-instances)
- [Data Splits](#data-instances)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** https://data.mendeley.com/datasets/cy34mh68j9/5
- **Repository:** [Needs More Information]
- **Paper:** [Needs More Information]
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [Needs More Information]
### Dataset Summary
Contains the data from the MESD database, processed for fine-tuning a Wav2Vec model in the hackathon organized by Somos NLP.
Reference example:
https://colab.research.google.com/github/huggingface/notebooks/blob/master/examples/audio_classification.ipynb
We accessed the MESD database to obtain the examples.
Brief description by the authors of the MESD database:
"The Mexican Emotional Speech Database (MESD) provides single-word utterances for the affective prosodies of anger, disgust, fear, happiness, neutral and sadness with Mexican cultural shaping. The MESD has been uttered by non-professional adult and child actors: 3 female, 2 male and 6 child voices are available. The words in the emotional and neutral utterances come from two corpora: (corpus A) composed of nouns and adjectives that are repeated across emotional prosodies and voice types (female, male, child), and (corpus B) consisting of words controlled for age of acquisition, frequency of use, familiarity, concreteness, valence, arousal and discrete-emotion dimensionality ratings.
The audio recordings were made in a professional studio with the following materials: (1) a Sennheiser e835 microphone with a flat frequency response (100 Hz to 10 kHz), (2) a Focusrite Scarlett 2i4 audio interface connected to the microphone with an XLR cable and to the computer, and (3) the digital audio workstation REAPER (Rapid Environment for Audio Production, Engineering, and Recording). The audio files were stored as 24-bit sequences with a sampling rate of 48000 Hz. The amplitude of the acoustic waveforms was rescaled between -1 and 1.
Two versions with reduced speaker naturalness were created from the human emotional expressions for the female voices of corpus B. Specifically, naturalness was progressively reduced from the human voices to level 1 and then to level 2. In particular, duration and mean pitch were edited on stressed syllables to reduce the difference between stressed and unstressed syllables. In the full utterances, the F2/F1 and F3/F1 ratios were reduced by editing the F2 and F3 frequencies. The intensity of harmonics 1 and 4 was also reduced."
### Supported Tasks and Leaderboards
[Needs More Information]
### Languages
Spanish
## Dataset Structure
### Data Instances
[Needs More Information]
### Data Fields
Origen: text indicating whether the instance comes from the original MESD dataset or from the 'Speaker-embedded naturalness-reduced female voices' cases, in which the authors synthetically generated new data by transforming some of the original audio instances.
Palabra: text of the word that was read.
Emoción: text of the emotion represented. Values: 'Enojo' (anger), 'Felicidad' (happiness), 'Miedo' (fear), 'Neutral', 'Disgusto' (disgust), 'Tristeza' (sadness).
InfoActor: text indicating whether the voice is a 'Niño' (child), 'Hombre' (man) or 'Mujer' (woman).
AudioArray: audio array, resampled to 16 kHz.
### Data Splits
Train: 891 examples, a mix of MESD and 'Speaker-embedded naturalness-reduced female voices' cases.
Validation: 130 examples, all MESD cases.
Test: 129 examples, all MESD cases.
## Dataset Creation
### Curation Rationale
Join the three data subsets and process them for the fine-tuning task, according to the input expected by the Wav2Vec model.
### Source Data
#### Initial Data Collection and Normalization
Access to the raw data:
https://data.mendeley.com/datasets/cy34mh68j9/5
Conversion to audio array and resampling to 16 kHz.
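Since 48000 Hz is an exact multiple of 16000 Hz, the resampling step can be sketched as an integer decimation by 3. This is an illustration only (the `decimate` helper is hypothetical); a real pipeline should use a proper resampler with anti-alias filtering, e.g. `librosa` or `torchaudio`:

```python
def decimate(samples, src_rate=48_000, dst_rate=16_000):
    """Crude integer-factor resampler: averages each group of samples.

    Illustration only; it lacks a proper anti-aliasing low-pass filter.
    """
    if src_rate % dst_rate != 0:
        raise ValueError("this sketch only handles integer decimation factors")
    factor = src_rate // dst_rate  # 3 for 48 kHz -> 16 kHz
    return [
        sum(samples[i:i + factor]) / factor
        for i in range(0, len(samples) - factor + 1, factor)
    ]

one_second_48k = [0.0] * 48_000
resampled = decimate(one_second_48k)
print(len(resampled))  # 16000
```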
#### Who are the source language producers?
Duville, Mathilde Marie; Alonso-Valerdi, Luz Maria; Ibarra, David (2022), “Mexican Emotional Speech Database (MESD)”, Mendeley Data, V5, doi: 10.17632/cy34mh68j9.5
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
Creative Commons, [CC-BY-4.0](https://creativecommons.org/licenses/by/4.0/)
### Citation Information
```
Duville, Mathilde Marie; Alonso-Valerdi, Luz Maria; Ibarra, David (2022), “Mexican Emotional Speech Database (MESD)”, Mendeley Data, V5, doi: 10.17632/cy34mh68j9.5
```
|
TomTBT/pmc_open_access_xml | 2023-09-17T08:43:36.000Z | [
"task_categories:text-classification",
"task_categories:summarization",
"task_categories:other",
"annotations_creators:no-annotation",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:1M<n<10M",
"source_datasets:original",
"language:en",
"license:cc0-1.0",
"l... | TomTBT | The PMC Open Access Subset includes more than 3.4 million journal articles and preprints that are made available under
license terms that allow reuse.
Not all articles in PMC are available for text mining and other reuse, many have copyright protection, however articles
in the PMC Open Access Subset are made available under Creative Commons or similar licenses that generally allow more
liberal redistribution and reuse than a traditional copyrighted work.
The PMC Open Access Subset is one part of the PMC Article Datasets
This version takes XML version as source, benefiting from the structured text
to split the articles into parts, naming the introduction, methods, results,
discussion and conclusion, and to link keywords in the text to external or internal
resources (articles, figures, tables, formulas, boxed-text, quotes, code, footnotes, chemicals, graphics, medias). | null | null | 0 | 63 | ---
pretty_name: XML-parsed PMC
task_categories:
- text-classification
- summarization
- other
annotations_creators:
- no-annotation
language_creators:
- expert-generated
language:
- en
size_categories:
- 1M<n<10M
source_datasets:
- original
license:
- cc0-1.0
- cc-by-4.0
- cc-by-sa-4.0
- cc-by-nc-4.0
- cc-by-nd-4.0
- cc-by-nc-nd-4.0
- cc-by-nc-sa-4.0
- unknown
- other
multilinguality:
- monolingual
task_ids: []
tags:
- research papers
- biology
- medicine
---
# Dataset Card for PMC Open Access XML
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-instances)
- [Data Splits](#data-instances)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** https://www.ncbi.nlm.nih.gov/pmc/tools/openftlist/
- **Repository:** [Needs More Information]
- **Paper:** [Needs More Information]
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [Needs More Information]
### Dataset Summary
The PMC Open Access Subset (here in its XML version) includes more than 3.4 million journal articles and preprints that are made available under
license terms that allow reuse.
Not all articles in PMC are available for text mining and other reuse, many have copyright protection, however articles
in the PMC Open Access Subset are made available under Creative Commons or similar licenses that generally allow more
liberal redistribution and reuse than a traditional copyrighted work.
The PMC Open Access Subset is one part of the PMC Article Datasets
This version takes the XML version as source, benefiting from the structured text
to split the articles into parts, naming the introduction, methods, results,
discussion and conclusion, and to link keywords in the text to external or internal
resources (articles, figures, tables, formulas, boxed-text, quotes, code, footnotes, chemicals, graphics, medias).
The dataset was initially created with relation-extraction tasks in mind, between the references in the text and the content of those references (e.g. for PMIDs, by joining the referred article's abstract from the PubMed dataset), but it aims to a larger extent to provide
a corpus of pre-annotated text for other tasks (e.g. figure caption to graphic, glossary definition detection, summarization).
### Supported Tasks and Leaderboards
[Needs More Information]
### Languages
[Needs More Information]
## Dataset Structure
### Data Fields
- "accession_id": The PMC ID of the article
- "pmid": The PubMed ID of the article
- "introduction": List of \<title\> and \<p\> elements in \<body\>, sharing their root with a \<title\> containing "introduction" or "background".
- "methods": Same as introduction with "method" keyword.
- "results": Same as introduction with "result" keyword.
- "discussion": Same as introduction with "discussion" keyword.
- "conclusion": Same as introduction with "conclusion" keyword.
- "front": List of \<title\> and \<p\> elements in \<front\> after everything else has been searched.
- "body": List of \<title\> and \<p\> elements in \<body\> after everything else has been searched.
- "back": List of \<title\> and \<p\> elements in \<back\> after everything else has been searched.
- "figure": List of \<fig\> elements of the article.
- "table": List of \<table-wrap\> and \<array\> elements of the article.
- "formula": List of \<disp-formula\> and \<inline-formula\> elements of the article.
- "box": List of \<boxed-text\> elements of the article.
- "code": List of \<code\> elements of the article.
- "quote": List of \<disp-quote\> and \<speech\> elements of the article.
- "chemical": List of \<chem-struct-wrap\> elements of the article.
- "supplementary": List of \<supplementary-material\> and \<inline-supplementary-material\> elements of the article.
- "footnote": List of \<fn-group\> and \<table-wrap-foot\> elements of the article.
- "graphic": List of \<graphic\> and \<inline-graphic\> elements of the article.
- "media": List of \<media\> and \<inline-media\> elements of the article.
- "glossary": Glossary if found in the XML
- "unknown_references": JSON of a dictionnary of each "tag":"text" for the reference that did not indicate a PMID
- "n_references": Total number of references and unknown references
- "license": The licence of the article
- "retracted": If the article was retracted or not
- "last_updated": Last update of the article
- "citation": Citation of the article
- "package_file": path to the folder containing the graphics and media files of the article (to append to the base URL: ftp.ncbi.nlm.nih.gov/pub/pmc/)
In text, the references are in the form ##KEYWORD##IDX_REF##OLD_TEXT##, with keywords (REF, UREF, FIG, TAB, FORMU, BOX, CODE, QUOTE, CHEM, SUPPL, FOOTN, GRAPH, MEDIA) referring, respectively, to "pubmed articles" (external), "unknown_references", "figure", "table", "formula", "box", "code", "quote", "chemical", "supplementary", "footnote", "graphic" and "media".
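Those markers can be located with a regular expression. A minimal sketch (the sample sentence is made up for illustration, and it assumes `OLD_TEXT` never contains a `#`):

```python
import re

# Hypothetical sentence with one internal (FIG) and one external (REF) marker.
text = ("As shown in ##FIG##0##Figure 1##, apoptosis increased "
        "##REF##12345678##(Smith et al., 2020)##.")

marker = re.compile(r"##([A-Z]+)##([^#]*)##([^#]*)##")

# Extract (keyword, index/id, original surface text) triples.
refs = marker.findall(text)
print(refs)

# Restore the original surface text everywhere.
clean = marker.sub(lambda m: m.group(3), text)
print(clean)
# As shown in Figure 1, apoptosis increased (Smith et al., 2020).
```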
### Data Splits
[Needs More Information]
## Dataset Creation
### Curation Rationale
Internal references (figures, tables, ...) were found using specific tags. Deciding on those tags was done by testing and by looking in the documentation
for the different kinds of possible usage.
Then, to split the article into introduction, methods, results, discussion and conclusion, specific keywords in the titles were used. Because there are no rules
in this XML for tagging those sections, searching for those keywords seemed like the most reliable approach. A drawback is that many sections do not have those
keywords in their titles even though they could be assimilated to one of these parts, and the huge diversity of titles makes such sections harder to label. This could be
addressed in future versions of this dataset.
### Source Data
#### Initial Data Collection and Normalization
Data was obtained from:
- ftp.ncbi.nlm.nih.gov/pub/pmc/oa_bulk/oa_noncomm/xml/
- ftp.ncbi.nlm.nih.gov/pub/pmc/oa_bulk/oa_comm/xml/
- ftp.ncbi.nlm.nih.gov/pub/pmc/oa_bulk/oa_other/xml/
Additional content for individual articles (graphics, media) can be obtained from:
- ftp.ncbi.nlm.nih.gov/pub/pmc + "package_file"
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
The article XML is similar across collections. This means that if a certain collection handles the structure in unusual ways, that whole collection might not be as
well annotated as the others. This concerns all the sections (intro, methods, ...), the external references (PMIDs) and the internal references (tables, figures, ...).
To illustrate this: references are sometimes given as a range (e.g. 10-15). In that case, only references 10 and 15 are linked. This could potentially be handled in a
future version.
### Other Known Limitations
[Needs More Information]
### Preprocessing recommendations
- Filter out empty contents.
- Remove unwanted references from the text, replacing them either with the original reference text (the `OLD_TEXT` part of the marker) or with the referenced content itself.
- Unescape HTML special characters: `import html; html.unescape(my_text)`
- Remove superfluous line breaks in the text.
- Remove XML tags (\<italic\>, \<sup\>, \<sub\>, ...), or replace them with special tokens.
- Join the items of the contents' lists.
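The recommendations above can be sketched as follows (an assumption-laden, regex-based cleanup, not a full XML parser; the sample input is made up):

```python
import html
import re

def clean_content(items):
    """Join a content list and apply the cleanup steps listed above."""
    text = " ".join(item for item in items if item.strip())  # join, drop empties
    text = html.unescape(text)                               # &amp; -> &, &#37; -> %
    text = re.sub(r"</?[a-zA-Z][^>]*>", "", text)            # strip tags like <italic>
    text = re.sub(r"\s+", " ", text)                         # superfluous line breaks
    return text.strip()

sample = ["Results", "", "The rate rose\nto 5&#37; (<italic>p</italic> &lt; 0.05)."]
print(clean_content(sample))
# Results The rate rose to 5% (p < 0.05).
```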
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
https://www.ncbi.nlm.nih.gov/pmc/about/copyright/
Within the PMC Open Access Subset, there are three groupings:
Commercial Use Allowed - CC0, CC BY, CC BY-SA, CC BY-ND licenses
Non-Commercial Use Only - CC BY-NC, CC BY-NC-SA, CC BY-NC-ND licenses; and
Other - no machine-readable Creative Commons license, no license, or a custom license.
### Citation Information
[Needs More Information] |
koudeheld/beatles_lyrics | 2022-06-02T13:12:13.000Z | [
"region:us"
] | koudeheld | null | null | null | 0 | 63 | Entry not found |
ScandEval/scala-nb | 2023-07-05T09:49:50.000Z | [
"task_categories:text-classification",
"size_categories:1K<n<10K",
"language:nb",
"language:no",
"license:cc-by-sa-4.0",
"region:us"
] | ScandEval | null | null | null | 0 | 63 | ---
license: cc-by-sa-4.0
task_categories:
- text-classification
language:
- nb
- 'no'
size_categories:
- 1K<n<10K
--- |
ratishsp/newshead | 2022-10-14T07:42:08.000Z | [
"license:mit",
"region:us"
] | ratishsp | null | null | null | 0 | 63 | ---
license: mit
---
|
qanastek/HoC | 2022-11-01T15:03:11.000Z | [
"task_categories:text-classification",
"task_ids:multi-class-classification",
"annotations_creators:machine-generated",
"annotations_creators:expert-generated",
"language_creators:found",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:en",
"region:us"
] | qanastek | The Hallmarks of Cancer Corpus for text classification
The Hallmarks of Cancer (HOC) Corpus consists of 1852 PubMed
publication abstracts manually annotated by experts according
to a taxonomy. The taxonomy consists of 37 classes in a
hierarchy. Zero or more class labels are assigned to each
sentence in the corpus. The labels are found under the "labels"
directory, while the tokenized text can be found under "text"
directory. The filenames are the corresponding PubMed IDs (PMID).
In addition to the HOC corpus, we also have the
[Cancer Hallmarks Analytics Tool](http://chat.lionproject.net/)
which classifies all of PubMed according to the HoC taxonomy. | @article{baker2015automatic,
title={Automatic semantic classification of scientific literature according to the hallmarks of cancer},
author={Baker, Simon and Silins, Ilona and Guo, Yufan and Ali, Imran and H{\"o}gberg, Johan and Stenius, Ulla and Korhonen, Anna},
journal={Bioinformatics},
volume={32},
number={3},
pages={432--440},
year={2015},
publisher={Oxford University Press}
}
@article{baker2017cancer,
title={Cancer Hallmarks Analytics Tool (CHAT): a text mining approach to organize and evaluate scientific literature on cancer},
author={Baker, Simon and Ali, Imran and Silins, Ilona and Pyysalo, Sampo and Guo, Yufan and H{\"o}gberg, Johan and Stenius, Ulla and Korhonen, Anna},
journal={Bioinformatics},
volume={33},
number={24},
pages={3973--3981},
year={2017},
publisher={Oxford University Press}
}
@article{baker2017cancer,
title={Cancer hallmark text classification using convolutional neural networks},
author={Baker, Simon and Korhonen, Anna-Leena and Pyysalo, Sampo},
year={2016}
}
@article{baker2017initializing,
title={Initializing neural networks for hierarchical multi-label text classification},
author={Baker, Simon and Korhonen, Anna},
journal={BioNLP 2017},
pages={307--315},
year={2017}
} | null | 1 | 63 | ---
annotations_creators:
- machine-generated
- expert-generated
language_creators:
- found
language:
- en
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- multi-class-classification
pretty_name: HoC
language_bcp47:
- en-US
---
# HoC : Hallmarks of Cancer Corpus
## Table of Contents
- [Dataset Card for [Needs More Information]](#dataset-card-for-needs-more-information)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
- [Who are the source language producers?](#who-are-the-source-language-producers)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [No Warranty](#no-warranty)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** https://s-baker.net/resource/hoc/
- **Repository:** https://github.com/sb895/Hallmarks-of-Cancer
- **Paper:** https://academic.oup.com/bioinformatics/article/32/3/432/1743783
- **Leaderboard:** https://paperswithcode.com/dataset/hoc-1
- **Point of Contact:** [Yanis Labrak](mailto:yanis.labrak@univ-avignon.fr)
### Dataset Summary
The Hallmarks of Cancer Corpus for text classification
The Hallmarks of Cancer (HoC) Corpus consists of 1,852 PubMed publication abstracts manually annotated by experts according to a taxonomy. The taxonomy consists of 37 classes in a hierarchy, and zero or more class labels are assigned to each sentence in the corpus. In the original distribution, the labels are found under the "labels" directory, while the tokenized text can be found under the "text" directory; the filenames are the corresponding PubMed IDs (PMID).
In addition to the HoC corpus, the authors also provide the [Cancer Hallmarks Analytics Tool](http://chat.lionproject.net/), which classifies all of PubMed according to the HoC taxonomy.
### Supported Tasks and Leaderboards
The dataset can be used to train a model for multi-label text classification, since each sentence can carry zero or more of the hallmark labels.
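For multi-label training, the integer label lists can be converted into fixed-size multi-hot vectors. A minimal sketch in plain Python (the 10-class size assumes the top-level hallmarks; adapt if working with the full 37-class hierarchy):

```python
NUM_HALLMARKS = 10  # top-level hallmark classes; the full taxonomy has 37


def multi_hot(label_ids, num_classes=NUM_HALLMARKS):
    """Convert a list of integer label ids into a multi-hot vector."""
    vec = [0] * num_classes
    for i in label_ids:
        vec[i] = 1
    return vec


# Labels of the example instance shown in "Data Instances"
print(multi_hot([9, 5, 0, 6]))  # [1, 0, 0, 0, 0, 1, 1, 0, 0, 1]
```

Vectors in this form can be fed directly to a multi-label loss such as binary cross-entropy over the 10 outputs.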
### Languages
The corpus consists of PubMed abstracts in English only:
- `English - United States (en-US)`
## Load the dataset with HuggingFace
```python
from datasets import load_dataset
dataset = load_dataset("qanastek/HoC")
validation = dataset["validation"]
print("First element of the validation set : ", validation[0])
```
## Dataset Structure
### Data Instances
```json
{
"document_id": "12634122_5",
"text": "Genes that were overexpressed in OM3 included oncogenes , cell cycle regulators , and those involved in signal transduction , whereas genes for DNA repair enzymes and inhibitors of transformation and metastasis were suppressed .",
"label": [9, 5, 0, 6]
}
```
### Data Fields
`document_id`: Unique identifier of the sentence, formed from the PubMed ID of the abstract and the sentence index.
`text`: Raw text of a sentence from a PubMed abstract.
`label`: List of integer label ids, each corresponding to one of the 10 hallmarks of cancer listed below; a sentence can carry zero or more labels.
| Hallmark | Search term |
|:-------------------------------------------:|:-------------------------------------------:|
| 1. Sustaining proliferative signaling (PS) | Proliferation Receptor Cancer |
| | 'Growth factor' Cancer |
| | 'Cell cycle' Cancer |
| 2. Evading growth suppressors (GS) | 'Cell cycle' Cancer |
| | 'Contact inhibition' |
| 3. Resisting cell death (CD) | Apoptosis Cancer |
| | Necrosis Cancer |
| | Autophagy Cancer |
| 4. Enabling replicative immortality (RI) | Senescence Cancer |
| | Immortalization Cancer |
| 5. Inducing angiogenesis (A) | Angiogenesis Cancer |
| | 'Angiogenic factor' |
| 6. Activating invasion & metastasis (IM) | Metastasis Invasion Cancer |
| 7. Genome instability & mutation (GI) | Mutation Cancer |
| | 'DNA repair' Cancer |
| | Adducts Cancer |
| | 'Strand breaks' Cancer |
| | 'DNA damage' Cancer |
| 8. Tumor-promoting inflammation (TPI) | Inflammation Cancer |
| | 'Oxidative stress' Cancer |
| | Inflammation 'Immune response' Cancer |
| 9. Deregulating cellular energetics (CE) | Glycolysis Cancer; 'Warburg effect' Cancer |
| 10. Avoiding immune destruction (ID) | 'Immune system' Cancer |
| | Immunosuppression Cancer |
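To recover hallmark abbreviations from the integer ids in `label`, a lookup in the order of the table above can be used. Note that this ordering is an assumption; when loading with the `datasets` library, the authoritative mapping should be checked via the dataset's `features` for the `label` column.

```python
# Assumed id-to-hallmark mapping, following the order of the table above.
# Verify against the dataset's label feature names after loading.
HALLMARKS = ["PS", "GS", "CD", "RI", "A", "IM", "GI", "TPI", "CE", "ID"]


def decode_labels(label_ids):
    """Map integer label ids to hallmark abbreviations."""
    return [HALLMARKS[i] for i in label_ids]


print(decode_labels([9, 5, 0, 6]))  # ['ID', 'IM', 'PS', 'GI']
```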
### Data Splits
Distribution of data for the 10 hallmarks:
| **Hallmark** | **No. abstracts** | **No. sentences** |
|:------------:|:-----------------:|:-----------------:|
| 1. PS | 462 | 993 |
| 2. GS | 242 | 468 |
| 3. CD | 430 | 883 |
| 4. RI | 115 | 295 |
| 5. A | 143 | 357 |
| 6. IM | 291 | 667 |
| 7. GI | 333 | 771 |
| 8. TPI | 194 | 437 |
| 9. CE | 105 | 213 |
| 10. ID | 108 | 226 |
## Dataset Creation
### Source Data
#### Who are the source language producers?
The corpus was produced and uploaded by Simon Baker, Ilona Silins, Yufan Guo, Imran Ali, Johan Högberg, Ulla Stenius, and Anna Korhonen.
### Personal and Sensitive Information
The corpus is free of personal or sensitive information.
## Additional Information
### Dataset Curators
__HoC__: Simon Baker, Ilona Silins, Yufan Guo, Imran Ali, Johan Högberg, Ulla Stenius, and Anna Korhonen
__Hugging Face__: Yanis Labrak (not affiliated with the original corpus)
### Licensing Information
```plain
GNU General Public License v3.0
```
```plain
Permissions
- Commercial use
- Modification
- Distribution
- Patent use
- Private use
Limitations
- Liability
- Warranty
Conditions
- License and copyright notice
- State changes
- Disclose source
- Same license
```
### Citation Information
We would very much appreciate it if you cite our publications:
[Automatic semantic classification of scientific literature according to the hallmarks of cancer](https://academic.oup.com/bioinformatics/article/32/3/432/1743783)
```bibtex
@article{baker2015automatic,
title={Automatic semantic classification of scientific literature according to the hallmarks of cancer},
author={Baker, Simon and Silins, Ilona and Guo, Yufan and Ali, Imran and H{\"o}gberg, Johan and Stenius, Ulla and Korhonen, Anna},
journal={Bioinformatics},
volume={32},
number={3},
pages={432--440},
year={2015},
publisher={Oxford University Press}
}
```
[Cancer Hallmarks Analytics Tool (CHAT): a text mining approach to organize and evaluate scientific literature on cancer](https://www.repository.cam.ac.uk/bitstream/handle/1810/265268/btx454.pdf?sequence=8&isAllowed=y)
```bibtex
@article{baker2017cancer,
title={Cancer Hallmarks Analytics Tool (CHAT): a text mining approach to organize and evaluate scientific literature on cancer},
author={Baker, Simon and Ali, Imran and Silins, Ilona and Pyysalo, Sampo and Guo, Yufan and H{\"o}gberg, Johan and Stenius, Ulla and Korhonen, Anna},
journal={Bioinformatics},
volume={33},
number={24},
pages={3973--3981},
year={2017},
publisher={Oxford University Press}
}
```
[Cancer hallmark text classification using convolutional neural networks](https://www.repository.cam.ac.uk/bitstream/handle/1810/270037/BIOTXTM2016.pdf?sequence=1&isAllowed=y)
```bibtex
@article{baker2016cancer,
title={Cancer hallmark text classification using convolutional neural networks},
author={Baker, Simon and Korhonen, Anna-Leena and Pyysalo, Sampo},
year={2016}
}
```
[Initializing neural networks for hierarchical multi-label text classification](http://www.aclweb.org/anthology/W17-2339)
```bibtex
@article{baker2017initializing,
title={Initializing neural networks for hierarchical multi-label text classification},
author={Baker, Simon and Korhonen, Anna},
journal={BioNLP 2017},
pages={307--315},
year={2017}
}
```