id stringlengths 2 115 | lastModified stringlengths 24 24 | tags list | author stringlengths 2 42 ⌀ | description stringlengths 0 6.67k ⌀ | citation stringlengths 0 10.7k ⌀ | likes int64 0 3.66k | downloads int64 0 8.89M | created timestamp[us] | card stringlengths 11 977k | card_len int64 11 977k | embeddings list |
|---|---|---|---|---|---|---|---|---|---|---|---|
huggingface/documentation-images | 2023-11-03T00:00:09.000Z | [
"license:cc-by-nc-sa-4.0",
"region:us"
] | huggingface | null | null | 19 | 105 | 2022-03-02T23:29:22 | ---
license: cc-by-nc-sa-4.0
---
### This dataset contains images used in the documentation of HuggingFace's libraries.
HF Team: Please make sure you optimize the assets before uploading them.
My favorite tool for this is https://tinypng.com/.
| 247 | [
[
-0.054443359375,
-0.03045654296875,
0.01393890380859375,
0.03131103515625,
-0.0205841064453125,
-0.0016489028930664062,
0.00913238525390625,
-0.024200439453125,
0.03973388671875,
0.05377197265625,
-0.0748291015625,
-0.038116455078125,
-0.03033447265625,
0.00... |
graphs-datasets/PROTEINS | 2023-02-07T16:39:11.000Z | [
"task_categories:graph-ml",
"license:unknown",
"region:us"
] | graphs-datasets | null | null | 0 | 105 | 2022-08-01T15:50:33 | ---
license: unknown
task_categories:
- graph-ml
---
# Dataset Card for PROTEINS
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [External Use](#external-use)
- [PyGeometric](#pygeometric)
- [Dataset Structure](#dataset-structure)
- [Data Properties](#data-properties)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **[Homepage](https://academic.oup.com/bioinformatics/article/21/suppl_1/i47/202991)**
- **[Repository](https://www.chrsmrrs.com/graphkerneldatasets/PROTEINS.zip)**
- **Paper:** Protein function prediction via graph kernels (see citation)
- **Leaderboard:** [Papers with code leaderboard](https://paperswithcode.com/sota/graph-classification-on-proteins)
### Dataset Summary
The `PROTEINS` dataset is a medium molecular property prediction dataset.
### Supported Tasks and Leaderboards
`PROTEINS` should be used for molecular property prediction (aiming to predict whether molecules are enzymes or not), a binary classification task. The score used is accuracy, using a 10-fold cross-validation.
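The 10-fold protocol can be sketched on indices alone; this is a hedged illustration (round-robin fold assignment, toy sizes), not the official evaluation script:

```python
def k_fold_indices(n, k=10):
    """Assign items round-robin to k folds; return (train, test) index lists."""
    folds = [list(range(i, n, k)) for i in range(k)]
    splits = []
    for i in range(k):
        test = folds[i]
        train = [j for fold in folds[:i] + folds[i + 1:] for j in fold]
        splits.append((train, test))
    return splits

# Toy size; the real PROTEINS dataset has 1113 graphs.
splits = k_fold_indices(20, k=10)
print(len(splits))  # 10
```

Accuracy would then be averaged over the ten held-out folds.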
## External Use
### PyGeometric
To load in PyGeometric, do the following:
```python
from datasets import load_dataset
import torch
from torch_geometric.data import Data
from torch_geometric.loader import DataLoader

dataset_hf = load_dataset("graphs-datasets/PROTEINS")
# Rows are dicts of plain lists; convert each field to a tensor for PyG.
dataset_pg_list = [
    Data(x=torch.tensor(g["node_feat"]), y=torch.tensor(g["y"]),
         edge_index=torch.tensor(g["edge_index"], dtype=torch.long),
         edge_attr=torch.tensor(g["edge_attr"]))
    for g in dataset_hf["train"]
]
dataset_pg = DataLoader(dataset_pg_list)
```
## Dataset Structure
### Data Properties
| property | value |
|---|---|
| scale | medium |
| #graphs | 1113 |
| average #nodes | 39.06 |
| average #edges | 72.82 |
### Data Fields
Each row of a given file is a graph, with:
- `node_feat` (list: #nodes x #node-features): nodes
- `edge_index` (list: 2 x #edges): pairs of nodes constituting edges
- `edge_attr` (list: #edges x #edge-features): for the aforementioned edges, contains their features
- `y` (list: 1 x #labels): the graph label(s) to predict (here a single binary label, 0 or 1)
- `num_nodes` (int): number of nodes of the graph
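A row matching this layout can be sanity-checked as follows (a sketch on a synthetic graph, not actual PROTEINS data):

```python
def check_graph_row(row):
    """Verify the field shapes described above are mutually consistent."""
    num_nodes = row["num_nodes"]
    assert len(row["node_feat"]) == num_nodes
    src, dst = row["edge_index"]              # 2 x #edges
    assert len(src) == len(dst) == len(row["edge_attr"])
    assert all(0 <= v < num_nodes for v in src + dst)
    assert row["y"][0] in (0, 1)              # binary enzyme label
    return True

# Synthetic 3-node path graph in the same layout
row = {
    "num_nodes": 3,
    "node_feat": [[1.0], [0.0], [1.0]],
    "edge_index": [[0, 1], [1, 2]],
    "edge_attr": [[1.0], [1.0]],
    "y": [1],
}
print(check_graph_row(row))  # True
```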
### Data Splits
This data comes from the PyGeometric version of the dataset provided by TUDataset.
This information can be retrieved using:
```python
from torch_geometric.datasets import TUDataset
dataset = TUDataset(root='', name = 'PROTEINS')
```
## Additional Information
### Licensing Information
The dataset has been released under an unknown license; please open an issue if you have information about it.
### Citation Information
```
@article{10.1093/bioinformatics/bti1007,
author = {Borgwardt, Karsten M. and Ong, Cheng Soon and Schönauer, Stefan and Vishwanathan, S. V. N. and Smola, Alex J. and Kriegel, Hans-Peter},
title = "{Protein function prediction via graph kernels}",
journal = {Bioinformatics},
volume = {21},
number = {suppl_1},
pages = {i47-i56},
year = {2005},
month = {06},
abstract = "{Motivation: Computational approaches to protein function prediction infer protein function by finding proteins with similar sequence, structure, surface clefts, chemical properties, amino acid motifs, interaction partners or phylogenetic profiles. We present a new approach that combines sequential, structural and chemical information into one graph model of proteins. We predict functional class membership of enzymes and non-enzymes using graph kernels and support vector machine classification on these protein graphs.Results: Our graph model, derivable from protein sequence and structure only, is competitive with vector models that require additional protein information, such as the size of surface pockets. If we include this extra information into our graph model, our classifier yields significantly higher accuracy levels than the vector models. Hyperkernels allow us to select and to optimally combine the most relevant node attributes in our protein graphs. We have laid the foundation for a protein function prediction system that integrates protein information from various sources efficiently and effectively.Availability: More information available via www.dbs.ifi.lmu.de/Mitarbeiter/borgwardt.html.Contact:borgwardt@dbs.ifi.lmu.de}",
issn = {1367-4803},
doi = {10.1093/bioinformatics/bti1007},
url = {https://doi.org/10.1093/bioinformatics/bti1007},
eprint = {https://academic.oup.com/bioinformatics/article-pdf/21/suppl\_1/i47/524364/bti1007.pdf},
}
```
### Contributions
Thanks to [@clefourrier](https://github.com/clefourrier) for adding this dataset. | 4,863 | [
[
-0.016387939453125,
-0.04156494140625,
0.01470184326171875,
-0.0084686279296875,
-0.007167816162109375,
-0.0091094970703125,
0.01331329345703125,
-0.037353515625,
0.045440673828125,
0.023681640625,
-0.0269622802734375,
-0.04833984375,
-0.06256103515625,
0.01... |
TUKE-DeutscheTelekom/skquad | 2022-12-05T14:10:32.000Z | [
"task_categories:question-answering",
"task_categories:text-retrieval",
"task_ids:open-domain-qa",
"task_ids:extractive-qa",
"task_ids:document-retrieval",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"language_creators:found",
"multilinguality:monolingual",
"size_categor... | TUKE-DeutscheTelekom | Slovak Question Answering Dataset | TBD | 3 | 105 | 2022-12-02T11:28:37 | ---
annotations_creators:
- crowdsourced
language:
- sk
language_creators:
- crowdsourced
- found
license:
- cc-by-sa-4.0
- cc-by-4.0
multilinguality:
- monolingual
paperswithcode_id: squad
pretty_name: skquad
size_categories:
- 10K<n<100K
source_datasets:
- original
tags:
- wikipedia
task_categories:
- question-answering
- text-retrieval
task_ids:
- open-domain-qa
- extractive-qa
- document-retrieval
train-eval-index:
- col_mapping:
answers:
answer_start: answer_start
text: text
context: context
question: question
config: squad_v2
metrics:
- name: SQuAD v2
type: squad_v2
splits:
eval_split: validation
train_split: train
task: question-answering
task_id: extractive_question_answering
---
# Dataset Card for SK-QuAD
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
SK-QuAD is the first question-answering dataset for the Slovak language.
It is manually annotated, so it has no distortion caused by
machine translation. The dataset is thematically diverse and does not
overlap with SQuAD, so it brings new knowledge.
It passed a second round of annotation: each question
and answer was seen by at least two annotators.
### Supported Tasks and Leaderboards
- Question answering
- Document retrieval
### Languages
- Slovak
## Dataset Structure
#### squad_v2
- **Size of downloaded dataset files:** 44.34 MB
- **Size of the generated dataset:** 122.57 MB
- **Total amount of disk used:** 166.91 MB
An example of 'validation' looks as follows.
```
This example was too long and was cropped:
{
"answers": {
"answer_start": [94, 87, 94, 94],
"text": ["10th and 11th centuries", "in the 10th and 11th centuries", "10th and 11th centuries", "10th and 11th centuries"]
},
"context": "\"The Normans (Norman: Nourmands; French: Normands; Latin: Normanni) were the people who in the 10th and 11th centuries gave thei...",
"id": "56ddde6b9a695914005b9629",
"question": "When were the Normans in Normandy?",
"title": "Normans"
}
```
### Data Fields
The data fields are the same among all splits.
#### squad_v2
- `id`: a `string` feature.
- `title`: a `string` feature.
- `context`: a `string` feature.
- `question`: a `string` feature.
- `answers`: a dictionary feature containing:
- `text`: a `string` feature.
- `answer_start`: a `int32` feature.
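Since `answer_start` is a character offset into `context`, each answer can be checked against its span (a sketch on a made-up example, not actual SK-QuAD data):

```python
def answers_match_context(example):
    """Check that every answer text appears at its recorded offset."""
    starts = example["answers"]["answer_start"]
    texts = example["answers"]["text"]
    return all(example["context"][s:s + len(t)] == t
               for s, t in zip(starts, texts))

example = {
    "context": "Košice is a city in eastern Slovakia.",
    "question": "Where is Košice?",
    "answers": {"answer_start": [20], "text": ["eastern Slovakia"]},
}
print(answers_match_context(example))  # True
```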
### Data Splits
| | Train | Dev | Translated |
| ------------- | -----: | -----: | -------: |
| Documents | 8,377 | 940 | 442 |
| Paragraphs | 22,062 | 2,568 | 18,931 |
| Questions | 81,582 | 9,583 | 120,239 |
| Answers | 65,839 | 7,822 | 79,978 |
| Unanswerable | 15,877 | 1,784 | 40,261 |
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
- Deutsche Telekom Systems Solutions Slovakia
- Technical University of Košice
### Licensing Information
Attribution-ShareAlike 4.0 International (CC BY-SA 4.0)
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset.
| 4,907 | [
[
-0.04150390625,
-0.047149658203125,
0.01690673828125,
0.0174407958984375,
-0.00861358642578125,
0.0237884521484375,
-0.02008056640625,
-0.025787353515625,
0.04510498046875,
0.034942626953125,
-0.07275390625,
-0.075439453125,
-0.036285400390625,
0.03096008300... |
RicardoRei/wmt-da-human-evaluation | 2023-02-17T10:41:18.000Z | [
"size_categories:1M<n<10M",
"language:bn",
"language:cs",
"language:de",
"language:en",
"language:et",
"language:fi",
"language:fr",
"language:gu",
"language:ha",
"language:hi",
"language:is",
"language:ja",
"language:kk",
"language:km",
"language:lt",
"language:lv",
"language:pl",... | RicardoRei | null | null | 0 | 105 | 2023-02-16T18:49:07 | ---
license: apache-2.0
size_categories:
- 1M<n<10M
language:
- bn
- cs
- de
- en
- et
- fi
- fr
- gu
- ha
- hi
- is
- ja
- kk
- km
- lt
- lv
- pl
- ps
- ru
- ta
- tr
- uk
- xh
- zh
- zu
tags:
- mt-evaluation
- WMT
- 41-lang-pairs
---
# Dataset Summary
This dataset contains all DA human annotations from previous WMT News Translation shared tasks.
The data is organised into 9 columns:
- lp: language pair
- src: input text
- mt: translation
- ref: reference translation
- score: z score
- raw: direct assessment
- annotators: number of annotators
- domain: domain of the input text (e.g. news)
- year: collection year
You can also find the original data for each year on the results page, https://www.statmt.org/wmt{YEAR}/results.html, e.g. for 2020: [https://www.statmt.org/wmt20/results.html](https://www.statmt.org/wmt20/results.html)
## Python usage:
```python
from datasets import load_dataset
dataset = load_dataset("RicardoRei/wmt-da-human-evaluation", split="train")
```
There is no standard train/test split for this dataset, but you can easily split it according to year, language pair, or domain, e.g.:
```python
# split by year
data = dataset.filter(lambda example: example["year"] == 2022)
# split by LP
data = dataset.filter(lambda example: example["lp"] == "en-de")
# split by domain
data = dataset.filter(lambda example: example["domain"] == "news")
```
Note that most of the data comes from the news domain.
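The balance can be checked by counting the `domain` column; a minimal sketch with toy rows standing in for the real dataset:

```python
from collections import Counter

# Toy rows in the column layout described above (not real WMT data).
rows = [
    {"lp": "en-de", "domain": "news", "year": 2020},
    {"lp": "en-de", "domain": "news", "year": 2022},
    {"lp": "cs-en", "domain": "conversation", "year": 2022},
]
domain_counts = Counter(r["domain"] for r in rows)
print(domain_counts)  # Counter({'news': 2, 'conversation': 1})
```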
## Citation Information
If you use this data please cite the WMT findings from previous years:
- [Findings of the 2017 Conference on Machine Translation (WMT17)](https://aclanthology.org/W17-4717.pdf)
- [Findings of the 2018 Conference on Machine Translation (WMT18)](https://aclanthology.org/W18-6401.pdf)
- [Findings of the 2019 Conference on Machine Translation (WMT19)](https://aclanthology.org/W19-5301.pdf)
- [Findings of the 2020 Conference on Machine Translation (WMT20)](https://aclanthology.org/2020.wmt-1.1.pdf)
- [Findings of the 2021 Conference on Machine Translation (WMT21)](https://aclanthology.org/2021.wmt-1.1.pdf)
- [Findings of the 2022 Conference on Machine Translation (WMT22)](https://aclanthology.org/2022.wmt-1.1.pdf) | 2,176 | [
[
-0.0411376953125,
-0.032440185546875,
0.0296783447265625,
0.004535675048828125,
-0.0254058837890625,
-0.0021839141845703125,
-0.0216522216796875,
-0.036956787109375,
0.0298919677734375,
0.040191650390625,
-0.05047607421875,
-0.0546875,
-0.050445556640625,
0.... |
SotirisLegkas/clickbait | 2023-06-23T11:30:01.000Z | [
"region:us"
] | SotirisLegkas | null | null | 0 | 105 | 2023-06-23T11:08:28 | Entry not found | 15 | [
[
-0.021392822265625,
-0.01494598388671875,
0.05718994140625,
0.028839111328125,
-0.0350341796875,
0.046539306640625,
0.052490234375,
0.00507354736328125,
0.051361083984375,
0.01702880859375,
-0.052093505859375,
-0.01494598388671875,
-0.06036376953125,
0.03790... |
qgyd2021/lip_service_4chan | 2023-10-27T01:51:52.000Z | [
"task_categories:question-answering",
"size_categories:10M<n<100M",
"language:zh",
"license:cc-by-4.0",
"art",
"region:us"
] | qgyd2021 | null | @dataset{lip_service_4chan,
author = {Xing Tian},
title = {lip_service_4chan},
month = sep,
year = 2023,
publisher = {Xing Tian},
version = {1.0},
} | 0 | 105 | 2023-09-07T08:50:39 | ---
task_categories:
- question-answering
language:
- zh
tags:
- art
pretty_name: lip_service
size_categories:
- 10M<n<100M
license: cc-by-4.0
---
## Lip Service
A mouth full of "fragrance": profanity-laced replies.
### Data Source
Based on the service of the website [吵架对线陪练员](https://aibang.run/chat/sb) ("argument sparring partner").
We use the questions from the [moss-003-sft-data](https://github.com/OpenLMLab/MOSS) dialogue data as prompts,
then call [吵架对线陪练员](https://aibang.run/chat/sb) to obtain the answers.
The moss-003-sft-data actually used comes from [YeungNLP/moss-003-sft-data](https://huggingface.co/datasets/YeungNLP/moss-003-sft-data)
| 478 | [
[
-0.01136016845703125,
-0.033905029296875,
-0.0017261505126953125,
0.048187255859375,
-0.0302886962890625,
-0.0074615478515625,
0.0245361328125,
-0.020263671875,
0.0108642578125,
0.03656005859375,
-0.05657958984375,
-0.0390625,
-0.025115966796875,
0.000807285... |
SneakyInsect/maestro-rollingsplit | 2023-10-04T13:21:21.000Z | [
"region:us"
] | SneakyInsect | null | null | 0 | 105 | 2023-10-02T11:02:50 | ---
dataset_info:
features:
- name: name
dtype: string
- name: start
sequence: float64
- name: duration
sequence: float64
- name: pitch
sequence: float64
- name: velocity
sequence: float64
splits:
- name: train
num_bytes: 745208510
num_examples: 373963
- name: validation
num_bytes: 84002977
num_examples: 42153
- name: test
num_bytes: 97390221
num_examples: 48820
download_size: 144295382
dataset_size: 926601708
---
# Dataset Card for "maestro-rollingsplit"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 660 | [
[
-0.048553466796875,
-0.0262603759765625,
-0.00600433349609375,
0.016632080078125,
-0.009185791015625,
0.00598907470703125,
0.01385498046875,
0.004062652587890625,
0.073974609375,
0.04071044921875,
-0.0654296875,
-0.049468994140625,
-0.0438232421875,
-0.03298... |
Arsive/toxicity_classification_jigsaw | 2023-10-03T12:51:28.000Z | [
"task_categories:text-classification",
"size_categories:1K<n<200K",
"language:en",
"license:apache-2.0",
"region:us"
] | Arsive | null | null | 0 | 105 | 2023-10-03T06:51:48 | ---
license: apache-2.0
task_categories:
- text-classification
language:
- en
size_categories:
- 1K<n<200K
---
### Dataset info
#### Training Dataset:
You are provided with a large number of Wikipedia comments which have been labeled by human raters for toxic behavior. The types of toxicity are:
- toxic
- severe_toxic
- obscene
- threat
- insult
- identity_hate
The original dataset can be found here: [jigsaw_toxic_classification](https://www.kaggle.com/competitions/jigsaw-toxic-comment-classification-challenge/data)
Our training dataset is a sampled version of the original dataset, <b>containing an equal number of samples for the clean and toxic classes.</b><br>
#### Dataset creation:
<code><pre>import pandas as pd
from datasets import Dataset

data = pd.read_csv('train.csv') # train.csv from the original dataset
column_names = ['toxic', 'severe_toxic', 'obscene', 'threat', 'insult', 'identity_hate']
train_toxic = data[data[column_names].sum(axis=1) > 0]
train_clean = data[data[column_names].sum(axis=1) == 0]
train_clean_sampled = train_clean.sample(n=16225, random_state=42)
dataframe = pd.concat([train_toxic, train_clean_sampled], axis=0)
dataframe = dataframe.sample(frac=1, random_state=42)
dataset = Dataset.from_pandas(dataframe)
# Split once so the train and validation sets cannot overlap
split = dataset.train_test_split(test_size=0.2, seed=42)
train_dataset = split['train']
val_dataset = split['test']</pre></code>
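The clean/toxic partition above reduces to one rule: a comment counts as toxic if any of the six label columns is set. A minimal sketch with toy rows (plain dicts standing in for `train.csv` rows):

```python
column_names = ['toxic', 'severe_toxic', 'obscene', 'threat', 'insult', 'identity_hate']

def is_toxic(row):
    """A row is toxic if any label column is non-zero."""
    return sum(row[c] for c in column_names) > 0

clean_row = dict.fromkeys(column_names, 0)
toxic_row = {**clean_row, 'insult': 1}
print(is_toxic(clean_row), is_toxic(toxic_row))  # False True
```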
### Caution:
This dataset contains comments that are toxic in nature. Kindly use appropriately.
### Citation
<pre>
@misc{jigsaw-toxic-comment-classification-challenge,
author = {cjadams, Jeffrey Sorensen, Julia Elliott, Lucas Dixon, Mark McDonald, nithum, Will Cukierski},
title = {Toxic Comment Classification Challenge},
publisher = {Kaggle},
year = {2017},
url = {https://kaggle.com/competitions/jigsaw-toxic-comment-classification-challenge}
}</pre>
| 1,872 | [
[
-0.00872039794921875,
-0.027587890625,
0.01763916015625,
0.0149078369140625,
-0.0192413330078125,
-0.00569915771484375,
-0.0045013427734375,
-0.015777587890625,
0.0229644775390625,
0.0175628662109375,
-0.033355712890625,
-0.045989990234375,
-0.045013427734375,
... |
tyzhu/squad_first_sent_v4_train_30_eval_10 | 2023-10-03T10:41:48.000Z | [
"region:us"
] | tyzhu | null | null | 0 | 105 | 2023-10-03T10:00:10 | ---
dataset_info:
features:
- name: id
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
struct:
- name: answer_start
sequence: int64
- name: text
sequence: string
- name: context_id
dtype: string
- name: inputs
dtype: string
- name: targets
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 111024
num_examples: 70
- name: validation
num_bytes: 11592
num_examples: 10
- name: eval_first_sent
num_bytes: 11592
num_examples: 10
download_size: 102146
dataset_size: 134208
---
# Dataset Card for "squad_first_sent_v4_train_30_eval_10"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 881 | [
[
-0.033294677734375,
-0.0092315673828125,
0.01393890380859375,
0.04266357421875,
-0.0107421875,
0.0214385986328125,
0.02447509765625,
0.007213592529296875,
0.04473876953125,
0.0231475830078125,
-0.0911865234375,
-0.04327392578125,
-0.036468505859375,
0.002153... |
flytech/llama-python-codes-30k | 2023-11-02T19:17:20.000Z | [
"task_categories:question-answering",
"task_categories:text-generation",
"task_categories:text2text-generation",
"size_categories:10M<n<100M",
"language:en",
"license:llama2",
"code",
"python",
"instruct",
"llama",
"flytech",
"region:us"
] | flytech | null | null | 9 | 105 | 2023-10-08T16:10:50 | ---
author: FlyTech
license: llama2
task_categories:
- question-answering
- text-generation
- text2text-generation
language:
- en
tags:
- code
- python
- instruct
- llama
- flytech
pretty_name: Llama1/2 Python Codes 30k Tokenized
size_categories:
- 10M<n<100M
---
### <span style="color:#3560B0; font-weight: bold;">Python Codes - 30k examples, Llama1&2 tokenized dataset</span>



### <span style="color:#3560B0; font-weight: bold;">Author</span>
**<span style="color:#266090;">FlyTech</span>**
### <span style="color:#3560B0; font-weight: bold;">Overview</span>
<span style="color:#266090">This dataset serves as a rich resource for various Natural Language Processing tasks such as:</span>
- <span style="color:#E91E63;">Question Answering</span>
- <span style="color:#8BC34A;">Text Generation</span>
- <span style="color:#FFC107;">Text-to-Text Generation</span>
<b><span style="color:#266090">It primarily focuses on instructional tasks in Python, tokenized specifically for the Llama architecture.
The dataset is a blend of GPT-4 generated content, custom codes, behavioral approaches and tasks extending beyond Python.</span></b>
<hr style="height:1px;border:none;color:#333;background-color:#136;" />
### <span style="color:#A45356; font-weight: bold;">IMPORTANT!</span>
<b><span style="color:#A8A8C9; background-color: #153055">
The llama-python-codes-30k dataset is not cleaned.
It has a very low number of unique input entries.<br>
For the fully cleaned version of the dataset, detokenized and with filtered-out input entries,
please refer to this link:
</span></b>
<a href="https://huggingface.co/datasets/flytech/python-codes-25k" style="color:#356090">flytech/python-codes-25k</a>
<hr style="height:1px;border:none;color:#333;background-color:#136;" />
### <span style="color:#3560B0; font-weight: bold;">Dataset Metrics</span>
**<span style="color:#3560B0;">Token Count (via LlamaTokenizer)</span>**
- **<span style="color:#4CAF50;">Maximum</span>: 508**
- **<span style="color:#2196F3;">Average</span>: 158.06**
- **<span style="color:#F44336;">Total</span>: 13,993,984**
**<span style="color:#006688;">Word Count</span>: 1,890,810**
**<span style="color:#006688;">Number of Examples</span>: 27,331**
### <b><span style="color:#3560B0; font-weight: bold;">Usage</span></b>
```python
from datasets import load_dataset
dataset = load_dataset('flytech/llama-python-codes-30k', split='train')
# One can map the dataset in any way, for the sake of example:
dataset = dataset.map(lambda example: {'text': example['instruction'] + ' ' + example['input'] + ' ' + example['output']})['text']
```
### <span style="color:#607D8B; font-weight: bold;">License</span>
This dataset is under the `llama2` license.
<hr style="height:1px;border:none;color:#333;background-color:#136;" />
### CONTRIBUTIONS
```python
# All contributions to the repository are welcome.
# Feel free to use the dataset for the Llama models,
# or visit:
```
<a href="https://huggingface.co/datasets/flytech/python-codes-25k" style="color:#356090">flytech/python-codes-25k</a>
```python
# To preprocess and tokenize the dataset as per your model requirements!
```
### <span style="color:#266090; font-weight: bold;">Tags</span>
- `code`
- `python`
- `instruct`
- `flytech` | 3,476 | [
[
-0.0111083984375,
-0.037841796875,
0.0045623779296875,
0.0194549560546875,
-0.0209503173828125,
0.0139007568359375,
-0.0091705322265625,
-0.02606201171875,
0.039459228515625,
0.006282806396484375,
-0.0506591796875,
-0.047027587890625,
-0.03631591796875,
0.01... |
OsamaBsher/AITA-Reddit-Dataset | 2023-11-01T22:19:37.000Z | [
"task_categories:text-generation",
"task_categories:text-classification",
"size_categories:100K<n<1M",
"arxiv:2310.18336",
"region:us"
] | OsamaBsher | null | null | 1 | 105 | 2023-10-20T17:31:34 | ---
task_categories:
- text-generation
- text-classification
size_categories:
- 100K<n<1M
---
# Dataset Card for AITA Reddit Posts and Comments
Posts from the AITA subreddit, with the two top-voted comments that share the post's verdict. Extracted using the Reddit PushShift data dumps (from 2013 to April 2023).
## Dataset Details
The dataset contains 270,709 entries, each of which contains the post title, text, verdict, comment1, comment2, and score (number of upvotes).
For more details see paper: https://arxiv.org/abs/2310.18336
### Dataset Sources
The Reddit PushShift data dumps are part of a data collection effort which crawls Reddit at regular intervals, to extract and keep all its data.
## Dataset Card Authors
@OsamaBsher and Ameer Sabri
## Dataset Card Contact
@OsamaBsher | 775 | [
[
-0.03900146484375,
-0.02996826171875,
0.029266357421875,
0.01275634765625,
-0.043121337890625,
0.0012140274047851562,
0.0156402587890625,
-0.0280609130859375,
0.03338623046875,
0.057159423828125,
-0.04339599609375,
-0.03179931640625,
-0.055084228515625,
0.02... |
result-kand2-sdxl-wuerst-karlo/58bc4cd4 | 2023-10-21T18:49:34.000Z | [
"region:us"
] | result-kand2-sdxl-wuerst-karlo | null | null | 0 | 105 | 2023-10-21T18:49:33 | ---
dataset_info:
features:
- name: result
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 168
num_examples: 10
download_size: 1342
dataset_size: 168
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "58bc4cd4"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 455 | [
[
-0.0426025390625,
-0.005428314208984375,
0.01239013671875,
0.017974853515625,
-0.0232696533203125,
-0.0001093149185180664,
0.02191162109375,
-0.01036834716796875,
0.051513671875,
0.02886962890625,
-0.05548095703125,
-0.06304931640625,
-0.0307464599609375,
0.... |
atmallen/qm_1.0e_eval | 2023-10-31T19:40:56.000Z | [
"region:us"
] | atmallen | null | null | 0 | 105 | 2023-10-27T05:41:56 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
dataset_info:
features:
- name: summand1
dtype: int64
- name: summand2
dtype: int64
- name: character
dtype: string
- name: sum
dtype: int64
- name: sum_words
dtype: string
- name: summand1_words
dtype: string
- name: summand2_words
dtype: string
- name: label
dtype:
class_label:
names:
'0': 'False'
'1': 'True'
- name: alice_label
dtype: int64
- name: bob_label
dtype: int64
- name: row_id
dtype: int64
splits:
- name: train
num_bytes: 268596304
num_examples: 1600000
- name: validation
num_bytes: 27402422
num_examples: 160000
- name: test
num_bytes: 27452756
num_examples: 160000
download_size: 41153034
dataset_size: 323451482
---
# Dataset Card for "qm_1.0e_eval"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 1,118 | [
[
-0.037445068359375,
-0.02423095703125,
0.0173187255859375,
0.0159912109375,
-0.0184478759765625,
0.00943756103515625,
0.0308685302734375,
0.0112762451171875,
0.050262451171875,
0.037109375,
-0.05853271484375,
-0.06488037109375,
-0.0288543701171875,
-0.021530... |
best2009 | 2023-01-25T14:27:17.000Z | [
"task_categories:token-classification",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"language:th",
"license:cc-by-nc-sa-3.0",
"word-tokenization",
"region:us"
] | null | `best2009` is a Thai word-tokenization dataset from encyclopedia, novels, news and articles by
[NECTEC](https://www.nectec.or.th/) (148,995/2,252 lines of train/test). It was created for
[BEST 2010: Word Tokenization Competition](https://thailang.nectec.or.th/archive/indexa290.html?q=node/10).
The test set answers are not provided publicly. | @inproceedings{kosawat2009best,
title={BEST 2009: Thai word segmentation software contest},
author={Kosawat, Krit and Boriboon, Monthika and Chootrakool, Patcharika and Chotimongkol, Ananlada and Klaithin, Supon and Kongyoung, Sarawoot and Kriengket, Kanyanut and Phaholphinyo, Sitthaa and Purodakananda, Sumonmas and Thanakulwarapas, Tipraporn and others},
booktitle={2009 Eighth International Symposium on Natural Language Processing},
pages={83--88},
year={2009},
organization={IEEE}
}
@inproceedings{boriboon2009best,
title={Best corpus development and analysis},
author={Boriboon, Monthika and Kriengket, Kanyanut and Chootrakool, Patcharika and Phaholphinyo, Sitthaa and Purodakananda, Sumonmas and Thanakulwarapas, Tipraporn and Kosawat, Krit},
booktitle={2009 International Conference on Asian Language Processing},
pages={322--327},
year={2009},
organization={IEEE}
} | 0 | 104 | 2022-03-02T23:29:22 | ---
annotations_creators:
- expert-generated
language_creators:
- found
language:
- th
license:
- cc-by-nc-sa-3.0
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
source_datasets:
- original
task_categories:
- token-classification
task_ids: []
pretty_name: best2009
tags:
- word-tokenization
dataset_info:
features:
- name: fname
dtype: string
- name: char
sequence: string
- name: char_type
sequence:
class_label:
names:
'0': b_e
'1': c
'2': d
'3': n
'4': o
'5': p
'6': q
'7': s
'8': s_e
'9': t
'10': v
'11': w
- name: is_beginning
sequence:
class_label:
names:
'0': neg
'1': pos
config_name: best2009
splits:
- name: train
num_bytes: 483129998
num_examples: 148995
- name: test
num_bytes: 10498726
num_examples: 2252
download_size: 13891260
dataset_size: 493628724
---
# Dataset Card for `best2009`
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://aiforthai.in.th/
- **Repository:** https://aiforthai.in.th/corpus.php
- **Paper:**
- **Leaderboard:**
- **Point of Contact:** https://aiforthai.in.th/
### Dataset Summary
`best2009` is a Thai word-tokenization dataset from encyclopedia, novels, news and articles by [NECTEC](https://www.nectec.or.th/) (148,995/2,252 lines of train/test). It was created for [BEST 2010: Word Tokenization Competition](https://thailang.nectec.or.th/archive/indexa290.html?q=node/10). The test set answers are not provided publicly.
### Supported Tasks and Leaderboards
word tokenization
### Languages
Thai
## Dataset Structure
### Data Instances
```
{'char': ['?', 'ภ', 'ู', 'ม', 'ิ', 'ป', 'ั', 'ญ', 'ญ', 'า', 'ช', 'า', 'ว', 'บ', '้', 'า', 'น', '\n'], 'char_type': [4, 1, 10, 1, 10, 1, 4, 1, 1, 10, 1, 10, 1, 1, 9, 10, 1, 4], 'fname': 'encyclopedia_00031.txt', 'is_beginning': [1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 1]}
{'char': ['ภ', 'ู', 'ม', 'ิ', 'ป', 'ั', 'ญ', 'ญ', 'า', 'ช', 'า', 'ว', 'บ', '้', 'า', 'น', ' ', 'ห', 'ม', 'า', 'ย', 'ถ', 'ึ', 'ง', ' ', 'ค', 'ว', 'า', 'ม', 'ร', 'ู', '้', 'ข', 'อ', 'ง', 'ช', 'า', 'ว', 'บ', '้', 'า', 'น', ' ', 'ซ', 'ึ', '่', 'ง', 'เ', 'ร', 'ี', 'ย', 'น', 'ร', 'ู', '้', 'ม', 'า', 'จ', 'า', 'ก', 'พ', '่', 'อ', 'แ', 'ม', '่', ' ', 'ป', 'ู', '่', 'ย', '่', 'า', 'ต', 'า', 'ย', 'า', 'ย', ' ', 'ญ', 'า', 'ต', 'ิ', 'พ', 'ี', '่', 'น', '้', 'อ', 'ง', ' ', 'ห', 'ร', 'ื', 'อ', 'ผ', 'ู', '้', 'ม', 'ี', 'ค', 'ว', 'า', 'ม', 'ร', 'ู', '้', 'ใ', 'น', 'ห', 'ม', 'ู', '่', 'บ', '้', 'า', 'น', 'ใ', 'น', 'ท', '้', 'อ', 'ง', 'ถ', 'ิ', '่', 'น', 'ต', '่', 'า', 'ง', 'ๆ', '\n'], 'char_type': [1, 10, 1, 10, 1, 4, 1, 1, 10, 1, 10, 1, 1, 9, 10, 1, 5, 3, 1, 10, 1, 1, 10, 1, 5, 1, 1, 10, 1, 1, 10, 9, 1, 1, 1, 1, 10, 1, 1, 9, 10, 1, 5, 1, 10, 9, 1, 11, 1, 10, 1, 1, 1, 10, 9, 1, 10, 1, 10, 1, 1, 9, 1, 11, 1, 9, 5, 1, 10, 9, 1, 9, 10, 1, 10, 1, 10, 1, 5, 1, 10, 1, 10, 1, 10, 9, 1, 9, 1, 1, 5, 3, 1, 10, 1, 3, 10, 9, 1, 10, 1, 1, 10, 1, 1, 10, 9, 11, 1, 3, 1, 10, 9, 1, 9, 10, 1, 11, 1, 1, 9, 1, 1, 1, 10, 9, 1, 1, 9, 10, 1, 7, 4], 'fname': 'encyclopedia_00031.txt', 'is_beginning': [1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 1, 1, 0, 0, 0, 1, 0, 0, 1, 1, 0, 0, 0, 1, 0, 0, 1, 0, 0, 1, 0, 0, 1, 0, 0, 0, 1, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 1, 0, 0, 1, 0, 1, 0, 0, 0, 1, 0, 0, 1, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1]}
```
### Data Fields
- `fname`: file name; also indicates whether the text comes from articles, news, encyclopedia entries or novels
- `char`: characters
- `char_type`: character types as adopted by [deepcut](https://github.com/rkcosmos/deepcut); see the character type features citation below
- `is_beginning`: whether the character is the beginning of a word
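The `is_beginning` labels are enough to recover the word segmentation from the character sequence. A minimal sketch (the helper name is ours, not part of the dataset loader):

```python
def words_from_chars(chars, is_beginning):
    """Group a character sequence into words using beginning-of-word labels."""
    words = []
    for ch, begin in zip(chars, is_beginning):
        if begin == 1 or not words:
            words.append(ch)      # start a new word
        else:
            words[-1] += ch       # extend the current word
    return words

# The first training instance shown above:
chars = ['?', 'ภ', 'ู', 'ม', 'ิ', 'ป', 'ั', 'ญ', 'ญ', 'า',
         'ช', 'า', 'ว', 'บ', '้', 'า', 'น', '\n']
is_beginning = [1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 1]
print(words_from_chars(chars, is_beginning))
# ['?', 'ภูมิปัญญา', 'ชาว', 'บ้าน', '\n']
```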
### Data Splits
| | train | test |
|-------------------------|------------|---------|
| # lines | 148,995 | 2,252 |
| avg words per line | 39.05 | NA |
| total words | 5,818,521 | NA |
| avg characters per line | 140.39 | 202.79 |
| total characters | 20,918,132 | 456,684 |
| # lines articles | 16,990 | NA |
| # lines encyclopedia | 50,631 | NA |
| # lines novels | 50,140 | NA |
| # lines news | 31,234 | NA |
## Dataset Creation
### Curation Rationale
The dataset was created for [BEST 2010: Word Tokenization Competition](https://thailang.nectec.or.th/archive/indexa290.html?q=node/10) by [NECTEC](https://www.nectec.or.th/).
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
Respective authors of the articles, news, encyclopedia and novels
### Annotations
#### Annotation process
Detailed annotation guidelines can be found in `BEST_Guideline_Release1.pdf` as part of the uncompressed files. The word tokenization standard used was [InterBEST2009](http://hltshare.fbk.eu/IWSLT2015/InterBEST2009Guidelines-2.pdf).
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
All data are curated from public sources. No personal and sensitive information is expected to be included.
## Considerations for Using the Data
### Social Impact of Dataset
- word tokenization dataset from articles, news, encyclopedia and novels
### Discussion of Biases
- texts are relatively formal ones from articles, news, encyclopedia and novels.
- word tokenization standard used was [InterBEST2009](http://hltshare.fbk.eu/IWSLT2015/InterBEST2009Guidelines-2.pdf).
### Other Known Limitations
- some tags unrelated to word tokenization (`<NE>` and `<AB>`) are cleaned out.
- no word boundaries are provided for the test set
## Additional Information
### Dataset Curators
[NECTEC](https://www.nectec.or.th/)
### Licensing Information
CC-BY-NC-SA 3.0
### Citation Information
Dataset:
```
@inproceedings{kosawat2009best,
title={BEST 2009: Thai word segmentation software contest},
author={Kosawat, Krit and Boriboon, Monthika and Chootrakool, Patcharika and Chotimongkol, Ananlada and Klaithin, Supon and Kongyoung, Sarawoot and Kriengket, Kanyanut and Phaholphinyo, Sitthaa and Purodakananda, Sumonmas and Thanakulwarapas, Tipraporn and others},
booktitle={2009 Eighth International Symposium on Natural Language Processing},
pages={83--88},
year={2009},
organization={IEEE}
}
@inproceedings{boriboon2009best,
title={Best corpus development and analysis},
author={Boriboon, Monthika and Kriengket, Kanyanut and Chootrakool, Patcharika and Phaholphinyo, Sitthaa and Purodakananda, Sumonmas and Thanakulwarapas, Tipraporn and Kosawat, Krit},
booktitle={2009 International Conference on Asian Language Processing},
pages={322--327},
year={2009},
organization={IEEE}
}
```
Character type features:
```
@inproceedings{haruechaiyasak2009tlex,
title={TLex: Thai lexeme analyser based on the conditional random fields},
author={Haruechaiyasak, Choochart and Kongyoung, Sarawoot},
booktitle={Proceedings of 8th International Symposium on Natural Language Processing},
year={2009}
}
```
### Contributions
Thanks to [@cstorm125](https://github.com/cstorm125) for adding this dataset. | 8,403 | [
[
-0.041778564453125,
-0.054229736328125,
0.01519012451171875,
0.02703857421875,
-0.035247802734375,
0.0085906982421875,
-0.00972747802734375,
-0.0211639404296875,
0.04766845703125,
0.019500732421875,
-0.0367431640625,
-0.05841064453125,
-0.0504150390625,
0.04... |
factckbr | 2023-01-25T14:30:15.000Z | [
"task_categories:text-classification",
"task_ids:fact-checking",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:pt",
"license:mit",
"region:us"
] | null | A dataset for studying fake news in Portuguese, presenting supposedly false news items along with their respective fact checks and classifications.
The data is collected from ClaimReview, a structured data schema used by fact-checking agencies to share their results with search engines, enabling data collection in real time.
The FACTCK.BR dataset contains 1,309 claims with their corresponding labels. | @inproceedings{10.1145/3323503.3361698,
author = {Moreno, Jo\\~{a}o and Bressan, Gra\\c{c}a},
title = {FACTCK.BR: A New Dataset to Study Fake News},
year = {2019},
isbn = {9781450367639},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
url = {https://doi.org/10.1145/3323503.3361698},
doi = {10.1145/3323503.3361698},
abstract = {Machine learning algorithms can be used to combat fake news propagation. For the news classification, labeled datasets are required, however, among the existing datasets, few separate verified false from skewed ones with a good variety of sources. This work presents FACTCK.BR, a new dataset to study Fake News in Portuguese, presenting a supposedly false News along with their respective fact check and classification. The data is collected from the ClaimReview, a structured data schema used by fact check agencies to share their results in search engines, enabling data collect in real time.},
booktitle = {Proceedings of the 25th Brazillian Symposium on Multimedia and the Web},
pages = {525–527},
numpages = {3},
keywords = {fake news, fact check, information extraction, dataset},
location = {Rio de Janeiro, Brazil},
series = {WebMedia '19}
} | 3 | 104 | 2022-03-02T23:29:22 | ---
annotations_creators:
- expert-generated
language_creators:
- found
language:
- pt
license:
- mit
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- fact-checking
pretty_name: FACTCK BR
dataset_info:
features:
- name: url
dtype: string
- name: author
dtype: string
- name: date
dtype: string
- name: claim
dtype: string
- name: review
dtype: string
- name: title
dtype: string
- name: rating
dtype: float32
- name: best_rating
dtype: float32
- name: label
dtype:
class_label:
names:
'0': falso
'1': distorcido
'2': impreciso
'3': exagerado
'4': insustentável
'5': verdadeiro
'6': outros
'7': subestimado
'8': impossível provar
'9': discutível
'10': sem contexto
'11': de olho
'12': verdadeiro, mas
'13': ainda é cedo para dizer
splits:
- name: train
num_bytes: 750646
num_examples: 1313
download_size: 721314
dataset_size: 750646
---
# Dataset Card for FACTCK BR
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://github.com/jghm-f/FACTCK.BR
- **Repository:** https://github.com/jghm-f/FACTCK.BR
- **Paper:** https://dl.acm.org/doi/10.1145/3323503.3361698
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
A dataset for studying fake news in Portuguese, presenting supposedly false news items along with their respective fact checks and classifications.
The data is collected from ClaimReview, a structured data schema used by fact-checking agencies to share their results with search engines, enabling data collection in real time.
The FACTCK.BR dataset contains 1,309 claims with their corresponding labels.
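The `label` field is stored as a zero-based integer whose names follow the `class_label` order in the metadata above. A small sketch of the mapping (when loading with the `datasets` library, `ClassLabel.int2str` performs this lookup for you; the helper below is ours):

```python
# Label names in the order given by the dataset's class_label metadata.
FACTCKBR_LABELS = [
    "falso", "distorcido", "impreciso", "exagerado", "insustentável",
    "verdadeiro", "outros", "subestimado", "impossível provar",
    "discutível", "sem contexto", "de olho", "verdadeiro, mas",
    "ainda é cedo para dizer",
]

def int2str(label_id: int) -> str:
    """Map a stored integer label to its human-readable name."""
    return FACTCKBR_LABELS[label_id]

print(int2str(0))  # falso
```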
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@hugoabonizio](https://github.com/hugoabonizio) for adding this dataset. | 4,058 | [
[
-0.0341796875,
-0.045440673828125,
0.01708984375,
0.0256195068359375,
-0.0258941650390625,
0.00708770751953125,
-0.0114288330078125,
-0.029022216796875,
0.046417236328125,
0.044036865234375,
-0.057403564453125,
-0.07061767578125,
-0.046417236328125,
0.001149... |
muchocine | 2023-01-25T14:40:54.000Z | [
"task_categories:text-classification",
"task_ids:sentiment-classification",
"annotations_creators:found",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:es",
"license:unknown",
"region:us"
] | null | The Muchocine reviews dataset contains 3,872 longform movie reviews in Spanish language,
each with a shorter summary review, and a rating on a 1-5 scale. | null | 4 | 104 | 2022-03-02T23:29:22 | ---
annotations_creators:
- found
language_creators:
- found
language:
- es
license:
- unknown
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- sentiment-classification
pretty_name: Muchocine
dataset_info:
features:
- name: review_body
dtype: string
- name: review_summary
dtype: string
- name: star_rating
dtype:
class_label:
names:
'0': '1'
'1': '2'
'2': '3'
'3': '4'
'4': '5'
splits:
- name: train
num_bytes: 11871095
num_examples: 3872
download_size: 55556703
dataset_size: 11871095
---
# Dataset Card for Muchocine
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** http://www.lsi.us.es/~fermin/index.php/Datasets
### Dataset Summary
The Muchocine reviews dataset contains 3,872 longform movie reviews in Spanish language,
each with a shorter summary review, and a rating on a 1-5 scale.
### Supported Tasks and Leaderboards
- `text-classification`: This dataset can be used for text classification, more precisely sentiment classification, where the task is to predict the `star_rating` for a `review_body` or a `review_summary`.
### Languages
Spanish.
## Dataset Structure
### Data Instances
An example from the train split:
```
{
'review_body': 'Zoom nos cuenta la historia de Jack Shepard, anteriormente conocido como el Capitán Zoom, Superhéroe que perdió sus poderes y que actualmente vive en el olvido. La llegada de una amenaza para la Tierra hará que la agencia del gobierno que se ocupa de estos temas acuda a él para que entrene a un grupo de jóvenes con poderes para combatir esta amenaza.Zoom es una comedia familiar, con todo lo que eso implica, es decir, guión flojo y previsible, bromas no salidas de tono, historia amorosa de por medio y un desenlace tópico. La gracia está en que los protagonistas son jóvenes con superpoderes, una producción cargada de efectos especiales y unos cuantos guiños frikis. La película además se pasa volando ya que dura poco mas de ochenta minutos y cabe destacar su prologo en forma de dibujos de comics explicando la historia de la cual partimos en la película.Tim Allen protagoniza la cinta al lado de un envejecido Chevy Chase, que hace de doctor encargado del proyecto, un papel bastante gracioso y ridículo, pero sin duda el mejor papel es el de Courteney Cox, en la piel de una científica amante de los comics y de lo más friki. Del grupito de los cuatro niños sin duda la mas graciosa es la niña pequeña con súper fuerza y la que provocara la mayor parte de los gags debido a su poder.Una comedia entretenida y poca cosa más para ver una tarde de domingo. ',
'review_summary': 'Una comedia entretenida y poca cosa más para ver una tarde de domingo ', 'star_rating': 2
}
```
### Data Fields
- `review_body` - longform review
- `review_summary` - shorter-form review
- `star_rating` - an integer star rating (1-5)
The original source also includes part-of-speech tagging for body and summary fields.
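Note that the loaded `star_rating` is a zero-based class index whose label names are the strings `'1'` to `'5'`, so the instance above with `'star_rating': 2` is a 3-star review. A sketch of the conversion (the helper name is ours):

```python
# Label names for the star_rating ClassLabel, in index order.
STAR_NAMES = ["1", "2", "3", "4", "5"]

def stars(class_index: int) -> int:
    """Convert a zero-based star_rating class index to a star count."""
    return int(STAR_NAMES[class_index])

print(stars(2))  # 3 (the example instance above is a 3-star review)
```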
### Data Splits
One split (train) with 3,872 reviews.
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
Data was collected from www.muchocine.net and uploaded by Dr. Fermín L. Cruz Mata
of La Universidad de Sevilla.
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
The text reviews and star ratings came directly from users, so no additional annotation was needed.
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
Dr. Fermín L. Cruz Mata.
### Licensing Information
[More Information Needed]
### Citation Information
See http://www.lsi.us.es/~fermin/index.php/Datasets
### Contributions
Thanks to [@mapmeld](https://github.com/mapmeld) for adding this dataset. | 5,258 | [
[
-0.045196533203125,
-0.038055419921875,
0.028289794921875,
0.01483917236328125,
-0.0233917236328125,
0.003597259521484375,
-0.0170745849609375,
-0.040283203125,
0.059234619140625,
0.041717529296875,
-0.05316162109375,
-0.0665283203125,
-0.04693603515625,
0.0... |
srwac | 2022-11-03T16:08:14.000Z | [
"task_categories:text-generation",
"task_categories:fill-mask",
"task_ids:language-modeling",
"task_ids:masked-language-modeling",
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:100M<n<1B",
"source_datasets:original",
"language:sr",... | null | The Serbian web corpus srWaC was built by crawling the .rs top-level domain in 2014. The corpus was near-deduplicated on paragraph level, normalised via diacritic restoration, morphosyntactically annotated and lemmatised. The corpus is shuffled by paragraphs. Each paragraph contains metadata on the URL, domain and language identification (Serbian vs. Croatian).
Version 1.0 of this corpus is described in http://www.aclweb.org/anthology/W14-0405. Version 1.1 contains newer and better linguistic annotations. | @misc{11356/1063,
title = {Serbian web corpus {srWaC} 1.1},
author = {Ljube{\v s}i{\'c}, Nikola and Klubi{\v c}ka, Filip},
url = {http://hdl.handle.net/11356/1063},
note = {Slovenian language resource repository {CLARIN}.{SI}},
copyright = {Creative Commons - Attribution-{ShareAlike} 4.0 International ({CC} {BY}-{SA} 4.0)},
year = {2016} } | 1 | 104 | 2022-03-02T23:29:22 | ---
annotations_creators:
- no-annotation
language_creators:
- found
language:
- sr
license:
- cc-by-sa-3.0
multilinguality:
- monolingual
size_categories:
- 100M<n<1B
source_datasets:
- original
task_categories:
- text-generation
- fill-mask
task_ids:
- language-modeling
- masked-language-modeling
paperswithcode_id: null
pretty_name: SrWac
dataset_info:
features:
- name: sentence
dtype: string
config_name: srwac
splits:
- name: train
num_bytes: 17470890484
num_examples: 688805174
download_size: 3767312759
dataset_size: 17470890484
---
# Dataset Card for SrWac
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** http://nlp.ffzg.hr/resources/corpora/srwac/
- **Repository:** https://www.clarin.si/repository/xmlui/handle/11356/1063
- **Paper:** http://nlp.ffzg.hr/data/publications/nljubesi/ljubesic14-bs.pdf
- **Leaderboard:**
- **Point of Contact:** [Nikola Ljubešič](mailto:nikola.ljubesic@ffzg.hr)
### Dataset Summary
The Serbian web corpus srWaC was built by crawling the .rs top-level domain in 2014. The corpus was near-deduplicated on paragraph level, normalised via diacritic restoration, morphosyntactically annotated and lemmatised. The corpus is shuffled by paragraphs. Each paragraph contains metadata on the URL, domain and language identification (Serbian vs. Croatian).
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
The dataset is monolingual, in the Serbian language.
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
Dataset is under the [CC-BY-SA 3.0](http://creativecommons.org/licenses/by-sa/3.0/) license.
### Citation Information
```
@misc{11356/1063,
title = {Serbian web corpus {srWaC} 1.1},
author = {Ljube{\v s}i{\'c}, Nikola and Klubi{\v c}ka, Filip},
url = {http://hdl.handle.net/11356/1063},
note = {Slovenian language resource repository {CLARIN}.{SI}},
copyright = {Creative Commons - Attribution-{ShareAlike} 4.0 International ({CC} {BY}-{SA} 4.0)},
year = {2016} }
```
### Contributions
Thanks to [@IvanZidov](https://github.com/IvanZidov) for adding this dataset. | 3,955 | [
[
-0.036163330078125,
-0.03704833984375,
0.0021953582763671875,
0.0196380615234375,
-0.0306549072265625,
0.00414276123046875,
-0.028900146484375,
-0.035003662109375,
0.042938232421875,
0.031890869140625,
-0.07867431640625,
-0.07672119140625,
-0.056121826171875,
... |
Cropinky/rap_lyrics_english | 2021-07-21T03:07:36.000Z | [
"region:us"
] | Cropinky | null | null | 3 | 104 | 2022-03-02T23:29:22 | ## Rap lyrics dataset
this is the repo containing the dataset we made for the Hugging Face community week. In order to download more songs, you need to request and get (it's very simple and fast) your Genius API key, which you put in the genius.py file<br/>
#TODO: turn it into an actual huggingface dataset | 304 | [
[
-0.046478271484375,
-0.00406646728515625,
0.0039520263671875,
0.043975830078125,
-0.0014123916625976562,
0.033477783203125,
-0.001399993896484375,
-0.0158233642578125,
0.07366943359375,
0.0440673828125,
-0.076904296875,
-0.03192138671875,
-0.043853759765625,
... |
cbrew475/hwu66 | 2022-02-22T18:18:36.000Z | [
"region:us"
] | cbrew475 | This project contains natural language data for human-robot interaction in a home domain, which
Xingkun Liu et al., from Heriot-Watt University, collected and annotated. It can be used for evaluating
NLU services/platforms. | @InProceedings{XLiu.etal:IWSDS2019,
author = {Xingkun Liu, Arash Eshghi, Pawel Swietojanski and Verena Rieser},
title = {Benchmarking Natural Language Understanding Services for building Conversational Agents},
booktitle = {Proceedings of the Tenth International Workshop on Spoken Dialogue Systems Technology (IWSDS)},
month = {April},
year = {2019},
address = {Ortigia, Siracusa (SR), Italy},
publisher = {Springer},
pages = {xxx--xxx},
url = {http://www.xx.xx/xx/}
} | 0 | 104 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
Artificio/WikiArt | 2023-01-18T17:13:54.000Z | [
"region:us"
] | Artificio | null | null | 4 | 104 | 2022-07-21T21:18:50 | ---
dataset_info:
features:
- name: title
dtype: string
- name: artist
dtype: string
- name: date
dtype: string
- name: genre
dtype: string
- name: style
dtype: string
- name: description
dtype: string
- name: filename
dtype: string
- name: image
dtype: image
- name: embeddings_pca512
sequence: float32
splits:
- name: train
num_bytes: 1659296285.75
num_examples: 103250
download_size: 1711766693
dataset_size: 1659296285.75
---
# Dataset Card for "WikiArt"
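Each record carries a 512-dimensional PCA-reduced image embedding in `embeddings_pca512`, which can be used for similarity search over artworks. A dependency-free cosine-similarity sketch (toy 4-d vectors stand in for the real 512-d ones; any vector library would do the same faster):

```python
import math

def cosine_similarity(u, v):
    """Cosine similarity between two equal-length embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Toy 4-d vectors standing in for the 512-d PCA embeddings:
a = [1.0, 0.0, 2.0, 0.0]
b = [2.0, 0.0, 4.0, 0.0]   # same direction as `a`
print(round(cosine_similarity(a, b), 6))  # 1.0
```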
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 663 | [
[
-0.053558349609375,
-0.01727294921875,
0.0113525390625,
0.004917144775390625,
-0.016937255859375,
-0.00337982177734375,
0.00502777099609375,
-0.0170135498046875,
0.062103271484375,
0.024169921875,
-0.0576171875,
-0.043853759765625,
-0.044097900390625,
-0.012... |
lmqg/qa_squadshifts_synthetic | 2023-01-15T14:25:15.000Z | [
"task_categories:question-answering",
"task_ids:extractive-qa",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:extended|wikipedia",
"language:en",
"license:cc-by-4.0",
"arxiv:2210.03992",
"region:us"
] | lmqg | null | @inproceedings{ushio-etal-2022-generative,
title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration",
author = "Ushio, Asahi and
Alva-Manchego, Fernando and
Camacho-Collados, Jose",
booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2022",
address = "Abu Dhabi, U.A.E.",
publisher = "Association for Computational Linguistics",
} | 0 | 104 | 2022-12-20T08:31:18 | ---
license: cc-by-4.0
pretty_name: Synthetic QA dataset on SQuADShifts.
language: en
multilinguality: monolingual
size_categories: 10K<n<100K
source_datasets:
- extended|wikipedia
task_categories:
- question-answering
task_ids:
- extractive-qa
---
# Dataset Card for "lmqg/qa_squadshifts_synthetic"
## Dataset Description
- **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation)
- **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992)
- **Point of Contact:** [Asahi Ushio](http://asahiushio.com/)
### Dataset Summary
This is a synthetic QA dataset generated with fine-tuned QG models over [`lmqg/qa_squadshifts`](https://huggingface.co/datasets/lmqg/qa_squadshifts), made for question-answering-based evaluation (QAE) of question generation models, as proposed by [Zhang and Bansal, 2019](https://aclanthology.org/D19-1253/).
The test split is the original validation set of [`lmqg/qa_squadshifts`](https://huggingface.co/datasets/lmqg/qa_squadshifts), on which the model should be evaluated.
### Supported Tasks and Leaderboards
* `question-answering`
### Languages
English (en)
## Dataset Structure
### Data Fields
The data fields are the same among all splits.
#### plain_text
- `id`: a `string` feature of id
- `title`: a `string` feature of title of the paragraph
- `context`: a `string` feature of paragraph
- `question`: a `string` feature of question
- `answers`: a `json` feature of answers
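The `answers` feature follows the SQuAD convention of parallel `text` and `answer_start` lists. A hypothetical record (values are ours, for illustration only) showing how the character offsets line up with the context:

```python
# Hypothetical SQuAD-style record; not an actual row from the dataset.
example = {
    "id": "example-0",
    "title": "Example",
    "context": "The model was evaluated on the SQuADShifts test sets.",
    "question": "What was the model evaluated on?",
    "answers": {"text": ["the SQuADShifts test sets"], "answer_start": [27]},
}

# Each answer span can be checked against its character offset:
for text, start in zip(example["answers"]["text"],
                       example["answers"]["answer_start"]):
    assert example["context"][start:start + len(text)] == text
```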
### Data Splits
TBA
## Citation Information
```
@inproceedings{ushio-etal-2022-generative,
title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration",
author = "Ushio, Asahi and
Alva-Manchego, Fernando and
Camacho-Collados, Jose",
booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2022",
address = "Abu Dhabi, U.A.E.",
publisher = "Association for Computational Linguistics",
}
``` | 2,035 | [
[
-0.040283203125,
-0.07269287109375,
0.0284576416015625,
0.004047393798828125,
-0.0191192626953125,
0.0171356201171875,
0.0020656585693359375,
-0.0207061767578125,
0.01416015625,
0.0300140380859375,
-0.08502197265625,
-0.04718017578125,
-0.00780487060546875,
... |
dreamerdeo/finqa | 2023-03-06T08:29:39.000Z | [
"region:us"
] | dreamerdeo | null | null | 1 | 104 | 2023-03-05T08:38:40 | dataset_info:
features:
- name: id
dtype: string
- name: post_text
sequence: string
- name: pre_text
sequence: string
- name: question
dtype: string
- name: answers
dtype: string
- name: table
sequence:
sequence: string
splits:
- name: train
num_bytes: 26984130
num_examples: 6251
- name: validation
num_bytes: 3757103
num_examples: 883
- name: test
num_bytes: 4838430
num_examples: 1147
download_size: 21240722
dataset_size: 35579663
| 515 | [
[
-0.05413818359375,
-0.046112060546875,
0.005168914794921875,
0.031463623046875,
-0.0382080078125,
-0.0093994140625,
0.007122039794921875,
0.00858306884765625,
0.037628173828125,
0.04241943359375,
-0.030792236328125,
-0.028289794921875,
-0.044342041015625,
0.... |
izumi-lab/llm-japanese-dataset-vanilla | 2023-09-29T14:40:26.000Z | [
"size_categories:1M<n<10M",
"language:ja",
"license:cc-by-sa-4.0",
"arxiv:2305.12720",
"arxiv:2309.03412",
"region:us"
] | izumi-lab | null | null | 7 | 104 | 2023-05-23T14:45:27 | ---
license: cc-by-sa-4.0
language:
- ja
size_categories:
- 1M<n<10M
---
# llm-japanese-dataset-vanilla
A Japanese chat dataset for building LLMs.
It is [izumi-lab/llm-japanese-dataset](https://huggingface.co/datasets/izumi-lab/llm-japanese-dataset) with the Japanese-English translation datasets and similar material removed.
It can mainly be used to tune Japanese LLMs on chat (instruction) response tasks, for example with LoRA.
Note: this dataset makes use of a variety of publicly available language resources. We would like to take this opportunity to thank everyone involved.
## Data details
For details of the data, see the following papers about [izumi-lab/llm-japanese-dataset](https://huggingface.co/datasets/izumi-lab/llm-japanese-dataset):
- Japanese: [https://jxiv.jst.go.jp/index.php/jxiv/preprint/view/383](https://jxiv.jst.go.jp/index.php/jxiv/preprint/view/383)
- English: [https://arxiv.org/abs/2305.12720](https://arxiv.org/abs/2305.12720)
- GitHub: [https://github.com/masanorihirano/llm-japanese-dataset](https://github.com/masanorihirano/llm-japanese-dataset)
- Latest information: [llm.msuzuki.me](https://llm.msuzuki.me).
For citation, if you wish, please use the following:
```
@preprint{Suzuki2023-llmvanilla,
title={{From Base to Conversational: Japanese Instruction Dataset and Tuning Large Language Models}},
autor = {Masahiro Suzuki and Masanori Hirano and Hiroki Sakaji},
doi={10.48550/arXiv.2309.03412},
archivePrefix={arXiv},
arxivId={2309.03412},
year={2023}
}
```
For joint research, data provision, other support, or any other inquiries, please contact izumi-llm@socsim.org.
## How to use
```python
from datasets import load_dataset
dataset = load_dataset("izumi-lab/llm-japanese-dataset-vanilla", revision="0.1.0")
print(dataset.num_rows)
# {'train': 1811964}
dataset = load_dataset("izumi-lab/llm-japanese-dataset-vanilla", revision="1.0.0")
print(dataset.num_rows)
# {'train': 2515626}
```
v0.1.0 contains 1,811,964 examples.
v1.0.0 contains 2,515,626 examples.
For more details, see: https://github.com/masanorihirano/llm-japanese-dataset/tree/vanilla
## LICENSE
CC-BY-SA 4.0
(For more details, see: LICENSE, NOTICE.md, NOTICE2.md)
## Note
To see the latest information, please go to [llm.msuzuki.me](https://llm.msuzuki.me).
| 1,925 | [
[
-0.0157470703125,
-0.06103515625,
0.032562255859375,
0.0171661376953125,
-0.0294036865234375,
-0.0044708251953125,
-0.028656005859375,
-0.0159759521484375,
0.0151824951171875,
0.033905029296875,
-0.0579833984375,
-0.07196044921875,
-0.0262908935546875,
0.016... |
neural-bridge/full_cqa_22k | 2023-10-02T20:14:12.000Z | [
"region:us"
] | neural-bridge | null | null | 0 | 104 | 2023-10-02T20:13:17 | ---
dataset_info:
features:
- name: clear_prompt
dtype: string
splits:
- name: train
num_bytes: 43183498.53262665
num_examples: 17433
- name: test
num_bytes: 10797732.467373349
num_examples: 4359
download_size: 32335855
dataset_size: 53981231.0
---
# Dataset Card for "full_cqa_22k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 449 | [
[
-0.0386962890625,
-0.01036834716796875,
0.0195465087890625,
0.0308074951171875,
-0.0297393798828125,
0.00843048095703125,
0.0168304443359375,
-0.0162811279296875,
0.05706787109375,
0.03900146484375,
-0.05316162109375,
-0.060760498046875,
-0.0458984375,
-0.01... |
ContextualAI/trivia_qa | 2023-10-07T00:42:28.000Z | [
"region:us"
] | ContextualAI | null | null | 1 | 104 | 2023-10-07T00:40:15 | ---
dataset_info:
features:
- name: target
dtype: string
- name: query
dtype: string
- name: gold_generation
sequence: string
splits:
- name: train
num_bytes: 29497317
num_examples: 78785
- name: dev
num_bytes: 3349643
num_examples: 8837
- name: test
num_bytes: 4316214
num_examples: 11313
download_size: 22579595
dataset_size: 37163174
---
# Dataset Card for "trivia_qa"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 560 | [
[
-0.038665771484375,
-0.0227508544921875,
0.028564453125,
0.01114654541015625,
-0.019439697265625,
0.017547607421875,
0.03179931640625,
-0.0094146728515625,
0.0677490234375,
0.0268707275390625,
-0.047119140625,
-0.05743408203125,
-0.0198974609375,
-0.00962829... |
casperhansen/longalpaca_1k_test | 2023-10-15T11:55:55.000Z | [
"license:cc-by-nc-4.0",
"region:us"
] | casperhansen | null | null | 0 | 104 | 2023-10-15T11:48:27 | ---
license: cc-by-nc-4.0
---
Dataset preprocessed from https://huggingface.co/datasets/Yukang/LongAlpaca-12k.
This contains 1000 samples that have a minimum length of 16k tokens and a maximum of 32k tokens.
## Script to reproduce
```python
from datasets import load_dataset
from transformers import AutoTokenizer
import pandas as pd
import pyarrow as pa
import pyarrow.parquet as pq
# Load the dataset and tokenizer
data = load_dataset("Yukang/LongAlpaca-12k")
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-v0.1", trust_remote_code=True)
def filter_function(batch):
    # Separate each round of conversation and concatenate them into single strings
    conversation_strs = [f'{instruction}\n\n{output}' for instruction, output in zip(batch['instruction'], batch['output'])]
    # Tokenize the strings without truncation
    tokens = tokenizer(conversation_strs, truncation=False, return_length=True)
    # Keep only examples whose token count is greater than 16,384 and at most 32,768
    return [length > 16384 and length <= 32768 for length in tokens['length']]

filtered_data = data.filter(filter_function, batched=True, batch_size=1000)
# Convert to Pandas DataFrame
df = pd.DataFrame(filtered_data['train'])
df = df.loc[:, ["input", "instruction", "output"]]
# Sample 1k rows
sampled_df = df.sample(n=1000, random_state=1)
# Convert the Pandas DataFrame to a PyArrow Table
table = pa.table(sampled_df)
# Save the table as a Parquet file
pq.write_table(table, 'data.parquet')
``` | 1,563 | [
[
-0.0263519287109375,
-0.05303955078125,
-0.0017833709716796875,
0.041259765625,
-0.024261474609375,
-0.03448486328125,
-0.0247802734375,
-0.01403045654296875,
0.03387451171875,
0.0440673828125,
-0.02978515625,
-0.041259765625,
-0.04095458984375,
0.0277404785... |
caner | 2023-03-16T14:47:48.000Z | [
"task_categories:token-classification",
"task_ids:named-entity-recognition",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"language:ar",
"license:unknown",
"region:us"
] | null | Classical Arabic Named Entity Recognition corpus as a new corpus of tagged data that can be useful for handling the issues in recognition of Arabic named entities. | @article{article,
author = {Salah, Ramzi and Zakaria, Lailatul},
year = {2018},
month = {12},
pages = {},
title = {BUILDING THE CLASSICAL ARABIC NAMED ENTITY RECOGNITION CORPUS (CANERCORPUS)},
volume = {96},
journal = {Journal of Theoretical and Applied Information Technology}
} | 1 | 103 | 2022-03-02T23:29:22 | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- ar
license:
- unknown
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
source_datasets:
- original
task_categories:
- token-classification
task_ids:
- named-entity-recognition
pretty_name: CANER
dataset_info:
features:
- name: token
dtype: string
- name: ner_tag
dtype:
class_label:
names:
'0': Allah
'1': Book
'2': Clan
'3': Crime
'4': Date
'5': Day
'6': Hell
'7': Loc
'8': Meas
'9': Mon
'10': Month
'11': NatOb
'12': Number
'13': O
'14': Org
'15': Para
'16': Pers
'17': Prophet
'18': Rlig
'19': Sect
'20': Time
splits:
- name: train
num_bytes: 5095721
num_examples: 258240
download_size: 17063406
dataset_size: 5095721
---
# Dataset Card for CANER
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:** [Classical-Arabic-Named-Entity-Recognition-Corpus](https://github.com/RamziSalah)
- **Paper:** [Researchgate](https://www.researchgate.net/publication/330075080_BUILDING_THE_CLASSICAL_ARABIC_NAMED_ENTITY_RECOGNITION_CORPUS_CANERCORPUS)
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
The Classical Arabic Named Entity Recognition corpus is a new corpus of tagged data that can be useful for handling the issues in recognition of Arabic named entities.
### Supported Tasks and Leaderboards
- Named Entity Recognition
### Languages
Classical Arabic
## Dataset Structure
### Data Instances
An example from the dataset:
```
{'ner_tag': 1, 'token': 'الجامع'}
```
Where 1 stands for "Book"
### Data Fields
- `id`: id of the sample
- `token`: the tokens of the example text
- `ner_tag`: the NER tags of each token
The NER tags correspond to this list:
```
"Allah",
"Book",
"Clan",
"Crime",
"Date",
"Day",
"Hell",
"Loc",
"Meas",
"Mon",
"Month",
"NatOb",
"Number",
"O",
"Org",
"Para",
"Pers",
"Prophet",
"Rlig",
"Sect",
"Time"
```
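Since `ner_tag` is stored as an integer index into this list, decoding a label is a plain lookup. A minimal sketch (the `CANER_TAGS` list is copied from the label list above; the helper name is our own, not part of any dataset tooling):

```python
# Label list copied verbatim from the dataset's class_label definition above.
CANER_TAGS = [
    "Allah", "Book", "Clan", "Crime", "Date", "Day", "Hell", "Loc",
    "Meas", "Mon", "Month", "NatOb", "Number", "O", "Org", "Para",
    "Pers", "Prophet", "Rlig", "Sect", "Time",
]

def decode_tag(ner_tag: int) -> str:
    """Map an integer ner_tag back to its string label."""
    return CANER_TAGS[ner_tag]

# The sample above, {'ner_tag': 1, 'token': 'الجامع'}, decodes to "Book":
print(decode_tag(1))  # Book
```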
### Data Splits
Training splits only
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
Ramzi Salah and Lailatul Qadri Zakaria
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
[More Information Needed]
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
@article{article,
author = {Salah, Ramzi and Zakaria, Lailatul},
year = {2018},
month = {12},
pages = {},
title = {BUILDING THE CLASSICAL ARABIC NAMED ENTITY RECOGNITION CORPUS (CANERCORPUS)},
volume = {96},
journal = {Journal of Theoretical and Applied Information Technology}
}
### Contributions
Thanks to [@KMFODA](https://github.com/KMFODA) for adding this dataset. | 4,417 | [
[
-0.047332763671875,
-0.036865234375,
-0.00934600830078125,
0.008270263671875,
-0.03448486328125,
0.0299072265625,
-0.0189666748046875,
-0.033935546875,
0.030303955078125,
0.0301513671875,
-0.0281219482421875,
-0.08837890625,
-0.05316162109375,
0.018112182617... |
clickbait_news_bg | 2023-01-25T14:28:03.000Z | [
"task_categories:text-classification",
"task_ids:fact-checking",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:bg",
"license:unknown",
"region:us"
] | null | Dataset with clickbait and fake news in Bulgarian. Introduced for the Hack the Fake News 2017. | @InProceedings{clickbait_news_bg,
title = {Dataset with clickbait and fake news in Bulgarian. Introduced for the Hack the Fake News 2017.},
authors={Data Science Society},
year={2017},
url={https://gitlab.com/datasciencesociety/case_fake_news/}
} | 0 | 103 | 2022-03-02T23:29:22 | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- bg
license:
- unknown
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- fact-checking
pretty_name: Clickbait/Fake News in Bulgarian
dataset_info:
features:
- name: fake_news_score
dtype:
class_label:
names:
'0': legitimate
'1': fake
- name: click_bait_score
dtype:
class_label:
names:
'0': normal
'1': clickbait
- name: content_title
dtype: string
- name: content_url
dtype: string
- name: content_published_time
dtype: string
- name: content
dtype: string
splits:
- name: train
num_bytes: 24480402
num_examples: 2815
- name: validation
num_bytes: 6752242
num_examples: 761
download_size: 8569575
dataset_size: 31232644
---
# Dataset Card for Clickbait/Fake News in Bulgarian
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [Data Science Society / Case Fake News](https://gitlab.com/datasciencesociety/case_fake_news)
- **Repository:** [Data Science Society / Case Fake News / Data](https://gitlab.com/datasciencesociety/case_fake_news/-/tree/master/data)
- **Paper:** [This paper uses the dataset.](https://www.acl-bg.org/proceedings/2017/RANLP%202017/pdf/RANLP045.pdf)
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This is a corpus of Bulgarian news over a fixed period of time, whose factuality had been questioned.
The news come from 377 different sources from various domains, including politics, interesting facts and tips&tricks.
The dataset was prepared for the Hack the
Fake News hackathon. It was provided by the
[Bulgarian Association of PR Agencies](http://www.bapra.bg/) and is
available in [Gitlab](https://gitlab.com/datasciencesociety/).
The corpus was automatically collected, and then annotated by students of journalism.
The training dataset contains 2,815 examples, of which 1,940 (i.e., 69%) are fake news
and 1,968 (i.e., 70%) are click-baits; there are 761 testing examples.
There is a 98% correlation between fake news and clickbaits.
One important aspect of the training dataset is that it contains many repetitions.
This should not be surprising as it attempts to represent a natural distribution of factual
vs. fake news on-line over a period of time. As publishers of fake news often have a group of
websites that feature the same deceiving content, we should expect some repetition.
In particular, the training dataset contains
434 unique articles with duplicates. These articles have three reposts each on average, with
the most reposted article appearing 45 times.
If we take into account the labels of the reposted articles, we can see that if an article
is reposted, it is more likely to be fake news.
The number of fake news articles that have a duplicate in the training dataset is 1,018,
whereas the number of articles with genuine content
that have a duplicate article in the training set is 322.
(The dataset description is from the following [paper](https://www.acl-bg.org/proceedings/2017/RANLP%202017/pdf/RANLP045.pdf).)
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
Bulgarian
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
Each entry in the dataset consists of the following elements:
* `fake_news_score` - a label indicating whether the article is fake or not
* `click_bait_score` - another label indicating whether it is a click-bait
* `content_title` - article heading
* `content_url` - URL of the original article
* `content_published_time` - date of publication
* `content` - article content
### Data Splits
The **training dataset** contains 2,815 examples, of which 1,940 (i.e., 69%) are fake news
and 1,968 (i.e., 70%) are click-baits.
The **validation dataset** contains 761 examples, used for testing.
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@tsvm](https://github.com/tsvm), [@lhoestq](https://github.com/lhoestq) for adding this dataset. | 5,951 | [
[
-0.0224151611328125,
-0.0552978515625,
0.01297760009765625,
0.0262451171875,
-0.0347900390625,
0.0111846923828125,
-0.0111083984375,
-0.01280975341796875,
0.037933349609375,
0.025390625,
-0.036376953125,
-0.062042236328125,
-0.04052734375,
0.0076065063476562... |
hkcancor | 2023-02-23T08:43:12.000Z | [
"task_categories:translation",
"task_categories:text-generation",
"task_categories:fill-mask",
"task_ids:dialogue-modeling",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:yue",
... | null | The Hong Kong Cantonese Corpus (HKCanCor) comprise transcribed conversations
recorded between March 1997 and August 1998. It contains recordings of
spontaneous speech (51 texts) and radio programmes (42 texts),
which involve 2 to 4 speakers, with 1 text of monologue.
In total, the corpus contains around 230,000 Chinese words.
The text is word-segmented, annotated with part-of-speech (POS) tags and
romanised Cantonese pronunciation.
Romanisation scheme - Linguistic Society of Hong Kong (LSHK)
POS scheme - Peita-Fujitsu-Renmin Ribao (PRF) corpus (Duan et al., 2000),
with extended tags for Cantonese-specific phenomena added by
Luke and Wang (see original paper for details). | @article{luke2015hong,
author={Luke, Kang-Kwong and Wong, May LY},
title={The Hong Kong Cantonese corpus: design and uses},
journal={Journal of Chinese Linguistics},
year={2015},
pages={309-330},
month={12}
}
@misc{lee2020,
author = {Lee, Jackson},
title = {PyCantonese: Cantonese Linguistics and NLP in Python},
year = {2020},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {https://github.com/jacksonllee/pycantonese},
commit = {1d58f44e1cb097faa69de6b617e1d28903b84b98}
} | 10 | 103 | 2022-03-02T23:29:22 | ---
annotations_creators:
- expert-generated
language_creators:
- found
language:
- yue
license:
- cc-by-4.0
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- translation
- text-generation
- fill-mask
task_ids:
- dialogue-modeling
paperswithcode_id: hong-kong-cantonese-corpus
pretty_name: The Hong Kong Cantonese Corpus (HKCanCor)
dataset_info:
features:
- name: conversation_id
dtype: string
- name: speaker
dtype: string
- name: turn_number
dtype: int16
- name: tokens
sequence: string
- name: transcriptions
sequence: string
- name: pos_tags_prf
sequence:
class_label:
names:
'0': '!'
'1': '"'
'2': '#'
'3': ''''
'4': ','
'5': '-'
'6': .
'7': '...'
'8': '?'
'9': A
'10': AD
'11': AG
'12': AIRWAYS0
'13': AN
'14': AND
'15': B
'16': BG
'17': BEAN0
'18': C
'19': CENTRE0
'20': CG
'21': D
'22': D1
'23': DG
'24': E
'25': ECHO0
'26': F
'27': G
'28': G1
'29': G2
'30': H
'31': HILL0
'32': I
'33': IG
'34': J
'35': JB
'36': JM
'37': JN
'38': JNS
'39': JNT
'40': JNZ
'41': K
'42': KONG
'43': L
'44': L1
'45': LG
'46': M
'47': MG
'48': MONTY0
'49': MOUNTAIN0
'50': N
'51': N1
'52': NG
'53': NR
'54': NS
'55': NSG
'56': NT
'57': NX
'58': NZ
'59': O
'60': P
'61': PEPPER0
'62': Q
'63': QG
'64': R
'65': RG
'66': S
'67': SOUND0
'68': T
'69': TELECOM0
'70': TG
'71': TOUCH0
'72': U
'73': UG
'74': U0
'75': V
'76': V1
'77': VD
'78': VG
'79': VK
'80': VN
'81': VU
'82': VUG
'83': W
'84': X
'85': XA
'86': XB
'87': XC
'88': XD
'89': XE
'90': XJ
'91': XJB
'92': XJN
'93': XJNT
'94': XJNZ
'95': XJV
'96': XJA
'97': XL1
'98': XM
'99': XN
'100': XNG
'101': XNR
'102': XNS
'103': XNT
'104': XNX
'105': XNZ
'106': XO
'107': XP
'108': XQ
'109': XR
'110': XS
'111': XT
'112': XV
'113': XVG
'114': XVN
'115': XX
'116': Y
'117': YG
'118': Y1
'119': Z
- name: pos_tags_ud
sequence:
class_label:
names:
'0': DET
'1': PRON
'2': VERB
'3': NOUN
'4': ADJ
'5': PUNCT
'6': INTJ
'7': ADV
'8': V
'9': PART
'10': X
'11': NUM
'12': PROPN
'13': AUX
'14': CCONJ
'15': ADP
splits:
- name: train
num_bytes: 5746381
num_examples: 10801
download_size: 961514
dataset_size: 5746381
---
# Dataset Card for The Hong Kong Cantonese Corpus (HKCanCor)
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** http://compling.hss.ntu.edu.sg/hkcancor/
- **Repository:** https://github.com/fcbond/hkcancor
- **Paper:** [Luke and Wang, 2015](https://github.com/fcbond/hkcancor/blob/master/data/LukeWong_Hong-Kong-Cantonese-Corpus.pdf)
- **Leaderboard:** N/A
- **Point of Contact:** Luke Kang Kwong
### Dataset Summary
The Hong Kong Cantonese Corpus (HKCanCor) comprises transcribed conversations recorded
between March 1997 and August 1998. It contains recordings of spontaneous speech (51 texts)
and radio programmes (42 texts), which involve 2 to 4 speakers, with 1 text of monologue.
In total, the corpus contains around 230,000 Chinese words. The text is word-segmented (i.e., tokenization is at word-level, and each token can span multiple Chinese characters). Tokens are annotated with part-of-speech (POS) tags and romanised Cantonese pronunciation.
* Romanisation
* Follows conventions set by the Linguistic Society of Hong Kong (LSHK).
* POS
* The tagset used by this corpus extends the one in the Peita-Fujitsu-Renmin Ribao (PRF) corpus (Duan et al., 2000). Extensions were made to further capture Cantonese-specific phenomena.
* To facilitate everyday usage and for better comparability across languages and/or corpora, this dataset also includes the tags mapped to the [Universal Dependencies 2.0](https://universaldependencies.org/u/pos/index.html) format. This mapping references the [PyCantonese](https://github.com/jacksonllee/pycantonese) library.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
Yue Chinese / Cantonese (Hong Kong).
## Dataset Structure
This corpus has 10,801 utterances and approximately 230,000 Chinese words.
There is no predefined split.
### Data Instances
Each instance contains a conversation id, speaker id within that conversation,
turn number, part-of-speech tag for each Chinese word in the PRF format and UD2.0 format,
and the utterance written in Chinese characters as well as its LSHK format romanisation.
For example:
```python
{
'conversation_id': 'TNR016-DR070398-HAI6V'
'pos_tags_prf': ['v', 'w'],
'pos_tags_ud': ['VERB', 'PUNCT'],
'speaker': 'B',
'transcriptions': ['hai6', 'VQ1'],
'turn_number': 112,
'tokens': ['係', '。']
}
```
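Because `tokens`, `transcriptions`, and the POS tag sequences are aligned token by token, they can simply be zipped together. A minimal illustrative sketch using the decoded string tags shown in the instance above (the `align_tokens` helper is our own, not part of any dataset tooling):

```python
def align_tokens(row):
    """Pair each token with its romanisation and UD POS tag."""
    return list(zip(row["tokens"], row["transcriptions"], row["pos_tags_ud"]))

# The sample instance from above, with string POS tags.
row = {
    "tokens": ["係", "。"],
    "transcriptions": ["hai6", "VQ1"],
    "pos_tags_ud": ["VERB", "PUNCT"],
}
print(align_tokens(row))  # [('係', 'hai6', 'VERB'), ('。', 'VQ1', 'PUNCT')]
```

Note that when loading through `datasets`, the `pos_tags_*` fields are stored as `class_label` integers and would need to be decoded to strings first.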
### Data Fields
- conversation_id: unique dialogue-level id
- pos_tags_prf: POS tag using the PRF format at token-level
- pos_tag_ud: POS tag using the UD2.0 format at token-level
- speaker: unique speaker id within dialogue
- transcriptions: token-level romanisation in the LSHK format
- turn_number: turn number in dialogue
- tokens: Chinese word or punctuation at token-level
### Data Splits
There are no specified splits in this dataset.
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
This work is licensed under a [Creative Commons Attribution 4.0 International License](https://creativecommons.org/licenses/by/4.0/deed.ast).
### Citation Information
This corpus was developed by [Luke and Wong, 2015](http://compling.hss.ntu.edu.sg/hkcancor/data/LukeWong_Hong-Kong-Cantonese-Corpus.pdf).
```
@article{luke2015hong,
author={Luke, Kang-Kwong and Wong, May LY},
title={The Hong Kong Cantonese corpus: design and uses},
journal={Journal of Chinese Linguistics},
year={2015},
pages={309-330},
month={12}
}
```
The POS tagset to Universal Dependency tagset mapping is provided by Jackson Lee, as a part of the [PyCantonese](https://github.com/jacksonllee/pycantonese) library.
```
@misc{lee2020,
author = {Lee, Jackson},
title = {PyCantonese: Cantonese Linguistics and NLP in Python},
year = {2020},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/jacksonllee/pycantonese}},
commit = {1d58f44e1cb097faa69de6b617e1d28903b84b98}
}
```
### Contributions
Thanks to [@j-chim](https://github.com/j-chim) for adding this dataset. | 9,244 | [
[
-0.0224761962890625,
-0.0343017578125,
0.0000540614128112793,
0.036590576171875,
-0.0269012451171875,
-0.0139617919921875,
-0.03857421875,
-0.0297393798828125,
0.045379638671875,
0.049774169921875,
-0.024383544921875,
-0.06768798828125,
-0.0310211181640625,
... |
yoruba_gv_ner | 2023-01-25T15:03:39.000Z | [
"task_categories:token-classification",
"task_ids:named-entity-recognition",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:yo",
"license:cc-by-3.0",
"region:us"
] | null | The Yoruba GV NER dataset is a labeled dataset for named entity recognition in Yoruba. The texts were obtained from
Yoruba Global Voices News articles https://yo.globalvoices.org/ . We concentrate on
four types of named entities: persons [PER], locations [LOC], organizations [ORG], and dates & time [DATE].
The Yoruba GV NER data files contain 2 columns separated by a tab ('\t'). Each word has been put on a separate line and
there is an empty line after each sentences i.e the CoNLL format. The first item on each line is a word, the second
is the named entity tag. The named entity tags have the format I-TYPE which means that the word is inside a phrase
of type TYPE. For every multi-word expression like 'New York', the first word gets a tag B-TYPE and the subsequent words
have tags I-TYPE, a word with tag O is not part of a phrase. The dataset is in the BIO tagging scheme.
For more details, see https://www.aclweb.org/anthology/2020.lrec-1.335/ | @inproceedings{alabi-etal-2020-massive,
title = "Massive vs. Curated Embeddings for Low-Resourced Languages: the Case of {Yorùbá} and {T}wi",
author = "Alabi, Jesujoba and
Amponsah-Kaakyire, Kwabena and
Adelani, David and
Espa{\~n}a-Bonet, Cristina",
booktitle = "Proceedings of the 12th Language Resources and Evaluation Conference",
month = may,
year = "2020",
address = "Marseille, France",
publisher = "European Language Resources Association",
url = "https://www.aclweb.org/anthology/2020.lrec-1.335",
pages = "2754--2762",
language = "English",
ISBN = "979-10-95546-34-4",
} | 0 | 103 | 2022-03-02T23:29:22 | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- yo
license:
- cc-by-3.0
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- token-classification
task_ids:
- named-entity-recognition
pretty_name: Yoruba GV NER Corpus
dataset_info:
features:
- name: id
dtype: string
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
'7': B-DATE
'8': I-DATE
config_name: yoruba_gv_ner
splits:
- name: train
num_bytes: 358885
num_examples: 817
- name: validation
num_bytes: 50161
num_examples: 117
- name: test
num_bytes: 96518
num_examples: 237
download_size: 254347
dataset_size: 505564
---
# Dataset Card for Yoruba GV NER Corpus
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:** [Yoruba GV NER](https://github.com/ajesujoba/YorubaTwi-Embedding/tree/master/Yoruba/Yoruba-NER)
- **Paper:** https://www.aclweb.org/anthology/2020.lrec-1.335/
- **Leaderboard:**
- **Point of Contact:** [David Adelani](mailto:didelani@lsv.uni-saarland.de)
### Dataset Summary
The Yoruba GV NER is a named entity recognition (NER) dataset for Yorùbá language based on the [Global Voices news](https://yo.globalvoices.org/) corpus. Global Voices (GV) is a multilingual news platform with articles contributed by journalists, translators, bloggers, and human rights activists from around the world with a coverage of over 50 languages. Most of the texts used in creating the Yoruba GV NER are translations from other languages to Yorùbá.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
The language supported is Yorùbá.
## Dataset Structure
### Data Instances
A data point consists of sentences separated by an empty line, with tab-separated tokens and tags.
{'id': '0',
'ner_tags': [B-LOC, 0, 0, 0, 0],
'tokens': ['Tanzania', 'fi', 'Ajìjàgbara', 'Ọmọ', 'Orílẹ̀-èdèe']
}
### Data Fields
- `id`: id of the sample
- `tokens`: the tokens of the example text
- `ner_tags`: the NER tags of each token
The NER tags correspond to this list:
```
"O", "B-PER", "I-PER", "B-ORG", "I-ORG", "B-LOC", "I-LOC", "B-DATE", "I-DATE",
```
The NER tags have the same format as in the CoNLL shared task: a B denotes the first item of a phrase and an I any non-initial word. There are four types of phrases: person names (PER), organizations (ORG), locations (LOC) and dates & times (DATE). (O) is used for tokens not considered part of any named entity.
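A common way to consume BIO-tagged data like this is to collapse the token-level tags into entity spans. A minimal sketch (not part of the dataset tooling) that groups a B- tag and the I- tags of the same type that follow it:

```python
def bio_to_spans(tokens, tags):
    """Group BIO tags into (entity_type, entity_text) spans."""
    spans, current_type, current_tokens = [], None, []
    for token, tag in zip(tokens, tags):
        if tag.startswith("B-"):
            # A B- tag always opens a new span, closing any open one.
            if current_type:
                spans.append((current_type, " ".join(current_tokens)))
            current_type, current_tokens = tag[2:], [token]
        elif tag.startswith("I-") and current_type == tag[2:]:
            current_tokens.append(token)
        else:  # "O", or an I- tag that does not continue the open span
            if current_type:
                spans.append((current_type, " ".join(current_tokens)))
            current_type, current_tokens = None, []
    if current_type:
        spans.append((current_type, " ".join(current_tokens)))
    return spans

# The data instance from above yields a single LOC entity:
tokens = ["Tanzania", "fi", "Ajìjàgbara", "Ọmọ", "Orílẹ̀-èdèe"]
tags = ["B-LOC", "O", "O", "O", "O"]
print(bio_to_spans(tokens, tags))  # [('LOC', 'Tanzania')]
```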
### Data Splits
Training (19,421 tokens), validation (2,695 tokens) and test split (5,235 tokens)
## Dataset Creation
### Curation Rationale
The data was created to help introduce resources to new language - Yorùbá.
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
The dataset is based on the news domain and was crawled from [Global Voices Yorùbá news](https://yo.globalvoices.org/).
[More Information Needed]
#### Who are the source language producers?
The dataset was contributed by journalists, translators, bloggers, and human rights activists from around the world. Most of the texts used in creating the Yoruba GV NER are translations from other languages to Yorùbá.
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
The data was annotated by Jesujoba Alabi and David Adelani for the paper:
[Massive vs. Curated Embeddings for Low-Resourced Languages: the case of Yorùbá and Twi](https://www.aclweb.org/anthology/2020.lrec-1.335/).
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
The annotated data sets were developed by students of Saarland University, Saarbrücken, Germany.
### Licensing Information
The data is under the [Creative Commons Attribution 3.0](https://creativecommons.org/licenses/by/3.0/) license.
### Citation Information
```
@inproceedings{alabi-etal-2020-massive,
title = "Massive vs. Curated Embeddings for Low-Resourced Languages: the Case of {Y}or{\`u}b{\'a} and {T}wi",
author = "Alabi, Jesujoba and
Amponsah-Kaakyire, Kwabena and
Adelani, David and
Espa{\~n}a-Bonet, Cristina",
booktitle = "Proceedings of the 12th Language Resources and Evaluation Conference",
month = may,
year = "2020",
address = "Marseille, France",
publisher = "European Language Resources Association",
url = "https://www.aclweb.org/anthology/2020.lrec-1.335",
pages = "2754--2762",
language = "English",
ISBN = "979-10-95546-34-4",
}
```
### Contributions
Thanks to [@dadelani](https://github.com/dadelani) for adding this dataset. | 6,184 | [
[
-0.036895751953125,
-0.0699462890625,
0.0030803680419921875,
0.0036334991455078125,
-0.0202789306640625,
0.0008983612060546875,
-0.032806396484375,
-0.0401611328125,
0.04766845703125,
0.02630615234375,
-0.037750244140625,
-0.044219970703125,
-0.05426025390625,
... |
Abirate/code_net_dev_dataset | 2021-12-12T09:26:00.000Z | [
"region:us"
] | Abirate | null | null | 1 | 103 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
Arnold/hausa_common_voice | 2022-02-10T03:28:22.000Z | [
"region:us"
] | Arnold | null | null | 0 | 103 | 2022-03-02T23:29:22 | This dataset is from the common voice corpus 7.0 using the Hausa dataset | 72 | [
[
-0.0191650390625,
-0.012908935546875,
0.0060882568359375,
0.0240631103515625,
-0.014312744140625,
-0.007648468017578125,
0.0059967041015625,
-0.0132904052734375,
0.04974365234375,
0.09613037109375,
-0.044921875,
-0.038543701171875,
-0.0219268798828125,
-0.00... |
gabtan99/pex-conversations | 2022-10-20T19:34:29.000Z | [
"task_ids:dialogue-modeling",
"task_ids:language-modeling",
"multilinguality:multilingual",
"size_categories:unknown",
"source_datasets:original",
"language:tl",
"language:fil",
"license:unknown",
"multi-turn",
"region:us"
] | gabtan99 | null | null | 1 | 103 | 2022-03-02T23:29:22 | ---
language:
- tl
- fil
license:
- unknown
multilinguality:
- multilingual
size_categories:
- unknown
source_datasets:
- original
task_categories:
- sequence-modeling
task_ids:
- dialogue-modeling
- language-modeling
pretty_name: PEx Conversations
tags:
- multi-turn
---
# PinoyExchange (PEx) Conversations Dataset
# Summary
PEx Conversations is a dataset of threads collected from PinoyExchange.com, consisting of Tagalog, English, or Taglish responses.
The corpus consists of 45K scraped threads from 8 subforums. The data contains only the user messages, which means any images, videos, links, or embedded HTML are not collected in the scraping process. All characters have been transliterated to their closest ASCII representation, and Unicode errors were fixed.
# Format
The data is grouped by category. Each object in the list is composed of:
* category - the category of the threads
* conversations - the list of threads
The threads inside conversations have a recursive structure consisting of the following:
* text - the response/reply/prompt
* replies - a list of the replies to this prompt; each reply has the same structure, with its own text and replies components.
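A minimal sketch of walking this recursive structure (the thread below is a made-up illustration, not real PEx data):

```python
def count_utterances(thread):
    """Count every text node in a thread, including all nested replies."""
    return 1 + sum(count_utterances(reply) for reply in thread.get("replies", []))

thread = {
    "text": "Anong maganda sa menu?",  # hypothetical prompt
    "replies": [
        {"text": "Try the sisig!", "replies": []},
        {"text": "Adobo, always.", "replies": [
            {"text": "Agree!", "replies": []},
        ]},
    ],
}
total = count_utterances(thread)
```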
# Subforum breakdown
The amount of data per subforum is as follows:
* Small Talk - 5K conversations with 1.16M utterances
* Food & Drinks - 8.2K conversations with 273K utterances
* Health & Wellness - 6.3K conversations with 93K utterances
* Body & Fitness - 3.9K conversations with 94K utterances
* Home & Garden - 3.6K conversations with 71K utterances
* Style & Fashion - 9.7K conversations with 197K utterances
* Travel & Leisure - 7.3K conversations with 431K utterances
* Visas & Immigration - 1.1K conversations with 99K utterances
# Model Research
[Tagalog DialoGPT](https://huggingface.co/gabtan99/dialogpt-tagalog-medium) | 1,872 | [
[
-0.0188751220703125,
-0.052703857421875,
0.03167724609375,
0.03155517578125,
-0.02423095703125,
0.0163116455078125,
-0.007354736328125,
-0.02410888671875,
0.0355224609375,
0.0479736328125,
-0.052459716796875,
-0.0467529296875,
-0.0197601318359375,
0.01406860... |
iarfmoose/question_generator | 2021-11-29T05:22:03.000Z | [
"region:us"
] | iarfmoose | null | null | 4 | 103 | 2022-03-02T23:29:22 | This dataset is made up of data taken from SQuAD v2.0, RACE, CoQA, and MSMARCO. Some examples have been filtered out of the original datasets and others have been modified.
There are two fields: question and text. The question field contains the question, and the text field contains both the answer and the context in the following format:
"\<answer> (answer text) \<context> (context text)"
The <answer> and <context> are included as special tokens in the question generator's tokenizer.
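A small sketch of packing and unpacking this target format (the answer/context strings are invented examples):

```python
answer = "Paris"
context = "Paris is the capital of France."
# Pack the answer and context into the single text field.
packed = f"<answer> {answer} <context> {context}"

# Recover the two pieces by splitting on the special tokens.
answer_part, context_part = packed.split(" <context> ")
prefix = "<answer> "
recovered_answer = answer_part[len(prefix):]
```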
This dataset is intended to be used with the [question_generator repo](https://github.com/AMontgomerie/question_generator) to train the question generator model.
| 655 | [
[
-0.03033447265625,
-0.051788330078125,
0.0168304443359375,
0.006988525390625,
-0.01242828369140625,
0.01299285888671875,
0.00960540771484375,
-0.034942626953125,
0.0220489501953125,
0.049591064453125,
-0.09136962890625,
-0.01399993896484375,
-0.0074310302734375,... |
projecte-aina/catalanqa | 2023-09-13T12:45:53.000Z | [
"task_categories:question-answering",
"task_ids:extractive-qa",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:ca",
"license:cc-by-sa-4.0",
"arxiv:1606.05250",
"region:us"
] | projecte-aina | CatalanQA: an extractive QA dataset from original Catalan Sources: Wikipedia and VilaWeb newswire.
It is an aggregation and balancing of 2 previous datasets: VilaQUAD and ViquiQUAD, which were described in
This dataset can be used to build extractive-QA and Language Models.
Splits have been balanced by kind of question, and unlike other datasets such as SQuAD, it contains only one question and one answer per context per record, although contexts can repeat multiple times.
- test.json contains 2135 question/answer pairs
- train.json contains 17135 question/answer pairs
- dev.json contains 2157 question/answer pairs
Funded by the Generalitat de Catalunya, Departament de Polítiques Digitals i Administració Pública (AINA),
and Plan de Impulso de las Tecnologías del Lenguaje (Plan TL). | None | 1 | 103 | 2022-06-29T14:22:10 | ---
annotations_creators:
- expert-generated
language_creators:
- found
language:
- ca
license:
- cc-by-sa-4.0
multilinguality:
- monolingual
pretty_name: catalanqa
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- question-answering
task_ids:
- extractive-qa
---
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
# Dataset Card for CatalanQA
## Dataset Description
- **Homepage:** https://github.com/projecte-aina
- **Point of Contact:** [Carlos Rodríguez-Penagos](mailto:carlos.rodriguez1@bsc.es) and [Carme Armentano-Oller](mailto:carme.armentano@bsc.es)
### Dataset Summary
This dataset can be used to build extractive-QA and Language Models. It is an aggregation and balancing of 2 previous datasets: [VilaQuAD](https://huggingface.co/datasets/projecte-aina/vilaquad) and [ViquiQuAD](https://huggingface.co/datasets/projecte-aina/viquiquad).
Splits have been balanced by kind of question, and unlike other datasets like [SQuAD](http://arxiv.org/abs/1606.05250), it only contains, per record, one question and one answer for each context, although the contexts can repeat multiple times.
This dataset was developed by [BSC TeMU](https://temu.bsc.es/) as part of [Projecte AINA](https://politiquesdigitals.gencat.cat/ca/economia/catalonia-ai/aina/), to enrich the [Catalan Language Understanding Benchmark (CLUB)](https://club.aina.bsc.es/).
### Supported Tasks and Leaderboards
Extractive-QA, Language Model.
### Languages
The dataset is in Catalan (`ca-ES`).
## Dataset Structure
### Data Instances
```
{
"title": "Els 521 policies espanyols amb més mala nota a les oposicions seran enviats a Catalunya",
"paragraphs": [
{
"context": "El Ministeri d'Interior espanyol enviarà a Catalunya els 521 policies espanyols que han obtingut més mala nota a les oposicions. Segons que explica El País, hi havia mig miler de places vacants que s'havien de cobrir, però els agents amb més bones puntuacions han elegit destinacions diferents. En total van aprovar les oposicions 2.600 aspirants. D'aquests, en seran destinats al Principat 521 dels 560 amb més mala nota. Per l'altra banda, entre els 500 agents amb més bona nota, només 8 han triat Catalunya. Fonts de la policia espanyola que esmenta el diari ho atribueixen al procés d'independència, al Primer d'Octubre i a la 'situació social' que se'n deriva.",
"qas": [
{
"question": "Quants policies enviaran a Catalunya?",
"id": "0.5961700408283691",
"answers": [
{
"text": "521",
"answer_start": 57
}
]
}
]
}
]
},
```
### Data Fields
Follows [(Rajpurkar, Pranav et al., 2016)](http://arxiv.org/abs/1606.05250) for SQuAD v1 datasets:
- `id` (str): Unique ID assigned to the question.
- `title` (str): Title of the article.
- `context` (str): Article text.
- `question` (str): Question.
- `answers` (list): Answer to the question, containing:
  - `text` (str): Span text answering the question.
  - `answer_start` (int): Starting character offset of the answer span within the context.
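Since `answer_start` is a character offset into `context`, the span can be checked directly; here is a quick sanity check using a truncated prefix of the instance above:

```python
# Prefix of the context from the example instance (truncated for brevity).
context = "El Ministeri d'Interior espanyol enviarà a Catalunya els 521 policies espanyols"
answer = {"text": "521", "answer_start": 57}

start = answer["answer_start"]
span = context[start:start + len(answer["text"])]  # slice out the answer span
```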
### Data Splits
- train.json: 17135 question/answer pairs
- dev.json: 2157 question/answer pairs
- test.json: 2135 question/answer pairs
## Dataset Creation
### Curation Rationale
We created this corpus to contribute to the development of language models in Catalan, a low-resource language.
### Source Data
- [VilaWeb](https://www.vilaweb.cat/) and [Catalan Wikipedia](https://ca.wikipedia.org).
#### Initial Data Collection and Normalization
This dataset is a balanced aggregation from [ViquiQuAD](https://huggingface.co/datasets/projecte-aina/viquiquad) and [VilaQuAD](https://huggingface.co/datasets/projecte-aina/vilaquad) datasets.
#### Who are the source language producers?
Volunteers from [Catalan Wikipedia](https://ca.wikipedia.org) and professional journalists from [VilaWeb](https://www.vilaweb.cat/).
### Annotations
#### Annotation process
We aggregated and balanced the [ViquiQuAD](https://huggingface.co/datasets/projecte-aina/viquiquad) and [VilaQuAD](https://huggingface.co/datasets/projecte-aina/vilaquad) datasets.
To annotate those datasets, we commissioned the creation of 1 to 5 questions for each context, following an adaptation of the guidelines from SQuAD 1.0 [(Rajpurkar, Pranav et al., 2016)](http://arxiv.org/abs/1606.05250).
For compatibility with similar datasets in other languages, we followed existing curation guidelines as closely as possible.
#### Who are the annotators?
Annotation was commissioned by a specialized company that hired a team of native language speakers.
### Personal and Sensitive Information
No personal or sensitive information is included.
## Considerations for Using the Data
### Social Impact of Dataset
We hope this corpus contributes to the development of language models in Catalan, a low-resource language.
### Discussion of Biases
[N/A]
### Other Known Limitations
[N/A]
## Additional Information
### Dataset Curators
Text Mining Unit (TeMU) at the Barcelona Supercomputing Center (bsc-temu@bsc.es)
This work was funded by the [Departament de la Vicepresidència i de Polítiques Digitals i Territori de la Generalitat de Catalunya](https://politiquesdigitals.gencat.cat/ca/inici/index.html#googtrans(ca|en) within the framework of [Projecte AINA](https://politiquesdigitals.gencat.cat/ca/economia/catalonia-ai/aina).
### Licensing Information
This work is licensed under a <a rel="license" href="https://creativecommons.org/licenses/by-sa/4.0/">Attribution-ShareAlike 4.0 International License</a>.
### Contributions
[N/A] | 6,748 | [
[
-0.033599853515625,
-0.051025390625,
0.00440216064453125,
0.03997802734375,
-0.007415771484375,
0.014404296875,
-0.0124359130859375,
-0.02294921875,
0.04620361328125,
0.042694091796875,
-0.0335693359375,
-0.06256103515625,
-0.03369140625,
0.01517486572265625... |
SLPL/naab | 2022-11-03T06:33:48.000Z | [
"task_categories:fill-mask",
"task_categories:text-generation",
"task_ids:language-modeling",
"task_ids:masked-language-modeling",
"multilinguality:monolingual",
"size_categories:100M<n<1B",
"language:fa",
"license:mit",
"arxiv:2208.13486",
"region:us"
] | SLPL | Huge corpora of textual data are always known to be a crucial need for training deep models such as transformer-based ones. This issue is emerging more in lower resource languages - like Farsi. We propose naab, the biggest cleaned and ready-to-use open-source textual corpus in Farsi. It contains about 130GB of data, 250 million paragraphs, and 15 billion words. The project name is derived from the Farsi word ناب which means pure and high-grade. | @misc{https://doi.org/10.48550/arxiv.2208.13486,
doi = {10.48550/ARXIV.2208.13486},
url = {https://arxiv.org/abs/2208.13486},
author = {Sabouri, Sadra and Rahmati, Elnaz and Gooran, Soroush and Sameti, Hossein},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {naab: A ready-to-use plug-and-play corpus for Farsi},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution Non Commercial Share Alike 4.0 International}
} | 25 | 103 | 2022-08-18T13:47:40 | ---
language:
- fa
license:
- mit
multilinguality:
- monolingual
size_categories:
- 100M<n<1B
task_categories:
- fill-mask
- text-generation
task_ids:
- language-modeling
- masked-language-modeling
pretty_name: naab (A ready-to-use plug-and-play corpus in Farsi)
---
# naab: A ready-to-use plug-and-play corpus in Farsi
_[If you want to join our community to keep up with news, models and datasets from naab, click on [this](https://docs.google.com/forms/d/e/1FAIpQLSe8kevFl_ODCx-zapAuOIAQYr8IvkVVaVHOuhRL9Ha0RVJ6kg/viewform) link.]_
## Table of Contents
- [Dataset Card Creation Guide](#dataset-card-creation-guide)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [Sharif Speech and Language Processing Lab](https://huggingface.co/SLPL)
- **Paper:** [naab: A ready-to-use plug-and-play corpus for Farsi](https://arxiv.org/abs/2208.13486)
- **Point of Contact:** [Sadra Sabouri](mailto:sabouri.sadra@gmail.com)
### Dataset Summary
naab is the biggest cleaned and ready-to-use open-source textual corpus in Farsi. It contains about 130GB of data, 250 million paragraphs, and 15 billion words. The project name is derived from the Farsi word ناب which means pure and high-grade. We also provide the raw version of the corpus, called naab-raw, and an easy-to-use pre-processor that can be employed by those who want to make a customized corpus.
You can use this corpus with the commands below:
```python
from datasets import load_dataset
dataset = load_dataset("SLPL/naab")
```
You may need to download only parts/splits of this corpus; if so, use the command below (you can find more ways to use it [here](https://huggingface.co/docs/datasets/loading#slice-splits)):
```python
from datasets import load_dataset
dataset = load_dataset("SLPL/naab", split="train[:10%]")
```
**Note: be sure that your machine has at least 130 GB of free space; the download may also take a while. If you are short on disk space or bandwidth, you can use the code snippet below to download only your chosen sections of naab:**
```python
from datasets import load_dataset
# ==========================================================
# You should just change this part in order to download your
# parts of corpus.
indices = {
"train": [5, 1, 2],
"test": [0, 2]
}
# ==========================================================
N_FILES = {
"train": 126,
"test": 3
}
_BASE_URL = "https://huggingface.co/datasets/SLPL/naab/resolve/main/data/"
data_url = {
"train": [_BASE_URL + "train-{:05d}-of-{:05d}.txt".format(x, N_FILES["train"]) for x in range(N_FILES["train"])],
"test": [_BASE_URL + "test-{:05d}-of-{:05d}.txt".format(x, N_FILES["test"]) for x in range(N_FILES["test"])],
}
for index in indices['train']:
assert index < N_FILES['train']
for index in indices['test']:
assert index < N_FILES['test']
data_files = {
"train": [data_url['train'][i] for i in indices['train']],
"test": [data_url['test'][i] for i in indices['test']]
}
print(data_files)
dataset = load_dataset('text', data_files=data_files, use_auth_token=True)
```
### Supported Tasks and Leaderboards
This corpus can be used to train any language model trainable with a Masked Language Modeling (MLM) or other self-supervised objective.
- `language-modeling`
- `masked-language-modeling`
## Dataset Structure
Each row of the dataset looks like the following:
```json
{
'text': "این یک تست برای نمایش یک پاراگراف در پیکره متنی ناب است.",
}
```
+ `text` : the textual paragraph.
### Data Splits
This dataset includes two splits (`train` and `test`). We created them by dividing a randomly permuted version of the corpus into a (95%, 5%) division corresponding to (`train`, `test`). Since validation is usually carved out of the `train` split during training, we do not propose a separate `validation` split.
| | train | test |
|-------------------------|------:|-----:|
| Input Sentences | 225892925 | 11083849 |
| Average Sentence Length | 61 | 25 |
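The (95%, 5%) split procedure described above can be sketched as follows (illustrative only; the real corpus is split over text files, not an in-memory list):

```python
import random

paragraphs = [f"paragraph-{i}" for i in range(1000)]  # stand-in corpus
rng = random.Random(0)  # fixed seed for reproducibility
rng.shuffle(paragraphs)  # random permutation of the corpus

cut = int(len(paragraphs) * 0.95)  # 95% / 5% boundary
train, test = paragraphs[:cut], paragraphs[cut:]
```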
Below you can see the log-scale histogram of words per paragraph over the two splits of the dataset.
<div align="center">
<img src="https://huggingface.co/datasets/SLPL/naab/resolve/main/naab-hist.png">
</div>
## Dataset Creation
### Curation Rationale
Due to the lack of large amounts of text data in lower-resource languages - like Farsi - researchers working on these languages have always found it hard to fine-tune such models. This can lead to a situation in which the opportunity to fine-tune models rests in the hands of a few companies or countries, which contributes to weakening open science.
The previous largest cleaned, merged textual corpus in Farsi was a 70GB text corpus compiled from 8 big datasets that had been cleaned and could be downloaded directly. Our solution to the discussed issues is called naab. It provides **126GB** (including more than **224 million** sequences and nearly **15 billion** words) as the training corpus and **2.3GB** (including nearly **11 million** sequences and nearly **300 million** words) as the test corpus.
### Source Data
The textual corpora that we used as our source data are illustrated in the figure below. It contains 5 corpora which are linked in the coming sections.
<div align="center">
<img src="https://huggingface.co/datasets/SLPL/naab/resolve/main/naab-pie.png">
</div>
#### Persian NLP
[This](https://github.com/persiannlp/persian-raw-text) corpus includes eight corpora that are sorted based on their volume as below:
- [Common Crawl](https://commoncrawl.org/): 65GB ([link](https://storage.googleapis.com/danielk-files/farsi-text/merged_files/commoncrawl_fa_merged.txt))
- [MirasText](https://github.com/miras-tech/MirasText): 12GB
- [W2C – Web to Corpus](https://lindat.mff.cuni.cz/repository/xmlui/handle/11858/00-097C-0000-0022-6133-9): 1GB ([link](https://storage.googleapis.com/danielk-files/farsi-text/merged_files/w2c_merged.txt))
- Persian Wikipedia (March 2020 dump): 787MB ([link](https://storage.googleapis.com/danielk-files/farsi-text/merged_files/fawiki_merged.txt))
- [Leipzig Corpora](https://corpora.uni-leipzig.de/): 424MB ([link](https://storage.googleapis.com/danielk-files/farsi-text/merged_files/LeipzigCorpus.txt))
- [VOA corpus](https://jon.dehdari.org/corpora/): 66MB ([link](https://storage.googleapis.com/danielk-files/farsi-text/merged_files/voa_persian_2003_2008_cleaned.txt))
- [Persian poems corpus](https://github.com/amnghd/Persian_poems_corpus): 61MB ([link](https://storage.googleapis.com/danielk-files/farsi-text/merged_files/poems_merged.txt))
- [TEP: Tehran English-Persian parallel corpus](http://opus.nlpl.eu/TEP.php): 33MB ([link](https://storage.googleapis.com/danielk-files/farsi-text/merged_files/TEP_fa.txt))
#### AGP
This corpus was formerly a private corpus of ASR Gooyesh Pardaz, which is now published for all users through this project. It contains more than 140 million paragraphs summing to 23GB (after cleaning). This corpus is a mixture of both formal and informal paragraphs crawled from different websites and/or social media.
#### OSCAR-fa
[OSCAR](https://oscar-corpus.com/), or Open Super-large Crawled ALMAnaCH coRpus, is a huge multilingual corpus obtained by language classification and filtering of the Common Crawl corpus using the goclassy architecture. Data is distributed by language in both original and deduplicated form. We used the `unshuffled-deduplicated-fa` subset of this corpus; after cleaning, about 36GB remained.
#### Telegram
Telegram, a cloud-based instant messaging service, is a widely used application in Iran. Following this hypothesis, we prepared a list of Telegram channels in Farsi covering various topics including sports, daily news, jokes, movies and entertainment, etc. The text data extracted from mentioned channels mainly contains informal data.
#### LSCP
[The Large Scale Colloquial Persian Language Understanding dataset](https://iasbs.ac.ir/~ansari/lscp/) has 120M sentences from 27M casual Persian sentences with their derivation trees, part-of-speech tags, sentiment polarity, and translations in English, German, Czech, Italian, and Hindi. However, we used only the Farsi part of it, and after cleaning, 2.3GB remained. Since the dataset is casual, it may help our corpus include more informal sentences, although its proportion to formal paragraphs is not comparable.
#### Initial Data Collection and Normalization
The data collection process had two parts. In the first part, we searched for existing corpora. After downloading these corpora, we started to crawl data from some social networks. Then, thanks to [ASR Gooyesh Pardaz](https://asr-gooyesh.com/en/), we were provided with enough textual data to start the naab journey.
We used a preprocessor based on stream-based Linux commands so that this process would be less time- and memory-consuming. The code is provided [here](https://github.com/Sharif-SLPL/t5-fa/tree/main/preprocess).
### Personal and Sensitive Information
Since this corpus is essentially a compilation of former corpora, we take no responsibility for personal information included in it. If you detect any such violations, please let us know and we will try our best to remove them from the corpus as soon as possible.
We tried our best to provide anonymity while keeping the crucial information. We shuffled some parts of the corpus so that information passing through possible conversations would not be harmful.
## Additional Information
### Dataset Curators
+ Sadra Sabouri (Sharif University of Technology)
+ Elnaz Rahmati (Sharif University of Technology)
### Licensing Information
MIT
### Citation Information
```
@article{sabouri2022naab,
title={naab: A ready-to-use plug-and-play corpus for Farsi},
author={Sabouri, Sadra and Rahmati, Elnaz and Gooran, Soroush and Sameti, Hossein},
journal={arXiv preprint arXiv:2208.13486},
year={2022}
}
```
DOI: [https://doi.org/10.48550/arXiv.2208.13486](https://doi.org/10.48550/arXiv.2208.13486)
### Contributions
Thanks to [@sadrasabouri](https://github.com/sadrasabouri) and [@elnazrahmati](https://github.com/elnazrahmati) for adding this dataset.
### Keywords
+ Farsi
+ Persian
+ raw text
+ پیکره فارسی
+ پیکره متنی
+ آموزش مدل زبانی
| 11,260 | [
[
-0.050140380859375,
-0.043853759765625,
0.02178955078125,
0.033294677734375,
-0.0141448974609375,
0.0020465850830078125,
-0.0306243896484375,
-0.0211944580078125,
0.0279998779296875,
0.035552978515625,
-0.0260162353515625,
-0.06121826171875,
-0.023162841796875,
... |
parambharat/mile_dataset | 2022-12-05T11:46:00.000Z | [
"task_categories:automatic-speech-recognition",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:ta",
"license:cc-by-2.0",
"Tamil ASR",
"Speech Recognition",
"arxiv:22... | parambharat | IISc-MILE Tamil ASR Corpus contains a transcribed speech corpus for training ASR systems for the Tamil language. It contains ~150 hours of read speech data collected from 531 speakers in a noise-free recording environment with high-quality USB microphones. | @misc{mile_1,
doi = {10.48550/ARXIV.2207.13331},
url = {https://arxiv.org/abs/2207.13331},
author = {A, Madhavaraj and Pilar, Bharathi and G, Ramakrishnan A},
title = {Subword Dictionary Learning and Segmentation Techniques for Automatic Speech Recognition in Tamil and Kannada},
publisher = {arXiv},
year = {2022},
}
@misc{mile_2,
doi = {10.48550/ARXIV.2207.13333},
url = {https://arxiv.org/abs/2207.13333},
author = {A, Madhavaraj and Pilar, Bharathi and G, Ramakrishnan A},
title = {Knowledge-driven Subword Grammar Modeling for Automatic Speech Recognition in Tamil and Kannada},
publisher = {arXiv},
year = {2022},
} | 1 | 103 | 2022-12-05T11:37:10 | ---
annotations_creators:
- expert-generated
language:
- ta
language_creators:
- expert-generated
license:
- cc-by-2.0
multilinguality:
- monolingual
pretty_name: IISc-MILE Tamil ASR Corpus
size_categories:
- 10K<n<100K
source_datasets:
- original
tags:
- Tamil ASR
- Speech Recognition
task_categories:
- automatic-speech-recognition
task_ids: []
---
# Dataset Card for IISc-MILE Tamil ASR Corpus
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://www.openslr.org/127/
- **Repository:** https://github.com/MILE-IISc
- **Paper:** https://arxiv.org/abs/2207.13331
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
Tamil transcribed speech corpus for ASR
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
- Tamil
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
Attribution 2.0 Generic (CC BY 2.0)
### Citation Information
```
@misc{mile_1,
  doi = {10.48550/ARXIV.2207.13331},
  url = {https://arxiv.org/abs/2207.13331},
  author = {A, Madhavaraj and Pilar, Bharathi and G, Ramakrishnan A},
  title = {Subword Dictionary Learning and Segmentation Techniques for Automatic Speech Recognition in Tamil and Kannada},
  publisher = {arXiv},
  year = {2022},
}

@misc{mile_2,
  doi = {10.48550/ARXIV.2207.13333},
  url = {https://arxiv.org/abs/2207.13333},
  author = {A, Madhavaraj and Pilar, Bharathi and G, Ramakrishnan A},
  title = {Knowledge-driven Subword Grammar Modeling for Automatic Speech Recognition in Tamil and Kannada},
  publisher = {arXiv},
  year = {2022},
}
```
### Contributions
Thanks to [@parambharat](https://github.com/parambharat) for adding this dataset.
| 3,545 | [
[
-0.03680419921875,
-0.042633056640625,
0.00424957275390625,
0.01560211181640625,
-0.03173828125,
0.0189666748046875,
-0.0220489501953125,
-0.02301025390625,
0.041259765625,
0.0229949951171875,
-0.05194091796875,
-0.068603515625,
-0.058837890625,
0.0056991577... |
SZTAKI-HLT/HunSum-1 | 2023-01-24T16:21:00.000Z | [
"task_categories:summarization",
"task_ids:news-articles-summarization",
"multilinguality:monolingual",
"language:hu",
"license:cc-by-nc-sa-4.0",
"region:us"
] | SZTAKI-HLT | null | null | 2 | 103 | 2023-01-06T07:42:26 | ---
language:
- hu
multilinguality:
- monolingual
task_categories:
- summarization
task_ids:
- news-articles-summarization
pretty_name: HunSum-1
license: cc-by-nc-sa-4.0
---
# Dataset Card for HunSum-1
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
## Dataset Description
### Dataset Summary
The HunSum-1 Dataset is a Hungarian-language dataset containing over 1.1M unique news articles with lead and other metadata. The dataset contains articles from 9 major Hungarian news websites.
### Supported Tasks and Leaderboards
- 'summarization'
- 'title generation'
## Dataset Structure
### Data Fields
- `uuid`: a string containing the unique id
- `article`: a string containing the body of the news article
- `lead`: a string containing the lead of the article
- `title`: a string containing the title of the article
- `url`: a string containing the URL for the article
- `domain`: a string containing the domain of the url
- `date_of_creation`: a timestamp containing the date when the article was created
- `tags`: a sequence containing the tags of the article
### Data Splits
The HunSum-1 dataset has 3 splits: _train_, _validation_, and _test_.
| Dataset Split | Number of Instances in Split |
| ------------- | ------------------------------------------- |
| Train | 1,144,255 |
| Validation | 1996 |
| Test | 1996 |
## Citation
If you use our dataset, please cite the following paper:
```
@inproceedings {HunSum-1,
title = {{HunSum-1: an Abstractive Summarization Dataset for Hungarian}},
booktitle = {XIX. Magyar Számítógépes Nyelvészeti Konferencia (MSZNY 2023)},
year = {2023},
publisher = {Szegedi Tudományegyetem, Informatikai Intézet},
address = {Szeged, Magyarország},
author = {Barta, Botond and Lakatos, Dorina and Nagy, Attila and Nyist, Mil{\'{a}}n Konor and {\'{A}}cs, Judit},
pages = {231--243}
}
``` | 2,232 | [
[
-0.0288238525390625,
-0.031890869140625,
0.0031681060791015625,
0.0125274658203125,
-0.0286102294921875,
-0.0210113525390625,
-0.01401519775390625,
-0.01739501953125,
0.023712158203125,
0.0280609130859375,
-0.034332275390625,
-0.0743408203125,
-0.03204345703125,... |
Multimodal-Fatima/FGVC_Aircraft_test | 2023-06-02T02:15:19.000Z | [
"region:us"
] | Multimodal-Fatima | null | null | 0 | 103 | 2023-01-28T02:49:32 | ---
dataset_info:
features:
- name: image
dtype: image
- name: family
dtype:
class_label:
names:
'0': A300
'1': A310
'2': A320
'3': A330
'4': A340
'5': A380
'6': ATR-42
'7': ATR-72
'8': An-12
'9': BAE 146
'10': BAE-125
'11': Beechcraft 1900
'12': Boeing 707
'13': Boeing 717
'14': Boeing 727
'15': Boeing 737
'16': Boeing 747
'17': Boeing 757
'18': Boeing 767
'19': Boeing 777
'20': C-130
'21': C-47
'22': CRJ-200
'23': CRJ-700
'24': Cessna 172
'25': Cessna 208
'26': Cessna Citation
'27': Challenger 600
'28': DC-10
'29': DC-3
'30': DC-6
'31': DC-8
'32': DC-9
'33': DH-82
'34': DHC-1
'35': DHC-6
'36': DR-400
'37': Dash 8
'38': Dornier 328
'39': EMB-120
'40': Embraer E-Jet
'41': Embraer ERJ 145
'42': Embraer Legacy 600
'43': Eurofighter Typhoon
'44': F-16
'45': F/A-18
'46': Falcon 2000
'47': Falcon 900
'48': Fokker 100
'49': Fokker 50
'50': Fokker 70
'51': Global Express
'52': Gulfstream
'53': Hawk T1
'54': Il-76
'55': King Air
'56': L-1011
'57': MD-11
'58': MD-80
'59': MD-90
'60': Metroliner
'61': PA-28
'62': SR-20
'63': Saab 2000
'64': Saab 340
'65': Spitfire
'66': Tornado
'67': Tu-134
'68': Tu-154
'69': Yak-42
- name: manufacturer
dtype:
class_label:
names:
'0': ATR
'1': Airbus
'2': Antonov
'3': Beechcraft
'4': Boeing
'5': Bombardier Aerospace
'6': British Aerospace
'7': Canadair
'8': Cessna
'9': Cirrus Aircraft
'10': Dassault Aviation
'11': Dornier
'12': Douglas Aircraft Company
'13': Embraer
'14': Eurofighter
'15': Fairchild
'16': Fokker
'17': Gulfstream Aerospace
'18': Ilyushin
'19': Lockheed Corporation
'20': Lockheed Martin
'21': McDonnell Douglas
'22': Panavia
'23': Piper
'24': Robin
'25': Saab
'26': Supermarine
'27': Tupolev
'28': Yakovlev
'29': de Havilland
- name: label
dtype:
class_label:
names:
'0': 707-320
'1': 727-200
'2': 737-200
'3': 737-300
'4': 737-400
'5': 737-500
'6': 737-600
'7': 737-700
'8': 737-800
'9': 737-900
'10': 747-100
'11': 747-200
'12': 747-300
'13': 747-400
'14': 757-200
'15': 757-300
'16': 767-200
'17': 767-300
'18': 767-400
'19': 777-200
'20': 777-300
'21': A300B4
'22': A310
'23': A318
'24': A319
'25': A320
'26': A321
'27': A330-200
'28': A330-300
'29': A340-200
'30': A340-300
'31': A340-500
'32': A340-600
'33': A380
'34': ATR-42
'35': ATR-72
'36': An-12
'37': BAE 146-200
'38': BAE 146-300
'39': BAE-125
'40': Beechcraft 1900
'41': Boeing 717
'42': C-130
'43': C-47
'44': CRJ-200
'45': CRJ-700
'46': CRJ-900
'47': Cessna 172
'48': Cessna 208
'49': Cessna 525
'50': Cessna 560
'51': Challenger 600
'52': DC-10
'53': DC-3
'54': DC-6
'55': DC-8
'56': DC-9-30
'57': DH-82
'58': DHC-1
'59': DHC-6
'60': DHC-8-100
'61': DHC-8-300
'62': DR-400
'63': Dornier 328
'64': E-170
'65': E-190
'66': E-195
'67': EMB-120
'68': ERJ 135
'69': ERJ 145
'70': Embraer Legacy 600
'71': Eurofighter Typhoon
'72': F-16A/B
'73': F/A-18
'74': Falcon 2000
'75': Falcon 900
'76': Fokker 100
'77': Fokker 50
'78': Fokker 70
'79': Global Express
'80': Gulfstream IV
'81': Gulfstream V
'82': Hawk T1
'83': Il-76
'84': L-1011
'85': MD-11
'86': MD-80
'87': MD-87
'88': MD-90
'89': Metroliner
'90': Model B200
'91': PA-28
'92': SR-20
'93': Saab 2000
'94': Saab 340
'95': Spitfire
'96': Tornado
'97': Tu-134
'98': Tu-154
'99': Yak-42
- name: id
dtype: int64
- name: clip_tags_ViT_L_14
sequence: string
- name: LLM_Description_opt175b_downstream_tasks_ViT_L_14
sequence: string
- name: LLM_Description_gpt3_downstream_tasks_ViT_L_14
sequence: string
- name: blip_caption
dtype: string
- name: clip_tag_ViT_L_14_specific
dtype: string
- name: LLM_Description_gpt3_downstream_tasks_visual_genome_ViT_L_14
sequence: string
- name: Attributes_ViT_L_14_text_davinci_003_full
sequence: string
- name: Attributes_ViT_L_14_text_davinci_003_fgvc
sequence: string
- name: clip_tags_ViT_L_14_with_openai_classes
sequence: string
- name: clip_tags_ViT_L_14_wo_openai_classes
sequence: string
- name: clip_tags_ViT_L_14_simple_specific
dtype: string
- name: clip_tags_ViT_L_14_ensemble_specific
dtype: string
- name: clip_tags_ViT_B_16_simple_specific
dtype: string
- name: clip_tags_ViT_B_16_ensemble_specific
dtype: string
- name: clip_tags_ViT_B_32_simple_specific
dtype: string
- name: clip_tags_ViT_B_32_ensemble_specific
dtype: string
- name: Attributes_ViT_B_16_descriptors_text_davinci_003_full
sequence: string
- name: Attributes_LAION_ViT_H_14_2B_descriptors_text_davinci_003_full
sequence: string
- name: clip_tags_LAION_ViT_H_14_2B_simple_specific
dtype: string
- name: clip_tags_LAION_ViT_H_14_2B_ensemble_specific
dtype: string
splits:
- name: test
num_bytes: 929803718.0
num_examples: 3333
download_size: 923279914
dataset_size: 929803718.0
---
# Dataset Card for "FGVC_Aircraft_test"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 6,961 | [
[
-0.047698974609375,
-0.02960205078125,
0.0019006729125976562,
0.01233673095703125,
-0.01380157470703125,
0.0009765625,
0.02764892578125,
0.002193450927734375,
0.0384521484375,
0.0190582275390625,
-0.0579833984375,
-0.0399169921875,
-0.02716064453125,
-0.0280... |
Olec/cyber-threat-intelligence_v2 | 2023-04-15T11:00:18.000Z | [
"region:us"
] | Olec | null | null | 4 | 103 | 2023-03-31T15:08:08 | ---
dataset_info:
features:
- name: id
dtype: int64
- name: text
dtype: string
- name: entities
list:
- name: end_offset
dtype: int64
- name: id
dtype: int64
- name: label
dtype: string
- name: start_offset
dtype: int64
- name: relations
list:
- name: from_id
dtype: int64
- name: id
dtype: int64
- name: to_id
dtype: int64
- name: type
dtype: string
splits:
- name: test
num_bytes: 29518
num_examples: 72
- name: train
num_bytes: 147723
num_examples: 332
- name: validation
num_bytes: 36580
num_examples: 76
download_size: 119557
dataset_size: 213821
---
# Dataset Card for "cyber-threat-intelligence_v2"
Updated version of mrmoor/cyber-threat-intelligence.
RE and NER dataset for Cyber Threat Intelligence (CTI).
A T5 model trained on NYT and this dataset: Olec/cyber_rebel.
This dataset only contains sentences with relations.
The full dataset is available at mrmoor/cyber-threat-intelligence.
[
0.005840301513671875,
-0.026702880859375,
0.003143310546875,
-0.0206298828125,
-0.00948333740234375,
0.0230560302734375,
0.00605010986328125,
-0.039642333984375,
0.0360107421875,
0.035125732421875,
-0.062103271484375,
-0.04217529296875,
-0.034088134765625,
-... |
ztphs980/taptap_datasets | 2023-05-23T12:32:37.000Z | [
"language:en",
"license:mit",
"arxiv:2305.09696",
"region:us"
] | ztphs980 | null | null | 2 | 103 | 2023-05-20T14:34:39 | ---
license: mit
language:
- en
---
This repository contains a total of 483 tabular datasets with meaningful column names collected from OpenML, UCI, and Kaggle platforms. The last column of each dataset is the label column. For more details, please refer to our paper https://arxiv.org/abs/2305.09696.
You can use the [code](https://github.com/ZhangTP1996/TapTap/blob/master/load_pretraining_datasets.py) to load all the datasets into a dictionary of pd.DataFrame.
An example script can be found below:
```python
from datasets import load_dataset
import pandas as pd
import numpy as np
data = {}
dataset = load_dataset(path='ztphs980/taptap_datasets')
dataset = dataset['train'].to_dict()
for table_name, table in zip(dataset['dataset_name'], dataset['table']):
table = pd.DataFrame.from_dict(eval(table, {'nan': np.nan}))
data[table_name] = table
``` | 864 | [
[
-0.037872314453125,
-0.003200531005859375,
0.0167236328125,
0.015777587890625,
0.0006566047668457031,
-0.00775909423828125,
-0.01381683349609375,
0.01378631591796875,
0.018951416015625,
0.051361083984375,
-0.0116119384765625,
-0.060302734375,
-0.0169677734375,
... |
garcianacho/human_genome_csv | 2023-10-04T12:41:28.000Z | [
"task_categories:token-classification",
"license:apache-2.0",
"biology",
"genome",
"human genome",
"bioinformatics",
"region:us"
] | garcianacho | null | null | 0 | 103 | 2023-09-20T08:52:07 | ---
license: apache-2.0
task_categories:
- token-classification
tags:
- biology
- genome
- human genome
- bioinformatics
---
## Human Genome Dataset
Here is the human genome, ready to be used to train LLMs.
| 206 | [
[
-0.0085296630859375,
0.0040283203125,
0.0138397216796875,
0.00545501708984375,
-0.0191497802734375,
0.01145172119140625,
0.00965118408203125,
0.00868988037109375,
0.024017333984375,
0.055694580078125,
-0.049346923828125,
-0.039093017578125,
-0.036529541015625,
... |
vishnupriyavr/wiki-movie-plots-with-summaries-faiss-embeddings | 2023-10-08T16:02:50.000Z | [
"region:us"
] | vishnupriyavr | null | null | 0 | 103 | 2023-10-08T16:02:41 | ---
dataset_info:
features:
- name: Release Year
dtype: int64
- name: Title
dtype: string
- name: Cast
dtype: string
- name: Wiki Page
dtype: string
- name: Plot
dtype: string
- name: plot_length
dtype: int64
- name: text
dtype: string
- name: embeddings
sequence: float32
splits:
- name: train
num_bytes: 256974740
num_examples: 33155
download_size: 216835238
dataset_size: 256974740
---
# Dataset Card for "wiki-movie-plots-with-summaries-faiss-embeddings"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 657 | [
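The `embeddings` column stores one float vector per movie, intended for FAISS-based semantic search. A minimal sketch of the underlying idea using pure-Python cosine similarity over invented 3-dimensional vectors (the real embeddings are much higher-dimensional, and FAISS itself is omitted here):

```python
import math

# Invented toy vectors standing in for the dataset's `embeddings` column.
corpus = {
    "The Great Train Robbery": [0.9, 0.1, 0.0],
    "A Trip to the Moon": [0.1, 0.9, 0.2],
    "Le Voyage dans la Lune": [0.3, 0.7, 0.4],
}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))

def nearest(query, corpus):
    # Return the title whose embedding is most similar to the query vector.
    return max(corpus, key=lambda title: cosine(query, corpus[title]))

print(nearest([0.15, 0.85, 0.25], corpus))  # -> A Trip to the Moon
```

In practice one would build the index with `dataset.add_faiss_index(column="embeddings")` and query it via `get_nearest_examples`; the sketch above only illustrates the similarity ranking those calls perform.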
[
-0.039398193359375,
-0.0137786865234375,
0.01071929931640625,
0.010284423828125,
-0.0299224853515625,
0.0032482147216796875,
0.0207061767578125,
0.02252197265625,
0.07965087890625,
0.032012939453125,
-0.056243896484375,
-0.03997802734375,
-0.056243896484375,
... |
result-kand2-sdxl-wuerst-karlo/e1cc4189 | 2023-10-23T14:05:26.000Z | [
"region:us"
] | result-kand2-sdxl-wuerst-karlo | null | null | 0 | 103 | 2023-10-23T14:05:25 | ---
dataset_info:
features:
- name: result
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 211
num_examples: 10
download_size: 1374
dataset_size: 211
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "e1cc4189"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 455 | [
[
-0.0458984375,
-0.01351165771484375,
0.0123138427734375,
0.01010894775390625,
-0.0208892822265625,
-0.0075836181640625,
0.0250701904296875,
-0.01319122314453125,
0.07806396484375,
0.028839111328125,
-0.0657958984375,
-0.04608154296875,
-0.040374755859375,
-0... |
bigheiniuJ/JimmyLuAug | 2023-11-03T00:36:31.000Z | [
"region:us"
] | bigheiniuJ | null | null | 0 | 103 | 2023-10-30T17:17:08 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: output
dtype: string
- name: input
dtype: string
- name: seed
dtype: string
- name: split
dtype: string
- name: task
dtype: string
- name: options
sequence: string
- name: id
dtype: int64
- name: aug_type
dtype: string
- name: aug_time
dtype: int64
splits:
- name: train
num_bytes: 346063134
num_examples: 898919
download_size: 94246763
dataset_size: 346063134
---
# Dataset Card for "JimmyLuAug"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 723 | [
[
-0.038604736328125,
-0.02215576171875,
0.008087158203125,
-0.0014066696166992188,
-0.0168914794921875,
0.013916015625,
0.0139312744140625,
-0.01739501953125,
0.07568359375,
0.035400390625,
-0.0633544921875,
-0.050018310546875,
-0.043365478515625,
-0.01757812... |
bbc_hindi_nli | 2023-01-25T14:27:06.000Z | [
"task_categories:text-classification",
"task_ids:natural-language-inference",
"annotations_creators:machine-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:extended|bbc__hindi_news_classification",
"language:hi",
"license:mit",
"... | null | This dataset is used to train models for Natural Language Inference Tasks in Low-Resource Languages like Hindi. | @inproceedings{uppal-etal-2020-two,
title = "Two-Step Classification using Recasted Data for Low Resource Settings",
author = "Uppal, Shagun and
Gupta, Vivek and
Swaminathan, Avinash and
Zhang, Haimin and
Mahata, Debanjan and
Gosangi, Rakesh and
Shah, Rajiv Ratn and
Stent, Amanda",
booktitle = "Proceedings of the 1st Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 10th International Joint Conference on Natural Language Processing",
month = dec,
year = "2020",
address = "Suzhou, China",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.aacl-main.71",
pages = "706--719",
abstract = "An NLP model{'}s ability to reason should be independent of language. Previous works utilize Natural Language Inference (NLI) to understand the reasoning ability of models, mostly focusing on high resource languages like English. To address scarcity of data in low-resource languages such as Hindi, we use data recasting to create NLI datasets for four existing text classification datasets. Through experiments, we show that our recasted dataset is devoid of statistical irregularities and spurious patterns. We further study the consistency in predictions of the textual entailment models and propose a consistency regulariser to remove pairwise-inconsistencies in predictions. We propose a novel two-step classification method which uses textual-entailment predictions for classification task. We further improve the performance by using a joint-objective for classification and textual entailment. We therefore highlight the benefits of data recasting and improvements on classification performance using our approach with supporting experimental results.",
} | 0 | 102 | 2022-03-02T23:29:22 | ---
annotations_creators:
- machine-generated
language_creators:
- found
language:
- hi
license:
- mit
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- extended|bbc__hindi_news_classification
task_categories:
- text-classification
task_ids:
- natural-language-inference
pretty_name: BBC Hindi NLI Dataset
dataset_info:
features:
- name: premise
dtype: string
- name: hypothesis
dtype: string
- name: label
dtype:
class_label:
names:
'0': not-entailment
'1': entailment
- name: topic
dtype:
class_label:
names:
'0': india
'1': news
'2': international
'3': entertainment
'4': sport
'5': science
config_name: bbc hindi nli
splits:
- name: train
num_bytes: 2990080
num_examples: 15552
- name: validation
num_bytes: 496808
num_examples: 2580
- name: test
num_bytes: 494432
num_examples: 2592
download_size: 3815652
dataset_size: 3981320
---
# Dataset Card for BBC Hindi NLI Dataset
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Repository:** [GitHub](https://github.com/midas-research/hindi-nli-data)
- **Paper:** [Aclweb](https://www.aclweb.org/anthology/2020.aacl-main.71)
- **Point of Contact:** [GitHub](https://github.com/midas-research/hindi-nli-data)
### Dataset Summary
- Dataset for Natural Language Inference in the Hindi language. The BBC Hindi Dataset consists of textual-entailment pairs.
- Each row of the dataset is made up of 4 columns - Premise, Hypothesis, Label and Topic.
- The Premise and Hypothesis are written in Hindi, while the entailment label is in English.
- The entailment label is of 2 types - entailment and not-entailment.
- The dataset can be used to train models for Natural Language Inference tasks in Hindi.
[More Information Needed]
### Supported Tasks and Leaderboards
- Natural Language Inference for Hindi
### Languages
Dataset is in Hindi
## Dataset Structure
- Data is structured in TSV format.
- Train and test data are in separate files.
### Data Instances
An example of 'train' looks as follows.
```
{'hypothesis': 'यह खबर की सूचना है|', 'label': 'entailed', 'premise': 'गोपनीयता की नीति', 'topic': '1'}
```
### Data Fields
- Each row contains 4 columns - Premise, Hypothesis, Label and Topic.
### Data Splits
- Train : 15553
- Valid : 2581
- Test : 2593
## Dataset Creation
- We employ a recasting technique from Poliak et al. (2018a,b) to convert publicly available BBC Hindi news text classification datasets in Hindi and pose them as TE problems
- In this recasting process, we build template hypotheses for each class in the label taxonomy
- Then, we pair the original annotated sentence with each of the template hypotheses to create TE samples.
- For more information on the recasting process, refer to paper "https://www.aclweb.org/anthology/2020.aacl-main.71"
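The steps above can be sketched as follows (the English template strings are invented placeholders, not the paper's actual Hindi templates):

```python
# Invented class templates; the real recasting uses Hindi hypotheses.
templates = {
    "sport": "This text is about sport.",
    "science": "This text is about science.",
}

def recast(sentence, gold_class, templates):
    # Pair the sentence with every class template: the pair whose class
    # matches the gold label is entailed, all other pairs are not.
    return [
        {
            "premise": sentence,
            "hypothesis": hypothesis,
            "label": "entailment" if cls == gold_class else "not-entailment",
        }
        for cls, hypothesis in templates.items()
    ]

pairs = recast("The team won the final match.", "sport", templates)
print([p["label"] for p in pairs])  # -> ['entailment', 'not-entailment']
```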
### Source Data
Source Dataset for the recasting process is the BBC Hindi Headlines Dataset(https://github.com/NirantK/hindi2vec/releases/tag/bbc-hindi-v0.1)
#### Initial Data Collection and Normalization
- The BBC Hindi News Classification Dataset contains 4,335 Hindi news headlines tagged across 14 categories: India, Pakistan, news, international, entertainment, sport, science, China, learning english, social, southasia, business, institutional, multimedia
- We processed this dataset to combine two sets of relevant but low-prevalence classes.
- Namely, we merged the samples from Pakistan, China, international, and southasia as one class called international.
- Likewise, we also merged samples from news, business, social, learning english, and institutional as news.
- Lastly, we also removed the class multimedia because there were very few samples.
#### Who are the source language producers?
Please refer to this paper: "https://www.aclweb.org/anthology/2020.aacl-main.71"
### Annotations
#### Annotation process
Annotation process has been described in Dataset Creation Section.
#### Who are the annotators?
Annotation is done automatically.
### Personal and Sensitive Information
No Personal and Sensitive Information is mentioned in the Datasets.
## Considerations for Using the Data
Please refer to this paper: https://www.aclweb.org/anthology/2020.aacl-main.71
### Discussion of Biases
Please refer to this paper: https://www.aclweb.org/anthology/2020.aacl-main.71
### Other Known Limitations
No other known limitations
## Additional Information
Please refer to this link: https://github.com/midas-research/hindi-nli-data
### Dataset Curators
As written in the repo https://github.com/avinsit123/hindi-nli-data:
- This corpus can be used freely for research purposes.
- The paper listed below provide details of the creation and use of the corpus. If you use the corpus, then please cite the paper.
- If interested in commercial use of the corpus, send email to midas@iiitd.ac.in.
- If you use the corpus in a product or application, then please credit the authors and Multimodal Digital Media Analysis Lab - Indraprastha Institute of Information Technology, New Delhi appropriately. Also, if you send us an email, we will be thrilled to know about how you have used the corpus.
- Multimodal Digital Media Analysis Lab - Indraprastha Institute of Information Technology, New Delhi, India disclaims any responsibility for the use of the corpus and does not provide technical support. However, the contact listed above will be happy to respond to queries and clarifications.
- Rather than redistributing the corpus, please direct interested parties to this page
- Please feel free to send us an email:
- with feedback regarding the corpus.
- with information on how you have used the corpus.
- if interested in having us analyze your data for natural language inference.
- if interested in a collaborative research project.
### Licensing Information
Copyright (C) 2019 Multimodal Digital Media Analysis Lab - Indraprastha Institute of Information Technology, New Delhi (MIDAS, IIIT-Delhi).
Please contact the authors for any information on the dataset.
### Citation Information
```
@inproceedings{uppal-etal-2020-two,
title = "Two-Step Classification using Recasted Data for Low Resource Settings",
author = "Uppal, Shagun and
Gupta, Vivek and
Swaminathan, Avinash and
Zhang, Haimin and
Mahata, Debanjan and
Gosangi, Rakesh and
Shah, Rajiv Ratn and
Stent, Amanda",
booktitle = "Proceedings of the 1st Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 10th International Joint Conference on Natural Language Processing",
month = dec,
year = "2020",
address = "Suzhou, China",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.aacl-main.71",
pages = "706--719",
abstract = "An NLP model{'}s ability to reason should be independent of language. Previous works utilize Natural Language Inference (NLI) to understand the reasoning ability of models, mostly focusing on high resource languages like English. To address scarcity of data in low-resource languages such as Hindi, we use data recasting to create NLI datasets for four existing text classification datasets. Through experiments, we show that our recasted dataset is devoid of statistical irregularities and spurious patterns. We further study the consistency in predictions of the textual entailment models and propose a consistency regulariser to remove pairwise-inconsistencies in predictions. We propose a novel two-step classification method which uses textual-entailment predictions for classification task. We further improve the performance by using a joint-objective for classification and textual entailment. We therefore highlight the benefits of data recasting and improvements on classification performance using our approach with supporting experimental results.",
}
```
### Contributions
Thanks to [@avinsit123](https://github.com/avinsit123) for adding this dataset. | 9,123 | [
[
-0.0223541259765625,
-0.050689697265625,
-0.00994110107421875,
0.032318115234375,
-0.01953125,
0.011505126953125,
-0.030303955078125,
-0.0303497314453125,
0.0229034423828125,
0.016998291015625,
-0.035491943359375,
-0.038360595703125,
-0.049713134765625,
0.03... |
covid_tweets_japanese | 2023-01-25T14:28:47.000Z | [
"task_categories:text-classification",
"task_ids:fact-checking",
"annotations_creators:crowdsourced",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:ja",
"license:cc-by-nd-4.0",
"region:us"
] | null | 53,640 Japanese tweets with annotation if a tweet is related to COVID-19 or not. The annotation is by majority decision by 5 - 10 crowd workers. Target tweets include "COVID" or "コロナ". The period of the tweets is from around January 2020 to around June 2020. The original tweets are not contained. Please use Twitter API to get them, for example. | No paper about this dataset is published yet. Please cite this dataset as "鈴木 優: COVID-19 日本語 Twitter データセット (http://www.db.info.gifu-u.ac.jp/covid-19-twitter-dataset/)" | 1 | 102 | 2022-03-02T23:29:22 | ---
annotations_creators:
- crowdsourced
language_creators:
- found
language:
- ja
license:
- cc-by-nd-4.0
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- fact-checking
pretty_name: COVID-19 日本語Twitterデータセット (COVID-19 Japanese Twitter Dataset)
dataset_info:
features:
- name: tweet_id
dtype: string
- name: assessment_option_id
dtype:
class_label:
names:
'0': '63'
'1': '64'
'2': '65'
'3': '66'
'4': '67'
'5': '68'
splits:
- name: train
num_bytes: 1662833
num_examples: 53639
download_size: 406005
dataset_size: 1662833
---
# Dataset Card for COVID-19 日本語Twitterデータセット (COVID-19 Japanese Twitter Dataset)
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [COVID-19 日本語Twitterデータセット homepage](http://www.db.info.gifu-u.ac.jp/data/Data_5f02db873363f976fce930d1)
- **Repository:** [N/A]
- **Paper:** [N/A]
- **Leaderboard:** [N/A]
- **Point of Contact:** Check the homepage.
### Dataset Summary
53,640 Japanese tweets annotated with whether each tweet is related to COVID-19 or not. Each annotation is a majority decision by 5 - 10 crowd workers. Target tweets include "COVID" or "コロナ". The period of the tweets is from around January 2020 to around June 2020. The original tweets are not contained; please use the Twitter API, for example, to retrieve them.
### Supported Tasks and Leaderboards
Text classification: whether the tweet is related to COVID-19, and whether it is fact or opinion.
### Languages
The text that can be retrieved using the IDs in this dataset is Japanese, posted on Twitter.
## Dataset Structure
### Data Instances
A CSV file in which the 1st column is the Twitter ID and the 2nd column is the assessment option ID.
### Data Fields
- `tweet_id`: Twitter ID.
- `assessment_option_id`: The selection result. It has the following meanings:
- 63: a general fact: generally published information, such as news.
- 64: a personal fact: personal news. For example, a person heard that the next-door neighbor, XX, has been infected with COVID-19, which has not appeared in the news.
- 65: an opinion/feeling
- 66: difficult to determine if it is related to COVID-19 (the tweet is definitely not "67: unrelated", but 63, 64, or 65 cannot be determined)
- 67: unrelated
- 68: it is a fact, but difficult to determine whether it is a general fact, a personal fact, or an impression (it may be irrelevant to COVID-19 since it is indistinguishable between 63 - 65 and 67).
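A minimal helper for working with these codes (the function and label strings below are our own, not part of the dataset; note also that the Hugging Face `ClassLabel` feature stores the codes '63'-'68' as class indices 0-5):

```python
# Our own readable labels for the raw assessment option codes listed above.
ASSESSMENT_LABELS = {
    63: "general fact",
    64: "personal fact",
    65: "opinion/feeling",
    66: "COVID-related, but hard to classify",
    67: "unrelated",
    68: "fact of unclear type",
}

def describe(assessment_option_id):
    return ASSESSMENT_LABELS.get(assessment_option_id, "unknown code")

print(describe(65))  # -> opinion/feeling
```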
### Data Splits
No articles have been published for this dataset, and it appears that the author of the dataset is willing to publish an article (it is not certain that the splitting information will be included). Therefore, at this time, information on data splits is not provided.
## Dataset Creation
### Curation Rationale
[More Information Needed] because the paper is not yet published.
### Source Data
#### Initial Data Collection and Normalization
53,640 Japanese tweets with annotation if a tweet is related to COVID-19 or not. Target tweets include "COVID" or "コロナ". The period of the tweets is from around January 2020 to around June 2020.
#### Who are the source language producers?
The language producers are users of Twitter.
### Annotations
#### Annotation process
The annotation is by majority decision by 5 - 10 crowd workers.
#### Who are the annotators?
Crowd workers.
### Personal and Sensitive Information
The dataset does not contain the original tweets.
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
The dataset is hosted by Suzuki Laboratory, Gifu University, Japan.
### Licensing Information
CC-BY-ND 4.0
### Citation Information
A related paper has not yet been published.
The author shows how to cite as「鈴木 優: COVID-19 日本語 Twitter データセット ( http://www.db.info.gifu-u.ac.jp/data/Data_5f02db873363f976fce930d1 ) 」.
### Contributions
Thanks to [@forest1988](https://github.com/forest1988) for adding this dataset. | 5,242 | [
[
-0.0202484130859375,
-0.060150146484375,
0.002117156982421875,
0.0225982666015625,
-0.03228759765625,
0.017242431640625,
-0.0176239013671875,
-0.0421142578125,
0.049163818359375,
0.00562286376953125,
-0.06829833984375,
-0.05633544921875,
-0.039764404296875,
... |
dyk | 2023-01-25T14:29:39.000Z | [
"task_categories:question-answering",
"task_ids:open-domain-qa",
"annotations_creators:expert-generated",
"language_creators:other",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:pl",
"license:bsd-3-clause",
"region:us"
] | null | The Did You Know (pol. Czy wiesz?) dataset consists of human-annotated question-answer pairs. The task is to predict if the answer is correct. We chose the negatives which have the largest token overlap with a question. | @inproceedings{marcinczuk2013open,
title={Open dataset for development of Polish Question Answering systems},
author={Marcinczuk, Michal and Ptak, Marcin and Radziszewski, Adam and Piasecki, Maciej},
booktitle={Proceedings of the 6th Language & Technology Conference: Human Language Technologies as a Challenge for Computer Science and Linguistics, Wydawnictwo Poznanskie, Fundacja Uniwersytetu im. Adama Mickiewicza},
year={2013}
} | 0 | 102 | 2022-03-02T23:29:22 | ---
annotations_creators:
- expert-generated
language_creators:
- other
language:
- pl
license:
- bsd-3-clause
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- question-answering
task_ids:
- open-domain-qa
pretty_name: dyk
dataset_info:
features:
- name: q_id
dtype: string
- name: question
dtype: string
- name: answer
dtype: string
- name: target
dtype:
class_label:
names:
'0': '0'
'1': '1'
splits:
- name: train
num_bytes: 1388690
num_examples: 4154
- name: test
num_bytes: 353643
num_examples: 1029
download_size: 685462
dataset_size: 1742333
---
# Dataset Card for dyk
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
http://nlp.pwr.wroc.pl/en/tools-and-resources/resources/czy-wiesz-question-answering-dataset
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
The Did You Know (pol. Czy wiesz?) dataset consists of human-annotated question-answer pairs. The task is to predict if the answer is correct. We chose the negatives which have the largest token overlap with a question.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
Polish
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
- q_id: question id
- question: question sentence
- answer: answer sentence
- target: 1 if the answer is correct, 0 otherwise. Note that the test split doesn't have target values so -1 is used instead
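The `target` convention above (with `-1` marking unlabeled test rows) can be sketched in plain Python. The records below are hypothetical illustrations, not actual dataset rows:

```python
# Minimal sketch: keep only rows that carry a real label, following the
# convention that target == -1 marks unlabeled test examples.
# These records are made up for illustration.
records = [
    {"q_id": "q1", "question": "…?", "answer": "…", "target": 1},
    {"q_id": "q2", "question": "…?", "answer": "…", "target": 0},
    {"q_id": "q3", "question": "…?", "answer": "…", "target": -1},
]

labeled = [r for r in records if r["target"] != -1]
print(len(labeled))  # 2
```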
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
CC BY-SA 3.0
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@abecadel](https://github.com/abecadel) for adding this dataset. | 3,552 | [
[
-0.0389404296875,
-0.0599365234375,
0.0156707763671875,
0.013824462890625,
-0.00669097900390625,
0.01215362548828125,
-0.023895263671875,
-0.0267181396484375,
0.0362548828125,
0.045501708984375,
-0.07525634765625,
-0.07196044921875,
-0.042205810546875,
0.016... |
event2Mind | 2023-04-05T10:06:10.000Z | [
"task_categories:text2text-generation",
"annotations_creators:crowdsourced",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"license:unknown",
"common-sense-inference",
"arxiv:1805.06939",
"region:us"
] | null | In Event2Mind, we explore the task of understanding stereotypical intents and reactions to events. Through crowdsourcing, we create a large corpus with 25,000 events and free-form descriptions of their intents and reactions, both of the event's subject and (potentially implied) other participants. | @inproceedings{event2Mind,
title={Event2Mind: Commonsense Inference on Events, Intents, and Reactions},
  author={Hannah Rashkin and Maarten Sap and Emily Allaway and Noah A. Smith and Yejin Choi},
year={2018}
} | 0 | 102 | 2022-03-02T23:29:22 | ---
annotations_creators:
- crowdsourced
language:
- en
language_creators:
- found
license:
- unknown
multilinguality:
- monolingual
pretty_name: Event2Mind
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- text2text-generation
task_ids: []
paperswithcode_id: event2mind
tags:
- common-sense-inference
dataset_info:
features:
- name: Source
dtype: string
- name: Event
dtype: string
- name: Xintent
dtype: string
- name: Xemotion
dtype: string
- name: Otheremotion
dtype: string
- name: Xsent
dtype: string
- name: Osent
dtype: string
splits:
- name: test
num_bytes: 649273
num_examples: 5221
- name: train
num_bytes: 5916384
num_examples: 46472
- name: validation
num_bytes: 672365
num_examples: 5401
download_size: 1300770
dataset_size: 7238022
---
# Dataset Card for "event2Mind"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://uwnlp.github.io/event2mind/](https://uwnlp.github.io/event2mind/)
- **Repository:** https://github.com/uwnlp/event2mind
- **Paper:** [Event2Mind: Commonsense Inference on Events, Intents, and Reactions](https://arxiv.org/abs/1805.06939)
- **Point of Contact:** [Hannah Rashkin](mailto:hrashkin@cs.washington.edu), [Maarten Sap](mailto:msap@cs.washington.edu)
- **Size of downloaded dataset files:** 1.30 MB
- **Size of the generated dataset:** 7.24 MB
- **Total amount of disk used:** 8.54 MB
### Dataset Summary
In Event2Mind, we explore the task of understanding stereotypical intents and reactions to events. Through crowdsourcing, we create a large corpus with 25,000 events and free-form descriptions of their intents and reactions, both of the event's subject and (potentially implied) other participants.
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Structure
### Data Instances
#### default
- **Size of downloaded dataset files:** 1.30 MB
- **Size of the generated dataset:** 7.24 MB
- **Total amount of disk used:** 8.54 MB
An example of 'validation' looks as follows.
```
{
"Event": "It shrinks in the wash",
"Osent": "1",
"Otheremotion": "[\"upset\", \"angry\"]",
"Source": "it_events",
"Xemotion": "[\"none\"]",
"Xintent": "[\"none\"]",
"Xsent": ""
}
```
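Note that the annotation fields in the record above (`Xintent`, `Xemotion`, `Otheremotion`) are string-encoded lists. A minimal sketch of decoding them with the standard library, assuming this stringified-list encoding holds across rows:

```python
import ast

# Decode the stringified list fields of one record; the values are
# copied from the validation example shown above.
record = {
    "Event": "It shrinks in the wash",
    "Otheremotion": '["upset", "angry"]',
    "Xemotion": '["none"]',
    "Xintent": '["none"]',
}

decoded = {k: ast.literal_eval(v) for k, v in record.items() if k != "Event"}
print(decoded["Otheremotion"])  # ['upset', 'angry']
```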
### Data Fields
The data fields are the same among all splits.
#### default
- `Source`: a `string` feature.
- `Event`: a `string` feature.
- `Xintent`: a `string` feature.
- `Xemotion`: a `string` feature.
- `Otheremotion`: a `string` feature.
- `Xsent`: a `string` feature.
- `Osent`: a `string` feature.
### Data Splits
| name |train|validation|test|
|-------|----:|---------:|---:|
|default|46472| 5401|5221|
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@inproceedings{rashkin-etal-2018-event2mind,
title = "{E}vent2{M}ind: Commonsense Inference on Events, Intents, and Reactions",
author = "Rashkin, Hannah and
Sap, Maarten and
Allaway, Emily and
Smith, Noah A. and
Choi, Yejin",
booktitle = "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2018",
address = "Melbourne, Australia",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/P18-1043",
doi = "10.18653/v1/P18-1043",
pages = "463--473",
}
```
### Contributions
Thanks to [@thomwolf](https://github.com/thomwolf), [@patrickvonplaten](https://github.com/patrickvonplaten), [@lewtun](https://github.com/lewtun) for adding this dataset. | 6,820 | [
[
-0.034820556640625,
-0.03497314453125,
0.022216796875,
0.00791168212890625,
-0.0174102783203125,
-0.01306915283203125,
-0.037689208984375,
-0.038055419921875,
0.03765869140625,
0.0179901123046875,
-0.061798095703125,
-0.059967041015625,
-0.03961181640625,
-0... |
imdb_urdu_reviews | 2023-01-25T14:32:49.000Z | [
"task_categories:text-classification",
"task_ids:sentiment-classification",
"annotations_creators:found",
"language_creators:machine-generated",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:ur",
"license:odbl",
"region:us"
] | null | Large Movie translated Urdu Reviews Dataset.
This is a dataset for binary sentiment classification containing substantially more data than previous
benchmark datasets. We provide a set of 40,000 highly polar movie reviews for training, and 10,000 for testing.
To increase the availability of sentiment analysis datasets for a low-resource language like Urdu,
we opted to use the already available IMDB dataset, which we translated using Google Translate.
This is a binary classification dataset with two classes, positive and negative.
The reason for using this dataset is the high polarity of each class.
It contains 50k samples equally divided between the two classes.
  author = {Maas, Andrew L. and Daly, Raymond E. and Pham, Peter T. and Huang, Dan and Ng, Andrew Y...},
title = {Learning Word Vectors for Sentiment Analysis},
month = {June},
year = {2011},
address = {Portland, Oregon, USA},
publisher = {Association for Computational Linguistics},
pages = {142--150},
url = {http://www.aclweb.org/anthology/P11-1015}
} | 0 | 102 | 2022-03-02T23:29:22 | ---
annotations_creators:
- found
language_creators:
- machine-generated
language:
- ur
license:
- odbl
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- sentiment-classification
pretty_name: ImDB Urdu Reviews
dataset_info:
features:
- name: sentence
dtype: string
- name: sentiment
dtype:
class_label:
names:
'0': positive
'1': negative
splits:
- name: train
num_bytes: 114670811
num_examples: 50000
download_size: 31510992
dataset_size: 114670811
---
# Dataset Card for ImDB Urdu Reviews
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [Github](https://github.com/mirfan899/Urdu)
- **Repository:** [Github](https://github.com/mirfan899/Urdu)
- **Paper:** [Aclweb](http://www.aclweb.org/anthology/P11-1015)
- **Leaderboard:**
- **Point of Contact:** [Ikram Ali](https://github.com/akkefa)
### Dataset Summary
[More Information Needed]
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
- sentence: The movie review which was translated into Urdu.
- sentiment: The sentiment exhibited in the review, either positive or negative.
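A small sketch of the class-label mapping declared in the YAML header above — note that, unlike many sentiment datasets, `0` maps to `positive` and `1` to `negative` here:

```python
# Integer-to-name mapping as declared in this card's dataset_info
# ('0': positive, '1': negative). The helper name is illustrative.
names = ["positive", "negative"]

def int2str(label: int) -> str:
    """Map a class integer to its label name."""
    return names[label]

print(int2str(0))  # positive
print(int2str(1))  # negative
```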
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@chaitnayabasava](https://github.com/chaitnayabasava) for adding this dataset. | 3,343 | [
[
-0.04058837890625,
-0.01055908203125,
-0.0042724609375,
0.0239715576171875,
-0.0347900390625,
0.0185699462890625,
-0.005290985107421875,
-0.0172119140625,
0.047943115234375,
0.057708740234375,
-0.06396484375,
-0.061309814453125,
-0.057891845703125,
0.0331115... |
CAiRE/ASCEND | 2022-10-24T12:43:58.000Z | [
"task_categories:automatic-speech-recognition",
"annotations_creators:expert-generated",
"language_creators:crowdsourced",
"multilinguality:multilingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"language:zh",
"license:cc-by-sa-4.0",
"speech-recognition",
"code-s... | CAiRE | ASCEND (A Spontaneous Chinese-English Dataset) introduces a high-quality resource of spontaneous multi-turn conversational dialogue Chinese-English code-switching corpus collected in Hong Kong. ASCEND consists of 10.62 hours of spontaneous speech with a total of ~12.3K utterances. The corpus is split into 3 sets: training, validation, and test with a ratio of 8:1:1 while maintaining a balanced gender proportion on each set. | @inproceedings{lovenia2021ascend,
title = {ASCEND: A Spontaneous Chinese-English Dataset for Code-switching in Multi-turn Conversation},
author = {Lovenia, Holy and Cahyawijaya, Samuel and Winata, Genta Indra and Xu, Peng and Yan, Xu and Liu, Zihan and Frieske, Rita and Yu, Tiezheng and Dai, Wenliang and Barezi, Elham J and others},
  booktitle = {Proceedings of the International Conference on Language Resources and Evaluation, {LREC} 2022, 20-25 June 2022, Palais du Pharo, Marseille, France},
publisher = {European Language Resources Association},
year = {2022},
pages = {}
} | 10 | 102 | 2022-03-02T23:29:22 | ---
annotations_creators:
- expert-generated
language_creators:
- crowdsourced
language:
- en
- zh
license:
- cc-by-sa-4.0
multilinguality:
- multilingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- automatic-speech-recognition
task_ids: []
pretty_name: 'ASCEND: A Spontaneous Chinese-English Dataset for Code-switching in
Multi-turn Conversation'
tags:
- speech-recognition
- code-switching
---
# Dataset Card for ASCEND
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Usage](#usage)
- [Dataset Structure](#dataset-structure)
- [Data Splits](#data-instances)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** [Needs More Information]
- **Repository:** [Needs More Information]
- **Paper:** https://arxiv.org/abs/2112.06223
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [Needs More Information]
### Dataset Summary
ASCEND (A Spontaneous Chinese-English Dataset) introduces a high-quality resource of spontaneous multi-turn conversational dialogue Chinese-English code-switching corpus collected in Hong Kong. ASCEND consists of 10.62 hours of spontaneous speech with a total of ~12.3K utterances. The corpus is split into 3 sets: training, validation, and test with a ratio of 8:1:1 while maintaining a balanced gender proportion on each set.
### Supported Tasks and Leaderboards
Code-switching
### Languages
Chinese and English
## Usage
To obtain the full dataset (complete with train, validation, and test set), simply run this:
```
import datasets
dataset = datasets.load_dataset("CAiRE/ASCEND")
```
## Dataset Structure
A typical data point comprises the path to the audio file, the loaded audio array, and its transcription. Additional fields include datapoint id, duration, language, speaker id, session id, and topic.
```
{
'id': '00644',
'path': '.cache/huggingface/datasets/downloads/extracted/f0b33b5266cd9452ee310eef3577cf7adb7f29aa54dbff74b9a8ee406a55d614/waves/ses2_spk3_L13101_189.900_5.490.wav',
'audio': {
'path': '.cache/huggingface/datasets/downloads/extracted/f0b33b5266cd9452ee310eef3577cf7adb7f29aa54dbff74b9a8ee406a55d614/waves/ses2_spk3_L13101_189.900_5.490.wav',
'array': array([-6.1035156e-05, -1.8310547e-04, 3.0517578e-05, ...,
0.0000000e+00, -3.0517578e-05, 0.0000000e+00
], dtype = float32),
'sampling_rate': 16000
},
'transcription': '因为你不可能邀你的female friends去说走我们去play basketball',
'duration': 5.489999771118164,
'language': 'mixed',
'original_speaker_id': 3,
'session_id': 2,
'topic': 'sports'
}
```
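As a sanity check, the `duration` field in the record above is consistent with the raw audio length divided by `sampling_rate`. A minimal sketch using the example's own values:

```python
# Relate the duration field to the audio array length:
# number of samples ≈ duration * sampling_rate.
# Values mirror the example record above (5.49 s at 16 kHz).
sampling_rate = 16000
duration = 5.489999771118164

num_samples = round(duration * sampling_rate)
print(num_samples)                   # 87840
print(num_samples / sampling_rate)   # 5.49
```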
### Data Splits
Number of utterances: 9,869 train, 1,130 validation, and 1,315 test.
## Additional Information
For comprehensive explanations, please check [our paper](https://arxiv.org/pdf/2112.06223.pdf).
### Licensing Information
Creative Common Attribution Share-Alike 4.0 International (CC-BY-SA 4.0)
### Citation Information
If you use our dataset, please cite us:
```
@inproceedings{lovenia2022ascend,
title={ASCEND: A Spontaneous Chinese-English Dataset for Code-switching in Multi-turn Conversation},
author={Lovenia, Holy and Cahyawijaya, Samuel and Winata, Genta Indra and Xu, Peng and Yan, Xu and Liu, Zihan and Frieske, Rita and Yu, Tiezheng and Dai, Wenliang and Barezi, Elham J and others},
booktitle={Proceedings of the 13th Language Resources and Evaluation Conference (LREC)},
  year={2022}
}
```
[
-0.00800323486328125,
-0.037872314453125,
-0.00921630859375,
0.051116943359375,
-0.01381683349609375,
0.0083770751953125,
-0.03607177734375,
-0.0298614501953125,
0.0283966064453125,
0.019439697265625,
-0.057403564453125,
-0.051849365234375,
-0.03289794921875,
... |
anuragshas/ur_opus100_processed | 2022-01-30T16:03:56.000Z | [
"region:us"
] | anuragshas | null | null | 1 | 102 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[
-0.021392822265625,
-0.01494598388671875,
0.05718994140625,
0.028839111328125,
-0.0350341796875,
0.046539306640625,
0.052490234375,
0.00507354736328125,
0.051361083984375,
0.01702880859375,
-0.052093505859375,
-0.01494598388671875,
-0.06036376953125,
0.03790... |
fvillena/spanish_diagnostics | 2021-05-30T02:32:52.000Z | [
"region:us"
] | fvillena | null | null | 0 | 102 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[
-0.021392822265625,
-0.01494598388671875,
0.05718994140625,
0.028839111328125,
-0.0350341796875,
0.046539306640625,
0.052490234375,
0.00507354736328125,
0.051361083984375,
0.01702880859375,
-0.052093505859375,
-0.01494598388671875,
-0.06036376953125,
0.03790... |
roskoN/dailydialog | 2021-08-06T14:14:18.000Z | [
"region:us"
] | roskoN | The DailyDialog dataset as provided in the original form with a bit of preprocessing applied to enable fast prototyping.
The splits are as in the original distribution. | @inproceedings{li2017dailydialog,
title={DailyDialog: A Manually Labelled Multi-turn Dialogue Dataset},
author={Li, Yanran and Su, Hui and Shen, Xiaoyu and Li, Wenjie and Cao, Ziqiang and Niu, Shuzi},
booktitle={Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 1: Long Papers)},
pages={986--995},
year={2017}
} | 0 | 102 | 2022-03-02T23:29:22 | # DailyDialog: A Manually Labelled Multi-turn Dialogue Dataset
The data is based on the original distribution ([link to original website](http://yanran.li/dailydialog)) ([link to paper](https://aclanthology.org/I17-1099/)).
It is created as a convenience to enable faster prototyping.
# License
DailyDialog dataset is licensed under CC BY-NC-SA 4.0.
If you remix, transform, or build upon the material, you must distribute your contributions under the same license as the original. Any third party annotation is welcome. Note the dataset may not be adopted for commercial use. | 581 | [
[
-0.0202178955078125,
-0.047027587890625,
0.0433349609375,
0.027374267578125,
-0.014007568359375,
0.006504058837890625,
0.002483367919921875,
-0.0272979736328125,
0.01457977294921875,
0.06121826171875,
-0.0828857421875,
-0.0452880859375,
-0.01219940185546875,
... |
zoheb/sketch-scene | 2022-10-30T10:07:48.000Z | [
"task_categories:text-to-image",
"language_creators:machine-generated",
"multilinguality:monolingual",
"size_categories:n<10K",
"source_datasets:FS-COCO",
"language:en",
"license:cc-by-nc-sa-4.0",
"region:us"
] | zoheb | null | null | 13 | 102 | 2022-10-29T18:15:58 | ---
license: cc-by-nc-sa-4.0
language:
- en
language_creators:
- machine-generated
multilinguality:
- monolingual
pretty_name: 'Sketch Scene Descriptions'
size_categories:
- n<10K
source_datasets:
- FS-COCO
tags: []
task_categories:
- text-to-image
task_ids: []
---
# Dataset Card for Sketch Scene Descriptions
_Dataset used to train [Sketch Scene text to image model]()_
We advance sketch research to scenes with the first dataset of freehand scene sketches, FS-COCO. With practical applications in mind, we collect sketches that convey scene content well but can be sketched within a few minutes by a person with any sketching skills. Our dataset comprises around 10,000 freehand scene vector sketches with per-point space-time information by 100 non-expert individuals, offering both object- and scene-level abstraction. Each sketch is augmented with its text description.
For each row, the dataset contains `image` and `text` keys. `image` is a varying size PIL jpeg, and `text` is the accompanying text caption. Only a train split is provided.
## Citation
If you use this dataset, please cite it as:
```
@inproceedings{fscoco,
  title={FS-COCO: Towards Understanding of Freehand Sketches of Common Objects in Context},
author={Chowdhury, Pinaki Nath and Sain, Aneeshan and Bhunia, Ayan Kumar and Xiang, Tao and Gryaditskaya, Yulia and Song, Yi-Zhe},
booktitle={ECCV},
year={2022}
}
``` | 1,412 | [
[
-0.01666259765625,
-0.0244598388671875,
0.0130615234375,
0.0272979736328125,
-0.051727294921875,
-0.0123138427734375,
0.016021728515625,
-0.03631591796875,
0.03570556640625,
0.042816162109375,
-0.0478515625,
-0.0295867919921875,
-0.0232391357421875,
-0.00846... |
bigbio/ask_a_patient | 2022-12-22T15:43:18.000Z | [
"multilinguality:monolingual",
"language:en",
"license:cc-by-4.0",
"region:us"
] | bigbio | The AskAPatient dataset contains medical concepts written on social media mapped to how they are formally written in medical ontologies (SNOMED-CT and AMT). | @inproceedings{limsopatham-collier-2016-normalising,
title = "Normalising Medical Concepts in Social Media Texts by Learning Semantic Representation",
author = "Limsopatham, Nut and
Collier, Nigel",
booktitle = "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = aug,
year = "2016",
address = "Berlin, Germany",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/P16-1096",
doi = "10.18653/v1/P16-1096",
pages = "1014--1023",
} | 1 | 102 | 2022-11-13T18:26:06 |
---
language:
- en
bigbio_language:
- English
license: cc-by-4.0
multilinguality: monolingual
bigbio_license_shortname: CC_BY_4p0
pretty_name: AskAPatient
homepage: https://zenodo.org/record/55013
bigbio_pubmed: True
bigbio_public: True
bigbio_tasks:
- NAMED_ENTITY_RECOGNITION
- NAMED_ENTITY_DISAMBIGUATION
---
# Dataset Card for AskAPatient
## Dataset Description
- **Homepage:** https://zenodo.org/record/55013
- **Pubmed:** True
- **Public:** True
- **Tasks:** NER,NED
The AskAPatient dataset contains medical concepts written on social media mapped to how they are formally written in medical ontologies (SNOMED-CT and AMT).
## Citation Information
```
@inproceedings{limsopatham-collier-2016-normalising,
title = "Normalising Medical Concepts in Social Media Texts by Learning Semantic Representation",
author = "Limsopatham, Nut and
Collier, Nigel",
booktitle = "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = aug,
year = "2016",
address = "Berlin, Germany",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/P16-1096",
doi = "10.18653/v1/P16-1096",
pages = "1014--1023",
}
```
| 1,263 | [
[
-0.0128326416015625,
-0.045074462890625,
0.016845703125,
0.007152557373046875,
-0.0303955078125,
-0.022216796875,
-0.029510498046875,
-0.027252197265625,
0.05517578125,
0.0276641845703125,
-0.037322998046875,
-0.06964111328125,
-0.057861328125,
0.02619934082... |
bigbio/chemdner | 2022-12-22T15:44:21.000Z | [
"multilinguality:monolingual",
"language:en",
"license:unknown",
"region:us"
] | bigbio | We present the CHEMDNER corpus, a collection of 10,000 PubMed abstracts that
contain a total of 84,355 chemical entity mentions labeled manually by expert
chemistry literature curators, following annotation guidelines specifically
defined for this task. The abstracts of the CHEMDNER corpus were selected to be
representative for all major chemical disciplines. Each of the chemical entity
mentions was manually labeled according to its structure-associated chemical
entity mention (SACEM) class: abbreviation, family, formula, identifier,
multiple, systematic and trivial. | @article{Krallinger2015,
title = {The CHEMDNER corpus of chemicals and drugs and its annotation principles},
author = {
Krallinger, Martin and Rabal, Obdulia and Leitner, Florian and Vazquez,
Miguel and Salgado, David and Lu, Zhiyong and Leaman, Robert and Lu, Yanan
and Ji, Donghong and Lowe, Daniel M. and Sayle, Roger A. and
Batista-Navarro, Riza Theresa and Rak, Rafal and Huber, Torsten and
Rockt{\"a}schel, Tim and Matos, S{\'e}rgio and Campos, David and Tang,
Buzhou and Xu, Hua and Munkhdalai, Tsendsuren and Ryu, Keun Ho and Ramanan,
S. V. and Nathan, Senthil and {\v{Z}}itnik, Slavko and Bajec, Marko and
Weber, Lutz and Irmer, Matthias and Akhondi, Saber A. and Kors, Jan A. and
Xu, Shuo and An, Xin and Sikdar, Utpal Kumar and Ekbal, Asif and Yoshioka,
Masaharu and Dieb, Thaer M. and Choi, Miji and Verspoor, Karin and Khabsa,
Madian and Giles, C. Lee and Liu, Hongfang and Ravikumar, Komandur
Elayavilli and Lamurias, Andre and Couto, Francisco M. and Dai, Hong-Jie
and Tsai, Richard Tzong-Han and Ata, Caglar and Can, Tolga and Usi{\'e},
Anabel and Alves, Rui and Segura-Bedmar, Isabel and Mart{\'i}nez, Paloma
and Oyarzabal, Julen and Valencia, Alfonso
},
year = 2015,
month = {Jan},
day = 19,
journal = {Journal of Cheminformatics},
volume = 7,
number = 1,
pages = {S2},
doi = {10.1186/1758-2946-7-S1-S2},
issn = {1758-2946},
url = {https://doi.org/10.1186/1758-2946-7-S1-S2},
abstract = {
The automatic extraction of chemical information from text requires the
recognition of chemical entity mentions as one of its key steps. When
developing supervised named entity recognition (NER) systems, the
availability of a large, manually annotated text corpus is desirable.
Furthermore, large corpora permit the robust evaluation and comparison of
different approaches that detect chemicals in documents. We present the
CHEMDNER corpus, a collection of 10,000 PubMed abstracts that contain a
total of 84,355 chemical entity mentions labeled manually by expert
chemistry literature curators, following annotation guidelines specifically
defined for this task. The abstracts of the CHEMDNER corpus were selected
to be representative for all major chemical disciplines. Each of the
chemical entity mentions was manually labeled according to its
structure-associated chemical entity mention (SACEM) class: abbreviation,
family, formula, identifier, multiple, systematic and trivial. The
difficulty and consistency of tagging chemicals in text was measured using
an agreement study between annotators, obtaining a percentage agreement of
91. For a subset of the CHEMDNER corpus (the test set of 3,000 abstracts)
we provide not only the Gold Standard manual annotations, but also mentions
automatically detected by the 26 teams that participated in the BioCreative
IV CHEMDNER chemical mention recognition task. In addition, we release the
CHEMDNER silver standard corpus of automatically extracted mentions from
17,000 randomly selected PubMed abstracts. A version of the CHEMDNER corpus
in the BioC format has been generated as well. We propose a standard for
required minimum information about entity annotations for the construction
of domain specific corpora on chemical and drug entities. The CHEMDNER
corpus and annotation guidelines are available at:
    http://www.biocreative.org/resources/biocreative-iv/chemdner-corpus/
}
} | 1 | 102 | 2022-11-13T22:07:46 |
---
language:
- en
bigbio_language:
- English
license: unknown
multilinguality: monolingual
bigbio_license_shortname: UNKNOWN
pretty_name: CHEMDNER
homepage: https://biocreative.bioinformatics.udel.edu/resources/biocreative-iv/chemdner-corpus/
bigbio_pubmed: True
bigbio_public: True
bigbio_tasks:
- NAMED_ENTITY_RECOGNITION
- TEXT_CLASSIFICATION
---
# Dataset Card for CHEMDNER
## Dataset Description
- **Homepage:** https://biocreative.bioinformatics.udel.edu/resources/biocreative-iv/chemdner-corpus/
- **Pubmed:** True
- **Public:** True
- **Tasks:** NER,TXTCLASS
We present the CHEMDNER corpus, a collection of 10,000 PubMed abstracts that
contain a total of 84,355 chemical entity mentions labeled manually by expert
chemistry literature curators, following annotation guidelines specifically
defined for this task. The abstracts of the CHEMDNER corpus were selected to be
representative for all major chemical disciplines. Each of the chemical entity
mentions was manually labeled according to its structure-associated chemical
entity mention (SACEM) class: abbreviation, family, formula, identifier,
multiple, systematic and trivial.
## Citation Information
```
@article{Krallinger2015,
title = {The CHEMDNER corpus of chemicals and drugs and its annotation principles},
author = {
Krallinger, Martin and Rabal, Obdulia and Leitner, Florian and Vazquez,
Miguel and Salgado, David and Lu, Zhiyong and Leaman, Robert and Lu, Yanan
and Ji, Donghong and Lowe, Daniel M. and Sayle, Roger A. and
Batista-Navarro, Riza Theresa and Rak, Rafal and Huber, Torsten and
Rockt{\"a}schel, Tim and Matos, S{\'e}rgio and Campos, David and Tang,
Buzhou and Xu, Hua and Munkhdalai, Tsendsuren and Ryu, Keun Ho and Ramanan,
S. V. and Nathan, Senthil and {\v{Z}}itnik, Slavko and Bajec, Marko and
Weber, Lutz and Irmer, Matthias and Akhondi, Saber A. and Kors, Jan A. and
Xu, Shuo and An, Xin and Sikdar, Utpal Kumar and Ekbal, Asif and Yoshioka,
Masaharu and Dieb, Thaer M. and Choi, Miji and Verspoor, Karin and Khabsa,
Madian and Giles, C. Lee and Liu, Hongfang and Ravikumar, Komandur
Elayavilli and Lamurias, Andre and Couto, Francisco M. and Dai, Hong-Jie
and Tsai, Richard Tzong-Han and Ata, Caglar and Can, Tolga and Usi{\'e},
Anabel and Alves, Rui and Segura-Bedmar, Isabel and Mart{\'i}nez, Paloma
and Oyarzabal, Julen and Valencia, Alfonso
},
year = 2015,
month = {Jan},
day = 19,
journal = {Journal of Cheminformatics},
volume = 7,
number = 1,
pages = {S2},
doi = {10.1186/1758-2946-7-S1-S2},
issn = {1758-2946},
url = {https://doi.org/10.1186/1758-2946-7-S1-S2},
abstract = {
The automatic extraction of chemical information from text requires the
recognition of chemical entity mentions as one of its key steps. When
developing supervised named entity recognition (NER) systems, the
availability of a large, manually annotated text corpus is desirable.
Furthermore, large corpora permit the robust evaluation and comparison of
different approaches that detect chemicals in documents. We present the
CHEMDNER corpus, a collection of 10,000 PubMed abstracts that contain a
total of 84,355 chemical entity mentions labeled manually by expert
chemistry literature curators, following annotation guidelines specifically
defined for this task. The abstracts of the CHEMDNER corpus were selected
to be representative for all major chemical disciplines. Each of the
chemical entity mentions was manually labeled according to its
structure-associated chemical entity mention (SACEM) class: abbreviation,
family, formula, identifier, multiple, systematic and trivial. The
difficulty and consistency of tagging chemicals in text was measured using
an agreement study between annotators, obtaining a percentage agreement of
91. For a subset of the CHEMDNER corpus (the test set of 3,000 abstracts)
we provide not only the Gold Standard manual annotations, but also mentions
automatically detected by the 26 teams that participated in the BioCreative
IV CHEMDNER chemical mention recognition task. In addition, we release the
CHEMDNER silver standard corpus of automatically extracted mentions from
17,000 randomly selected PubMed abstracts. A version of the CHEMDNER corpus
in the BioC format has been generated as well. We propose a standard for
required minimum information about entity annotations for the construction
of domain specific corpora on chemical and drug entities. The CHEMDNER
corpus and annotation guidelines are available at:
http://www.biocreative.org/resources/biocreative-iv/chemdner-corpus/
}
}
```
| 4,809 | [
[
-0.041351318359375,
-0.021575927734375,
0.0677490234375,
-0.003589630126953125,
-0.004329681396484375,
0.002559661865234375,
-0.013824462890625,
-0.033294677734375,
0.02154541015625,
0.0137481689453125,
-0.02764892578125,
-0.061370849609375,
-0.04217529296875,
... |
OllieStanley/humaneval-mbpp-codegen-qa | 2023-03-15T15:13:27.000Z | [
"region:us"
] | OllieStanley | null | null | 1 | 102 | 2023-02-26T14:59:10 | ---
dataset_info:
features:
- name: INSTRUCTION
dtype: string
- name: RESPONSE
dtype: string
- name: SOURCE
dtype: string
splits:
- name: train
num_bytes: 225572
num_examples: 591
download_size: 89931
dataset_size: 225572
---
# Dataset Card for "humaneval-mbpp-codegen-qa"
This dataset contains prompt-reply (question-answer) pairs where the prompt is to create a Python function which satisfies the functionality described in a specified docstring. The responses are then the generated functions. | 534 | [
[
-0.04119873046875,
-0.032867431640625,
-0.0020694732666015625,
0.0090789794921875,
-0.0254058837890625,
-0.00859832763671875,
0.0146942138671875,
0.01039886474609375,
0.029876708984375,
0.03515625,
-0.06005859375,
-0.0219268798828125,
0.0006852149963378906,
... |
Francesco/hand-gestures-jps7z | 2023-03-30T09:18:38.000Z | [
"task_categories:object-detection",
"annotations_creators:crowdsourced",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:en",
"license:cc",
"rf100",
"region:us"
] | Francesco | null | null | 0 | 102 | 2023-03-30T09:18:16 | ---
dataset_info:
features:
- name: image_id
dtype: int64
- name: image
dtype: image
- name: width
dtype: int32
- name: height
dtype: int32
- name: objects
sequence:
- name: id
dtype: int64
- name: area
dtype: int64
- name: bbox
sequence: float32
length: 4
- name: category
dtype:
class_label:
names:
'0': hand-gestures
'1': 0
'2': 1
'3': 2
'4': 3
'5': 4
'6': 5
'7': 6
'8': 7
'9': 8
'10': 9
'11': 10
'12': 11
'13': 12
'14': 13
annotations_creators:
- crowdsourced
language_creators:
- found
language:
- en
license:
- cc
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- object-detection
task_ids: []
pretty_name: hand-gestures-jps7z
tags:
- rf100
---
# Dataset Card for hand-gestures-jps7z
**The original COCO dataset is stored at `dataset.tar.gz`**
## Dataset Description
- **Homepage:** https://universe.roboflow.com/object-detection/hand-gestures-jps7z
- **Point of Contact:** francesco.zuppichini@gmail.com
### Dataset Summary
hand-gestures-jps7z
### Supported Tasks and Leaderboards
- `object-detection`: The dataset can be used to train a model for Object Detection.
### Languages
English
## Dataset Structure
### Data Instances
A data point comprises an image and its object annotations.
```
{
'image_id': 15,
'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=640x640 at 0x2373B065C18>,
'width': 964043,
'height': 640,
'objects': {
'id': [114, 115, 116, 117],
'area': [3796, 1596, 152768, 81002],
'bbox': [
[302.0, 109.0, 73.0, 52.0],
[810.0, 100.0, 57.0, 28.0],
[160.0, 31.0, 248.0, 616.0],
[741.0, 68.0, 202.0, 401.0]
],
'category': [4, 4, 0, 0]
}
}
```
### Data Fields
- `image_id`: the image id
- `image`: `PIL.Image.Image` object containing the image. Note that when accessing the image column: `dataset[0]["image"]` the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. Thus it is important to first query the sample index before the `"image"` column, *i.e.* `dataset[0]["image"]` should **always** be preferred over `dataset["image"][0]`
- `width`: the image width
- `height`: the image height
- `objects`: a dictionary containing bounding box metadata for the objects present on the image
- `id`: the annotation id
- `area`: the area of the bounding box
- `bbox`: the object's bounding box (in the [coco](https://albumentations.ai/docs/getting_started/bounding_boxes_augmentation/#coco) format)
- `category`: the object's category.
#### Who are the annotators?
Annotators are Roboflow users
## Additional Information
### Licensing Information
See original homepage https://universe.roboflow.com/object-detection/hand-gestures-jps7z
### Citation Information
```
@misc{ hand-gestures-jps7z,
title = { hand gestures jps7z Dataset },
type = { Open Source Dataset },
author = { Roboflow 100 },
howpublished = { \url{ https://universe.roboflow.com/object-detection/hand-gestures-jps7z } },
url = { https://universe.roboflow.com/object-detection/hand-gestures-jps7z },
journal = { Roboflow Universe },
publisher = { Roboflow },
year = { 2022 },
month = { nov },
note = { visited on 2023-03-29 },
}
```
### Contributions
Thanks to [@mariosasko](https://github.com/mariosasko) for adding this dataset. | 3,654 | [
[
-0.032989501953125,
-0.036376953125,
0.0140838623046875,
-0.0018033981323242188,
-0.037567138671875,
-0.01708984375,
-0.001277923583984375,
-0.0391845703125,
0.027679443359375,
0.030548095703125,
-0.045074462890625,
-0.074951171875,
-0.056610107421875,
0.014... |
rookshanks/gsm8k | 2023-06-21T22:55:22.000Z | [
"region:us"
] | rookshanks | null | null | 0 | 102 | 2023-06-21T22:53:41 | ---
dataset_info:
features:
- name: question
dtype: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 3566510.564699585
num_examples: 6725
- name: test
num_bytes: 713732
num_examples: 1319
- name: validation
num_bytes: 396691.4353004148
num_examples: 748
download_size: 2306142
dataset_size: 4676933.999999999
---
# Dataset Card for "gsm8k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 542 | [
[
-0.0452880859375,
0.006198883056640625,
0.02093505859375,
0.013275146484375,
-0.0228424072265625,
-0.0024967193603515625,
0.026580810546875,
-0.007671356201171875,
0.052520751953125,
0.03802490234375,
-0.055755615234375,
-0.05853271484375,
-0.046478271484375,
... |
eckendoerffer/justice_fr | 2023-09-30T05:38:31.000Z | [
"size_categories:100K<n<1M",
"language:fr",
"license:cc-by-sa-4.0",
"justice",
"law",
"legal",
"region:us"
] | eckendoerffer | null | null | 3 | 102 | 2023-06-26T01:50:11 | ---
license: cc-by-sa-4.0
language:
- fr
pretty_name: Law & decision from French justice system
tags:
- justice
- law
- legal
size_categories:
- 100K<n<1M
---
# Dataset Card for French Legal Dataset
## Dataset Description
The dataset contains a comprehensive collection of French legal books, codes, and appellate court decisions. It encompasses the following:
- 150,938 rows -> 140,000 articles of laws, decrees, and orders from the 78 French books and codes, covering all legal domains. The total number of pages is approximately 35,000.
- 191,741 rows -> 53,000 appellate court decisions spanning from 2013 to the present day. The dataset includes a wide range of cases and legal opinions. The total number of pages is approximately 150,000.
- 1,621 rows -> 1,621 definitions and abbreviations commonly used in legal texts, aiding in understanding and interpreting legal terminology.
Total: 344,300 rows
Line breaks are included in the legal texts. They are necessary to maintain the indentations and paragraphs mentioned in other laws and court decisions.
### Languages
The dataset is in French only.
## Dataset Structure
```
{
"instruction":"Code du travail > Partie l\u00e9gislative > Livre III : Les institutions repr\u00e9sentatives du personnel > Titre Ier : Comit\u00e9 social et \u00e9conomique > Chapitre IV : Composition, \u00e9lections et mandat > Section 1 : Composition > Article L2314-2 (Loi)",
"input":"Je souhaite l'Article L2314-2 du Code du travail",
"output":"Voici l'Article L2314-2 du Code du travail: Sous r\u00e9serve des dispositions applicables dans les entreprises de moins de trois cents salari\u00e9s, pr\u00e9vues \u00e0 l'article L. 2143-22, chaque organisation syndicale repr\u00e9sentative dans l'entreprise ou l'\u00e9tablissement peut d\u00e9signer un repr\u00e9sentant syndical au comit\u00e9. Il assiste aux s\u00e9ances avec voix consultative. Il est choisi parmi les membres du personnel de l'entreprise et doit remplir les conditions d'\u00e9ligibilit\u00e9 au comit\u00e9 social et \u00e9conomique fix\u00e9es \u00e0 l'article L. 2314-19."
},
```
### Data Fields
- `instruction`:
- French books and codes -> hierarchy from law text:
"Code pénal > Partie législative > Livre II : Des crimes et délits contre les personnes > Titre II : Des atteintes à la personne humaine > Chapitre Ier : Des atteintes à la vie de la personne > Section 2 : Des atteintes involontaires à la vie > Article 221-6"
- Court decisions -> location, chamber, decision number, decision date, part:
"Cour d'appel de Paris I5, Cour de cassation Chambre commerciale financière et économique, décision 18-13.763 du 14/04/2021, partie 1"
- `input`:
- French books and codes -> questions with multiple variations, such as: "What does Article XX of Code XX say?"
- Court decisions -> empty
- `output`:
- French books and codes -> laws text
- Court decisions -> decisions text
The text has been limited/split to approximately 820 words per row, which averages about 1,500 tokens on French text with the Falcon tokenizer. The goal is to stay below 2,048 tokens, with a margin of error.
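A minimal sketch of that word-count splitting (an illustration of the idea, not the exact preprocessing code used to build the dataset):

```python
def chunk_words(text, max_words=820):
    """Split text into consecutive chunks of at most max_words words."""
    words = text.split()
    return [" ".join(words[i:i + max_words]) for i in range(0, len(words), max_words)]

# 2,000 words split into rows of at most 820 words each.
chunks = chunk_words("mot " * 2000)
print([len(c.split()) for c in chunks])  # [820, 820, 360]
```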
## Dataset Creation
### Source Data
#### Initial Data Collection and Normalization
- All French codes (PDF): https://www.legifrance.gouv.fr/liste/code?etatTexte=VIGUEUR&etatTexte=VIGUEUR_DIFF
- Court decisions from JUDILIBRE API: https://piste.gouv.fr/index.php?option=com_apiportal&view=apitester&usage=api&apitab=tests&apiName=JUDILIBRE&apiId=b6d2f389-c3ec-4eb3-9075-bc24d0783781&managerId=2&type=rest&apiVersion=1.0.0&Itemid=265&swaggerVersion=2.0&lang=fr
#### Who are the source language producers?
Coming directly from the French justice system.
## Additional Information
### Licensing Information
The dataset is available under the Creative Commons Attribution-ShareAlike License
| 3,861 | [
[
-0.022247314453125,
-0.028106689453125,
0.032684326171875,
0.0276031494140625,
-0.0285491943359375,
-0.0210723876953125,
-0.0171966552734375,
0.0022735595703125,
0.01349639892578125,
0.061492919921875,
-0.0242462158203125,
-0.07183837890625,
-0.04327392578125,
... |
juanivazquez/jivb-id_card | 2023-06-28T01:50:05.000Z | [
"region:us"
] | juanivazquez | null | null | 0 | 102 | 2023-06-28T00:03:05 | ---
dataset_info:
features:
- name: image
dtype: image
- name: ground_truth
dtype: string
splits:
- name: train
num_bytes: 102797866.0
num_examples: 276
- name: test
num_bytes: 6349261.0
num_examples: 11
download_size: 108916611
dataset_size: 109147127.0
---
# Dataset Card for "jivb-id_card"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 465 | [
[
-0.050567626953125,
-0.0238189697265625,
0.00012922286987304688,
0.01479339599609375,
-0.0298614501953125,
0.0006399154663085938,
0.022735595703125,
-0.013702392578125,
0.06268310546875,
0.017303466796875,
-0.0467529296875,
-0.05267333984375,
-0.028961181640625,... |
pietrolesci/agnews | 2023-09-13T12:02:12.000Z | [
"task_categories:text-classification",
"size_categories:100K<n<1M",
"language:en",
"region:us"
] | pietrolesci | null | null | 0 | 102 | 2023-09-13T10:17:01 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
- config_name: embedding_all-MiniLM-L12-v2
data_files:
- split: train
path: embedding_all-MiniLM-L12-v2/train-*
- split: test
path: embedding_all-MiniLM-L12-v2/test-*
- config_name: embedding_all-mpnet-base-v2
data_files:
- split: train
path: embedding_all-mpnet-base-v2/train-*
- split: test
path: embedding_all-mpnet-base-v2/test-*
- config_name: embedding_multi-qa-mpnet-base-dot-v1
data_files:
- split: train
path: embedding_multi-qa-mpnet-base-dot-v1/train-*
- split: test
path: embedding_multi-qa-mpnet-base-dot-v1/test-*
dataset_info:
- config_name: default
features:
- name: text
dtype: string
- name: labels
dtype:
class_label:
names:
'0': World
'1': Sports
'2': Business
'3': Sci/Tech
- name: uid
dtype: int64
splits:
- name: train
num_bytes: 30777303
num_examples: 120000
- name: test
num_bytes: 1940274
num_examples: 7600
download_size: 20531429
dataset_size: 32717577
- config_name: embedding_all-MiniLM-L12-v2
features:
- name: uid
dtype: int64
- name: embedding_all-MiniLM-L12-v2
sequence: float32
splits:
- name: train
num_bytes: 185760000
num_examples: 120000
- name: test
num_bytes: 11764800
num_examples: 7600
download_size: 276467219
dataset_size: 197524800
- config_name: embedding_all-mpnet-base-v2
features:
- name: uid
dtype: int64
- name: embedding_all-mpnet-base-v2
sequence: float32
splits:
- name: train
num_bytes: 370080000
num_examples: 120000
- name: test
num_bytes: 23438400
num_examples: 7600
download_size: 472647323
dataset_size: 393518400
- config_name: embedding_multi-qa-mpnet-base-dot-v1
features:
- name: uid
dtype: int64
- name: embedding_multi-qa-mpnet-base-dot-v1
sequence: float32
splits:
- name: train
num_bytes: 370080000
num_examples: 120000
- name: test
num_bytes: 23438400
num_examples: 7600
download_size: 472640830
dataset_size: 393518400
task_categories:
- text-classification
language:
- en
size_categories:
- 100K<n<1M
---
This is the same dataset as [`ag_news`](https://huggingface.co/datasets/ag_news).
The only differences are
1. Addition of a unique identifier, `uid`
1. Addition of the indices, that is 3 columns with the embeddings of 3 different sentence-transformers
- `all-mpnet-base-v2`
- `multi-qa-mpnet-base-dot-v1`
- `all-MiniLM-L12-v2`
1. Renaming of the `label` column to `labels` for easier compatibility with the transformers library | 2,711 | [
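The embedding configs can be aligned with the text config through the shared `uid` column. A minimal pure-Python sketch of that join, with toy rows standing in for the actual splits:

```python
# Toy rows standing in for the text config and one of the embedding configs.
text_rows = [
    {"uid": 0, "text": "Wall St. rebounds", "labels": 2},
    {"uid": 1, "text": "Oil prices slip", "labels": 2},
]
emb_rows = [
    {"uid": 1, "embedding": [0.1, 0.2]},
    {"uid": 0, "embedding": [0.3, 0.4]},
]

# Build a uid -> embedding lookup, then attach each embedding to its text row.
lookup = {row["uid"]: row["embedding"] for row in emb_rows}
joined = [{**row, "embedding": lookup[row["uid"]]} for row in text_rows]
print(joined[0]["embedding"])  # [0.3, 0.4]
```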
[
-0.02032470703125,
-0.037322998046875,
0.02447509765625,
0.0161590576171875,
0.00899505615234375,
0.0050811767578125,
0.008453369140625,
-0.0089569091796875,
0.043853759765625,
0.023101806640625,
-0.045867919921875,
-0.03961181640625,
-0.0482177734375,
0.023... |
codys12/MergeLlama | 2023-10-09T21:43:13.000Z | [
"license:cc-by-4.0",
"region:us"
] | codys12 | null | null | 3 | 102 | 2023-09-29T19:03:11 | ---
license: cc-by-4.0
---
MergeLlama is a unique dataset that encapsulates real-world merge conflicts alongside their corresponding resolutions. Developed from the foundational dataset shared in "Anonymous. (2022). Data set for FSE 2022 Submission Program Merge Conflict Resolution via Neural Transformers", MergeLlama provides a comprehensive collection of conflict scenarios and how they were resolved. With potential multiple conflicts in a single entry followed by its respective resolution, this dataset serves as a rich resource for understanding merge conflicts and developing automated resolution strategies.
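A minimal sketch for splitting a conflicted hunk into its two sides, assuming entries use the conventional git conflict-marker format (`<<<<<<<` / `=======` / `>>>>>>>`); the exact field layout of the dataset may differ:

```python
def parse_conflicts(text):
    """Extract (ours, theirs) pairs from git-style conflict markers."""
    conflicts, ours, theirs, state = [], [], [], None
    for line in text.splitlines():
        if line.startswith("<<<<<<<"):
            ours, theirs, state = [], [], "ours"
        elif line.startswith("=======") and state == "ours":
            state = "theirs"
        elif line.startswith(">>>>>>>") and state == "theirs":
            conflicts.append(("\n".join(ours), "\n".join(theirs)))
            state = None
        elif state == "ours":
            ours.append(line)
        elif state == "theirs":
            theirs.append(line)
    return conflicts

sample = "<<<<<<< HEAD\nx = 1\n=======\nx = 2\n>>>>>>> branch\n"
print(parse_conflicts(sample))  # [('x = 1', 'x = 2')]
```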
For those using this dataset, please cite as follows:
"MergeLlama Dataset. (2023). Merge Conflicts Fused with Their Resolutions. Based on: Anonymous. (2022). Data set for FSE 2022 Submission Program Merge Conflict Resolution via Neural Transformers (1.0) [Data set]. Zenodo. https://doi.org/10.5281/zenodo.6366908".
| 938 | [
[
-0.050933837890625,
-0.0210418701171875,
0.0178375244140625,
0.026763916015625,
0.0020923614501953125,
0.0309906005859375,
-0.0178680419921875,
-0.04925537109375,
0.035552978515625,
0.05059814453125,
-0.072021484375,
-0.0222930908203125,
-0.042266845703125,
... |
portafolio/llamadas-celular-es-03 | 2023-10-23T19:54:50.000Z | [
"region:us"
] | portafolio | null | null | 0 | 102 | 2023-10-23T19:37:53 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
result-kand2-sdxl-wuerst-karlo/31425212 | 2023-10-24T14:14:19.000Z | [
"region:us"
] | result-kand2-sdxl-wuerst-karlo | null | null | 0 | 102 | 2023-10-24T14:14:18 | ---
dataset_info:
features:
- name: result
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 203
num_examples: 10
download_size: 1410
dataset_size: 203
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "31425212"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 455 | [
[
-0.050323486328125,
-0.0018682479858398438,
0.018310546875,
0.03143310546875,
-0.0144500732421875,
-0.01666259765625,
0.02703857421875,
-0.01036834716796875,
0.054962158203125,
0.039886474609375,
-0.059600830078125,
-0.037811279296875,
-0.036224365234375,
-0... |
result-kand2-sdxl-wuerst-karlo/a17bd262 | 2023-10-25T02:44:54.000Z | [
"region:us"
] | result-kand2-sdxl-wuerst-karlo | null | null | 0 | 102 | 2023-10-25T02:44:54 | ---
dataset_info:
features:
- name: result
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 201
num_examples: 10
download_size: 1374
dataset_size: 201
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "a17bd262"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 455 | [
[
-0.047149658203125,
-0.01218414306640625,
0.01355743408203125,
0.0200653076171875,
-0.01629638671875,
0.0018978118896484375,
0.03076171875,
-0.01114654541015625,
0.06146240234375,
0.0308380126953125,
-0.05340576171875,
-0.050750732421875,
-0.045440673828125,
... |
dialog_re | 2022-11-18T19:58:15.000Z | [
"task_categories:other",
"task_categories:text-generation",
"task_categories:fill-mask",
"task_ids:dialogue-modeling",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:en"... | null | DialogRE is the first human-annotated dialogue based relation extraction (RE) dataset aiming
to support the prediction of relation(s) between two arguments that appear in a dialogue.
The dataset annotates all occurrences of 36 possible relation types that exist between pairs
of arguments in the 1,788 dialogues originating from the complete transcripts of Friends. | @inproceedings{yu2020dialogue,
title={Dialogue-Based Relation Extraction},
author={Yu, Dian and Sun, Kai and Cardie, Claire and Yu, Dong},
booktitle={Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics},
year={2020},
url={https://arxiv.org/abs/2004.08056v1}
} | 7 | 101 | 2022-03-02T23:29:22 | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- en
license:
- other
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- other
- text-generation
- fill-mask
task_ids:
- dialogue-modeling
paperswithcode_id: dialogre
pretty_name: DialogRE
tags:
- relation-extraction
dataset_info:
features:
- name: dialog
sequence: string
- name: relation_data
sequence:
- name: x
dtype: string
- name: y
dtype: string
- name: x_type
dtype: string
- name: y_type
dtype: string
- name: r
sequence: string
- name: rid
sequence: int32
- name: t
sequence: string
config_name: dialog_re
splits:
- name: train
num_bytes: 1520940
num_examples: 1073
- name: test
num_bytes: 472306
num_examples: 357
- name: validation
num_bytes: 490580
num_examples: 358
download_size: 3816234
dataset_size: 2483826
---
# Dataset Card for [DialogRE]
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [DialogRE Homepage](https://dataset.org/dialogre/)
- **Repository:** [DialogRE Repository](https://github.com/nlpdata/dialogre)
- **Paper:** [Arxiv](https://arxiv.org/abs/2004.08056v1)
- **Point of Contact:** [dialogre@dataset.org](mailto:dialogre@dataset.org)
### Dataset Summary
The DialogRE dataset is the first human-annotated dialogue-based relation extraction (RE) dataset, aiming to support the prediction of relation(s) between two arguments that appear in a dialogue. DialogRE can also act as a platform for studying cross-sentence RE, as most facts span multiple sentences. Specifically, the dataset annotates all occurrences of 36 possible relation types that exist between pairs of arguments in the 1,788 dialogues originating from the complete transcripts of Friends (in English).
### Supported Tasks and Leaderboards
* `other-other-relation-extraction`: The dataset can be used to train a model for Relation Extraction, which consists of the prediction of relation between two arguments that appear in a dialogue. Success on this task is typically measured by achieving a *high* [F1 Score](https://huggingface.co/metrics/f1).
### Languages
The dialogues in the dataset are in English, originating from the transcripts of Friends. The associated BCP-47 code is `en`.
## Dataset Structure
### Data Instances
A typical data point consists of a dialogue between speakers as a list of sentences. This is followed by the annotations of the relations between the entities in the dialog.
An example from the DialogRE train set looks as follows:
```
{'dialog': ["Speaker 1: It's been an hour and not one of my classmates has shown up! I tell you, when I actually die some people are gonna get seriously haunted!",
'Speaker 2: There you go! Someone came!',
"Speaker 1: Ok, ok! I'm gonna go hide! Oh, this is so exciting, my first mourner!",
'Speaker 3: Hi, glad you could come.',
'Speaker 2: Please, come in.',
"Speaker 4: Hi, you're Chandler Bing, right? I'm Tom Gordon, I was in your class.",
'Speaker 2: Oh yes, yes... let me... take your coat.',
"Speaker 4: Thanks... uh... I'm so sorry about Ross, it's...",
'Speaker 2: At least he died doing what he loved... watching blimps.',
'Speaker 1: Who is he?',
'Speaker 2: Some guy, Tom Gordon.',
"Speaker 1: I don't remember him, but then again I touched so many lives.",
'Speaker 3: So, did you know Ross well?',
"Speaker 4: Oh, actually I barely knew him. Yeah, I came because I heard Chandler's news. D'you know if he's seeing anyone?",
'Speaker 3: Yes, he is. Me.',
'Speaker 4: What? You... You... Oh! Can I ask you a personal question? Ho-how do you shave your beard so close?',
"Speaker 2: Ok Tommy, that's enough mourning for you! Here we go, bye bye!!",
'Speaker 4: Hey, listen. Call me.',
'Speaker 2: Ok!'],
'relation_data': {'r': [['per:alternate_names'],
['per:alumni'],
['per:alternate_names'],
['per:alumni', 'per:positive_impression'],
['per:alternate_names'],
['unanswerable']],
'rid': [[30], [4], [30], [4, 1], [30], [37]],
't': [[''], [''], [''], ['', 'call me'], [''], ['']],
'x': ['Speaker 2',
'Speaker 2',
'Speaker 4',
'Speaker 4',
'Speaker 4',
'Speaker 1'],
'x_type': ['PER', 'PER', 'PER', 'PER', 'PER', 'PER'],
'y': ['Chandler Bing',
'Speaker 4',
'Tom Gordon',
'Speaker 2',
'Tommy',
'Tommy'],
'y_type': ['PER', 'PER', 'PER', 'PER', 'PER', 'PER']}}
```
### Data Fields
* `dialog`
* List of dialog spoken between the speakers
* List of annotations per dialog per argument
* `x` : First entity
* `y` : Second entity
* `x_type` : Type of the first entity
* `y_type`: Type of the second entity
* `r` : List of relations
* `rid`: List of relation IDs
* `t`: List of relation Trigger words
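Because the annotation fields are parallel sequences, relations can be flattened into (head, relation, tail) triples. A small sketch using a trimmed version of the example above:

```python
# Trimmed relation_data from the example above; the fields are parallel lists.
relation_data = {
    "x": ["Speaker 2", "Speaker 4"],
    "y": ["Chandler Bing", "Speaker 2"],
    "r": [["per:alternate_names"], ["per:alumni", "per:positive_impression"]],
}

# Zip the parallel fields and expand multi-relation pairs into one triple each.
triples = [
    (head, rel, tail)
    for head, tail, rels in zip(relation_data["x"], relation_data["y"], relation_data["r"])
    for rel in rels
]
print(triples[0])  # ('Speaker 2', 'per:alternate_names', 'Chandler Bing')
```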
### Data Splits
The data is split into a training, validation and test set as per the original dataset split.
| | train | validation | test |
| --------------------- |-------:|------------:|------:|
| Input dialog examples | 1073 | 358 | 357 |
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
The DialogRE dataset is intended for non-commercial research purposes only
### Citation Information
```
@inproceedings{yu2020dialogue,
title={Dialogue-Based Relation Extraction},
author={Yu, Dian and Sun, Kai and Cardie, Claire and Yu, Dong},
booktitle={Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics},
year={2020},
url={https://arxiv.org/abs/2004.08056v1}
}
```
### Contributions
Thanks to [@vineeths96](https://github.com/vineeths96) for adding this dataset. | 7,445 | [
[
-0.0433349609375,
-0.06353759765625,
0.01439666748046875,
0.00415802001953125,
-0.01033782958984375,
-0.00659942626953125,
-0.0147705078125,
-0.018035888671875,
0.025726318359375,
0.049652099609375,
-0.07672119140625,
-0.052734375,
-0.023651123046875,
0.0103... |
leey4n/KR3 | 2023-07-19T08:35:54.000Z | [
"task_categories:text-classification",
"task_ids:sentiment-classification",
"multilinguality:monolingual",
"size_categories:100K<n<1m",
"language:ko",
"license:cc-by-nc-sa-4.0",
"region:us"
] | leey4n | null | null | 2 | 101 | 2022-03-02T23:29:22 | ---
annotations_creators: []
language_creators: []
language:
- ko
license:
- cc-by-nc-sa-4.0
multilinguality:
- monolingual
pretty_name: KR3
size_categories:
- 100K<n<1m
source_datasets: []
task_categories:
- text-classification
task_ids:
- sentiment-classification
---
### KR3: Korean Restaurant Reviews with Ratings
Korean sentiment classification dataset
- Size: 460K(+180K)
- Language: Korean-centric
### ⚠️ Caution with `Rating` Column
0 stands for negative review, 1 stands for positive review, and 2 stands for ambiguous review.
**Note that rating 2 is not intended to be used directly for supervised learning(classification).** This data is included for additional pre-training purpose or other usage.
In other words, this dataset is basically a **binary** sentiment classification task where labels are 0 and 1.
### 🔍 See More
See all the codes for crawling/preprocessing the dataset and experiments with KR3 in [GitHub Repo](https://github.com/Wittgensteinian/kr3).
See Kaggle dataset in [Kaggle Dataset](https://www.kaggle.com/ninetyninenewton/kr3-korean-restaurant-reviews-with-ratings).
### Usage
```python
from datasets import load_dataset
kr3 = load_dataset("leey4n/KR3", name='kr3', split='train')
kr3 = kr3.remove_columns(['__index_level_0__']) # Original file didn't include this column. Suspect it's a hugging face issue.
```
```python
# drop reviews with ambiguous label
kr3_binary = kr3.filter(lambda example: example['Rating'] != 2)
```
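The filtering step above can be sanity-checked offline with a tiny stand-in (plain Python, no download needed); the Korean reviews below are invented examples, not rows from KR3:

```python
# Stand-in rows mimicking KR3's columns (0 = negative, 1 = positive, 2 = ambiguous)
rows = [
    {"Review": "다시는 안 갈 듯", "Rating": 0},   # invented negative review
    {"Review": "정말 맛있었어요", "Rating": 1},   # invented positive review
    {"Review": "나쁘진 않은데...", "Rating": 2},  # invented ambiguous review
]

# Binary sentiment classification uses only labels 0 and 1; rating 2 is
# reserved for pre-training or other purposes, as explained above.
binary_rows = [r for r in rows if r["Rating"] != 2]
print([r["Rating"] for r in binary_rows])  # [0, 1]
```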
### License
**CC BY-NC-SA 4.0**
### Legal Issues
We concluded that the **non-commercial usage and release of KR3 fall within the scope of fair use (공정 이용)** stated in the Korean Copyright Act (저작권법). We further clarify that we **did not agree to the terms of service** of any website that might prohibit web crawling. In other words, the crawling we did was performed without logging in to any website. Despite all of this, feel free to contact any of the contributors if you notice any legal issues.
### Contributors & Acknowledgement
(Alphabetical order)
[Dongin Jung](https://github.com/dongin1009)
[Hyunwoo Kwak](https://github.com/Kwak-Hyun-woo)
[Kaeun Lee](https://github.com/Kaeun-Lee)
[Yejoon Lee](https://github.com/wittgensteinian)
This work was done as part of DIYA 4기 (the 4th cohort of DIYA). Compute resources needed for the work were provided by [DIYA](https://blog.diyaml.com) and surromind.ai.
| 2,374 | [
[
-0.033294677734375,
-0.029876708984375,
0.038360595703125,
0.040313720703125,
-0.03228759765625,
-0.00440216064453125,
-0.0128326416015625,
-0.0277862548828125,
0.0181732177734375,
0.0289154052734375,
-0.0276947021484375,
-0.06683349609375,
-0.037689208984375,
... |
imvladikon/hebrew_speech_coursera | 2023-05-05T09:05:00.000Z | [
"task_categories:automatic-speech-recognition",
"size_categories:1K<n<10K",
"language:he",
"region:us"
] | imvladikon | null | null | 5 | 101 | 2022-03-02T23:29:22 | ---
task_categories:
- automatic-speech-recognition
language:
- he
dataset_info:
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: sentence
dtype: string
splits:
- name: train
num_bytes: 6670706136.352
num_examples: 20306
- name: validation
num_bytes: 1648062261.28
num_examples: 5076
download_size: 7726933856
dataset_size: 8318768397.632
size_categories:
- 1K<n<10K
---
# Dataset Card for Dataset Name
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
```json
{'audio': {'path': '/root/.cache/huggingface/datasets/downloads/extracted/89efd3a0fa3ead3f0b8e432e8796697a738d4561b24ff91f4fb2cc25d86e9fb0/train/ccef55189b7843d49110228cb0a71bfa115.wav',
'array': array([-0.01217651, -0.04351807, -0.06278992, ..., -0.00018311,
-0.00146484, -0.00349426]),
'sampling_rate': 16000},
'sentence': 'מצד אחד ובתנועה הציונית הצעירה'}
```
### Data Fields
[More Information Needed]
### Data Splits
| | train | validation |
| ---- | ----- | ---------- |
| number of samples | 20306 | 5076 |
| hours | 28.88 | 7.23 |
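As a rough consistency check on the table above (a minimal sketch with stand-in values — the per-clip frame count is an assumption, not read from the real dataset):

```python
sampling_rate = 16000          # from the dataset's audio feature
train_samples = 20306          # from the splits table
train_hours = 28.88

# Average clip duration implied by the table
avg_seconds = train_hours * 3600 / train_samples
print(round(avg_seconds, 2))   # ~5.12 seconds per clip

# For a single decoded example, duration is len(array) / sampling_rate;
# 48000 here is an assumed stand-in frame count, not a real clip.
num_audio_frames = 48000
print(num_audio_frames / sampling_rate)  # 3.0
```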
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
```
@misc{imvladikon2022hebrew_speech_coursera,
author = {Gurevich, Vladimir},
title = {Hebrew Speech Recognition Dataset: Coursera},
year = {2022},
  howpublished = {\url{https://huggingface.co/datasets/imvladikon/hebrew_speech_coursera}},
}
```
### Contributions
[More Information Needed] | 2,667 | [
[
-0.034881591796875,
-0.038421630859375,
-0.00589752197265625,
0.0250091552734375,
-0.028472900390625,
-0.006435394287109375,
-0.032958984375,
-0.0209808349609375,
0.047607421875,
0.026947021484375,
-0.059906005859375,
-0.082763671875,
-0.049957275390625,
-0.... |
ShapeNet/ShapeNetCore | 2023-09-20T15:05:48.000Z | [
"language:en",
"license:other",
"3D shapes",
"region:us"
] | ShapeNet | null | null | 15 | 101 | 2022-08-26T09:34:57 | ---
language:
- en
pretty_name: ShapeNetCore
tags:
- 3D shapes
license: other
extra_gated_heading: Acknowledge license to accept the repository
extra_gated_prompt: >-
To request access to this ShapeNet repo, you will need to provide your **full name** (please provide both your first and last name), the name of your **advisor or the principal investigator (PI)** of your lab (in the PI/Advisor) fields, and the **school or company** that you are affiliated with (the **Affiliation** field).
After requesting access to this ShapeNet repo, you will be considered for access approval.
After access approval, you (the "Researcher") receive permission to use the ShapeNet database (the "Database") at Princeton University and Stanford University. In exchange for being able to join the ShapeNet community and receive such permission, Researcher hereby agrees to the following terms and conditions:
Researcher shall use the Database only for non-commercial research and educational purposes.
Princeton University and Stanford University make no representations or warranties regarding the Database, including but not limited to warranties of non-infringement or fitness for a particular purpose.
Researcher accepts full responsibility for his or her use of the Database and shall defend and indemnify Princeton University and Stanford University, including their employees, Trustees, officers and agents, against any and all claims arising from Researcher's use of the Database, including but not limited to Researcher's use of any copies of copyrighted 3D models that he or she may create from the Database.
Researcher may provide research associates and colleagues with access to the Database provided that they first agree to be bound by these terms and conditions.
Princeton University and Stanford University reserve the right to terminate Researcher's access to the Database at any time.
If Researcher is employed by a for-profit, commercial entity, Researcher's employer shall also be bound by these terms and conditions, and Researcher hereby represents that he or she is fully authorized to enter into this agreement on behalf of such employer.
The law of the State of New Jersey shall apply to all disputes under this agreement.
For access to the data, please fill in your **full name** (both first and last name), the name of your **advisor or principal investigator (PI)**, and the name of the **school or company** you are affliated with.
Please actually fill out the fields (DO NOT put the word "Advisor" for PI/Advisor and the word "School" for "Affiliation", please specify the name of your advisor and the name of your school).
extra_gated_fields:
Name: text
PI/Advisor: text
Affiliation: text
Purpose: text
Country: text
I agree to use this dataset for non-commercial use ONLY: checkbox
---
This repository contains ShapeNetCore (v2), a subset of [ShapeNet](https://shapenet.org).
ShapeNetCore is a densely annotated subset of ShapeNet covering 55 common object categories with ~51,300 unique 3D models. Each model in ShapeNetCore is linked to an appropriate synset in [WordNet 3.0](https://wordnet.princeton.edu/).
Please see [DATA.md](DATA.md) for details about the data.
If you use ShapeNet data, you agree to abide by the [ShapeNet terms of use](https://shapenet.org/terms). You are only allowed to redistribute the data to your research associates and colleagues provided that they first agree to be bound by these terms and conditions.
If you use this data, please cite the main ShapeNet technical report.
```
@techreport{shapenet2015,
title = {{ShapeNet: An Information-Rich 3D Model Repository}},
author = {Chang, Angel X. and Funkhouser, Thomas and Guibas, Leonidas and Hanrahan, Pat and Huang, Qixing and Li, Zimo and Savarese, Silvio and Savva, Manolis and Song, Shuran and Su, Hao and Xiao, Jianxiong and Yi, Li and Yu, Fisher},
number = {arXiv:1512.03012 [cs.GR]},
institution = {Stanford University --- Princeton University --- Toyota Technological Institute at Chicago},
year = {2015}
}
```
For more information, please contact us at shapenetwebmaster@gmail.com and indicate ShapeNetCore v2 in the title of your email.
| 4,236 | [
[
-0.00669097900390625,
-0.0155181884765625,
0.0290374755859375,
0.0054168701171875,
-0.01319122314453125,
-0.03179931640625,
0.0111236572265625,
-0.046234130859375,
0.025238037109375,
0.0433349609375,
-0.03399658203125,
-0.047271728515625,
-0.032928466796875,
... |
hpprc/jsick | 2023-04-11T06:18:09.000Z | [
"task_categories:sentence-similarity",
"task_categories:text-classification",
"task_ids:natural-language-inference",
"task_ids:semantic-similarity-scoring",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:translation",
"size_categories:10K<n<100K",
"so... | hpprc | Japanese Sentences Involving Compositional Knowledge (JSICK) Dataset.
JSICK is the Japanese NLI and STS dataset by manually translating the English dataset SICK (Marelli et al., 2014) into Japanese.
We hope that our dataset will be useful in research for realizing more advanced models that are capable of appropriately performing multilingual compositional inference.
(from official website) | @article{yanaka-mineshima-2022-compositional,
title = "Compositional Evaluation on {J}apanese Textual Entailment and Similarity",
author = "Yanaka, Hitomi and Mineshima, Koji",
journal = "Transactions of the Association for Computational Linguistics",
volume = "10",
year = "2022",
address = "Cambridge, MA",
publisher = "MIT Press",
url = "https://aclanthology.org/2022.tacl-1.73",
doi = "10.1162/tacl_a_00518",
pages = "1266--1284",
} | 4 | 101 | 2023-04-08T16:02:06 | ---
annotations_creators:
- expert-generated
language:
- ja
- en
language_creators:
- expert-generated
license:
- cc-by-sa-4.0
multilinguality:
- translation
pretty_name: JSICK
size_categories:
- 10K<n<100K
source_datasets:
- extended|sick
tags:
- semantic-textual-similarity
- sts
task_categories:
- sentence-similarity
- text-classification
task_ids:
- natural-language-inference
- semantic-similarity-scoring
---
# Dataset Card for JSICK
## Table of Contents
- [Dataset Card for JSICK](#dataset-card-for-jsick)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Japanese Sentences Involving Compositional Knowledge (JSICK) Dataset.](#japanese-sentences-involving-compositional-knowledge-jsick-dataset)
- [JSICK-stress Test set](#jsick-stress-test-set)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [base](#base)
- [stress](#stress)
- [Data Fields](#data-fields)
- [base](#base-1)
- [stress](#stress-1)
- [Data Splits](#data-splits)
- [Annotations](#annotations)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://github.com/verypluming/JSICK
- **Repository:** https://github.com/verypluming/JSICK
- **Paper:** https://direct.mit.edu/tacl/article/doi/10.1162/tacl_a_00518/113850/Compositional-Evaluation-on-Japanese-Textual
- **Paper:** https://www.jstage.jst.go.jp/article/pjsai/JSAI2021/0/JSAI2021_4J3GS6f02/_pdf/-char/ja
### Dataset Summary
From official [GitHub](https://github.com/verypluming/JSICK):
#### Japanese Sentences Involving Compositional Knowledge (JSICK) Dataset.
JSICK is the Japanese NLI and STS dataset by manually translating the English dataset [SICK (Marelli et al., 2014)](https://aclanthology.org/L14-1314/) into Japanese.
We hope that our dataset will be useful in research for realizing more advanced models that are capable of appropriately performing multilingual compositional inference.
#### JSICK-stress Test set
The JSICK-stress test set is a dataset to investigate whether models capture word order and case particles in Japanese.
The JSICK-stress test set is provided by transforming syntactic structures of sentence pairs in JSICK, where we analyze whether models are attentive to word order and case particles to predict entailment labels and similarity scores.
The JSICK test set contains 1666, 797, and 1006 sentence pairs (A, B) whose premise sentences A (the column `sentence_A_Ja_origin`) include the basic word order involving
ga-o (nominative-accusative), ga-ni (nominative-dative), and ga-de (nominative-instrumental/locative) relations, respectively.
We provide the JSICK-stress test set by transforming the syntactic structures of these pairs in the following three ways:
- `scrum_ga_o`: a scrambled pair, where the word order of premise sentences A is scrambled into o-ga, ni-ga, and de-ga order, respectively.
- `ex_ga_o`: a rephrased pair, where only the case particles (ga, o, ni, de) in the premise A are swapped
- `del_ga_o`: a rephrased pair, where only the case particles (ga, o, ni) in the premise A are deleted
### Languages
The language data in JSICK is in Japanese and English.
## Dataset Structure
### Data Instances
When loading a specific configuration, users have to pass the configuration name:
```python
import datasets as ds
dataset: ds.DatasetDict = ds.load_dataset("hpprc/jsick")
print(dataset)
# DatasetDict({
# train: Dataset({
# features: ['id', 'premise', 'hypothesis', 'label', 'score', 'premise_en', 'hypothesis_en', 'label_en', 'score_en', 'corr_entailment_labelAB_En', 'corr_entailment_labelBA_En', 'image_ID', 'original_caption', 'semtag_short', 'semtag_long'],
# num_rows: 4500
# })
# test: Dataset({
# features: ['id', 'premise', 'hypothesis', 'label', 'score', 'premise_en', 'hypothesis_en', 'label_en', 'score_en', 'corr_entailment_labelAB_En', 'corr_entailment_labelBA_En', 'image_ID', 'original_caption', 'semtag_short', 'semtag_long'],
# num_rows: 4927
# })
# })
dataset: ds.DatasetDict = ds.load_dataset("hpprc/jsick", name="stress")
print(dataset)
# DatasetDict({
# test: Dataset({
# features: ['id', 'premise', 'hypothesis', 'label', 'score', 'sentence_A_Ja_origin', 'entailment_label_origin', 'relatedness_score_Ja_origin', 'rephrase_type', 'case_particles'],
# num_rows: 900
# })
# })
```
#### base
An example looks as follows:
```json
{
'id': 1,
'premise': '子供たちのグループが庭で遊んでいて、後ろの方には年を取った男性が立っている',
'hypothesis': '庭にいる男の子たちのグループが遊んでいて、男性が後ろの方に立っている',
'label': 1, // (neutral)
'score': 3.700000047683716,
'premise_en': 'A group of kids is playing in a yard and an old man is standing in the background',
'hypothesis_en': 'A group of boys in a yard is playing and a man is standing in the background',
'label_en': 1, // (neutral)
'score_en': 4.5,
'corr_entailment_labelAB_En': 'nan',
'corr_entailment_labelBA_En': 'nan',
'image_ID': '3155657768_b83a7831e5.jpg',
'original_caption': 'A group of children playing in a yard , a man in the background .',
'semtag_short': 'nan',
'semtag_long': 'nan',
}
```
#### stress
An example looks as follows:
```json
{
'id': '5818_de_d',
'premise': '女性火の近くダンスをしている',
'hypothesis': '火の近くでダンスをしている女性は一人もいない',
'label': 2, // (contradiction)
'score': 4.0,
'sentence_A_Ja_origin': '女性が火の近くでダンスをしている',
'entailment_label_origin': 2,
'relatedness_score_Ja_origin': 3.700000047683716,
'rephrase_type': 'd',
'case_particles': 'de'
}
```
### Data Fields
#### base
A version adopting the column names of a typical NLI dataset.
| Name | Description |
| -------------------------- | ----------------------------------------------------------------------------------------------------------------------------------------- |
| id | The ids (the same with original SICK). |
| premise | The first sentence in Japanese. |
| hypothesis | The second sentence in Japanese. |
| label | The entailment label in Japanese. |
| score | The relatedness score in the range [1-5] in Japanese. |
| premise_en | The first sentence in English. |
| hypothesis_en | The second sentence in English. |
| label_en | The original entailment label in English. |
| score_en | The original relatedness score in the range [1-5] in English. |
| semtag_short | The linguistic phenomena tags in Japanese. |
| semtag_long | The details of linguistic phenomena tags in Japanese. |
| image_ID | The original image in [8K ImageFlickr dataset](https://www.kaggle.com/datasets/adityajn105/flickr8k). |
| original_caption | The original caption in [8K ImageFlickr dataset](https://www.kaggle.com/datasets/adityajn105/flickr8k). |
| corr_entailment_labelAB_En | The corrected entailment label from A to B in English by [(Karouli et al., 2017)](http://vcvpaiva.github.io/includes/pubs/2017-iwcs.pdf). |
| corr_entailment_labelBA_En | The corrected entailment label from B to A in English by [(Karouli et al., 2017)](http://vcvpaiva.github.io/includes/pubs/2017-iwcs.pdf). |
#### stress
| Name | Description |
| --------------------------- | ------------------------------------------------------------------------------------------------- |
| id | Ids (the same with original SICK). |
| premise | The first sentence in Japanese. |
| hypothesis | The second sentence in Japanese. |
| label | The entailment label in Japanese |
| score | The relatedness score in the range [1-5] in Japanese. |
| sentence_A_Ja_origin | The original premise sentences A from the JSICK test set. |
| entailment_label_origin | The original entailment labels. |
| relatedness_score_Ja_origin | The original relatedness scores. |
| rephrase_type | The type of transformation applied to the syntactic structures of the sentence pairs. |
| case_particles | The grammatical particles in Japanese that indicate the function or role of a noun in a sentence. |
### Data Splits
| name | train | validation | test |
| --------------- | ----: | ---------: | ----: |
| base | 4,500 | | 4,927 |
| original | 4,500 | | 4,927 |
| stress | | | 900 |
| stress-original | | | 900 |
### Annotations
To annotate the JSICK dataset, the authors used the crowdsourcing platform "Lancers" to re-annotate entailment labels and similarity scores.
They had six native Japanese speakers as annotators, who were randomly selected from the platform.
The annotators were asked to fully understand the guidelines and provide the same labels as gold labels for ten test questions.
For entailment labels, they adopted annotations that were agreed upon by a majority vote as gold labels and checked whether the majority judgment vote was semantically valid for each example.
For similarity scores, they used the average of the annotation results as gold scores.
The raw annotations with the JSICK dataset are [publicly available](https://github.com/verypluming/JSICK/blob/main/jsick/jsick-all-annotations.tsv).
The average annotation time was 1 minute per pair, and Krippendorff's alpha for the entailment labels was 0.65.
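The gold-label scheme described above (majority vote for entailment labels, mean for similarity scores) can be sketched as follows; the annotator values are invented stand-ins, not the actual raw annotations:

```python
from collections import Counter

# Six hypothetical annotators (label ids follow the card's examples:
# 1 = neutral, 2 = contradiction; 0 is presumably entailment)
entailment_votes = [1, 1, 2, 1, 0, 1]
similarity_scores = [3.5, 4.0, 3.0, 4.5, 3.5, 4.0]

gold_label = Counter(entailment_votes).most_common(1)[0][0]   # majority vote
gold_score = sum(similarity_scores) / len(similarity_scores)  # mean
print(gold_label, gold_score)  # 1 3.75
```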
## Additional Information
- [verypluming/JSICK](https://github.com/verypluming/JSICK)
- [Compositional Evaluation on Japanese Textual Entailment and Similarity](https://direct.mit.edu/tacl/article/doi/10.1162/tacl_a_00518/113850/Compositional-Evaluation-on-Japanese-Textual)
- [JSICK: 日本語構成的推論・類似度データセットの構築](https://www.jstage.jst.go.jp/article/pjsai/JSAI2021/0/JSAI2021_4J3GS6f02/_article/-char/ja)
### Licensing Information
CC BY-SA 4.0
### Citation Information
```bibtex
@article{yanaka-mineshima-2022-compositional,
title = "Compositional Evaluation on {J}apanese Textual Entailment and Similarity",
author = "Yanaka, Hitomi and
Mineshima, Koji",
journal = "Transactions of the Association for Computational Linguistics",
volume = "10",
year = "2022",
address = "Cambridge, MA",
publisher = "MIT Press",
url = "https://aclanthology.org/2022.tacl-1.73",
doi = "10.1162/tacl_a_00518",
pages = "1266--1284",
}
@article{谷中 瞳2021,
title={JSICK: 日本語構成的推論・類似度データセットの構築},
author={谷中 瞳 and 峯島 宏次},
journal={人工知能学会全国大会論文集},
volume={JSAI2021},
number={ },
pages={4J3GS6f02-4J3GS6f02},
year={2021},
doi={10.11517/pjsai.JSAI2021.0_4J3GS6f02}
}
```
### Contributions
Thanks to [Hitomi Yanaka](https://hitomiyanaka.mystrikingly.com/) and [Koji Mineshima](https://abelard.flet.keio.ac.jp/person/minesima/index-j.html) for creating this dataset. | 13,106 | [
[
-0.02734375,
-0.058563232421875,
0.0258941650390625,
0.0211944580078125,
-0.0195159912109375,
-0.00604248046875,
-0.0287628173828125,
-0.0207366943359375,
0.0364990234375,
0.027313232421875,
-0.0478515625,
-0.05987548828125,
-0.0369873046875,
0.0293121337890... |
kunishou/oasst1-89k-ja | 2023-10-27T12:35:40.000Z | [
"language:ja",
"license:apache-2.0",
"region:us"
] | kunishou | null | null | 13 | 101 | 2023-05-06T09:12:30 | ---
license: apache-2.0
language:
- ja
---
This dataset was created by automatically translating "OpenAssistant/oasst1" into Japanese.
The "ng_translation" flag indicates that the translation was not successful, and "1" means that the translation failed.
Therefore, for data with "1", "text" and "text_en" contain the same text.
**Update:**
2023/10/21
Manually corrected roughly 2,000 translation errors in code-related data that were introduced by the automatic translation.
**<details><summary>修正イメージを表示</summary><div>**
- 修正前
```
もちろん!これは、Flask Webフレームワークを使用して文字列を提供する単純なAPIエンドポイントを作成するPythonスクリプトの例です。
フラスコ輸入フラスコから
app = flask(__name__)
@app.route( '/')
def hello_world():
「こんにちは、世界!」を返します
__name__ == '__main__'の場合:
app.run()
このスクリプトでは、最初にフラスコモジュールからフラスコクラスをインポートします。次に、__Name__変数を使用してアプリケーションの名前を指定するフラスコクラスの新しいインスタンスを作成します。
```
- 修正後
```
もちろん!これは、Flask Webフレームワークを使用して文字列を提供する単純なAPIエンドポイントを作成するPythonスクリプトの例です。
from flask import Flask
app = Flask(__name__)
@app.route('/')
def hello_world():
return 'Hello, world!'
if __name__ == '__main__':
app.run()
このスクリプトでは、最初にフラスコモジュールからフラスコクラスをインポートします。次に、__Name__変数を使用してアプリケーションの名前を指定するフラスコクラスの新しいインスタンスを作成します。
```
</div></details>
The following code converts the data into an Instruction/Output format (the prompter's instruction paired with the assistant's answer).
If you use this dataset for fine-tuning, please convert it with this code.
Reference for the conversion code:
https://github.com/h2oai/h2o-llmstudio/blob/5ebfd3879e226b4e1afd0a0b45eb632e60412129/app_utils/utils.py#L1888
```bash
pip install datasets
```
```python
from datasets import load_dataset
import pandas as pd
import os
import json
# Load the original oasst1 data
ds = load_dataset("OpenAssistant/oasst1")
train = ds["train"].to_pandas()
val = ds["validation"].to_pandas()
df_origin = pd.concat([train, val], axis=0).reset_index(drop=True)
# Load the Japanese translation of oasst1
df_ja = pd.read_json("oasst1_ja_89k.json")
# Merge the original oasst1 data with the Japanese translation
df = pd.merge(df_origin, df_ja[["message_id", "text_ja"]], on="message_id", how="left").copy()
df["text"] = df["text_ja"]
df_assistant = df[(df.role == "assistant")].copy()
df_prompter = df[(df.role == "prompter")].copy()
df_prompter = df_prompter.set_index("message_id")
df_assistant["output"] = df_assistant["text"].values
inputs = []
parent_ids = []
for _, row in df_assistant.iterrows():
input = df_prompter.loc[row.parent_id]
inputs.append(input.text)
parent_ids.append(input.parent_id)
df_assistant["instruction"] = inputs
df_assistant["parent_id"] = parent_ids
df_assistant = df_assistant[
["instruction", "output", "message_id", "parent_id", "lang", "rank"]
].rename(columns={"message_id": "id"})
# Translation-task rows have data anomalies, so exclude them
df_assistant2 = df_assistant[~df_assistant["instruction"].str.contains("翻訳")]
# Everything below writes the result out to a JSON file ---------------
learn_datas = []
input_list = []
for n in range(len(df_assistant2)):
learn_data = {
"instruction": str(df_assistant2.iloc[n, 0]),
"input": "",
"output": ""
}
input_list.append(df_assistant2.iloc[n, 0])
learn_data["input"] = ""
learn_data["output"] = str(df_assistant2.iloc[n, 1])
learn_datas.append(learn_data)
json_learn_data = json.dumps(learn_datas, indent=4, ensure_ascii=False)
with open('oasst1_ja_converted.json', 'w', encoding="utf-8") as f:
f.write(json_learn_data)
```
oasst1-ja-89k Repository
https://github.com/kunishou/oasst1-89k-ja
OpenAssistant/oasst1
https://huggingface.co/datasets/OpenAssistant/oasst1 | 3,353 | [
[
-0.0265350341796875,
-0.04541015625,
0.01470947265625,
0.0021419525146484375,
-0.003936767578125,
-0.00881195068359375,
-0.0103912353515625,
-0.00951385498046875,
0.01284027099609375,
0.016082763671875,
-0.04266357421875,
-0.04010009765625,
-0.037750244140625,
... |
PeterPanTheGenius/CUHK-PEDES | 2023-07-03T08:37:42.000Z | [
"region:us"
] | PeterPanTheGenius | null | null | 0 | 101 | 2023-07-03T08:23:49 | ---
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 4374645533.392
num_examples: 238768
download_size: 575398519
dataset_size: 4374645533.392
---
# Dataset Card for "CUHK-PEDES"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 403 | [
[
-0.042266845703125,
-0.03497314453125,
0.024505615234375,
0.01273345947265625,
-0.019500732421875,
0.0037136077880859375,
0.01261138916015625,
0.00673675537109375,
0.060302734375,
0.035797119140625,
-0.0545654296875,
-0.05548095703125,
-0.0362548828125,
-0.0... |
coastalcph/fm_classifier_mutable-1-1 | 2023-10-24T13:24:01.000Z | [
"region:us"
] | coastalcph | null | null | 0 | 101 | 2023-10-23T15:13:28 | ---
dataset_info:
features:
- name: query
dtype: string
- name: answer
list:
- name: wikidata_id
dtype: string
- name: name
dtype: string
- name: id
dtype: string
- name: relation
dtype: string
- name: date
dtype: int64
- name: type
dtype: string
- name: is_mutable
dtype: int64
splits:
- name: train
num_bytes: 1606940.087431288
num_examples: 8967
- name: all_fm
num_bytes: 33865262.26303366
num_examples: 177265
- name: validation
num_bytes: 996478.5738772711
num_examples: 5800
- name: test
num_bytes: 1120775.194745333
num_examples: 5698
download_size: 6684977
dataset_size: 37589456.11908755
---
# Dataset Card for "fm_classifier_mutable-1-1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 890 | [
[
-0.038055419921875,
-0.0197601318359375,
0.01041412353515625,
0.01334381103515625,
-0.01448822021484375,
-0.0076751708984375,
0.01263427734375,
-0.0037708282470703125,
0.043670654296875,
0.02337646484375,
-0.05548095703125,
-0.04119873046875,
-0.046844482421875,... |
result-kand2-sdxl-wuerst-karlo/7cbe3776 | 2023-10-25T03:40:40.000Z | [
"region:us"
] | result-kand2-sdxl-wuerst-karlo | null | null | 0 | 101 | 2023-10-25T03:40:40 | ---
dataset_info:
features:
- name: result
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 168
num_examples: 10
download_size: 1340
dataset_size: 168
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "7cbe3776"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 455 | [
[
-0.048980712890625,
-0.001708984375,
0.01678466796875,
0.0268096923828125,
-0.0272979736328125,
-0.0139312744140625,
0.023101806640625,
-0.0196075439453125,
0.055023193359375,
0.04791259765625,
-0.047607421875,
-0.05487060546875,
-0.036529541015625,
-0.00314... |
result-kand2-sdxl-wuerst-karlo/e03089c4 | 2023-10-25T20:23:18.000Z | [
"region:us"
] | result-kand2-sdxl-wuerst-karlo | null | null | 0 | 101 | 2023-10-25T20:23:17 | ---
dataset_info:
features:
- name: result
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 186
num_examples: 10
download_size: 1356
dataset_size: 186
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "e03089c4"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 455 | [
[
-0.0517578125,
0.00040268898010253906,
0.025482177734375,
0.0119171142578125,
-0.01053619384765625,
-0.006412506103515625,
0.028228759765625,
-0.0172882080078125,
0.07293701171875,
0.0265960693359375,
-0.060333251953125,
-0.039642333984375,
-0.0296783447265625,
... |
result-kand2-sdxl-wuerst-karlo/96ca277a | 2023-10-26T22:44:39.000Z | [
"region:us"
] | result-kand2-sdxl-wuerst-karlo | null | null | 0 | 101 | 2023-10-26T22:44:39 | ---
dataset_info:
features:
- name: result
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 173
num_examples: 10
download_size: 1332
dataset_size: 173
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "96ca277a"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 455 | [
[
-0.040283203125,
-0.0136566162109375,
0.020965576171875,
0.0243072509765625,
-0.017852783203125,
-0.00223541259765625,
0.0245513916015625,
-0.0112457275390625,
0.07391357421875,
0.032745361328125,
-0.0633544921875,
-0.05047607421875,
-0.032318115234375,
-0.0... |
result-kand2-sdxl-wuerst-karlo/7f9071c2 | 2023-10-27T03:05:40.000Z | [
"region:us"
] | result-kand2-sdxl-wuerst-karlo | null | null | 0 | 101 | 2023-10-27T03:05:39 | ---
dataset_info:
features:
- name: result
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 171
num_examples: 10
download_size: 1324
dataset_size: 171
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "7f9071c2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 455 | [
[
-0.043548583984375,
-0.004863739013671875,
0.0122222900390625,
0.0222320556640625,
-0.0216217041015625,
-0.00844573974609375,
0.029815673828125,
-0.017181396484375,
0.0467529296875,
0.038970947265625,
-0.048095703125,
-0.04583740234375,
-0.04736328125,
-0.00... |
result-kand2-sdxl-wuerst-karlo/74441c7b | 2023-10-27T11:35:57.000Z | [
"region:us"
] | result-kand2-sdxl-wuerst-karlo | null | null | 0 | 101 | 2023-10-27T11:35:55 | ---
dataset_info:
features:
- name: result
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 178
num_examples: 10
download_size: 1358
dataset_size: 178
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "74441c7b"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 455 | [
[
-0.046234130859375,
-0.01393890380859375,
0.01517486572265625,
0.016693115234375,
-0.03570556640625,
-0.004970550537109375,
0.019866943359375,
-0.016693115234375,
0.059173583984375,
0.043914794921875,
-0.057342529296875,
-0.0518798828125,
-0.04046630859375,
... |
result-kand2-sdxl-wuerst-karlo/166c9db0 | 2023-10-28T18:38:15.000Z | [
"region:us"
] | result-kand2-sdxl-wuerst-karlo | null | null | 1 | 101 | 2023-10-28T18:38:14 | ---
dataset_info:
features:
- name: result
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 185
num_examples: 10
download_size: 1392
dataset_size: 185
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "166c9db0"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 455 | [
[
-0.053802490234375,
-0.0099639892578125,
0.021087646484375,
0.014862060546875,
-0.020477294921875,
0.0007205009460449219,
0.01371002197265625,
-0.01412200927734375,
0.064208984375,
0.0399169921875,
-0.06195068359375,
-0.051300048828125,
-0.0396728515625,
-0.... |
result-kand2-sdxl-wuerst-karlo/6612e023 | 2023-10-29T13:56:48.000Z | [
"region:us"
] | result-kand2-sdxl-wuerst-karlo | null | null | 0 | 101 | 2023-10-29T13:56:48 | ---
dataset_info:
features:
- name: result
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 217
num_examples: 10
download_size: 1380
dataset_size: 217
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "6612e023"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 455 | [
[
-0.047088623046875,
-0.004589080810546875,
0.0179901123046875,
0.0177459716796875,
-0.00930023193359375,
-0.01560211181640625,
0.024261474609375,
-0.01983642578125,
0.0699462890625,
0.0312347412109375,
-0.065185546875,
-0.046417236328125,
-0.03607177734375,
... |
result-kand2-sdxl-wuerst-karlo/f7c1d08f | 2023-10-29T17:05:29.000Z | [
"region:us"
] | result-kand2-sdxl-wuerst-karlo | null | null | 0 | 101 | 2023-10-29T17:05:28 | ---
dataset_info:
features:
- name: result
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 150
num_examples: 10
download_size: 1322
dataset_size: 150
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "f7c1d08f"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 455 | [
[
-0.047943115234375,
-0.0090484619140625,
0.01678466796875,
0.018035888671875,
-0.0242156982421875,
-0.0029754638671875,
0.032379150390625,
-0.00995635986328125,
0.05694580078125,
0.037628173828125,
-0.058380126953125,
-0.051300048828125,
-0.0469970703125,
-0... |
hate_speech_portuguese | 2023-01-25T14:31:44.000Z | [
"task_categories:text-classification",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:pt",
"license:unknown",
"hate-speech-detection",
"region:us"
] | null | Portuguese dataset for hate speech detection composed of 5,668 tweets with binary annotations (i.e. 'hate' vs. 'no-hate'). | @inproceedings{fortuna-etal-2019-hierarchically,
title = "A Hierarchically-Labeled {P}ortuguese Hate Speech Dataset",
author = "Fortuna, Paula and
Rocha da Silva, Jo{\~a}o and
Soler-Company, Juan and
Wanner, Leo and
Nunes, S{\'e}rgio",
booktitle = "Proceedings of the Third Workshop on Abusive Language Online",
month = aug,
year = "2019",
address = "Florence, Italy",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/W19-3510",
doi = "10.18653/v1/W19-3510",
pages = "94--104",
abstract = "Over the past years, the amount of online offensive speech has been growing steadily. To successfully cope with it, machine learning are applied. However, ML-based techniques require sufficiently large annotated datasets. In the last years, different datasets were published, mainly for English. In this paper, we present a new dataset for Portuguese, which has not been in focus so far. The dataset is composed of 5,668 tweets. For its annotation, we defined two different schemes used by annotators with different levels of expertise. Firstly, non-experts annotated the tweets with binary labels ({`}hate{'} vs. {`}no-hate{'}). Secondly, expert annotators classified the tweets following a fine-grained hierarchical multiple label scheme with 81 hate speech categories in total. The inter-annotator agreement varied from category to category, which reflects the insight that some types of hate speech are more subtle than others and that their detection depends on personal perception. This hierarchical annotation scheme is the main contribution of the presented work, as it facilitates the identification of different types of hate speech and their intersections. To demonstrate the usefulness of our dataset, we carried a baseline classification experiment with pre-trained word embeddings and LSTM on the binary classified data, with a state-of-the-art outcome.",
} | 2 | 100 | 2022-03-02T23:29:22 | ---
annotations_creators:
- expert-generated
language_creators:
- found
language:
- pt
license:
- unknown
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- text-classification
task_ids: []
pretty_name: HateSpeechPortuguese
tags:
- hate-speech-detection
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': no-hate
'1': hate
- name: hatespeech_G1
dtype: string
- name: annotator_G1
dtype: string
- name: hatespeech_G2
dtype: string
- name: annotator_G2
dtype: string
- name: hatespeech_G3
dtype: string
- name: annotator_G3
dtype: string
splits:
- name: train
num_bytes: 826130
num_examples: 5670
download_size: 763846
dataset_size: 826130
---
# Dataset Card for [Dataset Name]
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://github.com/paulafortuna/Portuguese-Hate-Speech-Dataset
- **Repository:** https://github.com/paulafortuna/Portuguese-Hate-Speech-Dataset
- **Paper:** https://www.aclweb.org/anthology/W19-3510/
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
Portuguese dataset for hate speech detection composed of 5,668 tweets with binary annotations (i.e. 'hate' vs. 'no-hate').
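Given the `class_label` declared in the metadata above (`'0': no-hate`, `'1': hate`), decoding the integer labels back to names is straightforward. A minimal sketch, using an invented record rather than real data from the corpus:

```python
# ClassLabel names as declared in this card's dataset_info block.
LABEL_NAMES = ["no-hate", "hate"]

def decode_label(label_id: int) -> str:
    """Map the integer class id back to its human-readable name."""
    return LABEL_NAMES[label_id]

# Hypothetical record shaped like the fields listed in dataset_info;
# the text is invented, not taken from the dataset.
example = {"text": "exemplo de tweet", "label": 1}
print(decode_label(example["label"]))  # hate
```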
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@hugoabonizio](https://github.com/hugoabonizio) for adding this dataset. | 3,542 | [
[
-0.03753662109375,
-0.041534423828125,
-0.0008940696716308594,
0.0185089111328125,
-0.016754150390625,
0.0174102783203125,
-0.030120849609375,
-0.034332275390625,
0.041259765625,
0.044677734375,
-0.051239013671875,
-0.08636474609375,
-0.06390380859375,
0.005... |
hrenwac_para | 2022-11-03T16:07:49.000Z | [
"task_categories:translation",
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:translation",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"language:hr",
"license:cc-by-sa-3.0",
"region:us"
] | null | The hrenWaC corpus version 2.0 consists of parallel Croatian-English texts crawled from the .hr top-level domain for Croatia.
The corpus was built with Spidextor (https://github.com/abumatran/spidextor), a tool that glues together the output of SpiderLing used for crawling and Bitextor used for bitext extraction. The accuracy of the extracted bitext on the segment level is around 80% and on the word level around 84%. | @misc{11356/1058,
title = {Croatian-English parallel corpus {hrenWaC} 2.0},
author = {Ljube{\v s}i{\'c}, Nikola and Espl{\'a}-Gomis, Miquel and Ortiz Rojas, Sergio and Klubi{\v c}ka, Filip and Toral, Antonio},
url = {http://hdl.handle.net/11356/1058},
note = {Slovenian language resource repository {CLARIN}.{SI}},
copyright = {{CLARIN}.{SI} User Licence for Internet Corpora},
year = {2016} } | 0 | 100 | 2022-03-02T23:29:22 | ---
annotations_creators:
- no-annotation
language_creators:
- found
language:
- en
- hr
license:
- cc-by-sa-3.0
multilinguality:
- translation
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- translation
task_ids: []
paperswithcode_id: null
pretty_name: HrenwacPara
dataset_info:
features:
- name: translation
dtype:
translation:
languages:
- en
- hr
config_name: hrenWaC
splits:
- name: train
num_bytes: 29602110
num_examples: 99001
download_size: 11640281
dataset_size: 29602110
---
# Dataset Card for hrenwac_para
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** http://nlp.ffzg.hr/resources/corpora/hrenwac/
- **Repository:** http://nlp.ffzg.hr/data/corpora/hrenwac/hrenwac.en-hr.txt.gz
- **Paper:** http://workshop2013.iwslt.org/downloads/IWSLT-2013-Cettolo.pdf
- **Leaderboard:**
- **Point of Contact:** [Nikola Ljubešič](mailto:nikola.ljubesic@ffzg.hr)
### Dataset Summary
The hrenWaC corpus version 2.0 consists of parallel Croatian-English texts crawled from the .hr top-level domain for Croatia. The corpus was built with Spidextor (https://github.com/abumatran/spidextor), a tool that glues together the output of SpiderLing used for crawling and Bitextor used for bitext extraction. The accuracy of the extracted bitext on the segment level is around 80% and on the word level around 84%.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
The dataset is bilingual: Croatian and English.
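Each example follows the `translation` feature declared in the metadata above: a dict keyed by language code. A minimal sketch (the sentence pairs are invented for illustration, not taken from the corpus) of splitting such examples into aligned parallel lists:

```python
# Hypothetical examples mirroring the `translation` feature layout
# declared in this card (keys "en" and "hr"); the sentences are invented.
examples = [
    {"translation": {"en": "Welcome to Croatia.", "hr": "Dobrodošli u Hrvatsku."}},
    {"translation": {"en": "The corpus is parallel.", "hr": "Korpus je paralelan."}},
]

# Split into aligned source/target lists, e.g. for an MT training loop.
hr_side = [ex["translation"]["hr"] for ex in examples]
en_side = [ex["translation"]["en"] for ex in examples]

assert len(hr_side) == len(en_side)  # alignment is one-to-one
```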
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
Dataset is under the [CC-BY-SA 3.0](http://creativecommons.org/licenses/by-sa/3.0/) license.
### Citation Information
```
@misc{11356/1058,
title = {Croatian-English parallel corpus {hrenWaC} 2.0},
author = {Ljube{\v s}i{\'c}, Nikola and Espl{\`a}-Gomis, Miquel and Ortiz Rojas, Sergio and Klubi{\v c}ka, Filip and Toral, Antonio},
url = {http://hdl.handle.net/11356/1058},
note = {Slovenian language resource repository {CLARIN}.{SI}},
copyright = {{CLARIN}.{SI} User Licence for Internet Corpora},
year = {2016} }
```
### Contributions
Thanks to [@IvanZidov](https://github.com/IvanZidov) for adding this dataset. | 4,093 | [
[
-0.0245208740234375,
-0.039215087890625,
0.01025390625,
0.038970947265625,
-0.017974853515625,
-0.0016412734985351562,
-0.040191650390625,
-0.03375244140625,
0.0303955078125,
0.032073974609375,
-0.0595703125,
-0.0777587890625,
-0.0555419921875,
0.029296875,
... |
linnaeus | 2023-06-15T14:40:39.000Z | [
"task_categories:token-classification",
"task_ids:named-entity-recognition",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"license:cc-by-4.0",
"region:us"
] | null | A novel corpus of full-text documents manually annotated for species mentions. | @article{gerner2010linnaeus,
title={LINNAEUS: a species name identification system for biomedical literature},
author={Gerner, Martin and Nenadic, Goran and Bergman, Casey M},
journal={BMC bioinformatics},
volume={11},
number={1},
pages={85},
year={2010},
publisher={Springer}
} | 1 | 100 | 2022-03-02T23:29:22 | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- en
license:
- cc-by-4.0
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- token-classification
task_ids:
- named-entity-recognition
paperswithcode_id: linnaeus
pretty_name: LINNAEUS
dataset_info:
features:
- name: id
dtype: string
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B
'2': I
config_name: linnaeus
splits:
- name: train
num_bytes: 4772417
num_examples: 11936
- name: validation
num_bytes: 1592823
num_examples: 4079
- name: test
num_bytes: 2802877
num_examples: 7143
download_size: 18204624
dataset_size: 9168117
---
# Dataset Card for [Dataset Name]
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [linnaeus](http://linnaeus.sourceforge.net/)
- **Repository:** https://github.com/cambridgeltl/MTL-Bioinformatics-2016/tree/master/data/linnaeus-IOB
- **Paper:** [BMC Bioinformatics](https://bmcbioinformatics.biomedcentral.com/articles/10.1186/1471-2105-11-85)
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
The LINNAEUS corpus consists of 100 full-text documents from the PMCOA
document set which were randomly selected. All mentions of species terms were manually
annotated and normalized to the NCBI taxonomy IDs of the intended species.
The original LINNAEUS corpus is available in a TAB-separated standoff format. The resource does not define training,
development or test subsets.
We converted the corpus into BioNLP shared task standoff format using a custom script, split it into 50-, 17-, and 33-
document training, development and test sets, and then converted these into the CoNLL format using standoff2conll.
As a full-text corpus, LINNAEUS contains comparatively frequent
non-ASCII characters, which were mapped to ASCII using the
standoff2conll -a option.
The conversion was highly accurate, but due to sentence-splitting errors within entity mentions,
the number of annotations in the converted data was larger by four (100.09%) than that
in the source data. 99.77% of names in the original annotation matched names in the converted
data.
### Supported Tasks and Leaderboards
This dataset is used for species Named Entity Recognition.
### Languages
The dataset is in English.
## Dataset Structure
### Data Instances
An example from the dataset is:
```
{'id': '2',
'tokens': ['Scp160p', 'is', 'a', '160', 'kDa', 'protein', 'in', 'the', 'yeast', 'Saccharomyces', 'cerevisiae', 'that', 'contains', '14', 'repeats', 'of', 'the', 'hnRNP', 'K', '-', 'homology', '(', 'KH', ')', 'domain', ',', 'and', 'demonstrates', 'significant', 'sequence', 'homology', 'to', 'a', 'family', 'of', 'proteins', 'collectively', 'known', 'as', 'vigilins', '.'],
'ner_tags': [0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 2, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]}
```
### Data Fields
- `id`: Sentence identifier.
- `tokens`: Array of tokens composing a sentence.
- `ner_tags`: Array of tags, where `0` indicates no species mentioned, `1` signals the first token of a species and `2` the subsequent tokens of the species.
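To make the tag scheme concrete, here is a small helper (not part of the dataset release) that decodes the `0`/`1`/`2` (`O`/`B`/`I`) sequence into mention strings, applied to a shortened slice of the instance above:

```python
def extract_mentions(tokens, ner_tags):
    """Collect species mentions from an O/B/I (0/1/2) tag sequence."""
    mentions, current = [], []
    for token, tag in zip(tokens, ner_tags):
        if tag == 1:                 # B (1): a new mention starts here
            if current:
                mentions.append(" ".join(current))
            current = [token]
        elif tag == 2 and current:   # I (2): extend the open mention
            current.append(token)
        else:                        # O (0): close any open mention
            if current:
                mentions.append(" ".join(current))
            current = []
    if current:
        mentions.append(" ".join(current))
    return mentions

# Shortened slice of the example instance shown above.
tokens = ["the", "yeast", "Saccharomyces", "cerevisiae", "that"]
ner_tags = [0, 1, 1, 2, 0]
print(extract_mentions(tokens, ner_tags))  # ['yeast', 'Saccharomyces cerevisiae']
```

Note that two adjacent `B` tags denote two separate mentions, as in the example.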
### Data Splits
| name |train|validation|test|
|----------|----:|---------:|---:|
| linnaeus |11936| 4079|7143|
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
This version of the dataset is licensed under [Creative Commons Attribution 4.0 International](https://github.com/cambridgeltl/MTL-Bioinformatics-2016/blob/master/LICENSE.md).
### Citation Information
```bibtex
@article{crichton2017neural,
title={A neural network multi-task learning approach to biomedical named entity recognition},
author={Crichton, Gamal and Pyysalo, Sampo and Chiu, Billy and Korhonen, Anna},
journal={BMC Bioinformatics},
volume={18},
number={1},
pages={368},
year={2017},
  publisher={BioMed Central},
doi = {10.1186/s12859-017-1776-8},
issn = {1471-2105},
url = {https://doi.org/10.1186/s12859-017-1776-8},
}
@article{Gerner2010,
author = {Gerner, Martin and Nenadic, Goran and Bergman, Casey M},
doi = {10.1186/1471-2105-11-85},
issn = {1471-2105},
journal = {BMC Bioinformatics},
number = {1},
pages = {85},
title = {{LINNAEUS: A species name identification system for biomedical literature}},
url = {https://doi.org/10.1186/1471-2105-11-85},
volume = {11},
year = {2010}
}
```
### Contributions
Thanks to [@edugp](https://github.com/edugp) for adding this dataset. | 6,343 | [
[
-0.0244293212890625,
-0.02142333984375,
0.01690673828125,
0.0252838134765625,
-0.0289459228515625,
-0.00525665283203125,
-0.0218505859375,
-0.036529541015625,
0.049896240234375,
0.0204620361328125,
-0.040130615234375,
-0.0626220703125,
-0.0399169921875,
0.04... |
text2log | 2022-11-03T16:15:15.000Z | [
"task_categories:translation",
"annotations_creators:machine-generated",
"language_creators:machine-generated",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"language:en",
"license:unknown",
"region:us"
] | null | The dataset contains about 100,000 simple English sentences selected and filtered from enTenTen15 and their translation into First Order Logic (FOL) Lambda Dependency-based Compositional Semantics using ccg2lambda. | @INPROCEEDINGS{9401852, author={Levkovskyi, Oleksii and Li, Wei}, booktitle={SoutheastCon 2021}, title={Generating Predicate Logic Expressions from Natural Language}, year={2021}, volume={}, number={}, pages={1-8}, doi={10.1109/SoutheastCon45413.2021.9401852}} | 2 | 100 | 2022-03-02T23:29:22 | ---
annotations_creators:
- machine-generated
language_creators:
- machine-generated
language:
- en
license:
- unknown
multilinguality:
- monolingual
pretty_name: text2log
size_categories:
- 100K<n<1M
source_datasets:
- original
task_categories:
- translation
task_ids: []
dataset_info:
features:
- name: sentence
dtype: string
- name: fol_translation
dtype: string
splits:
- name: train
num_bytes: 10358134
num_examples: 101931
download_size: 9746473
dataset_size: 10358134
---
# Dataset Card for text2log
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:**
- **Repository:** [GitHub](https://github.com/alevkov/text2log)
- **Paper:**
- **Leaderboard:**
- **Point of Contact:** https://github.com/alevkov
### Dataset Summary
The dataset contains 100,000 simple English sentences selected and filtered from `enTenTen15` and their translation into First Order Logic (FOL) using `ccg2lambda`.
### Supported Tasks and Leaderboards
`semantic-parsing`: the dataset is used to train models that generate FOL statements from natural language text.
### Languages
en-US
## Dataset Structure
### Data Instances
```
{
'clean':'All things that are new are good.',
'trans':'all x1.(_thing(x1) -> (_new(x1) -> _good(x1)))'
}
```
### Data Fields
- 'clean': a simple English sentence
- 'trans': the corresponding translation into Lambda Dependency-based Compositional Semantics
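As a quick illustration of the `trans` format, the predicate symbols in a translation can be pulled out with a regular expression (a hypothetical convenience helper, not part of the dataset):

```python
import re

def predicates(fol: str) -> list:
    """Return the predicate symbols (underscore-prefixed names followed
    by an opening parenthesis) occurring in a FOL string."""
    return re.findall(r"_\w+(?=\()", fol)

# The translation from the data instance above.
trans = "all x1.(_thing(x1) -> (_new(x1) -> _good(x1)))"
print(predicates(trans))  # ['_thing', '_new', '_good']
```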
### Data Splits
No predefined train/test split is given. The authors used an 80/20 split.
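Since no official split ships with the data, an 80/20 split can be reproduced deterministically, e.g. as follows (a sketch; the seed is an arbitrary choice, not the authors'):

```python
import random

def split_80_20(indices, seed=42):
    """Shuffle indices deterministically and cut them 80/20."""
    rng = random.Random(seed)          # fixed seed -> reproducible split
    shuffled = list(indices)
    rng.shuffle(shuffled)
    cut = int(0.8 * len(shuffled))
    return shuffled[:cut], shuffled[cut:]

train_idx, test_idx = split_80_20(range(100))
print(len(train_idx), len(test_idx))  # 80 20
```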
## Dataset Creation
### Curation Rationale
The text2log dataset is used to improve FOL statement generation from natural text.
### Source Data
#### Initial Data Collection and Normalization
Short text samples selected from enTenTen15
#### Who are the source language producers?
See https://www.sketchengine.eu/ententen-english-corpus/
### Annotations
#### Annotation process
Machine generated using https://github.com/mynlp/ccg2lambda
#### Who are the annotators?
none
### Personal and Sensitive Information
The dataset does not contain personal or sensitive information.
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
None given
### Citation Information
```bibtex
@INPROCEEDINGS{9401852,
author={Levkovskyi, Oleksii and Li, Wei},
booktitle={SoutheastCon 2021},
title={Generating Predicate Logic Expressions from Natural Language},
year={2021},
volume={},
number={},
pages={1-8},
doi={10.1109/SoutheastCon45413.2021.9401852}
}
```
### Contributions
Thanks to [@apergo-ai](https://github.com/apergo-ai) for adding this dataset. | 3,796 | [
[
-0.00909423828125,
-0.04730224609375,
0.01461029052734375,
0.01415252685546875,
-0.0258331298828125,
-0.004730224609375,
-0.0247650146484375,
-0.0439453125,
0.01873779296875,
0.041351318359375,
-0.05133056640625,
-0.05853271484375,
-0.043731689453125,
0.0182... |
NYTK/HuRC | 2022-07-07T13:03:49.000Z | [
"task_categories:question-answering",
"task_ids:extractive-qa",
"task_ids:abstractive-qa",
"annotations_creators:crowdsourced",
"language_creators:found",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:unknown",
"source_datasets:extended|other",
"language:hu"... | NYTK | null | null | 1 | 100 | 2022-03-02T23:29:22 | ---
annotations_creators:
- crowdsourced
language_creators:
- found
- expert-generated
language:
- hu
license:
- cc-by-4.0
multilinguality:
- monolingual
pretty_name: HuRC
size_categories:
- unknown
source_datasets:
- extended|other
task_categories:
- question-answering
task_ids:
- extractive-qa
- abstractive-qa
---
# Dataset Card for HuRC
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
[HuRC dataset](https://github.com/nytud/HuRC)
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
[lnnoemi](mailto:ligeti-nagy.noemi@nytud.hu)
### Dataset Summary
This is the dataset card for the Hungarian Corpus for Reading Comprehension with Commonsense Reasoning (HuRC), which is also part of the Hungarian Language Understanding Evaluation Benchmark Kit HuLU.
The dataset contains 80 614 instances. Each instance is composed of a lead, a passage and a cloze-style query with a masked entity. The task is to select the named entity that is being masked in the query.
The data was automatically collected from the online news of Népszabadság online (nol.hu).
### Languages
The BCP-47 code for Hungarian, the only represented language in this dataset, is hu-HU.
## Dataset Structure
### Data Instances
For each instance, there is an id, a lead, a passage, a query and a MASK.
An example:
```
{
"id": "1",
"lead": ["A Közigazgatási és Igazságügyi Minisztérium szerint a Bárka Színház esetében felmerült a felelőtlen gazdálkodás gyanúja, egyes értesülések szerint pedig ebben \"a színház igazgatójának és gazdasági vezetőjének felelőssége is felmerül\""],
"passage": [
"A teátrumnak Navracsics Tibor közigazgatási és igazságügyi miniszterhez és Kocsis Máté VIII. kerületi polgármesterhez",
"reagálva a tárca azt írta, hogy a felelőtlen gazdálkodás gyanújában \"egyes értesülések szerint a színház igazgatójának és gazdasági vezetőjének felelőssége is felmerül\". A KIM \"éppen ezért nagyon várja az Állami Számvevőszék készülő jelentését, hogy tiszta képet kaphasson a színház működéséről\".",
"A minisztérium hangsúlyozta, hogy az elmúlt évben is mindent elkövetett azért, hogy a Bárka Színház \"valós, rangos művészeti térként\" működjön, és a továbbiakban is ez a szándéka, de jelenleg a társulat működtetését a minisztérium fenntartói támogatás formájában jogszerűen még nem tudja megoldani.",
"A teátrum az átadás-átvétel elhúzódásának okát keresve tette közzé nyílt levelét, amelyben elmaradó fizetésekre, előadásokra és bemutatókra hívta fel a figyelmet, és jelezte, hogy várja a helyzet megoldását.",
"A színház átadás-átvétele jelenleg zajlik, a folyamat végeztével a Bárka a józsefvárosi önkormányzattól állami tulajdonba, a tervek szerint a Közigazgatási és Igazságügyi Minisztérium fenntartásába kerül."
],
"query": "A KIM 2014-es költségvetésében szerepel a Bárka Színház, de amíg nem a minisztérium a [MASK] fenntartója, addig ez a költségvetési keret nem nyitható meg.",
"MASK": "Bárka",
}
```
### Data Fields
- id: unique id of the instances;
- lead: a short summary of the article as it was extracted from the source texts;
- passage: 3-6 paragraphs of texts as the body of the article;
- query: the last paragraph of an article, some kind of summary or conclusion, with a named entity masked (with [MASK]) in it;
- MASK: the masked named entity.
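A minimal sketch of how the cloze format works: a candidate answer is scored by substituting it for the `[MASK]` placeholder in the query (shown here on a shortened version of the example query above; the helper is illustrative, not part of the dataset):

```python
def fill_mask(query: str, candidate: str) -> str:
    """Substitute a candidate entity for the [MASK] placeholder."""
    return query.replace("[MASK]", candidate)

# Shortened query from the example instance above.
query = "amíg nem a minisztérium a [MASK] fenntartója, addig ez a keret nem nyitható meg."
filled = fill_mask(query, "Bárka")
assert "[MASK]" not in filled
print(filled)
```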
### Data Splits
HuRC has 3 splits: *train*, *validation* and *test*.
| Dataset split | Number of instances in the split | Proportion of the split |
|---------------|----------------------------------|-------------------------|
| train         | 64614                            | 80%                     |
| validation    | 8000                             | 10%                     |
| test          | 8000                             | 10%                     |
The test data is distributed without the MASK fields. To evaluate your model, please [contact us](mailto:ligeti-nagy.noemi@nytud.hu), or check [HuLU's website](https://hulu.nlp.nytud.hu) for an automatic evaluation (this feature is under construction at the moment).
## Dataset Creation
### Source Data
#### Initial Data Collection and Normalization
To produce the Hungarian material, we used the daily articles from Népszabadság Online that had both titles and summaries. From these, we selected 3-6 paragraphs per article, keeping the articles that contain proper nouns both in the main part and in the summary. We trained a NER model based on huBERT (Nemeskey 2021) to recognize proper nouns; NerKor (Simon and Vadász 2021) and Hugging Face's token-level classification library were used to fine-tune the model, which achieved an F-score of 90.18 on the test material. As a final step, we found pairs of proper names that are present both in the main article and in the summary. Multiple articles contained more than one such pair, so those articles were used more than once. This resulted in a database of 88655 instances (from 49782 articles).
The quantitative properties of our corpus are as follows: number of articles: 88655; number of different articles (types): 49782; tokens: 27703631; types: 1115260; average text length (tokens): 249.42 (median: 229); average question length (tokens): 63.07 (median: 56). We fine-tuned the corpus by hand.
One annotator per 100 units checked and validated the dataset, using a demo interface we provided for this purpose. The automatic masking and the previous occurrence of each entity were checked. This resulted in a database of 80 614 validated entries.
## Additional Information
### Licensing Information
HuRC is released under the cc-by-4.0 license.
### Citation Information
If you use this resource or any part of its documentation, please refer to:
Ligeti-Nagy, N., Ferenczi, G., Héja, E., Jelencsik-Mátyus, K., Laki, L. J., Vadász, N., Yang, Z. Gy. and Váradi, T. (2022) HuLU: magyar nyelvű benchmark adatbázis kiépítése a neurális nyelvmodellek kiértékelése céljából [HuLU: Hungarian benchmark dataset to evaluate neural language models]. XVIII. Magyar Számítógépes Nyelvészeti Konferencia. (in press)
```
@inproceedings{ligetinagy2022hulu,
title={HuLU: magyar nyelvű benchmark adatbázis kiépítése a neurális nyelvmodellek kiértékelése céljából},
author={Ligeti-Nagy, N. and Ferenczi, G. and Héja, E. and Jelencsik-Mátyus, K. and Laki, L. J. and Vadász, N. and Yang, Z. Gy. and Váradi, T.},
booktitle={XVIII. Magyar Számítógépes Nyelvészeti Konferencia},
year={2022}
}
```
### Contributions
Thanks to [lnnoemi](https://github.com/lnnoemi) for adding this dataset. | 7,574 | [
[
-0.037078857421875,
-0.062469482421875,
0.018829345703125,
0.0108489990234375,
-0.019287109375,
-0.01214599609375,
-0.0286712646484375,
-0.028961181640625,
0.035675048828125,
0.040008544921875,
-0.029449462890625,
-0.06854248046875,
-0.0350341796875,
0.03405... |
benjaminbeilharz/better_daily_dialog | 2022-01-22T18:03:59.000Z | [
"region:us"
] | benjaminbeilharz | null | null | 2 | 100 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[
-0.02142333984375,
-0.014984130859375,
0.057220458984375,
0.0288238525390625,
-0.03509521484375,
0.04656982421875,
0.052520751953125,
0.00506591796875,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060455322265625,
0.03793334... |
ett | 2022-11-18T22:07:07.000Z | [
"task_categories:time-series-forecasting",
"task_ids:univariate-time-series-forecasting",
"task_ids:multivariate-time-series-forecasting",
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"license:... | null | The data of Electricity Transformers from two separated counties
in China collected for two years at hourly and 15-min frequencies.
Each data point consists of the target value "oil temperature" and
6 power load features. The train/val/test is 12/4/4 months. | @inproceedings{haoyietal-informer-2021,
author = {Haoyi Zhou and
Shanghang Zhang and
Jieqi Peng and
Shuai Zhang and
Jianxin Li and
Hui Xiong and
Wancai Zhang},
title = {Informer: Beyond Efficient Transformer for Long Sequence Time-Series Forecasting},
booktitle = {The Thirty-Fifth {AAAI} Conference on Artificial Intelligence, {AAAI} 2021, Virtual Conference},
volume = {35},
number = {12},
pages = {11106--11115},
publisher = {{AAAI} Press},
year = {2021},
} | 3 | 100 | 2022-05-05T12:12:41 | ---
annotations_creators:
- no-annotation
language_creators:
- found
language: []
license:
- cc-by-4.0
multilinguality:
- monolingual
pretty_name: Electricity Transformer Temperature
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- time-series-forecasting
task_ids:
- univariate-time-series-forecasting
- multivariate-time-series-forecasting
dataset_info:
- config_name: h1
features:
- name: start
dtype: timestamp[s]
- name: target
sequence: float32
- name: feat_static_cat
sequence: uint64
- name: feat_dynamic_real
sequence:
sequence: float32
- name: item_id
dtype: string
splits:
- name: train
num_bytes: 241978
num_examples: 1
- name: test
num_bytes: 77508960
num_examples: 240
- name: validation
num_bytes: 33916080
num_examples: 120
download_size: 2589657
dataset_size: 111667018
- config_name: h2
features:
- name: start
dtype: timestamp[s]
- name: target
sequence: float32
- name: feat_static_cat
sequence: uint64
- name: feat_dynamic_real
sequence:
sequence: float32
- name: item_id
dtype: string
splits:
- name: train
num_bytes: 241978
num_examples: 1
- name: test
num_bytes: 77508960
num_examples: 240
- name: validation
num_bytes: 33916080
num_examples: 120
download_size: 2417960
dataset_size: 111667018
- config_name: m1
features:
- name: start
dtype: timestamp[s]
- name: target
sequence: float32
- name: feat_static_cat
sequence: uint64
- name: feat_dynamic_real
sequence:
sequence: float32
- name: item_id
dtype: string
splits:
- name: train
num_bytes: 967738
num_examples: 1
- name: test
num_bytes: 1239008640
num_examples: 960
- name: validation
num_bytes: 542089920
num_examples: 480
download_size: 10360719
dataset_size: 1782066298
- config_name: m2
features:
- name: start
dtype: timestamp[s]
- name: target
sequence: float32
- name: feat_static_cat
sequence: uint64
- name: feat_dynamic_real
sequence:
sequence: float32
- name: item_id
dtype: string
splits:
- name: train
num_bytes: 967738
num_examples: 1
- name: test
num_bytes: 1239008640
num_examples: 960
- name: validation
num_bytes: 542089920
num_examples: 480
download_size: 9677236
dataset_size: 1782066298
---
# Dataset Card for [Electricity Transformer Temperature](https://github.com/zhouhaoyi/ETDataset)
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [Electricity Transformer Dataset](https://github.com/zhouhaoyi/ETDataset)
- **Repository:** https://github.com/zhouhaoyi/ETDataset
- **Paper:** [Informer: Beyond Efficient Transformer for Long Sequence Time-Series Forecasting](https://arxiv.org/abs/2012.07436)
- **Point of Contact:** [Haoyi Zhou](mailto:zhouhy@act.buaa.edu.cn)
### Dataset Summary
The electric power distribution problem is the distribution of electricity to different areas depending on its sequential usage. But predicting the future demand of a specific area is difficult, as it varies with weekdays, holidays, seasons, weather, temperatures, etc. However, no existing method can perform a long-term prediction based on super long-term real-world data with high precision. Any false prediction may damage the electrical transformer. So currently, without an efficient method to predict future electric usage, managers have to make decisions based on an empirical number, which is much higher than the real-world demand. This causes unnecessary waste of electricity and equipment depreciation. On the other hand, the oil temperature can reflect the condition of the transformer. One of the most efficient strategies is to predict whether the electrical transformer's oil temperature is safe and thereby avoid unnecessary waste. To address this problem, the authors and Beijing Guowang Fuda Science & Technology Development Company have provided 2 years' worth of data.
Specifically, the dataset combines short-term periodical patterns, long-term periodical patterns, long-term trends, and many irregular patterns. The dataset is obtained from 2 electricity transformers at 2 stations and comes in a `1H` (hourly) or `15T` (15-minute) frequency, containing 2 years * 365 days * 24 hours (* 4 for `15T`) = 17,520 (70,080 for `15T`) data points.
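A quick sanity check of the data-point counts quoted above:

```python
hours_per_year = 365 * 24
hourly_points = 2 * hours_per_year          # two years at 1H frequency
quarter_hour_points = hourly_points * 4     # four 15-minute steps per hour

print(hourly_points)        # 17520
print(quarter_hour_points)  # 70080
```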
The target time series is the **O**il **T**emperature and the dataset comes with the following 6 covariates in the univariate setup:
* **H**igh **U**se**F**ul **L**oad
* **H**igh **U**se**L**ess **L**oad
* **M**iddle **U**se**F**ul **L**oad
* **M**iddle **U**se**L**ess **L**oad
* **L**ow **U**se**F**ul **L**oad
* **L**ow **U**se**L**ess **L**oad
### Dataset Usage
To load a particular variant of the dataset just specify its name e.g:
```python
load_dataset("ett", "m1", multivariate=False) # univariate 15-min frequency dataset from first transformer
```
or to specify a prediction length:
```python
load_dataset("ett", "h2", prediction_length=48) # multivariate dataset from second transformer with prediction length of 48 (hours)
```
### Supported Tasks and Leaderboards
The time series data is split into train/val/test sets of 12/4/4 months, respectively. Given the prediction length (default: 1 day, i.e. 24 hours or 24*4 `15T` steps), we create rolling windows of this size for the val/test sets.
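A minimal sketch of how such rolling evaluation windows can be cut from a series; this illustrates the idea only, not the loader's actual code:

```python
def rolling_windows(series, prediction_length, stride=None):
    """Split a series into consecutive evaluation windows.

    Each window extends `stride` (default: `prediction_length`) steps further
    along the series, mimicking the rolling val/test construction above:
    every window's last `prediction_length` values serve as ground truth.
    """
    stride = stride or prediction_length
    windows = []
    end = prediction_length
    while end <= len(series):
        windows.append(series[:end])
        end += stride
    return windows

series = list(range(10))
print([len(w) for w in rolling_windows(series, prediction_length=3)])  # [3, 6, 9]
```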
#### `time-series-forecasting`
##### `univariate-time-series-forecasting`
The univariate time series forecasting task involves learning the future one-dimensional `target` values of a time series in a dataset for some `prediction_length` time steps. The performance of the forecast models can then be validated via the ground truth in the `validation` split and tested via the `test` split. The covariates are stored in the `feat_dynamic_real` key of each time series.
##### `multivariate-time-series-forecasting`
The multivariate time series forecasting task involves learning the future vector of `target` values of a time series in a dataset for some `prediction_length` time steps. As in the univariate setting, the performance of a multivariate model can be validated via the ground truth in the `validation` split and tested via the `test` split.
### Languages
## Dataset Structure
### Data Instances
A sample from the training set is provided below:
```python
{
'start': datetime.datetime(2012, 1, 1, 0, 0),
'target': [14.0, 18.0, 21.0, 20.0, 22.0, 20.0, ...],
'feat_static_cat': [0],
'feat_dynamic_real': [[0.3, 0.4], [0.1, 0.6], ...],
'item_id': 'OT'
}
```
### Data Fields
For the univariate regular time series each series has the following keys:
* `start`: a datetime of the first entry of each time series in the dataset
* `target`: an array[float32] of the actual target values
* `feat_static_cat`: an array[uint64] which contains a categorical identifier of each time series in the dataset
* `feat_dynamic_real`: optional array of covariate features
* `item_id`: a string identifier of each time series in a dataset for reference
For the multivariate time series the `target` is a vector of the multivariate dimension for each time point.
### Data Splits
The time series data is split into train/val/test set of 12/4/4 months respectively.
## Dataset Creation
### Curation Rationale
Develop time series methods that can perform long-term predictions on very long real-world data with high precision.
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
* [Haoyi Zhou](mailto:zhouhy@act.buaa.edu.cn)
### Licensing Information
[Creative Commons Attribution 4.0 International](https://creativecommons.org/licenses/by/4.0/legalcode)
### Citation Information
```tex
@inproceedings{haoyietal-informer-2021,
author = {Haoyi Zhou and
Shanghang Zhang and
Jieqi Peng and
Shuai Zhang and
Jianxin Li and
Hui Xiong and
Wancai Zhang},
title = {Informer: Beyond Efficient Transformer for Long Sequence Time-Series Forecasting},
booktitle = {The Thirty-Fifth {AAAI} Conference on Artificial Intelligence, {AAAI} 2021, Virtual Conference},
volume = {35},
number = {12},
pages = {11106--11115},
publisher = {{AAAI} Press},
year = {2021},
}
```
### Contributions
Thanks to [@kashif](https://github.com/kashif) for adding this dataset. | 10,049 | [
[
-0.042877197265625,
-0.038909912109375,
0.00858306884765625,
0.0176544189453125,
-0.0135040283203125,
0.00881195068359375,
-0.0168304443359375,
-0.0272064208984375,
0.01078033447265625,
0.022552490234375,
-0.0579833984375,
-0.03509521484375,
-0.03369140625,
... |
bigbio/an_em | 2022-12-22T15:43:14.000Z | [
"multilinguality:monolingual",
"language:en",
"license:cc-by-sa-3.0",
"region:us"
] | bigbio | AnEM corpus is a domain- and species-independent resource manually annotated for anatomical
entity mentions using a fine-grained classification system. The corpus consists of 500 documents
(over 90,000 words) selected randomly from citation abstracts and full-text papers with
the aim of making the corpus representative of the entire available biomedical scientific
literature. The corpus annotation covers mentions of both healthy and pathological anatomical
entities and contains over 3,000 annotated mentions. | @inproceedings{ohta-etal-2012-open,
author = {Ohta, Tomoko and Pyysalo, Sampo and Tsujii, Jun{'}ichi and Ananiadou, Sophia},
title = {Open-domain Anatomical Entity Mention Detection},
journal = {},
volume = {W12-43},
year = {2012},
url = {https://aclanthology.org/W12-4304},
doi = {},
biburl = {},
bibsource = {},
publisher = {Association for Computational Linguistics}
} | 1 | 100 | 2022-11-13T18:05:07 |
---
language:
- en
bigbio_language:
- English
license: cc-by-sa-3.0
multilinguality: monolingual
bigbio_license_shortname: CC_BY_SA_3p0
pretty_name: AnEM
homepage: http://www.nactem.ac.uk/anatomy/
bigbio_pubmed: True
bigbio_public: True
bigbio_tasks:
- NAMED_ENTITY_RECOGNITION
- COREFERENCE_RESOLUTION
- RELATION_EXTRACTION
---
# Dataset Card for AnEM
## Dataset Description
- **Homepage:** http://www.nactem.ac.uk/anatomy/
- **Pubmed:** True
- **Public:** True
- **Tasks:** NER,COREF,RE
AnEM corpus is a domain- and species-independent resource manually annotated for anatomical
entity mentions using a fine-grained classification system. The corpus consists of 500 documents
(over 90,000 words) selected randomly from citation abstracts and full-text papers with
the aim of making the corpus representative of the entire available biomedical scientific
literature. The corpus annotation covers mentions of both healthy and pathological anatomical
entities and contains over 3,000 annotated mentions.
## Citation Information
```
@inproceedings{ohta-etal-2012-open,
author = {Ohta, Tomoko and Pyysalo, Sampo and Tsujii, Jun{'}ichi and Ananiadou, Sophia},
title = {Open-domain Anatomical Entity Mention Detection},
journal = {},
volume = {W12-43},
year = {2012},
url = {https://aclanthology.org/W12-4304},
doi = {},
biburl = {},
bibsource = {},
publisher = {Association for Computational Linguistics}
}
```
| 1,474 | [
[
-0.025238037109375,
-0.04315185546875,
0.0229034423828125,
-0.00308990478515625,
-0.035888671875,
-0.0161285400390625,
-0.00458526611328125,
-0.04290771484375,
0.053680419921875,
0.036590576171875,
-0.021148681640625,
-0.06585693359375,
-0.03265380859375,
0.... |
bigbio/genia_ptm_event_corpus | 2022-12-22T15:44:39.000Z | [
"multilinguality:monolingual",
"language:en",
"license:other",
"region:us"
] | bigbio | Post-translational-modifications (PTM), amino acid modifications of proteins after translation, are one of the posterior processes of protein biosynthesis for many proteins, and they are critical for determining protein function such as its activity state, localization, turnover and interactions with other biomolecules. While there have been many studies of information extraction targeting individual PTM types, there was until recently little effort to address extraction of multiple PTM types at once in a unified framework. | @inproceedings{ohta-etal-2010-event,
title = "Event Extraction for Post-Translational Modifications",
author = "Ohta, Tomoko and
Pyysalo, Sampo and
Miwa, Makoto and
Kim, Jin-Dong and
Tsujii, Jun{'}ichi",
booktitle = "Proceedings of the 2010 Workshop on Biomedical Natural Language Processing",
month = jul,
year = "2010",
address = "Uppsala, Sweden",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/W10-1903",
pages = "19--27",
} | 1 | 100 | 2022-11-13T22:08:36 |
---
language:
- en
bigbio_language:
- English
license: other
multilinguality: monolingual
bigbio_license_shortname: GENIA_PROJECT_LICENSE
pretty_name: PTM Events
homepage: http://www.geniaproject.org/other-corpora/ptm-event-corpus
bigbio_pubmed: True
bigbio_public: True
bigbio_tasks:
- NAMED_ENTITY_RECOGNITION
- COREFERENCE_RESOLUTION
- EVENT_EXTRACTION
---
# Dataset Card for PTM Events
## Dataset Description
- **Homepage:** http://www.geniaproject.org/other-corpora/ptm-event-corpus
- **Pubmed:** True
- **Public:** True
- **Tasks:** NER,COREF,EE
Post-translational-modifications (PTM), amino acid modifications of proteins after translation, are one of the posterior processes of protein biosynthesis for many proteins, and they are critical for determining protein function such as its activity state, localization, turnover and interactions with other biomolecules. While there have been many studies of information extraction targeting individual PTM types, there was until recently little effort to address extraction of multiple PTM types at once in a unified framework.
## Citation Information
```
@inproceedings{ohta-etal-2010-event,
title = "Event Extraction for Post-Translational Modifications",
author = "Ohta, Tomoko and
Pyysalo, Sampo and
Miwa, Makoto and
Kim, Jin-Dong and
Tsujii, Jun{'}ichi",
booktitle = "Proceedings of the 2010 Workshop on Biomedical Natural Language Processing",
month = jul,
year = "2010",
address = "Uppsala, Sweden",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/W10-1903",
pages = "19--27",
}
```
| 1,662 | [
[
-0.0222015380859375,
-0.044830322265625,
0.0284271240234375,
-0.00571441650390625,
-0.035064697265625,
-0.006504058837890625,
-0.0237579345703125,
-0.0255584716796875,
0.0295257568359375,
0.0232086181640625,
-0.029449462890625,
-0.056732177734375,
-0.05587768554... |
irds/cranfield | 2023-01-05T03:01:23.000Z | [
"task_categories:text-retrieval",
"region:us"
] | irds | null | null | 0 | 100 | 2023-01-05T03:01:17 | ---
pretty_name: '`cranfield`'
viewer: false
source_datasets: []
task_categories:
- text-retrieval
---
# Dataset Card for `cranfield`
The `cranfield` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/cranfield#cranfield).
# Data
This dataset provides:
- `docs` (documents, i.e., the corpus); count=1,400
- `queries` (i.e., topics); count=225
- `qrels` (relevance assessments); count=1,837
## Usage
```python
from datasets import load_dataset
docs = load_dataset('irds/cranfield', 'docs')
for record in docs:
record # {'doc_id': ..., 'title': ..., 'text': ..., 'author': ..., 'bib': ...}
queries = load_dataset('irds/cranfield', 'queries')
for record in queries:
record # {'query_id': ..., 'text': ...}
qrels = load_dataset('irds/cranfield', 'qrels')
for record in qrels:
record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...}
```
Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the
data in 🤗 Dataset format.
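The qrels records above can be turned into a plain relevance lookup for evaluation; a minimal sketch, with made-up records standing in for the real ones:

```python
def build_qrels_lookup(records):
    """Map (query_id, doc_id) pairs to their relevance judgments."""
    return {(r["query_id"], r["doc_id"]): r["relevance"] for r in records}

# Hypothetical records in the shape shown above.
records = [
    {"query_id": "1", "doc_id": "184", "relevance": 2, "iteration": "0"},
    {"query_id": "1", "doc_id": "29", "relevance": -1, "iteration": "0"},
]
qrels = build_qrels_lookup(records)
print(qrels[("1", "184")])  # 2
```

Unjudged (query, document) pairs are simply absent from the lookup.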
| 1,155 | [
[
-0.025665283203125,
-0.01177215576171875,
0.006504058837890625,
0.0079803466796875,
-0.0065765380859375,
-0.014984130859375,
-0.01194000244140625,
-0.0126800537109375,
0.0162353515625,
0.0518798828125,
-0.037811279296875,
-0.06683349609375,
-0.031341552734375,
... |
Multimodal-Fatima/StanfordCars_test | 2023-06-12T02:33:45.000Z | [
"region:us"
] | Multimodal-Fatima | null | null | 0 | 100 | 2023-01-28T02:30:24 | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': am general hummer suv 2000
'1': acura rl sedan 2012
'2': acura tl sedan 2012
'3': acura tl type-s 2008
'4': acura tsx sedan 2012
'5': acura integra type r 2001
'6': acura zdx hatchback 2012
'7': aston martin v8 vantage convertible 2012
'8': aston martin v8 vantage coupe 2012
'9': aston martin virage convertible 2012
'10': aston martin virage coupe 2012
'11': audi rs 4 convertible 2008
'12': audi a5 coupe 2012
'13': audi tts coupe 2012
'14': audi r8 coupe 2012
'15': audi v8 sedan 1994
'16': audi 100 sedan 1994
'17': audi 100 wagon 1994
'18': audi tt hatchback 2011
'19': audi s6 sedan 2011
'20': audi s5 convertible 2012
'21': audi s5 coupe 2012
'22': audi s4 sedan 2012
'23': audi s4 sedan 2007
'24': audi tt rs coupe 2012
'25': bmw activehybrid 5 sedan 2012
'26': bmw 1 series convertible 2012
'27': bmw 1 series coupe 2012
'28': bmw 3 series sedan 2012
'29': bmw 3 series wagon 2012
'30': bmw 6 series convertible 2007
'31': bmw x5 suv 2007
'32': bmw x6 suv 2012
'33': bmw m3 coupe 2012
'34': bmw m5 sedan 2010
'35': bmw m6 convertible 2010
'36': bmw x3 suv 2012
'37': bmw z4 convertible 2012
'38': bentley continental supersports conv. convertible 2012
'39': bentley arnage sedan 2009
'40': bentley mulsanne sedan 2011
'41': bentley continental gt coupe 2012
'42': bentley continental gt coupe 2007
'43': bentley continental flying spur sedan 2007
'44': bugatti veyron 16.4 convertible 2009
'45': bugatti veyron 16.4 coupe 2009
'46': buick regal gs 2012
'47': buick rainier suv 2007
'48': buick verano sedan 2012
'49': buick enclave suv 2012
'50': cadillac cts-v sedan 2012
'51': cadillac srx suv 2012
'52': cadillac escalade ext crew cab 2007
'53': chevrolet silverado 1500 hybrid crew cab 2012
'54': chevrolet corvette convertible 2012
'55': chevrolet corvette zr1 2012
'56': chevrolet corvette ron fellows edition z06 2007
'57': chevrolet traverse suv 2012
'58': chevrolet camaro convertible 2012
'59': chevrolet hhr ss 2010
'60': chevrolet impala sedan 2007
'61': chevrolet tahoe hybrid suv 2012
'62': chevrolet sonic sedan 2012
'63': chevrolet express cargo van 2007
'64': chevrolet avalanche crew cab 2012
'65': chevrolet cobalt ss 2010
'66': chevrolet malibu hybrid sedan 2010
'67': chevrolet trailblazer ss 2009
'68': chevrolet silverado 2500hd regular cab 2012
'69': chevrolet silverado 1500 classic extended cab 2007
'70': chevrolet express van 2007
'71': chevrolet monte carlo coupe 2007
'72': chevrolet malibu sedan 2007
'73': chevrolet silverado 1500 extended cab 2012
'74': chevrolet silverado 1500 regular cab 2012
'75': chrysler aspen suv 2009
'76': chrysler sebring convertible 2010
'77': chrysler town and country minivan 2012
'78': chrysler 300 srt-8 2010
'79': chrysler crossfire convertible 2008
'80': chrysler pt cruiser convertible 2008
'81': daewoo nubira wagon 2002
'82': dodge caliber wagon 2012
'83': dodge caliber wagon 2007
'84': dodge caravan minivan 1997
'85': dodge ram pickup 3500 crew cab 2010
'86': dodge ram pickup 3500 quad cab 2009
'87': dodge sprinter cargo van 2009
'88': dodge journey suv 2012
'89': dodge dakota crew cab 2010
'90': dodge dakota club cab 2007
'91': dodge magnum wagon 2008
'92': dodge challenger srt8 2011
'93': dodge durango suv 2012
'94': dodge durango suv 2007
'95': dodge charger sedan 2012
'96': dodge charger srt-8 2009
'97': eagle talon hatchback 1998
'98': fiat 500 abarth 2012
'99': fiat 500 convertible 2012
'100': ferrari ff coupe 2012
'101': ferrari california convertible 2012
'102': ferrari 458 italia convertible 2012
'103': ferrari 458 italia coupe 2012
'104': fisker karma sedan 2012
'105': ford f-450 super duty crew cab 2012
'106': ford mustang convertible 2007
'107': ford freestar minivan 2007
'108': ford expedition el suv 2009
'109': ford edge suv 2012
'110': ford ranger supercab 2011
'111': ford gt coupe 2006
'112': ford f-150 regular cab 2012
'113': ford f-150 regular cab 2007
'114': ford focus sedan 2007
'115': ford e-series wagon van 2012
'116': ford fiesta sedan 2012
'117': gmc terrain suv 2012
'118': gmc savana van 2012
'119': gmc yukon hybrid suv 2012
'120': gmc acadia suv 2012
'121': gmc canyon extended cab 2012
'122': geo metro convertible 1993
'123': hummer h3t crew cab 2010
'124': hummer h2 sut crew cab 2009
'125': honda odyssey minivan 2012
'126': honda odyssey minivan 2007
'127': honda accord coupe 2012
'128': honda accord sedan 2012
'129': hyundai veloster hatchback 2012
'130': hyundai santa fe suv 2012
'131': hyundai tucson suv 2012
'132': hyundai veracruz suv 2012
'133': hyundai sonata hybrid sedan 2012
'134': hyundai elantra sedan 2007
'135': hyundai accent sedan 2012
'136': hyundai genesis sedan 2012
'137': hyundai sonata sedan 2012
'138': hyundai elantra touring hatchback 2012
'139': hyundai azera sedan 2012
'140': infiniti g coupe ipl 2012
'141': infiniti qx56 suv 2011
'142': isuzu ascender suv 2008
'143': jaguar xk xkr 2012
'144': jeep patriot suv 2012
'145': jeep wrangler suv 2012
'146': jeep liberty suv 2012
'147': jeep grand cherokee suv 2012
'148': jeep compass suv 2012
'149': lamborghini reventon coupe 2008
'150': lamborghini aventador coupe 2012
'151': lamborghini gallardo lp 570-4 superleggera 2012
'152': lamborghini diablo coupe 2001
'153': land rover range rover suv 2012
'154': land rover lr2 suv 2012
'155': lincoln town car sedan 2011
'156': mini cooper roadster convertible 2012
'157': maybach landaulet convertible 2012
'158': mazda tribute suv 2011
'159': mclaren mp4-12c coupe 2012
'160': mercedes-benz 300-class convertible 1993
'161': mercedes-benz c-class sedan 2012
'162': mercedes-benz sl-class coupe 2009
'163': mercedes-benz e-class sedan 2012
'164': mercedes-benz s-class sedan 2012
'165': mercedes-benz sprinter van 2012
'166': mitsubishi lancer sedan 2012
'167': nissan leaf hatchback 2012
'168': nissan nv passenger van 2012
'169': nissan juke hatchback 2012
'170': nissan 240sx coupe 1998
'171': plymouth neon coupe 1999
'172': porsche panamera sedan 2012
'173': ram c/v cargo van minivan 2012
'174': rolls-royce phantom drophead coupe convertible 2012
'175': rolls-royce ghost sedan 2012
'176': rolls-royce phantom sedan 2012
'177': scion xd hatchback 2012
'178': spyker c8 convertible 2009
'179': spyker c8 coupe 2009
'180': suzuki aerio sedan 2007
'181': suzuki kizashi sedan 2012
'182': suzuki sx4 hatchback 2012
'183': suzuki sx4 sedan 2012
'184': tesla model s sedan 2012
'185': toyota sequoia suv 2012
'186': toyota camry sedan 2012
'187': toyota corolla sedan 2012
'188': toyota 4runner suv 2012
'189': volkswagen golf hatchback 2012
'190': volkswagen golf hatchback 1991
'191': volkswagen beetle hatchback 2012
'192': volvo c30 hatchback 2012
'193': volvo 240 sedan 1993
'194': volvo xc90 suv 2007
'195': smart fortwo convertible 2012
- name: id
dtype: int64
- name: clip_tags_ViT_L_14
sequence: string
- name: LLM_Description_opt175b_downstream_tasks_ViT_L_14
sequence: string
- name: LLM_Description_gpt3_downstream_tasks_ViT_L_14
sequence: string
- name: LLM_Description_gpt3_downstream_tasks_visual_genome_ViT_L_14
sequence: string
- name: blip_caption_beam_5
dtype: string
- name: Attributes_ViT_L_14_text_davinci_003_full
sequence: string
- name: Attributes_ViT_L_14_text_davinci_003_stanfordcars
sequence: string
- name: clip_tags_ViT_L_14_with_openai_classes
sequence: string
- name: clip_tags_ViT_L_14_wo_openai_classes
sequence: string
- name: clip_tags_ViT_L_14_simple_specific
dtype: string
- name: clip_tags_ViT_L_14_ensemble_specific
dtype: string
- name: clip_tags_ViT_B_16_simple_specific
dtype: string
- name: clip_tags_ViT_B_16_ensemble_specific
dtype: string
- name: clip_tags_ViT_B_32_simple_specific
dtype: string
- name: clip_tags_ViT_B_32_ensemble_specific
dtype: string
- name: Attributes_ViT_B_16_descriptors_text_davinci_003_full
sequence: string
- name: Attributes_LAION_ViT_H_14_2B_descriptors_text_davinci_003_full
sequence: string
- name: clip_tags_LAION_ViT_H_14_2B_simple_specific
dtype: string
- name: clip_tags_LAION_ViT_H_14_2B_ensemble_specific
dtype: string
- name: Attributes_ViT_L_14_descriptors_text_davinci_003_full
sequence: string
splits:
- name: test
num_bytes: 1016320238.0
num_examples: 8041
download_size: 989991348
dataset_size: 1016320238.0
---
# Dataset Card for "StanfordCars_test"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 10,572 | [
[
-0.0445556640625,
-0.0235595703125,
0.0148162841796875,
0.028045654296875,
-0.00760650634765625,
-0.00769805908203125,
0.01015472412109375,
-0.01541900634765625,
0.03033447265625,
0.01983642578125,
-0.062225341796875,
-0.047119140625,
-0.0140838623046875,
-0... |
Francesco/animals-ij5d2 | 2023-03-30T09:30:09.000Z | [
"task_categories:object-detection",
"annotations_creators:crowdsourced",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:en",
"license:cc",
"rf100",
"region:us"
] | Francesco | null | null | 4 | 100 | 2023-03-30T09:29:48 | ---
dataset_info:
features:
- name: image_id
dtype: int64
- name: image
dtype: image
- name: width
dtype: int32
- name: height
dtype: int32
- name: objects
sequence:
- name: id
dtype: int64
- name: area
dtype: int64
- name: bbox
sequence: float32
length: 4
- name: category
dtype:
class_label:
names:
'0': animals
'1': cat
'2': chicken
'3': cow
'4': dog
'5': fox
'6': goat
'7': horse
'8': person
'9': racoon
'10': skunk
annotations_creators:
- crowdsourced
language_creators:
- found
language:
- en
license:
- cc
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- object-detection
task_ids: []
pretty_name: animals-ij5d2
tags:
- rf100
---
# Dataset Card for animals-ij5d2
**The original COCO dataset is stored at `dataset.tar.gz`**
## Dataset Description
- **Homepage:** https://universe.roboflow.com/object-detection/animals-ij5d2
- **Point of Contact:** francesco.zuppichini@gmail.com
### Dataset Summary
animals-ij5d2
### Supported Tasks and Leaderboards
- `object-detection`: The dataset can be used to train a model for Object Detection.
### Languages
English
## Dataset Structure
### Data Instances
A data point comprises an image and its object annotations.
```
{
'image_id': 15,
'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=640x640 at 0x2373B065C18>,
'width': 964043,
'height': 640,
'objects': {
'id': [114, 115, 116, 117],
'area': [3796, 1596, 152768, 81002],
'bbox': [
[302.0, 109.0, 73.0, 52.0],
[810.0, 100.0, 57.0, 28.0],
[160.0, 31.0, 248.0, 616.0],
[741.0, 68.0, 202.0, 401.0]
],
'category': [4, 4, 0, 0]
}
}
```
### Data Fields
- `image`: the image id
- `image`: `PIL.Image.Image` object containing the image. Note that when accessing the image column: `dataset[0]["image"]` the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. Thus it is important to first query the sample index before the `"image"` column, *i.e.* `dataset[0]["image"]` should **always** be preferred over `dataset["image"][0]`
- `width`: the image width
- `height`: the image height
- `objects`: a dictionary containing bounding box metadata for the objects present on the image
- `id`: the annotation id
- `area`: the area of the bounding box
- `bbox`: the object's bounding box (in the [coco](https://albumentations.ai/docs/getting_started/bounding_boxes_augmentation/#coco) format)
- `category`: the object's category.
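As a rough illustration of the coco bbox convention (`[x_min, y_min, width, height]`), the `area` values in the sample instance above are just `width * height` of each box; helpers like these (not part of the dataset itself) make that explicit:

```python
def bbox_area(bbox):
    """Area of a coco-format box [x_min, y_min, width, height]."""
    x, y, w, h = bbox
    return w * h

def bbox_to_corners(bbox):
    """Convert coco [x_min, y_min, width, height] to [x_min, y_min, x_max, y_max]."""
    x, y, w, h = bbox
    return [x, y, x + w, y + h]

# Boxes from the sample instance shown above.
bboxes = [
    [302.0, 109.0, 73.0, 52.0],
    [810.0, 100.0, 57.0, 28.0],
    [160.0, 31.0, 248.0, 616.0],
    [741.0, 68.0, 202.0, 401.0],
]
print([int(bbox_area(b)) for b in bboxes])  # [3796, 1596, 152768, 81002]
```

The printed areas match the `area` field of the sample, confirming the box convention.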
#### Who are the annotators?
Annotators are Roboflow users
## Additional Information
### Licensing Information
See original homepage https://universe.roboflow.com/object-detection/animals-ij5d2
### Citation Information
```
@misc{ animals-ij5d2,
title = { animals ij5d2 Dataset },
type = { Open Source Dataset },
author = { Roboflow 100 },
howpublished = { \url{ https://universe.roboflow.com/object-detection/animals-ij5d2 } },
url = { https://universe.roboflow.com/object-detection/animals-ij5d2 },
journal = { Roboflow Universe },
publisher = { Roboflow },
year = { 2022 },
month = { nov },
note = { visited on 2023-03-29 },
}"
```
### Contributions
Thanks to [@mariosasko](https://github.com/mariosasko) for adding this dataset. | 3,545 | [
[
-0.054351806640625,
-0.0265960693359375,
0.0030498504638671875,
0.002471923828125,
-0.0298919677734375,
-0.01387786865234375,
-0.00521087646484375,
-0.050201416015625,
0.018218994140625,
0.023956298828125,
-0.043914794921875,
-0.06719970703125,
-0.0355224609375,... |
lucadiliello/wikiqa_grouped | 2023-05-30T08:14:53.000Z | [
"task_categories:text-classification",
"size_categories:1K<n<10K",
"language:en",
"region:us"
] | lucadiliello | null | null | 0 | 100 | 2023-05-30T08:12:28 | ---
task_categories:
- text-classification
language:
- en
pretty_name: WikiQA
size_categories:
- 1K<n<10K
---
WikiQA dataset with answers grouped together for each question. | 173 | [
[
-0.04931640625,
-0.03607177734375,
-0.003040313720703125,
-0.0177459716796875,
0.01285552978515625,
-0.01007080078125,
0.0267791748046875,
0.01434326171875,
0.0399169921875,
0.052490234375,
-0.053253173828125,
-0.0258941650390625,
-0.0085906982421875,
0.0290... |
truehealth/medicationqa | 2023-06-12T14:24:14.000Z | [
"region:us"
] | truehealth | null | null | 1 | 100 | 2023-06-12T11:28:52 | ---
dataset_info:
features:
- name: Question
dtype: string
- name: Focus (Drug)
dtype: string
- name: Question Type
dtype: string
- name: Answer
dtype: string
- name: Section Title
dtype: string
- name: URL
dtype: string
splits:
- name: train
num_bytes: 403030
num_examples: 690
download_size: 0
dataset_size: 403030
---
# Dataset Card for "medicationqa"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 541 | [
[
-0.0242156982421875,
-0.01210784912109375,
0.027252197265625,
-0.00040793418884277344,
0.004833221435546875,
0.0002073049545288086,
0.027252197265625,
-0.00850677490234375,
0.061492919921875,
0.0428466796875,
-0.061309814453125,
-0.05859375,
-0.050201416015625,
... |
xin1997/vulfix_real_deduplicated | 2023-07-02T05:34:34.000Z | [
"region:us"
] | xin1997 | null | null | 0 | 100 | 2023-07-02T05:33:58 | Entry not found | 15 | [
[
-0.02142333984375,
-0.014984130859375,
0.057220458984375,
0.0288238525390625,
-0.03509521484375,
0.04656982421875,
0.052520751953125,
0.00506591796875,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060455322265625,
0.03793334... |
facat/sci-llm-part | 2023-10-07T13:33:53.000Z | [
"region:us"
] | facat | null | null | 1 | 100 | 2023-10-04T06:21:06 | ---
configs:
- config_name: default
data_files:
- split: gpt1
path: data/gpt1-*
- split: gpt2
path: data/gpt2-*
- split: gpt3
path: data/gpt3-*
- split: gpt4
path: data/gpt4-*
- split: gpt5
path: data/gpt5-*
- split: gpt6
path: data/gpt6-*
- split: han_40k
path: data/han_40k-*
- split: base_60k
path: data/base_60k-*
- split: test
path: data/test-*
- split: test2
path: data/test2-*
dataset_info:
features:
- name: prompt
dtype: string
- name: context
dtype: string
- name: chosen
dtype: string
- name: A
dtype: string
- name: B
dtype: string
- name: C
dtype: string
- name: D
dtype: string
splits:
- name: gpt1
num_bytes: 130420316
num_examples: 22113
- name: gpt2
num_bytes: 264545680
num_examples: 44859
- name: gpt3
num_bytes: 98018603
num_examples: 16648
- name: gpt4
num_bytes: 309111447
num_examples: 52813
- name: gpt5
num_bytes: 99277151
num_examples: 16795
- name: gpt6
num_bytes: 110054529
num_examples: 18325
- name: han_40k
num_bytes: 236235210
num_examples: 40807
- name: base_60k
num_bytes: 292172331
num_examples: 54209
- name: test
num_bytes: 2214599
num_examples: 500
- name: test2
num_bytes: 1111116
num_examples: 200
download_size: 311808265
dataset_size: 1543160982
---
# Dataset Card for "sci-llm-part"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 1,570 | [
[
-0.03094482421875,
-0.006786346435546875,
0.031951904296875,
0.019622802734375,
-0.0279388427734375,
0.01129913330078125,
0.0313720703125,
-0.00952911376953125,
0.072265625,
0.0243682861328125,
-0.0709228515625,
-0.05718994140625,
-0.037353515625,
-0.0036735... |
hheiden/us-congress-117-bills | 2023-10-06T23:27:47.000Z | [
"task_categories:text-classification",
"size_categories:10K<n<100K",
"language:en",
"license:mit",
"legal",
"doi:10.57967/hf/1193",
"region:us"
] | hheiden | null | null | 1 | 100 | 2023-10-06T22:38:16 | ---
license: mit
task_categories:
- text-classification
language:
- en
tags:
- legal
pretty_name: US 117th Congress Bills
size_categories:
- 10K<n<100K
---
# Dataset Card for Dataset US 117th Congress Bills
## Dataset Description
- **Homepage:** https://hunterheidenreich.com/posts/us-117th-congress-data-exploration/
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:** Hunter Heidenreich
### Dataset Summary
The US 117th Congress Bills dataset is a collection of all of the House Resolutions, House Joint Resolutions,
Senate Resolutions, and Senate Joint Resolutions introduced during the 117th Congress (2021-2022).
The task is to classify each bill into one of thirty-three major policy areas.
There are 11,389 bills in the training split and 3,797 bills in the testing split.
### Supported Tasks and Leaderboards
- `text-classification`: The goal is to classify each bill into one of thirty-three major policy areas. The dataset contains both a text label (`policy_areas`) and a class integer (`y`).
These classes correspond to:
- 0: Agriculture and Food
- 1: Animals
- 2: Armed Forces and National Security
- 3: Arts, Culture, Religion
- 4: Civil Rights and Liberties, Minority Issues
- 5: Commerce
- 6: Congress
- 7: Crime and Law Enforcement
- 8: Economics and Public Finance
- 9: Education
- 10: Emergency Management
- 11: Energy
- 12: Environmental Protection
- 13: Families
- 14: Finance and Financial Sector
- 15: Foreign Trade and International Finance
- 16: Government Operations and Politics
- 17: Health
- 18: Housing and Community Development
- 19: Immigration
- 20: International Affairs
- 21: Labor and Employment
- 22: Law
- 23: Native Americans
- 24: Private Legislation
- 25: Public Lands and Natural Resources
- 26: Science, Technology, Communications
- 27: Social Sciences and History
- 28: Social Welfare
- 29: Sports and Recreation
- 30: Taxation
- 31: Transportation and Public Works
- 32: Water Resources Development
There is no leaderboard currently.
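For convenience, the mapping above can be transcribed into code. This is a minimal sketch: `POLICY_AREAS` and `label_to_id` are illustrative names, not part of any published API; index `i` corresponds to class integer `y = i`.

```python
# Label mapping for the text-classification task, transcribed from the
# class list above (index i corresponds to class integer y = i).
POLICY_AREAS = [
    "Agriculture and Food", "Animals", "Armed Forces and National Security",
    "Arts, Culture, Religion", "Civil Rights and Liberties, Minority Issues",
    "Commerce", "Congress", "Crime and Law Enforcement",
    "Economics and Public Finance", "Education", "Emergency Management",
    "Energy", "Environmental Protection", "Families",
    "Finance and Financial Sector", "Foreign Trade and International Finance",
    "Government Operations and Politics", "Health",
    "Housing and Community Development", "Immigration",
    "International Affairs", "Labor and Employment", "Law",
    "Native Americans", "Private Legislation",
    "Public Lands and Natural Resources",
    "Science, Technology, Communications",
    "Social Sciences and History", "Social Welfare", "Sports and Recreation",
    "Taxation", "Transportation and Public Works",
    "Water Resources Development",
]

def label_to_id(name: str) -> int:
    """Map a policy-area string (the `policy_areas` field) to its class integer `y`."""
    return POLICY_AREAS.index(name)

print(len(POLICY_AREAS))              # 33
print(label_to_id("Social Welfare"))  # 28
```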
### Languages
English
## Dataset Structure
### Data Instances
```
index 11047
id H.R.4536
policy_areas Social Welfare
cur_summary Welfare for Needs not Weed Act\nThis bill proh...
cur_text To prohibit assistance provided under the prog...
title Welfare for Needs not Weed Act
titles_official To prohibit assistance provided under the prog...
titles_short Welfare for Needs not Weed Act
sponsor_name Rep. Rice, Tom
sponsor_party R
sponsor_state SC
Name: 0, dtype: object
```
### Data Fields
- `index`: A numeric index
- `id`: The unique bill ID as a string
- `policy_areas`: The key policy area as a string. This is the classification label.
- `cur_summary`: The latest summary of the bill as a string.
- `cur_text`: The latest text of the bill as a string.
- `title`: The core title of the bill, as labeled on [Congress.gov](congress.gov), as a string.
- `titles_official`: All official titles of the bill (or nested legislation) as a string.
- `titles_short`: All short titles of the bill (or nested legislation) as a string.
- `sponsor_name`: The name of the primary representative sponsoring the legislation as a string.
- `sponsor_party`: The party of the primary sponsor as a string.
- `sponsor_state`: The home state of the primary sponsor as a string.
### Data Splits
The dataset was split into training and testing splits using stratified sampling, due to the class imbalance in the dataset.
Using scikit-learn, a quarter of the data (by class) is reserved for testing:
```
from sklearn.model_selection import train_test_split

train_ix, test_ix = train_test_split(ixs, test_size=0.25, stratify=df['y'], random_state=1234567)
```
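As a sanity check, the effect of `stratify` can be sketched in pure Python without scikit-learn. The `stratified_split` helper below is hypothetical and only illustrates the idea: within each class, a fixed fraction of the indices is reserved for testing, so class proportions are preserved.

```python
# Pure-Python sketch of a stratified 25% hold-out (illustrative only;
# the card itself uses sklearn's train_test_split with stratify=df['y']).
from collections import defaultdict

def stratified_split(labels, test_fraction=0.25):
    """Reserve test_fraction of the indices *within each class* for testing."""
    by_class = defaultdict(list)
    for i, y in enumerate(labels):
        by_class[y].append(i)
    train_ix, test_ix = [], []
    for ixs in by_class.values():
        n_test = round(len(ixs) * test_fraction)
        test_ix.extend(ixs[:n_test])
        train_ix.extend(ixs[n_test:])
    return sorted(train_ix), sorted(test_ix)

# Toy example with two imbalanced classes (8 "Health" vs 4 "Law" bills):
labels = ["Health"] * 8 + ["Law"] * 4
train_ix, test_ix = stratified_split(labels)
print(len(train_ix), len(test_ix))  # 9 3
```

Each class contributes a quarter of its rows to the test split, which is why the 3,797-bill test split is almost exactly 25% of the 15,186 total bills.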
## Dataset Creation
### Curation Rationale
This dataset was created to provide a new resource at the intersection of NLP and legislation.
Using this data for simple major-topic classification seemed like a practical first step.
### Source Data
#### Initial Data Collection and Normalization
Data was collected from [congress.gov](congress.gov) with minimal pre-processing.
Additional information about this datasets collection is discussed [here](https://hunterheidenreich.com/posts/us-117th-congress-data-exploration/#data---how-it-was-obtained).
#### Who are the source language producers?
Either [Congressional Research Service](https://www.congress.gov/help/legislative-glossary#glossary_crs) or other congressional staffers.
### Annotations
#### Who are the annotators?
Congressional Staff
### Personal and Sensitive Information
None, this is publicly available text through [congress.gov](congress.gov).
## Additional Information
### Licensing Information
MIT License | 5,051 | [
[
-0.0275421142578125,
-0.05462646484375,
0.0135650634765625,
0.00817108154296875,
-0.024993896484375,
0.01116180419921875,
-0.0151519775390625,
-0.00001633167266845703,
0.038665771484375,
0.0560302734375,
-0.01983642578125,
-0.072509765625,
-0.03887939453125,
... |