id stringlengths 2 115 | lastModified stringlengths 24 24 | tags list | author stringlengths 2 42 ⌀ | description stringlengths 0 6.67k ⌀ | citation stringlengths 0 10.7k ⌀ | likes int64 0 3.66k | downloads int64 0 8.89M | created timestamp[us] | card stringlengths 11 977k | card_len int64 11 977k | embeddings list |
|---|---|---|---|---|---|---|---|---|---|---|---|
glaiveai/glaive-code-assistant-v2 | 2023-10-05T15:18:52.000Z | [
"region:us"
] | glaiveai | null | null | 3 | 31 | 2023-10-05T14:48:45 | Entry not found | 15 | [
[
-0.021392822265625,
-0.01494598388671875,
0.05718994140625,
0.028839111328125,
-0.0350341796875,
0.046539306640625,
0.052490234375,
0.00507354736328125,
0.051361083984375,
0.01702880859375,
-0.052093505859375,
-0.01494598388671875,
-0.06036376953125,
0.03790... |
marcus2000/timelist_summary_dataset | 2023-10-06T13:10:36.000Z | [
"region:us"
] | marcus2000 | null | null | 0 | 31 | 2023-10-06T13:10:28 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: Original
dtype: string
- name: Summary
dtype: string
splits:
- name: train
num_bytes: 352926.0853658537
num_examples: 278
- name: test
num_bytes: 63475.91463414634
num_examples: 50
download_size: 227279
dataset_size: 416402.0
---
# Dataset Card for "timelist_summary_dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 608 | [
[
-0.035675048828125,
-0.0108489990234375,
0.00868988037109375,
0.0272216796875,
-0.0220184326171875,
0.007354736328125,
0.0269012451171875,
-0.0142974853515625,
0.07257080078125,
0.017425537109375,
-0.055511474609375,
-0.0430908203125,
-0.03692626953125,
-0.0... |
johannes-garstenauer/embeddings_from_distilbert_class_heaps_and_eval_part0 | 2023-10-07T01:17:36.000Z | [
"region:us"
] | johannes-garstenauer | null | null | 0 | 31 | 2023-10-07T01:14:17 | ---
dataset_info:
features:
- name: struct
dtype: string
- name: label
dtype: int64
- name: pred
dtype: int64
- name: cls_layer_6
sequence: float32
- name: cls_layer_5
sequence: float32
- name: cls_layer_4
sequence: float32
splits:
- name: train
num_bytes: 1282993344
num_examples: 134592
download_size: 1493427157
dataset_size: 1282993344
---
# Dataset Card for "embeddings_from_distilbert_class_heaps_and_eval_part0"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 605 | [
[
-0.03057861328125,
-0.0325927734375,
0.026947021484375,
0.01953125,
-0.0124053955078125,
0.01348876953125,
0.03387451171875,
0.0171051025390625,
0.053070068359375,
0.01250457763671875,
-0.0297088623046875,
-0.05792236328125,
-0.04791259765625,
-0.02134704589... |
St4n/new_dataset | 2023-10-11T10:04:26.000Z | [
"region:us"
] | St4n | null | null | 0 | 31 | 2023-10-11T09:33:34 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: file_name
dtype: string
- name: transcription
dtype: string
splits:
- name: train
num_bytes: 25020
num_examples: 100
download_size: 0
dataset_size: 25020
---
# Dataset Card for "new_dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 475 | [
[
-0.049591064453125,
-0.0235595703125,
0.009613037109375,
0.01020050048828125,
-0.016326904296875,
0.00653839111328125,
0.0231781005859375,
-0.01262664794921875,
0.06829833984375,
0.0367431640625,
-0.0555419921875,
-0.054351806640625,
-0.047332763671875,
-0.0... |
shrutisingh/dataset_recommendation_mcq_sc | 2023-10-12T17:14:33.000Z | [
"license:apache-2.0",
"region:us"
] | shrutisingh | null | null | 0 | 31 | 2023-10-11T17:25:45 | ---
license: apache-2.0
---
Task: MCQ with single correct answer.
Dataset: Recommendation of datasets to validate a research question.
This dataset is derived from the [DataFinder](https://aclanthology.org/2023.acl-long.573/) dataset. We curate the abstracts of each dataset from [PapersWithCode](https://paperswithcode.com/datasets).
Each instance provides a short `query` discussing a research question, along with keyphrases relevant to the query.
The original training set of the DataFinder dataset has positive and negative candidates for each query, to train a contrastive model.
Our objective is to convert the dataset into an MCQ question-answering task with a single correct answer. We also add the abstracts from the research papers introducing the datasets, so that context can be provided to the models.
To reproduce the construction of this dataset, please visit [https://github.com/shruti-singh/scidata_recommendation](https://github.com/shruti-singh/scidata_recommendation).
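As a rough illustration of the task format, a single-answer MCQ instance can be rendered into a prompt along these lines. This is a hypothetical sketch: the `format_mcq` helper and the example options are illustrative assumptions, not the dataset's documented schema.

```python
# Hypothetical sketch: formatting one MCQ instance into a prompt string.
# The helper and example candidates are assumptions for illustration,
# not the dataset's actual field names or contents.
def format_mcq(query: str, options: list[str]) -> str:
    lines = ["Question: Which dataset best fits this research question?",
             f"Query: {query}"]
    for i, opt in enumerate(options):
        lines.append(f"({chr(65 + i)}) {opt}")  # (A), (B), (C), ...
    return "\n".join(lines)

prompt = format_mcq(
    "cross-lingual transfer for low-resource NER",
    ["WikiANN", "CoNLL-2003", "OntoNotes 5.0"],
)
print(prompt)
```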
Please note that the query instances in this dataset have no intersection with the [`dataset_recommendation_mcq_mc`](https://huggingface.co/datasets/shrutisingh/dataset_recommendation_mcq_mc) dataset. | 1,183 | [
[
-0.038543701171875,
-0.039764404296875,
0.033599853515625,
-0.0071563720703125,
-0.0188140869140625,
-0.00795745849609375,
0.006011962890625,
0.003208160400390625,
0.024383544921875,
0.05450439453125,
-0.059906005859375,
-0.03973388671875,
-0.01519775390625,
... |
carnival13/rbrt_uda_large_ep13 | 2023-10-15T02:12:29.000Z | [
"region:us"
] | carnival13 | null | null | 0 | 31 | 2023-10-14T12:45:08 | ---
dataset_info:
features:
- name: domain_label
dtype: int64
- name: pass_label
dtype: int64
- name: input
dtype: string
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
splits:
- name: train
num_bytes: 1115662838
num_examples: 755110
download_size: 352431197
dataset_size: 1115662838
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "rbrt_uda_large_ep13"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 620 | [
[
-0.048431396484375,
-0.0295562744140625,
0.0216827392578125,
0.028167724609375,
-0.025421142578125,
-0.0006728172302246094,
0.00959014892578125,
-0.01404571533203125,
0.061798095703125,
0.03277587890625,
-0.047637939453125,
-0.048431396484375,
-0.03546142578125,... |
tianyang/repobench_ablation | 2023-10-23T05:57:39.000Z | [
"region:us"
] | tianyang | null | null | 0 | 31 | 2023-10-23T05:03:19 | ---
configs:
- config_name: default
data_files:
- split: cross_file_first
path: data/cross_file_first-*
- split: cross_file_random
path: data/cross_file_random-*
- split: in_file
path: data/in_file-*
dataset_info:
features:
- name: repo_name
dtype: string
- name: file_path
dtype: string
- name: context
list:
- name: identifier
dtype: string
- name: path
dtype: string
- name: snippet
dtype: string
- name: import_statement
dtype: string
- name: token_num
dtype: int64
- name: cropped_code
dtype: string
- name: all_code
dtype: string
- name: next_line
dtype: string
- name: gold_snippet_index
dtype: int64
splits:
- name: cross_file_first
num_bytes: 144850826
num_examples: 1695
- name: cross_file_random
num_bytes: 115858056
num_examples: 1549
- name: in_file
num_bytes: 126244757
num_examples: 1612
download_size: 116113239
dataset_size: 386953639
---
# Dataset Card for "repobench_ablation"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 1,170 | [
[
-0.0406494140625,
-0.014739990234375,
-0.0010223388671875,
0.01062774658203125,
-0.020599365234375,
-0.00039196014404296875,
0.0233001708984375,
-0.0085296630859375,
0.052947998046875,
0.0499267578125,
-0.06011962890625,
-0.050323486328125,
-0.021148681640625,
... |
notoriousdto/synthetic-elisp-alpha-0.1 | 2023-11-01T01:40:17.000Z | [
"license:mit",
"arxiv:2305.05383",
"region:us"
] | notoriousdto | null | null | 0 | 31 | 2023-10-26T17:45:37 | ---
license: mit
---
This dataset is a work in progress. It will be used to train an LLM to execute a subset of Emacs Lisp, following the techniques described in this paper: https://arxiv.org/abs/2305.05383 | 218 | [
[
-0.0145111083984375,
-0.042083740234375,
0.0289459228515625,
0.001605987548828125,
-0.0040130615234375,
0.00551605224609375,
-0.007755279541015625,
0.0015544891357421875,
0.0203704833984375,
0.08380126953125,
-0.08123779296875,
-0.048736572265625,
-0.01950073242... |
atmallen/qm_alice_grader_first_1.0e_0.0p_finetuning | 2023-10-27T05:38:06.000Z | [
"region:us"
] | atmallen | null | null | 0 | 31 | 2023-10-27T05:38:02 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
dataset_info:
features:
- name: statement
dtype: string
- name: choices
sequence: string
- name: label
dtype:
class_label:
names:
'0': 'False'
'1': 'True'
- name: true_label
dtype: bool
splits:
- name: train
num_bytes: 14929199
num_examples: 200000
- name: validation
num_bytes: 1498396
num_examples: 20000
- name: test
num_bytes: 1499202
num_examples: 20000
download_size: 3503547
dataset_size: 17926797
---
# Dataset Card for "qm_alice_grader_first_1.0e_0.0p_finetuning"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 879 | [
[
-0.029541015625,
-0.0265655517578125,
0.004795074462890625,
0.008026123046875,
-0.00972747802734375,
-0.01004791259765625,
0.0452880859375,
0.010284423828125,
0.0399169921875,
0.0257568359375,
-0.052703857421875,
-0.0616455078125,
-0.04632568359375,
-0.02378... |
qanastek/ELRC-Medical-V2 | 2022-10-24T17:15:17.000Z | [
"task_categories:translation",
"annotations_creators:machine-generated",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:multilingual",
"size_categories:100K<n<1M",
"source_datasets:extended",
"language:en",
"language:bg",
"language:cs",
"language:da",
"lan... | qanastek | null | @inproceedings{losch-etal-2018-european,
title = "European Language Resource Coordination: Collecting Language Resources for Public Sector Multilingual Information Management",
author = {L{\"o}sch, Andrea and
Mapelli, Val{\'e}rie and
Piperidis, Stelios and
Vasi{\c{l}}jevs, Andrejs and
Smal, Lilli and
Declerck, Thierry and
Schnur, Eileen and
Choukri, Khalid and
van Genabith, Josef},
booktitle = "Proceedings of the Eleventh International Conference on Language Resources and Evaluation ({LREC} 2018)",
month = may,
year = "2018",
address = "Miyazaki, Japan",
publisher = "European Language Resources Association (ELRA)",
url = "https://aclanthology.org/L18-1213",
} | 7 | 30 | 2022-03-02T23:29:22 | ---
annotations_creators:
- machine-generated
- expert-generated
language_creators:
- found
language:
- en
- bg
- cs
- da
- de
- el
- es
- et
- fi
- fr
- ga
- hr
- hu
- it
- lt
- lv
- mt
- nl
- pl
- pt
- ro
- sk
- sl
- sv
multilinguality:
- multilingual
pretty_name: ELRC-Medical-V2
size_categories:
- 100K<n<1M
source_datasets:
- extended
task_categories:
- translation
task_ids:
- translation
---
# ELRC-Medical-V2 : European parallel corpus for healthcare machine translation
## Table of Contents
- [Dataset Card for [Needs More Information]](#dataset-card-for-needs-more-information)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
- [Who are the source language producers?](#who-are-the-source-language-producers)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** https://live.european-language-grid.eu/catalogue/project/2209
- **Repository:** https://github.com/qanastek/ELRC-Medical-V2/
- **Paper:** [Needs More Information]
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [Yanis Labrak](mailto:yanis.labrak@univ-avignon.fr)
### Dataset Summary
`ELRC-Medical-V2` is a parallel corpus for neural machine translation funded by the [European Commission](http://www.lr-coordination.eu/) and coordinated by the [German Research Center for Artificial Intelligence](https://www.dfki.de/web).
### Supported Tasks and Leaderboards
`translation`: The dataset can be used to train a model for translation.
### Languages
The corpus consists of pairs of source and target sentences for 23 different languages of the European Union (EU), with English (EN) as the source language in every pair.
**List of languages :** `Bulgarian (bg)`,`Czech (cs)`,`Danish (da)`,`German (de)`,`Greek (el)`,`Spanish (es)`,`Estonian (et)`,`Finnish (fi)`,`French (fr)`,`Irish (ga)`,`Croatian (hr)`,`Hungarian (hu)`,`Italian (it)`,`Lithuanian (lt)`,`Latvian (lv)`,`Maltese (mt)`,`Dutch (nl)`,`Polish (pl)`,`Portuguese (pt)`,`Romanian (ro)`,`Slovak (sk)`,`Slovenian (sl)`,`Swedish (sv)`.
## Load the dataset with HuggingFace
```python
from datasets import load_dataset
NAME = "qanastek/ELRC-Medical-V2"
dataset = load_dataset(NAME, use_auth_token=True)
print(dataset)
dataset_train = load_dataset(NAME, "en-es", split='train[:90%]')
dataset_test = load_dataset(NAME, "en-es", split='train[90%:]')
print(dataset_train)
print(dataset_train[0])
print(dataset_test)
```
## Dataset Structure
### Data Instances
```plain
id,lang,source_text,target_text
1,en-bg,"TOC \o ""1-3"" \h \z \u Introduction 3","TOC \o ""1-3"" \h \z \u Въведение 3"
2,en-bg,The international humanitarian law and its principles are often not respected.,Международното хуманитарно право и неговите принципи често не се зачитат.
3,en-bg,"At policy level, progress was made on several important initiatives.",На равнище политики напредък е постигнат по няколко важни инициативи.
```
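These rows can be parsed with Python's standard `csv` module; a minimal sketch, assuming standard CSV quoting (doubled quotes inside quoted fields) as shown in the instances above:

```python
import csv
import io

# Minimal sketch of parsing rows in the format shown above. The sample
# row is copied from the data instances; real usage would read the file.
sample = '''id,lang,source_text,target_text
2,en-bg,The international humanitarian law and its principles are often not respected.,Международното хуманитарно право и неговите принципи често не се зачитат.
'''

rows = list(csv.DictReader(io.StringIO(sample)))
print(rows[0]["lang"])
```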
### Data Fields
**id** : The document identifier of type `Integer`.
**lang** : The pair of source and target language of type `String`.
**source_text** : The source text of type `String`.
**target_text** : The target text of type `String`.
### Data Splits
| Lang | # Docs | Avg. # Source Tokens | Avg. # Target Tokens |
|--------|-----------|------------------------|------------------------|
| bg | 13 149 | 23 | 24 |
| cs | 13 160 | 23 | 21 |
| da | 13 242 | 23 | 22 |
| de | 13 291 | 23 | 22 |
| el | 13 091 | 23 | 26 |
| es | 13 195 | 23 | 28 |
| et | 13 016 | 23 | 17 |
| fi | 12 942 | 23 | 16 |
| fr | 13 149 | 23 | 28 |
| ga | 412 | 12 | 12 |
| hr | 12 836 | 23 | 21 |
| hu | 13 025 | 23 | 21 |
| it | 13 059 | 23 | 25 |
| lt | 12 580 | 23 | 18 |
| lv | 13 044 | 23 | 19 |
| mt | 3 093 | 16 | 14 |
| nl | 13 191 | 23 | 25 |
| pl | 12 761 | 23 | 22 |
| pt | 13 148 | 23 | 26 |
| ro | 13 163 | 23 | 25 |
| sk | 12 926 | 23 | 20 |
| sl | 13 208 | 23 | 21 |
| sv | 13 099 | 23 | 21 |
|||||
| Total | 277 780 | 22.21 | 21.47 |
## Dataset Creation
### Curation Rationale
For details, check the corresponding [pages](https://elrc-share.eu/repository/search/?q=mfsp%3A87ef9e5e8ac411ea913100155d026706e19a1a9f908b463c944490c36ba2f454&page=3).
### Source Data
#### Initial Data Collection and Normalization
The acquisition of bilingual data (from multilingual websites), normalization, cleaning, deduplication and identification of parallel documents have been done by [ILSP-FC tool](http://nlp.ilsp.gr/redmine/projects/ilsp-fc/wiki/Introduction). [Maligna aligner](https://github.com/loomchild/maligna) was used for alignment of segments. Merging/filtering of segment pairs has also been applied.
#### Who are the source language producers?
All of the data in this corpus has been uploaded by [Vassilis Papavassiliou](mailto:vpapa@ilsp.gr) on [ELRC-Share](https://elrc-share.eu/repository/browse/bilingual-corpus-from-the-publications-office-of-the-eu-on-the-medical-domain-v2-en-fr/6b31b32e8ac411ea913100155d0267061547d9b3ec284584af19a2953baa8937/).
### Personal and Sensitive Information
The corpus is free of personal or sensitive information.
## Considerations for Using the Data
### Other Known Limitations
The nature of the task introduces variability in the quality of the target translations.
## Additional Information
### Dataset Curators
__ELRC-Medical-V2__: Labrak Yanis, Dufour Richard
__Bilingual corpus from the Publications Office of the EU on the medical domain v.2 (EN-XX) Corpus__: [Vassilis Papavassiliou](mailto:vpapa@ilsp.gr) and [others](https://live.european-language-grid.eu/catalogue/project/2209).
### Licensing Information
<a rel="license" href="https://elrc-share.eu/static/metashare/licences/CC-BY-4.0.pdf"><img alt="Attribution 4.0 International (CC BY 4.0) License" style="border-width:0" src="https://i.creativecommons.org/l/by/4.0/88x31.png" /></a><br />This work is licensed under a <a rel="license" href="https://elrc-share.eu/static/metashare/licences/CC-BY-4.0.pdf">Attribution 4.0 International (CC BY 4.0) License</a>.
### Citation Information
Please cite the following paper when using this dataset.
```latex
@inproceedings{losch-etal-2018-european,
    title = "European Language Resource Coordination: Collecting Language Resources for Public Sector Multilingual Information Management",
    author = {L{\"o}sch, Andrea and
      Mapelli, Val{\'e}rie and
      Piperidis, Stelios and
      Vasi{\c{l}}jevs, Andrejs and
      Smal, Lilli and
      Declerck, Thierry and
      Schnur, Eileen and
      Choukri, Khalid and
      van Genabith, Josef},
    booktitle = "Proceedings of the Eleventh International Conference on Language Resources and Evaluation ({LREC} 2018)",
    month = may,
    year = "2018",
    address = "Miyazaki, Japan",
    publisher = "European Language Resources Association (ELRA)",
    url = "https://aclanthology.org/L18-1213",
}
```
| 8,903 | [
[
-0.033172607421875,
-0.0248565673828125,
0.0169525146484375,
0.0172882080078125,
-0.0202178955078125,
0.0004622936248779297,
-0.0238800048828125,
-0.0255584716796875,
0.025787353515625,
0.0305633544921875,
-0.0379638671875,
-0.067626953125,
-0.0540771484375,
... |
rony/soccer-dialogues | 2022-07-25T08:33:15.000Z | [
"region:us"
] | rony | null | null | 0 | 30 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
stevhliu/demo | 2022-10-24T18:02:42.000Z | [
"task_categories:summarization",
"task_categories:text2text-generation",
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"language:en",
"license:apache-2.0",
"conditional-text-generation",
"r... | stevhliu | null | null | 0 | 30 | 2022-03-02T23:29:22 | ---
annotations_creators:
- no-annotation
language_creators:
- found
language:
- en
license:
- apache-2.0
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
source_datasets:
- original
task_categories:
- summarization
- text2text-generation
task_ids: []
tags:
- conditional-text-generation
---
# Dataset Card for Demo
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This is a demo dataset with two files `train.csv` and `test.csv`.
Load it by:
```python
from datasets import load_dataset
data_files = {"train": "train.csv", "test": "test.csv"}
demo = load_dataset("stevhliu/demo", data_files=data_files)
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset. | 3,000 | [
[
-0.027618408203125,
-0.035919189453125,
0.0037841796875,
0.0204010009765625,
-0.012176513671875,
0.01309967041015625,
-0.029815673828125,
-0.02484130859375,
0.039825439453125,
0.038177490234375,
-0.0628662109375,
-0.07977294921875,
-0.043121337890625,
0.0037... |
copenlu/fever_gold_evidence | 2022-11-17T11:42:54.000Z | [
"task_categories:text-classification",
"task_ids:fact-checking",
"annotations_creators:machine-generated",
"annotations_creators:expert-generated",
"language_creators:machine-generated",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:ex... | copenlu | null | null | 5 | 30 | 2022-04-02T14:52:35 | ---
annotations_creators:
- machine-generated
- expert-generated
language_creators:
- machine-generated
- crowdsourced
language:
- en
license:
- cc-by-sa-3.0
- gpl-3.0
multilinguality:
- monolingual
paperswithcode_id: fever
pretty_name: ''
size_categories:
- 100K<n<1M
source_datasets:
- extended|fever
task_categories:
- text-classification
task_ids:
- fact-checking
---
# Dataset Card for fever_gold_evidence
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-instances)
- [Data Splits](#data-instances)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** https://github.com/copenlu/fever-adversarial-attacks
- **Repository:** https://github.com/copenlu/fever-adversarial-attacks
- **Paper:** https://aclanthology.org/2020.emnlp-main.256/
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [Needs More Information]
### Dataset Summary
Dataset for training classification-only fact checking with claims from the FEVER dataset.
This dataset is used in the paper "Generating Label Cohesive and Well-Formed Adversarial Claims", EMNLP 2020
The evidence is the gold evidence from the FEVER dataset for *REFUTE* and *SUPPORT* claims.
For *NEI* claims, we extract evidence sentences with the system described in "Christopher Malon. 2018. Team Papelo: Transformer Networks at FEVER. In Proceedings of the First Workshop on Fact Extraction and VERification (FEVER), pages 109–113."
More details can be found in https://github.com/copenlu/fever-adversarial-attacks
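Building a classification-only input from a claim and its gold evidence can be sketched as follows. This is an illustrative assumption about the setup: the label names and the `[SEP]` joining convention are hypothetical, not this dataset's documented schema.

```python
# Hypothetical sketch of a 3-way classification example built from a
# claim plus gold evidence. Label names and the [SEP] convention are
# assumptions for illustration, not the dataset's actual fields.
LABELS = {"SUPPORTS": 0, "REFUTES": 1, "NOT ENOUGH INFO": 2}

def make_example(claim: str, evidence_sentences: list[str], label: str) -> dict:
    # Concatenate claim and evidence into one classifier input.
    text = claim + " [SEP] " + " ".join(evidence_sentences)
    return {"text": text, "label": LABELS[label]}

ex = make_example(
    "The Eiffel Tower is in Berlin.",
    ["The Eiffel Tower is a wrought-iron lattice tower in Paris."],
    "REFUTES",
)
print(ex["label"])
```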
### Supported Tasks and Leaderboards
[Needs More Information]
### Languages
[Needs More Information]
## Dataset Structure
### Data Instances
[Needs More Information]
### Data Fields
[Needs More Information]
### Data Splits
[Needs More Information]
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
[Needs More Information]
### Citation Information
```
@inproceedings{atanasova-etal-2020-generating,
title = "Generating Label Cohesive and Well-Formed Adversarial Claims",
author = "Atanasova, Pepa and
Wright, Dustin and
Augenstein, Isabelle",
booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2020.emnlp-main.256",
doi = "10.18653/v1/2020.emnlp-main.256",
pages = "3168--3177",
abstract = "Adversarial attacks reveal important vulnerabilities and flaws of trained models. One potent type of attack are universal adversarial triggers, which are individual n-grams that, when appended to instances of a class under attack, can trick a model into predicting a target class. However, for inference tasks such as fact checking, these triggers often inadvertently invert the meaning of instances they are inserted in. In addition, such attacks produce semantically nonsensical inputs, as they simply concatenate triggers to existing samples. Here, we investigate how to generate adversarial attacks against fact checking systems that preserve the ground truth meaning and are semantically valid. We extend the HotFlip attack algorithm used for universal trigger generation by jointly minimizing the target class loss of a fact checking model and the entailment class loss of an auxiliary natural language inference model. We then train a conditional language model to generate semantically valid statements, which include the found universal triggers. We find that the generated attacks maintain the directionality and semantic validity of the claim better than previous work.",
}
``` | 5,188 | [
[
-0.03057861328125,
-0.06512451171875,
-0.0007243156433105469,
-0.006534576416015625,
-0.007110595703125,
-0.016693115234375,
-0.01364898681640625,
-0.024078369140625,
0.032745361328125,
0.038116455078125,
-0.04150390625,
-0.0655517578125,
-0.05816650390625,
... |
chainyo/rvl-cdip-invoice | 2022-04-06T16:57:20.000Z | [
"license:other",
"region:us"
] | chainyo | null | null | 3 | 30 | 2022-04-06T16:52:14 | ---
license: other
---
⚠️ This is only a subset of the original dataset, containing only the `invoice` class.
The RVL-CDIP (Ryerson Vision Lab Complex Document Information Processing) dataset consists of 400,000 grayscale images in 16 classes, with 25,000 images per class. There are 320,000 training images, 40,000 validation images, and 40,000 test images. The images are sized so their largest dimension does not exceed 1000 pixels.
For questions and comments please contact Adam Harley (aharley@scs.ryerson.ca).
The full dataset can be found [here](https://www.cs.cmu.edu/~aharley/rvl-cdip/).
## Labels
0: letter
1: form
2: email
3: handwritten
4: advertisement
5: scientific report
6: scientific publication
7: specification
8: file folder
9: news article
10: budget
11: invoice
12: presentation
13: questionnaire
14: resume
15: memo
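The 16 class labels listed above map directly to an id-to-name dictionary; a small sketch of that mapping, with a check that this subset's class (`invoice`) is id 11:

```python
# The 16 RVL-CDIP class labels listed above, as an id-to-name mapping.
ID2LABEL = {
    0: "letter", 1: "form", 2: "email", 3: "handwritten",
    4: "advertisement", 5: "scientific report", 6: "scientific publication",
    7: "specification", 8: "file folder", 9: "news article",
    10: "budget", 11: "invoice", 12: "presentation", 13: "questionnaire",
    14: "resume", 15: "memo",
}

# This subset keeps only the `invoice` class.
print(ID2LABEL[11])
```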
## Citation
This dataset is from this [paper](https://www.cs.cmu.edu/~aharley/icdar15/) `A. W. Harley, A. Ufkes, K. G. Derpanis, "Evaluation of Deep Convolutional Nets for Document Image Classification and Retrieval," in ICDAR, 2015`
## License
RVL-CDIP is a subset of IIT-CDIP, which came from the [Legacy Tobacco Document Library](https://www.industrydocuments.ucsf.edu/tobacco/), for which license information can be found [here](https://www.industrydocuments.ucsf.edu/help/copyright/).
## References
1. D. Lewis, G. Agam, S. Argamon, O. Frieder, D. Grossman, and J. Heard, "Building a test collection for complex document information processing," in Proc. 29th Annual Int. ACM SIGIR Conference (SIGIR 2006), pp. 665-666, 2006
2. The Legacy Tobacco Document Library (LTDL), University of California, San Francisco, 2007. http://legacy.library.ucsf.edu/. | 1,753 | [
[
-0.0247344970703125,
-0.019317626953125,
0.03765869140625,
0.004100799560546875,
-0.00412750244140625,
0.002483367919921875,
-0.00753021240234375,
-0.035491943359375,
-0.03118896484375,
0.054718017578125,
-0.020111083984375,
-0.04962158203125,
-0.0491943359375,
... |
huggingface/image-classification-test-sample | 2022-04-19T08:02:02.000Z | [
"region:us"
] | huggingface | null | null | 1 | 30 | 2022-04-19T08:02:01 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
sil-ai/bloom-speech | 2023-02-15T13:28:59.000Z | [
"task_categories:automatic-speech-recognition",
"task_categories:text-to-speech",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:multilingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:ajz",
"language:bam",
"language:bi",
... | sil-ai | Bloom-speech is a dataset of text aligned speech from bloomlibrary.org. This dataset contains over 50 languages including many low-resource languages. This dataset should be useful for training and/or testing speech-to-text or text-to-speech/ASR models. | @InProceedings{huggingface:bloom-speech,
title = {bloom-speech},
author={Joshua Nemecek, Colin Leong, and Daniel Whitenack
},
year={2022}
} | 15 | 30 | 2022-06-09T12:08:44 | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- ajz
- bam
- bi
- bis
- bjn
- bm
- boz
- bze
- bzi
- cak
- ceb
- chd
- chp
- clo
- csw
- en
- eng
- es
- fli
- fr
- fra
- gu
- guj
- hbb
- hi
- hin
- id
- ind
- jmx
- jra
- kan
- kbq
- kek
- kjb
- kmu
- kn
- kqr
- kwu
- loh
- mai
- mal
- mam
- mar
- ml
- mle
- mr
- my
- mya
- myk
- nas
- nsk
- nsn
- oj
- oji
- omw
- por
- pt
- quc
- sdk
- snk
- spa
- stk
- ta
- taj
- tam
- tbj
- tdc
- tgl
- tl
- tpi
- tuz
- tzj
license:
- cc-by-nc-4.0
- cc-by-sa-4.0
- cc-by-nc-nd-4.0
- cc-by-nc-sa-4.0
multilinguality:
- multilingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- automatic-speech-recognition
- text-to-speech
paperswithcode_id: null
pretty_name: BloomSpeech
extra_gated_prompt: |-
One more step before getting this dataset. This dataset is open access and available only for non-commercial use (except for portions of the dataset labeled with a `cc-by-sa` license). A "license" field paired with each of the dataset entries/samples specifies the Creative Commons license for that entry/sample.
These [Creative Commons licenses](https://creativecommons.org/about/cclicenses/) specify that:
1. You cannot use the dataset for or directed toward commercial advantage or monetary compensation (except for those portions of the dataset labeled specifically with a `cc-by-sa` license). If you would like to ask about commercial uses of this dataset, please [email us](mailto:sj@derivation.co).
2. Any public, non-commercial use of the data must give appropriate credit, provide a link to the license, and indicate if changes were made. You may do so in any reasonable manner, but not in any way that suggests the licensor endorses you or your use.
3. For those portions of the dataset marked with an ND license, you cannot remix, transform, or build upon the material, and you may not distribute modified material.
In addition to the above implied by Creative Commons and when clicking "Access Repository" below, you agree:
1. Not to use the dataset for any use intended to or which has the effect of harming or enabling discrimination against individuals or groups based on legally protected characteristics or categories, including but not limited to discrimination against Indigenous People as outlined in Articles 2; 13-16; and 31 of the United Nations Declaration on the Rights of Indigenous People, 13 September 2007 and as subsequently amended and revised.
2. That your *contact information* (email address and username) can be shared with the model authors as well.
extra_gated_fields:
I have read the License and agree with its terms: checkbox
---
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
<!-- - [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions) -->
## Dataset Description
- **Homepage:** [SIL AI](https://ai.sil.org/)
- **Point of Contact:** [SIL AI email](mailto:idx_aqua@sil.org)
- **Source Data:** [Bloom Library](https://bloomlibrary.org/)
 
## Dataset Summary
**Bloom** is free, open-source software and an associated website [Bloom Library](https://bloomlibrary.org/), app, and services developed by [SIL International](https://www.sil.org/). Bloom’s primary goal is to equip non-dominant language communities and their members to create the literature they want for their community and children. Bloom also serves organizations that help such communities develop literature and education or other aspects of community development.
This version of the Bloom Library data is developed specifically for the automatic speech recognition and speech-to-text tasks. It includes data from 56 languages across 18 language families. There is a mean of 458 and median of 138 audio records per language.
**Note**: If you speak one of these languages and can help provide feedback or corrections, please let us know!
**Note**: Although data from [bloom-lm](https://huggingface.co/datasets/sil-ai/bloom-lm) was used in the training of the [BLOOM model](https://huggingface.co/bigscience/bloom), the dataset only represents a small portion of the data used to train that model. Data from "Bloom Library" was combined with a large number of other datasets to train that model. "Bloom Library" is a project that existed prior to the BLOOM model, and is something separate. All that to say... We were using the "Bloom" name before it was cool. 😉
## Languages
Of the 500+ languages listed at BloomLibrary.org, there are 56 languages available in this dataset. Here are the corresponding ISO 639-3 codes:
ajz, bam, bis, bjn, boz, bze, bzi, cak, ceb, chd, chp, clo, csw, eng, fli, fra, guj, hbb, hin, ind, jmx, jra, kan, kbq, kek, kjb, kmu, kqr, kwu, loh, mai, mal, mam, mar, mle, mya, myk, nas, nsk, nsn, oji, omw, por, quc, sdk, snk, spa, stk, taj, tam, tbj, tdc, tgl, tpi, tuz, tzj
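As a quick sanity check (plain Python string handling; nothing from the dataset loader is assumed), the ISO 639-3 list above does contain exactly the 56 languages the card reports:

```python
# Split the ISO 639-3 list from the card and count the codes.
codes = ("ajz, bam, bis, bjn, boz, bze, bzi, cak, ceb, chd, chp, clo, csw, "
         "eng, fli, fra, guj, hbb, hin, ind, jmx, jra, kan, kbq, kek, kjb, "
         "kmu, kqr, kwu, loh, mai, mal, mam, mar, mle, mya, myk, nas, nsk, "
         "nsn, oji, omw, por, quc, sdk, snk, spa, stk, taj, tam, tbj, tdc, "
         "tgl, tpi, tuz, tzj").split(", ")
print(len(codes))  # 56 distinct codes
```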
## Dataset Statistics
Some of the languages included in the dataset have only a few audio cuts. These are not split between training, validation, and test. For languages with a higher number of available recordings, we include the following numbers of cuts in each split:
| ISO 639-3 | Name | Train Cuts | Validation Cuts | Test Cuts |
|:------------|:------------------------------|----------------:|---------------------:|---------------:|
| ajz | Amri Karbi | 135 | 34 | 50 |
| bam | Bamanankan | 203 | 50 | 50 |
| bis | Bislama | 0 | 0 | 46 |
| bjn | Banjar | 80 | 20 | 50 |
| boz | Bozo, Tieyaxo | 427 | 50 | 52 |
| bze | Bozo, Jenaama | 101 | 26 | 50 |
| bzi | Bisu | 1363 | 50 | 157 |
| cak | Kaqchikel | 989 | 50 | 115 |
| ceb | Cebuano | 553 | 50 | 67 |
| chd | Chontal, Highland Oaxaca | 205 | 50 | 50 |
| chp | Dene | 0 | 0 | 14 |
| clo | Chontal, Lowland Oaxaca | 120 | 30 | 50 |
| csw | Cree, Swampy | 0 | 0 | 45 |
| eng | English | 4143 | 48 | 455 |
| fli | Fali Muchella | 59 | 15 | 50 |
| fra | French | 261 | 49 | 50 |
| guj | Gujarati | 27 | 0 | 48 |
| hbb | Nya Huba | 558 | 50 | 67 |
| hin | Hindi | 62 | 15 | 49 |
| ind | Indonesian | 0 | 0 | 14 |
| jmx | Mixtec, Western Juxtlahuaca | 39 | 0 | 50 |
| jra | Jarai | 203 | 50 | 50 |
| kan | Kannada | 281 | 43 | 50 |
| kbq | Kamano | 0 | 0 | 27 |
| kek | Q’eqchi’ | 1676 | 49 | 190 |
| kjb | Q’anjob’al | 770 | 50 | 91 |
| kmu | Kanite | 0 | 0 | 28 |
| kqr | Kimaragang | 0 | 0 | 18 |
| kwu | Kwakum | 58 | 15 | 50 |
| loh | Narim | 0 | 0 | 15 |
| mai | Maithili | 0 | 0 | 11 |
| mal | Malayalam | 125 | 31 | 44 |
| mam | Mam | 1313 | 50 | 151 |
| mar | Marathi | 25 | 0 | 49 |
| mle | Manambu | 0 | 0 | 8 |
| mya | Burmese | 321 | 50 | 50 |
| myk | Sénoufo, Mamara | 669 | 50 | 80 |
| nas | Naasioi | 13 | 0 | 50 |
| nsk | Naskapi | 0 | 0 | 15 |
| nsn | Nehan | 0 | 0 | 31 |
| oji | Ojibwa | 0 | 0 | 25 |
| omw | Tairora, South | 0 | 0 | 34 |
| por | Portuguese | 0 | 0 | 34 |
| quc | K’iche’ | 1460 | 50 | 167 |
| sdk | Sos Kundi | 312 | 50 | 50 |
| snk | Soninke | 546 | 50 | 66 |
| spa | Spanish | 1816 | 50 | 207 |
| stk | Aramba | 180 | 45 | 50 |
| taj | Tamang, Eastern | 0 | 0 | 24 |
| tam | Tamil | 159 | 39 | 46 |
| tbj | Tiang | 0 | 0 | 24 |
| tdc | Ẽpẽra Pedea | 0 | 0 | 19 |
| tgl | Tagalog | 352 | 48 | 50 |
| tpi | Tok Pisin | 1061 | 50 | 123 |
| tuz | Turka | 48 | 13 | 50 |
| tzj | Tz’utujil | 0 | 0 | 41 |
## Dataset Structure
### Data Instances
The examples look like this for Hindi:
```
from datasets import load_dataset
# Specify the language code.
dataset = load_dataset('sil-ai/bloom-speech', 'hin', use_auth_token=True)  # note: you must log in to Hugging Face via the Hub website or the huggingface-cli
# A data point consists of transcribed audio in the specified language code.
# To see a transcription:
print(dataset['train']['text'][0])
```
This would produce an output:
```
चित्र: बो और शैम्पू की बोतल
```
If you wish to gather all the text for a language, you may use:
```
dataset['train']['text']
```
### Data Fields
The metadata fields are below. In terms of licenses, all stories included in the current release are released under a Creative Commons license (even if the individual story metadata fields are missing).
- **file**: the local path to the audio file
- **audio**: a dictionary with a path, array, and sampling_rate as is standard for Hugging Face audio
- **text**: the transcribed text
- **book**: title of the book, e.g. "बो मेस्सी और शैम्पू".
- **instance**: unique ID for each book/translation assigned by Bloom Library. For example the Hindi version of 'बो मेस्सी और शैम्पू' is 'eba60f56-eade-4d78-a66f-f52870f6bfdd'
- **license**: specific license used, e.g. "cc-by-sa" for "Creative Commons, by attribution, share-alike".
- **credits**: attribution of contributors as described in the book metadata, including authors, editors, etc. if available
- **original_lang_tag**: the language tag originally assigned in Bloom Library. This may include information on script type, etc.
### Data Splits
All languages include a train, validation, and test split. However, for languages with a small number of recordings, some of these splits may be empty. In such cases, we recommend using the data for testing only or for zero-shot experiments.
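To make that recommendation concrete, here is a minimal illustrative helper (the function name and threshold logic are ours, not part of any dataset API) that picks a usage strategy from the cut counts in the statistics table above:

```python
def eval_strategy(train_cuts, validation_cuts, test_cuts):
    """Return how a language's data can be used, based on its split sizes."""
    if train_cuts > 0 and validation_cuts > 0:
        return "train+eval"
    # Languages such as 'bis' or 'chp' ship test cuts only:
    # use them for testing or zero-shot experiments.
    return "zero-shot"

print(eval_strategy(135, 34, 50))  # ajz (Amri Karbi)
print(eval_strategy(0, 0, 46))     # bis (Bislama)
```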
## Changelog
- **26 September 2022** Page initiated | 13,424 | [
[
-0.0391845703125,
-0.0126495361328125,
0.00616455078125,
0.024810791015625,
0.0026607513427734375,
0.01364898681640625,
-0.00713348388671875,
-0.03509521484375,
0.035675048828125,
0.026275634765625,
-0.053131103515625,
-0.0550537109375,
-0.03741455078125,
0.... |
sepidmnorozy/Maltese_sentiment | 2022-08-16T09:44:25.000Z | [
"region:us"
] | sepidmnorozy | null | null | 0 | 30 | 2022-08-16T09:26:10 | Entry not found | 15 | [
[
-0.02142333984375,
-0.014984130859375,
0.057220458984375,
0.0288238525390625,
-0.03509521484375,
0.04656982421875,
0.052520751953125,
0.00506591796875,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060455322265625,
0.03793334... |
arbml/AQMAR | 2022-10-26T14:50:48.000Z | [
"region:us"
] | arbml | null | null | 0 | 30 | 2022-10-25T22:09:41 | Entry not found | 15 | [
[
-0.02142333984375,
-0.014984130859375,
0.057220458984375,
0.0288238525390625,
-0.03509521484375,
0.04656982421875,
0.052520751953125,
0.00506591796875,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060455322265625,
0.03793334... |
ProGamerGov/StableDiffusion-v1-5-Regularization-Images | 2022-11-26T02:14:20.000Z | [
"license:mit",
"region:us"
] | ProGamerGov | null | null | 124 | 30 | 2022-10-31T22:21:09 | ---
license: mit
---
A collection of regularization / class instance datasets for the [Stable Diffusion v1-5](https://huggingface.co/runwayml/stable-diffusion-v1-5) model to use for DreamBooth prior preservation loss training. Files labeled with "mse vae" used the [stabilityai/sd-vae-ft-mse](https://huggingface.co/stabilityai/sd-vae-ft-mse) VAE. For ease of use, datasets are stored as zip files containing 512x512 PNG images. The number of images in each zip file is specified at the end of the filename.
There is currently a bug where Hugging Face incorrectly reports that the datasets are pickled. They are not pickled; they are simple ZIP files containing the images.
Currently this repository contains the following datasets (datasets are named after the prompt they used):
Art Styles
* "**artwork style**": 4125 images generated using 50 DDIM steps and a CFG of 7, using the MSE VAE.
* "**artwork style**": 4200 images generated using 50 DPM++ 2S a Karras steps and a CFG of 7, using the MSE VAE. A negative prompt of "text" was also used for this dataset.
* "**artwork style**": 2750 images generated using 50 DPM++ 2S a Karras steps and a CFG of 7, using the MSE VAE.
* "**illustration style**": 3050 images generated using 50 DDIM steps and a CFG of 7, using the MSE VAE.
* "**erotic photography**": 2760 images generated using 50 DDIM steps and a CFG of 7, using the MSE VAE.
* "**landscape photography**": 2500 images generated using 50 DPM++ 2S a Karras steps and a CFG of 7, using the MSE VAE. A negative prompt of "b&w, text" was also used for this dataset.
People
* "**person**": 2115 images generated using 50 DDIM steps and a CFG of 7, using the MSE VAE.
* "**woman**": 4420 images generated using 50 DDIM steps and a CFG of 7, using the MSE VAE.
* "**guy**": 4820 images generated using 50 DDIM steps and a CFG of 7, using the MSE VAE.
* "**supermodel**": 4411 images generated using 50 DDIM steps and a CFG of 7, using the MSE VAE.
* "**bikini model**": 4260 images generated using 50 DDIM steps and a CFG of 7, using the MSE VAE.
* "**sexy athlete**": 5020 images generated using 50 DDIM steps and a CFG of 7, using the MSE VAE.
* "**femme fatale**": 4725 images generated using 50 DDIM steps and a CFG of 7, using the MSE VAE.
* "**sexy man**": 3505 images generated using 50 DDIM steps and a CFG of 7, using the MSE VAE.
* "**sexy woman**": 3500 images generated using 50 DDIM steps and a CFG of 7, using the MSE VAE.
Animals
* "**kitty**": 5100 images generated using 50 DDIM steps and a CFG of 7, using the MSE VAE.
* "**cat**": 2050 images generated using 50 DDIM steps and a CFG of 7, using the MSE VAE.
Vehicles
* "**fighter jet**": 1600 images generated using 50 DDIM steps and a CFG of 7, using the MSE VAE.
* "**train**": 2669 images generated using 50 DDIM steps and a CFG of 7, using the MSE VAE.
* "**car**": 3150 images generated using 50 DDIM steps and a CFG of 7, using the MSE VAE.
Themes
* "**cyberpunk**": 3040 images generated using 50 DDIM steps and a CFG of 7, using the MSE VAE.
I used the "Generate Forever" feature in [AUTOMATIC1111's WebUI](https://github.com/AUTOMATIC1111/stable-diffusion-webui) to create thousands of images for each dataset. Every image in a particular dataset uses the exact same settings, with only the seed value being different.
You can use my regularization / class image datasets with: https://github.com/ShivamShrirao/diffusers, https://github.com/JoePenna/Dreambooth-Stable-Diffusion, https://github.com/TheLastBen/fast-stable-diffusion, and any other DreamBooth projects that have support for prior preservation loss.
| 3,632 | [
[
-0.043609619140625,
-0.034942626953125,
0.0204010009765625,
-0.006595611572265625,
-0.0213165283203125,
-0.0011005401611328125,
0.013336181640625,
-0.01163482666015625,
0.0185699462890625,
0.045440673828125,
-0.0462646484375,
-0.05303955078125,
-0.03240966796875... |
bigbio/meddocan | 2022-12-22T15:45:24.000Z | [
"multilinguality:monolingual",
"language:es",
"license:cc-by-4.0",
"region:us"
] | bigbio | MEDDOCAN: Medical Document Anonymization Track
This dataset is designed for the MEDDOCAN task, sponsored by Plan de Impulso de las Tecnologías del Lenguaje.
It is a manually classified collection of 1,000 clinical case reports derived from the Spanish Clinical Case Corpus (SPACCC), enriched with PHI expressions.
The annotation of the entire set of entity mentions was carried out by expert annotators and it includes 29 entity types relevant for the anonymization of medical documents. 22 of these annotation types are actually present in the corpus: TERRITORIO, FECHAS, EDAD_SUJETO_ASISTENCIA, NOMBRE_SUJETO_ASISTENCIA, NOMBRE_PERSONAL_SANITARIO, SEXO_SUJETO_ASISTENCIA, CALLE, PAIS, ID_SUJETO_ASISTENCIA, CORREO, ID_TITULACION_PERSONAL_SANITARIO, ID_ASEGURAMIENTO, HOSPITAL, FAMILIARES_SUJETO_ASISTENCIA, INSTITUCION, ID_CONTACTO_ASISTENCIAL, NUMERO_TELEFONO, PROFESION, NUMERO_FAX, OTROS_SUJETO_ASISTENCIA, CENTRO_SALUD, ID_EMPLEO_PERSONAL_SANITARIO
For further information, please visit https://temu.bsc.es/meddocan/ or send an email to encargo-pln-life@bsc.es | @inproceedings{marimon2019automatic,
title={Automatic De-identification of Medical Texts in Spanish: the MEDDOCAN Track, Corpus, Guidelines, Methods and Evaluation of Results.},
author={Marimon, Montserrat and Gonzalez-Agirre, Aitor and Intxaurrondo, Ander and Rodriguez, Heidy and Martin, Jose Lopez and Villegas, Marta and Krallinger, Martin},
booktitle={IberLEF@ SEPLN},
pages={618--638},
year={2019}
} | 1 | 30 | 2022-11-13T22:09:29 |
---
language:
- es
bigbio_language:
- Spanish
license: cc-by-4.0
multilinguality: monolingual
bigbio_license_shortname: CC_BY_4p0
pretty_name: MEDDOCAN
homepage: https://temu.bsc.es/meddocan/
bigbio_pubmed: False
bigbio_public: True
bigbio_tasks:
- NAMED_ENTITY_RECOGNITION
---
# Dataset Card for MEDDOCAN
## Dataset Description
- **Homepage:** https://temu.bsc.es/meddocan/
- **Pubmed:** False
- **Public:** True
- **Tasks:** NER
MEDDOCAN: Medical Document Anonymization Track
This dataset is designed for the MEDDOCAN task, sponsored by Plan de Impulso de las Tecnologías del Lenguaje.
It is a manually classified collection of 1,000 clinical case reports derived from the Spanish Clinical Case Corpus (SPACCC), enriched with PHI expressions.
The annotation of the entire set of entity mentions was carried out by expert annotators and it includes 29 entity types relevant for the anonymization of medical documents. 22 of these annotation types are actually present in the corpus: TERRITORIO, FECHAS, EDAD_SUJETO_ASISTENCIA, NOMBRE_SUJETO_ASISTENCIA, NOMBRE_PERSONAL_SANITARIO, SEXO_SUJETO_ASISTENCIA, CALLE, PAIS, ID_SUJETO_ASISTENCIA, CORREO, ID_TITULACION_PERSONAL_SANITARIO, ID_ASEGURAMIENTO, HOSPITAL, FAMILIARES_SUJETO_ASISTENCIA, INSTITUCION, ID_CONTACTO_ASISTENCIAL, NUMERO_TELEFONO, PROFESION, NUMERO_FAX, OTROS_SUJETO_ASISTENCIA, CENTRO_SALUD, ID_EMPLEO_PERSONAL_SANITARIO
For further information, please visit https://temu.bsc.es/meddocan/ or send an email to encargo-pln-life@bsc.es
## Citation Information
```
@inproceedings{marimon2019automatic,
title={Automatic De-identification of Medical Texts in Spanish: the MEDDOCAN Track, Corpus, Guidelines, Methods and Evaluation of Results.},
author={Marimon, Montserrat and Gonzalez-Agirre, Aitor and Intxaurrondo, Ander and Rodriguez, Heidy and Martin, Jose Lopez and Villegas, Marta and Krallinger, Martin},
booktitle={IberLEF@ SEPLN},
pages={618--638},
year={2019}
}
```
| 1,966 | [
[
-0.0194549560546875,
-0.039154052734375,
0.036285400390625,
0.022308349609375,
-0.0289306640625,
0.01317596435546875,
-0.004924774169921875,
-0.039794921875,
0.038787841796875,
0.0533447265625,
-0.03369140625,
-0.0767822265625,
-0.058197021484375,
0.04641723... |
bigbio/swedish_medical_ner | 2022-12-22T15:46:59.000Z | [
"multilinguality:monolingual",
"language:sv",
"license:cc-by-sa-4.0",
"region:us"
] | bigbio | swedish_medical_ner is a Named Entity Recognition dataset on medical text in Swedish.
It consists of three subsets which are in turn derived from three different sources
respectively: the Swedish Wikipedia (a.k.a. wiki), Läkartidningen (a.k.a. lt),
and 1177 Vårdguiden (a.k.a. 1177). While the Swedish Wikipedia and Läkartidningen
subsets in total contain over 790,000 sequences with 60 characters each,
the 1177 Vårdguiden subset is manually annotated and contains 927 sentences,
2740 annotations, out of which 1574 are disorder and findings, 546 are
pharmaceutical drug, and 620 are body structure.
Texts from both Swedish Wikipedia and Läkartidningen were automatically annotated
using a list of medical seed terms. Sentences from 1177 Vårdguiden were manually
annotated. | @inproceedings{almgren-etal-2016-named,
author = {
Almgren, Simon and
Pavlov, Sean and
Mogren, Olof
},
title = {Named Entity Recognition in Swedish Medical Journals with Deep Bidirectional Character-Based LSTMs},
booktitle = {Proceedings of the Fifth Workshop on Building and Evaluating Resources for Biomedical Text Mining (BioTxtM 2016)},
publisher = {The COLING 2016 Organizing Committee},
pages = {30-39},
year = {2016},
month = {12},
url = {https://aclanthology.org/W16-5104},
eprint = {https://aclanthology.org/W16-5104.pdf}
} | 2 | 30 | 2022-11-13T22:12:24 |
---
language:
- sv
bigbio_language:
- Swedish
license: cc-by-sa-4.0
multilinguality: monolingual
bigbio_license_shortname: CC_BY_SA_4p0
pretty_name: Swedish Medical NER
homepage: https://github.com/olofmogren/biomedical-ner-data-swedish/
bigbio_pubmed: False
bigbio_public: True
bigbio_tasks:
- NAMED_ENTITY_RECOGNITION
---
# Dataset Card for Swedish Medical NER
## Dataset Description
- **Homepage:** https://github.com/olofmogren/biomedical-ner-data-swedish/
- **Pubmed:** False
- **Public:** True
- **Tasks:** NER
swedish_medical_ner is a Named Entity Recognition dataset on medical text in Swedish.
It consists of three subsets which are in turn derived from three different sources
respectively: the Swedish Wikipedia (a.k.a. wiki), Läkartidningen (a.k.a. lt),
and 1177 Vårdguiden (a.k.a. 1177). While the Swedish Wikipedia and Läkartidningen
subsets in total contain over 790,000 sequences with 60 characters each,
the 1177 Vårdguiden subset is manually annotated and contains 927 sentences,
2740 annotations, out of which 1574 are disorder and findings, 546 are
pharmaceutical drug, and 620 are body structure.
Texts from both Swedish Wikipedia and Läkartidningen were automatically annotated
using a list of medical seed terms. Sentences from 1177 Vårdguiden were manually
annotated.
## Citation Information
```
@inproceedings{almgren-etal-2016-named,
author = {
Almgren, Simon and
Pavlov, Sean and
Mogren, Olof
},
title = {Named Entity Recognition in Swedish Medical Journals with Deep Bidirectional Character-Based LSTMs},
booktitle = {Proceedings of the Fifth Workshop on Building and Evaluating Resources for Biomedical Text Mining (BioTxtM 2016)},
publisher = {The COLING 2016 Organizing Committee},
pages = {30-39},
year = {2016},
month = {12},
url = {https://aclanthology.org/W16-5104},
eprint = {https://aclanthology.org/W16-5104.pdf}
}
```
| 1,968 | [
[
-0.012603759765625,
-0.0286407470703125,
0.03900146484375,
-0.01076507568359375,
-0.042755126953125,
-0.017608642578125,
-0.01096343994140625,
-0.04730224609375,
0.0299072265625,
0.048431396484375,
-0.029144287109375,
-0.068359375,
-0.03485107421875,
0.05166... |
juancaruizc/lyrics | 2022-12-31T22:55:26.000Z | [
"region:us"
] | juancaruizc | null | null | 0 | 30 | 2022-12-31T22:52:18 | Entry not found | 15 | [
[
-0.02142333984375,
-0.014984130859375,
0.057220458984375,
0.0288238525390625,
-0.03509521484375,
0.04656982421875,
0.052520751953125,
0.00506591796875,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060455322265625,
0.03793334... |
Poulpidot/FrenchHateSpeechSuperset | 2023-02-04T21:17:04.000Z | [
"license:unknown",
"doi:10.57967/hf/0284",
"region:us"
] | Poulpidot | null | null | 0 | 30 | 2023-01-07T13:19:59 | ---
license: unknown
---
### FrenchHateSpeechSuperset
This dataset is a superset of multiple datasets including hate speech, harassment, sexist, racist, etc. messages from various platforms.
Included datasets :
- MLMA dataset
- CAA dataset
- FTR dataset
- "An Annotated Corpus for Sexism Detection in French Tweets" dataset
- UC-Berkeley-Measuring-Hate-Speech dataset (translated from english*)
#### References
```
@inproceedings{chiril2020annotated,
title={An Annotated Corpus for Sexism Detection in French Tweets},
author={Chiril, Patricia and Moriceau, V{\'e}ronique and Benamara, Farah and Mari, Alda and Origgi, Gloria and Coulomb-Gully, Marl{\`e}ne},
booktitle={Proceedings of The 12th Language Resources and Evaluation Conference},
pages={1397--1403},
year={2020}
}
```
```
@inproceedings{ousidhoum-etal-multilingual-hate-speech-2019,
title = "Multilingual and Multi-Aspect Hate Speech Analysis",
author = "Ousidhoum, Nedjma
and Lin, Zizheng
and Zhang, Hongming
and Song, Yangqiu
and Yeung, Dit-Yan",
booktitle = "Proceedings of EMNLP",
year = "2019",
publisher = "Association for Computational Linguistics",
}
```
```
Vanetik, N.; Mimoun, E. Detection of Racist Language in French Tweets. Information 2022, 13, 318. https://doi.org/10.3390/info13070318
```
```
@article{kennedy2020constructing,
title={Constructing interval variables via faceted Rasch measurement and multitask deep learning: a hate speech application},
author={Kennedy, Chris J and Bacon, Geoff and Sahn, Alexander and von Vacano, Claudia},
journal={arXiv preprint arXiv:2009.10277},
year={2020}
}
```
```
Anaïs Ollagnier, Elena Cabrio, Serena Villata, Catherine Blaya. CyberAgressionAdo-v1: a Dataset of Annotated Online Aggressions in French Collected through a Role-playing Game. Language Resources and Evaluation Conference, Jun 2022, Marseille, France. ⟨hal-03765860⟩
```
### Translation
French datasets for hate speech are quite rare. To augment the current dataset, messages from other languages (English only for now) have been integrated.
To integrate other-language datasets, MT models were used and manually selected for each dataset.
- UC-Berkeley-Measuring-Hate-Speech dataset : Abelll/marian-finetuned-kde4-en-to-fr
### Language verification
Since MT models are not perfect, some messages are not entirely translated or not translated at all.
To check for obvious errors in the pipeline, a general language detection model is used to prune non-French texts.
Language detection model: papluca/xlm-roberta-base-language-detection
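A minimal sketch of this pruning step (the helper name and the 0.9 threshold are illustrative; in the real pipeline the (language, score) predictions would come from the detection model above via a `transformers` text-classification pipeline):

```python
def keep_french(samples, min_score=0.9):
    """Keep only texts whose detected language is French with high confidence."""
    return [text for text, lang, score in samples
            if lang == "fr" and score >= min_score]

# Toy predictions standing in for real detector output: (text, detected language, score).
preds = [
    ("Ce message est en français.", "fr", 0.99),
    ("This one was never translated.", "en", 0.98),
    ("Texte ambigu", "fr", 0.55),  # low-confidence detection, pruned
]
print(keep_french(preds))  # only the first sample survives
```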
### Annotation
Since the "hate speech" dimension is highly subjective, and the datasets come with different annotation types, a common labeling strategy is required.
Each sample is annotated with "0" for a negative sample and "1" for a positive sample.
### Filtering rules :
- FTR dataset : [wip]
- MLMA dataset : [wip]
- CAA dataset : [wip]
- "Annotated Corpus" dataset : [wip]
- UC-Berkeley Measuring Hate Speech dataset : average hate_speech_score > 0 -> 1
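As an illustration of the one rule already specified (the UC-Berkeley subset; the other datasets' rules are still [wip]), the mapping can be sketched as follows — the function name is ours, not part of the dataset:

```python
def berkeley_label(hate_speech_scores):
    """Label 1 when the average hate_speech_score of a sample is > 0, else 0."""
    return 1 if sum(hate_speech_scores) / len(hate_speech_scores) > 0 else 0

print(berkeley_label([0.5, 1.2, -0.1]))  # mean ~0.53 > 0  -> 1
print(berkeley_label([-0.8, -0.2]))      # mean -0.5 <= 0 -> 0
```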
| 3,058 | [
[
-0.034423828125,
-0.0648193359375,
0.005474090576171875,
0.021942138671875,
-0.004695892333984375,
0.0115509033203125,
-0.02069091796875,
-0.0211639404296875,
0.0183563232421875,
0.037994384765625,
-0.042083740234375,
-0.05218505859375,
-0.06768798828125,
-0... |
metaeval/defeasible-nli | 2023-06-22T14:09:34.000Z | [
"task_categories:text-classification",
"task_ids:natural-language-inference",
"language:en",
"license:apache-2.0",
"region:us"
] | metaeval | null | null | 0 | 30 | 2023-02-02T21:21:26 | ---
license: apache-2.0
task_ids:
- natural-language-inference
task_categories:
- text-classification
language:
- en
---
https://github.com/rudinger/defeasible-nli
```
@inproceedings{rudinger2020thinking,
title={Thinking like a skeptic: defeasible inference in natural language},
author={Rudinger, Rachel and Shwartz, Vered and Hwang, Jena D and Bhagavatula, Chandra and Forbes, Maxwell and Le Bras, Ronan and Smith, Noah A and Choi, Yejin},
booktitle={Findings of the Association for Computational Linguistics: EMNLP 2020},
pages={4661--4675},
year={2020}
}
``` | 573 | [
[
-0.0227203369140625,
-0.055633544921875,
0.03533935546875,
0.009490966796875,
0.0032558441162109375,
-0.006504058837890625,
-0.027252197265625,
-0.042755126953125,
0.051177978515625,
0.0267791748046875,
-0.052520751953125,
-0.007587432861328125,
-0.0338134765625... |
ELiRF/dacsa | 2023-03-25T09:58:52.000Z | [
"task_categories:text2text-generation",
"task_ids:news-articles-summarization",
"annotations_creators:found",
"language_creators:found",
"multilinguality:multilingual",
"size_categories:1M<n<10M",
"source_datasets:original",
"language:ca",
"language:es",
"license:odbl",
"region:us"
] | ELiRF | The Dataset for Automatic summarization of Catalan and Spanish newspaper
Articles (DACSA) corpus. It is a high-quality large-scale corpus that can be
used to train summarization models for Catalan and Spanish. The data provides
pairs of news article and its summary from different newspapers for both, the
Catalan and the Spanish languages. Regarding the Catalan set, there are 725,184
sample pairs from 9 newspapers, regarding the Spanish set, the corpus provides
2,120,649 sample pairs from 21 newspapers. | @inproceedings{segarra-soriano-etal-2022-dacsa,
title = "{DACSA}: A large-scale Dataset for Automatic summarization of {C}atalan and {S}panish newspaper Articles",
author = "Segarra Soriano, Encarnaci{\'o}n and
Ahuir, Vicent and
Hurtado, Llu{\'\i}s-F. and
Gonz{\'a}lez, Jos{\'e}",
booktitle = "Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
month = jul,
year = "2022",
address = "Seattle, United States",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.naacl-main.434",
doi = "10.18653/v1/2022.naacl-main.434",
pages = "5931--5943",
abstract = "The application of supervised methods to automatic summarization requires the availability of adequate corpora consisting of a set of document-summary pairs. As in most Natural Language Processing tasks, the great majority of available datasets for summarization are in English, making it difficult to develop automatic summarization models for other languages. Although Spanish is gradually forming part of some recent summarization corpora, it is not the same for minority languages such as Catalan.In this work, we describe the construction of a corpus of Catalan and Spanish newspapers, the Dataset for Automatic summarization of Catalan and Spanish newspaper Articles (DACSA) corpus. It is a high-quality large-scale corpus that can be used to train summarization models for Catalan and Spanish.We have carried out an analysis of the corpus, both in terms of the style of the summaries and the difficulty of the summarization task. In particular, we have used a set of well-known metrics in the summarization field in order to characterize the corpus. Additionally, for benchmarking purposes, we have evaluated the performances of some extractive and abstractive summarization systems on the DACSA corpus.",
} | 1 | 30 | 2023-03-03T10:16:33 | ---
task_categories:
- text2text-generation
task_ids:
- news-articles-summarization
language:
- ca
- es
size_categories:
- 1M<n<10M
license:
- odbl
multilinguality:
- multilingual
source_datasets:
- original
paperswithcode_id: dacsa
annotations_creators:
- found
language_creators:
- found
pretty_name: DACSA
---
# Dataset Card for "DACSA"
## Table of Contents
- [Dataset Card Creation Guide](#dataset-card-creation-guide)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
- [Who are the source language producers?](#who-are-the-source-language-producers)
- [Annotations](#annotations)
- [Annotation process](#annotation-process)
- [Who are the annotators?](#who-are-the-annotators)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Paper:** [DACSA: A large-scale Dataset for Automatic summarization of Catalan and Spanish newspaper Articles](https://aclanthology.org/2022.naacl-main.434/)
- **Point of Contact:** [Vicent Ahuir](mailto:viahes@dsic.upv.es)
### Dataset Summary
The Dataset for Automatic summarization of Catalan and Spanish newspaper
Articles (DACSA) corpus is a high-quality, large-scale corpus that can be
used to train summarization models for Catalan and Spanish. The data provides
pairs of a news article and its summary from different newspapers for both
the Catalan and the Spanish languages. The Catalan set contains 725,184
sample pairs from 9 newspapers; the Spanish set provides 2,120,649 sample
pairs from 21 newspapers.
### Supported Tasks and Leaderboards
[More information needed](https://github.com/csebuetnlp/xl-sum)
### Languages
- `catalan`
- `spanish`
## Dataset Structure
### Data Fields
- 'id': A string representing the article ID.
- 'summary': A string containing the article summary.
- 'article' : A string containing the article text.
### Data Splits
Four splits are provided for each language set
- **train**: samples for training models
- **validation**: samples for adjusting and validating models
- **test.i**: test samples from newspapers present in _train_ and _validation_ splits
- **test.ni**: test samples from newspapers not present in training and validation splits
The _validation_ and _test.i_ splits contain a uniform distribution of samples
from each newspaper source.
Languages | ISO 639-1 Code | Train | Val | Test.i | Test.ni | Total |
--------------|----------------|---------|-------|--------|---------|---------|
Catalan | ca | 636596 | 35376 | 35376 | 17836 | 725184 |
Spanish | es | 1802919 | 104052 | 104052 | 109626 | 2120649 |
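The split sizes in the table are internally consistent; a quick sanity check in Python:

```python
# Split sizes from the table above; check each language's splits sum to its total
splits = {
    "ca": {"train": 636596, "validation": 35376, "test.i": 35376,
           "test.ni": 17836, "total": 725184},
    "es": {"train": 1802919, "validation": 104052, "test.i": 104052,
           "test.ni": 109626, "total": 2120649},
}
for lang, s in splits.items():
    parts = s["train"] + s["validation"] + s["test.i"] + s["test.ni"]
    assert parts == s["total"], (lang, parts, s["total"])
print("split sizes are consistent")
```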
## Dataset Creation
### Curation Rationale
[More information needed](https://github.com/csebuetnlp/xl-sum)
### Source Data
Newspapers from Spain that publish news in Catalan or Spanish
#### Initial Data Collection and Normalization
[Detailed in the paper](https://aclanthology.org/2022.naacl-main.434/)
## Considerations for Using the Data
### Social Impact of Dataset
[More information needed](https://aclanthology.org/2022.naacl-main.434/)
### Discussion of Biases
[More information needed](https://aclanthology.org/2022.naacl-main.434/)
### Other Known Limitations
[More information needed](https://aclanthology.org/2022.naacl-main.434/)
## Additional Information
### Dataset Curators
[More information needed](https://aclanthology.org/2022.naacl-main.434/)
### Licensing Information
These data are released under this licensing scheme.
We do not own any of the text from which these data have been extracted.
This DACSA dataset package is made available under the Open Database License: http://opendatacommons.org/licenses/odbl/1.0/. Any rights in individual contents of the database are licensed under the Database Contents License: http://opendatacommons.org/licenses/dbcl/1.0/
Should you consider that our data contains material that is owned by you
and should therefore not be reproduced here, please:
* Clearly identify yourself, with detailed contact data such as an address,
telephone number or email address at which you can be contacted.
* Clearly identify the copyrighted work claimed to be infringed.
* Clearly identify the material that is claimed to be infringing and
information reasonably sufficient to allow us to locate the material.
We will comply with legitimate requests by removing the affected sources
from the next release of the corpus.
### Citation Information
If you use any of the datasets, models or code modules, please cite the following paper:
```
@inproceedings{segarra-soriano-etal-2022-dacsa,
title = "{DACSA}: A large-scale Dataset for Automatic summarization of {C}atalan and {S}panish newspaper Articles",
author = "Segarra Soriano, Encarnaci{\'o}n and
Ahuir, Vicent and
Hurtado, Llu{\'\i}s-F. and
Gonz{\'a}lez, Jos{\'e}",
booktitle = "Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
month = jul,
year = "2022",
address = "Seattle, United States",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.naacl-main.434",
doi = "10.18653/v1/2022.naacl-main.434",
pages = "5931--5943",
abstract = "The application of supervised methods to automatic summarization requires the availability of adequate corpora consisting of a set of document-summary pairs. As in most Natural Language Processing tasks, the great majority of available datasets for summarization are in English, making it difficult to develop automatic summarization models for other languages. Although Spanish is gradually forming part of some recent summarization corpora, it is not the same for minority languages such as Catalan.In this work, we describe the construction of a corpus of Catalan and Spanish newspapers, the Dataset for Automatic summarization of Catalan and Spanish newspaper Articles (DACSA) corpus. It is a high-quality large-scale corpus that can be used to train summarization models for Catalan and Spanish.We have carried out an analysis of the corpus, both in terms of the style of the summaries and the difficulty of the summarization task. In particular, we have used a set of well-known metrics in the summarization field in order to characterize the corpus. Additionally, for benchmarking purposes, we have evaluated the performances of some extractive and abstractive summarization systems on the DACSA corpus.",
}
```
| 7,621 | [
[
-0.03399658203125,
-0.048370361328125,
0.0179901123046875,
0.038482666015625,
-0.0193939208984375,
0.017822265625,
-0.0158843994140625,
-0.03265380859375,
0.051422119140625,
0.0272064208984375,
-0.0265655517578125,
-0.06500244140625,
-0.048309326171875,
0.02... |
mstz/higgs | 2023-04-16T17:31:30.000Z | [
"task_categories:tabular-classification",
"size_categories:10K<n<100K",
"language:en",
"license:cc",
"higgs",
"tabular_classification",
"binary_classification",
"UCI",
"region:us"
] | mstz | null | @article{baldi2014searching,
title={Searching for exotic particles in high-energy physics with deep learning},
author={Baldi, Pierre and Sadowski, Peter and Whiteson, Daniel},
journal={Nature communications},
volume={5},
number={1},
pages={4308},
year={2014},
publisher={Nature Publishing Group UK London}
} | 0 | 30 | 2023-03-29T10:17:37 | ---
language:
- en
tags:
- higgs
- tabular_classification
- binary_classification
- UCI
pretty_name: Higgs
size_categories:
- 10K<n<100K
task_categories:
- tabular-classification
configs:
- higgs
license: cc
---
# Higgs
The [Higgs dataset](https://www.nature.com/articles/ncomms5308/) from "[Searching for exotic particles in high-energy physics with deep learning](https://www.nature.com/articles/ncomms5308/)".
Try to classify particles as Higgs bosons.
# Configurations and tasks
| **Configuration** | **Task** | **Description** |
|-------------------|---------------------------|-----------------------------------------------------------------|
| higgs | Binary classification | Is the particle a Higgs boson? |
# Usage
```python
from datasets import load_dataset
dataset = load_dataset("mstz/higgs")["train"]
```
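Each record carries the 28 numeric columns listed under Features below. A minimal sketch of flattening one record into a feature vector — the zero-valued row here is a placeholder, not real data:

```python
# Build the 28 documented column names, then assemble a record into a vector
FEATURES = (
    ["lepton_pT", "lepton_eta", "lepton_phi",
     "missing_energy_magnitude", "missing_energy_phi"]
    + [f"jet{i}{f}" for i in range(1, 5) for f in ("pt", "eta", "phi", "b")]
    + ["m_jj", "m_jjj", "m_lv", "m_jlv", "m_bb", "m_wbb", "m_wwbb"]
)

def to_vector(record):
    # Keep a fixed column order so vectors are comparable across records
    return [record[name] for name in FEATURES]

row = {name: 0.0 for name in FEATURES}  # placeholder record
print(len(to_vector(row)))
```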
# Features
|**Feature** |**Type** |
|---------------------------|-----------|
|`lepton_pT` |`[float64]`|
|`lepton_eta` |`[float64]`|
|`lepton_phi` |`[float64]`|
|`missing_energy_magnitude` |`[float64]`|
|`missing_energy_phi` |`[float64]`|
|`jet1pt` |`[float64]`|
|`jet1eta` |`[float64]`|
|`jet1phi` |`[float64]`|
|`jet1b` |`[float64]`|
|`jet2pt` |`[float64]`|
|`jet2eta` |`[float64]`|
|`jet2phi` |`[float64]`|
|`jet2b` |`[float64]`|
|`jet3pt` |`[float64]`|
|`jet3eta` |`[float64]`|
|`jet3phi` |`[float64]`|
|`jet3b` |`[float64]`|
|`jet4pt` |`[float64]`|
|`jet4eta` |`[float64]`|
|`jet4phi` |`[float64]`|
|`jet4b` |`[float64]`|
|`m_jj` |`[float64]`|
|`m_jjj` |`[float64]`|
|`m_lv` |`[float64]`|
|`m_jlv` |`[float64]`|
|`m_bb` |`[float64]`|
|`m_wbb` |`[float64]`|
|`m_wwbb` |`[float64]`| | 2,207 | [
[
-0.041107177734375,
-0.0204010009765625,
0.03533935546875,
-0.01009368896484375,
-0.0110626220703125,
0.00827789306640625,
0.00598907470703125,
-0.0247650146484375,
0.0168914794921875,
0.0270233154296875,
-0.046173095703125,
-0.035736083984375,
-0.04278564453125... |
andreabac3/Quora-Italian-Fauno-Baize | 2023-04-08T15:54:40.000Z | [
"license:gpl-3.0",
"arxiv:2304.01196",
"region:us"
] | andreabac3 | null | null | 2 | 30 | 2023-04-08T15:52:17 | ---
license: gpl-3.0
---
# Quora-Italian-Fauno-Baize
This dataset is an Italian translation of the Quora dataset presented by Baize's authors.
## Dataset Description
- **Paper:** https://arxiv.org/abs/2304.01196
### Languages
Italian
## Dataset Structure
### Data Instances
Sentences 54,456
average number of turns 3.9
response lengths of each turn 35.9
### Data Fields
topic, input
### Data Splits
Train
## Dataset Creation
### Source Data
#### Initial Data Collection and Normalization
https://github.com/project-baize/baize-chatbot
## Additional Information
### Dataset Curators
[Andrea Bacciu](https://andreabac3.github.io/), Dr. [Giovanni Trappolini](https://sites.google.com/view/giovannitrappolini), [Andrea Santilli](https://www.santilli.xyz/), and Professor [Fabrizio Silvestri](https://sites.google.com/diag.uniroma1.it/fabriziosilvestri/home).
### Licensing Information
This project is a derivative of Baize, and we adhere to the licensing constraints imposed by Baize's creators.
### Citation Information
```bibtex
@misc{fauno,
author = {Andrea Bacciu, Giovanni Trappolini, Andrea Santilli, Fabrizio Silvestri},
title = {Fauno: The Italian Large Language Model that will leave you senza parole!},
year = {2023},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/andreabac3/Fauno-Italian-LLM}},
}
```
```bibtex
@article{xu2023baize,
title={Baize: An Open-Source Chat Model with Parameter-Efficient Tuning on Self-Chat Data},
author={Xu, Canwen and Guo, Daya and Duan, Nan and McAuley, Julian},
journal={arXiv preprint arXiv:2304.01196},
year={2023}
}
``` | 1,652 | [
[
-0.0220794677734375,
-0.053253173828125,
0.01485443115234375,
0.0227508544921875,
-0.004749298095703125,
-0.0259857177734375,
-0.025848388671875,
-0.0023632049560546875,
0.02679443359375,
0.0196380615234375,
-0.0452880859375,
-0.03680419921875,
-0.042724609375,
... |
cestwc/hdb0420 | 2023-04-20T04:50:47.000Z | [
"region:us"
] | cestwc | null | null | 0 | 30 | 2023-04-20T04:50:28 | ---
dataset_info:
features:
- name: labels
dtype: int64
- name: text
dtype: string
splits:
- name: '0420'
num_bytes: 334028
num_examples: 3110
- name: '0110'
num_bytes: 16067
num_examples: 110
- name: '0327'
num_bytes: 317961
num_examples: 3000
download_size: 318187
dataset_size: 668056
---
# Dataset Card for "hdb0420"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 503 | [
[
-0.06640625,
-0.020050048828125,
0.00983428955078125,
0.0234527587890625,
-0.00969696044921875,
-0.003753662109375,
0.036529541015625,
-0.00922393798828125,
0.0626220703125,
0.042694091796875,
-0.054901123046875,
-0.039825439453125,
-0.0298004150390625,
-0.0... |
michaelwzhu/ChatMed_Consult_Dataset | 2023-05-05T13:41:10.000Z | [
"task_categories:text-generation",
"language:zh",
"license:cc-by-4.0",
"medical consultation",
"finetuning",
"region:us"
] | michaelwzhu | null | null | 66 | 30 | 2023-04-20T04:53:33 | ---
license: cc-by-4.0
language:
- zh
tags:
- medical consultation
- finetuning
pretty_name: ChatMed-Dataset
task_categories:
- text-generation
---
# Dataset Card for ChatMed
## Dataset Description
- **Homepage:** https://huggingface.co/datasets/michaelwzhu/ChatMed-Datasets
- **Repository:** https://github.com/michael-wzhu/ChatMed
- **Paper:**
- **Leaderboard:**
- **Point of Contact:** Wei Zhu
### Dataset Summary
ChatMed-Dataset is a dataset of 110,113 medical query-response pairs (in Chinese) generated by OpenAI's `GPT-3.5` engine. The queries are crawled from several online medical consultation sites, reflecting medical needs in the real world. The responses are generated by the OpenAI engine. This dataset is designed to inject medical knowledge into Chinese large language models.
The dataset is growing rapidly. Stay tuned for updates!
In the era of large models, models are converging: thanks to the "open-sourcing" of LLaMA, the open-source community has produced a series of open models such as stanford-alpaca. However, AGI datasets for Chinese vertical domains remain relatively scarce.
The queries (or prompts) in this dataset, ChatMed-Dataset, come from real-world online medical consultation questions (110,113 of them), reflecting the actual consultation needs of different users/patients. At present, all responses are generated by the OpenAI `GPT-3.5` engine. We will later screen and select the best doctor and patient answers from the web in order to build a dataset of even higher quality.
### Supported Tasks and Leaderboards
The ChatMed-Dataset is designed for fine-tuning pretrained language models to perform better at automatic medical consultation.
### Languages
The data in ChatMed-Dataset are in Chinese (zh).
## Dataset Structure
### Data Instances
The dataset is in json-line format and is very convenient to use! An example in the ChatMed-Dataset looks as follows:
```json
{
"query": "你好,视神经萎缩,黄斑变性。怎样缓解,及日常注意事项。谢谢。",
"response": "视神经萎缩和黄斑变性是两种不同的疾病,需要分别进行治疗。\n\n视神经萎缩是指视神经逐渐受损,导致视力逐渐减退,常见的治疗方法包括:\n\n1.药物治疗:如促进视神经细胞再生的药物,改善视网膜供血的药物等。\n\n2.手术治疗:如视神经萎缩的手术治疗可以通过神经移植、修复和再生等手术方式来缓解症状。\n\n3.康复训练:通过视力训练等康复训练可以帮助患者恢复视力。\n\n黄斑变性则是指黄斑区域受损,导致中央视力减退,常见的治疗方法包括:\n\n1.药物治疗:如抗氧化剂、维生素等可以缓解症状。\n\n2.手术治疗:如黄斑脱离手术等可以恢复视力。\n\n3.日常注意事项:保持良好的生活习惯,如戒烟、控制血压、血糖等,避免暴晒和过度用眼,保持充足的睡眠和均衡的饮食等。\n\n需要根据具体情况选择合适的治疗方案,建议您咨询专业医生进行诊断和治疗。"
}
```
### Data Fields
The data fields are as follows:
* `query`: real-life medical query from the web.
* `response`: response generated by the OpenAI `GPT-3.5` engine.
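Since the corpus is distributed in json-line format, each line parses independently; a small sketch with invented records:

```python
import json

# Two toy records in the documented json-line layout (contents are invented)
raw = (
    '{"query": "问题一", "response": "回答一"}\n'
    '{"query": "问题二", "response": "回答二"}\n'
)

# Parse one JSON object per non-empty line
pairs = [json.loads(line) for line in raw.splitlines() if line.strip()]
assert all(set(p) == {"query", "response"} for p in pairs)
print(len(pairs))
```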
### Data Splits
| | train |
|---------------|------:|
| ChatMed-Dataset | 110,113 |
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
> We believe that releasing the above assets will enable the academic community to perform controlled scientific studies on how large language models respond to real-life medical queries.
> More social impacts will be discussed soon!
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
The `ChatMed-Dataset` data is generated by a language model (`GPT-3.5`) and inevitably contains some errors or biases. We encourage users to use this data with caution and propose new methods to filter or improve the imperfections.
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
The dataset is available under the [Creative Commons NonCommercial (CC BY-NC 4.0)](https://creativecommons.org/licenses/by-nc/4.0/legalcode).
### Citation Information
```
@misc{alpaca,
author = {Wei Zhu },
title = {ChatMed-Dataset: An GPT generated medical query-response datasets for medcial large language models},
year = {2023},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/michael-wzhu/ChatMed}},
}
```
### Contributions
[More Information Needed] | 3,911 | [
[
-0.0139312744140625,
-0.0677490234375,
0.0007076263427734375,
0.0177001953125,
-0.028961181640625,
-0.02252197265625,
-0.02655029296875,
-0.045013427734375,
0.0450439453125,
0.035797119140625,
-0.038238525390625,
-0.05596923828125,
-0.039947509765625,
0.0033... |
PaulineSanchez/Traduction_en_fr_food | 2023-04-24T17:18:08.000Z | [
"task_categories:translation",
"language:fr",
"language:en",
"region:us"
] | PaulineSanchez | null | null | 1 | 30 | 2023-04-24T07:55:46 | ---
task_categories:
- translation
language:
- fr
- en
dataset_info:
features:
- name: alim_nom_fr
dtype: string
- name: alim_nom_eng
dtype: string
splits:
- name: train
num_bytes: 238948
num_examples: 3153
download_size: 114072
dataset_size: 238948
---
- info: This dataset comes from the ANSES-CIQUAL 2020 Table in English in XML format, found on https://www.data.gouv.fr/fr/datasets/table-de-composition-nutritionnelle-des-aliments-ciqual/ | 475 | [
[
-0.0097503662109375,
-0.014404296875,
0.0245361328125,
0.0192413330078125,
0.007251739501953125,
0.0104217529296875,
0.02490234375,
-0.032135009765625,
0.041656494140625,
0.06793212890625,
-0.06268310546875,
-0.051666259765625,
-0.03924560546875,
0.035827636... |
SJTU-CL/ArguGPT | 2023-05-02T08:44:22.000Z | [
"task_categories:text-classification",
"size_categories:1K<n<10K",
"language:en",
"license:cc",
"AIGC for education",
"arxiv:2304.07666",
"region:us"
] | SJTU-CL | null | null | 1 | 30 | 2023-05-02T08:11:18 | ---
license: cc
task_categories:
- text-classification
language:
- en
tags:
- AIGC for education
size_categories:
- 1K<n<10K
---
# Machine-essays generation pipeline
Please check out our [github repo](https://github.com/huhailinguist/ArguGPT).
This document only introduces how we collected **machine-generated essays**.
| model | timestamp | # total | # valid | # short | # repetitive | # overlapped |
|------------------|-------------|---------|---------|---------|--------------|--------------|
| gpt2-xl | Nov, 2019 | 4,573 | 563 | 1,637 | 0 | 2,373 |
| text-babbage-001 | April, 2022 | 917 | 479 | 181 | 240 | 17 |
| text-curie-001 | April, 2022 | 654 | 498 | 15 | 110 | 31 |
| text-davinci-001 | April, 2022 | 632 | 493 | 1 | 41 | 97 |
| text-davinci-002 | April, 2022 | 621 | 495 | 1 | 56 | 69 |
| text-davinci-003 | Nov, 2022 | 1,130 | 1,090 | 0 | 30 | 10 |
| gpt-3.5-turbo | Mar, 2023 | 1,122 | 1,090 | 0 | 4 | 28 |
| total | - | 9,647 | 4,708 | 1,835 | 481 | 2,625 |
## Models
We chose 7 models from the GPT family: 1) `gpt2-xl`, 2) `text-babbage-001`, 3) `text-curie-001`, 4) `text-davinci-001`, 5) `text-davinci-002`,
6) `text-davinci-003`, and 7) `gpt-3.5-turbo`.
More information about these models can be seen in [OpenAI documentation](https://platform.openai.com/docs/model-index-for-researchers).
For WECCL and TOEFL, we used all 7 models to generate argumentative essays.
As for GRE, whose writing task is more difficult than those of WECCL and TOEFL, we used only `text-davinci-003` and `gpt-3.5-turbo`.
**Notes**: Since `gpt2-xl` cannot respond to prompts the way InstructGPTs and other later models do,
we fed `gpt2-xl` the prompt along with one beginning sentence randomly extracted from human essays for continuous writing.
Therefore, the first sentence of each essay generated by `gpt2-xl` is actually human-authored.
## Prompts selection
Our writing topics are collected from human-WECCL, human-TOEFL, and human-GRE.
In a writing task, a topic statement is presented for students (or machines) to attack or defend.
The topic statement here is referred to as `ESSAY_PROMPT`, and our added instructions for the machine are referred to as `ADDED_PROMPT`.
Therefore, our prompt format is as follows: `ESSAY_PROMPT` + `ADDED_PROMPT`.
For instance,
- `ESSAY_PROMPT`: It is better to have broad knowledge of many academic subjects than to specialize in one specific subject.
- `ADDED_PROMPT`: Do you agree or disagree? Use specific reasons and examples to support your answer. Write an essay of roughly {300/400/500} words.
We asked the machine to write 300 words for writing tasks in WECCL, 400 for TOEFL, and 500 for GRE.
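Putting the pieces together, the prompt construction can be sketched as follows — the `ADDED_PROMPT` wording is copied from the example above and the word counts from this paragraph; treat this as an illustration, not the exact generation script:

```python
# Target word count per exam type, as stated above
WORD_COUNTS = {"WECCL": 300, "TOEFL": 400, "GRE": 500}

def build_prompt(essay_prompt, exam):
    # ESSAY_PROMPT + ADDED_PROMPT, with the word count filled in by exam
    added = (
        "Do you agree or disagree? Use specific reasons and examples to "
        f"support your answer. Write an essay of roughly {WORD_COUNTS[exam]} words."
    )
    return f"{essay_prompt} {added}"

print(build_prompt(
    "It is better to have broad knowledge of many academic subjects "
    "than to specialize in one specific subject.", "TOEFL"))
```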
## Essays filtering, preprocessing, and automated scoring
We then filtered out the essays that are short, repetitive and overlapped.
- Short: we set the threshold of 50 words for `gpt2-xl`, and 100 words for others.
- Repetitive: 40% of sentences are *similar*.
- Overlapped: 40% of sentences are *similar* to sentences in any other essay already generated.
- Definition of *similar*: "I like a dog." and "I don't like a cat." have 3 words in common. The similarity therefore is 6 / 9 = 0.67. If the similarity is greater than 0.8, the two sentences are *similar*.
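One plausible reading of the *similar* metric — twice the shared word count over the total number of words — can be sketched as:

```python
import re
from collections import Counter

def similarity(s1, s2):
    # A guess at the metric described above: twice the number of shared
    # words divided by the total word count of both sentences.
    t1 = re.findall(r"[\w']+", s1.lower())
    t2 = re.findall(r"[\w']+", s2.lower())
    shared = sum((Counter(t1) & Counter(t2)).values())  # multiset overlap
    return 2 * shared / (len(t1) + len(t2))

print(round(similarity("I like a dog.", "I don't like a cat."), 2))
```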
We deleted "As an AI model, ..." generated by gpt-3.5-turbo.
And we used [YouDao automated scoring system](https://ai.youdao.com/) to score all the essays,
and categorized them into low, mid, and high levels.
## Citation
Please cite our work [arXiv:2304.07666](https://arxiv.org/abs/2304.07666) as
```
@misc{liu2023argugpt,
title={ArguGPT: evaluating, understanding and identifying argumentative essays generated by GPT models},
author={Yikang Liu and Ziyin Zhang and Wanyang Zhang and Shisen Yue and Xiaojing Zhao and Xinyuan Cheng and Yiwen Zhang and Hai Hu},
year={2023},
eprint={2304.07666},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
| 4,185 | [
[
-0.04449462890625,
-0.06927490234375,
0.056182861328125,
-0.01108551025390625,
-0.00835418701171875,
-0.00616455078125,
0.006839752197265625,
-0.0241851806640625,
-0.0137481689453125,
0.0311279296875,
-0.031768798828125,
-0.03436279296875,
-0.046905517578125,
... |
turkish-nlp-suite/turkish-wikiNER | 2023-09-26T10:37:00.000Z | [
"task_categories:token-classification",
"task_ids:named-entity-recognition",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"language:tr",
"license:cc-by-sa-4.0",
"region:us"
] | turkish-nlp-suite | General Purpose Turkish NER dataset. 19 labels and 20.000 instances in total. [Turkish Wiki NER dataset](https://github.com/turkish-nlp-suite/Turkish-Wiki-NER-Dataset) | @inproceedings{altinok-2023-diverse,
title = "A Diverse Set of Freely Available Linguistic Resources for {T}urkish",
author = "Altinok, Duygu",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-long.768",
pages = "13739--13750",
abstract = "This study presents a diverse set of freely available linguistic resources for Turkish natural language processing, including corpora, pretrained models and education material. Although Turkish is spoken by a sizeable population of over 80 million people, Turkish linguistic resources for natural language processing remain scarce. In this study, we provide corpora to allow practitioners to build their own applications and pretrained models that would assist industry researchers in creating quick prototypes. The provided corpora include named entity recognition datasets of diverse genres, including Wikipedia articles and supplement products customer reviews. In addition, crawling e-commerce and movie reviews websites, we compiled several sentiment analysis datasets of different genres. Our linguistic resources for Turkish also include pretrained spaCy language models. To the best of our knowledge, our models are the first spaCy models trained for the Turkish language. Finally, we provide various types of education material, such as video tutorials and code examples, that can support the interested audience on practicing Turkish NLP. The advantages of our linguistic resources are three-fold: they are freely available, they are first of their kind, and they are easy to use in a broad range of implementations. Along with a thorough description of the resource creation process, we also explain the position of our resources in the Turkish NLP world.",
} | 0 | 30 | 2023-05-07T16:41:04 | ---
language:
- tr
license:
- cc-by-sa-4.0
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
task_categories:
- token-classification
task_ids:
- named-entity-recognition
pretty_name: Turkish-WikiNER
---
# Dataset Card for "turkish-nlp-suite/turkish-wikiNER"
<img src="https://raw.githubusercontent.com/turkish-nlp-suite/.github/main/profile/wiki.png" width="20%" height="20%">
## Dataset Description
- **Repository:** [Turkish-WikiNER](https://github.com/turkish-nlp-suite/Turkish-Wiki-NER-Dataset)
- **Paper:** [ACL link]()
- **Dataset:** Turkish-WikiNER
- **Domain:** Wiki
- **Number of Labels:** 19
### Dataset Summary
Turkish NER dataset from Wikipedia sentences. 20.000 sentences are sampled and re-annotated from [Kuzgunlar NER dataset](https://data.mendeley.com/datasets/cdcztymf4k/1).
Annotations are done by [Co-one](https://co-one.co/). Many thanks to them for their contributions. This dataset is also used in our brand new spaCy Turkish packages.
### Dataset Instances
An instance of this dataset looks as follows:
```
{
"tokens": ["Çekimler", "5", "Temmuz", "2005", "tarihinde", "Reebok", "Stadyum", ",", "Bolton", ",", "İngiltere'de", "yapılmıştır", "."],
"tags": [O", "B-DATE", "I-DATE", "I-DATE", "O", "B-FAC", "I-FAC", "O", "B-GPE", "O", "B-GPE", "O", "O"]
}
```
or even better:

### Labels
- CARDINAL
- DATE
- EVENT
- FAC
- GPE
- LANGUAGE
- LAW
- LOC
- MONEY
- NORP
- ORDINAL
- ORG
- PERCENT
- PERSON
- PRODUCT
- QUANTITY
- TIME
- TITLE
- WORK_OF_ART
### Data Split
| name |train|validation|test|
|---------|----:|---------:|---:|
|Turkish-WikiNER|18000| 1000|1000|
### Citation
This work is supported by Google Developer Experts Program. Part of Duygu 2022 Fall-Winter collection, "Turkish NLP with Duygu"/ "Duygu'yla Türkçe NLP". All rights reserved. If you'd like to use this dataset in your own work, please kindly cite [A Diverse Set of Freely Available Linguistic Resources for Turkish](https://aclanthology.org/2023.acl-long.768/) :
```
@inproceedings{altinok-2023-diverse,
title = "A Diverse Set of Freely Available Linguistic Resources for {T}urkish",
author = "Altinok, Duygu",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-long.768",
pages = "13739--13750",
abstract = "This study presents a diverse set of freely available linguistic resources for Turkish natural language processing, including corpora, pretrained models and education material. Although Turkish is spoken by a sizeable population of over 80 million people, Turkish linguistic resources for natural language processing remain scarce. In this study, we provide corpora to allow practitioners to build their own applications and pretrained models that would assist industry researchers in creating quick prototypes. The provided corpora include named entity recognition datasets of diverse genres, including Wikipedia articles and supplement products customer reviews. In addition, crawling e-commerce and movie reviews websites, we compiled several sentiment analysis datasets of different genres. Our linguistic resources for Turkish also include pretrained spaCy language models. To the best of our knowledge, our models are the first spaCy models trained for the Turkish language. Finally, we provide various types of education material, such as video tutorials and code examples, that can support the interested audience on practicing Turkish NLP. The advantages of our linguistic resources are three-fold: they are freely available, they are first of their kind, and they are easy to use in a broad range of implementations. Along with a thorough description of the resource creation process, we also explain the position of our resources in the Turkish NLP world.",
}
```
| 4,118 | [
[
-0.046478271484375,
-0.04327392578125,
-0.01013946533203125,
0.01152801513671875,
-0.0278167724609375,
-0.026611328125,
-0.03936767578125,
-0.031585693359375,
0.02569580078125,
0.032501220703125,
-0.03704833984375,
-0.061065673828125,
-0.0465087890625,
0.034... |
eastwind/semeval-2016-absa-reviews-english-translated-stanford-alpaca | 2023-06-09T11:08:27.000Z | [
"task_categories:text-classification",
"task_categories:zero-shot-classification",
"task_categories:question-answering",
"task_categories:text2text-generation",
"size_categories:1K<n<10K",
"language:en",
"license:mit",
"region:us"
] | eastwind | null | null | 1 | 30 | 2023-06-09T11:05:14 | ---
license: mit
task_categories:
- text-classification
- zero-shot-classification
- question-answering
- text2text-generation
language:
- en
pretty_name: >-
SemEval 2016 Hotel Aspect Based Sentiment Analysis translated and alpaca
format for LLM training
size_categories:
- 1K<n<10K
---
# Dataset Card for Dataset Name
Derived from eastwind/semeval-2016-absa-reviews-arabic using Helsinki-NLP/opus-mt-tc-big-ar-en | 419 | [
[
-0.037200927734375,
-0.0501708984375,
-0.0114898681640625,
0.004840850830078125,
-0.070068359375,
-0.0038299560546875,
-0.0018358230590820312,
-0.024169921875,
0.048980712890625,
0.033416748046875,
-0.060028076171875,
-0.0814208984375,
-0.03741455078125,
0.0... |
tianyang/repobench-c | 2023-06-24T01:37:41.000Z | [
"task_categories:text-generation",
"task_ids:document-retrieval",
"language_creators:found",
"multilinguality:multilingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"license:cc-by-nc-nd-4.0",
"code",
"arxiv:2306.03091",
"region:us"
] | tianyang | RepoBench is a dataset that benchmarks repository-level code auto-completion systems.
RepoBench-C denotes RepoBench for code completion,
which is a subtask of RepoBench for next-line code prediction given both cross-file and in-file context. | @misc{liu2023repobench,
title={RepoBench: Benchmarking Repository-Level Code Auto-Completion Systems},
author={Tianyang Liu and Canwen Xu and Julian McAuley},
year={2023},
eprint={2306.03091},
archivePrefix={arXiv},
primaryClass={cs.CL}
} | 3 | 30 | 2023-06-16T07:18:00 | ---
language_creators:
- found
license:
- cc-by-nc-nd-4.0
multilinguality:
- multilingual
pretty_name: RepoBench-Completion
source_datasets:
- original
task_categories:
- text-generation
task_ids:
- document-retrieval
tags:
- code
size_categories:
- 100K<n<1M
---
# Dataset Card for RepoBench-C
## Dataset Description
- **Homepage:** https://github.com/Leolty/repobench
- **Paper:** https://arxiv.org/abs/2306.03091
## Dataset Summary
**RepoBench-C (Completion)** is a subtask of **RepoBench**([GitHub](https://github.com/Leolty/repobench), [arXiv](https://arxiv.org/abs/2306.03091)), focusing on the prediction of the next line of code, given in-file context (including several preceding lines and import statements), and cross-file context.
## Settings
- `cff`: short for cross_file_first, indicating the cross-file module in next line is first used in the current file.
- `cfr`: short for cross_file_random, indicating the cross-file module in next line is not first used in the current file.
- `if`: short for in_file, indicating the next line does not contain any cross-file module.
## Supported Tasks
- `python_cff`: python code prediction with cross-file-first setting.
- `python_cfr`: python code prediction with cross-file-random setting.
- `python_if`: python code prediction with in-file setting.
- `java_cff`: java code prediction with cross-file-first setting.
- `java_cfr`: java code prediction with cross-file-random setting.
- `java_if`: java code prediction with in-file setting.
## Loading Data
For example, if you want to load the `test` set to test your model on `Python` code prediction with `cff` setting, you can do the following:
```python
from datasets import load_dataset
dataset = load_dataset("tianyang/repobench-c", "python_cff", split="test")
```
> Note: The `split` argument is optional. If not provided, the entire dataset will be loaded.
## Dataset Structure
```json
{
"repo_name": "repository name of the data point",
"file_path": "path/to/file",
"context": "commented and concatenated cross-file context",
"import_statement": "all import statements in the file",
"code": "the code for next-line prediction",
"prompt": "cross-file context + import statements + in-file code",
"next_line": "the next line of the code"
}
```
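A minimal sketch of scoring a model's prediction against `next_line` — exact match after whitespace stripping is an assumption here, not necessarily the official RepoBench metric, and the record below is a hypothetical sample following the schema above:

```python
def exact_match(prediction, reference):
    # Compare a predicted next line against the gold `next_line`,
    # ignoring leading/trailing whitespace.
    return prediction.strip() == reference.strip()

record = {  # hypothetical sample in the documented schema
    "repo_name": "demo/repo",
    "file_path": "src/math_utils.py",
    "prompt": "def add(a, b):\n",
    "next_line": "    return a + b",
}
print(exact_match("return a + b", record["next_line"]))
```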
## Licensing Information
CC BY-NC-ND 4.0
## Citation Information
```bibtex
@misc{liu2023repobench,
title={RepoBench: Benchmarking Repository-Level Code Auto-Completion Systems},
author={Tianyang Liu and Canwen Xu and Julian McAuley},
year={2023},
eprint={2306.03091},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
## Contributions
Thanks to [@Leolty](https://github.com/Leolty) for adding this dataset. | 2,752 | [
[
-0.0309295654296875,
-0.01300048828125,
-0.0010232925415039062,
0.01519775390625,
-0.00823974609375,
0.0026798248291015625,
-0.006809234619140625,
-0.033447265625,
0.01218414306640625,
0.03497314453125,
-0.047698974609375,
-0.040985107421875,
-0.02484130859375,
... |
ltkw98/mapping | 2023-06-22T13:01:48.000Z | [
"region:us"
] | ltkw98 | null | null | 0 | 30 | 2023-06-22T13:01:46 | ---
dataset_info:
features:
- name: sentence
dtype: string
- name: tec_name
dtype: string
- name: label
dtype: int64
splits:
- name: train
num_bytes: 369062
num_examples: 2358
download_size: 165236
dataset_size: 369062
---
# Dataset Card for "mapping"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 420 | [
[
-0.049163818359375,
-0.01641845703125,
0.023406982421875,
0.019622802734375,
-0.007122039794921875,
-0.0045928955078125,
0.0094757080078125,
-0.015167236328125,
0.050567626953125,
0.0367431640625,
-0.0517578125,
-0.06622314453125,
-0.05145263671875,
-0.02978... |
anujsahani01/English-Marathi | 2023-06-29T23:46:13.000Z | [
"task_categories:translation",
"size_categories:1M<n<10M",
"language:en",
"language:mr",
"region:us"
] | anujsahani01 | null | null | 1 | 30 | 2023-06-23T14:25:18 | ---
task_categories:
- translation
language:
- en
- mr
size_categories:
- 1M<n<10M
---
This dataset was prepared by collecting English-Marathi translations from different resources.
Happy Fine-tuning😀 | 203 | [
[
-0.0216217041015625,
-0.021240234375,
-0.01036834716796875,
0.01059722900390625,
-0.059814453125,
-0.02911376953125,
-0.0289154052734375,
-0.0284423828125,
0.045074462890625,
0.063720703125,
-0.0711669921875,
-0.012481689453125,
-0.04931640625,
0.03225708007... |
Vezora/Mini_Orca_Uncencored_Alpaca | 2023-08-14T04:49:24.000Z | [
"license:apache-2.0",
"region:us"
] | Vezora | null | null | 2 | 30 | 2023-07-12T04:19:59 | ---
license: apache-2.0
---
This dataset is a modified version of psmathur's Mini Orca dataset, formatted in the Alpaca format and uncensored.
For ALPACA LORA users:
Modules you can target with LoRA: "gate_proj", "down_proj", "up_proj", "q_proj", "v_proj", "k_proj", "o_proj"
Most LoRA models use: "q_proj", "v_proj", "k_proj", "o_proj"
Platypus, which got terrific results, used: "gate_proj", "down_proj", "up_proj"
Research on targeting certain modules still needs to be done, but if you don't want to train over a previously trained model's newly learned abilities, target different modules than the ones used for the original training.
Hyperparameters used by Platypus for the 13B and 70B models:

| Hyperparameter | Platypus2-13B / 70B |
|----------------|---------------------|
| batch size | 16 |
| micro batch size | 1 |
| num epochs | 1 |
| learning rate | 4e-4 / 3e-4 |
| cutoff len | 4096 |
| lora rank | 16 |
| lora alpha | 16 |
| lora dropout | 0.05 |
| lora target modules | gate_proj, down_proj, up_proj |
| train on inputs | False |
| add eos token | False |
| group by length | False |
| prompt template | alpaca |
| lr scheduler | cosine |
| warmup steps | 100 |
I would recommend using a batch size of 4-10 and a cutoff length ≤ 2048 to avoid VRAM issues on a single 24 GB card, together with load_in_4bit, Normal Float, and bf16.
If training with oobabooga, you must edit the "training.py" file in the "oobabooga_windows\text-generation-webui\modules" folder. In line 49, change the standard modules to the modules you would like to target.
If training with Alpaca LoRA, use the argument --lora_target_modules when running the train.py command. To load in 4-bit, you must edit the train file, adding load_in_4bit, bf16, and Normal Float quantization.
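A minimal sketch of the Platypus settings listed above as a plain config dict; the key names here are illustrative shorthand, not a specific trainer's API, so map them onto your training library of choice:

```python
# Platypus2-13B/70B hyperparameters, collected into one dict for reference.
# Key names are illustrative shorthand, not a specific trainer's API.
platypus_hparams = {
    "batch_size": 16,
    "micro_batch_size": 1,
    "num_epochs": 1,
    "learning_rate": 3e-4,  # 4e-4 for the 13B model
    "cutoff_len": 4096,
    "lora_rank": 16,
    "lora_alpha": 16,
    "lora_dropout": 0.05,
    "lora_target_modules": ["gate_proj", "down_proj", "up_proj"],
    "train_on_inputs": False,
    "add_eos_token": False,
    "group_by_length": False,
    "prompt_template": "alpaca",
    "lr_scheduler": "cosine",
    "warmup_steps": 100,
}
```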
| 1,622 | [
[
-0.036041259765625,
-0.049163818359375,
0.00994873046875,
0.0046539306640625,
-0.030364990234375,
-0.0168304443359375,
0.0007109642028808594,
-0.01444244384765625,
0.045867919921875,
0.05450439453125,
-0.0518798828125,
-0.03289794921875,
-0.036865234375,
-0.... |
jxu9001/tagged_addresses | 2023-07-20T19:09:23.000Z | [
"region:us"
] | jxu9001 | null | null | 0 | 30 | 2023-07-19T17:36:02 | ---
dataset_info:
features:
- name: tokens
sequence: string
- name: tags
sequence: string
splits:
- name: train
num_bytes: 14472345
num_examples: 105594
- name: validation
num_bytes: 1809379
num_examples: 13199
- name: test
num_bytes: 1811309
num_examples: 13200
download_size: 0
dataset_size: 18093033
---
# Dataset Card for "tagged_addresses"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 527 | [
[
-0.050933837890625,
-0.00738525390625,
0.010223388671875,
0.0027332305908203125,
-0.0089111328125,
-0.0015048980712890625,
0.0219268798828125,
-0.029052734375,
0.0640869140625,
0.036224365234375,
-0.0489501953125,
-0.059967041015625,
-0.030487060546875,
-0.0... |
Locutusque/InstructMix | 2023-08-02T23:35:14.000Z | [
"task_categories:text-generation",
"task_categories:conversational",
"task_categories:question-answering",
"language:en",
"region:us"
] | Locutusque | null | null | 4 | 30 | 2023-08-02T19:47:23 | ---
dataset:
name: InstructiveMix
tagline: A Combined Dataset of Diverse Instructional Content
description: >
InstructiveMix is a comprehensive dataset that brings together various
instructional content from different domains. It combines instructions for
tasks, code, poems, essays, medical texts, and more. With a diverse range of
instructional data, this dataset is suitable for a wide range of natural
language processing (NLP) tasks and research.
license: CC-BY-SA-4.0
dataset_creation: '2023-08-02T00:00:00.000Z'
dataset_version: 1.0.0
authors:
- name: Locutusque
email: locutusque.airshipcraft@gmail.com
task_categories:
- text-generation
- conversational
- question-answering
language:
- en
---
**Dataset Summary:**
InstructMix is a comprehensive combined dataset that offers diverse instructional content for a range of tasks. It includes data from various sources, such as code instructions, poems, essays, medical texts, and more. This dataset is designed to support natural language processing (NLP) research, model training, and evaluation across different domains.
**Dataset Contents:**
The dataset contains a collection of instructional data with corresponding inputs and outputs. Each entry has an "Input" field that contains the instructional content, and an "Output" field that represents the corresponding response or completion. Here is a list of the datasets used:
- Locutusque/ColumnedChatCombined
- TokenBender/code_instructions_120k_alpaca_style
- Open-Orca/OpenOrca
- vicgalle/alpaca-gpt4
- ChristophSchuhmann/essays-with-instructions
- checkai/instruction-poems
- pubmed_qa
- BI55/MedText
- nampdn-ai/tiny-codes
It contains two of the following columns:
- Input (string)
- Output (string)
These should hopefully be self-explanatory.
**Dataset Composition:**
- Number of samples: [7283349]
- Languages: English
- License: CC-BY-SA-4.0
**Use Cases:**
The InstructiveMix dataset is suitable for various NLP tasks, including text generation, text completion, translation, summarization, and more. It can be used to train and evaluate language models, code generation models, and other NLP-based applications.
**Dataset Creation:**
The InstructiveMix dataset was created by combining multiple existing datasets with instructional content and adding metadata to facilitate seamless integration. The content spans a diverse set of domains and was sourced from reputable datasets and public sources.
**Acknowledgements:**
I would like to acknowledge the original creators of the datasets used to construct InstructiveMix. Their contributions have enabled the creation of this valuable resource for the NLP community.
**Contact:**
For any questions or inquiries related to the InstructiveMix dataset, please contact me at [locutusque.airshipcraft@gmail.com].
--- | 2,838 | [
[
-0.0200653076171875,
-0.033355712890625,
0.00403594970703125,
0.027496337890625,
-0.002410888671875,
0.0101318359375,
-0.0224609375,
-0.0082244873046875,
0.0001080632209777832,
0.06622314453125,
-0.04608154296875,
-0.0626220703125,
-0.04766845703125,
0.01223... |
thesistranslation/distilled-ccmatrix-de-en | 2023-10-03T09:22:18.000Z | [
"language:de",
"language:en",
"region:us"
] | thesistranslation | null | null | 0 | 30 | 2023-08-17T01:34:16 | ---
dataset_info:
features:
- name: id
dtype: int32
- name: translation
dtype:
translation:
languages:
- de
- en
splits:
- name: train
num_bytes: 7314473226
num_examples: 30000000
download_size: 5149999083
dataset_size: 7314473226
language:
- de
- en
---
# Dataset Card for "distilled-ccmatrix-de-en"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 493 | [
[
-0.043304443359375,
-0.02410888671875,
0.0268707275390625,
0.01494598388671875,
-0.037506103515625,
0.0262298583984375,
0.0019989013671875,
0.010406494140625,
0.0494384765625,
0.024017333984375,
-0.048583984375,
-0.06365966796875,
-0.059722900390625,
-0.0017... |
notrichardren/azaria-mitchell | 2023-08-17T21:22:50.000Z | [
"region:us"
] | notrichardren | null | null | 0 | 30 | 2023-08-17T21:22:46 | ---
configs:
- config_name: default
data_files:
- split: combined
path: data/combined-*
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: claim
dtype: string
- name: label
dtype: int64
- name: dataset
dtype: string
- name: qa_type
dtype: int64
- name: ind
dtype: int64
splits:
- name: combined
num_bytes: 1553103
num_examples: 17092
- name: train
num_bytes: 1244045
num_examples: 13673
- name: test
num_bytes: 309058
num_examples: 3419
download_size: 1228770
dataset_size: 3106206
---
# Dataset Card for "azaria-mitchell"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 790 | [
[
-0.05181884765625,
-0.01885986328125,
0.017242431640625,
0.016021728515625,
-0.00875091552734375,
-0.016876220703125,
0.020721435546875,
-0.014312744140625,
0.061004638671875,
0.042816162109375,
-0.0772705078125,
-0.05877685546875,
-0.05517578125,
-0.0104827... |
thesistranslation/distilled-ccmatrix-fr-en | 2023-10-03T09:22:02.000Z | [
"language:fr",
"language:en",
"region:us"
] | thesistranslation | null | null | 0 | 30 | 2023-08-20T07:18:08 | ---
dataset_info:
features:
- name: id
dtype: int32
- name: translation
dtype:
translation:
languages:
- fr
- en
splits:
- name: train
num_bytes: 7513764655
num_examples: 30000000
download_size: 5154705851
dataset_size: 7513764655
language:
- fr
- en
---
# Dataset Card for "distilled-ccmatrix-fr-en"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 493 | [
[
-0.0450439453125,
-0.01849365234375,
0.0214385986328125,
0.0225372314453125,
-0.0386962890625,
0.0274505615234375,
-0.0020847320556640625,
0.00984954833984375,
0.045501708984375,
0.02142333984375,
-0.054473876953125,
-0.058990478515625,
-0.057647705078125,
0... |
thesistranslation/distilled-ccmatrix-en-fr | 2023-10-03T09:21:51.000Z | [
"language:en",
"language:fr",
"region:us"
] | thesistranslation | null | null | 0 | 30 | 2023-08-20T09:13:26 | ---
dataset_info:
features:
- name: id
dtype: int32
- name: translation
dtype:
translation:
languages:
- en
- fr
splits:
- name: train
num_bytes: 7456743546
num_examples: 30000000
download_size: 5098116495
dataset_size: 7456743546
language:
- en
- fr
---
# Dataset Card for "distilled-ccmatrix-en-fr"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 493 | [
[
-0.04351806640625,
-0.020172119140625,
0.020721435546875,
0.021331787109375,
-0.036895751953125,
0.0274810791015625,
0.0009093284606933594,
0.01209259033203125,
0.04620361328125,
0.023529052734375,
-0.052459716796875,
-0.055694580078125,
-0.05645751953125,
-... |
thesistranslation/distilled-ccmatrix-en-es | 2023-10-03T09:21:40.000Z | [
"language:es",
"language:en",
"region:us"
] | thesistranslation | null | null | 0 | 30 | 2023-08-24T01:32:46 | ---
dataset_info:
features:
- name: id
dtype: int32
- name: translation
dtype:
translation:
languages:
- en
- es
splits:
- name: train
num_bytes: 7084246805
num_examples: 30000000
download_size: 4913968666
dataset_size: 7084246805
language:
- es
- en
---
# Dataset Card for "distilled-ccmatrix-en-es"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 493 | [
[
-0.042083740234375,
-0.0211029052734375,
0.0264129638671875,
0.014190673828125,
-0.034027099609375,
0.0292205810546875,
-0.0007944107055664062,
0.01276397705078125,
0.0556640625,
0.02691650390625,
-0.05499267578125,
-0.061859130859375,
-0.057830810546875,
-0... |
theblackcat102/multiround-programming-convo | 2023-09-07T11:43:59.000Z | [
"task_categories:text-generation",
"size_categories:100K<n<1M",
"language:en",
"data-science",
"programming",
"statistic",
"region:us"
] | theblackcat102 | null | null | 2 | 30 | 2023-09-02T22:12:22 | ---
task_categories:
- text-generation
language:
- en
tags:
- data-science
- programming
- statistic
pretty_name: Multi-Round Programming Conversations
size_categories:
- 100K<n<1M
---
# Multi-Round Programming Conversations
Based on the previous evol-codealpaca-v1 dataset, with added questions sampled from Stack Overflow and Cross Validated, made multi-round!
It should be better suited for training a code assistant that works side by side with you.
## Tasks included in here:
* Data science, statistics, and programming questions
* Code translation: translate a short function between Python, Go, C++, Java, and JavaScript
* Code fixing: fix code with randomly corrupted characters and no tab spacing
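As an illustration of the code-fixing task, here is a rough sketch of one way such corrupted inputs could be produced. The dataset's actual corruption procedure is not documented here, so this is an assumption:

```python
import random

def corrupt(code: str, n_chars: int = 3, seed: int = 0) -> str:
    """Randomly overwrite a few characters and strip tab spacing,
    roughly mimicking the corrupted inputs of the code-fixing task."""
    rng = random.Random(seed)
    # Drop tab spacing (both literal tabs and 4-space indents).
    chars = list(code.replace("\t", "").replace("    ", ""))
    # Overwrite a handful of random positions with arbitrary letters.
    for _ in range(n_chars):
        i = rng.randrange(len(chars))
        chars[i] = rng.choice("abcdefxyz")
    return "".join(chars)

broken = corrupt("def add(a, b):\n    return a + b")
print(len(broken) > 0)  # True
```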
| 685 | [
[
-0.033294677734375,
-0.07122802734375,
0.0325927734375,
0.0259552001953125,
-0.00525665283203125,
0.010955810546875,
-0.0030670166015625,
-0.04486083984375,
0.043243408203125,
0.053070068359375,
-0.0498046875,
-0.03179931640625,
-0.025665283203125,
0.0140762... |
JWBickel/BibleDictionaries | 2023-10-29T00:32:53.000Z | [
"size_categories:10K<n<100K",
"language:en",
"region:us"
] | JWBickel | null | null | 2 | 30 | 2023-09-04T18:09:01 | ---
language:
- en
configs:
- config_name: default
data_files:
- split: train
path:
- "Easton's Bible Dictionary.jsonl"
- "Hitchcock's Bible Names Dictionary.jsonl"
- "Smith's Bible Dictionary.jsonl"
- "TorreysTopicalTextbook.jsonl"
- config_name: Easton
data_files:
- split: train
path: "Easton's Bible Dictionary.jsonl"
- config_name: Hitchcock
data_files:
- split: train
path: "Hitchcock's Bible Names Dictionary.jsonl"
- config_name: Smith
data_files:
- split: train
path: "Smith's Bible Dictionary.jsonl"
- config_name: Torrey
data_files:
- split: train
path: "TorreysTopicalTextbook.jsonl"
size_categories:
- 10K<n<100K
---
JSON for:
- Easton's Bible Dictionary
- Smith's Bible Dictionary
- Hitchcock's Bible Names Dictionary
- Torrey's Topical Textbook | 818 | [
[
0.003917694091796875,
-0.034027099609375,
0.04095458984375,
0.0179595947265625,
-0.02490234375,
0.0210113525390625,
0.016387939453125,
-0.0018329620361328125,
0.03497314453125,
0.035125732421875,
-0.0283660888671875,
-0.047119140625,
-0.07666015625,
0.033264... |
AdamCodd/emotion-balanced | 2023-10-14T23:10:06.000Z | [
"task_categories:text-classification",
"task_ids:multi-class-classification",
"annotations_creators:machine-generated",
"language_creators:machine-generated",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"license:other",
"emotion-classific... | AdamCodd | null | null | 0 | 30 | 2023-09-06T16:49:25 | ---
annotations_creators:
- machine-generated
language_creators:
- machine-generated
language:
- en
license:
- other
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- multi-class-classification
paperswithcode_id: emotion
pretty_name: Emotion
tags:
- emotion-classification
dataset_info:
- config_name: split
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': sadness
'1': joy
'2': love
'3': anger
'4': fear
'5': surprise
splits:
- name: train
num_bytes: 1968209
num_examples: 16000
- name: validation
num_bytes: 247888
num_examples: 2000
- name: test
num_bytes: 244379
num_examples: 2000
download_size: 740883
dataset_size: 2173481
- config_name: unsplit
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': sadness
'1': joy
'2': love
'3': anger
'4': fear
'5': surprise
splits:
- name: train
num_bytes: 10792185
num_examples: 89754
download_size: 10792185
dataset_size: 10792185
train-eval-index:
- config: default
task: text-classification
task_id: multi_class_classification
splits:
train_split: train
eval_split: test
col_mapping:
text: text
label: target
metrics:
- type: accuracy
name: Accuracy
- type: f1
name: F1 macro
args:
average: macro
- type: f1
name: F1 micro
args:
average: micro
- type: f1
name: F1 weighted
args:
average: weighted
- type: precision
name: Precision macro
args:
average: macro
- type: precision
name: Precision micro
args:
average: micro
- type: precision
name: Precision weighted
args:
average: weighted
- type: recall
name: Recall macro
args:
average: macro
- type: recall
name: Recall micro
args:
average: micro
- type: recall
name: Recall weighted
args:
average: weighted
---
# Dataset Card for "emotion"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://github.com/AdamCodd/emotion-dataset](https://github.com/AdamCodd/emotion-dataset)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 10.54 MB
### Dataset Summary
Emotion is a dataset of English Twitter messages with six basic emotions: anger, fear, joy, love, sadness, and surprise. For more detailed information please refer to the paper.
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Structure
### Data Instances
An example looks as follows.
```
{
"text": "im feeling quite sad and sorry for myself but ill snap out of it soon",
"label": 0
}
```
### Data Fields
The data fields are:
- `text`: a `string` feature.
- `label`: a classification label, with possible values including `sadness` (0), `joy` (1), `love` (2), `anger` (3), `fear` (4), `surprise` (5).
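A minimal sketch of mapping integer label ids to emotion names, following the field description above:

```python
# Integer label -> emotion name, per the `label` field description above.
EMOTIONS = ["sadness", "joy", "love", "anger", "fear", "surprise"]

def label_to_name(label: int) -> str:
    return EMOTIONS[label]

print(label_to_name(0))  # sadness
```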
### Data Splits
The dataset has 2 configurations:
- split: with a total of 20_000 examples split into train, validation and test
- unsplit: with a total of 89_754 examples in a single train split
| name | train | validation | test |
|---------|-------:|-----------:|-----:|
| split | 16000 | 2000 | 2000 |
| unsplit | 89754 | n/a | n/a |
## Dataset Creation
### Curation Rationale
This dataset is designed for training machine learning models to perform emotion analysis. It contains text samples from Twitter labeled with six different emotions: sadness, joy, love, anger, fear, and surprise. The dataset is balanced, meaning that it has an equal number of samples for each label.
This dataset is originally sourced from [dair-ai's emotion dataset](https://huggingface.co/datasets/dair-ai/emotion), but the initial dataset was unbalanced and had some duplicate samples. Thus, this dataset has been deduplicated and balanced to ensure an equal number of samples for each emotion label.
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
The dataset should be used for educational and research purposes only.
### Citation Information
If you use this dataset, please cite:
```
@inproceedings{saravia-etal-2018-carer,
title = "{CARER}: Contextualized Affect Representations for Emotion Recognition",
author = "Saravia, Elvis and
Liu, Hsien-Chi Toby and
Huang, Yen-Hao and
Wu, Junlin and
Chen, Yi-Shin",
booktitle = "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
month = oct # "-" # nov,
year = "2018",
address = "Brussels, Belgium",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/D18-1404",
doi = "10.18653/v1/D18-1404",
pages = "3687--3697",
abstract = "Emotions are expressed in nuanced ways, which varies by collective or individual experiences, knowledge, and beliefs. Therefore, to understand emotion, as conveyed through text, a robust mechanism capable of capturing and modeling different linguistic nuances and phenomena is needed. We propose a semi-supervised, graph-based algorithm to produce rich structural descriptors which serve as the building blocks for constructing contextualized affect representations from text. The pattern-based representations are further enriched with word embeddings and evaluated through several emotion recognition tasks. Our experimental results demonstrate that the proposed method outperforms state-of-the-art techniques on emotion recognition tasks.",
}
```
If you want to support me, you can [here](https://ko-fi.com/adamcodd). | 9,059 | [
[
-0.036590576171875,
-0.04852294921875,
0.01052093505859375,
0.0288848876953125,
-0.032196044921875,
-0.0009064674377441406,
-0.0272216796875,
-0.036590576171875,
0.05084228515625,
0.00922393798828125,
-0.059051513671875,
-0.0751953125,
-0.056549072265625,
0.... |
erkam/clevr-full-v5 | 2023-09-07T21:56:08.000Z | [
"region:us"
] | erkam | null | null | 0 | 30 | 2023-09-07T21:36:36 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: val
path: data/val-*
- split: test
path: data/test-*
dataset_info:
features:
- name: image
dtype: image
- name: depth
dtype: image
- name: layout
dtype: image
- name: colored_layout
dtype: image
- name: objects
sequence: int64
- name: boxes
sequence:
sequence: float32
- name: triplets
sequence:
sequence: int64
- name: objects_str
dtype: string
splits:
- name: train
num_bytes: 72217786.0
num_examples: 960
- name: val
num_bytes: 8935628.0
num_examples: 119
- name: test
num_bytes: 8912087.0
num_examples: 119
download_size: 88745185
dataset_size: 90065501.0
---
# Dataset Card for "clevr-full-v5"
25 objects with 4 spatial relationships
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 980 | [
[
-0.057586669921875,
-0.010528564453125,
0.043670654296875,
0.0347900390625,
-0.017120361328125,
-0.036468505859375,
0.018035888671875,
-0.032684326171875,
0.0182342529296875,
0.04425048828125,
-0.044891357421875,
-0.068115234375,
-0.02508544921875,
0.0048141... |
kinianlo/wikipedia_pos_tagged | 2023-09-30T21:41:55.000Z | [
"region:us"
] | kinianlo | null | null | 2 | 30 | 2023-09-24T12:51:20 | ---
dataset_info:
- config_name: 20220301_en_nltk
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
- name: pos_tags
sequence:
sequence:
sequence: string
splits:
- name: train
num_bytes: 88585221192
num_examples: 6458670
download_size: 3527644902
dataset_size: 88585221192
- config_name: 20220301_en_nltk_tags_only
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: pos_tags
sequence:
sequence:
sequence: string
splits:
- name: train
num_bytes: 68920385173
num_examples: 6458670
download_size: 0
dataset_size: 68920385173
- config_name: 20220301_simple_nltk
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
- name: pos_tags
sequence:
sequence:
sequence: string
splits:
- name: train
num_bytes: 1000903680
num_examples: 205328
download_size: 286763992
dataset_size: 1000903680
- config_name: 20220301_simple_nltk_tags_only
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: pos_tags
sequence:
sequence:
sequence: string
splits:
- name: train
num_bytes: 783729741
num_examples: 205328
download_size: 161414334
dataset_size: 783729741
- config_name: 20220301_simple_spacy
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
- name: pos_tags
sequence:
sequence:
sequence: string
splits:
- name: train
num_bytes: 1131814443
num_examples: 205328
download_size: 289479815
dataset_size: 1131814443
- config_name: 20220301_simple_spacy_tags_only
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: pos_tags
sequence:
sequence:
sequence: string
splits:
- name: train
num_bytes: 914640504
num_examples: 205328
download_size: 164284823
dataset_size: 914640504
configs:
- config_name: 20220301_en_nltk
data_files:
- split: train
path: 20220301_en_nltk/train-*
- config_name: 20220301_en_nltk_tags_only
data_files:
- split: train
path: 20220301_en_nltk_tags_only/train-*
- config_name: 20220301_simple_nltk
data_files:
- split: train
path: 20220301_simple_nltk/train-*
- config_name: 20220301_simple_nltk_tags_only
data_files:
- split: train
path: 20220301_simple_nltk_tags_only/train-*
- config_name: 20220301_simple_spacy
data_files:
- split: train
path: 20220301_simple_spacy/train-*
- config_name: 20220301_simple_spacy_tags_only
data_files:
- split: train
path: 20220301_simple_spacy_tags_only/train-*
---
# Dataset Card for "wikipedia_pos_tagged"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 3,107 | [
[
-0.053070068359375,
-0.01436614990234375,
0.0142822265625,
0.0010423660278320312,
-0.0274810791015625,
-0.0265655517578125,
0.01081085205078125,
-0.00981903076171875,
0.061614990234375,
0.0192413330078125,
-0.039337158203125,
-0.06610107421875,
-0.03793334960937... |
vsarathy/nl-robotics-semantic-parsing-info_structure-10k-no-context-TEST | 2023-10-05T13:43:48.000Z | [
"region:us"
] | vsarathy | null | null | 0 | 30 | 2023-10-05T13:43:24 | Entry not found | 15 | [
[
-0.0214080810546875,
-0.01497650146484375,
0.05718994140625,
0.02880859375,
-0.035064697265625,
0.0465087890625,
0.052490234375,
0.00505828857421875,
0.051361083984375,
0.01702880859375,
-0.05206298828125,
-0.01497650146484375,
-0.060302734375,
0.03790283203... |
joey234/affixal_negation_nonce | 2023-10-06T04:20:13.000Z | [
"region:us"
] | joey234 | null | null | 0 | 30 | 2023-10-06T04:20:09 | ---
dataset_info:
features:
- name: word
dtype: string
- name: affix
dtype: string
- name: label
dtype: int64
splits:
- name: train
num_bytes: 11912
num_examples: 418
download_size: 4873
dataset_size: 11912
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "affixal_negation_nonce"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 509 | [
[
-0.05438232421875,
-0.04266357421875,
0.0008196830749511719,
0.01302337646484375,
-0.005558013916015625,
-0.01151275634765625,
0.0151519775390625,
-0.00501251220703125,
0.066650390625,
0.0360107421875,
-0.059661865234375,
-0.06976318359375,
-0.052734375,
0.0... |
ContextualAI/lambada | 2023-10-06T23:56:48.000Z | [
"region:us"
] | ContextualAI | null | null | 0 | 30 | 2023-10-06T23:56:33 | ---
dataset_info:
features:
- name: query
dtype: string
- name: gold_generation
dtype: string
splits:
- name: dev
num_bytes: 1698004
num_examples: 4869
- name: test
num_bytes: 1786025
num_examples: 5153
download_size: 2261199
dataset_size: 3484029
---
# Dataset Card for "lambada"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 453 | [
[
-0.044769287109375,
-0.01172637939453125,
0.0192108154296875,
0.017913818359375,
-0.0133514404296875,
-0.005390167236328125,
0.0238189697265625,
-0.0193328857421875,
0.054290771484375,
0.0273284912109375,
-0.040802001953125,
-0.05706787109375,
-0.0333251953125,
... |
oroikon/chart_captioning | 2023-10-08T15:48:31.000Z | [
"region:us"
] | oroikon | null | null | 0 | 30 | 2023-10-08T14:45:34 | ---
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 395695728.546
num_examples: 7057
- name: test
num_bytes: 48381523.0
num_examples: 882
- name: validation
num_bytes: 48266912.0
num_examples: 883
download_size: 480469420
dataset_size: 492344163.546
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
- split: validation
path: data/validation-*
---
# Dataset Card for "chart_captioning"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 711 | [
[
-0.0316162109375,
-0.005390167236328125,
0.00838470458984375,
0.023895263671875,
-0.030914306640625,
0.01605224609375,
0.020263671875,
-0.0110931396484375,
0.0474853515625,
0.048492431640625,
-0.049530029296875,
-0.052032470703125,
-0.051544189453125,
-0.015... |
bryandts/waste_organic_anorganic_classification | 2023-10-08T19:39:41.000Z | [
"region:us"
] | bryandts | null | null | 0 | 30 | 2023-10-08T18:40:07 | Entry not found | 15 | [
[
-0.0214080810546875,
-0.01497650146484375,
0.05718994140625,
0.02880859375,
-0.035064697265625,
0.0465087890625,
0.052490234375,
0.00505828857421875,
0.051361083984375,
0.01702880859375,
-0.05206298828125,
-0.01497650146484375,
-0.060302734375,
0.03790283203... |
renumics/spotlight-cifar100-enrichment | 2023-10-19T15:07:31.000Z | [
"region:us"
] | renumics | null | null | 0 | 30 | 2023-10-09T15:22:15 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: prediction
dtype:
class_label:
names:
'0': apple
'1': aquarium_fish
'2': baby
'3': bear
'4': beaver
'5': bed
'6': bee
'7': beetle
'8': bicycle
'9': bottle
'10': bowl
'11': boy
'12': bridge
'13': bus
'14': butterfly
'15': camel
'16': can
'17': castle
'18': caterpillar
'19': cattle
'20': chair
'21': chimpanzee
'22': clock
'23': cloud
'24': cockroach
'25': couch
'26': crab
'27': crocodile
'28': cup
'29': dinosaur
'30': dolphin
'31': elephant
'32': flatfish
'33': forest
'34': fox
'35': girl
'36': hamster
'37': house
'38': kangaroo
'39': keyboard
'40': lamp
'41': lawn_mower
'42': leopard
'43': lion
'44': lizard
'45': lobster
'46': man
'47': maple_tree
'48': motorcycle
'49': mountain
'50': mouse
'51': mushroom
'52': oak_tree
'53': orange
'54': orchid
'55': otter
'56': palm_tree
'57': pear
'58': pickup_truck
'59': pine_tree
'60': plain
'61': plate
'62': poppy
'63': porcupine
'64': possum
'65': rabbit
'66': raccoon
'67': ray
'68': road
'69': rocket
'70': rose
'71': sea
'72': seal
'73': shark
'74': shrew
'75': skunk
'76': skyscraper
'77': snail
'78': snake
'79': spider
'80': squirrel
'81': streetcar
'82': sunflower
'83': sweet_pepper
'84': table
'85': tank
'86': telephone
'87': television
'88': tiger
'89': tractor
'90': train
'91': trout
'92': tulip
'93': turtle
'94': wardrobe
'95': whale
'96': willow_tree
'97': wolf
'98': woman
'99': worm
- name: prediction_error
dtype: bool
- name: probability
dtype: float32
- name: entropy
dtype: float32
- name: embedding_reduced
sequence: float32
length: 2
- name: embedding
sequence: float32
length: 768
splits:
- name: train
num_bytes: 154806250
num_examples: 50000
- name: test
num_bytes: 30961250
num_examples: 10000
download_size: 223227009
dataset_size: 185767500
---
# Dataset Card for "spotlight-cifar100-enrichment"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 3,155 | [
[
-0.049102783203125,
-0.011199951171875,
0.01470947265625,
0.0277862548828125,
-0.0062255859375,
0.021209716796875,
0.0059814453125,
-0.0184783935546875,
0.062225341796875,
0.0260467529296875,
-0.054046630859375,
-0.043304443359375,
-0.0306243896484375,
-0.01... |
Luciya/llama-2-nuv-intent-noE-oos | 2023-10-10T06:50:18.000Z | [
"region:us"
] | Luciya | null | null | 0 | 30 | 2023-10-10T06:50:17 | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 828135
num_examples: 1834
download_size: 127293
dataset_size: 828135
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "llama-2-nuv-intent-noE-oos"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 451 | [
[
-0.0173187255859375,
-0.0150146484375,
0.0247955322265625,
0.0244293212890625,
-0.031494140625,
-0.01043701171875,
0.0294647216796875,
-0.00910186767578125,
0.06927490234375,
0.05303955078125,
-0.05828857421875,
-0.061126708984375,
-0.048980712890625,
-0.009... |
BubbleJoe/snli_unified_input | 2023-10-11T00:06:53.000Z | [
"region:us"
] | BubbleJoe | null | null | 1 | 30 | 2023-10-10T20:59:39 | ---
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
- split: train
path: data/train-*
- split: validation
path: data/validation-*
dataset_info:
features:
- name: premise
dtype: string
- name: hypothesis
dtype: string
- name: label
dtype:
class_label:
names:
'0': entailment
'1': neutral
'2': contradiction
- name: input
dtype: string
splits:
- name: test
num_bytes: 2617808
num_examples: 10000
- name: train
num_bytes: 137270292
num_examples: 550152
- name: validation
num_bytes: 2626072
num_examples: 10000
download_size: 40318460
dataset_size: 142514172
---
# Dataset Card for "snli_unified_input"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 886 | [
[
-0.01904296875,
-0.00958251953125,
0.0223388671875,
0.0123138427734375,
-0.004970550537109375,
0.0020847320556640625,
0.0115814208984375,
0.0008935928344726562,
0.06494140625,
0.03240966796875,
-0.05670166015625,
-0.039703369140625,
-0.03070068359375,
-0.011... |
chargoddard/chai-feedback-pairs | 2023-10-13T07:51:45.000Z | [
"region:us"
] | chargoddard | null | null | 0 | 30 | 2023-10-13T07:46:10 | ---
dataset_info:
features:
- name: chosen
dtype: string
- name: chosen_conv_id
dtype: string
- name: rejected
dtype: string
- name: rejected_conv_id
dtype: string
- name: same_user
dtype: bool
splits:
- name: train
num_bytes: 100941843
num_examples: 30084
download_size: 43602877
dataset_size: 100941843
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
Entries from [ChaiML/20231007_chai_prize_model_feedback_all](https://huggingface.co/datasets/ChaiML/20231007_chai_prize_model_feedback_all) grouped into accept/reject pairs sharing the same bot_id.
Subject to the license of, and any restrictions associated with, ChaiML/20231007_chai_prize_model_feedback_all.
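The grouping described above can be sketched roughly as follows. This is an illustrative reconstruction, not the released processing code; the column names (`bot_id`, `thumbs_up`, `text`) are assumptions about the source schema:

```python
import pandas as pd

# Hypothetical feedback table; real column names in the source dataset may differ.
feedback = pd.DataFrame({
    "conv_id":   ["a1", "a2", "b1", "b2"],
    "bot_id":    ["bot_x", "bot_x", "bot_y", "bot_y"],
    "text":      ["good chat", "bad chat", "fun", "boring"],
    "thumbs_up": [True, False, True, False],
})

pairs = []
for bot_id, group in feedback.groupby("bot_id"):
    accepted = group[group["thumbs_up"]]
    rejected = group[~group["thumbs_up"]]
    # Pair each accepted conversation with each rejected one for the same bot.
    for _, acc in accepted.iterrows():
        for _, rej in rejected.iterrows():
            pairs.append({
                "chosen": acc["text"], "chosen_conv_id": acc["conv_id"],
                "rejected": rej["text"], "rejected_conv_id": rej["conv_id"],
            })

print(len(pairs))  # 2
```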
| 753 | [
[
-0.0259857177734375,
-0.0357666015625,
0.031280517578125,
0.037017822265625,
-0.004940032958984375,
-0.012603759765625,
0.0079193115234375,
-0.0330810546875,
0.032012939453125,
0.038055419921875,
-0.066650390625,
-0.0210113525390625,
-0.06439208984375,
0.022... |
erhwenkuo/dolly-15k-chinese-zhtw | 2023-10-13T14:32:29.000Z | [
"task_categories:question-answering",
"task_categories:summarization",
"size_categories:10K<n<100K",
"language:zh",
"license:cc-by-sa-3.0",
"arxiv:2203.02155",
"region:us"
] | erhwenkuo | null | null | 3 | 30 | 2023-10-13T14:10:46 | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: context
dtype: string
- name: response
dtype: string
splits:
- name: train
num_bytes: 10483730
num_examples: 15011
download_size: 7492947
dataset_size: 10483730
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
license: cc-by-sa-3.0
task_categories:
- question-answering
- summarization
language:
- zh
size_categories:
- 10K<n<100K
---
# Dataset Card for "dolly-15k-chinese-zhtw"
## Contents

dolly-15k-chinese-zhtw is an open-source dataset. Its source dataset, [databricks-dolly-15k](https://huggingface.co/datasets/databricks/databricks-dolly-15k), contains instruction-following records generated by thousands of Databricks employees, covering several of the behavioral categories outlined in the [InstructGPT](https://arxiv.org/abs/2203.02155) paper, including brainstorming, classification, closed QA, generation, information extraction, open QA, and summarization.

The dataset may be used for any purpose, academic or commercial, under the terms of the [Creative Commons Attribution-ShareAlike 3.0 Unported License](https://creativecommons.org/licenses/by-sa/3.0/legalcode).

## Supported Tasks

- Training LLMs
- Synthetic data generation
- Data augmentation

## Overview

databricks-dolly-15k is a corpus of more than 15,000 records generated by thousands of Databricks employees, enabling large language models to exhibit the magical interactivity of ChatGPT. Databricks employees were invited to create prompt/response pairs in each of eight instruction categories, including the seven outlined in the InstructGPT paper as well as an open-ended free-form category. Contributors were instructed to avoid using information from any web source other than Wikipedia (for specific subsets of the instruction categories), and were explicitly told to avoid using generative AI when formulating instructions or responses. Examples of each behavior were provided to motivate the kinds of questions and instructions appropriate to each category.

Midway through the data generation process, contributors had the option of answering questions posed by other contributors. They were asked to rephrase the original question and to select only questions they could reasonably be expected to answer correctly.

For some categories, contributors were asked to supply reference text copied from Wikipedia. Reference text (indicated by the context field in the actual dataset) may contain bracketed Wikipedia citation numbers (e.g. [42]), which we recommend users remove in downstream applications.
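A minimal way to strip those bracketed citation markers from the context field (an illustrative sketch, not part of the dataset tooling):

```python
import re

def strip_wiki_citations(text: str) -> str:
    """Remove bracketed Wikipedia citation numbers such as [42]."""
    return re.sub(r"\[\d+\]", "", text)

print(strip_wiki_citations("The Nile is about 6,650 km long.[12][3]"))
```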
## Example

A sample record:
```
{
'instruction': '小森田智昭是什麼時候出生的?',
'context': '小森田出生於1981年7月10日,出生在熊本縣。高中畢業後,他於2000年加入了J1聯賽俱樂部Avispa...',
'response': '小森田智明出生於1981年7月10日。'
}
```
## Data Fields

The data has several fields:

- `instruction`: describes the task the model should perform
- `context`: context for the task
- `response`: the response

## Known Limitations

- Wikipedia is a crowdsourced corpus, and the contents of this dataset may reflect the biases, factual errors, and topical focus found in Wikipedia
- Annotator demographics and subject matter may reflect the makeup of Databricks employees

## Citation
```
@online{DatabricksBlog2023DollyV2,
author = {Mike Conover and Matt Hayes and Ankit Mathur and Jianwei Xie and Jun Wan and Sam Shah and Ali Ghodsi and Patrick Wendell and Matei Zaharia and Reynold Xin},
title = {Free Dolly: Introducing the World's First Truly Open Instruction-Tuned LLM},
year = {2023},
url = {https://www.databricks.com/blog/2023/04/12/dolly-first-open-commercially-viable-instruction-tuned-llm},
urldate = {2023-06-30}
}
```
## License Information

Some categories of data in the dataset include material from the following source, licensed under CC BY-SA 3.0:

- Wikipedia - https://www.wikipedia.org
[
-0.026885986328125,
-0.06597900390625,
-0.00104522705078125,
0.0243377685546875,
-0.03643798828125,
-0.021148681640625,
0.0038127899169921875,
-0.017913818359375,
0.02362060546875,
0.027984619140625,
-0.043609619140625,
-0.056549072265625,
-0.035308837890625,
... |
duncanodhis/databaseinfor | 2023-10-13T15:14:39.000Z | [
"task_categories:question-answering",
"task_categories:summarization",
"language:en",
"license:mit",
"region:us"
] | duncanodhis | null | null | 0 | 30 | 2023-10-13T15:10:58 | ---
license: mit
task_categories:
- question-answering
- summarization
language:
- en
---
# Dataset Card for Dataset Name
<!-- Provide a quick summary of the dataset. -->
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] | 4,450 | [
[
-0.04034423828125,
-0.0419921875,
0.009765625,
0.0178070068359375,
-0.0300445556640625,
-0.00893402099609375,
-0.0026874542236328125,
-0.048431396484375,
0.043212890625,
0.059478759765625,
-0.05938720703125,
-0.069580078125,
-0.042205810546875,
0.00993347167... |
surathisin/wiki_medical_llam2 | 2023-10-16T02:31:04.000Z | [
"region:us"
] | surathisin | null | null | 0 | 30 | 2023-10-15T10:33:36 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
giuid/qrecc_for_t5 | 2023-10-16T11:44:27.000Z | [
"region:us"
] | giuid | null | null | 0 | 30 | 2023-10-16T11:34:46 | Entry not found | 15 | [
[
-0.0214080810546875,
-0.01497650146484375,
0.05718994140625,
0.028839111328125,
-0.0350341796875,
0.046539306640625,
0.052490234375,
0.00507354736328125,
0.051361083984375,
0.016998291015625,
-0.05206298828125,
-0.01496124267578125,
-0.06036376953125,
0.0379... |
asmallgreenpotato/stream-dataset | 2023-10-23T22:33:12.000Z | [
"region:us"
] | asmallgreenpotato | null | null | 0 | 30 | 2023-10-16T19:15:56 | Entry not found | 15 | [
[
-0.0214080810546875,
-0.01497650146484375,
0.05718994140625,
0.028839111328125,
-0.0350341796875,
0.046539306640625,
0.052490234375,
0.00507354736328125,
0.051361083984375,
0.016998291015625,
-0.05206298828125,
-0.01496124267578125,
-0.06036376953125,
0.0379... |
mirfan899/hindi-ner | 2023-10-18T12:19:30.000Z | [
"region:us"
] | mirfan899 | null | null | 0 | 30 | 2023-10-18T04:25:37 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
dataset_info:
features:
- name: id
dtype: string
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': LOCATION
'1': BRAND
'2': TITLE_OBJECT
'3': PERSON
'4': DESIGNATION
'5': ORGANIZATION
'6': ABBREVIATION
'7': TIME
'8': NUMBER
'9': MEASURE
'10': TERMS
'11': O
splits:
- name: train
num_bytes: 22988092
num_examples: 18376
- name: validation
num_bytes: 9784310
num_examples: 7876
- name: test
num_bytes: 9784310
num_examples: 7876
download_size: 6072695
dataset_size: 42556712
---
# Dataset Card for "hindi-ner"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 1,042 | [
[
-0.032501220703125,
-0.0251617431640625,
-0.009521484375,
0.0213623046875,
-0.01473236083984375,
0.01296234130859375,
0.0013895034790039062,
-0.01009368896484375,
0.0633544921875,
0.015655517578125,
-0.048126220703125,
-0.044677734375,
-0.059478759765625,
0.... |
ck46/hendrycks_math | 2023-10-19T17:48:20.000Z | [
"region:us"
] | ck46 | null | null | 0 | 30 | 2023-10-19T17:48:13 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: problem
dtype: string
- name: level
dtype: string
- name: type
dtype: string
- name: solution
dtype: string
splits:
- name: train
num_bytes: 5984772
num_examples: 7500
- name: test
num_bytes: 3732833
num_examples: 5000
download_size: 4848007
dataset_size: 9717605
---
# Dataset Card for "hendryks_math"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 647 | [
[
-0.043182373046875,
-0.01092529296875,
0.0034465789794921875,
0.029205322265625,
-0.00986480712890625,
-0.0150299072265625,
0.006931304931640625,
-0.0055694580078125,
0.05419921875,
0.0300140380859375,
-0.066650390625,
-0.05316162109375,
-0.029754638671875,
... |
nicolaslee/CGLibrary | 2023-10-19T18:15:19.000Z | [
"region:us"
] | nicolaslee | null | null | 0 | 30 | 2023-10-19T18:15:09 | Entry not found | 15 | [
[
-0.0214080810546875,
-0.01497650146484375,
0.05718994140625,
0.028839111328125,
-0.0350341796875,
0.046539306640625,
0.052490234375,
0.00507354736328125,
0.051361083984375,
0.016998291015625,
-0.05206298828125,
-0.01496124267578125,
-0.06036376953125,
0.0379... |
sshreyy/accord_donut_official | 2023-10-20T12:01:51.000Z | [
"region:us"
] | sshreyy | null | null | 0 | 30 | 2023-10-20T12:00:54 | ---
dataset_info:
features:
- name: image
dtype: image
- name: ground_truth
dtype: string
splits:
- name: train
num_bytes: 43888147.0
num_examples: 584
- name: test
num_bytes: 3244555.0
num_examples: 38
- name: validation
num_bytes: 5095471.0
num_examples: 68
download_size: 24415431
dataset_size: 52228173.0
---
# Dataset Card for "accord_donut_official"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 538 | [
[
-0.018280029296875,
-0.02813720703125,
0.01812744140625,
0.0087432861328125,
-0.0011758804321289062,
0.0032176971435546875,
-0.0069427490234375,
0.000545501708984375,
0.052825927734375,
0.032470703125,
-0.056640625,
-0.052825927734375,
-0.03765869140625,
-0.... |
oscarlaird/miniF2f_valid_hf_dataset | 2023-10-24T14:54:33.000Z | [
"region:us"
] | oscarlaird | null | null | 0 | 30 | 2023-10-20T19:11:03 | ---
dataset_info:
features:
- name: informal_statement
dtype: string
- name: formal_statement
dtype: string
splits:
- name: train
num_bytes: 69374
num_examples: 244
download_size: 0
dataset_size: 69374
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "miniF2f_valid_hf_dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 500 | [
[
-0.03167724609375,
-0.0263671875,
0.0125732421875,
0.01922607421875,
-0.0148773193359375,
-0.0082244873046875,
0.018951416015625,
-0.005199432373046875,
0.04010009765625,
0.0230712890625,
-0.05462646484375,
-0.035736083984375,
-0.0391845703125,
-0.0043983459... |
gopikrsmscs/torch-issues | 2023-10-23T17:12:03.000Z | [
"task_categories:feature-extraction",
"size_categories:1K<n<10K",
"license:apache-2.0",
"region:us"
] | gopikrsmscs | null | null | 0 | 30 | 2023-10-22T05:37:37 | ---
license: apache-2.0
pretty_name: Pytorch Github Issues Metadata
size_categories:
- 1K<n<10K
task_categories:
- feature-extraction
---
# Dataset Card for Dataset Name
<!-- Provide a quick summary of the dataset. -->
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] | 4,498 | [
[
-0.04034423828125,
-0.0419921875,
0.009765625,
0.0178070068359375,
-0.0300445556640625,
-0.00893402099609375,
-0.0026874542236328125,
-0.048431396484375,
0.043212890625,
0.059478759765625,
-0.05938720703125,
-0.069580078125,
-0.042205810546875,
0.00993347167... |
Naveengo/flickr8k | 2023-10-26T08:06:49.000Z | [
"task_categories:image-to-text",
"license:apache-2.0",
"region:us"
] | Naveengo | null | null | 0 | 30 | 2023-10-26T08:02:48 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 1132031887.712
num_examples: 8091
download_size: 1114562282
dataset_size: 1132031887.712
license: apache-2.0
task_categories:
- image-to-text
---
# Dataset Card for "flickr8k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 539 | [
[
-0.047119140625,
0.00514984130859375,
0.01462554931640625,
0.0110321044921875,
-0.027313232421875,
-0.004497528076171875,
0.040557861328125,
-0.01129913330078125,
0.04736328125,
0.0325927734375,
-0.06005859375,
-0.0439453125,
-0.042236328125,
-0.011665344238... |
bobbybelajar/AmazonGrouped | 2023-10-29T07:39:25.000Z | [
"region:us"
] | bobbybelajar | null | null | 0 | 30 | 2023-10-29T07:06:42 | Entry not found | 15 | [
[
-0.021392822265625,
-0.01494598388671875,
0.05718994140625,
0.028839111328125,
-0.0350341796875,
0.046539306640625,
0.052490234375,
0.00507354736328125,
0.051361083984375,
0.01702880859375,
-0.052093505859375,
-0.01494598388671875,
-0.06036376953125,
0.03790... |
Salesforce/cloudops_tsf | 2023-10-31T08:23:00.000Z | [
"task_categories:time-series-forecasting",
"size_categories:100M<n<1B",
"license:cc-by-4.0",
"arxiv:2310.05063",
"region:us"
] | Salesforce | null | @article{woo2023pushing,
title={Pushing the Limits of Pre-training for Time Series Forecasting in the CloudOps Domain},
author={Woo, Gerald and Liu, Chenghao and Kumar, Akshat and Sahoo, Doyen},
journal={arXiv preprint arXiv:2310.05063},
year={2023}
} | 1 | 30 | 2023-10-29T07:51:30 | ---
license: cc-by-4.0
task_categories:
- time-series-forecasting
pretty_name: cloud
size_categories:
- 100M<n<1B
---
# Pushing the Limits of Pre-training for Time Series Forecasting in the CloudOps Domain
[Paper](https://arxiv.org/abs/2310.05063) | [Code](https://github.com/SalesforceAIResearch/pretrain-time-series-cloudops)
Datasets accompanying the paper "Pushing the Limits of Pre-training for Time Series Forecasting in the CloudOps Domain".
## Quick Start
```python
from datasets import load_dataset
dataset = load_dataset('Salesforce/cloudops_tsf', 'azure_vm_traces_2017')
```
## Available Datasets
### azure_vm_traces_2017
```python
DatasetDict({
train_test: Dataset({
features: ['start', 'target', 'item_id', 'feat_static_cat', 'feat_static_real', 'past_feat_dynamic_real'],
num_rows: 17568
})
pretrain: Dataset({
features: ['start', 'target', 'item_id', 'feat_static_cat', 'feat_static_real', 'past_feat_dynamic_real'],
num_rows: 159472
})
})
```
### borg_cluster_data_2011
```python
DatasetDict({
train_test: Dataset({
features: ['start', 'target', 'item_id', 'feat_static_cat', 'past_feat_dynamic_real'],
num_rows: 11117
})
pretrain: Dataset({
features: ['start', 'target', 'item_id', 'feat_static_cat', 'past_feat_dynamic_real'],
num_rows: 143386
})
})
```
### alibaba_cluster_trace_2018
```python
DatasetDict({
train_test: Dataset({
features: ['start', 'target', 'item_id', 'feat_static_cat', 'past_feat_dynamic_real'],
num_rows: 6048
})
pretrain: Dataset({
features: ['start', 'target', 'item_id', 'feat_static_cat', 'past_feat_dynamic_real'],
num_rows: 58409
})
})
```
## Dataset Config
```python
from datasets import load_dataset_builder
config = load_dataset_builder('Salesforce/cloudops_tsf', 'alibaba_cluster_trace_2018').config
print(config)
CloudOpsTSFConfig(
name='alibaba_cluster_trace_2018',
version=1.0.0,
data_dir=None,
data_files=None,
description='',
prediction_length=48,
freq='5T',
stride=48,
univariate=False,
multivariate=True,
optional_fields=('feat_static_cat', 'past_feat_dynamic_real'),
rolling_evaluations=12,
test_split_date=Period('2018-01-08 11:55', '5T'),
_feat_static_cat_cardinalities={
'pretrain': (
('container_id', 64457),
('app_du', 9484)),
'train_test': (
('container_id', 6048),
('app_du', 1292)
)
},
target_dim=2,
feat_static_real_dim=0,
past_feat_dynamic_real_dim=6
)
```
```test_split_date``` is provided to achieve the same train-test split as given in the paper.
This is essentially the date/time of ```rolling_evaluations * prediction_length``` time steps before the last time step in the dataset.
Note that the pre-training dataset includes the test region, and thus should also be filtered before usage.
## Acknowledgements
The datasets were processed from the following original sources. Please cite the original sources if you use the datasets.
* Azure VM Traces 2017
* Eli Cortez, Anand Bonde, Alexandre Muzio, Mark Russinovich, Marcus Fontoura, and Ricardo Bianchini. Resource Central: Understanding and predicting workloads for improved resource management in large cloud platforms. In Proceedings of the 26th Symposium on Operating Systems Principles, pp. 153–167, 2017.
* https://github.com/Azure/AzurePublicDataset
* Borg Cluster Data 2011
* John Wilkes. More Google cluster data. Google research blog, November 2011. Posted at http://googleresearch.blogspot.com/2011/11/more-google-cluster-data.html.
* https://github.com/google/cluster-data
* Alibaba Cluster Trace 2018
* Jing Guo, Zihao Chang, Sa Wang, Haiyang Ding, Yihui Feng, Liang Mao, and Yungang Bao. Who limits the resource efficiency of my datacenter: An analysis of alibaba datacenter traces. In Proceedings of the International Symposium on Quality of Service, pp. 1–10, 2019.
* https://github.com/alibaba/clusterdata
## Citation
```
@article{woo2023pushing,
title={Pushing the Limits of Pre-training for Time Series Forecasting in the CloudOps Domain},
author={Woo, Gerald and Liu, Chenghao and Kumar, Akshat and Sahoo, Doyen},
journal={arXiv preprint arXiv:2310.05063},
year={2023}
}
```
| 4,249 | [
[
-0.040771484375,
-0.02642822265625,
0.0185699462890625,
0.003757476806640625,
-0.03173828125,
-0.005649566650390625,
-0.008087158203125,
-0.0211029052734375,
0.021728515625,
0.0222015380859375,
-0.07470703125,
-0.035369873046875,
-0.0287322998046875,
-0.0226... |
aminlouhichi/donut5 | 2023-10-30T12:55:34.000Z | [
"region:us"
] | aminlouhichi | null | null | 0 | 30 | 2023-10-30T12:55:16 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
dataset_info:
features:
- name: image
dtype: image
- name: ground_truth
dtype: string
splits:
- name: train
num_bytes: 12953017.0
num_examples: 60
- name: validation
num_bytes: 12953017.0
num_examples: 60
- name: test
num_bytes: 25755968.0
num_examples: 60
download_size: 41314952
dataset_size: 51662002.0
---
# Dataset Card for "donut5"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 698 | [
[
-0.0303192138671875,
-0.004634857177734375,
0.022064208984375,
0.0065460205078125,
0.0008540153503417969,
0.00989532470703125,
0.01497650146484375,
-0.00896453857421875,
0.051239013671875,
0.034393310546875,
-0.058135986328125,
-0.0540771484375,
-0.041748046875,... |
eltoai/test-hello | 2023-10-31T18:34:15.000Z | [
"region:us"
] | eltoai | null | null | 0 | 30 | 2023-10-31T18:34:15 | ---
dataset_info:
features:
- name: id
dtype: int64
- name: data
sequence: string
splits:
- name: train
num_bytes: 45780
num_examples: 1000
download_size: 19559
dataset_size: 45780
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "test-hello"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 465 | [
[
-0.057891845703125,
-0.02996826171875,
0.004558563232421875,
0.00247955322265625,
-0.00873565673828125,
-0.00025582313537597656,
0.01763916015625,
-0.0170745849609375,
0.057647705078125,
0.020843505859375,
-0.059051513671875,
-0.052154541015625,
-0.034423828125,... |
lukesjordan/worldbank-project-documents | 2022-10-24T20:10:40.000Z | [
"task_categories:table-to-text",
"task_categories:question-answering",
"task_categories:summarization",
"task_categories:text-generation",
"task_ids:abstractive-qa",
"task_ids:closed-domain-qa",
"task_ids:extractive-qa",
"task_ids:language-modeling",
"task_ids:named-entity-recognition",
"task_ids:... | lukesjordan | null | null | 2 | 29 | 2022-03-02T23:29:22 | ---
annotations_creators:
- no-annotation
language_creators:
- found
language:
- en
license:
- other
multilinguality:
- monolingual
size_categories:
- unknown
source_datasets:
- original
task_categories:
- table-to-text
- question-answering
- summarization
- text-generation
task_ids:
- abstractive-qa
- closed-domain-qa
- extractive-qa
- language-modeling
- named-entity-recognition
- text-simplification
pretty_name: worldbank_project_documents
language_bcp47:
- en-US
tags:
- conditional-text-generation
- structure-prediction
---
# Dataset Card for World Bank Project Documents
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:** https://github.com/luke-grassroot/aid-outcomes-ml
- **Paper:** Forthcoming
- **Point of Contact:** Luke Jordan (lukej at mit)
### Dataset Summary
This is a dataset of documents related to World Bank development projects in the period 1947-2020. The dataset includes
the documents used to propose or describe projects when they are launched, and those written when projects are reviewed. The documents are indexed
by the World Bank project ID, which can be used to obtain features from multiple publicly available tabular datasets.
### Supported Tasks and Leaderboards
No leaderboard yet. The dataset supports a wide range of possible tasks, including varieties of summarization, QA, and language modelling. To date, it has been used primarily in conjunction with tabular data (via BERT embeddings) to predict project outcomes.
### Languages
English
## Dataset Structure
### Data Instances
### Data Fields
* World Bank project ID
* Document text
* Document type: "APPROVAL" for documents written at the beginning of a project, when it is approved; and "REVIEW" for documents written at the end of a project
### Data Splits
To allow for open exploration, and since different applications will want to do splits based on different sampling weights, we have not made a train/test split but have left all files in the train split.
## Dataset Creation
### Source Data
Documents were scraped from the World Bank's public project archive, following links through to specific project pages and then collecting the text files made available by the [World Bank](https://projects.worldbank.org/en/projects-operations/projects-home).
### Annotations
This dataset is not annotated.
### Personal and Sensitive Information
None.
## Considerations for Using the Data
### Social Impact of Dataset
Affects development projects, which can have large-scale consequences for many millions of people.
### Discussion of Biases
The documents reflect the history of development, which has well-documented and well-studied issues with the imposition of developed world ideas on developing world countries. The documents provide a way to study those in the field of development, but should not be used for their description of the recipient countries, since that language will reflect a multitude of biases, especially in the earlier reaches of the historical projects.
## Additional Information
### Dataset Curators
Luke Jordan, Busani Ndlovu.
### Licensing Information
MIT +no-false-attribs license (MITNFA).
### Citation Information
@dataset{world-bank-project-documents,
author = {Jordan, Luke and Ndlovu, Busani and Shenk, Justin},
title = {World Bank Project Documents Dataset},
year = {2021}
}
### Contributions
Thanks to [@luke-grassroot](https://github.com/luke-grassroot), [@FRTNX](https://github.com/FRTNX/) and [@justinshenk](https://github.com/justinshenk) for adding this dataset. | 4,628 | [
[
-0.042694091796875,
-0.0484619140625,
0.004093170166015625,
0.01947021484375,
-0.02459716796875,
0.0158843994140625,
-0.00792694091796875,
-0.036834716796875,
0.01293182373046875,
0.046417236328125,
-0.038726806640625,
-0.056640625,
-0.051727294921875,
-0.00... |
qanastek/ECDC | 2022-10-23T04:59:32.000Z | [
"task_categories:translation",
"annotations_creators:machine-generated",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:en-sv",
"multilinguality:en-pl",
"multilinguality:en-hu",
"multilinguality:en-lt",
"multilinguality:en-sk",
"multilinguality:en-ga",
"mult... | qanastek | null | @article{10.1007/s10579-014-9277-0,
author = {Steinberger, Ralf and Ebrahim, Mohamed and Poulis, Alexandros and Carrasco-Benitez, Manuel and Schl\"{u}ter, Patrick and Przybyszewski, Marek and Gilbro, Signe},
title = {An Overview of the European Union's Highly Multilingual Parallel Corpora},
year = {2014},
issue_date = {December 2014},
publisher = {Springer-Verlag},
address = {Berlin, Heidelberg},
volume = {48},
number = {4},
issn = {1574-020X},
url = {https://doi.org/10.1007/s10579-014-9277-0},
doi = {10.1007/s10579-014-9277-0},
abstract = {Starting in 2006, the European Commission's Joint Research Centre and other European Union organisations have made available a number of large-scale highly-multilingual parallel language resources. In this article, we give a comparative overview of these resources and we explain the specific nature of each of them. This article provides answers to a number of question, including: What are these linguistic resources? What is the difference between them? Why were they originally created and why was the data released publicly? What can they be used for and what are the limitations of their usability? What are the text types, subject domains and languages covered? How to avoid overlapping document sets? How do they compare regarding the formatting and the translation alignment? What are their usage conditions? What other types of multilingual linguistic resources does the EU have? This article thus aims to clarify what the similarities and differences between the various resources are and what they can be used for. It will also serve as a reference publication for those resources, for which a more detailed description has been lacking so far (EAC-TM, ECDC-TM and DGT-Acquis).},
journal = {Lang. Resour. Eval.},
month = {dec},
pages = {679–707},
numpages = {29},
keywords = {DCEP, EAC-TM, EuroVoc, JRC EuroVoc Indexer JEX, Parallel corpora, DGT-TM, Eur-Lex, Highly multilingual, Linguistic resources, DGT-Acquis, European Union, ECDC-TM, JRC-Acquis, Translation memory}
} | 1 | 29 | 2022-03-02T23:29:22 | ---
annotations_creators:
- machine-generated
- expert-generated
language_creators:
- found
language:
- en
license:
- other
multilinguality:
- en-sv
- en-pl
- en-hu
- en-lt
- en-sk
- en-ga
- en-fr
- en-cs
- en-el
- en-it
- en-lv
- en-da
- en-nl
- en-bg
- en-is
- en-ro
- en-no
- en-pt
- en-es
- en-et
- en-mt
- en-sl
- en-fi
- en-de
pretty_name: ECDC
size_categories:
- 100K<n<1M
source_datasets:
- extended
task_categories:
- translation
- machine-translation
task_ids:
- translation
- machine-translation
---
# ECDC : An overview of the European Union's highly multilingual parallel corpora
## Table of Contents
- [Dataset Card for [Needs More Information]](#dataset-card-for-needs-more-information)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
- [Who are the source language producers?](#who-are-the-source-language-producers)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [No Warranty](#no-warranty)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** https://joint-research-centre.ec.europa.eu/language-technology-resources/ecdc-translation-memory_en#Introduction
- **Repository:** https://joint-research-centre.ec.europa.eu/language-technology-resources/ecdc-translation-memory_en#Introduction
- **Paper:** https://dl.acm.org/doi/10.1007/s10579-014-9277-0
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [Yanis Labrak](mailto:yanis.labrak@univ-avignon.fr)
### Dataset Summary
In October 2012, the European Union (EU) agency 'European Centre for Disease Prevention and Control' (ECDC) released a translation memory (TM), i.e. a collection of sentences and their professionally produced translations, in twenty-five languages. The data is distributed via the [web pages of the EC's Joint Research Centre (JRC)](https://joint-research-centre.ec.europa.eu/language-technology-resources/ecdc-translation-memory_en#Introduction).
### Supported Tasks and Leaderboards
`translation`: The dataset can be used to train a model for translation.
### Languages
In our case, the corpus consists of pairs of source and target sentences for 22 different languages of the European Union (EU).
**List of languages:** `English (en)`, `Swedish (sv)`, `Polish (pl)`, `Hungarian (hu)`, `Lithuanian (lt)`, `Latvian (lv)`, `German (de)`, `Finnish (fi)`, `Slovak (sk)`, `Slovenian (sl)`, `French (fr)`, `Czech (cs)`, `Danish (da)`, `Italian (it)`, `Maltese (mt)`, `Dutch (nl)`, `Portuguese (pt)`, `Romanian (ro)`, `Spanish (es)`, `Estonian (et)`, `Bulgarian (bg)`, `Greek (el)`, `Irish (ga)`, `Icelandic (is)` and `Norwegian (no)`.
## Load the dataset with HuggingFace
```python
from datasets import load_dataset
dataset = load_dataset("qanastek/ECDC", "en-it", split='train', download_mode='force_redownload')
print(dataset)
print(dataset[0])
```
## Dataset Structure
### Data Instances
```plain
key,lang,source_text,target_text
doc_0,en-bg,Vaccination against hepatitis C is not yet available.,Засега няма ваксина срещу хепатит С.
doc_1355,en-bg,Varicella infection,Инфекция с варицела
doc_2349,en-bg,"If you have any questions about the processing of your e-mail and related personal data, do not hesitate to include them in your message.","Ако имате въпроси относно обработката на вашия адрес на електронна поща и свързаните лични данни, не се колебайте да ги включите в съобщението си."
doc_192,en-bg,Transmission can be reduced especially by improving hygiene in food production handling.,Предаването на инфекцията може да бъде ограничено особено чрез подобряване на хигиената при манипулациите в хранителната индустрия.
```
### Data Fields
**key** : The document identifier `String`.
**lang** : The pair of source and target language of type `String`.
**source_text** : The source text of type `String`.
**target_text** : The target text of type `String`.
### Data Splits
|lang | key |
|-----|-----|
|en-bg|2567 |
|en-cs|2562 |
|en-da|2577 |
|en-de|2560 |
|en-el|2530 |
|en-es|2564 |
|en-et|2581 |
|en-fi|2617 |
|en-fr|2561 |
|en-ga|1356 |
|en-hu|2571 |
|en-is|2511 |
|en-it|2534 |
|en-lt|2545 |
|en-lv|2542 |
|en-mt|2539 |
|en-nl|2510 |
|en-no|2537 |
|en-pl|2546 |
|en-pt|2531 |
|en-ro|2555 |
|en-sk|2525 |
|en-sl|2545 |
|en-sv|2527 |
## Dataset Creation
### Curation Rationale
For details, check the corresponding [pages](https://joint-research-centre.ec.europa.eu/language-technology-resources/ecdc-translation-memory_en#Introduction).
### Source Data
<!-- #### Initial Data Collection and Normalization
ddd -->
#### Who are the source language producers?
All of the data in this corpus has been uploaded on the [JRC](https://joint-research-centre.ec.europa.eu/language-technology-resources/ecdc-translation-memory_en#Introduction) website.
### Personal and Sensitive Information
The corpus is free of personal or sensitive information.
## Considerations for Using the Data
### Other Known Limitations
The nature of the task introduces variability in the quality of the target translations.
## Additional Information
### Dataset Curators
__Hugging Face ECDC__: Labrak Yanis, Dufour Richard (Not affiliated with the original corpus)
__An overview of the European Union's highly multilingual parallel corpora__: Steinberger Ralf, Mohamed Ebrahim, Alexandros Poulis, Manuel Carrasco-Benitez, Patrick Schlüter, Marek Przybyszewski & Signe Gilbro.
### Licensing Information
By downloading or using the ECDC-Translation Memory, you are bound by the [ECDC-TM usage conditions (PDF)](https://wt-public.emm4u.eu/Resources/ECDC-TM/2012_10_Terms-of-Use_ECDC-TM.pdf).
### No Warranty
Each Work is provided ‘as is’ without, to the full extent permitted by law, representations,
warranties, obligations and liabilities of any kind, either express or implied, including, but
not limited to, any implied warranty of merchantability, integration, satisfactory quality and
fitness for a particular purpose.
Except in the cases of wilful misconduct or damages directly caused to natural persons, the
Owner will not be liable for any incidental, consequential, direct or indirect damages,
including, but not limited to, the loss of data, lost profits or any other financial loss arising
from the use of, or inability to use, the Work even if the Owner has been notified of the
possibility of such loss, damages, claims or costs, or for any claim by any third party. The
Owner may be liable under national statutory product liability laws as far as such laws apply
to the Work.
### Citation Information
Please cite the following paper when using this dataset.
```latex
@article{10.1007/s10579-014-9277-0,
author = {Steinberger, Ralf and Ebrahim, Mohamed and Poulis, Alexandros and Carrasco-Benitez, Manuel and Schl\"{u}ter, Patrick and Przybyszewski, Marek and Gilbro, Signe},
title = {An Overview of the European Union's Highly Multilingual Parallel Corpora},
year = {2014},
issue_date = {December 2014},
publisher = {Springer-Verlag},
address = {Berlin, Heidelberg},
volume = {48},
number = {4},
issn = {1574-020X},
url = {https://doi.org/10.1007/s10579-014-9277-0},
doi = {10.1007/s10579-014-9277-0},
abstract = {Starting in 2006, the European Commission's Joint Research Centre and other European Union organisations have made available a number of large-scale highly-multilingual parallel language resources. In this article, we give a comparative overview of these resources and we explain the specific nature of each of them. This article provides answers to a number of question, including: What are these linguistic resources? What is the difference between them? Why were they originally created and why was the data released publicly? What can they be used for and what are the limitations of their usability? What are the text types, subject domains and languages covered? How to avoid overlapping document sets? How do they compare regarding the formatting and the translation alignment? What are their usage conditions? What other types of multilingual linguistic resources does the EU have? This article thus aims to clarify what the similarities and differences between the various resources are and what they can be used for. It will also serve as a reference publication for those resources, for which a more detailed description has been lacking so far (EAC-TM, ECDC-TM and DGT-Acquis).},
journal = {Lang. Resour. Eval.},
month = {dec},
pages = {679–707},
numpages = {29},
keywords = {DCEP, EAC-TM, EuroVoc, JRC EuroVoc Indexer JEX, Parallel corpora, DGT-TM, Eur-Lex, Highly multilingual, Linguistic resources, DGT-Acquis, European Union, ECDC-TM, JRC-Acquis, Translation memory}
}
```
| 9,512 | [
[
-0.03985595703125,
-0.04071044921875,
0.024627685546875,
0.01404571533203125,
-0.016754150390625,
0.00641632080078125,
-0.047698974609375,
-0.029815673828125,
0.0233154296875,
0.0210113525390625,
-0.04522705078125,
-0.07550048828125,
-0.036529541015625,
0.04... |
solomonk/reddit_mental_health_posts | 2022-01-11T15:40:01.000Z | [
"region:us"
] | solomonk | null | null | 8 | 29 | 2022-03-02T23:29:22 | # Reddit posts about mental health
## files
- adhd.csv from r/adhd
- aspergers.csv from r/aspergers
- depression.csv from r/depression
- ocd.csv from r/ocd
- ptsd.csv from r/ptsd
## fields
- author
- body
- created_utc
- id
- num_comments
- score
- subreddit
- title
- upvote_ratio
- url
For more details about these fields, see the [Praw Submission](https://praw.readthedocs.io/en/latest/code_overview/models/submission.html) documentation.
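The per-subreddit CSVs can be explored with pandas. Below is a minimal sketch: the two-row CSV is synthetic and only mirrors the column layout listed above (in practice you would pass the path to e.g. `adhd.csv` to `read_csv`).

```python
import io

import pandas as pd

# Synthetic stand-in for one of the per-subreddit CSVs (same columns).
csv_text = """author,body,created_utc,id,num_comments,score,subreddit,title,upvote_ratio,url
u1,some text,1600000000,a1,3,10,adhd,first post,0.91,https://example.com/a1
u2,other text,1600000100,a2,0,2,adhd,second post,0.66,https://example.com/a2
"""

df = pd.read_csv(io.StringIO(csv_text))

# created_utc is a Unix timestamp; convert it for time-based filtering
df["created_utc"] = pd.to_datetime(df["created_utc"], unit="s")

# e.g. keep only posts that attracted discussion
discussed = df[df["num_comments"] > 0]
print(len(discussed))  # → 1
```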
| 425 | [
[
-0.0211029052734375,
-0.046722412109375,
0.0643310546875,
0.032562255859375,
-0.012237548828125,
0.0216522216796875,
0.01314544677734375,
-0.0219268798828125,
0.05499267578125,
0.0235443115234375,
-0.0626220703125,
-0.048614501953125,
-0.05120849609375,
0.02... |
tau/multi_news | 2022-03-24T08:56:03.000Z | [
"region:us"
] | tau | Multi-News, consists of news articles and human-written summaries
of these articles from the site newser.com.
Each summary is professionally written by editors and
includes links to the original articles cited.
There are two features:
- document: text of news articles seperated by special token "|||||".
- summary: news summary. | @misc{alex2019multinews,
title={Multi-News: a Large-Scale Multi-Document Summarization Dataset and Abstractive Hierarchical Model},
author={Alexander R. Fabbri and Irene Li and Tianwei She and Suyi Li and Dragomir R. Radev},
year={2019},
eprint={1906.01749},
archivePrefix={arXiv},
primaryClass={cs.CL}
} | 0 | 29 | 2022-03-23T14:58:54 | Entry not found | 15 | [
[
-0.021392822265625,
-0.01494598388671875,
0.05718994140625,
0.028839111328125,
-0.0350341796875,
0.046539306640625,
0.052490234375,
0.00507354736328125,
0.051361083984375,
0.01702880859375,
-0.052093505859375,
-0.01494598388671875,
-0.06036376953125,
0.03790... |
wza/roc_stories | 2022-05-03T06:19:34.000Z | [
"region:us"
] | wza | This new dataset is designed to solve this great NLP task and is crafted with a lot of care. | @InProceedings{huggingface:dataset,
title = {A great new dataset},
author={huggingface, Inc.
},
year={2020}
} | 2 | 29 | 2022-05-03T02:15:53 | Entry not found | 15 | [
[
-0.021392822265625,
-0.01494598388671875,
0.05718994140625,
0.028839111328125,
-0.0350341796875,
0.046539306640625,
0.052490234375,
0.00507354736328125,
0.051361083984375,
0.01702880859375,
-0.052093505859375,
-0.01494598388671875,
-0.06036376953125,
0.03790... |
nateraw/ade20k-tiny | 2022-07-08T06:58:09.000Z | [
"task_categories:image-segmentation",
"task_ids:semantic-segmentation",
"annotations_creators:crowdsourced",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:n<1K",
"source_datasets:extended|ade20k",
"language:en",
"license:bsd-3-c... | nateraw | null | null | 1 | 29 | 2022-06-15T04:32:58 | ---
annotations_creators:
- crowdsourced
- expert-generated
language_creators:
- found
language:
- en
license:
- bsd-3-clause
multilinguality:
- monolingual
size_categories:
- n<1K
source_datasets:
- extended|ade20k
task_categories:
- image-segmentation
task_ids:
- semantic-segmentation
pretty_name: ADE 20K Tiny
---
# Dataset Card for ADE 20K Tiny
This is a tiny subset of the ADE 20K dataset, which you can find [here](https://huggingface.co/datasets/scene_parse_150). | 474 | [
[
-0.058349609375,
-0.01708984375,
0.0193023681640625,
0.01160430908203125,
-0.0215911865234375,
-0.00608062744140625,
0.0292816162109375,
-0.01332855224609375,
0.05291748046875,
0.03350830078125,
-0.08477783203125,
-0.04510498046875,
-0.0160980224609375,
-0.0... |
cjvt/solar3 | 2022-10-21T07:35:45.000Z | [
"task_categories:text2text-generation",
"task_categories:other",
"annotations_creators:expert-generated",
"language_creators:other",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:sl",
"license:cc-by-nc-sa-4.0",
"gram... | cjvt | Šolar is a developmental corpus of 5485 school texts (e.g., essays), written by students in Slovenian secondary schools
(age 15-19) and pupils in the 7th-9th grade of primary school (13-15), with a small percentage also from the 6th grade.
Part of the corpus (1516 texts) is annotated with teachers' corrections using a system of labels described in the
document available at https://www.clarin.si/repository/xmlui/bitstream/handle/11356/1589/Smernice-za-oznacevanje-korpusa-Solar_V1.1.pdf (in Slovenian). | @misc{solar3.0,
title = {Developmental corpus {\v S}olar 3.0},
author = {Arhar Holdt, {\v S}pela and Rozman, Tadeja and Stritar Ku{\v c}uk, Mojca and Krek, Simon and Krap{\v s} Vodopivec, Irena and Stabej, Marko and Pori, Eva and Goli, Teja and Lavri{\v c}, Polona and Laskowski, Cyprian and Kocjan{\v c}i{\v c}, Polonca and Klemenc, Bojan and Krsnik, Luka and Kosem, Iztok},
url = {http://hdl.handle.net/11356/1589},
note = {Slovenian language resource repository {CLARIN}.{SI}},
year = {2022}
} | 0 | 29 | 2022-09-07T09:16:23 | ---
annotations_creators:
- expert-generated
language_creators:
- other
language:
- sl
license:
- cc-by-nc-sa-4.0
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
- 1K<n<10K
source_datasets:
- original
task_categories:
- text2text-generation
- other
task_ids: []
pretty_name: solar3
tags:
- grammatical-error-correction
- other-token-classification-of-text-errors
---
# Dataset Card for solar3
### Dataset Summary
Šolar* is a developmental corpus of 5485 school texts (e.g., essays), written by students in Slovenian secondary schools
(age 15-19) and pupils in the 7th-9th grade of primary school (13-15), with a small percentage also from the 6th grade.
Part of the corpus (1516 texts) is annotated with teachers' corrections using a system of labels described in the
document available at https://www.clarin.si/repository/xmlui/bitstream/handle/11356/1589/Smernice-za-oznacevanje-korpusa-Solar_V1.1.pdf (in Slovenian).
\(*) pronounce "š" as "sh" in "shoe".
By default the dataset is provided at **sentence-level** (125867 instances): each instance contains a source (the original) and a target (the corrected) sentence. Note that either the source or the target sentence in an instance may be missing - this usually happens when a source sentence is marked as redundant or when a new sentence is added by the teacher. Additionally, a source or a target sentence may appear in multiple instances - for example, this happens when one sentence gets divided into multiple sentences.
There is also an option to aggregate the instances at the **document-level** or **paragraph-level**
by explicitly providing the correct config:
```
datasets.load_dataset("cjvt/solar3", "paragraph_level")
datasets.load_dataset("cjvt/solar3", "document_level")
```
### Supported Tasks and Leaderboards
Error correction, e.g., at token/sequence level, as token/sequence classification or text2text generation.
### Languages
Slovenian.
## Dataset Structure
### Data Instances
A sample instance from the dataset:
```json
{
'id_doc': 'solar1',
'doc_title': 'KUS-G-slo-1-GO-E-2009-10001',
'is_manually_validated': True,
'src_tokens': ['”', 'Ne', 'da', 'sovražim', ',', 'da', 'ljubim', 'sem', 'na', 'svetu', '”', ',', 'izreče', 'Antigona', 'v', 'bran', 'kralju', 'Kreonu', 'za', 'svoje', 'nasprotno', 'mišljenje', 'pred', 'smrtjo', '.'],
'src_ling_annotations': {
# truncated for conciseness
'lemma': ['”', 'ne', 'da', 'sovražiti', ...],
'ana': ['mte:U', 'mte:L', 'mte:Vd', ...],
'msd': ['UPosTag=PUNCT', 'UPosTag=PART|Polarity=Neg', 'UPosTag=SCONJ', ...],
'ne_tag': [..., 'O', 'B-PER', 'O', ...],
'space_after': [False, True, True, False, ...]
},
'tgt_tokens': ['„', 'Ne', 'da', 'sovražim', ',', 'da', 'ljubim', 'sem', 'na', 'svetu', ',', '”', 'izreče', 'Antigona', 'sebi', 'v', 'bran', 'kralju', 'Kreonu', 'za', 'svoje', 'nasprotno', 'mišljenje', 'pred', 'smrtjo', '.'],
# omitted for conciseness, the format is the same as in 'src_ling_annotations'
'tgt_ling_annotations': {...},
'corrections': [
{'idx_src': [0], 'idx_tgt': [0], 'corr_types': ['Z/LOČ/nerazvrščeno']},
{'idx_src': [10, 11], 'idx_tgt': [10, 11], 'corr_types': ['Z/LOČ/nerazvrščeno']},
{'idx_src': [], 'idx_tgt': [14], 'corr_types': ['O/KAT/povratnost']}
]
}
```
The instance represents a correction in the document 'solar1' (`id_doc`), which were manually assigned/validated (`is_manually_validated`). More concretely, the source sentence contains three errors (as indicated by three elements in `corrections`):
- a punctuation change: '”' -> '„';
- a punctuation change: ['”', ','] -> [',', '”'] (i.e. comma inside the quote, not outside);
- addition of a new word: 'sebi'.
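The index-based alignment encoded by `idx_src`/`idx_tgt` can be unpacked with a few lines of plain Python. This is a minimal sketch; `sample` is the instance shown above, trimmed to the three fields used here.

```python
def correction_spans(instance):
    """Pair, for each annotated correction, the source tokens involved
    with the target tokens that replace them."""
    pairs = []
    for corr in instance["corrections"]:
        src = [instance["src_tokens"][i] for i in corr["idx_src"]]
        tgt = [instance["tgt_tokens"][i] for i in corr["idx_tgt"]]
        pairs.append((src, tgt, corr["corr_types"]))
    return pairs

# The sample instance from this card, trimmed to the fields used here.
sample = {
    "src_tokens": ['”', 'Ne', 'da', 'sovražim', ',', 'da', 'ljubim', 'sem',
                   'na', 'svetu', '”', ',', 'izreče', 'Antigona', 'v', 'bran',
                   'kralju', 'Kreonu', 'za', 'svoje', 'nasprotno', 'mišljenje',
                   'pred', 'smrtjo', '.'],
    "tgt_tokens": ['„', 'Ne', 'da', 'sovražim', ',', 'da', 'ljubim', 'sem',
                   'na', 'svetu', ',', '”', 'izreče', 'Antigona', 'sebi', 'v',
                   'bran', 'kralju', 'Kreonu', 'za', 'svoje', 'nasprotno',
                   'mišljenje', 'pred', 'smrtjo', '.'],
    "corrections": [
        {"idx_src": [0], "idx_tgt": [0], "corr_types": ["Z/LOČ/nerazvrščeno"]},
        {"idx_src": [10, 11], "idx_tgt": [10, 11],
         "corr_types": ["Z/LOČ/nerazvrščeno"]},
        {"idx_src": [], "idx_tgt": [14], "corr_types": ["O/KAT/povratnost"]},
    ],
}

for src, tgt, types in correction_spans(sample):
    print(src, "->", tgt, types)
```

An empty `idx_src` (as in the last correction) marks a pure insertion; an empty `idx_tgt` would mark a deletion.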
### Data Fields
- `id_doc`: a string containing the identifying name of the document in which the sentence appears;
- `doc_title`: a string containing the assigned document title;
- `is_manually_validated`: a bool indicating whether the document in which the sentence appears was reviewed by a teacher;
- `src_tokens`: words in the source sentence (`[]` if there is no source sentence);
- `src_ling_annotations`: a dict containing the lemmas (key `"lemma"`), morphosyntactic descriptions using UD (key `"msd"`) and JOS/MULTEXT-East (key `"ana"`) specification, named entity tags encoded using IOB2 (key `"ne_tag"`) for the source tokens (**automatically annotated**), and spacing information (key `"space_after"`), i.e. whether there is a whitespace after each token;
- `tgt_tokens`: words in the target sentence (`[]` if there is no target sentence);
- `tgt_ling_annotations`: a dict containing the lemmas (key `"lemma"`), morphosyntactic descriptions using UD (key `"msd"`) and JOS/MULTEXT-East (key `"ana"`) specification, named entity tags encoded using IOB2 (key `"ne_tag"`) for the target tokens (**automatically annotated**), and spacing information (key `"space_after"`), i.e. whether there is a whitespace after each token;
- `corrections`: a list of the corrections, with each correction represented with a dictionary, containing the indices of the source tokens involved (`idx_src`), target tokens involved (`idx_tgt`), and the categories of the corrections made (`corr_types`). Please note that there can be multiple assigned categories for one annotated correction, in which case `len(corr_types) > 1`.
## Dataset Creation
The Developmental corpus Šolar consists of 5,485 texts written by students in Slovenian secondary schools (age 15-19) and pupils in the 7th-9th grade of primary school (13-15), with a small percentage also from the 6th grade. The information on school (elementary or secondary), subject, level (grade or year), type of text, region, and date of production is provided for each text. School essays form the majority of the corpus while other material includes texts created during lessons, such as text recapitulations or descriptions, examples of formal applications, etc.
Part of the corpus (1516 texts) is annotated with teachers' corrections using a system of labels described in the attached document (in Slovenian). Teacher corrections were part of the original files and reflect real classroom situations of essay marking. Corrections were then inserted into texts by annotators and subsequently categorized. Due to the annotations being gathered in a practical (i.e. classroom) setting, only the most relevant errors may sometimes be annotated, e.g., not all incorrectly placed commas are annotated if there is a bigger issue in the text.
## Additional Information
### Dataset Curators
Špela Arhar Holdt; et al. (please see http://hdl.handle.net/11356/1589 for the full list)
### Licensing Information
CC BY-NC-SA 4.0.
### Citation Information
```
@misc{solar3,
title = {Developmental corpus {\v S}olar 3.0},
author = {Arhar Holdt, {\v S}pela and Rozman, Tadeja and Stritar Ku{\v c}uk, Mojca and Krek, Simon and Krap{\v s} Vodopivec, Irena and Stabej, Marko and Pori, Eva and Goli, Teja and Lavri{\v c}, Polona and Laskowski, Cyprian and Kocjan{\v c}i{\v c}, Polonca and Klemenc, Bojan and Krsnik, Luka and Kosem, Iztok},
url = {http://hdl.handle.net/11356/1589},
note = {Slovenian language resource repository {CLARIN}.{SI}},
year = {2022}
}
```
### Contributions
Thanks to [@matejklemen](https://github.com/matejklemen) for adding this dataset.
| 7,340 | [
[
-0.01555633544921875,
-0.045135498046875,
0.0259246826171875,
0.01678466796875,
-0.01023101806640625,
-0.00954437255859375,
-0.0310821533203125,
-0.01450347900390625,
0.01366424560546875,
0.037628173828125,
-0.041259765625,
-0.0731201171875,
-0.04083251953125,
... |
tomekkorbak/detoxify-pile-chunk3-150000-200000 | 2022-10-06T02:24:38.000Z | [
"region:us"
] | tomekkorbak | null | null | 0 | 29 | 2022-10-03T19:42:50 | Entry not found | 15 | [
[
-0.021392822265625,
-0.01494598388671875,
0.05718994140625,
0.028839111328125,
-0.0350341796875,
0.046539306640625,
0.052490234375,
0.00507354736328125,
0.051361083984375,
0.01702880859375,
-0.052093505859375,
-0.01494598388671875,
-0.06036376953125,
0.03790... |
tomekkorbak/detoxify-pile-chunk3-200000-250000 | 2022-10-06T01:24:15.000Z | [
"region:us"
] | tomekkorbak | null | null | 0 | 29 | 2022-10-03T19:43:14 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
venelin/inferes | 2022-10-08T01:25:47.000Z | [
"task_categories:text-classification",
"task_ids:natural-language-inference",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:es",
"license:cc-by-4.0",
"nli",
"spanish"... | venelin | null | null | 0 | 29 | 2022-10-07T16:57:37 | ---
annotations_creators:
- expert-generated
language:
- es
language_creators:
- expert-generated
license:
- cc-by-4.0
multilinguality:
- monolingual
pretty_name: InferES
size_categories:
- 1K<n<10K
source_datasets:
- original
tags:
- nli
- spanish
- negation
- coreference
task_categories:
- text-classification
task_ids:
- natural-language-inference
---
# Dataset Card for InferES
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://github.com/venelink/inferes
- **Repository:** https://github.com/venelink/inferes
- **Paper:** https://arxiv.org/abs/2210.03068
- **Point of Contact:** venelin [at] utexas [dot] edu
### Dataset Summary
Natural Language Inference dataset for European Spanish
Paper accepted and (to be) presented at COLING 2022
### Supported Tasks and Leaderboards
Natural Language Inference
### Languages
Spanish
## Dataset Structure
The dataset contains two text inputs (Premise and Hypothesis), a Label for three-way classification, and annotation data.
### Data Instances
train size = 6444
test size = 1612
### Data Fields
ID : the unique ID of the instance
Premise
Hypothesis
Label: cnt, ent, neutral
Topic: 1 (Picasso), 2 (Columbus), 3 (Videogames), 4 (Olympic games), 5 (EU), 6 (USSR)
Anno: ID of the annotators (in cases of undergrads or crowd - the ID of the group)
Anno Type: Generate, Rewrite, Crowd, and Automated
### Data Splits
train size = 6444
test size = 1612
The train/test split is stratified by a key that combines Label + Anno + Anno type
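The stratified split described above can be sketched with the standard library alone; the rows below use made-up annotator IDs and counts, not real corpus values.

```python
import random
from collections import defaultdict

# Toy rows mimicking the Label / Anno / Anno Type fields (values are made up).
rows = [
    {"Label": lab, "Anno": anno, "AnnoType": at}
    for lab in ("ent", "cnt", "neutral")
    for anno in ("G1", "G2")
    for at in ("Generate", "Rewrite")
    for _ in range(10)
]

def stratified_split(rows, test_frac=0.2, seed=0):
    """Split so every Label+Anno+AnnoType stratum keeps the same train/test ratio."""
    rng = random.Random(seed)
    strata = defaultdict(list)
    for r in rows:
        strata[(r["Label"], r["Anno"], r["AnnoType"])].append(r)
    train, test = [], []
    for members in strata.values():
        rng.shuffle(members)
        n_test = round(len(members) * test_frac)
        test.extend(members[:n_test])
        train.extend(members[n_test:])
    return train, test

train, test = stratified_split(rows)
```

Each stratum contributes the same 80/20 proportion to the two splits, which is what stratifying on the combined key buys over a plain random split.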
### Source Data
Wikipedia + text generated from "sentence generators" hired as part of the process
#### Who are the annotators?
Native speakers of European Spanish
### Personal and Sensitive Information
No personal or sensitive information is included.
Annotators are anonymized and only kept as "ID" for research purposes.
### Dataset Curators
Venelin Kovatchev
### Licensing Information
cc-by-4.0
### Citation Information
To be added after proceedings from COLING 2022 appear
### Contributions
Thanks to [@venelink](https://github.com/venelink) for adding this dataset.
| 2,858 | [
[
-0.0340576171875,
-0.04071044921875,
0.02557373046875,
0.031219482421875,
-0.0082855224609375,
-0.00536346435546875,
-0.0187530517578125,
-0.0357666015625,
0.03564453125,
0.04754638671875,
-0.05047607421875,
-0.05963134765625,
-0.0447998046875,
0.02154541015... |
projecte-aina/Parafraseja | 2023-09-13T12:37:01.000Z | [
"task_categories:text-classification",
"task_ids:multi-input-text-classification",
"annotations_creators:CLiC-UB",
"language_creators:found",
"multilinguality:monolingual",
"language:ca",
"license:cc-by-nc-nd-4.0",
"region:us"
] | projecte-aina | Parafraseja is a dataset of 16,584 pairs of sentences with a label that indicates if they are paraphrases or not. The original sentences were collected from TE-ca and STS-ca. For each sentence, an annotator wrote a sentence that was a paraphrase and another that was not. The guidelines of this annotation are available. | 1 | 29 | 2022-10-24T09:54:42 | ---
annotations_creators:
- CLiC-UB
language_creators:
- found
language:
- ca
license:
- cc-by-nc-nd-4.0
multilinguality:
- monolingual
pretty_name: Parafraseja
size_categories:
- ?
task_categories:
- text-classification
task_ids:
- multi-input-text-classification
---
# Dataset Card for Parafraseja
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Point of Contact:** [blanca.calvo@bsc.es](blanca.calvo@bsc.es)
### Dataset Summary
Parafraseja is a dataset of 21,984 pairs of sentences with a label that indicates if they are paraphrases or not. The original sentences were collected from [TE-ca](https://huggingface.co/datasets/projecte-aina/teca) and [STS-ca](https://huggingface.co/datasets/projecte-aina/sts-ca). For each sentence, an annotator wrote a sentence that was a paraphrase and another that was not. The guidelines of this annotation are available.
### Supported Tasks and Leaderboards
This dataset is mainly intended to train models for paraphrase detection.
### Languages
The dataset is in Catalan (`ca-ES`).
## Dataset Structure
The dataset consists of pairs of sentences labelled with "Parafrasis" or "No Parafrasis", in jsonl format.
### Data Instances
<pre>
{
"id": "te1_14977_1",
"source": "teca",
"original": "La 2a part consta de 23 cap\u00edtols, cadascun dels quals descriu un ocell diferent.",
"new": "La segona part consisteix en vint-i-tres cap\u00edtols, cada un dels quals descriu un ocell diferent.",
"label": "Parafrasis"
}
</pre>
### Data Fields
- original: original sentence
- new: new sentence, which could be a paraphrase or a non-paraphrase
- label: relation between original and new
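Because the corpus is distributed as jsonl (one record per line), reading it needs nothing beyond the standard library; the line below reuses the instance shown above, shortened.

```python
import json

# One jsonl line in the layout shown under Data Instances (shortened).
line = ('{"id": "te1_14977_1", "source": "teca", '
        '"original": "La 2a part consta de 23 cap\\u00edtols.", '
        '"new": "La segona part consisteix en vint-i-tres cap\\u00edtols.", '
        '"label": "Parafrasis"}')

record = json.loads(line)
pair = (record["original"], record["new"])
is_paraphrase = record["label"] == "Parafrasis"
```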
### Data Splits
* dev.json: 2,000 examples
* test.json: 4,000 examples
* train.json: 15,984 examples
## Dataset Creation
### Curation Rationale
We created this corpus to contribute to the development of language models in Catalan, a low-resource language.
### Source Data
The original sentences of this dataset came from the [STS-ca](https://huggingface.co/datasets/projecte-aina/sts-ca) and the [TE-ca](https://huggingface.co/datasets/projecte-aina/teca).
#### Initial Data Collection and Normalization
11,543 of the original sentences came from TE-ca, and 10,441 came from STS-ca.
#### Who are the source language producers?
TE-ca and STS-ca come from the [Catalan Textual Corpus](https://zenodo.org/record/4519349#.Y1Zs__uxXJF), which consists of several corpora gathered from web crawling and public corpora, and [Vilaweb](https://www.vilaweb.cat), a Catalan newswire.
### Annotations
The dataset is annotated with the label "Parafrasis" or "No Parafrasis" for each pair of sentences.
#### Annotation process
The annotation process was done by a single annotator and reviewed by another.
#### Who are the annotators?
The annotators were Catalan native speakers, with a background on linguistics.
### Personal and Sensitive Information
No personal or sensitive information included.
## Considerations for Using the Data
### Social Impact of Dataset
We hope this corpus contributes to the development of language models in Catalan, a low-resource language.
### Discussion of Biases
We are aware that this data might contain biases. We have not applied any steps to reduce their impact.
### Other Known Limitations
[N/A]
## Additional Information
### Dataset Curators
Text Mining Unit (TeMU) at the Barcelona Supercomputing Center (bsc-temu@bsc.es)
This work was funded by the [Departament de la Vicepresidència i de Polítiques Digitals i Territori de la Generalitat de Catalunya](https://politiquesdigitals.gencat.cat/ca/inici/index.html#googtrans(ca|en) within the framework of [Projecte AINA](https://politiquesdigitals.gencat.cat/ca/economia/catalonia-ai/aina).
### Licensing Information
[Creative Commons Attribution Non-commercial No-Derivatives 4.0 International](https://creativecommons.org/licenses/by-nc-nd/4.0/).
### Contributions
[N/A]
| 4,999 | [
[
-0.018768310546875,
-0.04583740234375,
0.0227508544921875,
0.03509521484375,
-0.033599853515625,
-0.0003514289855957031,
-0.0122528076171875,
-0.01349639892578125,
0.053680419921875,
0.06298828125,
-0.025177001953125,
-0.07305908203125,
-0.0487060546875,
0.0... | |
sileod/probability_words_nli | 2023-09-06T14:56:43.000Z | [
"task_categories:text-classification",
"task_categories:multiple-choice",
"task_categories:question-answering",
"task_ids:open-domain-qa",
"task_ids:multiple-choice-qa",
"task_ids:natural-language-inference",
"task_ids:multi-input-text-classification",
"annotations_creators:expert-generated",
"langu... | sileod | Probing neural language models for understanding of words of estimative probability | @inproceedings{sileo-moens-2023-probing,
title = "Probing neural language models for understanding of words of estimative probability",
author = "Sileo, Damien and
Moens, Marie-francine",
booktitle = "Proceedings of the 12th Joint Conference on Lexical and Computational Semantics (*SEM 2023)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.starsem-1.41",
doi = "10.18653/v1/2023.starsem-1.41",
pages = "469--476",
} | 3 | 29 | 2022-11-03T14:21:14 | ---
annotations_creators:
- expert-generated
language_creators:
- crowdsourced
language:
- en
license:
- apache-2.0
multilinguality:
- monolingual
pretty_name: 'probability_words_nli'
paperswithcode_id: probability-words-nli
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- text-classification
- multiple-choice
- question-answering
task_ids:
- open-domain-qa
- multiple-choice-qa
- natural-language-inference
- multi-input-text-classification
tags:
- wep
- words of estimative probability
- probability
- logical reasoning
- soft logic
- nli
- verbal probabilities
- natural-language-inference
- reasoning
- logic
train-eval-index:
- config: usnli
task: text-classification
task_id: multi-class-classification
splits:
train_split: train
eval_split: validation
col_mapping:
sentence1: context
sentence2: hypothesis
label: label
metrics:
- type: accuracy
name: Accuracy
- type: f1
name: F1 binary
- config: reasoning-1hop
task: text-classification
task_id: multi-class-classification
splits:
train_split: train
eval_split: validation
col_mapping:
sentence1: context
sentence2: hypothesis
label: label
metrics:
- type: accuracy
name: Accuracy
- type: f1
name: F1 binary
- config: reasoning-2hop
task: text-classification
task_id: multi-class-classification
splits:
train_split: train
eval_split: validation
col_mapping:
sentence1: context
sentence2: hypothesis
label: label
metrics:
- type: accuracy
name: Accuracy
- type: f1
name: F1 binary
---
# Dataset accompanying the "Probing neural language models for understanding of words of estimative probability" article
This dataset tests the capabilities of language models to correctly capture the meaning of words denoting probabilities (WEP, also called verbal probabilities), e.g. words like "probably", "maybe", "surely", "impossible".
We used probabilitic soft logic to combine probabilistic statements expressed with WEP (WEP-Reasoning) and we also used the UNLI dataset (https://nlp.jhu.edu/unli/) to directly check whether models can detect the WEP matching human-annotated probabilities according to [Fagen-Ulmschneider, 2018](https://github.com/wadefagen/datasets/tree/master/Perception-of-Probability-Words).
The dataset can be used as natural language inference data (context, premise, label) or multiple choice question answering (context,valid_hypothesis, invalid_hypothesis).
Code : [colab](https://colab.research.google.com/drive/10ILEWY2-J6Q1hT97cCB3eoHJwGSflKHp?usp=sharing)
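The two framings just described can be illustrated with a small standard-library sketch; the field names and example strings below are illustrative assumptions, not verbatim dataset columns.

```python
# Illustrative row; the real column names may differ (assumption).
row = {
    "context": "It is almost certain that the package was shipped on Monday.",
    "valid_hypothesis": "The package was very likely shipped on Monday.",
    "invalid_hypothesis": "The package was definitely not shipped on Monday.",
}

def as_nli(row):
    """View the row as two (premise, hypothesis, label) NLI instances."""
    return [
        (row["context"], row["valid_hypothesis"], 1),    # entailed
        (row["context"], row["invalid_hypothesis"], 0),  # not entailed
    ]

def as_multiple_choice(row):
    """View the same row as a two-way multiple-choice question.
    In real use the choices would be shuffled; here the valid one stays first."""
    return {"question": row["context"],
            "choices": [row["valid_hypothesis"], row["invalid_hypothesis"]],
            "label": 0}
```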
# Citation
https://arxiv.org/abs/2211.03358
```bib
@inproceedings{sileo-moens-2023-probing,
title = "Probing neural language models for understanding of words of estimative probability",
author = "Sileo, Damien and
Moens, Marie-francine",
booktitle = "Proceedings of the 12th Joint Conference on Lexical and Computational Semantics (*SEM 2023)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.starsem-1.41",
doi = "10.18653/v1/2023.starsem-1.41",
pages = "469--476",
}
```
| 3,335 | [
[
-0.0249481201171875,
-0.061981201171875,
0.031402587890625,
0.002655029296875,
-0.01187896728515625,
-0.0238189697265625,
-0.0308380126953125,
-0.026458740234375,
-0.0074920654296875,
0.0278778076171875,
-0.043853759765625,
-0.0413818359375,
-0.0271759033203125,... |
lawcompany/KLAID | 2022-11-17T07:09:10.000Z | [
"task_categories:text-classification",
"task_ids:multi-class-classification",
"multilinguality:monolingual",
"language:ko",
"license:cc-by-nc-nd-4.0",
"region:us"
] | lawcompany | KLAID (Korean Legal Artificial Intelligence Datasets) is a dataset for the development of Korean legal artificial intelligence technology. This time we offer 1 task, which is legal judgment prediction(LJP). | null | 7 | 29 | 2022-11-13T05:21:05 | ---
pretty_name: KLAID
viewer: true
language: ko
multilinguality:
- monolingual
license: cc-by-nc-nd-4.0
task_categories:
- text-classification
task_ids:
- multi-class-classification
---
# Dataset Card for KLAID
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Other Inquiries](#other_inquiries)
- [Licensing Information](#licensing-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://klaid.net](https://klaid.net)
- **Leaderboard:** [https://klaid.net](https://klaid.net)
- **Point of Contact:** [klaid@lawcompany.co.kr](klaid@lawcompany.co.kr)
### Dataset Summary
Korean Legal Artificial Intelligence Datasets(KLAID) is a dataset for the development of Korean legal artificial intelligence technology. This time we offer 1 task, which is legal judgment prediction(LJP).
### Supported Tasks and Leaderboards
Legal Judgment Prediction(LJP)
### Languages
`korean`
### How to use
```python
from datasets import load_dataset
# legal judgment prediction
dataset = load_dataset("lawcompany/KLAID", 'ljp')
```
## Dataset Structure
### Data Instances
#### ljp
An example of 'train' looks as follows.
```
{
'fact': '피고인은 2022. 11. 14. 혈중알콜농도 0.123%의 술에 취한 상태로 승용차를 운전하였다.',
'laws_service': '도로교통법 제148조의2 제3항 제2호,도로교통법 제44조 제1항',
'laws_service_id': 7
}
```
Other References
You can refer to each label's 'laws service content' [here](https://storage.googleapis.com/klaid/ljp/dataset/ljp_laws_service_content.json).
'Laws service content' is the statute([source](https://www.law.go.kr/)) corresponding to each label.
### Data Fields
#### ljp
+ "fact": a `string` feature
+ "laws_service": a `string` feature
+ "laws_service_id": a classification label, with 177 legal judgment values
[More Information Needed](https://klaid.net/tasks-1)
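In the example instance, `laws_service` packs multiple statute citations into one comma-separated string; a minimal sketch of recovering the individual statutes (assuming, as in that instance, that citations themselves contain no commas):

```python
# Instance in the layout shown above.
example = {
    "fact": "피고인은 2022. 11. 14. 혈중알콜농도 0.123%의 술에 취한 상태로 승용차를 운전하였다.",
    "laws_service": "도로교통법 제148조의2 제3항 제2호,도로교통법 제44조 제1항",
    "laws_service_id": 7,
}

# Split the concatenated citation string into the individual statutes.
statutes = example["laws_service"].split(",")
```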
### Data Splits
#### ljp
+ train: 161,192
## Dataset Creation
### Curation Rationale
The legal domain is arguably one of the fields that most requires expert knowledge to comprehend. Natural language processing has many requirements, and we focus on those concerning datasets. As a gold standard is necessary for the testing and training of a neural model, we hope that our dataset release will help advance natural language processing in the legal domain, especially for the Korean legal system.
### Source Data
These are datasets based on Korean legal case data.
### Personal and Sensitive Information
Due to the nature of legal case data, personal and sensitive information may be included. Therefore, in order to prevent problems that may arise from personal and sensitive information, the legal cases were de-identified.
## Considerations for Using the Data
### Other Known Limitations
We plan to upload more data and update them as some of the court records may be revised from now on, based on the ever-evolving legal system.
## Additional Information
### Other Inquiries
[klaid@lawcompany.co.kr](klaid@lawcompany.co.kr)
### Licensing Information
Copyright 2022-present [Law&Company Co. Ltd.](https://career.lawcompany.co.kr/)
Licensed under the CC-BY-NC-ND-4.0
### Contributions
[More Information Needed] | 3,868 | [
[
-0.013214111328125,
-0.0305023193359375,
0.0345458984375,
0.0194854736328125,
-0.0347900390625,
-0.0213775634765625,
-0.022979736328125,
-0.032470703125,
0.0171356201171875,
0.06390380859375,
-0.03289794921875,
-0.0716552734375,
-0.0374755859375,
-0.01244354... |
xusenlin/cmeee | 2022-12-07T14:24:00.000Z | [
"region:us"
] | xusenlin | null | null | 3 | 29 | 2022-12-07T14:16:08 | ---
dataset_info:
features:
- name: text
dtype: string
- name: entities
list:
- name: id
dtype: int64
- name: entity
dtype: string
- name: start_offset
dtype: int64
- name: end_offset
dtype: int64
- name: label
dtype: string
splits:
- name: train
num_bytes: 5289666
num_examples: 15000
- name: test
num_bytes: 461472
num_examples: 3000
- name: validation
num_bytes: 1752698
num_examples: 5000
download_size: 3359069
dataset_size: 7503836
---
# CMeEE: Chinese Medical Named Entity Recognition Dataset
Field descriptions
+ `text`: the text
+ `entities`: the entities contained in the text
+ `id`: the entity `id`
+ `entity`: the string corresponding to the entity
+ `start_offset`: the start position of the entity
+ `end_offset`: the position one past the end of the entity
+ `label`: the label of the entity
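Given the offset convention above (`end_offset` is exclusive), a minimal standard-library sketch of turning character offsets into per-character BIO tags; the sample sentence and the `sym` label are made up, not taken from the dataset.

```python
def offsets_to_bio(text, entities):
    """Turn CMeEE-style character offsets into per-character BIO tags
    (Chinese NER is commonly tagged at the character level)."""
    tags = ["O"] * len(text)
    for ent in entities:
        s, e = ent["start_offset"], ent["end_offset"]  # e is exclusive
        tags[s] = "B-" + ent["label"]
        for i in range(s + 1, e):
            tags[i] = "I-" + ent["label"]
    return tags

text = "患者出现发热症状"  # made-up sample: "the patient developed fever symptoms"
entities = [{"id": 0, "entity": "发热",
             "start_offset": 4, "end_offset": 6, "label": "sym"}]
tags = offsets_to_bio(text, entities)
```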
| 736 | [
[
-0.053802490234375,
-0.05718994140625,
0.033447265625,
0.021820068359375,
-0.041839599609375,
0.0027332305908203125,
0.0067596435546875,
-0.02569580078125,
0.049163818359375,
0.01654052734375,
-0.029632568359375,
-0.0609130859375,
-0.05596923828125,
-0.01023... |
ruanchaves/hatebr | 2023-04-13T13:39:40.000Z | [
"task_categories:text-classification",
"task_ids:hate-speech-detection",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:pt",
"instagram",
"doi:10.57967/hf/0274",
"region:us"
] | ruanchaves | HateBR is the first large-scale expert annotated corpus of Brazilian Instagram comments for hate speech and offensive language detection on the web and social media. The HateBR corpus was collected from Brazilian Instagram comments of politicians and manually annotated by specialists. It is composed of 7,000 documents annotated according to three different layers: a binary classification (offensive versus non-offensive comments), offensiveness-level (highly, moderately, and slightly offensive messages), and nine hate speech groups (xenophobia, racism, homophobia, sexism, religious intolerance, partyism, apology for the dictatorship, antisemitism, and fatphobia). Each comment was annotated by three different annotators and achieved high inter-annotator agreement. Furthermore, baseline experiments were implemented reaching 85% of F1-score outperforming the current literature models for the Portuguese language. Accordingly, we hope that the proposed expertly annotated corpus may foster research on hate speech and offensive language detection in the Natural Language Processing area. | @inproceedings{vargas2022hatebr,
title={HateBR: A Large Expert Annotated Corpus of Brazilian Instagram Comments for Offensive Language and Hate Speech Detection},
author={Vargas, Francielle and Carvalho, Isabelle and de G{\'o}es, Fabiana Rodrigues and Pardo, Thiago and Benevenuto, Fabr{\'\i}cio},
booktitle={Proceedings of the Thirteenth Language Resources and Evaluation Conference},
pages={7174--7183},
year={2022}
} | 6 | 29 | 2023-01-15T11:11:33 | ---
annotations_creators:
- expert-generated
language:
- pt
language_creators:
- found
license: []
multilinguality:
- monolingual
pretty_name: HateBR - Offensive Language and Hate Speech Dataset in Brazilian Portuguese
size_categories:
- 1K<n<10K
source_datasets:
- original
tags:
- instagram
task_categories:
- text-classification
task_ids:
- hate-speech-detection
---
# Dataset Card for HateBR - Offensive Language and Hate Speech Dataset in Brazilian Portuguese
## Dataset Description
- **Homepage:** http://143.107.183.175:14581/
- **Repository:** https://github.com/franciellevargas/HateBR
- **Paper:** https://aclanthology.org/2022.lrec-1.777/
- **Leaderboard:**
- **Point of Contact:** https://franciellevargas.github.io/
### Dataset Summary
HateBR is the first large-scale expert annotated corpus of Brazilian Instagram comments for hate speech and offensive language detection on the web and social media. The HateBR corpus was collected from Brazilian Instagram comments of politicians and manually annotated by specialists. It is composed of 7,000 documents annotated according to three different layers: a binary classification (offensive versus non-offensive comments), offensiveness-level (highly, moderately, and slightly offensive messages), and nine hate speech groups (xenophobia, racism, homophobia, sexism, religious intolerance, partyism, apology for the dictatorship, antisemitism, and fatphobia). Each comment was annotated by three different annotators and achieved high inter-annotator agreement. Furthermore, baseline experiments were implemented reaching 85% of F1-score outperforming the current literature models for the Portuguese language. Accordingly, we hope that the proposed expertly annotated corpus may foster research on hate speech and offensive language detection in the Natural Language Processing area.
**Relevant Links:**
* [**Demo: Brasil Sem Ódio**](http://143.107.183.175:14581/)
* [**MOL - Multilingual Offensive Lexicon Annotated with Contextual Information**](https://github.com/franciellevargas/MOL)
### Supported Tasks and Leaderboards
Hate Speech Detection
### Languages
Portuguese
## Dataset Structure
### Data Instances
```
{'instagram_comments': 'Hipocrita!!',
'offensive_language': True,
'offensiveness_levels': 2,
'antisemitism': False,
'apology_for_the_dictatorship': False,
'fatphobia': False,
'homophobia': False,
'partyism': False,
'racism': False,
'religious_intolerance': False,
'sexism': False,
'xenophobia': False,
'offensive_&_non-hate_speech': True,
'non-offensive': False,
'specialist_1_hate_speech': False,
'specialist_2_hate_speech': False,
'specialist_3_hate_speech': False
}
```
### Data Fields
* **instagram_comments**: Instagram comments.
* **offensive_language**: A classification of comments as either offensive (True) or non-offensive (False).
* **offensiveness_levels**: A classification of comments based on their level of offensiveness, including highly offensive (3), moderately offensive (2), slightly offensive (1) and non-offensive (0).
* **antisemitism**: A classification of whether or not the comment contains antisemitic language.
* **apology_for_the_dictatorship**: A classification of whether or not the comment praises the military dictatorship period in Brazil.
* **fatphobia**: A classification of whether or not the comment contains language that promotes fatphobia.
* **homophobia**: A classification of whether or not the comment contains language that promotes homophobia.
* **partyism**: A classification of whether or not the comment contains language that promotes partyism.
* **racism**: A classification of whether or not the comment contains racist language.
* **religious_intolerance**: A classification of whether or not the comment contains language that promotes religious intolerance.
* **sexism**: A classification of whether or not the comment contains sexist language.
* **xenophobia**: A classification of whether or not the comment contains language that promotes xenophobia.
* **offensive_&_non-hate_speech**: A classification of whether or not the comment is offensive but does not contain hate speech.
* **specialist_1_hate_speech**: A classification of whether or not the comment was annotated by the first specialist as hate speech.
* **specialist_2_hate_speech**: A classification of whether or not the comment was annotated by the second specialist as hate speech.
* **specialist_3_hate_speech**: A classification of whether or not the comment was annotated by the third specialist as hate speech.
### Data Splits
The original authors of the dataset did not propose a standard data split. To address this, we use the [multi-label data stratification technique](http://scikit.ml/stratification.html) implemented in the scikit-multilearn library to propose a train-validation-test split. This method considers all classes for hate speech in the data and attempts to balance the representation of each class in the split.
| name |train|validation|test|
|---------|----:|----:|----:|
|hatebr|4480|1120|1400|
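The split itself was produced with scikit-multilearn's iterative stratification; as a dependency-free illustration of the underlying idea, here is a much-simplified greedy variant run on toy, randomly generated labels (not the real HateBR data).

```python
import random
from collections import Counter

random.seed(0)
LABELS = ["racism", "sexism", "homophobia", "partyism"]
# 200 toy rows; each label is present independently with probability 0.3.
rows = [frozenset(l for l in LABELS if random.random() < 0.3) for _ in range(200)]

def greedy_multilabel_split(rows, test_frac=0.2):
    """Greedy stand-in for iterative stratification: each row goes to the
    split that still 'wants' its labels (and overall size) the most."""
    total = Counter(l for r in rows for l in r)
    want = {}
    for name, frac in (("test", test_frac), ("train", 1 - test_frac)):
        want[name] = {l: c * frac for l, c in total.items()}
        want[name][None] = len(rows) * frac  # overall-size target
    out = {"test": [], "train": []}
    # Rows carrying the rarest label are placed first, as in the real algorithm.
    for row in sorted(rows, key=lambda r: min((total[l] for l in r), default=len(rows))):
        side = max(out, key=lambda s: sum(want[s][l] for l in row) + want[s][None] / len(rows))
        out[side].append(row)
        want[side][None] -= 1
        for l in row:
            want[side][l] -= 1
    return out["train"], out["test"]

train, test = greedy_multilabel_split(rows)
```

This keeps every label represented on both sides at roughly the target proportion; the real iterative algorithm does the same bookkeeping more carefully, label by label.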
## Considerations for Using the Data
### Discussion of Biases
Please refer to [the HateBR paper](https://aclanthology.org/2022.lrec-1.777/) for a discussion of biases.
### Licensing Information
The HateBR dataset, including all its components, is provided strictly for academic and research purposes. The use of the dataset for any commercial or non-academic purpose is expressly prohibited without the prior written consent of [SINCH](https://www.sinch.com/).
### Citation Information
```
@inproceedings{vargas2022hatebr,
title={HateBR: A Large Expert Annotated Corpus of Brazilian Instagram Comments for Offensive Language and Hate Speech Detection},
author={Vargas, Francielle and Carvalho, Isabelle and de G{\'o}es, Fabiana Rodrigues and Pardo, Thiago and Benevenuto, Fabr{\'\i}cio},
booktitle={Proceedings of the Thirteenth Language Resources and Evaluation Conference},
pages={7174--7183},
year={2022}
}
```
### Contributions
Thanks to [@ruanchaves](https://github.com/ruanchaves) for adding this dataset. | 6,094 | [
[
-0.04052734375,
-0.06414794921875,
-0.01399993896484375,
0.02850341796875,
-0.016357421875,
0.0185089111328125,
-0.0305328369140625,
-0.04425048828125,
0.0179901123046875,
0.0256195068359375,
-0.0169219970703125,
-0.05950927734375,
-0.05950927734375,
0.01346... |
TurkuNLP/squad_v2_fi | 2023-10-10T19:55:56.000Z | [
"task_categories:question-answering",
"language:fi",
"license:cc-by-sa-4.0",
"region:us"
] | TurkuNLP | combines the 100,000 questions in SQuAD1.1 with over 50,000 unanswerable questions written adversarially by crowdworkers
to look similar to answerable ones. To do well on SQuAD2.0, systems must not only answer questions when possible, but
also determine when no answer is supported by the paragraph and abstain from answering. | null | 0 | 29 | 2023-01-30T09:03:27 | ---
license: cc-by-sa-4.0
task_categories:
- question-answering
language:
- fi
---
### Dataset Summary
This is a Finnish SQuAD question answering dataset. It is a DeepL-based machine translation of the English SQuAD2.0 dataset, which combines the 100,000 questions in
SQuAD1.1 with over 50,000 unanswerable questions written adversarially by crowdworkers to look similar to answerable ones.
To do well on SQuAD2.0, systems must not only answer questions when possible, but also determine when no answer is supported
by the paragraph and abstain from answering.
### Data Fields
The data fields are the same among all splits.
#### Example Data
```
{
"title": "Victoria_(Australia)",
"paragraphs": [
{
"qas": [
{
"question": "Millainen talous Victoriassa on?",
"id": "570d2417fed7b91900d45c3d",
"answers": [
{
"text": "monipuolinen",
"answer_start": 26,
"texts": [
"monipuolinen"
],
"starts": [
26
]
},
{
"text": "hyvin monipuolinen",
"answer_start": 20,
"texts": [
"hyvin ",
"monipuolinen"
],
"starts": [
20,
26
]
},
{
"text": "hyvin monipuolinen",
"answer_start": 20,
"texts": [
"hyvin ",
"monipuolinen"
],
"starts": [
20,
26
]
}
],
"is_impossible": false
}
],
"context": "Victorian talous on hyvin monipuolinen: palvelualat, kuten rahoitus- ja kiinteistöpalvelut, terveydenhuolto, koulutus, tukkukauppa, vähittäiskauppa, majoitus- ja ravitsemistoiminta ja teollisuus muodostavat suurimman osan työllisyydestä. Victorian osavaltion bruttokansantuote on Australian toiseksi suurin, vaikka Victoria on asukaskohtaisen bruttokansantuotteen osalta neljäntenä, koska sen kaivostoiminta on vähäistä. Kulttuurin alalla Melbournessa on useita museoita, taidegallerioita ja teattereita, ja sitä kutsutaan myös \"Australian urheilupääkaupungiksi\". Melbournen krikettikenttä (Melbourne Cricket Ground) on Australian suurin stadion, ja siellä järjestettiin vuoden 1956 kesäolympialaiset ja vuoden 2006 Kansainyhteisön kisat. Kenttää pidetään myös australialaisen kriketin ja australialaisen jalkapallon \"henkisenä kotina\", ja se isännöi vuosittain Australian jalkapalloliigan (AFL) suurta loppuottelua, johon osallistuu yleensä yli 95 000 ihmistä. Victoriaan kuuluu kahdeksan julkista yliopistoa, joista vanhin, Melbournen yliopisto, on perustettu vuonna 1853."
}
]
}
```
#### squad_v2
- `id`: a `string` feature.
- `title`: a `string` feature.
- `context`: a `string` feature.
- `question`: a `string` feature.
- `answers`: a dictionary feature containing:
- `text`: a `string` feature.
- `answer_start`: a `int32` feature.
- `texts`: a `string` feature.
- `starts`: a `int32` feature.
### Data Splits
| name | train | validation |
| -------- | -----: | ---------: |
| squad_v2 | 130319 | 11873 |
### Evaluation Results
Results from fine-tuning [TurkuNLP/bert-base-finnish-cased-v1](https://huggingface.co/TurkuNLP/bert-base-finnish-cased-v1) for extractive question answering.
| dataset | F1 |
| -------------------- | ----: |
| TurkuNLP/squad_v2_fi | 73.66 |
| ilmariky/SQuAD_v2_fi | 61.87 |
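The F1 figures above follow the SQuAD convention of token-overlap F1; a much-simplified sketch of that metric (lower-casing and whitespace tokenisation only, which is only an approximation for Finnish):

```python
from collections import Counter

def token_f1(prediction, gold):
    """Token-overlap F1 in the spirit of the SQuAD evaluation script
    (simplified: no language-specific normalisation)."""
    p, g = prediction.lower().split(), gold.lower().split()
    if not p or not g:                      # SQuAD2.0: empty means "no answer"
        return float(p == g)
    overlap = sum((Counter(p) & Counter(g)).values())
    if overlap == 0:
        return 0.0
    precision, recall = overlap / len(p), overlap / len(g)
    return 2 * precision * recall / (precision + recall)
```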
### Considerations for Using the Data
Due to DeepL terms and conditions, this dataset **must not be used for any machine translation work**, namely machine translation
system development and evaluation of any kind. In general, we wish you do not pair the original English data with the translations
except when working on research unrelated to machine translation, so as not to infringe on the terms and conditions.
### Licensing Information
Contents of this repository are distributed under the
[Creative Commons Attribution-ShareAlike 4.0 International License (CC BY-SA 4.0)](https://creativecommons.org/licenses/by-sa/4.0/).
Copyright of the dataset contents belongs to the original copyright holders. | 4,427 | [
[
-0.037689208984375,
-0.05859375,
0.0276336669921875,
0.0094146728515625,
-0.017913818359375,
-0.007579803466796875,
-0.0109710693359375,
-0.0161895751953125,
0.033538818359375,
0.045562744140625,
-0.07489013671875,
-0.052642822265625,
-0.037322998046875,
0.0... |
metaeval/strategy-qa | 2023-02-09T14:56:34.000Z | [
"region:us"
] | metaeval | null | null | 2 | 29 | 2023-02-09T14:55:42 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
breadlicker45/youtube-comments-180k | 2023-02-24T15:15:32.000Z | [
"region:us"
] | breadlicker45 | null | null | 1 | 29 | 2023-02-24T15:14:47 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
urialon/summ_screen_test | 2023-02-28T16:39:55.000Z | [
"region:us"
] | urialon | null | null | 0 | 29 | 2023-02-28T16:39:49 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
CATIE-AQ/frenchQA | 2023-10-18T08:35:40.000Z | [
"task_categories:question-answering",
"size_categories:100K<n<1M",
"language:fr",
"license:cc-by-4.0",
"doi:10.57967/hf/0862",
"region:us"
] | CATIE-AQ | One French QA Dataset to rule them all, One French QA Dataset to find them, One French QA Dataset to bring them all, and in the darkness bind them. | \ | 0 | 29 | 2023-03-14T14:32:36 | ---
task_categories:
- question-answering
language:
- fr
size_categories:
- 100K<n<1M
license: cc-by-4.0
---
# Dataset information
Dataset concatenating all QA datasets with context available in French and open-source.
In addition, an augmented version of these datasets has been added (same context but different questions to create data in SQuADv2 format).
In total, there are 221,348 training rows, **910** validation rows and 6,376 test rows (the first 3,188 test rows are in SQuAD v1 format, the remaining 3,188 in SQuAD v2 format).
In practice, due to the restrictive license of the FQuAD 1.0 dataset, we can only share **179,886** of the 221,348 training rows, and cannot share the test dataset.
Our methodology is described in a blog post available in [English](https://blog.vaniila.ai/en/Question_answering/) or [French](https://blog.vaniila.ai/QA/).
# Usage
```
from datasets import load_dataset
dataset = load_dataset("CATIE-AQ/frenchQA")
```
```
dataset
DatasetDict({
train: Dataset({
features: ['context', 'question', 'answer', 'answer_start', 'dataset'],
num_rows: 179886
})
validation: Dataset({
features: ['context', 'question', 'answer', 'answer_start', 'dataset'],
num_rows: 910
})
})
```
# Dataset
## Dataset details
| Dataset | Format | Train split | Dev split | Test split | Available in frenchQA |
| ----------- | ----------- | ----------- | ----------- | ----------- | ------------------------ |
| [piaf](https://www.data.gouv.fr/en/datasets/piaf-le-dataset-francophone-de-questions-reponses/)| SQuAD 1.0 | 9 224 Q & A | X | X | Yes |
| piaf_v2| SQuAD 2.0 | 9 224 Q & A | X | X | Yes |
| [fquad](https://fquad.illuin.tech/)| SQuAD 1.0 | 20 731 Q & A | 3 188 Q & A (is not used for training, but as a test dataset) | 2 189 Q & A (not freely available)| No due to the license |
| fquad_v2 | SQuAD 2.0 | 20 731 Q & A | 3 188 Q & A (is not used for training, but as a test dataset) | X | No due to the license |
| [lincoln/newsquadfr](https://huggingface.co/datasets/lincoln/newsquadfr) | SQuAD 1.0 | 1 650 Q & A | 455 Q & A | X | Yes |
| lincoln/newsquadfr_v2 | SQuAD 2.0 | 1 650 Q & A | 455 Q & A | X | Yes |
| [pragnakalp/squad_v2_french_translated](https://huggingface.co/datasets/pragnakalp/squad_v2_french_translated)| SQuAD 2.0 | 79 069 Q & A | X | X | Yes |
| pragnakalp/squad_v2_french_translated_v2| SQuAD 2.0 | 79 069 Q & A | X | X | Yes |
## Columns
```
dataset_train = dataset['train'].to_pandas()
dataset_train.head()
context question answer answer_start dataset
0 Beyoncé Giselle Knowles-Carter (/ biːˈjɒnseɪ /... Quand Beyonce a-t-elle commencé à devenir popu... à la fin des années 1990 269 pragnakalp/squad_v2_french_translated
1 Beyoncé Giselle Knowles-Carter (/ biːˈjɒnseɪ /... Quand Beyonce a-t-elle quitté Destiny's Child ... 2003 549 pragnakalp/squad_v2_french_translated
2 Beyoncé Giselle Knowles-Carter (/ biːˈjɒnseɪ /... Qui a dirigé le groupe Destiny's Child ? Mathew Knowles 376 pragnakalp/squad_v2_french_translated
3 Beyoncé Giselle Knowles-Carter (/ biːˈjɒnseɪ /... Quand Beyoncé a-t-elle sorti Dangerously in Lo... 2003 549 pragnakalp/squad_v2_french_translated
4 Beyoncé Giselle Knowles-Carter (/ biːˈjɒnseɪ /... Combien de Grammy Awards Beyoncé a-t-elle gagn... cinq 629 pragnakalp/squad_v2_french_translated
```
- the `context` column contains the context
- the `question` column contains the question
- the `answer` column contains the answer (has been replaced by `no_answer` for rows in SQuAD v2 format)
- the `answer_start` column contains the start position of the answer in the context (has been replaced by `-1` for rows in SQuAD v2 format)
- the `dataset` column identifies the row's original dataset (if you wish to apply filters to it, rows in SQuAD v2 format are indicated with the suffix `_v2` in the dataset name)
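As an illustrative sketch (toy rows, not real frenchQA data), the `_v2` suffix of the `dataset` column can be used to separate the SQuAD v2-format rows, which carry the `no_answer` / `-1` sentinel values described above:

```python
# Toy rows mimicking the frenchQA columns; values are made up for illustration.
rows = [
    {"question": "q1", "answer": "a1", "answer_start": 10, "dataset": "piaf"},
    {"question": "q2", "answer": "no_answer", "answer_start": -1, "dataset": "piaf_v2"},
    {"question": "q3", "answer": "a3", "answer_start": 42, "dataset": "lincoln/newsquadfr"},
]

# SQuAD v2-format rows are marked by the `_v2` suffix in the `dataset` column.
squad_v2 = [r for r in rows if r["dataset"].endswith("_v2")]
squad_v1 = [r for r in rows if not r["dataset"].endswith("_v2")]

print(len(squad_v1), len(squad_v2))
```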
## Split
- `train` corresponds to the concatenation of the training dataset from `pragnakalp/squad_v2_french_translated` + `lincoln/newsquadfr` + `PIAFv1.2` + the augmented version of each dataset in SQuADv2 format (no shuffle has been performed)
- `validation` corresponds to the concatenation of the newsquadfr validation dataset + this same dataset expanded in SQuAD v2 format (= newsquadfr_v2) (no shuffle performed)
# Question type statistics
The question type distribution is as follows:
| Type of question | Frequency in percent |
| ----------- | ----------- |
|What (que) |55.02|
|Who (qui) |15.96|
|How much (combien)|7.92|
|When (quand) |6.90|
|Where (où) |3.15|
|How (comment) |3.76|
|What (quoi) |2.60|
|Why (pourquoi) |1.25|
|Other |3.44|
The number of questions containing a negation, e.g. "What was the name of Chopin's first music teacher who was not an amateur musician?", is estimated at 3.55% of the total questions.
For information, the distribution of the complete dataset (containing FQUAD 1.0 and FQUAD 1.0 data in SQUAD 2.0 format) is as follows:
| Type of question | Frequency in percent |
| ----------- | ----------- |
|What (que) |55.12|
|Who (qui) |16.24|
|How much (combien)|7.56|
|When (quand) |6.85|
|Where (où) |3.98|
|How (comment) |3.76|
|What (quoi) |2.94|
|Why (pourquoi) |1.41|
|Other |2.14|
The number of questions containing a negation, e.g. "What was the name of Chopin's first music teacher who was not an amateur musician?", is estimated at 3.07% of the total questions.
# Citation
```
@misc {centre_aquitain_des_technologies_de_l'information_et_electroniques_2023,
author = { {Centre Aquitain des Technologies de l'Information et Electroniques} },
title = { frenchQA (Revision 6249cd5) },
year = 2023,
url = { https://huggingface.co/datasets/CATIE-AQ/frenchQA },
doi = { 10.57967/hf/0862 },
publisher = { Hugging Face }
}
```
# License
[cc-by-4.0](https://creativecommons.org/licenses/by/4.0/deed.en) | 6,334 | [
[
-0.0423583984375,
-0.049652099609375,
0.00511932373046875,
0.033905029296875,
-0.0008392333984375,
0.00007849931716918945,
-0.0033779144287109375,
-0.0169830322265625,
0.0253448486328125,
0.0185089111328125,
-0.06927490234375,
-0.03839111328125,
-0.0265045166015... |
MentalFox/GPTeacher | 2023-04-10T11:12:29.000Z | [
"region:us"
] | MentalFox | null | null | 1 | 29 | 2023-04-10T11:11:17 | # GPTeacher
A collection of modular datasets generated by GPT-4: General-Instruct, Roleplay-Instruct, Code-Instruct, and Toolformer.
The General-Instruct used many of the same seed prompts as Alpaca, but also had specific examples of things we didn't see much of with Alpaca, such as chain-of-thought reasoning, logic puzzles, wordplay, and (light) role playing, and it was asked to include the reasoning and thought steps behind example responses where appropriate, among other things.
The General-Instruct dataset is about 20,000 examples with just deduplication.
We are still cleaning the Code-Instruct dataset; it will be up when it's cleaned.
Each dataset is split into 5 separate datasets based on similarity-scored cleaning: one with simple dedupe only, and then a range of <60% to <90% similarity-cleaned sets for each.
They are all made to be compliant with Alpaca's dataset format, i.e. each has an instruction, input, and output field, which should make it easier to use the same fine-tune script and process as Alpaca.
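As a minimal sketch of that format (the example values here are hypothetical), an Alpaca-compatible record is simply a mapping with exactly those three fields:

```python
# Hypothetical example record in the Alpaca-compatible format described above.
record = {
    "instruction": "Summarize the following text.",
    "input": "The quick brown fox jumps over the lazy dog.",
    "output": "A fox jumps over a dog.",
}

# A fine-tuning script expecting the Alpaca format can rely on these keys.
print(sorted(record))
```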
Documentation on the Toolformer section is coming soon; we generated a dataset that uses a set of predefined tools, including search, python, terminal/shell, wikipedia, wolfram, and others. More info on the prompt format for inference soon.
| 1,245 | [
[
-0.044219970703125,
-0.061492919921875,
0.042633056640625,
0.003971099853515625,
-0.0157470703125,
-0.033294677734375,
0.00847625732421875,
-0.01381683349609375,
-0.00013720989227294922,
0.0391845703125,
-0.057891845703125,
-0.03936767578125,
-0.041473388671875,... |
sbmaruf/forai_ml_masakhane_mafand | 2023-05-25T00:11:20.000Z | [
"task_categories:translation",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:translation",
"multilinguality:multilingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:en",
"language:fr",
"language:am",
"language:bm",
"lang... | sbmaruf | MAFAND-MT is the largest MT benchmark for African languages in the news domain, covering 21 languages. The languages covered are:
- Amharic
- Bambara
- Ghomala
- Ewe
- Fon
- Hausa
- Igbo
- Kinyarwanda
- Luganda
- Luo
- Mossi
- Nigerian-Pidgin
- Chichewa
- Shona
- Swahili
- Setswana
- Twi
- Wolof
- Xhosa
- Yoruba
- Zulu
The train/validation/test sets are available for 16 languages, and validation/test set for amh, kin, nya, sna, and xho
For more details see https://aclanthology.org/2022.naacl-main.223/ | @inproceedings{adelani-etal-2022-thousand,
title = "A Few Thousand Translations Go a Long Way! Leveraging Pre-trained Models for {A}frican News Translation",
author = "Adelani, David and
Alabi, Jesujoba and
Fan, Angela and
Kreutzer, Julia and
Shen, Xiaoyu and
Reid, Machel and
Ruiter, Dana and
Klakow, Dietrich and
Nabende, Peter and
Chang, Ernie and
Gwadabe, Tajuddeen and
Sackey, Freshia and
Dossou, Bonaventure F. P. and
Emezue, Chris and
Leong, Colin and
Beukman, Michael and
Muhammad, Shamsuddeen and
Jarso, Guyo and
Yousuf, Oreen and
Niyongabo Rubungo, Andre and
Hacheme, Gilles and
Wairagala, Eric Peter and
Nasir, Muhammad Umair and
Ajibade, Benjamin and
Ajayi, Tunde and
Gitau, Yvonne and
Abbott, Jade and
Ahmed, Mohamed and
Ochieng, Millicent and
Aremu, Anuoluwapo and
Ogayo, Perez and
Mukiibi, Jonathan and
Ouoba Kabore, Fatoumata and
Kalipe, Godson and
Mbaye, Derguene and
Tapo, Allahsera Auguste and
Memdjokam Koagne, Victoire and
Munkoh-Buabeng, Edwin and
Wagner, Valencia and
Abdulmumin, Idris and
Awokoya, Ayodele and
Buzaaba, Happy and
Sibanda, Blessing and
Bukula, Andiswa and
Manthalu, Sam",
booktitle = "Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
month = jul,
year = "2022",
address = "Seattle, United States",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.naacl-main.223",
doi = "10.18653/v1/2022.naacl-main.223",
pages = "3053--3070",
abstract = "Recent advances in the pre-training for language models leverage large-scale datasets to create multilingual models. However, low-resource languages are mostly left out in these datasets. This is primarily because many widely spoken languages that are not well represented on the web and therefore excluded from the large-scale crawls for datasets. Furthermore, downstream users of these models are restricted to the selection of languages originally chosen for pre-training. This work investigates how to optimally leverage existing pre-trained models to create low-resource translation systems for 16 African languages. We focus on two questions: 1) How can pre-trained models be used for languages not included in the initial pretraining? and 2) How can the resulting translation models effectively transfer to new domains? To answer these questions, we create a novel African news corpus covering 16 languages, of which eight languages are not part of any existing evaluation dataset. We demonstrate that the most effective strategy for transferring both additional languages and additional domains is to leverage small quantities of high-quality translation data to fine-tune large pre-trained models.",
} | 1 | 29 | 2023-04-22T17:43:25 | ---
annotations_creators:
- expert-generated
language:
- en
- fr
- am
- bm
- bbj
- ee
- fon
- ha
- ig
- lg
- mos
- ny
- pcm
- rw
- sn
- sw
- tn
- tw
- wo
- xh
- yo
- zu
language_creators:
- expert-generated
license:
- cc-by-nc-4.0
multilinguality:
- translation
- multilingual
pretty_name: mafand
size_categories:
- 1K<n<10K
source_datasets:
- original
tags:
- news
- mafand
- masakhane
task_categories:
- translation
task_ids: []
---
An unofficial version of https://huggingface.co/datasets/masakhane/mafand
We created a different data loader for a @forai_ml project. | 568 | [
[
-0.036834716796875,
-0.034942626953125,
-0.0117034912109375,
0.023712158203125,
-0.003467559814453125,
-0.0102081298828125,
0.00553131103515625,
-0.0246429443359375,
0.0654296875,
0.045196533203125,
-0.0703125,
-0.0316162109375,
-0.0199737548828125,
0.004962... |
cgulse/alpaca-cleaned-tr | 2023-05-18T19:59:11.000Z | [
"size_categories:10K<n<100K",
"language:tr",
"license:cc-by-4.0",
"alpaca",
"instruction-finetuning",
"region:us"
] | cgulse | null | null | 0 | 29 | 2023-05-18T19:50:32 | ---
license: cc-by-4.0
language:
- tr
tags:
- alpaca
- instruction-finetuning
pretty_name: Turkish Alpaca-cleaned
size_categories:
- 10K<n<100K
---
Alpaca Cleaned dataset, machine translated with facebook/nllb-200-3.3B.
Languages:
- Turkish
[
-0.033355712890625,
-0.06781005859375,
-0.0027313232421875,
0.03955078125,
-0.03631591796875,
-0.01511383056640625,
-0.00550079345703125,
-0.035308837890625,
0.04766845703125,
0.09051513671875,
-0.043792724609375,
-0.0299835205078125,
-0.04302978515625,
0.02... |
ma2za/many_emotions | 2023-06-10T02:18:01.000Z | [
"task_categories:text-classification",
"multilinguality:multilingual",
"size_categories:100K<n<1M",
"source_datasets:dair-ai/emotion",
"source_datasets:daily_dialog",
"source_datasets:go_emotions",
"language:en",
"license:apache-2.0",
"emotion",
"region:us"
] | ma2za | null | null | 1 | 29 | 2023-05-20T21:59:41 | ---
license: apache-2.0
task_categories:
- text-classification
multilinguality:
- multilingual
source_datasets:
- dair-ai/emotion
- daily_dialog
- go_emotions
language:
- en
size_categories:
- 100K<n<1M
tags:
- emotion
---
# Dataset Card for "many_emotions"
## Dataset Description
- **Homepage:**
### Dataset Summary
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
The data fields are:
- `id`: unique identifier
- `text`: a `string` feature.
- `label`: a classification label, with possible values including `anger` (0), `fear` (1), `joy` (2), `love` (
3), `sadness` (4), `surprise` (5), `neutral` (6).
- `license`: inherited license from source dataset
- `dataset`: source dataset
- `language`: text language
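The integer-to-name mapping listed above can be sketched as a small helper (hypothetical, mirroring the stated label order):

```python
# Label names in the order of their integer ids, as listed above.
EMOTION_LABELS = ["anger", "fear", "joy", "love", "sadness", "surprise", "neutral"]

def label_name(label_id: int) -> str:
    """Return the emotion name for an integer classification label."""
    return EMOTION_LABELS[label_id]

print(label_name(2))  # prints "joy"
```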
### Data Splits
The dataset has 2 configurations:
- raw: with 5 configuration for each language
- split: with configurations train, validation, test
## Dataset Creation
### Curation Rationale
The raw configuration contains duplicates.
In the "split" configuration there may be identical rows that nevertheless carry different labels.
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
## Additional Information
### Licensing Information
Each row has its own license which is inherited from the source dataset. | 1,340 | [
[
-0.049346923828125,
-0.0248565673828125,
0.00010949373245239258,
0.036376953125,
-0.033477783203125,
-0.004154205322265625,
-0.0164031982421875,
-0.016845703125,
0.0203857421875,
0.0247802734375,
-0.06536865234375,
-0.06488037109375,
-0.03662109375,
0.020355... |
Patt/HellaSwag_thai | 2023-06-13T23:15:58.000Z | [
"arxiv:1907.04307",
"region:us"
] | Patt | null | null | 0 | 29 | 2023-06-01T10:42:56 | ---
{}
---
# Dataset Card for HellaSwag_TH
### Dataset Description
This dataset is a Thai-translated version of [hellaswag](https://huggingface.co/datasets/hellaswag), produced with Google Translate; the [Multilingual Universal Sentence Encoder](https://arxiv.org/abs/1907.04307) is used to score the Thai translations.
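As a hedged sketch of the scoring idea (placeholder vectors stand in for actual Multilingual USE embeddings, which are not loaded here), the translation score is the cosine similarity between the source and translated sentence embeddings:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Placeholder embeddings; a real pipeline would embed the English sentence
# and its Thai translation with the Multilingual Universal Sentence Encoder.
en_vec = [0.1, 0.3, 0.5]
th_vec = [0.1, 0.4, 0.5]

score = cosine_similarity(en_vec, th_vec)
print(score > 0.9)  # near-identical vectors score close to 1.0
```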
### Languages
- EN
- TH
| 529 | [
[
-0.01226806640625,
-0.034088134765625,
0.006622314453125,
0.03955078125,
-0.0635986328125,
-0.004058837890625,
-0.021820068359375,
-0.01255035400390625,
0.03240966796875,
0.04833984375,
-0.04925537109375,
-0.08203125,
-0.050201416015625,
0.01727294921875,
... |
FabioArdi/yugioh_images | 2023-06-11T17:16:55.000Z | [
"region:us"
] | FabioArdi | null | null | 0 | 29 | 2023-06-11T17:14:16 | ---
dataset_info:
features:
- name: image
dtype: image
- name: name
dtype: string
- name: frameType
dtype: string
splits:
- name: train
num_bytes: 660338115.545
num_examples: 12405
download_size: 656146541
dataset_size: 660338115.545
---
# Dataset Card for "yugioh_images"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 441 | [
[
-0.040313720703125,
-0.020263671875,
0.01287078857421875,
0.0137481689453125,
-0.03155517578125,
-0.025482177734375,
0.028167724609375,
-0.02374267578125,
0.07232666015625,
0.0447998046875,
-0.0582275390625,
-0.044342041015625,
-0.027252197265625,
-0.0142288... |