id stringlengths 2 115 | author stringlengths 2 42 | last_modified timestamp[us, tz=UTC] | downloads int64 0 8.87M | likes int64 0 3.84k | paperswithcode_id stringlengths 2 45 | tags list | lastModified timestamp[us, tz=UTC] | createdAt stringlengths 24 24 | key stringclasses 1 value | created timestamp[us] | card stringlengths 1 1.01M | embedding list | library_name stringclasses 21 values | pipeline_tag stringclasses 27 values | mask_token null | card_data null | widget_data null | model_index null | config null | transformers_info null | spaces null | safetensors null | transformersInfo null | modelId stringlengths 5 111 | embeddings list |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
KaiLv/UDR_ComE | KaiLv | 2023-06-21T12:35:45Z | 63 | 0 | null | [
"region:us"
] | 2023-06-21T12:35:45Z | 2023-06-21T12:35:33.000Z | 2023-06-21T12:35:33 | ---
dataset_info:
features:
- name: idx
dtype: int64
- name: label
dtype: string
- name: question
dtype: string
- name: choices
dtype: string
- name: len_question
dtype: int64
- name: max_len_choices
dtype: int64
splits:
- name: train
num_bytes: 4855852
num_examples: 9996
- name: test
num_bytes: 468814
num_examples: 1000
- name: debug
num_bytes: 2432484
num_examples: 5000
download_size: 3748196
dataset_size: 7757150
---
# Dataset Card for "UDR_ComE"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.5918541550636292,
-0.2729794979095459,
0.1521987020969391,
0.24080586433410645,
-0.1679343730211258,
0.16964386403560638,
0.3285645842552185,
-0.3354159891605377,
0.5427363514900208,
0.5806723237037659,
-0.7832156419754028,
-0.7331210970878601,
-0.5779953002929688,
-0.09904621541500092,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
yestaehyung/llama_fashiongen | yestaehyung | 2023-07-21T05:45:59Z | 63 | 0 | null | [
"license:openrail",
"region:us"
] | 2023-07-21T05:45:59Z | 2023-07-21T05:43:17.000Z | 2023-07-21T05:43:17 | ---
license: openrail
---
| [
-0.1285335123538971,
-0.1861683875322342,
0.6529128551483154,
0.49436232447624207,
-0.19319400191307068,
0.23607441782951355,
0.36072009801864624,
0.05056373029947281,
0.5793656706809998,
0.7400146722793579,
-0.650810182094574,
-0.23784008622169495,
-0.7102247476577759,
-0.0478255338966846... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
TitanMLData/arxiv_qa | TitanMLData | 2023-08-04T11:38:53Z | 63 | 1 | null | [
"task_categories:question-answering",
"task_categories:text2text-generation",
"size_categories:10K<n<100K",
"language:en",
"region:us"
] | 2023-08-04T11:38:53Z | 2023-08-04T11:01:34.000Z | 2023-08-04T11:01:34 | ---
task_categories:
- question-answering
- text2text-generation
language:
- en
size_categories:
- 10K<n<100K
---
# Arxiv Paper Generative Question Answering
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset was made using ChatGPT (text-davinci-003) to generate question/answer pairs from arXiv papers in [this dataset](https://huggingface.co/datasets/ccdv/arxiv-summarization).
### Data Fields
* TextID: references the data row (paper) in the arXiv summarization dataset
* Question: question based on the text
* Response: answer
* Text: full text with the paper as 'context:' and the question appended as 'question:'. Used for generative question answering using language modelling.
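For illustration, the `Text` field's layout can be reproduced with a small helper (a sketch only; the exact separator between the two parts is an assumption, not taken from the dataset):

```python
def build_qa_text(context: str, question: str) -> str:
    # Hypothetical helper: joins the paper body and the question in the
    # "context: ... question: ..." layout described above. The newline
    # separator is assumed for illustration.
    return f"context: {context}\nquestion: {question}"

prompt = build_qa_text(
    "We propose a new attention mechanism ...",
    "What does the paper propose?",
)
```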
### Data Splits
This dataset contains two splits: _train_ and _validation_.
| Dataset Split | Number of Instances |
| ------------- | --------------------|
| Train | 32,392 |
| Validation | 6,479 | | [
-0.43301281332969666,
-0.8011700510978699,
0.12849757075309753,
0.21883328258991241,
-0.22404325008392334,
-0.021161777898669243,
0.32777729630470276,
0.08461154997348785,
-0.09548109024763107,
0.3922978341579437,
-0.5364254117012024,
-0.4955486059188843,
-0.32922783493995667,
0.0965130105... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Universal-NER/Pile-NER-definition | Universal-NER | 2023-08-07T17:08:06Z | 63 | 11 | null | [
"size_categories:10K<n<100K",
"language:en",
"region:us"
] | 2023-08-07T17:08:06Z | 2023-08-07T15:09:19.000Z | 2023-08-07T15:09:19 | ---
language:
- en
size_categories:
- 10K<n<100K
---
# Intro
Pile-NER-definition is a set of GPT-generated data for named entity recognition using the definition-based data construction prompt. It was collected by prompting gpt-3.5-turbo-0301 and augmented by negative sampling. Check our [project page](https://universal-ner.github.io/) for more information.
# License
Attribution-NonCommercial 4.0 International | [
-0.9564976096153259,
-1.0299264192581177,
0.22598513960838318,
-0.2796351909637451,
-0.3896785080432892,
-0.013998541980981827,
0.5630303025245667,
-0.15222756564617157,
0.5269217491149902,
0.6901344060897827,
-0.42434290051460266,
-0.29036301374435425,
-0.6012020111083984,
0.3037745356559... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
shunk031/MSCOCO | shunk031 | 2023-10-30T14:06:39Z | 63 | 0 | null | [
"task_categories:image-segmentation",
"task_categories:object-detection",
"task_categories:other",
"task_ids:instance-segmentation",
"task_ids:semantic-segmentation",
"task_ids:panoptic-segmentation",
"annotations_creators:crowdsourced",
"language_creators:found",
"multilinguality:monolingual",
"s... | 2023-10-30T14:06:39Z | 2023-09-09T08:15:05.000Z | 2023-09-09T08:15:05 | ---
annotations_creators:
- crowdsourced
language:
- en
language_creators:
- found
license:
- cc-by-4.0
multilinguality:
- monolingual
pretty_name: MSCOCO
size_categories: []
source_datasets:
- original
tags:
- image-captioning
- object-detection
- keypoint-detection
- stuff-segmentation
- panoptic-segmentation
task_categories:
- image-segmentation
- object-detection
- other
task_ids:
- instance-segmentation
- semantic-segmentation
- panoptic-segmentation
---
# Dataset Card for MSCOCO
[](https://github.com/shunk031/huggingface-datasets_MSCOCO/actions/workflows/ci.yaml)
## Table of Contents
- [Dataset Card Creation Guide](#dataset-card-creation-guide)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
- [Who are the source language producers?](#who-are-the-source-language-producers)
- [Annotations](#annotations)
- [Annotation process](#annotation-process)
- [Who are the annotators?](#who-are-the-annotators)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://cocodataset.org/#home
- **Repository:** https://github.com/shunk031/huggingface-datasets_MSCOCO
- **Paper (Preprint):** https://arxiv.org/abs/1405.0312
- **Paper (ECCV2014):** https://link.springer.com/chapter/10.1007/978-3-319-10602-1_48
- **Leaderboard (Detection):** https://cocodataset.org/#detection-leaderboard
- **Leaderboard (Keypoint):** https://cocodataset.org/#keypoints-leaderboard
- **Leaderboard (Stuff):** https://cocodataset.org/#stuff-leaderboard
- **Leaderboard (Panoptic):** https://cocodataset.org/#panoptic-leaderboard
- **Leaderboard (Captioning):** https://cocodataset.org/#captions-leaderboard
- **Point of Contact:** info@cocodataset.org
### Dataset Summary
> COCO is a large-scale object detection, segmentation, and captioning dataset. COCO has several features:
> - Object segmentation
> - Recognition in context
> - Superpixel stuff segmentation
> - 330K images (>200K labeled)
> - 1.5 million object instances
> - 80 object categories
> - 91 stuff categories
> - 5 captions per image
> - 250,000 people with keypoints
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
#### 2014
- captioning dataset
```python
import datasets as ds
dataset = ds.load_dataset(
"shunk031/MSCOCO",
year=2014,
coco_task="captions",
)
```
- instances dataset
```python
import datasets as ds
dataset = ds.load_dataset(
"shunk031/MSCOCO",
year=2014,
coco_task="instances",
decode_rle=True,  # decode Run-length Encoding (RLE) annotations into binary masks
)
```
- person keypoints dataset
```python
import datasets as ds
dataset = ds.load_dataset(
"shunk031/MSCOCO",
year=2014,
coco_task="person_keypoints",
decode_rle=True,  # decode Run-length Encoding (RLE) annotations into binary masks
)
```
#### 2017
- captioning dataset
```python
import datasets as ds
dataset = ds.load_dataset(
"shunk031/MSCOCO",
year=2017,
coco_task="captions",
)
```
- instances dataset
```python
import datasets as ds
dataset = ds.load_dataset(
"shunk031/MSCOCO",
year=2017,
coco_task="instances",
decode_rle=True,  # decode Run-length Encoding (RLE) annotations into binary masks
)
```
- person keypoints dataset
```python
import datasets as ds
dataset = ds.load_dataset(
"shunk031/MSCOCO",
year=2017,
coco_task="person_keypoints",
decode_rle=True,  # decode Run-length Encoding (RLE) annotations into binary masks
)
```
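To illustrate what `decode_rle=True` does conceptually (this is a sketch of the idea, not the loader's actual implementation): uncompressed COCO-style RLE stores alternating run lengths of background and foreground pixels in column-major order, which can be expanded into a binary mask like so:

```python
import numpy as np

def decode_uncompressed_rle(counts: list[int], height: int, width: int) -> np.ndarray:
    # Uncompressed COCO RLE: `counts` alternates run lengths of 0s and 1s,
    # starting with background (0), laid out in column-major (Fortran) order.
    flat = np.zeros(height * width, dtype=np.uint8)
    pos, val = 0, 0
    for run in counts:
        flat[pos:pos + run] = val
        pos += run
        val = 1 - val
    return flat.reshape((height, width), order="F")

# Runs of 2 background, 3 foreground, 4 background pixels in a 3x3 image.
mask = decode_uncompressed_rle([2, 3, 4], 3, 3)
```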
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
[More Information Needed]
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
[More Information Needed]
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
> The annotations in this dataset along with this website belong to the COCO Consortium and are licensed under a [Creative Commons Attribution 4.0 License](https://creativecommons.org/licenses/by/4.0/legalcode).
>
> ## Images
> The COCO Consortium does not own the copyright of the images. Use of the images must abide by the Flickr Terms of Use. The users of the images accept full responsibility for the use of the dataset, including but not limited to the use of any copies of copyrighted images that they may create from the dataset.
>
> ## Software
> Copyright (c) 2015, COCO Consortium. All rights reserved. Redistribution and use software in source and binary form, with or without modification, are permitted provided that the following conditions are met:
> - Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.
> - Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.
> - Neither the name of the COCO Consortium nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission.
>
> THIS SOFTWARE AND ANNOTATIONS ARE PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
### Citation Information
```bibtex
@inproceedings{lin2014microsoft,
title={Microsoft coco: Common objects in context},
author={Lin, Tsung-Yi and Maire, Michael and Belongie, Serge and Hays, James and Perona, Pietro and Ramanan, Deva and Doll{\'a}r, Piotr and Zitnick, C Lawrence},
booktitle={Computer Vision--ECCV 2014: 13th European Conference, Zurich, Switzerland, September 6-12, 2014, Proceedings, Part V 13},
pages={740--755},
year={2014},
organization={Springer}
}
```
### Contributions
Thanks to [COCO Consortium](https://cocodataset.org/#people) for creating this dataset.
| [
-0.4793685972690582,
-0.4470706582069397,
0.08543168753385544,
0.4198998808860779,
-0.37931400537490845,
0.1788490116596222,
-0.19748373329639435,
-0.6878641247749329,
0.45000332593917847,
0.6004695296287537,
-0.6653218865394592,
-0.919448971748352,
-0.6084274649620056,
0.2500803470611572,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
fformosa/LSUN_bedroom_VQA | fformosa | 2023-10-17T15:45:26Z | 63 | 0 | null | [
"task_categories:visual-question-answering",
"task_categories:text-to-image",
"task_categories:question-answering",
"size_categories:100K<n<1M",
"region:us"
] | 2023-10-17T15:45:26Z | 2023-10-04T22:27:05.000Z | 2023-10-04T22:27:05 | ---
dataset_info:
features:
- name: image
dtype: image
- name: image_id
dtype: int64
- name: attributes
sequence: string
- name: size
sequence: int64
- name: proportion
dtype: float64
splits:
- name: train
num_bytes: 4858959064
num_examples: 303125
download_size: 4766067864
dataset_size: 4858959064
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
size_categories:
- 100K<n<1M
task_categories:
- visual-question-answering
- text-to-image
- question-answering
---
# Dataset Card for "LSUN_bedroom_VQA_feliu"
Images are a subset of the LSUN-Bedroom dataset.
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
The attributes are binary answers to the following questions:
- Is the floor visible in the image?
- Does the room have a window?
- Is there more than one bed?
- Does the room have natural light?
- Is there a carpet in the floor?
- Is it a classy room?
- Is it a hotel room?
- Is there at least one person in the room?
- Are there more than one people in the room?
- Is it an expensive room?
- Does the room have a painting the wall?
- Is the room nicely decorated?
- Does the room have a mirror?
- Are the room lights on?
- Are the bedsheets made?
- Is there a visible closet?
- Is the room tidy?
- Is there an animal in the room?
- Is the wall painted in red?
- Is the wall painted in blue?
- Is the wall painted in white?
- Is the wall painted in a dark color?
- Is the wall painted in green?
- Are the bedsheets red?
- Are the bedsheets blue?
- Are the bedsheets white?
- Are the bedsheets dark?
- Are the bedsheets green?
- Is there a kid in the room?
- Is the bed big enough for two people?
- Does the room have a telephone?
- Does the room seem cold?
- Are there plants visible from the window?
- Are there decorative plants inside the room?
- Does the room have any photo frame as decoration?
- Does the room have a TV?
- Does the room have a radio in it?
- Is there any luggage in the room?
- Is there a visible door?
- Is there a radiator in the room?
- Is the bathroom visible in the image?
- Does the bed have a quilt?
- Does the picture have a watermark?
- Is the bed covered in a duvet?
- Is there more than one bedside table?
- Does the bedside table have a nightstand light?
- Does the bed have a mosquito net?
- Does the room access a private terrace?
- Is the floor wooden?
- Are the walls made of wood? | [
-0.734480082988739,
-0.6980525255203247,
0.5397661924362183,
0.4313783049583435,
-0.14301960170269012,
0.0725732073187828,
0.12690043449401855,
-0.44945234060287476,
0.35709741711616516,
0.554625391960144,
-0.8225787878036499,
-0.5823211669921875,
-0.3424556851387024,
0.11127300560474396,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
yuchenlin/i-Mind2Web | yuchenlin | 2023-10-13T09:41:53Z | 63 | 0 | null | [
"language:en",
"license:mit",
"region:us"
] | 2023-10-13T09:41:53Z | 2023-10-10T21:45:04.000Z | 2023-10-10T21:45:04 | ---
license: mit
language:
- en
configs:
- config_name: default
data_files:
- split: test_mini
path: K=10/test_mini.json
- split: test_all
path: K=10/test_all.json
- split: dev
path: K=10/dev.json
- split: dev_5
path: K=10/K=5_dev.json
- split: train
path: K=10/train.json
- config_name: seq2seq
data_files:
- split: dev
path: seq2seq/dev.jsonl
- split: train
path: seq2seq/train.jsonl
---
null
| [
-0.026467621326446533,
-0.3506655991077423,
0.7836646437644958,
0.39425942301750183,
-0.632760226726532,
0.09949568659067154,
0.1961529403924942,
0.13657933473587036,
0.8436062335968018,
0.753307044506073,
-0.5074777007102966,
-0.5343067646026611,
-0.5218369960784912,
0.08986153453588486,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
hmao/vt_multiapi_v0 | hmao | 2023-10-19T16:52:49Z | 63 | 0 | null | [
"region:us"
] | 2023-10-19T16:52:49Z | 2023-10-14T04:51:56.000Z | 2023-10-14T04:51:56 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: fncall
sequence: string
- name: generated_question
dtype: string
splits:
- name: train
num_bytes: 25028
num_examples: 70
download_size: 12622
dataset_size: 25028
---
# Dataset Card for "vt_multiapi_v0"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.6632557511329651,
-0.10543548315763474,
0.2847442626953125,
0.22301998734474182,
-0.3445664346218109,
0.04022742435336113,
0.5450664162635803,
-0.0711902379989624,
0.9382192492485046,
0.4622576832771301,
-0.8461530208587646,
-0.6657190322875977,
-0.4564041495323181,
-0.3178167939186096,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
raphael0202/ingredient-detection-layout-dataset | raphael0202 | 2023-11-01T16:22:36Z | 63 | 0 | null | [
"region:us"
] | 2023-11-01T16:22:36Z | 2023-10-29T12:49:48.000Z | 2023-10-29T12:49:48 | ---
dataset_info:
features:
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-ING
'2': I-ING
- name: words
sequence: string
- name: bboxes
sequence:
sequence: int64
- name: image
dtype: image
- name: text
dtype: string
- name: offsets
sequence:
sequence: int64
- name: meta
struct:
- name: barcode
dtype: string
- name: image_id
dtype: string
- name: url
dtype: string
- name: id
dtype: string
- name: in_test_split
dtype: bool
splits:
- name: train
num_bytes: 2059533770.875
num_examples: 5065
- name: test
num_bytes: 244591039.0
num_examples: 556
download_size: 2271205424
dataset_size: 2304124809.875
---
# Dataset Card for "ingredient-detection-layout-dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.5540382862091064,
-0.2715654969215393,
0.44677242636680603,
0.21394643187522888,
0.12902256846427917,
-0.033685069531202316,
0.30979543924331665,
-0.007712502498179674,
1.045686960220337,
0.5796298980712891,
-0.9526976943016052,
-0.9938387274742126,
-0.5191282033920288,
-0.2483284622430... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
AlignmentLab-AI/llama-index | AlignmentLab-AI | 2023-10-29T22:15:31Z | 63 | 0 | null | [
"region:us"
] | 2023-10-29T22:15:31Z | 2023-10-29T22:15:21.000Z | 2023-10-29T22:15:21 | Entry not found | [
-0.32276472449302673,
-0.22568407654762268,
0.8622258901596069,
0.4346148371696472,
-0.5282984972000122,
0.7012965679168701,
0.7915717363357544,
0.07618629932403564,
0.7746022939682007,
0.2563222646713257,
-0.785281777381897,
-0.22573848068714142,
-0.9104482531547546,
0.5715669393539429,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
StarkWizard/cairo-instruct | StarkWizard | 2023-11-03T15:42:55Z | 63 | 1 | null | [
"region:us"
] | 2023-11-03T15:42:55Z | 2023-11-03T15:42:51.000Z | 2023-11-03T15:42:51 | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 761761
num_examples: 3226
- name: eval
num_bytes: 821
num_examples: 5
download_size: 304106
dataset_size: 762582
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: eval
path: data/eval-*
---
# Dataset Card for "cairo-instruct"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.8266158103942871,
-0.08251191675662994,
0.1843603253364563,
0.11448512971401215,
-0.14182813465595245,
0.06414172053337097,
0.24078470468521118,
-0.2888081669807434,
0.8627822995185852,
0.35306012630462646,
-0.6100553870201111,
-0.8286137580871582,
-0.5650513172149658,
-0.41044449806213... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
atmallen/qm_bob_grader_last_1.0e | atmallen | 2023-11-16T18:22:54Z | 63 | 0 | null | [
"region:us"
] | 2023-11-16T18:22:54Z | 2023-11-16T03:26:05.000Z | 2023-11-16T03:26:05 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
dataset_info:
features:
- name: alice_label
dtype: bool
- name: bob_label
dtype: bool
- name: difficulty
dtype: int64
- name: statement
dtype: string
- name: choices
sequence: string
- name: character
dtype: string
- name: label
dtype:
class_label:
names:
'0': 'False'
'1': 'True'
splits:
- name: train
num_bytes: 14970044.0
num_examples: 200000
- name: validation
num_bytes: 1501418.0
num_examples: 20000
- name: test
num_bytes: 1502170.0
num_examples: 20000
download_size: 0
dataset_size: 17973632.0
---
# Dataset Card for "qm_bob__grader_last_1.0e"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.4936648905277252,
-0.21785420179367065,
0.21046195924282074,
0.0989588126540184,
-0.1582818627357483,
0.16363362967967987,
0.49477651715278625,
0.17781981825828552,
0.6832224726676941,
0.5518375039100647,
-0.5899110436439514,
-1.011790156364441,
-0.5489227771759033,
-0.3999853730201721,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
lgrobol/openminuscule | lgrobol | 2022-10-23T09:28:36Z | 62 | 0 | null | [
"task_categories:text-generation",
"task_ids:language-modeling",
"language_creators:crowdsourced",
"multilinguality:multilingual",
"size_categories:100k<n<1M",
"source_datasets:original",
"language:en",
"language:fr",
"license:cc-by-4.0",
"region:us"
] | 2022-10-23T09:28:36Z | 2022-03-02T23:29:22.000Z | 2022-03-02T23:29:22 | ---
language_creators:
- crowdsourced
language:
- en
- fr
license:
- cc-by-4.0
multilinguality:
- multilingual
size_categories:
- 100k<n<1M
source_datasets:
- original
task_categories:
- text-generation
task_ids:
- language-modeling
pretty_name: Open Minuscule
language_bcp47:
- en-GB
- fr-FR
---
Open Minuscule
==============
A little small wee corpus to train little small wee models.
## Dataset Description
### Dataset Summary
This is a raw text corpus, mainly intended for testing purposes.
### Languages
- French
- English
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Source Data
It is a mashup including the following [CC-BY-SA 4.0](https://creativecommons.org/licenses/by-sa/4.0/) licensed texts:
- [*Rayons émis par les composés de l'uranium et du
thorium*](https://fr.wikisource.org/wiki/Rayons_%C3%A9mis_par_les_compos%C3%A9s_de_l%E2%80%99uranium_et_du_thorium),
Maria Skłodowska Curie
- [*Frankenstein, or the Modern
Prometheus*](https://en.wikisource.org/wiki/Frankenstein,_or_the_Modern_Prometheus_(Revised_Edition,_1831)),
Mary Wollstonecraft Shelley
- [*Les maîtres sonneurs*](https://fr.wikisource.org/wiki/Les_Ma%C3%AEtres_sonneurs), George Sand
It also includes the text of *Sketch of The Analytical Engine Invented by Charles Babbage With
notes upon the Memoir by the Translator* by Luigi Menabrea and Ada Lovelace, which to the best of
my knowledge should be public domain.
## Considerations for Using the Data
This really should not be used for anything but testing purposes.
## Licence
This corpus is available under the Creative Commons Attribution-ShareAlike 4.0 License | [
-0.3531780540943146,
-0.3842467665672302,
0.7112419605255127,
-0.18132248520851135,
-0.12424998730421066,
-0.4542321264743805,
-0.11971399933099747,
-0.3021429181098938,
0.4803225100040436,
0.5945271253585815,
-0.4689176678657532,
-0.28302130103111267,
-0.437028706073761,
0.246967405080795... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
AI-Growth-Lab/patents_claims_1.5m_traim_test | AI-Growth-Lab | 2022-07-31T20:48:51Z | 62 | 1 | null | [
"region:us"
] | 2022-07-31T20:48:51Z | 2022-07-31T20:01:19.000Z | 2022-07-31T20:01:19 | Entry not found | [
-0.32276487350463867,
-0.22568444907665253,
0.8622263073921204,
0.43461570143699646,
-0.5282988548278809,
0.7012969255447388,
0.7915717363357544,
0.07618642598390579,
0.7746027112007141,
0.25632190704345703,
-0.7852815389633179,
-0.22573848068714142,
-0.910447895526886,
0.5715675354003906,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
mrmoor/cyber-threat-intelligence | mrmoor | 2022-10-23T09:12:59Z | 62 | 3 | null | [
"license:unknown",
"region:us"
] | 2022-10-23T09:12:59Z | 2022-09-14T20:13:26.000Z | 2022-09-14T20:13:26 | ---
license: unknown
---
| [
-0.1285339742898941,
-0.18616800010204315,
0.6529127359390259,
0.4943626821041107,
-0.1931934952735901,
0.2360742688179016,
0.360720157623291,
0.05056300014257431,
0.5793654322624207,
0.7400140166282654,
-0.6508105993270874,
-0.23783984780311584,
-0.7102248668670654,
-0.047826044261455536,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
tomekkorbak/detoxify-pile-chunk3-3850000-3900000 | tomekkorbak | 2022-10-06T04:23:42Z | 62 | 0 | null | [
"region:us"
] | 2022-10-06T04:23:42Z | 2022-10-06T04:23:33.000Z | 2022-10-06T04:23:33 | Entry not found | [
-0.32276487350463867,
-0.22568444907665253,
0.8622263073921204,
0.43461570143699646,
-0.5282988548278809,
0.7012969255447388,
0.7915717363357544,
0.07618642598390579,
0.7746027112007141,
0.25632190704345703,
-0.7852815389633179,
-0.22573848068714142,
-0.910447895526886,
0.5715675354003906,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
JEFFDSA/main | JEFFDSA | 2022-10-26T06:59:02Z | 62 | 0 | null | [
"region:us"
] | 2022-10-26T06:59:02Z | 2022-10-26T06:58:34.000Z | 2022-10-26T06:58:34 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622263669967651,
0.43461522459983826,
-0.52829909324646,
0.7012971639633179,
0.7915719747543335,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104475975036621,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
bigbio/distemist | bigbio | 2023-04-01T16:51:57Z | 62 | 3 | null | [
"multilinguality:monolingual",
"language:es",
"license:cc-by-4.0",
"region:us"
] | 2023-04-01T16:51:57Z | 2022-11-13T22:08:11.000Z | 2022-11-13T22:08:11 |
---
language:
- es
bigbio_language:
- Spanish
license: cc-by-4.0
multilinguality: monolingual
bigbio_license_shortname: CC_BY_4p0
pretty_name: DisTEMIST
homepage: https://zenodo.org/record/6671292
bigbio_pubmed: False
bigbio_public: True
bigbio_tasks:
- NAMED_ENTITY_RECOGNITION
- NAMED_ENTITY_DISAMBIGUATION
---
# Dataset Card for DisTEMIST
## Dataset Description
- **Homepage:** https://zenodo.org/record/6671292
- **Pubmed:** False
- **Public:** True
- **Tasks:** NER,NED
The DisTEMIST corpus is a collection of 1000 clinical cases with disease annotations linked with Snomed-CT concepts.
All documents are released in the context of the BioASQ DisTEMIST track for CLEF 2022.
## Citation Information
```
@article{miranda2022overview,
title={Overview of DisTEMIST at BioASQ: Automatic detection and normalization of diseases
from clinical texts: results, methods, evaluation and multilingual resources},
author={Miranda-Escalada, Antonio and Gascó, Luis and Lima-López, Salvador and Farré-Maduell,
Eulàlia and Estrada, Darryl and Nentidis, Anastasios and Krithara, Anastasia and Katsimpras,
Georgios and Paliouras, Georgios and Krallinger, Martin},
booktitle={Working Notes of Conference and Labs of the Evaluation (CLEF) Forum.
CEUR Workshop Proceedings},
year={2022}
}
```
| [
-0.2657383680343628,
-0.4944530427455902,
0.5508884787559509,
0.09603392332792282,
-0.8163110613822937,
-0.0000037735314890596783,
-0.07030539959669113,
-0.2948710024356842,
0.41152286529541016,
0.12301461398601532,
-0.5414836406707764,
-1.107617974281311,
-1.0761303901672363,
0.6190358996... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
YakupAkdin/instrument-images | YakupAkdin | 2022-11-22T21:26:19Z | 62 | 0 | null | [
"region:us"
] | 2022-11-22T21:26:19Z | 2022-11-22T21:13:30.000Z | 2022-11-22T21:13:30 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622263669967651,
0.43461522459983826,
-0.52829909324646,
0.7012971639633179,
0.7915719747543335,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104475975036621,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
rcds/swiss_court_view_generation | rcds | 2023-07-20T07:35:29Z | 62 | 2 | null | [
"task_categories:text-generation",
"size_categories:100K<n<1M",
"language:de",
"language:fr",
"language:it",
"license:cc-by-sa-4.0",
"arxiv:2306.09237",
"region:us"
] | 2023-07-20T07:35:29Z | 2023-01-30T01:50:28.000Z | 2023-01-30T01:50:28 | ---
task_categories:
- text-generation
language:
- de
- fr
- it
size_categories:
- 100K<n<1M
license: cc-by-sa-4.0
pretty_name: Swiss Court View Generation
---
# Dataset Card for Swiss Court View Generation
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
Swiss Court View Generation is a multilingual, diachronic dataset of 404K Swiss Federal Supreme Court (FSCS) cases. This dataset is part of a challenging text generation task.
This dataset contains court views for different languages and court chambers. It includes information such as decision id, language, chamber, file name, url, and the number of tokens in the facts and considerations sections.
Main (L1) contains all the data, Origin (L2) contains only data with complete origin facts & origin considerations.
### Supported Tasks and Leaderboards
### Languages
Switzerland has four official languages, of which three (German, French and Italian) are represented in this dataset. The decisions are written by the judges and clerks in the language of the proceedings.
| Language | Subset | Number of Documents Main | Number of Documents Origin |
|------------|------------|--------------------------|--------------------------|
| German | **de** | 197K | 49 |
| French | **fr** | 163K | 221 |
| Italian | **it** | 44K | 0 |
## Dataset Structure
### Data Fields
```
decision_id (string)
facts (string)
considerations (string)
origin_facts (string)
origin_considerations (string)
law_area (string)
language (string)
year (int32)
court (string)
chamber (string)
canton (string)
region (string)
```
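As a small illustration of working with these fields, here is a sketch that groups records by `language`, mirroring the per-language counts in the table above. The records are invented stand-ins; only the field names follow the card.

```python
# Hypothetical records mirroring the card's fields (decision_id, language, year, ...);
# group them by language, as the card's language table does.
records = [
    {"decision_id": "x1", "language": "de", "year": 2015},
    {"decision_id": "x2", "language": "fr", "year": 2018},
    {"decision_id": "x3", "language": "de", "year": 2020},
]

by_lang = {}
for rec in records:
    by_lang.setdefault(rec["language"], []).append(rec)

print(sorted((lang, len(v)) for lang, v in by_lang.items()))  # [('de', 2), ('fr', 1)]
```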
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
The original data are published from the Swiss Federal Supreme Court (https://www.bger.ch) in unprocessed formats (HTML). The documents were downloaded from the Entscheidsuche portal (https://entscheidsuche.ch) in HTML.
#### Who are the source language producers?
The decisions are written by the judges and clerks in the language of the proceedings.
### Annotations
#### Annotation process
#### Who are the annotators?
Metadata is published by the Swiss Federal Supreme Court (https://www.bger.ch).
### Personal and Sensitive Information
The dataset contains publicly available court decisions from the Swiss Federal Supreme Court. Personal or sensitive information has been anonymized by the court before publication according to the following guidelines: https://www.bger.ch/home/juridiction/anonymisierungsregeln.html.
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
We release the data under CC-BY-4.0 which complies with the court licensing (https://www.bger.ch/files/live/sites/bger/files/pdf/de/urteilsveroeffentlichung_d.pdf)
© Swiss Federal Supreme Court, 2002-2022
The copyright for the editorial content of this website and the consolidated texts, which is owned by the Swiss Federal Supreme Court, is licensed under the Creative Commons Attribution 4.0 International licence. This means that you can re-use the content provided you acknowledge the source and indicate any changes you have made.
Source: https://www.bger.ch/files/live/sites/bger/files/pdf/de/urteilsveroeffentlichung_d.pdf
### Citation Information
Please cite our [ArXiv-Preprint](https://arxiv.org/abs/2306.09237)
```
@misc{rasiah2023scale,
title={SCALE: Scaling up the Complexity for Advanced Language Model Evaluation},
author={Vishvaksenan Rasiah and Ronja Stern and Veton Matoshi and Matthias Stรผrmer and Ilias Chalkidis and Daniel E. Ho and Joel Niklaus},
year={2023},
eprint={2306.09237},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
### Contributions
| [
-0.28031492233276367,
-0.6426276564598083,
0.39379003643989563,
0.2149145007133484,
-0.5327152013778687,
-0.16805799305438995,
-0.2035420835018158,
-0.18824660778045654,
0.18527619540691376,
0.6685446500778198,
-0.7096226811408997,
-0.8721070885658264,
-0.7191925048828125,
0.10396909713745... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
aisquared/databricks-dolly-15k | aisquared | 2023-04-12T18:14:46Z | 62 | 3 | null | [
"language:en",
"license:cc-by-sa-3.0",
"databricks",
"dolly",
"arxiv:2203.02155",
"region:us"
] | 2023-04-12T18:14:46Z | 2023-04-12T17:45:01.000Z | 2023-04-12T17:45:01 | ---
license: cc-by-sa-3.0
language:
- en
tags:
- databricks
- dolly
pretty_name: 'Dataset '
---
# databricks-dolly-15k
**This dataset was not originally created by AI Squared.** This dataset was curated and created by [Databricks](https://databricks.com).
The below text comes from the original release of the dataset's README file in GitHub (available at https://github.com/databrickslabs/dolly/tree/master/data):
# Summary
`databricks-dolly-15k` is an open source dataset of instruction-following records generated by thousands of Databricks employees in several of the behavioral categories outlined in the [InstructGPT](https://arxiv.org/abs/2203.02155) paper, including brainstorming, classification, closed QA, generation, information extraction, open QA, and summarization.
This dataset can be used for any purpose, whether academic or commercial, under the terms of the [Creative Commons Attribution-ShareAlike 3.0 Unported License](https://creativecommons.org/licenses/by-sa/3.0/legalcode).
Supported Tasks:
- Training LLMs
- Synthetic Data Generation
- Data Augmentation
Languages: English
Version: 1.0
**Owner: Databricks, Inc.**
# Dataset Overview
`databricks-dolly-15k` is a corpus of more than 15,000 records generated by thousands of Databricks employees to enable large language
models to exhibit the magical interactivity of ChatGPT. Databricks employees were invited to create prompt / response pairs in each of eight different instruction categories, including the seven outlined in the InstructGPT paper, as well as an open-ended free-form category. The contributors were instructed to avoid using information from any source on the web with the exception of Wikipedia (for particular subsets of instruction categories), and explicitly instructed to avoid using generative AI in formulating instructions or responses. Examples of each behavior were provided to motivate the
types of questions and instructions appropriate to each category.
Halfway through the data generation process, contributors were given the option of answering questions posed by other contributors. They were asked to rephrase the original question and only select questions they could be reasonably expected to answer correctly.
For certain categories contributors were asked to provide reference texts copied from Wikipedia. Reference text (indicated by the `context` field in the actual dataset) may contain bracketed Wikipedia citation numbers (e.g. `[42]`) which we recommend users remove for downstream applications.
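A minimal sketch of that recommended cleanup step, assuming the citation markers always take the bracketed-integer form described above (the function name is illustrative):

```python
import re

def strip_citations(context: str) -> str:
    """Drop bracketed Wikipedia citation markers such as [42] from a reference text."""
    return re.sub(r"\[\d+\]", "", context)

passage = "The tower is 330 metres tall.[7] It opened in 1889.[12]"
print(strip_citations(passage))  # The tower is 330 metres tall. It opened in 1889.
```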
# Intended Uses
While immediately valuable for instruction fine-tuning large language models, as a corpus of human-generated instruction prompts, this dataset also presents a valuable opportunity for synthetic data generation using the methods outlined in the Self-Instruct paper. For example, contributor-generated prompts could be submitted as few-shot examples to a large open language model to generate a corpus of millions of examples of instructions in each of the respective InstructGPT categories.
Likewise, both the instructions and responses present fertile ground for data augmentation. A paraphrasing model might be used to restate each prompt or short responses, with the resulting text associated to the respective ground-truth sample. Such an approach might provide a form of regularization on the dataset that could allow for more robust instruction-following behavior in models derived from these synthetic datasets.
# Dataset
## Purpose of Collection
As part of our continuing commitment to open source, Databricks developed what is, to the best of our knowledge, the first open source, human-generated instruction corpus specifically designed to enable large language models to exhibit the magical interactivity of ChatGPT. Unlike other datasets that are limited to non-commercial use, this dataset can be used, modified, and extended for any purpose, including academic or commercial applications.
## Sources
- **Human-generated data**: Databricks employees were invited to create prompt / response pairs in each of eight different instruction categories.
- **Wikipedia**: For instruction categories that require an annotator to consult a reference text (information extraction, closed QA, summarization) contributors selected passages from Wikipedia for particular subsets of instruction categories. No guidance was given to annotators as to how to select the target passages.
## Annotator Guidelines
To create a record, employees were given a brief description of the annotation task as well as examples of the types of prompts typical of each annotation task. Guidelines were succinct by design so as to encourage a high task completion rate, possibly at the cost of rigorous compliance to an annotation rubric that concretely and reliably operationalizes the specific task. Caveat emptor.
The annotation guidelines for each of the categories are as follows:
- **Creative Writing**: Write a question or instruction that requires a creative, open-ended written response. The instruction should be reasonable to ask of a person with general world knowledge and should not require searching. In this task, your prompt should give very specific instructions to follow. Constraints, instructions, guidelines, or requirements all work, and the more of them the better.
- **Closed QA**: Write a question or instruction that requires factually correct response based on a passage of text from Wikipedia. The question can be complex and can involve human-level reasoning capabilities, but should not require special knowledge. To create a question for this task include both the text of the question as well as the reference text in the form.
- **Open QA**: Write a question that can be answered using general world knowledge or at most a single search. This task asks for opinions and facts about the world at large and does not provide any reference text for consultation.
- **Summarization**: Give a summary of a paragraph from Wikipedia. Please don't ask questions that will require more than 3-5 minutes to answer. To create a question for this task include both the text of the question as well as the reference text in the form.
- **Information Extraction**: These questions involve reading a paragraph from Wikipedia and extracting information from the passage. Everything required to produce an answer (e.g. a list, keywords etc) should be included in the passages. To create a question for this task include both the text of the question as well as the reference text in the form.
- **Classification**: These prompts contain lists or examples of entities to be classified, e.g. movie reviews, products, etc. In this task the text or list of entities under consideration is contained in the prompt (e.g. there is no reference text.). You can choose any categories for classification you like, the more diverse the better.
- **Brainstorming**: Think up lots of examples in response to a question asking to brainstorm ideas.
## Personal or Sensitive Data
This dataset contains public information (e.g., some information from Wikipedia). To our knowledge, there are no private person's personal identifiers or sensitive information.
## Language
American English
# Known Limitations
- Wikipedia is a crowdsourced corpus and the contents of this dataset may reflect the bias, factual errors and topical focus found in Wikipedia
- Some annotators may not be native English speakers
- Annotator demographics and subject matter may reflect the makeup of Databricks employees
# License/Attribution
**Copyright (2023) Databricks, Inc.**
This dataset was developed at Databricks (https://www.databricks.com) and its use is subject to the CC BY-SA 3.0 license.
Certain categories of material in the dataset include materials from the following sources, licensed under the CC BY-SA 3.0 license:
Wikipedia (various pages) - https://www.wikipedia.org/
Copyright ยฉ Wikipedia editors and contributors. | [
-0.4219191372394562,
-1.0356724262237549,
0.2210233509540558,
0.20203308761119843,
-0.07123798877000809,
-0.07077798992395401,
-0.23122085630893707,
-0.1557541936635971,
0.00019948629778809845,
0.44231370091438293,
-0.6898090243339539,
-0.6024109721183777,
-0.2728607654571533,
0.3297349214... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
shibing624/sts-sohu2021 | shibing624 | 2023-06-19T09:02:29Z | 62 | 6 | sts | [
"task_categories:text-classification",
"task_categories:sentence-similarity",
"task_ids:natural-language-inference",
"task_ids:semantic-similarity-scoring",
"task_ids:text-scoring",
"annotations_creators:shibing624",
"language_creators:shibing624",
"multilinguality:zh",
"size_categories:100K<n<20M",... | 2023-06-19T09:02:29Z | 2023-06-18T14:38:51.000Z | 2023-06-18T14:38:51 | ---
annotations_creators:
- shibing624
language_creators:
- shibing624
language:
- zh
license:
- cc-by-4.0
multilinguality:
- zh
size_categories:
- 100K<n<20M
source_datasets:
- https://www.biendata.xyz/competition/sohu_2021/data/
task_categories:
- text-classification
- sentence-similarity
task_ids:
- natural-language-inference
- semantic-similarity-scoring
- text-scoring
paperswithcode_id: sts
pretty_name: Sentence Text Similarity SOHU2021
---
# Dataset Card for sts-sohu2021
## Dataset Description
- **Repository:** [Chinese NLI dataset](https://github.com/shibing624/text2vec)
- **Leaderboard:** [NLI_zh leaderboard](https://github.com/shibing624/text2vec) (located on the homepage)
- **Size of downloaded dataset files:** 218 MB
- **Total amount of disk used:** 218 MB
### Dataset Summary
Sohu 2021 campus text matching algorithm competition dataset.

- Data source: https://www.biendata.xyz/competition/sohu_2021/data/

The data is split into two files, A and B, which use different matching criteria. Each file is further divided into "short-short", "short-long" and "long-long" text matching subsets. File A uses a relatively loose criterion: two passages are considered a match if they discuss the same topic. File B uses a stricter criterion: two passages are considered a match only if they describe the same event.
Data types:

| type | description |
| --- | ------------|
| dda | short-short matching, class A |
| ddb | short-short matching, class B |
| dca | short-long matching, class A |
| dcb | short-long matching, class B |
| cca | long-long matching, class A |
| ccb | long-long matching, class B |
### Supported Tasks and Leaderboards
Supported Tasks: Chinese text matching, text similarity computation, and related tasks.

Results on Chinese matching tasks currently appear only rarely in top-venue papers, so the leaderboard below lists results trained by the dataset author:
**Leaderboard:** [NLI_zh leaderboard](https://github.com/shibing624/text2vec)
### Languages
All texts in the dataset are in Simplified Chinese.
## Dataset Structure
### Data Instances
An example of 'train' looks as follows.
```python
# Class A, short-short sample
{
"sentence1": "ๅฐ่บ็ๆ
ไบ่ฎฉ็ฑๅๅฎถ2021ๅนด2ๆ16ๆฅๅคงๅนดๅไบ19๏ผ30ๅธฆไธไฝ ๆไบฒ็ฑ็ไบบไธๅขๅขๅ็ธ็บฆใๅฐ่บ็ๆ
ไบใ็ดๆญ้ด๏ผ",
"sentence2": "้ฆๆธฏไปฃ่ดญไบไธ่ตทๅ๏ผๅฎ็นๅท็ซ็ถๅจ็ดๆญ้ดโ็ซๅฏโ่ตทๆฅ",
"label": 0
}
# Class B, short-short sample
{
"sentence1": "่ฎฉๅพๅค็ฝๅๅฅฝๅฅ็ๆฏ๏ผๅผ ๆ่ๅจไธๅฐๆถๅไนๅจ็คพไบคๅนณๅฐๅๆ๏ผโ็ปๅคงๅฎถๆๅนดๅฆใโ่ฟๆ็ฝๅ็ๆต๏ผ่ฐข้้็็ป็บชไบบๅๆ๏ผๅผ ๆ่ไนๅๆ๏ผๅนถไธ้
ๅพ๏ผไผผไน้ฝๅจ่ฏๅฎ๏ผ่ฐข้้ไพๆงๅ็่ฒๅจไธ่ตท๏ผ่ๅผ ๆ่ไนๆไบๆฐ็ๆไบบ๏ผๅนถไธ็ไบๅญฉๅญ๏ผไธคไบบไนๆพๅฐไบๅ่ช็ๅฝๅฎฟ๏ผๆไบ่ชๅทฑ็ๅนธ็ฆ็ๆดป๏ผ่ฎฉไผ ่จไธๆป่ช็ ดใ",
"sentence2": "้ๆไธ่ฐๆง็ฑๅผ ๆ่๏ผไธไธชๅฃ่ฏฏๆด้ฒๅฅน็็งๅฏ๏ผ้พๆช่ฐข้้ไผ็ฆปๅผๅฅน",
"label": 0
}
```
label: 0 means the pair does not match, 1 means it matches.
### Data Fields
The data fields are the same among all splits.
- `sentence1`: a `string` feature.
- `sentence2`: a `string` feature.
- `label`: a classification label, with possible values including `similarity` (1), `dissimilarity` (0).
### Data Splits
```shell
> wc -l *.jsonl
11690 cca.jsonl
11690 ccb.jsonl
11592 dca.jsonl
11593 dcb.jsonl
11512 dda.jsonl
11501 ddb.jsonl
69578 total
```
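Each subset is a JSON Lines file with one record per line. A small illustrative sketch of parsing such records, shown on in-memory stand-in lines rather than a real file:

```python
import json

# In-memory stand-in for a few lines of one subset file (e.g. dda.jsonl);
# real files are read the same way, one JSON object per line.
raw_lines = [
    '{"sentence1": "...", "sentence2": "...", "label": 0}',
    '{"sentence1": "...", "sentence2": "...", "label": 1}',
]
pairs = [json.loads(line) for line in raw_lines]
n_matched = sum(p["label"] for p in pairs)
print(f"{len(pairs)} pairs, {n_matched} matched")  # 2 pairs, 1 matched
```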
### Curation Rationale
As a Chinese NLI (natural language inference) style dataset, it has been uploaded to Hugging Face `datasets` to make it convenient for everyone to use.
#### Who are the source language producers?
The copyright of the dataset belongs to the original authors; please respect the original dataset's copyright when using the data.
#### Who are the annotators?
The original authors.
### Social Impact of Dataset
This dataset was developed as a benchmark for evaluating representational systems for text, especially including those induced by representation learning methods, in the task of predicting truth conditions in a given context.
Systems that are successful at such a task may be more successful in modeling semantic representations.
### Licensing Information
For academic research use.
### Contributions
[shibing624](https://github.com/shibing624) upload this dataset. | [
-0.2731100916862488,
-0.6566749215126038,
0.30373188853263855,
0.4629286527633667,
-0.32230299711227417,
-0.20637327432632446,
-0.3381228446960449,
-0.35851410031318665,
0.34337326884269714,
0.46579620242118835,
-0.6927773356437683,
-0.8513617515563965,
-0.658442497253418,
0.19808788597583... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
TrainingDataPro/people-tracking-dataset | TrainingDataPro | 2023-09-19T19:35:09Z | 62 | 1 | null | [
"task_categories:image-segmentation",
"task_categories:image-classification",
"language:en",
"license:cc-by-nc-nd-4.0",
"legal",
"code",
"region:us"
] | 2023-09-19T19:35:09Z | 2023-06-26T12:58:57.000Z | 2023-06-26T12:58:57 | ---
license: cc-by-nc-nd-4.0
task_categories:
- image-segmentation
- image-classification
language:
- en
tags:
- legal
- code
dataset_info:
features:
- name: image_id
dtype: int32
- name: image
dtype: image
- name: mask
dtype: image
- name: annotations
dtype: string
splits:
- name: train
num_bytes: 52028802
num_examples: 41
download_size: 45336774
dataset_size: 52028802
---
# People Tracking Dataset
The dataset comprises annotated video frames from a camera positioned in a public space. Each individual in the camera's view has been tracked using the rectangle tool in the Computer Vision Annotation Tool (CVAT).
# Get the dataset
### This is just an example of the data
Leave a request on [**https://trainingdata.pro/data-market**](https://trainingdata.pro/data-market?utm_source=huggingface&utm_medium=cpc&utm_campaign=people-tracking-dataset) to discuss your requirements, learn about the price and buy the dataset.

# Dataset Structure
- The `images` directory houses the original video frames, serving as the primary source of raw data.
- The `annotations.xml` file provides the detailed annotation data for the images.
- The `boxes` directory contains frames that visually represent the bounding box annotations, showing the locations of the tracked individuals within each frame. These images can be used to understand how the tracking has been implemented and to visualize the marked areas for each individual.
# Data Format
The annotations are represented as rectangular bounding boxes placed around each individual. Each bounding box annotation stores the position of the box within the frame as `xtl`, `ytl`, `xbr`, `ybr` coordinates (top-left and bottom-right corners).
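A minimal, illustrative sketch of pulling those box coordinates out with Python's standard library. The XML fragment below only imitates the shape of a CVAT track/box export; it is not taken from the real `annotations.xml`:

```python
import xml.etree.ElementTree as ET

# Illustrative fragment in the shape of a CVAT video-annotation export.
sample = """
<annotations>
  <track id="0" label="person">
    <box frame="0" xtl="10.5" ytl="20.0" xbr="110.5" ybr="220.0"/>
    <box frame="1" xtl="12.0" ytl="21.0" xbr="112.0" ybr="221.0"/>
  </track>
</annotations>
"""

boxes = []
for track in ET.fromstring(sample).iter("track"):
    for box in track.iter("box"):
        boxes.append({
            "track_id": track.get("id"),
            "frame": int(box.get("frame")),
            "bbox": tuple(float(box.get(k)) for k in ("xtl", "ytl", "xbr", "ybr")),
        })

print(boxes[0]["bbox"])  # (10.5, 20.0, 110.5, 220.0)
```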
.png?generation=1687776281548084&alt=media)
## [TrainingData](https://trainingdata.pro/data-market?utm_source=huggingface&utm_medium=cpc&utm_campaign=people-tracking-dataset) provides high-quality data annotation tailored to your needs
More datasets in TrainingData's Kaggle account: **https://www.kaggle.com/trainingdatapro/datasets**
TrainingData's GitHub: **https://github.com/Trainingdata-datamarket/TrainingData_All_datasets** | [
-0.590218186378479,
-0.3041171133518219,
0.23247261345386505,
-0.07280378043651581,
-0.22615626454353333,
0.12842577695846558,
0.22081395983695984,
-0.3226824700832367,
0.6801193952560425,
0.7789664268493652,
-0.8611835241317749,
-0.7754656672477722,
-0.5016525983810425,
-0.257260262966156... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
sngsfydy/aptos_train | sngsfydy | 2023-07-19T19:52:54Z | 62 | 0 | null | [
"region:us"
] | 2023-07-19T19:52:54Z | 2023-07-19T18:43:34.000Z | 2023-07-19T18:43:34 | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': '0'
'1': '1'
'2': '2'
'3': '3'
'4': '4'
splits:
- name: train
num_bytes: 7204351279.337521
num_examples: 2929
download_size: 7192333107
dataset_size: 7204351279.337521
---
# Dataset Card for "aptos_dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.6048420071601868,
0.0007447273237630725,
0.14593303203582764,
0.1288309395313263,
-0.47475212812423706,
-0.07297636568546295,
0.4162871241569519,
-0.11791528761386871,
0.999896764755249,
0.5120773911476135,
-0.6235771179199219,
-0.713479220867157,
-0.7276954054832458,
0.0038002787623554... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
ChanceFocus/flare-convfinqa | ChanceFocus | 2023-07-31T03:49:30Z | 62 | 2 | null | [
"region:us"
] | 2023-07-31T03:49:30Z | 2023-07-31T03:49:18.000Z | 2023-07-31T03:49:18 | ---
dataset_info:
features:
- name: id
dtype: string
- name: query
dtype: string
- name: answer
dtype: string
- name: turn
dtype: int64
- name: dialogue_id
dtype: int64
splits:
- name: train
num_bytes: 44382083
num_examples: 8891
- name: valid
num_bytes: 11171617
num_examples: 2213
- name: test
num_bytes: 7116753
num_examples: 1490
download_size: 11803908
dataset_size: 62670453
---
# Dataset Card for "flare-convfinqa"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
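Given the `turn` and `dialogue_id` fields in the schema above, multi-turn conversations can be reconstructed by grouping and sorting records. A small sketch on invented records that only follow the card's field names:

```python
from collections import defaultdict

# Hypothetical records with the card's fields; real rows carry long finance queries.
records = [
    {"id": "b", "dialogue_id": 7, "turn": 1, "query": "q2", "answer": "a2"},
    {"id": "a", "dialogue_id": 7, "turn": 0, "query": "q1", "answer": "a1"},
]

dialogues = defaultdict(list)
for rec in records:
    dialogues[rec["dialogue_id"]].append(rec)
for turns in dialogues.values():
    turns.sort(key=lambda r: r["turn"])  # restore conversational order

print([t["query"] for t in dialogues[7]])  # ['q1', 'q2']
```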
-0.8185546398162842,
-0.16663746535778046,
-0.03190038353204727,
0.19119679927825928,
-0.1327524036169052,
0.19102460145950317,
0.24132807552814484,
-0.15420301258563995,
0.885350227355957,
0.48026952147483826,
-0.8534651398658752,
-0.6463897228240967,
-0.4391859173774719,
-0.2584398090839... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
icantiemyshoe/cve-to-metasploit-module | icantiemyshoe | 2023-08-27T22:27:41Z | 62 | 1 | null | [
"size_categories:1K<n<10K",
"language:en",
"license:bsd-2-clause",
"region:us"
] | 2023-08-27T22:27:41Z | 2023-08-17T20:59:08.000Z | 2023-08-17T20:59:08 | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: response
dtype: string
- name: source
dtype: string
- name: cve
dtype: string
- name: script_type
dtype: string
# splits:
# - name: train
# num_bytes: 290000000
# num_examples: 4278
# download_size: 290000000
# dataset_size: 290000000
license: bsd-2-clause
language:
- en
size_categories:
- 1K<n<10K
---
# CVE To Metasploit Module Prompt
This dataset is a submodule to the overall project to create an LLM that can look at newly published CVE writeups and create metasploit modules. The main repo for the project can be found [here](https://github.com/roostercoopllc/metAIsploit-assistant).
## Usage
*TO-DO*
## References
*TO-DO*
| [
-0.30399903655052185,
-0.16379384696483612,
0.32263457775115967,
-0.1545528918504715,
-0.3044697642326355,
-0.02762504853308201,
0.4284702241420746,
0.17989583313465118,
0.5411597490310669,
0.9053381085395813,
-1.4396449327468872,
-0.7773392796516418,
-0.20640070736408234,
0.10254595428705... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
allenai/ValuePrism | allenai | 2023-09-08T23:05:50Z | 62 | 2 | null | [
"size_categories:100K<n<1M",
"language:en",
"not-for-all-audiences",
"arxiv:2309.00779",
"arxiv:2304.03738",
"region:us"
] | 2023-09-08T23:05:50Z | 2023-08-22T20:08:41.000Z | 2023-08-22T20:08:41 | ---
configs:
- config_name: full
data_files: full/*csv
default: true
- config_name: mixture
data_files:
- split: train
path: mixture/*train.csv
- split: val
path: mixture/*val.csv
- split: test
path: mixture/*test.csv
- config_name: generative
data_files:
- split: train
path: generative/*train.csv
- split: val
path: generative/*val.csv
- split: test
path: generative/*test.csv
- config_name: relevance
data_files:
- split: train
path: relevance/*train.csv
- split: val
path: relevance/*val.csv
- split: test
path: relevance/*test.csv
- config_name: explanation
data_files:
- split: train
path: explanation/*train.csv
- split: val
path: explanation/*val.csv
- split: test
path: explanation/*test.csv
- config_name: valence
data_files:
- split: train
path: valence/*train.csv
- split: val
path: valence/*val.csv
- split: test
path: valence/*test.csv
annotations_creators:
- crowdsourced
- machine-generated
language:
- en
pretty_name: ValuePrism
extra_gated_prompt: >-
Access to this dataset is automatically granted upon accepting the [**AI2
ImpACT License - Medium Risk Artifacts ("MR
Agreement")**](https://allenai.org/licenses/impact-mr) and completing all
fields below.
extra_gated_fields:
Your full name: text
Organization or entity you are affiliated with: text
State or country you are located in: text
Contact email: text
Please describe your intended use of the medium risk artifact(s): text
I UNDERSTAND that the dataset is intended for research purposes and not for real-world use-cases: checkbox
I AGREE to the terms and conditions of the MR Agreement above: checkbox
I AGREE to AI2's use of my information for legal notices and administrative matters: checkbox
I CERTIFY that the information I have provided is true and accurate: checkbox
tags:
- not-for-all-audiences
size_categories:
- 100K<n<1M
---
# Dataset Card for ValuePrism
## Dataset Description
- **Paper:** https://arxiv.org/abs/2309.00779
- **Demo:** https://kaleido.allen.ai
- **Repository:** https://github.com/tsor13/kaleido
- **Datasheet for Datasets:** https://drive.google.com/file/d/1zDWvO0NljqxBMfDAGW7Jx60Iw54bjsEE/view?usp=sharing
- **License:** https://allenai.org/licenses/impact-mr
- **Point of Contact:** [Taylor Sorensen](mailto:tsor13@cs.washington.edu)
### Dataset Summary
ValuePrism was created 1) to understand what pluralistic human values, rights, and duties are already present in large language models, and 2) to serve as a resource to support open, value pluralistic modeling (e.g., [Kaleido](https://huggingface.co/tsor13/kaleido-xl)). It contains human-written situations and machine-generated candidate values, rights, and duties, along with their valences and post-hoc explanations relating them to the situations.
For additional documentation, see ValuePrism's [Datasheet](https://drive.google.com/file/d/1zDWvO0NljqxBMfDAGW7Jx60Iw54bjsEE/view?usp=sharing).
The dataset was created and intended for research purposes. It is openly released under AI2's ImpACT license as a medium risk artifact.
### Supported Tasks
The dataset supports 4 tasks:
- **Generation (open-text)**
*What values, rights, and duties are relevant for a situation?*
Generate a value, right, or duty
that could be considered when reasoning about the action. Values are generated one at a time, as opposed to a batch.
- **Relevance (2-way classification)**
*Is a value relevant for a situation?* Some values are more relevant than others.
- **Valence (3-way classification)**
*Does the value support or oppose the action, or might it depend on context?*
Disentangling the valence is critical for understanding how plural considerations may interact with a decision.
- **Explanation (open-text)**
*How does the value relate to the action?* Generating a post-hoc rationale for why a value consideration may relate to a situation.
### Languages
All data is in English.
## Dataset Structure
### Dataset Splits
There are 6 data configurations:
- `full`: The full structured dataset of situations paired with values, rights, and duties paired with GPT-4. Only one split with all of the data.
- `generative`: Generative task train, val, and test splits.
- `relevance`: Relevance task train, val, and test splits.
- `valence`: Valence task train, val, and test splits.
- `explanation`: Explanation task train, val, and test splits.
- `mixture`: Generative, relevance, valence, and explanation tasks combined with train, val, and test splits.
### Data Fields
While different configurations have different fields, these are all the corresponding fields in the dataset:
- `situation` (string): A one sentence of a particular scenario or situation. For example, "buying some chocolate for my grandparents".
- `vrd` (string): Type of instance, either "Value", "Right", or "Duty".
- `text` (string): The text of the value, right, or duty. For example, "Honesty", "Right to property", "Duty to protect".
- `explanation` (string): A post-hoc explanation of why the specified value, right, or duty is relevant or important in the given situation. For example, "Buying chocolate for your grandparents can strengthen family connections and show appreciation for your relationship with them."
- `valence` (string): Indicates whether the value, right, or duty supports or opposes the action in the situation, or if it might depend on the context. Either "Supports", "Opposes", or "Either".
- `input` (string): For the seq2seq task (generative, relevance, valence, explanation), the input to the model.
- `output` (string): For the seq2seq task (generative, relevance, valence, explanation), the output of the model.
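As a quick illustration of these fields, here is a hypothetical record checked against the closed label sets named above. The record's `text` value is invented; the situation string and the allowed labels come from the card.

```python
# Allowed labels, taken from the field descriptions in the card.
ALLOWED_VRD = {"Value", "Right", "Duty"}
ALLOWED_VALENCE = {"Supports", "Opposes", "Either"}

# Hypothetical record following the card's schema.
record = {
    "situation": "buying some chocolate for my grandparents",
    "vrd": "Value",
    "text": "Family bonding",
    "valence": "Supports",
}

ok = record["vrd"] in ALLOWED_VRD and record["valence"] in ALLOWED_VALENCE
print(ok)  # True
```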
### Data Splits
All configurations (except for the raw outputs in `full`) have 80%/10%/10% train/validation/test splits.
## Dataset Creation
### Source Data
#### Data Collection
Situations are sourced from the Delphi user demo, and candidate values, rights, duties, their valences, and explanations connecting them to the situations are machine generated by GPT-4.
#### Who are the source language producers?
The situations are sourced from users of the Delphi user demo, for whom we do not have demographic information.
### Personal and Sensitive Information
There is no personal or sensitive information in ValuePrism.
## Considerations for Using the Data
### Social Impact of Dataset
We intend the dataset to be used to enable research and not to be used for real-world use or decision-making.
### Discussion of Biases
The value, right, and duty data was generated by GPT-4, which is known to exhibit [biases](https://arxiv.org/pdf/2304.03738.pdf). Thus, we expect ValuePrism to inherit biases from GPT-4.
That being said, we have tried to prompt the model to output a diversity of values in an attempt to mitigate bias with breadth.
## Additional Information
91% of values, rights, and duties were marked as high-quality by 3/3 annotators, and 87% of valence scores were marked as correct by 3/3 annotators.
Additionally, we perform a human study on the data and do not find large disparities in agreement between demographic groups tested, although future work in this area is a promising direction.
See [our paper](https://arxiv.org/abs/2309.00779) for more details and analysis.
### Licensing Information
ValuePrism is made available under the [**AI2
ImpACT License - Medium Risk Artifacts ("MR
Agreement")**](https://allenai.org/licenses/impact-mr)
### Citation Information
Please cite [our paper](https://arxiv.org/abs/2309.00779) when using this dataset:
```
@misc{sorensen2023value,
title={Value Kaleidoscope: Engaging AI with Pluralistic Human Values, Rights, and Duties},
author={Taylor Sorensen and Liwei Jiang and Jena Hwang and Sydney Levine and Valentina Pyatkin and Peter West and Nouha Dziri and Ximing Lu and Kavel Rao and Chandra Bhagavatula and Maarten Sap and John Tasioulas and Yejin Choi},
year={2023},
eprint={2309.00779},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
#### Raw Dataset Statistics
The table shows the total number, the number of unique items, and the average number per situation of generated values, rights, and duties.
| **Type** | **Total** | **Unique** | **Per Situation** |
|--------------|-----------|------------|--------------------|
| **Situations** | 31.0k | 31.0k | 1 |
| **Values** | 97.7k | 4.2k | 3.15 |
| **Rights** | 49.0k | 4.6k | 1.58 |
| **Duties** | 71.6k | 12.8k | 2.31 |
#### Task Dataset Statistics
| | **Relevance** | **Valence** | **Generation** | **Explanation** | **Mixture** |
|---------------|------------|-------------|----------|-----------|-------------|
| **Train** | 349k | 175k | 175k | 175k | 874k |
| **Val** | 44k | 22k | 22k | 22k | 109k |
| **Test** | 44k | 22k | 22k | 22k | 109k |
| **Total** | 437k | 219k | 219k | 219k | 1.1M | | [
-0.25075361132621765,
-0.3034813702106476,
0.2271173894405365,
0.16628427803516388,
-0.2532881200313568,
-0.2846505045890808,
0.047753095626831055,
-0.3084416091442108,
0.05425502359867096,
0.4329031705856323,
-0.5873496532440186,
-0.5047079920768738,
-0.6475552916526794,
-0.00224183686077... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
aboonaji/wiki_medical_terms_llam2_format | aboonaji | 2023-08-23T14:03:22Z | 62 | 2 | null | [
"region:us"
] | 2023-08-23T14:03:22Z | 2023-08-23T09:44:45.000Z | 2023-08-23T09:44:45 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622263669967651,
0.43461522459983826,
-0.52829909324646,
0.7012971639633179,
0.7915719747543335,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104475975036621,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
reciprocate/fair-eval | reciprocate | 2023-08-24T15:26:31Z | 62 | 0 | null | [
"region:us"
] | 2023-08-24T15:26:31Z | 2023-08-24T15:26:28.000Z | 2023-08-24T15:26:28 | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: selected
dtype: string
- name: rejected
dtype: string
splits:
- name: train
num_bytes: 179384
num_examples: 66
download_size: 117180
dataset_size: 179384
---
# Dataset Card for "fair-eval"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.49761536717414856,
-0.47477883100509644,
0.30466586351394653,
0.24440401792526245,
-0.04788642004132271,
0.05626089870929718,
0.17427794635295868,
-0.10008859634399414,
0.551012396812439,
0.41778257489204407,
-0.5369818210601807,
-0.6502352356910706,
-0.44304418563842773,
-0.26594367623... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
TearGosling/limarp_standardized | TearGosling | 2023-09-05T01:01:28Z | 62 | 2 | null | [
"region:us"
] | 2023-09-05T01:01:28Z | 2023-09-05T00:59:45.000Z | 2023-09-05T00:59:45 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622264862060547,
0.43461528420448303,
-0.52829909324646,
0.7012971639633179,
0.7915720343589783,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104477167129517,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
alexandrainst/nordjylland-news-image-captioning | alexandrainst | 2023-11-28T15:36:16Z | 62 | 2 | null | [
"task_categories:image-to-text",
"task_categories:zero-shot-image-classification",
"task_categories:feature-extraction",
"task_ids:image-captioning",
"size_categories:10K<n<100K",
"language:da",
"license:apache-2.0",
"region:us"
] | 2023-11-28T15:36:16Z | 2023-09-05T06:32:33.000Z | 2023-09-05T06:32:33 | ---
dataset_info:
features:
- name: image
dtype: image
- name: caption
dtype: string
splits:
- name: train
num_bytes: 10341164216.808
num_examples: 11707
download_size: 11002607252
dataset_size: 10341164216.808
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
language:
- da
pretty_name: Nordjylland News - Image caption dataset
size_categories:
- 10K<n<100K
license: apache-2.0
task_categories:
- image-to-text
- zero-shot-image-classification
- feature-extraction
task_ids:
- image-captioning
---
# Dataset Card for "nordjylland-news-image-captioning"
## Dataset Description
- **Point of Contact:** [Oliver Kinch](mailto:oliver.kinch@alexandra.dk)
- **Size of dataset:** 11 GB
### Dataset Summary
This dataset is a collection of image-caption pairs from the Danish newspaper [TV2 Nord](https://www.tv2nord.dk/).
### Supported Tasks and Leaderboards
Image captioning is the intended task for this dataset. No leaderboard is active at this point.
### Languages
The dataset is available in Danish (`da`).
## Dataset Structure
An example from the dataset looks as follows.
```
{
"file_name": "1.jpg",
    "caption": "Bruno Sørensen og Poul Erik Pedersen er ofte at finde i Fyensgade Centret."
}
```
### Data Fields
- `file_name`: a `string` giving the file name of the image.
- `caption`: a `string` feature.
### Dataset Statistics
#### Number of samples
11707
#### Image sizes
All images in the dataset are in RGB format, but they exhibit varying resolutions:
- Width ranges from 73 to 11,830 pixels.
- Height ranges from 38 to 8,268 pixels.
The side length of a square image with the same number of pixels as an image with height \\( h \\) and width \\( w \\) is approximately given as
\\( x = \text{int}(\sqrt{h \cdot w}) \\).
Plotting the distribution of \\( x \\) gives an insight into the sizes of the images in the dataset.

#### Caption Length Distribution

## Potential Dataset Issues
- There are 14 images with the caption "Arkivfoto".
- There are 37 images with captions consisting solely of a source reference, such as "Kilde: \<name of source\>".
You might want to consider excluding these samples from the model training process.
## Dataset Creation
### Curation Rationale
There are not many large-scale image-captioning datasets in Danish.
### Source Data
The dataset has been collected through the TV2 Nord API, which can be accessed [here](https://developer.bazo.dk/#876ab6f9-e057-43e3-897a-1563de34397e).
## Additional Information
### Dataset Curators
[Oliver Kinch](https://huggingface.co/oliverkinch) from the [The Alexandra
Institute](https://alexandra.dk/)
### Licensing Information
The dataset is licensed under the [CC0
license](https://creativecommons.org/share-your-work/public-domain/cc0/). | [
-0.6855177879333496,
-0.10468412190675735,
0.11566301435232162,
0.20070700347423553,
-0.8930695056915283,
-0.15879955887794495,
-0.24345453083515167,
-0.4460625648498535,
0.2044508010149002,
0.6275936961174011,
-0.5072413086891174,
-0.6945499181747437,
-0.7061600089073181,
0.17706875503063... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
hmao/new_vt_apis | hmao | 2023-10-26T00:50:57Z | 62 | 0 | null | [
"region:us"
] | 2023-10-26T00:50:57Z | 2023-10-13T04:28:16.000Z | 2023-10-13T04:28:16 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: args_dicts
list:
- name: default
dtype: string
- name: description
dtype: string
- name: name
dtype: string
- name: required
dtype: bool
- name: type
dtype: string
- name: api_type
dtype: string
- name: description
dtype: string
- name: name
dtype: string
- name: dataset
dtype: string
splits:
- name: train
num_bytes: 20764
num_examples: 29
download_size: 14860
dataset_size: 20764
---
# Dataset Card for "new_vt_apis"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.7033783197402954,
-0.3935313820838928,
0.23528103530406952,
0.0728522539138794,
-0.2951646149158478,
0.147682785987854,
0.46488913893699646,
-0.04230133071541786,
0.7163876295089722,
0.6993778944015503,
-0.8463873863220215,
-1.0062328577041626,
-0.466391921043396,
-0.14734162390232086,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
gianma/eurlexsum_ita_cleaned_8192_86 | gianma | 2023-10-28T18:16:14Z | 62 | 0 | null | [
"region:us"
] | 2023-10-28T18:16:14Z | 2023-10-28T18:15:40.000Z | 2023-10-28T18:15:40 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
dataset_info:
features:
- name: reference
dtype: string
- name: summary
dtype: string
- name: tokenized_len_total
dtype: int64
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 4297809
num_examples: 233
- name: validation
num_bytes: 246276
num_examples: 14
- name: test
num_bytes: 217013
num_examples: 13
download_size: 2253956
dataset_size: 4761098
---
# Dataset Card for "eurlexsum_ita_cleaned_8192_86"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.3661593198776245,
-0.01290727686136961,
0.11267723143100739,
0.04753853380680084,
-0.3404349684715271,
0.03855083882808685,
0.41058337688446045,
-0.012613177299499512,
0.9685128331184387,
0.8132469654083252,
-0.5933395624160767,
-0.7450689673423767,
-0.3082411289215088,
-0.0173712205141... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
krishan-CSE/HatEval_Relabeled | krishan-CSE | 2023-10-29T11:28:24Z | 62 | 0 | null | [
"task_categories:text-classification",
"size_categories:10K<n<100K",
"language:en",
"license:apache-2.0",
"hate-speech",
"HatEval",
"region:us"
] | 2023-10-29T11:28:24Z | 2023-10-29T06:18:02.000Z | 2023-10-29T06:18:02 | ---
license: apache-2.0
task_categories:
- text-classification
language:
- en
tags:
- hate-speech
- HatEval
size_categories:
- 10K<n<100K
--- | [
-0.12853392958641052,
-0.18616779148578644,
0.6529127955436707,
0.49436280131340027,
-0.19319361448287964,
0.23607419431209564,
0.36072003841400146,
0.050563063472509384,
0.579365611076355,
0.7400140762329102,
-0.6508104205131531,
-0.23783954977989197,
-0.7102249264717102,
-0.0478260256350... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
robsmit/testSet | robsmit | 2023-11-01T18:11:41Z | 62 | 0 | null | [
"region:us"
] | 2023-11-01T18:11:41Z | 2023-10-30T20:44:53.000Z | 2023-10-30T20:44:53 | Entry not found | [
-0.32276472449302673,
-0.22568407654762268,
0.8622258901596069,
0.4346148371696472,
-0.5282984972000122,
0.7012965679168701,
0.7915717363357544,
0.07618629932403564,
0.7746022939682007,
0.2563222646713257,
-0.785281777381897,
-0.22573848068714142,
-0.9104482531547546,
0.5715669393539429,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
ademax/metadata-legal-doc-ser | ademax | 2023-11-06T10:06:46Z | 62 | 0 | null | [
"region:us"
] | 2023-11-06T10:06:46Z | 2023-11-06T10:03:06.000Z | 2023-11-06T10:03:06 | ---
dataset_info:
features:
- name: tokens
sequence: string
- name: labels
sequence: int64
splits:
- name: train
num_bytes: 18870413203
num_examples: 237467
download_size: 1661208233
dataset_size: 18870413203
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "metadata-legal-doc-ser"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.4855324923992157,
-0.060153648257255554,
0.46264761686325073,
0.09716413170099258,
-0.42090386152267456,
-0.13700614869594574,
0.32502281665802,
-0.14428287744522095,
0.714756965637207,
0.8259626626968384,
-0.6751815676689148,
-1.09208345413208,
-0.7524160146713257,
-0.2399778664112091,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
gdurkin/flood_dataset_S2_mod | gdurkin | 2023-11-07T23:17:35Z | 62 | 1 | null | [
"region:us"
] | 2023-11-07T23:17:35Z | 2023-11-07T19:25:00.000Z | 2023-11-07T19:25:00 | ---
dataset_info:
features:
- name: pixel_values
dtype: image
- name: label
dtype: image
splits:
- name: train
num_bytes: 106619865.0
num_examples: 252
download_size: 106596039
dataset_size: 106619865.0
---
# Dataset Card for "flood_dataset_S2_mod"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.5770663619041443,
-0.4063718318939209,
0.039373159408569336,
0.6772493124008179,
-0.40029507875442505,
-0.12155817449092865,
0.3542943000793457,
0.010227891616523266,
0.4435107707977295,
0.6215012073516846,
-0.6303080916404724,
-0.2583054006099701,
-0.7871525883674622,
-0.19464114308357... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
blueysh/scribbl-0-dataset | blueysh | 2023-11-21T16:23:51Z | 62 | 0 | null | [
"region:us"
] | 2023-11-21T16:23:51Z | 2023-11-08T03:56:29.000Z | 2023-11-08T03:56:29 | Entry not found | [
-0.32276472449302673,
-0.22568407654762268,
0.8622258901596069,
0.4346148371696472,
-0.5282984972000122,
0.7012965679168701,
0.7915717363357544,
0.07618629932403564,
0.7746022939682007,
0.2563222646713257,
-0.785281777381897,
-0.22573848068714142,
-0.9104482531547546,
0.5715669393539429,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
multi-train/S2ORC_title_abstract_1107 | multi-train | 2023-11-10T19:00:45Z | 62 | 0 | null | [
"region:us"
] | 2023-11-10T19:00:45Z | 2023-11-10T19:00:29.000Z | 2023-11-10T19:00:29 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: query
dtype: string
- name: pos
sequence: string
- name: neg
sequence: string
- name: task
dtype: string
- name: instruction
struct:
- name: query
dtype: string
- name: pos
dtype: string
- name: neg
dtype: string
splits:
- name: train
num_bytes: 256048459
num_examples: 200000
download_size: 134596257
dataset_size: 256048459
---
# Dataset Card for "S2ORC_title_abstract_1107"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.23501767218112946,
0.05014698579907417,
0.3302181661128998,
0.16909867525100708,
-0.25911611318588257,
0.0281793512403965,
0.37846824526786804,
-0.09475048631429672,
0.7892416715621948,
0.46882152557373047,
-0.471939355134964,
-0.7260758876800537,
-0.8173404932022095,
0.0705444142222404... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
zxvix/amazon_review_automotive_counterfactual | zxvix | 2023-11-16T12:40:48Z | 62 | 0 | null | [
"region:us"
] | 2023-11-16T12:40:48Z | 2023-11-14T07:44:54.000Z | 2023-11-14T07:44:54 | ---
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
dataset_info:
features:
- name: text
dtype: string
- name: original_text
dtype: string
splits:
- name: test
num_bytes: 92878.0
num_examples: 100
download_size: 64406
dataset_size: 92878.0
---
# Dataset Card for "amazon_review_automotive_counterfactual"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.7978090047836304,
-0.412121444940567,
0.2627379298210144,
0.2943655550479889,
-0.30354076623916626,
-0.051794491708278656,
0.5183081030845642,
-0.36524057388305664,
0.5038891434669495,
0.2722087502479553,
-1.102500557899475,
-0.492898166179657,
-0.34744980931282043,
-0.3712194561958313,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
nguyenphuthien/vietnamese_no_robots | nguyenphuthien | 2023-11-21T11:20:39Z | 62 | 0 | null | [
"task_categories:conversational",
"task_categories:text-generation",
"size_categories:1K<n<10K",
"language:vi",
"license:cc-by-4.0",
"arxiv:2203.02155",
"region:us"
] | 2023-11-21T11:20:39Z | 2023-11-16T10:07:48.000Z | 2023-11-16T10:07:48 | ---
configs:
- config_name: default
data_files:
- split: train
path: train_*
- split: test
path: test_*
license: cc-by-4.0
task_categories:
- conversational
- text-generation
language:
- vi
size_categories:
- 1K<n<10K
pretty_name: Vietnamese No Robot
---
# Vietnamese-translated version of [HuggingFaceH4/no_robots](https://huggingface.co/datasets/HuggingFaceH4/no_robots) dataset
# Dataset Card for No Robots
_Look Ma, an instruction dataset that wasn't generated by GPTs!_
## Dataset Description
- **Repository:** https://github.com/huggingface/alignment-handbook
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** Lewis Tunstall
### Dataset Summary
No Robots is a high-quality dataset of 10,000 instructions and demonstrations created by skilled human annotators. This data can be used for supervised fine-tuning (SFT) to make language models follow instructions better. No Robots was modelled after the instruction dataset described in OpenAI's [InstructGPT paper](https://huggingface.co/papers/2203.02155), and is comprised mostly of single-turn instructions across the following categories:
| Category | Count |
|:-----------|--------:|
| Generation | 4560 |
| Open QA | 1240 |
| Brainstorm | 1120 |
| Chat | 850 |
| Rewrite | 660 |
| Summarize | 420 |
| Coding | 350 |
| Classify | 350 |
| Closed QA | 260 |
| Extract | 190 |
### Supported Tasks and Leaderboards
The No Robots dataset is designed for instruction fine-tuning of pretrained language models, and we recommend benchmarking against the following:
* [MT-Bench](https://huggingface.co/spaces/lmsys/mt-bench): a multi-turn benchmark spanning 80 dialogues and 10 domains.
* [AlpacaEval](https://github.com/tatsu-lab/alpaca_eval): a single-turn benchmark which evaluates the performance of chat and instruct models against `text-davinci-003`.
Note that MT-Bench and AlpacaEval rely on LLMs like GPT-4 to judge the quality of the model responses, and thus the rankings exhibit various biases, including a preference for models distilled from GPTs. As a result, you may find that scores obtained from models trained with No Robots are lower than those from other synthetic datasets. For that reason, we also recommend submitting your models for human evaluation in:
* [Chatbot Arena](https://chat.lmsys.org): a live, human evaluation of chat models in head-to-head comparisons.
### Languages
The data in No Robots are in English (BCP-47 en).
## Dataset Structure
### Data Instances
An example of the `train_sft` or `test_sft` splits looks as follows:
```
{'prompt': 'Bunny is a chatbot that stutters, and acts timid and unsure of its answers.',
'prompt_id': '2dc7ea89a2b6a2ed97d4eda07903162a801824261d3d3ae4dd2513db66fd79c8',
'messages': [{'content': 'Bunny is a chatbot that stutters, and acts timid and unsure of its answers.',
'role': 'system'},
{'content': 'When was the Libary of Alexandria burned down?',
'role': 'user'},
{'content': "Umm, I-I think that was in 48 BC, b-but I'm not sure, I'm sorry.",
'role': 'assistant'},
{'content': 'Who is the founder of Coca-Cola?', 'role': 'user'},
{'content': "D-don't quote me on this, but I- it might be John Pemberton.",
'role': 'assistant'},
{'content': "When did Loyle Carner's debut album come out, and what was its name?",
'role': 'user'},
{'content': "I-It could have b-been on the 20th January of 2017, and it might be called Yesterday's Gone, b-but I'm probably wrong.",
'role': 'assistant'}],
'category': 'Chat'}
```
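As a hedged illustration of how the `messages` field above can be consumed for SFT, the sketch below flattens a conversation into role-tagged text. The `<|role|>` template is a placeholder of our own, not the chat format actually used by the alignment handbook:

```python
def flatten_messages(messages):
    """Join a list of {'role', 'content'} dicts into one training string."""
    return "\n".join(f"<|{m['role']}|>\n{m['content']}" for m in messages)

# Abbreviated version of the example instance shown above.
example = [
    {"role": "system", "content": "Bunny is a chatbot that stutters."},
    {"role": "user", "content": "When was the Library of Alexandria burned down?"},
    {"role": "assistant", "content": "Umm, I-I think that was in 48 BC."},
]
print(flatten_messages(example))
```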
### Data Fields
The data fields are as follows:
* `prompt`: Describes the task the model should perform.
* `prompt_id`: A unique ID for the prompt.
* `messages`: An array of messages, where each message indicates the role (system, user, assistant) and the content.
* `category`: Which category the example belongs to (e.g. `Chat` or `Coding`).
### Data Splits
| | train | test |
|---------------|------:| ---: |
| no_robots | 9500 | 500 |
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
The dataset is available under the [Creative Commons Attribution-NonCommercial 4.0 International (CC BY-NC 4.0)](https://creativecommons.org/licenses/by-nc/4.0/legalcode) license.
### Citation Information
```
@misc{no_robots,
author = {Nazneen Rajani and Lewis Tunstall and Edward Beeching and Nathan Lambert and Alexander M. Rush and Thomas Wolf},
title = {No Robots},
year = {2023},
publisher = {Hugging Face},
journal = {Hugging Face repository},
howpublished = {\url{https://huggingface.co/datasets/HuggingFaceH4/no_robots}}
}
``` | [
-0.31366997957229614,
-0.9352095127105713,
0.2838759124279022,
0.12894690036773682,
0.1337326467037201,
0.03767488896846771,
-0.16403080523014069,
-0.2796320915222168,
0.39796561002731323,
0.6675793528556824,
-0.8679954409599304,
-0.8046430945396423,
-0.4183257520198822,
0.1257873475551605... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Bhandari007/male_female_data | Bhandari007 | 2023-11-22T04:36:07Z | 62 | 0 | null | [
"license:unknown",
"region:us"
] | 2023-11-22T04:36:07Z | 2023-11-21T04:22:26.000Z | 2023-11-21T04:22:26 | ---
license: unknown
dataset_info:
features:
- name: path
dtype: string
- name: sentence
dtype: string
splits:
- name: train
num_bytes: 117079
num_examples: 509
- name: test
num_bytes: 40527
num_examples: 166
download_size: 74382
dataset_size: 157606
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
| [
-0.1285335123538971,
-0.1861683875322342,
0.6529128551483154,
0.49436232447624207,
-0.19319400191307068,
0.23607441782951355,
0.36072009801864624,
0.05056373029947281,
0.5793656706809998,
0.7400146722793579,
-0.650810182094574,
-0.23784008622169495,
-0.7102247476577759,
-0.0478255338966846... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
deokhk/ko_wiki_sentences_100000 | deokhk | 2023-11-21T07:37:12Z | 62 | 0 | null | [
"region:us"
] | 2023-11-21T07:37:12Z | 2023-11-21T07:37:05.000Z | 2023-11-21T07:37:05 | ---
dataset_info:
features:
- name: sentence
dtype: string
splits:
- name: train
num_bytes: 17061018
num_examples: 100000
- name: dev
num_bytes: 174799
num_examples: 1000
download_size: 10348119
dataset_size: 17235817
---
# Dataset Card for "ko_wiki_sentences_100000"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.5533984303474426,
-0.3657245635986328,
0.36939942836761475,
0.37483030557632446,
-0.1587989777326584,
-0.35088562965393066,
0.04241786152124405,
-0.029713762924075127,
0.6680421829223633,
0.6491112112998962,
-0.7740342020988464,
-0.6671218872070312,
-0.45786023139953613,
0.1501353532075... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
result-kand2-sdxl-wuerst-karlo/859be608 | result-kand2-sdxl-wuerst-karlo | 2023-11-22T06:21:46Z | 62 | 0 | null | [
"region:us"
] | 2023-11-22T06:21:46Z | 2023-11-22T06:21:45.000Z | 2023-11-22T06:21:45 | ---
dataset_info:
features:
- name: result
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 158
num_examples: 10
download_size: 1322
dataset_size: 158
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "859be608"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.6464443802833557,
-0.16303080320358276,
0.24491967260837555,
0.22511209547519684,
-0.18027421832084656,
-0.19733034074306488,
0.2769751250743866,
-0.28388336300849915,
0.8617567420005798,
0.5834400057792664,
-0.7417699694633484,
-0.7048012614250183,
-0.5578194856643677,
-0.0579752549529... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
projecte-aina/ancora-ca-ner | projecte-aina | 2023-09-13T12:44:29Z | 61 | 0 | null | [
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:unknown",
"language:ca",
"license:cc-by-4.0",
"arxiv:2107.07903",
"region:us"
] | 2023-09-13T12:44:29Z | 2022-03-02T23:29:22.000Z | 2022-03-02T23:29:22 | ---
annotations_creators:
- expert-generated
language_creators:
- found
language:
- ca
license:
- cc-by-4.0
multilinguality:
- monolingual
pretty_name: ancora-ca-ner
size_categories:
- unknown
source_datasets: []
task_categories: []
task_ids: []
---
# Dataset Card for AnCora-Ca-NER
## Dataset Description
- **Website:** https://zenodo.org/record/5036651
- **Paper:** [Are Multilingual Models the Best Choice for Moderately Under-resourced Languages? A Comprehensive Assessment for Catalan](https://arxiv.org/abs/2107.07903)
- **Paper:** [AnCora: Multilevel Annotated Corpora for Catalan and Spanish](http://www.lrec-conf.org/proceedings/lrec2008/pdf/35_paper.pdf)
- **Point of Contact:** [Carlos Rodríguez-Penagos](carlos.rodriguez1@bsc.es) and [Carme Armentano-Oller](carme.armentano@bsc.es)
### Dataset Summary
This is a dataset for Named Entity Recognition (NER) in Catalan. It adapts <a href="http://clic.ub.edu/corpus/">AnCora corpus</a> for Machine Learning and Language Model evaluation purposes.
[AnCora corpus](http://clic.ub.edu/corpus/) is used under [CC-by](https://creativecommons.org/licenses/by/4.0/) licence.
This dataset was developed by [BSC TeMU](https://temu.bsc.es/) as part of the [Projecte AINA](https://politiquesdigitals.gencat.cat/ca/economia/catalonia-ai/aina/), to enrich the [Catalan Language Understanding Benchmark (CLUB)](https://club.aina.bsc.es/).
### Supported Tasks and Leaderboards
Named Entities Recognition, Language Model
### Languages
The dataset is in Catalan (`ca-ES`).
## Dataset Structure
### Data Instances
Three two-column files, one for each split.
<pre>
Fundació B-ORG
Privada I-ORG
Fira I-ORG
de I-ORG
Manresa I-ORG
ha O
fet O
un O
balanç O
de O
l' O
activitat O
del O
Palau B-LOC
Firal I-LOC
</pre>
### Data Fields
Every file has two columns, with the word form or punctuation symbol in the first one and the corresponding IOB tag in the second one.
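The two-column layout can be read with a short helper. This is a minimal sketch that assumes sentences are separated by blank lines (a common CoNLL-style convention; the actual files may delimit sentences differently):

```python
def read_iob(lines):
    """Parse two-column IOB lines into a list of sentences of (token, tag) pairs."""
    sentences, current = [], []
    for line in lines:
        line = line.rstrip("\n")
        if not line.strip():  # a blank line ends the current sentence
            if current:
                sentences.append(current)
                current = []
            continue
        token, tag = line.rsplit(None, 1)  # split on the last whitespace run
        current.append((token, tag))
    if current:
        sentences.append(current)
    return sentences

sample = ["Fundació B-ORG", "Privada I-ORG", "", "Palau B-LOC", "Firal I-LOC"]
print(read_iob(sample))
```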
### Data Splits
We took the original train, dev and test splits from the [UD version of the corpus](https://huggingface.co/datasets/universal_dependencies)
- train: 10,630 examples
- validation: 1,429 examples
- test: 1,528 examples
## Dataset Creation
### Curation Rationale
We created this corpus to contribute to the development of language models in Catalan, a low-resource language.
### Source Data
#### Initial Data Collection and Normalization
[AnCora](http://clic.ub.edu/corpus/) consists of a Catalan corpus (AnCora-CA) and a Spanish corpus (AnCora-ES), each of them of 500,000 tokens (some multi-word). The corpora are annotated for linguistic phenomena at different levels.
AnCora corpus is mainly based on newswire texts. For more information, refer to Taulé, M., M.A. Martí, M. Recasens (2009): <a href="http://www.lrec-conf.org/proceedings/lrec2008/pdf/35_paper.pdf">"AnCora: Multilevel Annotated Corpora for Catalan and Spanish"</a>, Proceedings of the 6th International Conference on Language Resources and Evaluation.
#### Who are the source language producers?
Catalan [AnCora corpus](http://clic.ub.edu/corpus/) is compiled from articles from the following news outlets: <a href="https://www.efe.com">EFE</a>, <a href="https://www.acn.cat">ACN</a>, <a href="https://www.elperiodico.cat/ca/">El Periodico</a>.
### Annotations
#### Annotation process
We adapted the NER labels from [AnCora corpus](http://clic.ub.edu/corpus/) to a token-per-line, multi-column format.
#### Who are the annotators?
Original annotators from [AnCora corpus](http://clic.ub.edu/corpus/).
### Personal and Sensitive Information
No personal or sensitive information included.
## Considerations for Using the Data
### Social Impact of Dataset
We hope this corpus contributes to the development of language models in Catalan, a low-resource language.
### Discussion of Biases
[N/A]
### Other Known Limitations
[N/A]
## Additional Information
### Dataset Curators
Text Mining Unit (TeMU) at the Barcelona Supercomputing Center (bsc-temu@bsc.es)
This work was funded by the [Departament de la Vicepresidència i de Polítiques Digitals i Territori de la Generalitat de Catalunya](https://politiquesdigitals.gencat.cat/en/inici/index.html) within the framework of [Projecte AINA](https://politiquesdigitals.gencat.cat/ca/economia/catalonia-ai/aina/).
### Licensing information
This work is licensed under a <a rel="license" href="https://creativecommons.org/licenses/by/4.0/">Attribution 4.0 International License</a>.
### Citation Information
```
@inproceedings{armengol-estape-etal-2021-multilingual,
title = "Are Multilingual Models the Best Choice for Moderately Under-resourced Languages? {A} Comprehensive Assessment for {C}atalan",
author = "Armengol-Estap{\'e}, Jordi and
Carrino, Casimiro Pio and
Rodriguez-Penagos, Carlos and
de Gibert Bonet, Ona and
Armentano-Oller, Carme and
Gonzalez-Agirre, Aitor and
Melero, Maite and
Villegas, Marta",
booktitle = "Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021",
month = aug,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.findings-acl.437",
doi = "10.18653/v1/2021.findings-acl.437",
pages = "4933--4946",
}
```
[DOI](https://doi.org/10.5281/zenodo.4529299)
### Contributions
[N/A] | [
-0.3995116353034973,
-0.5103259086608887,
0.07842930406332016,
0.56299889087677,
-0.11764100939035416,
0.34050318598747253,
-0.35338956117630005,
-0.5564046502113342,
0.3773038387298584,
0.3414774537086487,
-0.2687164843082428,
-0.8318310976028442,
-0.47340503334999084,
0.3230486810207367,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
UrukHan/t5-russian-spell_I | UrukHan | 2022-03-27T12:53:21Z | 61 | 0 | null | [
"region:us"
] | 2022-03-27T12:53:21Z | 2022-03-27T12:51:48.000Z | 2022-03-27T12:51:48 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622263669967651,
0.43461522459983826,
-0.52829909324646,
0.7012971639633179,
0.7915719747543335,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104475975036621,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
hackathon-pln-es/Dataset-Acoso-Twitter-Es | hackathon-pln-es | 2022-03-31T00:03:51Z | 61 | 2 | null | [
"license:gpl-3.0",
"region:us"
] | 2022-03-31T00:03:51Z | 2022-03-29T05:46:25.000Z | 2022-03-29T05:46:25 | ---
license: gpl-3.0
language:
- es
---
# UNL: Universidad Nacional de Loja
### Miembros del equipo:
- Anderson Quizhpe <br>
- Luis Negrón <br>
- David Pacheco <br>
- Bryan Requenes <br>
- Paul Pasaca
<br><br>
| [
-0.3984764814376831,
-0.4474429190158844,
0.8967442512512207,
0.004793327301740646,
-0.2079036980867386,
0.5047580003738403,
0.1411261111497879,
-0.30279600620269775,
1.0651572942733765,
0.0521465539932251,
-0.7801092267036438,
-0.5485130548477173,
-0.7513200044631958,
0.5758858919143677,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
bigbio/biosses | bigbio | 2022-12-22T15:32:58Z | 61 | 1 | null | [
"multilinguality:monolingual",
"language:en",
"license:gpl-3.0",
"region:us"
] | 2022-12-22T15:32:58Z | 2022-09-06T01:12:20.000Z | 2022-09-06T01:12:20 | ---
language:
- en
bigbio_language:
- English
license: gpl-3.0
multilinguality: monolingual
bigbio_license_shortname: GPL_3p0
pretty_name: BIOSSES
homepage: https://tabilab.cmpe.boun.edu.tr/BIOSSES/DataSet.html
bigbio_pubmed: false
bigbio_public: true
bigbio_tasks:
- SEMANTIC_SIMILARITY
---
# Dataset Card for BIOSSES
## Dataset Description
- **Homepage:** https://tabilab.cmpe.boun.edu.tr/BIOSSES/DataSet.html
- **Pubmed:** True
- **Public:** True
- **Tasks:** STS
BIOSSES computes similarity of biomedical sentences by utilizing WordNet as the general domain ontology and UMLS as the biomedical domain specific ontology. The original paper outlines the approaches with respect to using annotator score as golden standard. Source view will return all annotator score individually whereas the Bigbio view will return the mean of the annotator score.
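That aggregation can be illustrated with a minimal sketch (the field name below is illustrative, not the exact BigBio schema):

```python
def bigbio_score(annotator_scores):
    """Collapse per-annotator similarity ratings (BIOSSES uses a 0-4
    scale) into the single value exposed by the BigBio view: their mean."""
    return sum(annotator_scores) / len(annotator_scores)

# The source view keeps every annotator rating individually...
source_row = {"annotator_scores": [4, 3, 4, 3, 4]}
# ...while the BigBio view reduces them to one number.
print(bigbio_score(source_row["annotator_scores"]))  # 3.6
```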
## Citation Information
```
@article{souganciouglu2017biosses,
title={BIOSSES: a semantic sentence similarity estimation system for the biomedical domain},
  author={Soğancıoğlu, Gizem, Hakime Öztürk, and Arzucan Özgür},
journal={Bioinformatics},
volume={33},
number={14},
pages={i49--i58},
year={2017},
publisher={Oxford University Press}
}
```
| [
0.0016702698776498437,
-0.5564848780632019,
0.5859915614128113,
-0.13125209510326385,
-0.4772603511810303,
-0.11530362069606781,
-0.003990755882114172,
-0.353886216878891,
0.3802887499332428,
0.6039158701896667,
-0.4782217741012573,
-1.031719446182251,
-0.5266427397727966,
0.52284646034240... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
bigbio/an_em | bigbio | 2022-12-22T15:43:14Z | 61 | 1 | null | [
"multilinguality:monolingual",
"language:en",
"license:cc-by-sa-3.0",
"region:us"
] | 2022-12-22T15:43:14Z | 2022-11-13T18:05:07.000Z | 2022-11-13T18:05:07 |
---
language:
- en
bigbio_language:
- English
license: cc-by-sa-3.0
multilinguality: monolingual
bigbio_license_shortname: CC_BY_SA_3p0
pretty_name: AnEM
homepage: http://www.nactem.ac.uk/anatomy/
bigbio_pubmed: True
bigbio_public: True
bigbio_tasks:
- NAMED_ENTITY_RECOGNITION
- COREFERENCE_RESOLUTION
- RELATION_EXTRACTION
---
# Dataset Card for AnEM
## Dataset Description
- **Homepage:** http://www.nactem.ac.uk/anatomy/
- **Pubmed:** True
- **Public:** True
- **Tasks:** NER,COREF,RE
AnEM corpus is a domain- and species-independent resource manually annotated for anatomical
entity mentions using a fine-grained classification system. The corpus consists of 500 documents
(over 90,000 words) selected randomly from citation abstracts and full-text papers with
the aim of making the corpus representative of the entire available biomedical scientific
literature. The corpus annotation covers mentions of both healthy and pathological anatomical
entities and contains over 3,000 annotated mentions.
## Citation Information
```
@inproceedings{ohta-etal-2012-open,
author = {Ohta, Tomoko and Pyysalo, Sampo and Tsujii, Jun{'}ichi and Ananiadou, Sophia},
title = {Open-domain Anatomical Entity Mention Detection},
journal = {},
volume = {W12-43},
year = {2012},
url = {https://aclanthology.org/W12-4304},
doi = {},
biburl = {},
bibsource = {},
publisher = {Association for Computational Linguistics}
}
```
| [
-0.3592069447040558,
-0.613838255405426,
0.32582277059555054,
-0.04420667514204979,
-0.5104743838310242,
-0.22964410483837128,
-0.06567034870386124,
-0.6103371977806091,
0.7633421421051025,
0.5207457542419434,
-0.30092379450798035,
-0.9372466206550598,
-0.4643602669239044,
0.64811480045318... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
r-three/fib | r-three | 2022-11-19T15:57:58Z | 61 | 5 | null | [
"region:us"
] | 2022-11-19T15:57:58Z | 2022-11-19T15:22:00.000Z | 2022-11-19T15:22:00 |
# Dataset Card for FIB
## Dataset Summary
The FIB benchmark consists of 3579 examples for evaluating the factual inconsistency of large language models. Each example consists of a document and a pair of summaries: a factually consistent one and a factually inconsistent one. It is based on documents and summaries from XSum and CNN/DM.
Since this dataset is intended to evaluate the factual inconsistency of large language models, there is only a test split.
Accuracies should be reported separately for examples from XSum and for examples from CNN/DM. This is because the behavior of models on XSum and CNN/DM are expected to be very different. The factually inconsistent summaries are model-extracted from the document for CNN/DM but are model-generated for XSum.
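As a rough sketch, accuracy on FIB is the fraction of examples where the model assigns a higher score to the factually consistent summary. The field names and the toy scorer below are illustrative stand-ins, not the released evaluation code (the paper scores summaries with a language model, e.g. via length-normalized log-likelihood):

```python
def fib_accuracy(examples, score_fn):
    """Fraction of examples where `score_fn` prefers the factually
    consistent summary over the inconsistent one."""
    correct = sum(
        score_fn(ex["document"], ex["consistent"])
        > score_fn(ex["document"], ex["inconsistent"])
        for ex in examples
    )
    return correct / len(examples)

def overlap_score(document, summary):
    """Toy stand-in scorer: word overlap with the document. A real
    evaluation would use a language model's likelihood instead."""
    doc_words = set(document.lower().split())
    return sum(word in doc_words for word in summary.lower().split())

examples = [
    {
        "document": "The cat sat on the mat in the sun.",
        "consistent": "The cat sat on the mat.",
        "inconsistent": "The dog ran through the park.",
    }
]
print(fib_accuracy(examples, overlap_score))  # 1.0
```

Remember to compute this separately over the XSum-derived and CNN/DM-derived examples, as the card specifies.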
### Citation Information
```
@article{tam2022fib,
title={Evaluating the Factual Consistency of Large Language Models Through Summarization},
author={Tam, Derek and Mascarenhas, Anisha and Zhang, Shiyue and Kwan, Sarah and Bansal, Mohit and Raffel, Colin},
journal={arXiv preprint arXiv:2211.08412},
year={2022}
}
```
### Licensing Information
license: cc-by-4.0 | [
-0.3103753626346588,
-0.950391948223114,
0.17495191097259521,
0.22590787708759308,
-0.09315710514783859,
-0.08475082367658615,
-0.2739766836166382,
-0.3842964172363281,
0.04346638172864914,
0.4161704480648041,
-0.3031388521194458,
-0.47057682275772095,
-0.5879727602005005,
0.22868058085441... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
memray/inspec | memray | 2022-12-31T06:12:06Z | 61 | 0 | null | [
"license:cc-by-nc-sa-4.0",
"region:us"
] | 2022-12-31T06:12:06Z | 2022-12-31T06:11:50.000Z | 2022-12-31T06:11:50 | ---
license: cc-by-nc-sa-4.0
---
| [
-0.1285335123538971,
-0.1861683875322342,
0.6529128551483154,
0.49436232447624207,
-0.19319400191307068,
0.23607441782951355,
0.36072009801864624,
0.05056373029947281,
0.5793656706809998,
0.7400146722793579,
-0.650810182094574,
-0.23784008622169495,
-0.7102247476577759,
-0.0478255338966846... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
tobiolatunji/afrispeech-200 | tobiolatunji | 2023-11-20T09:20:34Z | 61 | 9 | null | [
"task_categories:automatic-speech-recognition",
"annotations_creators:expert-generated",
"language_creators:crowdsourced",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"license:cc-by-nc-sa-4.0",
"regio... | 2023-11-20T09:20:34Z | 2023-01-30T22:34:30.000Z | 2023-01-30T22:34:30 | ---
pretty_name: AfriSpeech-200
annotations_creators:
- expert-generated
language_creators:
- crowdsourced
- expert-generated
language:
- en
license:
- cc-by-nc-sa-4.0
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- automatic-speech-recognition
task_ids: []
dataset_info:
features:
- name: user_id
dtype: string
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 44100
- name: transcript
dtype: string
splits:
- name: train
num_bytes: 1722002133
num_examples: 58000
- name: dev
num_bytes: 86120227
num_examples: 3231
download_size: 1475540500
dataset_size: 1808122360
extra_gated_prompt: By clicking on โAccess repositoryโ below, you also agree to not attempt to determine the
  identity of speakers in the AfriSpeech dataset.
---
# Dataset Card for AfriSpeech-200
## Table of Contents
- [Dataset Card for AfriSpeech-200](#dataset-card-for-afrispeech-200)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [How to use](#how-to-use)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
- [Who are the source language producers?](#who-are-the-source-language-producers)
- [Annotations](#annotations)
- [Annotation process](#annotation-process)
- [Who are the annotators?](#who-are-the-annotators)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://github.com/intron-innovation/AfriSpeech-Dataset-Paper
- **Repository:** https://github.com/intron-innovation/AfriSpeech-Dataset-Paper
- **Paper:** [AfriSpeech-200: Pan-African accented speech dataset for clinical and general domain ASR](https://github.com/intron-innovation/AfriSpeech-Dataset-Paper)
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [Intron Innovation](mailto:intron@intron.io)
### Dataset Summary
AFRISPEECH-200 is a 200hr Pan-African speech corpus for clinical and general domain English accented ASR; a dataset with 120 African accents from 13 countries and 2,463 unique African speakers.
Our goal is to raise awareness for and advance Pan-African English ASR research, especially for the clinical domain.
## How to use
The `datasets` library allows you to load and pre-process your dataset in pure Python, at scale. The dataset can be downloaded and prepared in one call to your local drive by using the `load_dataset` function.
```python
from datasets import load_dataset
afrispeech = load_dataset("tobiolatunji/afrispeech-200", "all")
```
The entire dataset is ~120GB and may take about 2hrs to download depending on internet speed/bandwidth. If you have disk space or bandwidth limitations, you can use `streaming` mode described below to work with smaller subsets of the data.
Alternatively, you can pass a config to the `load_dataset` function to download only the subset of the data corresponding to a specific accent of interest. The example provided below is `isizulu`.
For example, to download the isizulu config, simply specify the corresponding accent config name. The list of supported accents is provided in the `accent list` section below:
```python
from datasets import load_dataset
afrispeech = load_dataset("tobiolatunji/afrispeech-200", "isizulu", split="train")
```
Using the datasets library, you can also stream the dataset on-the-fly by adding a `streaming=True` argument to the `load_dataset` function call. Loading a dataset in streaming mode loads individual samples of the dataset at a time, rather than downloading the entire dataset to disk.
```python
from datasets import load_dataset
afrispeech = load_dataset("tobiolatunji/afrispeech-200", "isizulu", split="train", streaming=True)
print(next(iter(afrispeech)))
print(list(afrispeech.take(5)))
```
### Local
```python
from datasets import load_dataset
from torch.utils.data import DataLoader
from torch.utils.data.sampler import BatchSampler, RandomSampler

afrispeech = load_dataset("tobiolatunji/afrispeech-200", "isizulu", split="train")
batch_sampler = BatchSampler(RandomSampler(afrispeech), batch_size=32, drop_last=False)
dataloader = DataLoader(afrispeech, batch_sampler=batch_sampler)
```
### Streaming
```python
from datasets import load_dataset
from torch.utils.data import DataLoader
afrispeech = load_dataset("tobiolatunji/afrispeech-200", "isizulu", split="train", streaming=True)
dataloader = DataLoader(afrispeech, batch_size=32)
```
### Caveats
Note that till the end of the ongoing [AfriSpeech ASR Challenge event](https://zindi.africa/competitions/intron-afrispeech-200-automatic-speech-recognition-challenge) (Feb - May 2023), the transcripts in the validation set are hidden and the test set will be unreleased till May 19, 2023.
### Fine-tuning Colab tutorial
To walk through a complete colab tutorial that finetunes a wav2vec2 model on the afrispeech-200 dataset with `transformers`, take a look at this colab notebook [afrispeech/wav2vec2-colab-tutorial](https://colab.research.google.com/drive/1uZYew6pcgN6UE6sFDLohxD_HKivvDXzD?usp=sharing).
### Supported Tasks and Leaderboards
- Automatic Speech Recognition
- Speech Synthesis (Text-to-Speech)
### Languages
English (Accented)
## Dataset Structure
### Data Instances
A typical data point comprises the path to the audio file, called `path` and its transcription, called `transcript`. Some additional information about the speaker is provided.
```
{
'speaker_id': 'b545a4ca235a7b72688a1c0b3eb6bde6',
'path': 'aad9bd69-7ca0-4db1-b650-1eeea17a0153/5dcb6ee086e392376cd3b7131a250397.wav',
'audio_id': 'aad9bd69-7ca0-4db1-b650-1eeea17a0153/5dcb6ee086e392376cd3b7131a250397',
'audio': {
'path': 'aad9bd69-7ca0-4db1-b650-1eeea17a0153/5dcb6ee086e392376cd3b7131a250397.wav',
'array': array([0.00018311, 0.00061035, 0.00012207, ..., 0.00192261, 0.00195312, 0.00216675]),
'sampling_rate': 44100},
'transcript': 'His mother is in her 50 s and has hypertension .',
'age_group': '26-40',
'gender': 'Male',
'accent': 'yoruba',
'domain': 'clinical',
'country': 'US',
'duration': 3.241995464852608
}
```
### Data Fields
- speaker_id: An id for which speaker (voice) made the recording
- path: The path to the audio file
- audio: A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column: `dataset[0]["audio"]` the audio file is automatically decoded and resampled to `dataset.features["audio"].sampling_rate`. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the `"audio"` column, *i.e.* `dataset[0]["audio"]` should **always** be preferred over `dataset["audio"][0]`.
- transcript: The sentence the user was prompted to speak
### Data Splits
The speech material has been subdivided into portions for train, dev, and test.
Speech was recorded in a quiet environment with high quality microphone, speakers were asked to read one sentence at a time.
- Total Number of Unique Speakers: 2,463
- Female/Male/Other Ratio: 57.11/42.41/0.48
- Data was first split on speakers. Speakers in Train/Dev/Test do not cross partitions
| | Train | Dev | Test |
| ----------- | ----------- | ----------- | ----------- |
| # Speakers | 1466 | 247 | 750 |
| # Seconds | 624228.83 | 31447.09 | 67559.10 |
| # Hours | 173.4 | 8.74 | 18.77 |
| # Accents | 71 | 45 | 108 |
| Avg secs/speaker | 425.81 | 127.32 | 90.08 |
| Avg num clips/speaker | 39.56 | 13.08 | 8.46 |
| Avg num speakers/accent | 20.65 | 5.49 | 6.94 |
| Avg secs/accent | 8791.96 | 698.82 | 625.55 |
| # clips general domain | 21682 | 1407 | 2723 |
| # clips clinical domain | 36318 | 1824 | 3623 |
## Dataset Creation
### Curation Rationale
Africa has a very low doctor-to-patient ratio.
At very busy clinics, doctors could see 30+ patients per day-- a heavy patient burden compared with
developed countries-- but productivity tools such as clinical automatic speech recognition
(ASR) are lacking for these overworked clinicians. However, clinical ASR is mature, even ubiquitous,
in developed nations, and clinician-reported performance of commercial clinical ASR systems
is generally satisfactory. Furthermore, the recent performance of general domain ASR is
approaching human accuracy. However, several gaps exist. Several publications have
highlighted racial bias in speech-to-text algorithms, and performance on minority
accents lags significantly. To our knowledge, there is no publicly available research or
benchmark on accented African clinical ASR, and speech data is non-existent for the
majority of African accents. We release AfriSpeech, 200hrs of Pan-African speech,
67,577 clips from 2,463 unique speakers, across 120 indigenous accents from 13 countries for
clinical and general domain ASR, a benchmark test set, with publicly available pre-trained
models with SOTA performance on the AfriSpeech benchmark.
### Source Data
#### Country Stats
| Country | Clips | Speakers | Duration (seconds) | Duration (hrs) |
| ----------- | ----------- | ----------- | ----------- | ----------- |
| NG | 45875 | 1979 | 512646.88 | 142.40 |
| KE | 8304 | 137 | 75195.43 | 20.89 |
| ZA | 7870 | 223 | 81688.11 | 22.69 |
| GH | 2018 | 37 | 18581.13 | 5.16 |
| BW | 1391 | 38 | 14249.01 | 3.96 |
| UG | 1092 | 26 | 10420.42 | 2.89 |
| RW | 469 | 9 | 5300.99 | 1.47 |
| US | 219 | 5 | 1900.98 | 0.53 |
| TR | 66 | 1 | 664.01 | 0.18 |
| ZW | 63 | 3 | 635.11 | 0.18 |
| MW | 60 | 1 | 554.61 | 0.15 |
| TZ | 51 | 2 | 645.51 | 0.18 |
| LS | 7 | 1 | 78.40 | 0.02 |
#### Accent Stats
| Accent | Clips | Speakers | Duration (s) | Country | Splits |
| ----------- | ----------- | ----------- | ----------- | ----------- | ----------- |
| yoruba | 15407 | 683 | 161587.55 | US,NG | train,test,dev |
| igbo | 8677 | 374 | 93035.79 | US,NG,ZA | train,test,dev |
| swahili | 6320 | 119 | 55932.82 | KE,TZ,ZA,UG | train,test,dev |
| hausa | 5765 | 248 | 70878.67 | NG | train,test,dev |
| ijaw | 2499 | 105 | 33178.9 | NG | train,test,dev |
| afrikaans | 2048 | 33 | 20586.49 | ZA | train,test,dev |
| idoma | 1877 | 72 | 20463.6 | NG | train,test,dev |
| zulu | 1794 | 52 | 18216.97 | ZA,TR,LS | dev,train,test |
| setswana | 1588 | 39 | 16553.22 | BW,ZA | dev,test,train |
| twi | 1566 | 22 | 14340.12 | GH | test,train,dev |
| isizulu | 1048 | 48 | 10376.09 | ZA | test,train,dev |
| igala | 919 | 31 | 9854.72 | NG | train,test |
| izon | 838 | 47 | 9602.53 | NG | train,dev,test |
| kiswahili | 827 | 6 | 8988.26 | KE | train,test |
| ebira | 757 | 42 | 7752.94 | NG | train,test,dev |
| luganda | 722 | 22 | 6768.19 | UG,BW,KE | test,dev,train |
| urhobo | 646 | 32 | 6685.12 | NG | train,dev,test |
| nembe | 578 | 16 | 6644.72 | NG | train,test,dev |
| ibibio | 570 | 39 | 6489.29 | NG | train,test,dev |
| pidgin | 514 | 20 | 5871.57 | NG | test,train,dev |
| luhya | 508 | 4 | 4497.02 | KE | train,test |
| kinyarwanda | 469 | 9 | 5300.99 | RW | train,test,dev |
| xhosa | 392 | 12 | 4604.84 | ZA | train,dev,test |
| tswana | 387 | 18 | 4148.58 | ZA,BW | train,test,dev |
| esan | 380 | 13 | 4162.63 | NG | train,test,dev |
| alago | 363 | 8 | 3902.09 | NG | train,test |
| tshivenda | 353 | 5 | 3264.77 | ZA | test,train |
| fulani | 312 | 18 | 5084.32 | NG | test,train |
| isoko | 298 | 16 | 4236.88 | NG | train,test,dev |
| akan (fante) | 295 | 9 | 2848.54 | GH | train,dev,test |
| ikwere | 293 | 14 | 3480.43 | NG | test,train,dev |
| sepedi | 275 | 10 | 2751.68 | ZA | dev,test,train |
| efik | 269 | 11 | 2559.32 | NG | test,train,dev |
| edo | 237 | 12 | 1842.32 | NG | train,test,dev |
| luo | 234 | 4 | 2052.25 | UG,KE | test,train,dev |
| kikuyu | 229 | 4 | 1949.62 | KE | train,test,dev |
| bekwarra | 218 | 3 | 2000.46 | NG | train,test |
| isixhosa | 210 | 9 | 2100.28 | ZA | train,dev,test |
| hausa/fulani | 202 | 3 | 2213.53 | NG | test,train |
| epie | 202 | 6 | 2320.21 | NG | train,test |
| isindebele | 198 | 2 | 1759.49 | ZA | train,test |
| venda and xitsonga | 188 | 2 | 2603.75 | ZA | train,test |
| sotho | 182 | 4 | 2082.21 | ZA | dev,test,train |
| akan | 157 | 6 | 1392.47 | GH | test,train |
| nupe | 156 | 9 | 1608.24 | NG | dev,train,test |
| anaang | 153 | 8 | 1532.56 | NG | test,dev |
| english | 151 | 11 | 2445.98 | NG | dev,test |
| afemai | 142 | 2 | 1877.04 | NG | train,test |
| shona | 138 | 8 | 1419.98 | ZA,ZW | test,train,dev |
| eggon | 137 | 5 | 1833.77 | NG | test |
| luganda and kiswahili | 134 | 1 | 1356.93 | UG | train |
| ukwuani | 133 | 7 | 1269.02 | NG | test |
| sesotho | 132 | 10 | 1397.16 | ZA | train,dev,test |
| benin | 124 | 4 | 1457.48 | NG | train,test |
| kagoma | 123 | 1 | 1781.04 | NG | train |
| nasarawa eggon | 120 | 1 | 1039.99 | NG | train |
| tiv | 120 | 14 | 1084.52 | NG | train,test,dev |
| south african english | 119 | 2 | 1643.82 | ZA | train,test |
| borana | 112 | 1 | 1090.71 | KE | train |
| swahili ,luganda ,arabic | 109 | 1 | 929.46 | UG | train |
| ogoni | 109 | 4 | 1629.7 | NG | train,test |
| mada | 109 | 2 | 1786.26 | NG | test |
| bette | 106 | 4 | 930.16 | NG | train,test |
| berom | 105 | 4 | 1272.99 | NG | dev,test |
| bini | 104 | 4 | 1499.75 | NG | test |
| ngas | 102 | 3 | 1234.16 | NG | train,test |
| etsako | 101 | 4 | 1074.53 | NG | train,test |
| okrika | 100 | 3 | 1887.47 | NG | train,test |
| venda | 99 | 2 | 938.14 | ZA | train,test |
| siswati | 96 | 5 | 1367.45 | ZA | dev,train,test |
| damara | 92 | 1 | 674.43 | NG | train |
| yoruba, hausa | 89 | 5 | 928.98 | NG | test |
| southern sotho | 89 | 1 | 889.73 | ZA | train |
| kanuri | 86 | 7 | 1936.78 | NG | test,dev |
| itsekiri | 82 | 3 | 778.47 | NG | test,dev |
| ekpeye | 80 | 2 | 922.88 | NG | test |
| mwaghavul | 78 | 2 | 738.02 | NG | test |
| bajju | 72 | 2 | 758.16 | NG | test |
| luo, swahili | 71 | 1 | 616.57 | KE | train |
| dholuo | 70 | 1 | 669.07 | KE | train |
| ekene | 68 | 1 | 839.31 | NG | test |
| jaba | 65 | 2 | 540.66 | NG | test |
| ika | 65 | 4 | 576.56 | NG | test,dev |
| angas | 65 | 1 | 589.99 | NG | test |
| ateso | 63 | 1 | 624.28 | UG | train |
| brass | 62 | 2 | 900.04 | NG | test |
| ikulu | 61 | 1 | 313.2 | NG | test |
| eleme | 60 | 2 | 1207.92 | NG | test |
| chichewa | 60 | 1 | 554.61 | MW | train |
| oklo | 58 | 1 | 871.37 | NG | test |
| meru | 58 | 2 | 865.07 | KE | train,test |
| agatu | 55 | 1 | 369.11 | NG | test |
| okirika | 54 | 1 | 792.65 | NG | test |
| igarra | 54 | 1 | 562.12 | NG | test |
| ijaw(nembe) | 54 | 2 | 537.56 | NG | test |
| khana | 51 | 2 | 497.42 | NG | test |
| ogbia | 51 | 4 | 461.15 | NG | test,dev |
| gbagyi | 51 | 4 | 693.43 | NG | test |
| portuguese | 50 | 1 | 525.02 | ZA | train |
| delta | 49 | 2 | 425.76 | NG | test |
| bassa | 49 | 1 | 646.13 | NG | test |
| etche | 49 | 1 | 637.48 | NG | test |
| kubi | 46 | 1 | 495.21 | NG | test |
| jukun | 44 | 2 | 362.12 | NG | test |
| igbo and yoruba | 43 | 2 | 466.98 | NG | test |
| urobo | 43 | 3 | 573.14 | NG | test |
| kalabari | 42 | 5 | 305.49 | NG | test |
| ibani | 42 | 1 | 322.34 | NG | test |
| obolo | 37 | 1 | 204.79 | NG | test |
| idah | 34 | 1 | 533.5 | NG | test |
| bassa-nge/nupe | 31 | 3 | 267.42 | NG | test,dev |
| yala mbembe | 29 | 1 | 237.27 | NG | test |
| eket | 28 | 1 | 238.85 | NG | test |
| afo | 26 | 1 | 171.15 | NG | test |
| ebiobo | 25 | 1 | 226.27 | NG | test |
| nyandang | 25 | 1 | 230.41 | NG | test |
| ishan | 23 | 1 | 194.12 | NG | test |
| bagi | 20 | 1 | 284.54 | NG | test |
| estako | 20 | 1 | 480.78 | NG | test |
| gerawa | 13 | 1 | 342.15 | NG | test |
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
The dataset consists of people who have donated their voice online. You agree to not attempt to determine the identity of speakers in this dataset.
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
Dataset provided for research purposes only. Please check dataset license for additional information.
## Additional Information
### Dataset Curators
The dataset was initially prepared by Intron and refined for public release by CLAIR Lab.
### Licensing Information
Public Domain, Creative Commons Attribution NonCommercial ShareAlike v4.0 ([CC BY-NC-SA 4.0](https://creativecommons.org/licenses/by-nc-sa/4.0/legalcode))
### Citation Information
@article{olatunji2023afrispeech,
title={AfriSpeech-200: Pan-African Accented Speech Dataset for Clinical and General Domain ASR},
author={Olatunji, Tobi and Afonja, Tejumade and Yadavalli, Aditya and Emezue, Chris Chinenye and Singh, Sahib and Dossou, Bonaventure FP and Osuchukwu, Joanne and Osei, Salomey and Tonja, Atnafu Lambebo and Etori, Naome and others},
journal={arXiv preprint arXiv:2310.00274},
year={2023}
}
### Contributions
Thanks to [@tobiolatunji](https://github.com/tobiolatunji) for adding this dataset. | [
-0.5611181259155273,
-0.5393625497817993,
-0.07392755895853043,
0.41416028141975403,
-0.08743099123239517,
-0.08742270618677139,
-0.5149978399276733,
-0.2855062186717987,
0.4593278765678406,
0.388099730014801,
-0.7116274237632751,
-0.6129311919212341,
-0.6226507425308228,
0.261378169059753... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
clarin-pl/poquad | clarin-pl | 2023-07-04T10:50:43Z | 61 | 1 | null | [
"task_categories:question-answering",
"task_ids:extractive-qa",
"task_ids:open-domain-qa",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:pl",
"license:cc-by-4.0",
"region:us"
... | 2023-07-04T10:50:43Z | 2023-02-28T09:46:17.000Z | 2023-02-28T09:46:17 | ---
annotations_creators:
- expert-generated
language_creators:
- found
language:
- pl
license:
- cc-by-4.0
multilinguality:
- monolingual
pretty_name: PoQuaD
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- question-answering
task_ids:
- extractive-qa
- open-domain-qa
---
PoQuaD dataset | [
-0.21460559964179993,
-0.530676007270813,
-0.035967785865068436,
0.19868622720241547,
-0.19321823120117188,
-0.17269407212734222,
0.4395032227039337,
0.47033563256263733,
0.2981216013431549,
0.867145836353302,
-0.03205501288175583,
-0.6531211137771606,
-0.27551013231277466,
-0.411841720342... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
TREC-AToMiC/AToMiC-Texts-v0.2.1 | TREC-AToMiC | 2023-05-04T18:58:43Z | 61 | 2 | null | [
"region:us"
] | 2023-05-04T18:58:43Z | 2023-04-26T16:34:45.000Z | 2023-04-26T16:34:45 | ---
dataset_info:
features:
- name: text_id
dtype: string
- name: page_url
dtype: string
- name: page_title
dtype: string
- name: section_title
dtype: string
- name: context_page_description
dtype: string
- name: context_section_description
dtype: string
- name: media
sequence: string
- name: hierachy
sequence: string
- name: category
sequence: string
- name: source_id
dtype: string
splits:
- name: train
num_bytes: 20393084595
num_examples: 10134744
download_size: 7192298025
dataset_size: 20393084595
---
# Dataset Card for "AToMiC-Texts-v0.2.updated"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.21947966516017914,
-0.40381354093551636,
0.3430868089199066,
0.18306125700473785,
-0.28754109144210815,
-0.05001898482441902,
0.058271776884794235,
-0.5077494382858276,
0.6109776496887207,
0.6606797575950623,
-0.7141236066818237,
-0.7390434145927429,
-0.6174600124359131,
-0.057236213237... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
rcp-meetings/rudialogsum_v2 | rcp-meetings | 2023-05-12T14:35:48Z | 61 | 0 | null | [
"task_categories:text2text-generation",
"task_categories:summarization",
"size_categories:10K<n<100K",
"language:ru",
"license:mit",
"region:us"
] | 2023-05-12T14:35:48Z | 2023-05-12T14:30:27.000Z | 2023-05-12T14:30:27 | ---
license: mit
task_categories:
- text2text-generation
- summarization
language:
- ru
size_categories:
- 10K<n<100K
---
The dialogsum dataset translated into Russian. Translation glitches have been removed by automatic cleaning. | [
0.06686419993638992,
-0.771094024181366,
0.5380646586418152,
-0.07461640983819962,
-0.33649924397468567,
-0.02390110120177269,
0.38279488682746887,
-0.05999137833714485,
0.6715589165687561,
0.5550426840782166,
-0.976138710975647,
-0.6606115698814392,
-0.30095335841178894,
0.005007304251194... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
abokbot/wikipedia-first-paragraph | abokbot | 2023-06-04T10:58:32Z | 61 | 0 | null | [
"language:en",
"wikipedia",
"region:us"
] | 2023-06-04T10:58:32Z | 2023-06-04T10:06:17.000Z | 2023-06-04T10:06:17 | ---
language:
- en
tags:
- wikipedia
---
# Dataset Description
This dataset contains the first paragraph of cleaned Wikipedia articles in English.
It was obtained by transforming the [Wikipedia](https://huggingface.co/datasets/wikipedia) "20220301.en" dataset as follows:
```python
from datasets import load_dataset
dataset = load_dataset("wikipedia", "20220301.en")["train"]
def get_first_paragraph(example):
example["text"] = example['text'].split('\n\n')[0]
return example
dataset = dataset.map(get_first_paragraph)
```
# Why use this dataset?
The size of the original English Wikipedia dataset is over 20GB. It takes 20min to load it on a Google Colab notebook and running computations on that dataset can be costly.
If you want to create a use case that mostly needs the information in the first paragraph of a Wikipedia article (which is the paragraph with the most important information), this 'wikipedia-first-paragraph' dataset is for you.
Its size is 1.39GB and it takes 5 min to load on a Google Colab notebook.
# How to load dataset
You can load it by runnning:
```python
from datasets import load_dataset
load_dataset("abokbot/wikipedia-first-paragraph")
```
# Dataset Structure
An example looks as follows:
```
{
'id': '12',
'url': 'https://en.wikipedia.org/wiki/Anarchism',
'title': 'Anarchism',
'text': 'Anarchism is a political philosophy and movement that is sceptical of authority and rejects \
all involuntary, coercive forms of hierarchy. Anarchism calls for the abolition of the state, \
which it holds to be unnecessary, undesirable, and harmful. As a historically left-wing movement, \
placed on the farthest left of the political spectrum, it is usually described alongside communalism \
and libertarian Marxism as the libertarian wing (libertarian socialism) of the socialist movement, and \
has a strong historical association with anti-capitalism and socialism.'
}
``` | [
-0.6144932508468628,
-0.7725731730461121,
0.003322106320410967,
0.21831049025058746,
-0.3609877824783325,
-0.314763605594635,
-0.3094992935657501,
-0.06958162039518356,
0.6375300288200378,
0.188407301902771,
-0.6366996765136719,
-0.4123476445674896,
-0.33322998881340027,
0.3670265972614288... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
SilpaCS/Augmented_alzheimer | SilpaCS | 2023-06-07T07:56:55Z | 61 | 0 | null | [
"task_categories:image-classification",
"size_categories:10K<n<100K",
"language:en",
"medical",
"region:us"
] | 2023-06-07T07:56:55Z | 2023-06-07T07:34:13.000Z | 2023-06-07T07:34:13 | ---
task_categories:
- image-classification
language:
- en
tags:
- medical
size_categories:
- 10K<n<100K
--- | [
-0.1285339742898941,
-0.18616800010204315,
0.6529127359390259,
0.4943626821041107,
-0.1931934952735901,
0.2360742688179016,
0.360720157623291,
0.05056300014257431,
0.5793654322624207,
0.7400140166282654,
-0.6508105993270874,
-0.23783984780311584,
-0.7102248668670654,
-0.047826044261455536,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
tuetschek/atis | tuetschek | 2023-06-11T18:24:58Z | 61 | 0 | null | [
"region:us"
] | 2023-06-11T18:24:58Z | 2023-06-11T16:16:00.000Z | 2023-06-11T16:16:00 | Entry not found | [
-0.32276487350463867,
-0.22568444907665253,
0.8622263073921204,
0.43461570143699646,
-0.5282988548278809,
0.7012969255447388,
0.7915717363357544,
0.07618642598390579,
0.7746027112007141,
0.25632190704345703,
-0.7852815389633179,
-0.22573848068714142,
-0.910447895526886,
0.5715675354003906,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
jed351/Traditional-Chinese-Common-Crawl-Filtered | jed351 | 2023-07-20T23:09:09Z | 61 | 7 | null | [
"language:zh",
"region:us"
] | 2023-07-20T23:09:09Z | 2023-07-20T21:24:43.000Z | 2023-07-20T21:24:43 | ---
language:
- zh
---
# Traditional Chinese C4
### Dataset Summary
Data obtained from the 2023-14 Common Crawl snapshot.
Downloaded and processed using [code](https://github.com/jedcheng/c4-dataset-script) based on another [project](https://github.com/shjwudp/c4-dataset-script) attempting to recreate the C4 dataset.
The resultant dataset contains both simplified and traditional Chinese, which could be found [here](https://huggingface.co/datasets/jed351/Chinese-Common-Crawl-Filtered).
It was then filtered using a [modified list](https://github.com/jedcheng/c4-dataset-script/blob/master/SC_filter/SC_list.txt) of simplified Chinese characters to obtain this traditional Chinese dataset.
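The filtering step can be illustrated with a minimal sketch; the character set below is a tiny hypothetical subset for demonstration, not the actual modified list linked above.

```python
# Minimal sketch of the simplified-Chinese character filter described above.
# SIMPLIFIED_ONLY is a tiny illustrative subset, not the real filter list.
SIMPLIFIED_ONLY = set("这简体发关")

def is_traditional(text: str) -> bool:
    """Keep a document only if it contains no simplified-only characters."""
    return not any(ch in SIMPLIFIED_ONLY for ch in text)

docs = ["這是繁體中文的句子", "这是简体中文的句子"]
traditional_docs = [d for d in docs if is_traditional(d)]
```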
I would like to acknowledge computational resources and support provided by the Imperial College Research Computing Service (http://doi.org/10.14469/hpc/2232) | [
-0.17058803141117096,
-0.28404659032821655,
0.3908189535140991,
0.29114440083503723,
-0.2430856078863144,
0.1603209227323532,
-0.276664137840271,
-0.6218515634536743,
0.502892255783081,
0.6540278196334839,
-0.5626057982444763,
-0.7834952473640442,
0.009803345426917076,
0.5293509364128113,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Maxx0/sexting-nsfw-adultconten | Maxx0 | 2023-09-02T15:58:40Z | 61 | 11 | null | [
"region:us"
] | 2023-09-02T15:58:40Z | 2023-09-02T15:13:00.000Z | 2023-09-02T15:13:00 | Entry not found | [
-0.32276487350463867,
-0.22568444907665253,
0.8622263073921204,
0.43461570143699646,
-0.5282988548278809,
0.7012969255447388,
0.7915717363357544,
0.07618642598390579,
0.7746027112007141,
0.25632190704345703,
-0.7852815389633179,
-0.22573848068714142,
-0.910447895526886,
0.5715675354003906,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
bjoernp/oasst25-08-23-filtered | bjoernp | 2023-09-04T17:46:35Z | 61 | 0 | null | [
"region:us"
] | 2023-09-04T17:46:35Z | 2023-09-04T17:46:30.000Z | 2023-09-04T17:46:30 | ---
dataset_info:
features:
- name: conversation
list:
- name: context
dtype: 'null'
- name: creativity
dtype: float64
- name: humor
dtype: float64
- name: lang
dtype: string
- name: quality
dtype: float64
- name: role
dtype: string
- name: text
dtype: string
- name: system_message
dtype: 'null'
splits:
- name: train
num_bytes: 17152145.58826024
num_examples: 9105
download_size: 9881270
dataset_size: 17152145.58826024
---
# Dataset Card for "oasst25-08-23-filtered"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.5696895122528076,
-0.30768296122550964,
0.3805612027645111,
0.11021462827920914,
-0.45395538210868835,
-0.08184630423784256,
0.4798859655857086,
-0.3132238984107971,
0.7260643243789673,
0.9101963043212891,
-0.9592683911323547,
-0.8065122365951538,
-0.6000006198883057,
-0.295454353094100... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
SEACrowd/emotcmt | SEACrowd | 2023-09-26T12:33:23Z | 61 | 0 | null | [
"language:ind",
"license:mit",
"emotion-classification",
"region:us"
] | 2023-09-26T12:33:23Z | 2023-09-26T11:11:24.000Z | 2023-09-26T11:11:24 | ---
license: mit
tags:
- emotion-classification
language:
- ind
---
# emotcmt
EmotCMT is an Indonesian-English code-mixed emotion classification dataset created through a Twitter data pipeline consisting of 4 processing steps: tokenization, language identification, lexical normalization, and translation. The dataset consists of 825 tweets and 22,736 tokens (11,204 Indonesian and 5,613 English). Each tweet is labelled with an emotion: cinta (love), takut (fear), sedih (sadness), senang (joy), or marah (anger).
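The four pipeline steps can be sketched as follows; every function here is a toy stand-in (the word lists and dictionary are hypothetical examples), not the authors' implementation.

```python
# Toy sketch of the 4-step code-mixed pipeline: tokenization,
# language identification, lexical normalization, translation.
# The vocabulary and dictionary below are hypothetical examples.

def tokenize(tweet):
    return tweet.split()

def identify_language(token):
    english = {"love", "you"}  # hypothetical English vocabulary
    return "en" if token.lower().strip("!?.,") in english else "id"

def normalize(token):
    return token.lower().strip("!?.,")

def translate(token, lang):
    id2en = {"aku": "i", "cinta": "love"}  # hypothetical dictionary
    return id2en.get(token, token) if lang == "id" else token

tweet = "Aku cinta you!"
tokens = tokenize(tweet)                           # 1. tokenization
langs = [identify_language(t) for t in tokens]     # 2. language identification
normalized = [normalize(t) for t in tokens]        # 3. lexical normalization
translated = [translate(t, l) for t, l in zip(normalized, langs)]  # 4. translation
```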
## Dataset Usage
Run `pip install nusacrowd` before loading the dataset through HuggingFace's `load_dataset`.
## Citation
```
@inproceedings{barik-etal-2019-normalization,
title = "Normalization of {I}ndonesian-{E}nglish Code-Mixed {T}witter Data",
author = "Barik, Anab Maulana and
Mahendra, Rahmad and
Adriani, Mirna",
booktitle = "Proceedings of the 5th Workshop on Noisy User-generated Text (W-NUT 2019)",
month = nov,
year = "2019",
address = "Hong Kong, China",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/D19-5554",
doi = "10.18653/v1/D19-5554",
pages = "417--424"
}
@article{Yulianti2021NormalisationOI,
title={Normalisation of Indonesian-English Code-Mixed Text and its Effect on Emotion Classification},
author={Evi Yulianti and Ajmal Kurnia and Mirna Adriani and Yoppy Setyo Duto},
journal={International Journal of Advanced Computer Science and Applications},
year={2021}
}
```
## License
MIT
## Homepage
[https://github.com/ir-nlp-csui/emotcmt](https://github.com/ir-nlp-csui/emotcmt)
### NusaCatalogue
For easy indexing and metadata: [https://indonlp.github.io/nusa-catalogue](https://indonlp.github.io/nusa-catalogue) | [
-0.3949183225631714,
-0.7588047981262207,
0.003956159111112356,
0.4493529498577118,
-0.5126736760139465,
0.08054349571466446,
-0.5549293160438538,
-0.17113804817199707,
0.6530232429504395,
0.15563836693763733,
-0.542992115020752,
-0.8054918646812439,
-0.6983587741851807,
0.3853802084922790... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
rmanluo/RoG-webqsp | rmanluo | 2023-10-01T23:40:22Z | 61 | 1 | null | [
"region:us"
] | 2023-10-01T23:40:22Z | 2023-10-01T23:28:20.000Z | 2023-10-01T23:28:20 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
dataset_info:
features:
- name: id
dtype: string
- name: question
dtype: string
- name: answer
sequence: string
- name: q_entity
sequence: string
- name: a_entity
sequence: string
- name: graph
sequence:
sequence: string
- name: choices
sequence: 'null'
splits:
- name: train
num_bytes: 993540472
num_examples: 2826
- name: validation
num_bytes: 84009553
num_examples: 246
- name: test
num_bytes: 580788090
num_examples: 1628
download_size: 0
dataset_size: 1658338115
---
# Dataset Card for "RoG-webqsp"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.6638168096542358,
-0.1682324856519699,
-0.25489920377731323,
0.059720054268836975,
-0.3111983835697174,
-0.07692235708236694,
0.3617779314517975,
-0.14552785456180573,
0.5994258522987366,
0.4710427522659302,
-1.0054349899291992,
-0.7715942859649658,
-0.3363434970378876,
-0.3032897114753... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
AI-Secure/DecodingTrust | AI-Secure | 2023-11-21T17:12:51Z | 61 | 2 | null | [
"task_categories:text-classification",
"task_categories:question-answering",
"task_categories:text-generation",
"task_categories:text2text-generation",
"size_categories:10K<n<100K",
"language:en",
"license:cc-by-sa-4.0",
"arxiv:2306.11698",
"region:us"
] | 2023-11-21T17:12:51Z | 2023-10-12T22:26:17.000Z | 2023-10-12T22:26:17 | ---
license: cc-by-sa-4.0
task_categories:
- text-classification
- question-answering
- text-generation
- text2text-generation
language:
- en
pretty_name: DecodingTrust
size_categories:
- 10K<n<100K
arxiv: 2306.11698
configs:
- config_name: toxicity
data_files:
- split: realtoxicityprompts.nontoxic
path: "toxicity/user_prompts/nontoxic.jsonl"
- split: realtoxicityprompts.toxic
path: "toxicity/user_prompts/toxic.jsonl"
- split: toxic.gpt3.5
path: "toxicity/user_prompts/toxic.gpt3.5.jsonl"
- split: toxic.gpt4
path: "toxicity/user_prompts/toxic.gpt4.jsonl"
- config_name: adv_demonstration
data_files:
- split: counterfactual.snliPremiseCf
path: adv_demonstration/counterfactual/snli_premise_cf/42.jsonl
- split: counterfactual.snliHypothesisCf
path: adv_demonstration/counterfactual/snli_hypothesis_cf/42.jsonl
- split: counterfactual.controlRaisingCf
path: adv_demonstration/counterfactual/control_raising_cf/42.jsonl
- split: counterfactual.irregularFormCf
path: adv_demonstration/counterfactual/irregular_form_cf/42.jsonl
- split: counterfactual.mainVerbCf
path: adv_demonstration/counterfactual/main_verb_cf/42.jsonl
- split: counterfactual.syntacticCategoryCf
path: adv_demonstration/counterfactual/syntactic_category_cf/42.jsonl
- split: spurious.PP.entailBias
path: adv_demonstration/spurious/PP/entail-bias/42.jsonl
- split: spurious.PP.nonEntailBias
path: adv_demonstration/spurious/PP/non-entail-bias/42.jsonl
- split: spurious.adverb.entailBias
path: adv_demonstration/spurious/adverb/entail-bias/42.jsonl
- split: spurious.adverb.nonEntailBias
path: adv_demonstration/spurious/adverb/non-entail-bias/42.jsonl
- split: spurious.embeddedUnderVerb.entailBias
path: adv_demonstration/spurious/embedded_under_verb/entail-bias/42.jsonl
- split: spurious.embeddedUnderVerb.nonEntailBias
path: adv_demonstration/spurious/embedded_under_verb/non-entail-bias/42.jsonl
- split: spurious.lRelativeClause.entailBias
path: adv_demonstration/spurious/l_relative_clause/entail-bias/42.jsonl
- split: spurious.lRelativeClause.nonEntailBias
path: adv_demonstration/spurious/l_relative_clause/non-entail-bias/42.jsonl
- split: spurious.passive.entailBias
path: adv_demonstration/spurious/passive/entail-bias/42.jsonl
- split: spurious.passive.nonEntailBias
path: adv_demonstration/spurious/passive/non-entail-bias/42.jsonl
- split: spurious.sRelativeClause.entailBias
path: adv_demonstration/spurious/s_relative_clause/entail-bias/42.jsonl
- split: spurious.sRelativeClause.nonEntailBias
path: adv_demonstration/spurious/s_relative_clause/non-entail-bias/42.jsonl
- split: backdoor.sst2.setup1BadwordCacc
path: adv_demonstration/backdoor/experiment1/sst-2_setup1_badword_cacc/42.jsonl
- split: backdoor.sst2.setup1BadwordAsr
path: adv_demonstration/backdoor/experiment1/sst-2_setup1_badword_asr/42.jsonl
- split: backdoor.sst2.setup2BadwordCacc
path: adv_demonstration/backdoor/experiment1/sst-2_setup2_badword_cacc/42.jsonl
- split: backdoor.sst2.setup2BadwordAsr
path: adv_demonstration/backdoor/experiment1/sst-2_setup2_badword_asr/42.jsonl
- split: backdoor.sst2.setup3BadwordCacc
path: adv_demonstration/backdoor/experiment1/sst-2_setup3_badword_cacc/42.jsonl
- split: backdoor.sst2.setup3BadwordAsr
path: adv_demonstration/backdoor/experiment1/sst-2_setup3_badword_asr/42.jsonl
- split: backdoor.sst2.setup1AddsentCacc
path: adv_demonstration/backdoor/experiment1/sst-2_setup1_addsent_cacc/42.jsonl
- split: backdoor.sst2.setup1AddsentAsr
path: adv_demonstration/backdoor/experiment1/sst-2_setup1_addsent_asr/42.jsonl
- split: backdoor.sst2.setup2AddsentCacc
path: adv_demonstration/backdoor/experiment1/sst-2_setup2_addsent_cacc/42.jsonl
- split: backdoor.sst2.setup2AddsentAsr
path: adv_demonstration/backdoor/experiment1/sst-2_setup2_addsent_asr/42.jsonl
- split: backdoor.sst2.setup3AddsentCacc
path: adv_demonstration/backdoor/experiment1/sst-2_setup3_addsent_cacc/42.jsonl
- split: backdoor.sst2.setup3AddsentAsr
path: adv_demonstration/backdoor/experiment1/sst-2_setup3_addsent_asr/42.jsonl
- split: backdoor.sst2.setup1SynbkdCacc
path: adv_demonstration/backdoor/experiment1/sst-2_setup1_synbkd_cacc/42.jsonl
- split: backdoor.sst2.setup1SynbkdAsr
path: adv_demonstration/backdoor/experiment1/sst-2_setup1_synbkd_asr/42.jsonl
- split: backdoor.sst2.setup2SynbkdCacc
path: adv_demonstration/backdoor/experiment1/sst-2_setup2_synbkd_cacc/42.jsonl
- split: backdoor.sst2.setup2SynbkdAsr
path: adv_demonstration/backdoor/experiment1/sst-2_setup2_synbkd_asr/42.jsonl
- split: backdoor.sst2.setup3SynbkdCacc
path: adv_demonstration/backdoor/experiment1/sst-2_setup3_synbkd_cacc/42.jsonl
- split: backdoor.sst2.setup3SynbkdAsr
path: adv_demonstration/backdoor/experiment1/sst-2_setup3_synbkd_asr/42.jsonl
- split: backdoor.sst2.setup1StylebkdCacc
path: adv_demonstration/backdoor/experiment1/sst-2_setup1_stylebkd_cacc/42.jsonl
- split: backdoor.sst2.setup1StylebkdAsr
path: adv_demonstration/backdoor/experiment1/sst-2_setup1_stylebkd_asr/42.jsonl
- split: backdoor.sst2.setup2StylebkdCacc
path: adv_demonstration/backdoor/experiment1/sst-2_setup2_stylebkd_cacc/42.jsonl
- split: backdoor.sst2.setup2StylebkdAsr
path: adv_demonstration/backdoor/experiment1/sst-2_setup2_stylebkd_asr/42.jsonl
- split: backdoor.sst2.setup3StylebkdCacc
path: adv_demonstration/backdoor/experiment1/sst-2_setup3_stylebkd_cacc/42.jsonl
- split: backdoor.sst2.setup3StylebkdAsr
path: adv_demonstration/backdoor/experiment1/sst-2_setup3_stylebkd_asr/42.jsonl
- config_name: stereotype
data_files:
- split: stereotype
path: "stereotype/dataset/stereotype_bias_data.jsonl"
- config_name: adv-glue-plus-plus
data_files:
- split: sst2
path: "adv-glue-plus-plus/data/sst2.jsonl"
- split: qqp
path: "adv-glue-plus-plus/data/qqp.jsonl"
- split: mnli
path: "adv-glue-plus-plus/data/mnli.jsonl"
- split: mnli_mismatched
path: "adv-glue-plus-plus/data/mnli-mm.jsonl"
- split: qnli
path: "adv-glue-plus-plus/data/qnli.jsonl"
- split: rte
path: "adv-glue-plus-plus/data/rte.jsonl"
- config_name: machine_ethics
data_files:
- split: morality.train
path: "machine_ethics/cm_train.jsonl"
- split: morality.test
path: "machine_ethics/cm_test.jsonl"
- split: jiminy.train
path: "machine_ethics/jiminy_train.jsonl"
- split: jiminy.test
path: "machine_ethics/jiminy_test.jsonl"
- config_name: privacy
data_files:
- split: enron.context
path: "privacy/enron_data/context.jsonl"
- split: enron.email2name
path: "privacy/enron_data/email2name.jsonl"
- split: enron.one_shot_non_domain
path: "privacy/enron_data/one_shot_non_domain.jsonl"
- split: enron.one_shot
path: "privacy/enron_data/one_shot.jsonl"
- split: enron.two_shot_non_domain
path: "privacy/enron_data/two_shot_non_domain.jsonl"
- split: enron.two_shot
path: "privacy/enron_data/two_shot.jsonl"
- split: enron.five_shot_non_domain
path: "privacy/enron_data/five_shot_non_domain.jsonl"
- split: enron.five_shot
path: "privacy/enron_data/five_shot.jsonl"
- config_name: fairness
data_files:
- split: adult.zero_shot.br_0.0
path: "fairness/fairness_data/adult_0_200_test_base_rate_0.0.jsonl"
- split: adult.zero_shot.br_0.5
path: "fairness/fairness_data/adult_0_200_test_base_rate_0.5.jsonl"
- split: adult.zero_shot.br_1.0
path: "fairness/fairness_data/adult_0_200_test_base_rate_1.0.jsonl"
- split: adult.few_shot.tr_br_0.0
path: "fairness/fairness_data/adult_32_200_train_base_rate_0.0.jsonl"
- split: adult.few_shot.tr_br_0.5
path: "fairness/fairness_data/adult_32_200_train_base_rate_0.5.jsonl"
- split: adult.few_shot.tr_br_1.0
path: "fairness/fairness_data/adult_32_200_train_base_rate_1.0.jsonl"
- split: adult.few_shot.num_train_0
path: "fairness/fairness_data/adult_0_200_train_br_0.0_test_br_0.5.jsonl"
- split: adult.few_shot.num_train_16
path: "fairness/fairness_data/adult_16_200_train_br_0.0_test_br_0.5.jsonl"
- split: adult.few_shot.num_train_32
path: "fairness/fairness_data/adult_32_200_train_br_0.0_test_br_0.5.jsonl"
- split: crime.zero_shot.br_0.0
path: "fairness/fairness_data/crime_0_300_test_base_rate_0.0.jsonl"
- split: crime.zero_shot.br_0.5
path: "fairness/fairness_data/crime_0_300_test_base_rate_0.5.jsonl"
- split: crime.zero_shot.br_1.0
path: "fairness/fairness_data/crime_0_300_test_base_rate_1.0.jsonl"
- config_name: ood
data_files:
- split: style
path: "ood/style.jsonl"
- split: knowledge
path: "ood/knowledge.jsonl"
---
# DecodingTrust: A Comprehensive Assessment of Trustworthiness in GPT Models
## Overview
This repo contains the source code of DecodingTrust. This research endeavor is designed to help researchers better understand the capabilities, limitations, and potential risks associated with deploying these state-of-the-art Large Language Models (LLMs). See our paper for details.
[**DecodingTrust: A Comprehensive Assessment of Trustworthiness in GPT Models**](https://arxiv.org/abs//2306.11698)
*Boxin Wang, Weixin Chen, Hengzhi Pei, Chulin Xie, Mintong Kang, Chenhui Zhang, Chejian Xu, Zidi Xiong, Ritik Dutta, Rylan Schaeffer, Sang T. Truong, Simran Arora, Mantas Mazeika, Dan Hendrycks, Zinan Lin, Yu Cheng, Sanmi Koyejo, Dawn Song, Bo Li.*
https://arxiv.org/pdf/2306.11698.pdf
This project is organized around the following **eight** primary areas of trustworthiness, including:
1. Toxicity
2. Stereotype and bias
3. Adversarial robustness
4. Out-of-Distribution Robustness
5. Privacy
6. Robustness to Adversarial Demonstrations
7. Machine Ethics
8. Fairness
## Getting Started
To evaluate using the DecodingTrust dataset, please install the DecodingTrust package as below:
### (Conda +) Pip
For now, we suggest installing DecodingTrust by cloning our repository and installing it in editable mode. This will keep the data, code, and configurations in the same place.
```bash
git clone https://github.com/AI-secure/DecodingTrust.git && cd DecodingTrust
pip install -e .
```
Please note that this will install PyTorch with `pip`. If your system does not have a `CUDA` version compatible with the PyTorch `pip` wheel, install `PyTorch` with `Conda` first, as shown below.
```bash
conda create --name dt-test python=3.9 pytorch torchvision torchaudio pytorch-cuda=12.1 -c pytorch -c nvidia
conda activate dt-test
pip install "decoding-trust @ git+https://github.com/AI-secure/DecodingTrust.git"
```
It is also possible to install DecodingTrust as a standalone package, but you will need to clone our repository again to run it with our data.
```bash
conda create --name dt-test python=3.9
conda activate dt-test
pip install "decoding-trust @ git+https://github.com/AI-secure/DecodingTrust.git"
```
### Support for the `ppc64le` Architecture
We also support the `ppc64le` architecture of IBM Power-9 platforms. To install on this platform, please first make sure you have the following `conda` channels so that we can utilize pre-built packages.
```
--add channels 'defaults' # lowest priority
--add channels 'https://public.dhe.ibm.com/ibmdl/export/pub/software/server/ibm-ai/conda-early-access/'
--add channels 'https://public.dhe.ibm.com/ibmdl/export/pub/software/server/ibm-ai/conda/'
--add channels 'https://opence.mit.edu'
--add channels 'https://ftp.osuosl.org/pub/open-ce/current/'
--add channels 'conda-forge' # highest priority
```
Then, install the following pre-built packages.
```bash
mamba create --name dt-test python==3.9 pytorch=2.0.1 torchvision=0.15.2 spacy=3.5.3 scipy=1.10.1 fairlearn~=0.9.0 scikit-learn~=1.1.2 pandas~=2.0.3 pyarrow~=11.0.0 rust -c conda-forge
```
Finally, install DecodingTrust with `pip` as usual.
### Docker / Singularity
To use DecodingTrust with docker, simply pull the following docker image.
```bash
sudo docker pull danielz01/decoding-trust
docker run -it \
-v /path/on/host:/path/in/container \
--gpus all \
decoding-trust/v1.0:latest [arg1 arg2 ...]
```
To use it through singularity or apptainer container environments on HPC systems, simply run the following.
```bash
module load singularity # Change it to whatever module name your singularity / apptainer environment was given
singularity pull decoding-trust-v1.0.sif docker://danielz01/decoding-trust
singularity exec --nv --bind /path/on/host:/path/in/container decoding-trust-v1.0.sif [arg1 arg2]
```
We will also have a container build for `ppc64le` platforms soon. Stay tuned!
### Notes
+ Each of the eight areas has its own subdirectory containing the respective code and README.
+ Follow the specific `README`: Every subdirectory has its own README. Refer to these documents for information on how to run the scripts and interpret the results.
## [Important] Candidate models
In our benchmark, to have consistent conclusions and results, we currently mainly focus on evaluating the following two OpenAI models:
- `gpt-3.5-turbo-0301`
- `gpt-4-0314`
**Note that we use `gpt-3.5-turbo-0301` (with timestamp), released in March, instead of `gpt-3.5-turbo`, to ensure reproducibility in the face of model evolution.**
Currently, we support evaluating all causal LLMs **hosted on Huggingface** or hosted locally. Specifically, we have tested the following open LLMs:
- `Llama-v2-7B-Chat`
- `Vicuna-7B`
- `MPT-7B`
- `Falcon-7B`
- `Alpaca-7B`
- `RedPajama-INCITE-7B-Instruct`
## Tutorial
We have provided a [Tutorial](Tutorial.md) to help you walk through the usage of API to evaluate different trustworthiness perspectives and LLMs.
## Useful tips
- Please first evaluate your experiments with the `++dry_run=True` flag on to check the input / output format, and use `gpt-3.5-turbo-0301` to check the generation since it has lower costs.
- We suggest saving the responses from OpenAI.
## File usage
- `main.py` provides a unified entry point to evaluate all the perspectives and different LLMs with proper configuration
- `chat.py` provides robust APIs for creating requests to OpenAI **Chat Completion** models and Huggingface autoregressive LLMs. We recommend implementing experiments based on this file. If you think `chat.py` is not good enough and want to make modifications, please let @acphile and @boxinw know.
- `utils.py` provides auxiliary functions
For other files, please refer to each subdirectory for more information.
## License
This project is licensed under [CC BY-SA 4.0](http://creativecommons.org/licenses/by-sa/4.0/legalcode) - see the LICENSE file for details.
## Citation
Please cite the paper as follows if you use the data or code from DecodingTrust:
```
@article{wang2023decodingtrust,
title={DecodingTrust: A Comprehensive Assessment of Trustworthiness in GPT Models},
author={Wang, Boxin and Chen, Weixin and Pei, Hengzhi and Xie, Chulin and Kang, Mintong and Zhang, Chenhui and Xu, Chejian and Xiong, Zidi and Dutta, Ritik and Schaeffer, Rylan and others},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2023}
}
```
## Contact
Please reach out to us if you have any questions or suggestions. You can submit an issue or pull request, or send an email to boxinw2@illinois.edu.
Thank you for your interest in DecodingTrust. We hope our work will contribute to a more trustworthy, fair, and robust AI future. | [
-0.26286807656288147,
-0.6309489607810974,
0.5271075367927551,
0.17187285423278809,
-0.2695556879043579,
-0.12318078428506851,
-0.03115629404783249,
-0.5226888060569763,
-0.046702396124601364,
0.22374339401721954,
-0.1887810081243515,
-0.598691463470459,
-0.9454278945922852,
0.075762771070... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
zhen-dong-nexusflow/multi_cvecpe_apis_nested | zhen-dong-nexusflow | 2023-10-27T00:52:47Z | 61 | 0 | null | [
"region:us"
] | 2023-10-27T00:52:47Z | 2023-10-14T21:00:00.000Z | 2023-10-14T21:00:00 | Entry not found | [
-0.32276472449302673,
-0.22568407654762268,
0.8622258901596069,
0.4346148371696472,
-0.5282984972000122,
0.7012965679168701,
0.7915717363357544,
0.07618629932403564,
0.7746022939682007,
0.2563222646713257,
-0.785281777381897,
-0.22573848068714142,
-0.9104482531547546,
0.5715669393539429,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
felipeoes/filtered_qa_blue_amazon_legislation | felipeoes | 2023-10-22T16:24:13Z | 61 | 0 | null | [
"region:us"
] | 2023-10-22T16:24:13Z | 2023-10-22T16:23:12.000Z | 2023-10-22T16:23:12 | ---
dataset_info:
features:
- name: file_index
dtype: int64
- name: file_name
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answer
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 55559058
num_examples: 15964
download_size: 14333761
dataset_size: 55559058
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "filtered_qa_blue_amazon_legislation"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.4100688397884369,
-0.2748221158981323,
0.33586499094963074,
0.19993308186531067,
-0.5370808839797974,
-0.05754504352807999,
0.5198624730110168,
-0.16581638157367706,
0.6031233668327332,
0.9626586437225342,
-0.7863924503326416,
-0.9198703169822693,
-0.22653301060199738,
-0.36248975992202... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
faizalnf1800/gpt-generated-review-product | faizalnf1800 | 2023-10-27T13:49:55Z | 61 | 0 | null | [
"license:mit",
"region:us"
] | 2023-10-27T13:49:55Z | 2023-10-27T12:47:57.000Z | 2023-10-27T12:47:57 | ---
license: mit
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype: int64
splits:
- name: train
num_bytes: 26004
num_examples: 166
- name: test
num_bytes: 1475
num_examples: 9
download_size: 16196
dataset_size: 27479
---
| [
-0.12853392958641052,
-0.18616779148578644,
0.6529127955436707,
0.49436280131340027,
-0.19319361448287964,
0.23607419431209564,
0.36072003841400146,
0.050563063472509384,
0.579365611076355,
0.7400140762329102,
-0.6508104205131531,
-0.23783954977989197,
-0.7102249264717102,
-0.0478260256350... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
coastalcph/fm_classifier-1-1 | coastalcph | 2023-11-04T10:39:06Z | 61 | 0 | null | [
"region:us"
] | 2023-11-04T10:39:06Z | 2023-11-01T16:46:53.000Z | 2023-11-01T16:46:53 | ---
dataset_info:
features:
- name: query
dtype: string
- name: answer
list:
- name: wikidata_id
dtype: string
- name: name
dtype: string
- name: id
dtype: string
- name: relation
dtype: string
- name: date
dtype: int64
- name: type
dtype: string
- name: is_mutable
dtype: int64
splits:
- name: train
num_bytes: 1095051.1775751072
num_examples: 6230
- name: validation
num_bytes: 995400.6136754095
num_examples: 5783
- name: test
num_bytes: 858612.5253924284
num_examples: 4360
download_size: 1062146
dataset_size: 2949064.316642945
---
# Dataset Card for "fm_classifier-1-1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.6732651591300964,
-0.22310106456279755,
0.1528930962085724,
0.2520580589771271,
-0.2543109655380249,
-0.2476174533367157,
0.32302168011665344,
-0.0850975438952446,
0.7159002423286438,
0.22416669130325317,
-0.9637668132781982,
-0.7585015892982483,
-0.6869906187057495,
-0.2415945231914520... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
kheder/quranData | kheder | 2023-11-08T22:35:48Z | 61 | 0 | null | [
"region:us"
] | 2023-11-08T22:35:48Z | 2023-11-08T22:35:32.000Z | 2023-11-08T22:35:32 | Entry not found | [
-0.32276472449302673,
-0.22568407654762268,
0.8622258901596069,
0.4346148371696472,
-0.5282984972000122,
0.7012965679168701,
0.7915717363357544,
0.07618629932403564,
0.7746022939682007,
0.2563222646713257,
-0.785281777381897,
-0.22573848068714142,
-0.9104482531547546,
0.5715669393539429,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
multi-train/amazon-qa_1107 | multi-train | 2023-11-10T18:36:22Z | 61 | 0 | null | [
"region:us"
] | 2023-11-10T18:36:22Z | 2023-11-10T18:33:05.000Z | 2023-11-10T18:33:05 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: query
dtype: string
- name: pos
sequence: string
- name: neg
sequence: string
- name: task
dtype: string
- name: instruction
struct:
- name: query
dtype: string
- name: pos
dtype: string
- name: neg
dtype: string
splits:
- name: train
num_bytes: 126901578
num_examples: 200000
download_size: 65627345
dataset_size: 126901578
---
# Dataset Card for "amazon-qa_1107"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.4841688871383667,
-0.09962271898984909,
0.31289076805114746,
0.2954943776130676,
-0.37005820870399475,
0.037307605147361755,
0.7065263986587524,
-0.1179187223315239,
0.8008275032043457,
0.5897547006607056,
-0.7793914079666138,
-0.7013753652572632,
-0.2585832178592682,
-0.047861736267805... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
multi-train/reddit-title-body_1107 | multi-train | 2023-11-10T18:58:24Z | 61 | 0 | null | [
"region:us"
] | 2023-11-10T18:58:24Z | 2023-11-10T18:55:02.000Z | 2023-11-10T18:55:02 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: query
dtype: string
- name: pos
sequence: string
- name: neg
sequence: string
- name: task
dtype: string
- name: instruction
struct:
- name: query
dtype: string
- name: pos
dtype: string
- name: neg
dtype: string
splits:
- name: train
num_bytes: 216135392
num_examples: 200000
download_size: 125472332
dataset_size: 216135392
---
# Dataset Card for "reddit-title-body_1107"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.4576382040977478,
-0.212749645113945,
0.36491265892982483,
0.25753897428512573,
-0.3680419921875,
0.0061051067896187305,
0.1136733815073967,
0.004096816759556532,
1.0275287628173828,
0.4850456416606903,
-0.7153968811035156,
-0.7983801364898682,
-0.7411795258522034,
0.2363935112953186,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
multi-train/WikiAnswers_1107 | multi-train | 2023-11-10T18:58:07Z | 61 | 0 | null | [
"region:us"
] | 2023-11-10T18:58:07Z | 2023-11-10T18:58:00.000Z | 2023-11-10T18:58:00 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: query
dtype: string
- name: pos
sequence: string
- name: neg
sequence: string
- name: task
dtype: string
- name: instruction
struct:
- name: query
dtype: string
- name: pos
dtype: string
- name: neg
dtype: string
splits:
- name: train
num_bytes: 54367110
num_examples: 200000
download_size: 22862968
dataset_size: 54367110
---
# Dataset Card for "WikiAnswers_1107"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.5586205720901489,
-0.0008682110346853733,
0.1465807855129242,
0.11484503000974655,
-0.11254967749118805,
-0.23609282076358795,
0.27431410551071167,
-0.06059690937399864,
0.9540206789970398,
0.5181530117988586,
-0.8388172388076782,
-0.562890887260437,
-0.7133429050445557,
0.0305783655494... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
multi-train/gooaq_pairs_1107 | multi-train | 2023-11-10T18:59:01Z | 61 | 0 | null | [
"region:us"
] | 2023-11-10T18:59:01Z | 2023-11-10T18:58:51.000Z | 2023-11-10T18:58:51 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: query
dtype: string
- name: pos
sequence: string
- name: neg
sequence: string
- name: task
dtype: string
- name: instruction
struct:
- name: query
dtype: string
- name: pos
dtype: string
- name: neg
dtype: string
splits:
- name: train
num_bytes: 125623207
num_examples: 200000
download_size: 62027848
dataset_size: 125623207
---
# Dataset Card for "gooaq_pairs_1107"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.4445453882217407,
-0.1998874843120575,
0.2009512484073639,
0.1391819715499878,
-0.28407758474349976,
-0.0774754136800766,
0.3334854245185852,
0.042617276310920715,
0.9157143235206604,
0.3955157399177551,
-0.6604104042053223,
-0.5734702348709106,
-0.4406108260154724,
-0.19682927429676056... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
multi-train/PAQ_pairs_1107 | multi-train | 2023-11-10T19:02:41Z | 61 | 0 | null | [
"region:us"
] | 2023-11-10T19:02:41Z | 2023-11-10T19:02:23.000Z | 2023-11-10T19:02:23 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: query
dtype: string
- name: pos
sequence: string
- name: neg
sequence: string
- name: task
dtype: string
- name: instruction
struct:
- name: query
dtype: string
- name: pos
dtype: string
- name: neg
dtype: string
splits:
- name: train
num_bytes: 282748365
num_examples: 200000
download_size: 135658270
dataset_size: 282748365
---
# Dataset Card for "PAQ_pairs_1107"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.535839319229126,
-0.15420633554458618,
0.1704133152961731,
0.40737754106521606,
-0.28715234994888306,
-0.14938318729400635,
0.4721285402774811,
0.25385645031929016,
0.8182978630065918,
0.5502117276191711,
-0.45592543482780457,
-0.5762187242507935,
-0.5532909631729126,
-0.158057779073715... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
kuanhuggingface/google_tts_encodec | kuanhuggingface | 2023-11-14T09:50:31Z | 61 | 0 | null | [
"region:us"
] | 2023-11-14T09:50:31Z | 2023-11-14T09:49:53.000Z | 2023-11-14T09:49:53 | ---
dataset_info:
features:
- name: file_id
dtype: string
- name: instruction
dtype: string
- name: transcription
dtype: string
- name: src_encodec_0
sequence: int64
- name: src_encodec_1
sequence: int64
- name: src_encodec_2
sequence: int64
- name: src_encodec_3
sequence: int64
- name: src_encodec_4
sequence: int64
- name: src_encodec_5
sequence: int64
- name: src_encodec_6
sequence: int64
- name: src_encodec_7
sequence: int64
- name: tgt_encodec_0
sequence: int64
- name: tgt_encodec_1
sequence: int64
- name: tgt_encodec_2
sequence: int64
- name: tgt_encodec_3
sequence: int64
- name: tgt_encodec_4
sequence: int64
- name: tgt_encodec_5
sequence: int64
- name: tgt_encodec_6
sequence: int64
- name: tgt_encodec_7
sequence: int64
splits:
- name: train
num_bytes: 3701639864
num_examples: 90000
- name: validation
num_bytes: 202925396
num_examples: 5000
- name: test
num_bytes: 208941751
num_examples: 5000
download_size: 139109305
dataset_size: 4113507011
---
# Dataset Card for "google_tts_encodec"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.4584638774394989,
-0.27052563428878784,
0.3184760808944702,
0.16832993924617767,
-0.3006158769130707,
0.1603841334581375,
0.011385748162865639,
-0.09619830548763275,
0.9013310074806213,
0.22554197907447815,
-0.7801839113235474,
-0.9434464573860168,
-0.7708173394203186,
0.024943325668573... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
pranjali97/ha-en_RL-grow2_I2_valid | pranjali97 | 2023-11-14T23:47:41Z | 61 | 0 | null | [
"region:us"
] | 2023-11-14T23:47:41Z | 2023-11-14T23:47:40.000Z | 2023-11-14T23:47:40 | ---
dataset_info:
features:
- name: src
dtype: string
- name: ref
dtype: string
- name: mt
dtype: string
- name: score
dtype: float64
splits:
- name: train
num_bytes: 1427995
num_examples: 3339
download_size: 378938
dataset_size: 1427995
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "ha-en_RL-grow2_I2_valid"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.3216221034526825,
-0.590135395526886,
0.08769066631793976,
0.5323556065559387,
-0.15247082710266113,
0.09159112721681595,
0.20125292241573334,
-0.37019142508506775,
0.8468197584152222,
0.4549866318702698,
-0.7804784774780273,
-0.6966936588287354,
-0.6023250222206116,
-0.0369481965899467... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
dmacres/mimiciii-hospitalcourse-meta | dmacres | 2023-11-15T04:07:10Z | 61 | 0 | null | [
"region:us"
] | 2023-11-15T04:07:10Z | 2023-11-15T03:55:19.000Z | 2023-11-15T03:55:19 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
dataset_info:
features:
- name: subject_id
dtype: int64
- name: hadm_id
dtype: float64
- name: target_text
dtype: string
- name: extractive_notes_summ
dtype: string
- name: n_notes
dtype: int64
- name: notes
list:
- name: category
dtype: string
- name: chartdate
dtype: string
- name: description
dtype: string
- name: row_id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 1518715010
num_examples: 24993
- name: validation
num_bytes: 342865059
num_examples: 5356
- name: test
num_bytes: 326661857
num_examples: 5356
download_size: 896512070
dataset_size: 2188241926
---
# Dataset Card for "mimiciii-hospitalcourse-meta"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.4751433730125427,
-0.15117155015468597,
0.3815416693687439,
-0.06880443543195724,
-0.07326710224151611,
-0.044950734823942184,
0.4433152675628662,
-0.17229215800762177,
0.9276973009109497,
0.5502626895904541,
-0.948083758354187,
-0.6564419269561768,
-0.42889657616615295,
-0.031372793018... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
wetdog/parlament_parla_ecapa_emb | wetdog | 2023-11-20T11:39:03Z | 61 | 0 | null | [
"region:us"
] | 2023-11-20T11:39:03Z | 2023-11-15T13:16:16.000Z | 2023-11-15T13:16:16 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
dataset_info:
features:
- name: path
dtype: string
- name: speaker_id
dtype: int64
- name: sentence
dtype: string
- name: gender
dtype:
class_label:
names:
'0': F
'1': M
- name: duration
dtype: float64
- name: embeddings
sequence: float64
splits:
- name: train
num_bytes: 140554656
num_examples: 78976
- name: validation
num_bytes: 3802467
num_examples: 2150
- name: test
num_bytes: 3783863
num_examples: 2138
download_size: 133275777
dataset_size: 148140986
---
# Dataset Card for "parlament_parla_ecapa_emb"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.6452165246009827,
-0.6701824069023132,
0.2679431140422821,
0.48864415287971497,
-0.36840203404426575,
-0.1870107352733612,
0.2613341212272644,
0.0694381520152092,
1.0557290315628052,
0.3682050406932831,
-0.4913552403450012,
-0.9271705746650696,
-0.6043784618377686,
-0.005987992975860834... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
pranjali97/ha-en_RL-grow2_I2_train | pranjali97 | 2023-11-15T21:37:10Z | 61 | 0 | null | [
"region:us"
] | 2023-11-15T21:37:10Z | 2023-11-15T21:37:07.000Z | 2023-11-15T21:37:07 | ---
dataset_info:
features:
- name: src
dtype: string
- name: ref
dtype: string
- name: mt
dtype: string
- name: score
dtype: float64
splits:
- name: train
num_bytes: 12523140
num_examples: 29454
download_size: 3280720
dataset_size: 12523140
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "ha-en_RL-grow2_I2_train"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.4424634277820587,
-0.3823070824146271,
0.05924990400671959,
0.45675128698349,
-0.08235148340463638,
0.10945960879325867,
0.3247954249382019,
-0.3362351059913635,
0.9137508869171143,
0.3895832598209381,
-0.964982807636261,
-0.48103225231170654,
-0.6245638132095337,
-0.2658759653568268,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
gj1997/webmd | gj1997 | 2023-11-21T07:52:45Z | 61 | 0 | null | [
"region:us"
] | 2023-11-21T07:52:45Z | 2023-11-21T07:51:38.000Z | 2023-11-21T07:51:38 | Entry not found | [
-0.32276472449302673,
-0.22568407654762268,
0.8622258901596069,
0.4346148371696472,
-0.5282984972000122,
0.7012965679168701,
0.7915717363357544,
0.07618629932403564,
0.7746022939682007,
0.2563222646713257,
-0.785281777381897,
-0.22573848068714142,
-0.9104482531547546,
0.5715669393539429,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
lhoestq/custom_squad | lhoestq | 2022-10-25T09:50:53Z | 60 | 0 | null | [
"task_categories:question-answering",
"task_ids:extractive-qa",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:extended|wikipedia",
"language:en",
"license:cc-by-4.0",
... | 2022-10-25T09:50:53Z | 2022-03-02T23:29:22.000Z | 2022-03-02T23:29:22 | ---
annotations_creators:
- crowdsourced
language_creators:
- crowdsourced
- found
language:
- en
license:
- cc-by-4.0
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- extended|wikipedia
task_categories:
- question-answering
task_ids:
- extractive-qa
---
# Dataset Card for "squad"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits Sample Size](#data-splits-sample-size)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://rajpurkar.github.io/SQuAD-explorer/](https://rajpurkar.github.io/SQuAD-explorer/)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 33.51 MB
- **Size of the generated dataset:** 85.75 MB
- **Total amount of disk used:** 119.27 MB
### Dataset Summary
This dataset is a custom copy of the original SQuAD dataset. It is used to showcase dataset repositories. The data are the same as in the original dataset.
Stanford Question Answering Dataset (SQuAD) is a reading comprehension dataset, consisting of questions posed by crowdworkers on a set of Wikipedia articles, where the answer to every question is a segment of text, or span, from the corresponding reading passage, or the question might be unanswerable.
### Supported Tasks
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Structure
We show detailed information for up to 5 configurations of the dataset.
### Data Instances
#### plain_text
- **Size of downloaded dataset files:** 33.51 MB
- **Size of the generated dataset:** 85.75 MB
- **Total amount of disk used:** 119.27 MB
An example of 'train' looks as follows.
```
{
"answers": {
"answer_start": [1],
"text": ["This is a test text"]
},
"context": "This is a test context.",
"id": "1",
"question": "Is this a test?",
"title": "train test"
}
```
### Data Fields
The data fields are the same among all splits.
#### plain_text
- `id`: a `string` feature.
- `title`: a `string` feature.
- `context`: a `string` feature.
- `question`: a `string` feature.
- `answers`: a dictionary feature containing:
- `text`: a `string` feature.
- `answer_start`: a `int32` feature.
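Given instances shaped like the fields above, a quick sanity check is to verify that each `answer_start` offset actually points at its answer string inside the context. This is an illustrative sketch (the helper name and sample values are invented here, not part of the dataset):

```python
def answers_aligned(example):
    """Return True when every answer string occurs at its answer_start offset."""
    context = example["context"]
    answers = example["answers"]
    return all(
        context[start:start + len(text)] == text
        for start, text in zip(answers["answer_start"], answers["text"])
    )

# Invented sample in the same shape as the 'train' instance shown above.
example = {
    "context": "This is a test context.",
    "answers": {"answer_start": [10, 15], "text": ["test", "context"]},
}
print(answers_aligned(example))  # True
```

A check like this is useful before training an extractive-QA model, since misaligned offsets silently corrupt the span labels.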
### Data Splits Sample Size
| name |train|validation|
|----------|----:|---------:|
|plain_text|87599| 10570|
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
[More Information Needed]
### Annotations
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
```
@article{2016arXiv160605250R,
author = {{Rajpurkar}, Pranav and {Zhang}, Jian and {Lopyrev},
Konstantin and {Liang}, Percy},
title = "{SQuAD: 100,000+ Questions for Machine Comprehension of Text}",
journal = {arXiv e-prints},
year = 2016,
eid = {arXiv:1606.05250},
pages = {arXiv:1606.05250},
archivePrefix = {arXiv},
eprint = {1606.05250},
}
```
### Contributions
Thanks to [@lewtun](https://github.com/lewtun), [@albertvillanova](https://github.com/albertvillanova), [@patrickvonplaten](https://github.com/patrickvonplaten), [@thomwolf](https://github.com/thomwolf) for adding this dataset. | [
-0.6052870750427246,
-0.6674754619598389,
0.08309328556060791,
0.2539057433605194,
-0.10081886500120163,
0.20678205788135529,
-0.10414336621761322,
-0.2693823277950287,
0.4786950945854187,
0.35232827067375183,
-1.1609272956848145,
-0.7588802576065063,
-0.37916454672813416,
0.26765772700309... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
midas/duc2001 | midas | 2022-01-23T06:13:06Z | 60 | 1 | null | [
"region:us"
] | 2022-01-23T06:13:06Z | 2022-03-02T23:29:22.000Z | 2022-03-02T23:29:22 | ## Dataset Summary
A dataset for benchmarking keyphrase extraction and generation techniques from English news articles. For more details about the dataset please refer the original paper - [https://dl.acm.org/doi/10.5555/1620163.1620205](https://dl.acm.org/doi/10.5555/1620163.1620205)
Original source of the data - []()
## Dataset Structure
### Data Fields
- **id**: unique identifier of the document.
- **document**: Whitespace separated list of words in the document.
- **doc_bio_tags**: BIO tags for each word in the document. B stands for the beginning of a keyphrase and I stands for inside the keyphrase. O stands for outside the keyphrase and represents the word that isn't a part of the keyphrase at all.
- **extractive_keyphrases**: List of all the present keyphrases.
- **abstractive_keyphrases**: List of all the absent keyphrases.
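Since the present keyphrases are encoded with the B/I/O scheme described above, they can be recovered from a tagged document with a small helper. This is an illustrative sketch (the function is not part of the dataset tooling), run on a shortened version of the sample in the Usage section:

```python
def decode_bio(tokens, tags):
    """Collect B/I runs into whitespace-joined, lowercased keyphrases."""
    phrases, current = [], []
    for token, tag in zip(tokens, tags):
        if tag == "B":                # a new keyphrase starts here
            if current:
                phrases.append(" ".join(current))
            current = [token]
        elif tag == "I" and current:  # continue the open keyphrase
            current.append(token)
        else:                         # "O" (or a stray "I") closes any open phrase
            if current:
                phrases.append(" ".join(current))
            current = []
    if current:
        phrases.append(" ".join(current))
    return [phrase.lower() for phrase in phrases]

tokens = ["the", "crash", "of", "Pan", "American", "World", "Airways", "Flight", "103"]
tags   = ["O",   "B",     "O",  "B",   "I",        "I",     "I",       "I",      "I"]
print(decode_bio(tokens, tags))
# ['crash', 'pan american world airways flight 103']
```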
### Data Splits
|Split| #datapoints |
|--|--|
| Test | 308 |
## Usage
### Full Dataset
```python
from datasets import load_dataset
# get entire dataset
dataset = load_dataset("midas/duc2001", "raw")
# sample from the test split
print("Sample from test data split")
test_sample = dataset["test"][0]
print("Fields in the sample: ", [key for key in test_sample.keys()])
print("Tokenized Document: ", test_sample["document"])
print("Document BIO Tags: ", test_sample["doc_bio_tags"])
print("Extractive/present Keyphrases: ", test_sample["extractive_keyphrases"])
print("Abstractive/absent Keyphrases: ", test_sample["abstractive_keyphrases"])
print("\n-----------\n")
```
**Output**
```bash
Sample from test data split
Fields in the sample: ['id', 'document', 'doc_bio_tags', 'extractive_keyphrases', 'abstractive_keyphrases', 'other_metadata']
Tokenized Document: ['Here', ',', 'at', 'a', 'glance', ',', 'are', 'developments', 'today', 'involving', 'the', 'crash', 'of', 'Pan', 'American', 'World', 'Airways', 'Flight', '103', 'Wednesday', 'night', 'in', 'Lockerbie', ',', 'Scotland', ',', 'that', 'killed', 'all', '259', 'people', 'aboard', 'and', 'more', 'than', '20', 'people', 'on', 'the', 'ground', ':']
Document BIO Tags: ['O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'B', 'O', 'B', 'I', 'I', 'I', 'I', 'I', 'O', 'O', 'O', 'B', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O']
Extractive/present Keyphrases: ['pan american world airways flight 103', 'crash', 'lockerbie']
Abstractive/absent Keyphrases: ['terrorist threats', 'widespread wreckage', 'radical palestinian faction', 'terrorist bombing', 'bomb threat', 'sabotage']
-----------
```
### Keyphrase Extraction
```python
from datasets import load_dataset
# get the dataset only for keyphrase extraction
dataset = load_dataset("midas/duc2001", "extraction")
print("Samples for Keyphrase Extraction")
# sample from the test split
print("Sample from test data split")
test_sample = dataset["test"][0]
print("Fields in the sample: ", [key for key in test_sample.keys()])
print("Tokenized Document: ", test_sample["document"])
print("Document BIO Tags: ", test_sample["doc_bio_tags"])
print("\n-----------\n")
```
### Keyphrase Generation
```python
# get the dataset only for keyphrase generation
dataset = load_dataset("midas/duc2001", "generation")
print("Samples for Keyphrase Generation")
# sample from the test split
print("Sample from test data split")
test_sample = dataset["test"][0]
print("Fields in the sample: ", [key for key in test_sample.keys()])
print("Tokenized Document: ", test_sample["document"])
print("Extractive/present Keyphrases: ", test_sample["extractive_keyphrases"])
print("Abstractive/absent Keyphrases: ", test_sample["abstractive_keyphrases"])
print("\n-----------\n")
```
## Citation Information
```
@inproceedings{10.5555/1620163.1620205,
author = {Wan, Xiaojun and Xiao, Jianguo},
title = {Single Document Keyphrase Extraction Using Neighborhood Knowledge},
year = {2008},
isbn = {9781577353683},
publisher = {AAAI Press},
booktitle = {Proceedings of the 23rd National Conference on Artificial Intelligence - Volume 2},
pages = {855--860},
numpages = {6},
location = {Chicago, Illinois},
series = {AAAI'08}
}
```
## Contributions
Thanks to [@debanjanbhucs](https://github.com/debanjanbhucs), [@dibyaaaaax](https://github.com/dibyaaaaax) and [@ad6398](https://github.com/ad6398) for adding this dataset
| [
-0.17217528820037842,
-0.5865203738212585,
0.4136922359466553,
0.035552430897951126,
-0.22845417261123657,
0.19467571377754211,
-0.11603598296642303,
-0.1060149222612381,
0.1323339343070984,
0.1482684463262558,
-0.6153982281684875,
-0.9096857309341431,
-0.4925568401813507,
0.56596684455871... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
shibing624/source_code | shibing624 | 2022-10-30T06:30:07Z | 60 | 4 | null | [
"task_categories:text-generation",
"task_ids:language-modeling",
"annotations_creators:no-annotation",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:100M<n<200M",
"source_datasets:https://github.com/shibing624/code-autocomplete",
"source_datasets:https://github.com/... | 2022-10-30T06:30:07Z | 2022-03-02T23:29:22.000Z | 2022-03-02T23:29:22 | ---
annotations_creators:
- no-annotation
language_creators:
- crowdsourced
language:
- en
license:
- cc-by-4.0
- gfdl
multilinguality:
- monolingual
size_categories:
- 100M<n<200M
source_datasets:
- https://github.com/shibing624/code-autocomplete
- https://github.com/bharathgs/Awesome-pytorch-list
- https://github.com/akullpp/awesome-java
- https://github.com/fffaraz/awesome-cpp
task_categories:
- text-generation
task_ids:
- language-modeling
---
# Dataset Card for "SourceCode"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Repository:** [code-autocomplete](https://github.com/shibing624/code-autocomplete)
- **Leaderboard:** [leaderboard](https://github.com/shibing624/code-autocomplete) (located on the homepage)
- **Size of downloaded dataset files:** 105 MB
- **Total amount of disk used:** 570 MB
### Dataset Summary
The Source Code dataset is a collection of GitHub "awesome" repos; it contains Python, Java, C++, and other programming languages.
This dataset can be used for different NLP tasks such as language modeling and text generation.
data source:
- PYTHON_CODE: https://github.com/bharathgs/Awesome-pytorch-list
- JAVA_CODE: https://github.com/akullpp/awesome-java
- CPP_CODE: https://github.com/fffaraz/awesome-cpp
### Supported Tasks and Leaderboards
- language modeling
- code generation tasks, **Leaderboard:** [code-autocomplete](https://github.com/shibing624/code-autocomplete)
### Languages
- programming languages: Python, Java, C++
- natural language: English
## Dataset Structure
### Data Instances
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"text": """
import json
import argparse
def _parse_args():
parser = argparse.ArgumentParser(
description=__doc__,
formatter_class=argparse.RawTextHelpFormatter,
)
parser.add_argument(
'--model-file',
required=True,
help=(
'A pt file from '
'https://github.com/pytorch/fairseq/tree/main/examples/hubert'
)
)
return parser.parse_args()
"""
}
```
### Data Fields
The data fields are the same among all splits.
- `text`: a `string` feature.
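Since each sample carries a single `text` field, language-modeling pipelines typically cut it into fixed-length blocks before training. A minimal sketch (whitespace tokens stand in for a real code tokenizer; the helper is illustrative, not part of this repo):

```python
def chunk_for_lm(text, window=8, stride=8):
    """Cut a sample's `text` into fixed-size whitespace-token windows."""
    tokens = text.split()
    return [
        " ".join(tokens[i:i + window])
        for i in range(0, len(tokens), stride)
        if tokens[i:i + window]
    ]

sample = "def add(a, b):\n    return a + b"
print(chunk_for_lm(sample, window=4, stride=4))
# ['def add(a, b): return', 'a + b']
```

With `stride < window` the windows would overlap, which is a common choice when training GPT-style models on long files.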
### Data Splits
#### python
```shell
$ wc -l python/*
10000 python/test.txt
5215412 python/train.txt
10000 python/valid.txt
5235412 total
```
#### java
```shell
$ wc -l java/*
950083 java/test.txt
2802880 java/train.txt
940803 java/valid.txt
4693766 total
```
#### cpp
```shell
$ wc -l cpp/*
1060014 cpp/test.txt
3119241 cpp/train.txt
1099124 cpp/valid.txt
5278379 total
```
## Dataset Creation
### Curation Rationale
As a code generation dataset, I uploaded it to Hugging Face Datasets.
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
Citation:
APA:
```latex
Xu, M. code-autocomplete: Code AutoComplete with GPT2 model (Version 0.0.4) [Computer software]. https://github.com/shibing624/code-autocomplete
```
BibTeX:
```latex
@software{Xu_code-autocomplete_Code_AutoComplete,
author = {Xu, Ming},
title = {code-autocomplete: Code AutoComplete with GPT2 model},
url = {https://github.com/shibing624/code-autocomplete},
version = {0.0.4}
}
```
### Annotations
#### Annotation process
#### Who are the annotators?
nobody
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
This dataset was developed as a benchmark for evaluating code generation models.
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
Github awesome programing code repos.
### Licensing Information
GNU Free Documentation License v1.3 or later.
For research use only.
### Contributions
Thanks to [@shibing624](https://github.com/shibing624) for adding this dataset.
| [
-0.4234238266944885,
-0.4599851667881012,
0.11038532108068466,
0.22974981367588043,
-0.02646145597100258,
0.1593031883239746,
-0.4606814682483673,
-0.3312138617038727,
0.24797935783863068,
0.3002140522003174,
-0.45944011211395264,
-0.8116132020950317,
-0.6194596886634827,
0.121671751141548... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
chrishuber/kaggle_mnli | chrishuber | 2022-04-23T19:19:52Z | 60 | 1 | null | [
"arxiv:1704.05426",
"region:us"
] | 2022-04-23T19:19:52Z | 2022-04-23T18:16:05.000Z | 2022-04-23T18:16:05 | # Dataset Card for [Kaggle MNLI]
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://www.kaggle.com/c/multinli-matched-open-evaluation
- **Repository:** chrishuber/roberta-retrained-mlni
- **Paper:** Inference Detection in NLP Using the MultiNLI and SNLI Datasets
- **Leaderboard:** 8
- **Point of Contact:** chrish@sfsu.edu
### Dataset Summary
[These are the datasets posted to Kaggle for an inference detection NLP competition. Moving them here to use with PyTorch.]
### Supported Tasks and Leaderboards
Provides train and validation data for sentence pairs with inference labels.
[https://www.kaggle.com/competitions/multinli-matched-open-evaluation/leaderboard]
[https://www.kaggle.com/competitions/multinli-mismatched-open-evaluation/leaderboard]
### Languages
[JSON, Python]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[Reposted from https://www.kaggle.com/c/multinli-matched-open-evaluation and https://www.kaggle.com/c/multinli-mismatched-open-evaluation]
### Source Data
#### Initial Data Collection and Normalization
[Please see the article at https://arxiv.org/abs/1704.05426 which discusses the creation of the MNLI dataset.]
#### Who are the source language producers?
[Please see the article at https://arxiv.org/abs/1704.05426 which discusses the creation of the MNLI dataset.]
### Annotations
#### Annotation process
[Crowdsourcing using Mechanical Turk.]
#### Who are the annotators?
[MechanicalTurk users.]
### Personal and Sensitive Information
[None.]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[Kaggle]
### Licensing Information
[More Information Needed]
### Citation Information
[https://www.kaggle.com/c/multinli-matched-open-evaluation]
[https://www.kaggle.com/c/multinli-mismatched-open-evaluation]
### Contributions
Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset. | [
-0.37645357847213745,
-0.4978448450565338,
0.04546539857983589,
0.290174275636673,
0.14453719556331635,
-0.1904318928718567,
-0.3673592209815979,
-0.28915852308273315,
0.39389753341674805,
0.42801520228385925,
-0.8354663252830505,
-0.6849125623703003,
-0.5069937705993652,
0.243186622858047... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
cjvt/sentinews | cjvt | 2022-08-17T06:28:13Z | 60 | 1 | null | [
"task_categories:text-classification",
"task_ids:sentiment-classification",
"annotations_creators:crowdsourced",
"language_creators:found",
"multilinguality:monolingual",
"source_datasets:original",
"language:sl",
"license:cc-by-sa-4.0",
"slovenian sentiment",
"news articles",
"region:us"
] | 2022-08-17T06:28:13Z | 2022-08-15T08:32:30.000Z | 2022-08-15T08:32:30 | ---
annotations_creators:
- crowdsourced
language:
- sl
language_creators:
- found
license:
- cc-by-sa-4.0
multilinguality:
- monolingual
pretty_name: SentiNews
size_categories: []
source_datasets:
- original
tags:
- slovenian sentiment
- news articles
task_categories:
- text-classification
task_ids:
- sentiment-classification
---
# Dataset Card for SentiNews
## Dataset Description
- **Homepage:** https://github.com/19Joey85/Sentiment-annotated-news-corpus-and-sentiment-lexicon-in-Slovene
- **Paper:** Bučar, J., Žnidaršič, M. & Povh, J. Annotated news corpora and a lexicon for sentiment analysis in Slovene. Lang Resources & Evaluation 52, 895–919 (2018). https://doi.org/10.1007/s10579-018-9413-3
### Dataset Summary
SentiNews is a Slovenian sentiment classification dataset, consisting of news articles manually annotated with their sentiment by between two and six annotators.
It is annotated at three granularities:
- document-level (config `document_level`, 10 427 documents),
- paragraph-level (config `paragraph_level`, 89 999 paragraphs), and
- sentence-level (config `sentence_level`, 168 899 sentences).
### Supported Tasks and Leaderboards
Sentiment classification, three classes (negative, neutral, positive).
### Languages
Slovenian.
## Dataset Structure
### Data Instances
A sample instance from the sentence-level config:
```
{
'nid': 2,
  'content': 'Vilo Prešeren je na dražbi ministrstva za obrambo kupilo nepremičninsko podjetje Condor Real s sedežem v Lescah.',
'sentiment': 'neutral',
'pid': 1,
'sid': 1
}
```
### Data Fields
The data fields are similar among all three configs, with the only difference being the IDs.
- `nid`: a uint16 containing a unique ID of the news article (document).
- `content`: a string containing the body of the news article
- `sentiment`: the sentiment of the instance
- `pid`: a uint8 containing the consecutive number of the paragraph inside the current news article, **not unique** (present in the configs `paragraph_level` and `sentence_level`)
- `sid`: a uint8 containing the consecutive number of the sentence inside the current paragraph, **not unique** (present in the config `sentence_level`)
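Because `pid` and `sid` are only consecutive within their parents and **not unique** on their own, sentence-level rows have to be grouped by `(nid, pid)` and ordered by `sid` to rebuild a paragraph. A minimal sketch (the helper and the two sample rows are invented for illustration):

```python
from collections import defaultdict

def rebuild_paragraphs(rows):
    """Group sentence-level rows into paragraph texts keyed by (nid, pid)."""
    grouped = defaultdict(list)
    for row in rows:
        grouped[(row["nid"], row["pid"])].append((row["sid"], row["content"]))
    return {
        key: " ".join(content for _, content in sorted(sentences))
        for key, sentences in grouped.items()
    }

# Invented rows in the shape of the sentence-level instance shown above.
rows = [
    {"nid": 2, "pid": 1, "sid": 2, "content": "Druga poved."},
    {"nid": 2, "pid": 1, "sid": 1, "content": "Prva poved."},
]
print(rebuild_paragraphs(rows))  # {(2, 1): 'Prva poved. Druga poved.'}
```

The same keying scheme (adding `sid`) works for aligning sentence-level labels back onto the paragraph- and document-level configs.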
## Additional Information
### Dataset Curators
Jože Bučar, Martin Žnidaršič, Janez Povh.
### Licensing Information
CC BY-SA 4.0
### Citation Information
```
@article{buvcar2018annotated,
title={Annotated news corpora and a lexicon for sentiment analysis in Slovene},
author={Bu{\v{c}}ar, Jo{\v{z}}e and {\v{Z}}nidar{\v{s}}i{\v{c}}, Martin and Povh, Janez},
journal={Language Resources and Evaluation},
volume={52},
number={3},
pages={895--919},
year={2018},
publisher={Springer}
}
```
### Contributions
Thanks to [@matejklemen](https://github.com/matejklemen) for adding this dataset.
| [
-0.36077937483787537,
-0.35246768593788147,
0.21443253755569458,
0.5690546631813049,
-0.43050599098205566,
-0.18173988163471222,
-0.38051313161849976,
-0.14702998101711273,
0.2670072317123413,
0.35400816798210144,
-0.8634251356124878,
-1.1783215999603271,
-0.6085936427116394,
0.24746353924... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
heegyu/kowikitext | heegyu | 2022-10-02T05:07:59Z | 60 | 2 | null | [
"license:cc-by-sa-3.0",
"region:us"
] | 2022-10-02T05:07:59Z | 2022-10-02T02:40:05.000Z | 2022-10-02T02:40:05 | ---
license: cc-by-sa-3.0
---
Korean Wikipedia article dump (20221001)
- 1334694 rows
- download size: 474MB
```python
from datasets import load_dataset
ds = load_dataset("heegyu/kowikitext", "20221001")
ds["train"][0]
```
```
{'id': '5',
'revid': '595831',
'url': 'https://ko.wikipedia.org/wiki?curid=5',
 'title': '지미 카터',
 'text': '제임스 얼 카터 주니어(, 1924년 10월 1일 ~ )는 민주당 출신 미국 39대 대통령 (1977년 ~ 1981년)이다.\n생애.\n어린 시절.\n지미 카터는 조지아주 섬터 카운티 플레인스 마을에서 태어났다.\n조지아 공과대학교를 졸업하였다. 그 후 해군에 들어가 잠수함·원자력·전함의 승무원으로 일하였다. 1953년 미국 해군 대위로 예편하였고 이후 땅콩·면화 등을 가꿔 많은 돈을 벌었다. 그의 별명이 "땅콩 농부" (Peanut Farmer)로 알려졌다.\n정계 입문.\n1962년 조지아주 상원 의원 선거에서 낙선하나 그 선거가 부정 선거였음을 ... '
}
```
| [
-0.47674083709716797,
-0.5947266817092896,
0.1848495900630951,
0.6855568289756775,
-0.31097927689552307,
-0.3020799458026886,
-0.06052715331315994,
-0.34046322107315063,
0.5055907964706421,
0.4970276355743408,
-0.5034019351005554,
-0.5871077179908752,
-0.45611655712127686,
0.54809904098510... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Gxg/Math23K | Gxg | 2022-10-06T05:21:22Z | 60 | 16 | null | [
"region:us"
] | 2022-10-06T05:21:22Z | 2022-10-06T05:16:18.000Z | 2022-10-06T05:16:18 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622263669967651,
0.43461522459983826,
-0.52829909324646,
0.7012971639633179,
0.7915719747543335,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104475975036621,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
language-and-voice-lab/samromur_children | language-and-voice-lab | 2023-10-15T16:02:44Z | 60 | 2 | null | [
"task_categories:automatic-speech-recognition",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"language:is",
"license:cc-by-4.0",
"samromur",
"children's speech",
"icelandic: iceland"... | 2023-10-15T16:02:44Z | 2022-11-26T03:15:54.000Z | 2022-11-26T03:15:54 | ---
annotations_creators:
- crowdsourced
language:
- is
language_creators:
- crowdsourced
license:
- cc-by-4.0
multilinguality:
- monolingual
pretty_name: "Samrómur Children Icelandic Speech 1.0"
size_categories:
- 100K<n<1M
source_datasets:
- original
tags:
- "samromur"
- children's speech
- 'icelandic: iceland'
- icelandic children
- icelandic kids
- kids
task_categories:
- automatic-speech-recognition
task_ids: []
---
# Dataset Card for samromur_children
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [Samrómur Children Icelandic Speech 1.0](https://samromur.is/)
- **Repository:** [LDC](https://catalog.ldc.upenn.edu/LDC2022S11)
- **Paper:** [Samrómur Children: An Icelandic Speech Corpus](https://aclanthology.org/2022.lrec-1.105.pdf)
- **Point of Contact:** [Carlos Mena](mailto:carlos.mena@ciempiess.org), [Jón Guðnason](mailto:jg@ru.is)
### Dataset Summary
The Samrómur Children Corpus consists of audio recordings and metadata files containing prompts read by the participants. It contains more than 137000 validated speech recordings uttered by Icelandic children.
The corpus is a result of the crowd-sourcing effort run by the Language and Voice Lab (LVL) at Reykjavik University, in cooperation with Almannarómur, Center for Language Technology. The recording process started in October 2019 and continues to this day (September 2021).
### Example Usage
The Samrómur Children Corpus is divided into three splits: train, validation, and test. To load the full corpus do:
```python
from datasets import load_dataset
samromur_children = load_dataset("language-and-voice-lab/samromur_children")
```
To load a specific split (for example, the validation split) do:
```python
from datasets import load_dataset
samromur_children = load_dataset("language-and-voice-lab/samromur_children",split="validation")
```
### Supported Tasks
automatic-speech-recognition: The dataset can be used to train a model for Automatic Speech Recognition (ASR). The model is presented with an audio file and asked to transcribe the audio file to written text. The most common evaluation metric is the word error rate (WER).
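The WER mentioned above is the word-level edit distance divided by the number of reference words. As a sketch, a minimal stdlib implementation (in practice a library such as `jiwer` is normally used for scoring):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edit distance / number of reference words."""
    ref, hyp = reference.split(), hypothesis.split()
    # d[i][j] = edit distance between ref[:i] and hyp[:j]
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            substitution = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            d[i][j] = min(substitution, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return d[len(ref)][len(hyp)] / len(ref)

print(wer("hin unga bylting", "hin unga bylting"))  # 0.0
```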
### Languages
The audio is in Icelandic.
The reading prompts were gathered from a variety of sources, mainly from the [Icelandic Gigaword Corpus](http://clarin.is/en/resources/gigaword). The corpus includes text from novels, news, plays, and from a list of location names in Iceland. The prompts also came from the [Icelandic Web of Science](https://www.visindavefur.is/).
## Dataset Structure
### Data Instances
```python
{
'audio_id': '015652-0717240',
'audio': {
'path': '/home/carlos/.cache/HuggingFace/datasets/downloads/extracted/2c6b0d82de2ef0dc0879732f726809cccbe6060664966099f43276e8c94b03f2/test/015652/015652-0717240.flac',
'array': array([ 0. , 0. , 0. , ..., -0.00311279,
-0.0007019 , 0.00128174], dtype=float32),
'sampling_rate': 16000
},
'speaker_id': '015652',
'gender': 'female',
'age': '11',
'duration': 4.179999828338623,
 'normalized_text': 'eiginlega var hann hin unga rússneska bylting lifandi komin'
}
```
### Data Fields
* `audio_id` (string) - id of audio segment
* `audio` (datasets.Audio) - a dictionary containing the path to the audio, the decoded audio array, and the sampling rate. In non-streaming mode (default), the path points to the locally extracted audio. In streaming mode, the path is the relative path of an audio inside its archive (as files are not downloaded and extracted locally).
* `speaker_id` (string) - id of speaker
* `gender` (string) - gender of speaker (male or female)
* `age` (string) - age of the speaker; participants are aged between 4 and 17 years.
* `duration` (float32) - duration of the audio file in seconds.
* `normalized_text` (string) - normalized audio segment transcription.
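For instance, the `duration` field can be recomputed from the decoded `audio` field; a small sketch (the array here is a dummy of the right length rather than real audio):

```python
def clip_duration(audio: dict) -> float:
    """Duration in seconds of a decoded `audio` field (array + sampling_rate)."""
    return len(audio["array"]) / audio["sampling_rate"]

# 66880 samples at 16 kHz -> 4.18 s, matching the example instance above.
example = {"array": [0.0] * 66880, "sampling_rate": 16000}
print(clip_duration(example))  # 4.18
```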
### Data Splits
The corpus is split into train, dev, and test portions. The lengths of the portions are: train = 127h25m, test = 1h50m, dev = 1h50m.
To load a specific portion, please see the section "Example Usage" above.
## Dataset Creation
### Curation Rationale
In the field of Automatic Speech Recognition (ASR) it is a known fact that children's speech is particularly hard to recognise due to the high variability produced by developmental changes in children's anatomy and speech production skills.
For this reason, the selection criteria for the train/dev/test portions have to take the children's age into account. Nevertheless, Samrómur Children is an unbalanced corpus in terms of the gender and age of the speakers. For example, the corpus has a total of 1667 female speakers (73h38m) versus 1412 male speakers (52h26m).
These imbalances constrain the types of experiments that can be performed with the corpus. For example, an equal number of female and male speakers across certain age ranges is impossible. So, if one can't have a perfectly balanced corpus in the training set, at least one can have it in the test portion.
The test portion of Samrómur Children was meticulously selected to cover ages between 6 and 16 years for both female and male speakers. Each of these age ranges in both genders has a total duration of 5 minutes.
The development portion of the corpus contains only speakers with unknown gender information. Both the test and dev sets have a total duration of 1h50m each.
In order to perform fairer experiments, no speakers are shared between the train and test sets. There is, however, one speaker shared between the train and development sets, identifiable by the speaker ID=010363; no audio files are shared between these two sets.
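The speaker-disjointness property described above can be checked with a few lines of stdlib Python; the toy rows below reuse the documented IDs 015652 and 010363, while 020001 is a made-up placeholder:

```python
def shared_speakers(split_a, split_b):
    """Return the set of speaker IDs appearing in both splits of rows."""
    ids_a = {row["speaker_id"] for row in split_a}
    ids_b = {row["speaker_id"] for row in split_b}
    return ids_a & ids_b

train = [{"speaker_id": "015652"}, {"speaker_id": "010363"}]
dev = [{"speaker_id": "010363"}]
test = [{"speaker_id": "020001"}]

print(shared_speakers(train, test))  # set() -> disjoint, as the card states
print(shared_speakers(train, dev))   # {'010363'} -> the one documented overlap
```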
### Source Data
#### Initial Data Collection and Normalization
The data was collected using the website https://samromur.is, code of which is available at https://github.com/cadia-lvl/samromur. The age range selected for this corpus is between 4 and 17 years.
The original audio was collected at 44.1 kHz or 48 kHz sampling rate as *.wav files, which was down-sampled to 16 kHz and converted to *.flac. Each recording contains one read sentence from a script. The script contains 85.080 unique sentences and 90.838 unique tokens.
There was no identifier other than the session ID, which is used as the speaker ID. The corpus is distributed with a metadata file with detailed information on each utterance and speaker. The metadata file is encoded as UTF-8 Unicode.
The prompts were gathered from a variety of sources, mainly from The Icelandic Gigaword Corpus, which is available at http://clarin.is/en/resources/gigaword. The corpus includes text from novels, news, plays, and from a list of location names in Iceland. The prompts also came from the [Icelandic Web of Science](https://www.visindavefur.is/).
### Annotations
#### Annotation process
Prompts were pulled from these corpora if they met the criteria of having only letters which are present in the Icelandic alphabet, and if they are listed in the [DIM: Database Icelandic Morphology](https://aclanthology.org/W19-6116.pdf).
There are also synthesised prompts consisting of a name followed by a question or a demand, in order to simulate a dialogue with a smart-device.
#### Who are the annotators?
The audio files' content was manually verified against the prompts by one or more listeners (mainly summer students).
### Personal and Sensitive Information
The dataset consists of recordings from people who have donated their voices. You agree not to attempt to determine the identity of speakers in this dataset.
## Considerations for Using the Data
### Social Impact of Dataset
This is the first ASR corpus of Icelandic children.
### Discussion of Biases
* The utterances were recorded by a smartphone or the web app.
* Participants self-reported their age group, gender, and the native language.
* Participants are aged between 4 to 17 years.
* The corpus contains 137597 utterances from 3175 speakers, totalling 131 hours.
* The amount of data due to female speakers is 73h38m, the amount due to male speakers is 52h26m, and the amount due to speakers with unknown gender information is 05h02m.
* The number of female speakers is 1667 and the number of male speakers is 1412. The number of speakers with unknown gender information is 96.
* The audios due to female speakers number 78993, those due to male speakers 53927, and those due to speakers with unknown gender information 4677.
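The per-gender counts above add up to the reported corpus totals; a quick stdlib sanity check:

```python
# Reported per-gender counts from the list above.
utterances = {"female": 78993, "male": 53927, "unknown": 4677}
speakers = {"female": 1667, "male": 1412, "unknown": 96}

total_utterances = sum(utterances.values())
total_speakers = sum(speakers.values())
print(total_utterances, total_speakers)  # 137597 3175, as stated in the card
```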
### Other Known Limitations
"Samrรณmur Children: Icelandic Speech 21.09" by the Language and Voice Laboratory (LVL) at the Reykjavik University is licensed under a Creative Commons Attribution 4.0 International (CC BY 4.0) License with the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
## Additional Information
### Dataset Curators
The corpus is a result of the crowd-sourcing effort run by the Language and Voice Lab (LVL) at Reykjavik University, in cooperation with Almannarómur, Center for Language Technology. The recording process started in October 2019 and continues to this day (September 2021). The corpus was curated by Carlos Daniel Hernández Mena in 2021.
### Licensing Information
[CC-BY-4.0](https://creativecommons.org/licenses/by/4.0/)
### Citation Information
```
@misc{menasamromurchildren2021,
title={Samrómur Children Icelandic Speech 1.0},
ldc_catalog_no={LDC2022S11},
DOI={https://doi.org/10.35111/frrj-qd60},
author={Hernández Mena, Carlos Daniel and Borsky, Michal and Mollberg, David Erik and Guðmundsson, Smári Freyr and Hedström, Staffan and Pálsson, Ragnar and Jónsson, Ólafur Helgi and Þorsteinsdóttir, Sunneva and Guðmundsdóttir, Jóhanna Vigdís and Magnúsdóttir, Eydís Huld and Þórhallsdóttir, Ragnheiður and Guðnason, Jón},
publisher={Reykjavík University},
journal={Linguistic Data Consortium, Philadelphia},
year={2021},
url={https://catalog.ldc.upenn.edu/LDC2022S11},
}
```
### Contributions
This project was funded by the Language Technology Programme for Icelandic 2019-2023. The programme, which is managed and coordinated by Almannarómur, is funded by the Icelandic Ministry of Education, Science and Culture.
The verification for the dataset was funded by the Icelandic Directorate of Labour's Student Summer Job Program in 2020 and 2021.
Special thanks to the summer students for all the hard work.
| [
-0.4720526933670044,
-0.5539844632148743,
-0.009951101616024971,
0.1814885139465332,
-0.3215266168117523,
-0.026071423664689064,
-0.23135563731193542,
-0.4229683578014374,
0.22262853384017944,
0.26662302017211914,
-0.7466232776641846,
-0.4548867642879486,
-0.6680546402931213,
0.12651699781... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
aashsach/multiconer2 | aashsach | 2023-01-05T03:00:49Z | 60 | 0 | null | [
"region:us"
] | 2023-01-05T03:00:49Z | 2022-12-28T17:03:44.000Z | 2022-12-28T17:03:44 | ---
dataset_info:
- config_name: bn
features:
- name: id
dtype: string
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-AerospaceManufacturer
'2': I-AerospaceManufacturer
'3': B-AnatomicalStructure
'4': I-AnatomicalStructure
'5': B-ArtWork
'6': I-ArtWork
'7': B-Artist
'8': I-Artist
'9': B-Athlete
'10': I-Athlete
'11': B-CarManufacturer
'12': I-CarManufacturer
'13': B-Cleric
'14': I-Cleric
'15': B-Clothing
'16': I-Clothing
'17': B-Disease
'18': I-Disease
'19': B-Drink
'20': I-Drink
'21': B-Facility
'22': I-Facility
'23': B-Food
'24': I-Food
'25': B-HumanSettlement
'26': I-HumanSettlement
'27': B-MedicalProcedure
'28': I-MedicalProcedure
'29': B-Medication/Vaccine
'30': I-Medication/Vaccine
'31': B-MusicalGRP
'32': I-MusicalGRP
'33': B-MusicalWork
'34': I-MusicalWork
'35': B-ORG
'36': I-ORG
'37': B-OtherLOC
'38': I-OtherLOC
'39': B-OtherPER
'40': I-OtherPER
'41': B-OtherPROD
'42': I-OtherPROD
'43': B-Politician
'44': I-Politician
'45': B-PrivateCorp
'46': I-PrivateCorp
'47': B-PublicCorp
'48': I-PublicCorp
'49': B-Scientist
'50': I-Scientist
'51': B-Software
'52': I-Software
'53': B-SportsGRP
'54': I-SportsGRP
'55': B-SportsManager
'56': I-SportsManager
'57': B-Station
'58': I-Station
'59': B-Symptom
'60': I-Symptom
'61': B-Vehicle
'62': I-Vehicle
'63': B-VisualWork
'64': I-VisualWork
'65': B-WrittenWork
'66': I-WrittenWork
splits:
- name: train
num_bytes: 3844480
num_examples: 9708
- name: validation
num_bytes: 199756
num_examples: 507
download_size: 4017205
dataset_size: 4044236
- config_name: de
features:
- name: id
dtype: string
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-AerospaceManufacturer
'2': I-AerospaceManufacturer
'3': B-AnatomicalStructure
'4': I-AnatomicalStructure
'5': B-ArtWork
'6': I-ArtWork
'7': B-Artist
'8': I-Artist
'9': B-Athlete
'10': I-Athlete
'11': B-CarManufacturer
'12': I-CarManufacturer
'13': B-Cleric
'14': I-Cleric
'15': B-Clothing
'16': I-Clothing
'17': B-Disease
'18': I-Disease
'19': B-Drink
'20': I-Drink
'21': B-Facility
'22': I-Facility
'23': B-Food
'24': I-Food
'25': B-HumanSettlement
'26': I-HumanSettlement
'27': B-MedicalProcedure
'28': I-MedicalProcedure
'29': B-Medication/Vaccine
'30': I-Medication/Vaccine
'31': B-MusicalGRP
'32': I-MusicalGRP
'33': B-MusicalWork
'34': I-MusicalWork
'35': B-ORG
'36': I-ORG
'37': B-OtherLOC
'38': I-OtherLOC
'39': B-OtherPER
'40': I-OtherPER
'41': B-OtherPROD
'42': I-OtherPROD
'43': B-Politician
'44': I-Politician
'45': B-PrivateCorp
'46': I-PrivateCorp
'47': B-PublicCorp
'48': I-PublicCorp
'49': B-Scientist
'50': I-Scientist
'51': B-Software
'52': I-Software
'53': B-SportsGRP
'54': I-SportsGRP
'55': B-SportsManager
'56': I-SportsManager
'57': B-Station
'58': I-Station
'59': B-Symptom
'60': I-Symptom
'61': B-Vehicle
'62': I-Vehicle
'63': B-VisualWork
'64': I-VisualWork
'65': B-WrittenWork
'66': I-WrittenWork
splits:
- name: train
num_bytes: 2724923
num_examples: 9785
- name: validation
num_bytes: 137726
num_examples: 512
download_size: 2831813
dataset_size: 2862649
- config_name: en
features:
- name: id
dtype: string
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-AerospaceManufacturer
'2': I-AerospaceManufacturer
'3': B-AnatomicalStructure
'4': I-AnatomicalStructure
'5': B-ArtWork
'6': I-ArtWork
'7': B-Artist
'8': I-Artist
'9': B-Athlete
'10': I-Athlete
'11': B-CarManufacturer
'12': I-CarManufacturer
'13': B-Cleric
'14': I-Cleric
'15': B-Clothing
'16': I-Clothing
'17': B-Disease
'18': I-Disease
'19': B-Drink
'20': I-Drink
'21': B-Facility
'22': I-Facility
'23': B-Food
'24': I-Food
'25': B-HumanSettlement
'26': I-HumanSettlement
'27': B-MedicalProcedure
'28': I-MedicalProcedure
'29': B-Medication/Vaccine
'30': I-Medication/Vaccine
'31': B-MusicalGRP
'32': I-MusicalGRP
'33': B-MusicalWork
'34': I-MusicalWork
'35': B-ORG
'36': I-ORG
'37': B-OtherLOC
'38': I-OtherLOC
'39': B-OtherPER
'40': I-OtherPER
'41': B-OtherPROD
'42': I-OtherPROD
'43': B-Politician
'44': I-Politician
'45': B-PrivateCorp
'46': I-PrivateCorp
'47': B-PublicCorp
'48': I-PublicCorp
'49': B-Scientist
'50': I-Scientist
'51': B-Software
'52': I-Software
'53': B-SportsGRP
'54': I-SportsGRP
'55': B-SportsManager
'56': I-SportsManager
'57': B-Station
'58': I-Station
'59': B-Symptom
'60': I-Symptom
'61': B-Vehicle
'62': I-Vehicle
'63': B-VisualWork
'64': I-VisualWork
'65': B-WrittenWork
'66': I-WrittenWork
splits:
- name: train
num_bytes: 4448839
num_examples: 16778
- name: validation
num_bytes: 232735
num_examples: 871
download_size: 4575462
dataset_size: 4681574
- config_name: es
features:
- name: id
dtype: string
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-AerospaceManufacturer
'2': I-AerospaceManufacturer
'3': B-AnatomicalStructure
'4': I-AnatomicalStructure
'5': B-ArtWork
'6': I-ArtWork
'7': B-Artist
'8': I-Artist
'9': B-Athlete
'10': I-Athlete
'11': B-CarManufacturer
'12': I-CarManufacturer
'13': B-Cleric
'14': I-Cleric
'15': B-Clothing
'16': I-Clothing
'17': B-Disease
'18': I-Disease
'19': B-Drink
'20': I-Drink
'21': B-Facility
'22': I-Facility
'23': B-Food
'24': I-Food
'25': B-HumanSettlement
'26': I-HumanSettlement
'27': B-MedicalProcedure
'28': I-MedicalProcedure
'29': B-Medication/Vaccine
'30': I-Medication/Vaccine
'31': B-MusicalGRP
'32': I-MusicalGRP
'33': B-MusicalWork
'34': I-MusicalWork
'35': B-ORG
'36': I-ORG
'37': B-OtherLOC
'38': I-OtherLOC
'39': B-OtherPER
'40': I-OtherPER
'41': B-OtherPROD
'42': I-OtherPROD
'43': B-Politician
'44': I-Politician
'45': B-PrivateCorp
'46': I-PrivateCorp
'47': B-PublicCorp
'48': I-PublicCorp
'49': B-Scientist
'50': I-Scientist
'51': B-Software
'52': I-Software
'53': B-SportsGRP
'54': I-SportsGRP
'55': B-SportsManager
'56': I-SportsManager
'57': B-Station
'58': I-Station
'59': B-Symptom
'60': I-Symptom
'61': B-Vehicle
'62': I-Vehicle
'63': B-VisualWork
'64': I-VisualWork
'65': B-WrittenWork
'66': I-WrittenWork
splits:
- name: train
num_bytes: 4643093
num_examples: 16453
- name: validation
num_bytes: 237306
num_examples: 854
download_size: 4659064
dataset_size: 4880399
- config_name: fa
features:
- name: id
dtype: string
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-AerospaceManufacturer
'2': I-AerospaceManufacturer
'3': B-AnatomicalStructure
'4': I-AnatomicalStructure
'5': B-ArtWork
'6': I-ArtWork
'7': B-Artist
'8': I-Artist
'9': B-Athlete
'10': I-Athlete
'11': B-CarManufacturer
'12': I-CarManufacturer
'13': B-Cleric
'14': I-Cleric
'15': B-Clothing
'16': I-Clothing
'17': B-Disease
'18': I-Disease
'19': B-Drink
'20': I-Drink
'21': B-Facility
'22': I-Facility
'23': B-Food
'24': I-Food
'25': B-HumanSettlement
'26': I-HumanSettlement
'27': B-MedicalProcedure
'28': I-MedicalProcedure
'29': B-Medication/Vaccine
'30': I-Medication/Vaccine
'31': B-MusicalGRP
'32': I-MusicalGRP
'33': B-MusicalWork
'34': I-MusicalWork
'35': B-ORG
'36': I-ORG
'37': B-OtherLOC
'38': I-OtherLOC
'39': B-OtherPER
'40': I-OtherPER
'41': B-OtherPROD
'42': I-OtherPROD
'43': B-Politician
'44': I-Politician
'45': B-PrivateCorp
'46': I-PrivateCorp
'47': B-PublicCorp
'48': I-PublicCorp
'49': B-Scientist
'50': I-Scientist
'51': B-Software
'52': I-Software
'53': B-SportsGRP
'54': I-SportsGRP
'55': B-SportsManager
'56': I-SportsManager
'57': B-Station
'58': I-Station
'59': B-Symptom
'60': I-Symptom
'61': B-Vehicle
'62': I-Vehicle
'63': B-VisualWork
'64': I-VisualWork
'65': B-WrittenWork
'66': I-WrittenWork
splits:
- name: train
num_bytes: 5861165
num_examples: 16321
- name: validation
num_bytes: 316929
num_examples: 855
download_size: 5760501
dataset_size: 6178094
- config_name: fr
features:
- name: id
dtype: string
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-AerospaceManufacturer
'2': I-AerospaceManufacturer
'3': B-AnatomicalStructure
'4': I-AnatomicalStructure
'5': B-ArtWork
'6': I-ArtWork
'7': B-Artist
'8': I-Artist
'9': B-Athlete
'10': I-Athlete
'11': B-CarManufacturer
'12': I-CarManufacturer
'13': B-Cleric
'14': I-Cleric
'15': B-Clothing
'16': I-Clothing
'17': B-Disease
'18': I-Disease
'19': B-Drink
'20': I-Drink
'21': B-Facility
'22': I-Facility
'23': B-Food
'24': I-Food
'25': B-HumanSettlement
'26': I-HumanSettlement
'27': B-MedicalProcedure
'28': I-MedicalProcedure
'29': B-Medication/Vaccine
'30': I-Medication/Vaccine
'31': B-MusicalGRP
'32': I-MusicalGRP
'33': B-MusicalWork
'34': I-MusicalWork
'35': B-ORG
'36': I-ORG
'37': B-OtherLOC
'38': I-OtherLOC
'39': B-OtherPER
'40': I-OtherPER
'41': B-OtherPROD
'42': I-OtherPROD
'43': B-Politician
'44': I-Politician
'45': B-PrivateCorp
'46': I-PrivateCorp
'47': B-PublicCorp
'48': I-PublicCorp
'49': B-Scientist
'50': I-Scientist
'51': B-Software
'52': I-Software
'53': B-SportsGRP
'54': I-SportsGRP
'55': B-SportsManager
'56': I-SportsManager
'57': B-Station
'58': I-Station
'59': B-Symptom
'60': I-Symptom
'61': B-Vehicle
'62': I-Vehicle
'63': B-VisualWork
'64': I-VisualWork
'65': B-WrittenWork
'66': I-WrittenWork
splits:
- name: train
num_bytes: 4375159
num_examples: 16548
- name: validation
num_bytes: 229499
num_examples: 857
download_size: 4492163
dataset_size: 4604658
- config_name: hi
features:
- name: id
dtype: string
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-AerospaceManufacturer
'2': I-AerospaceManufacturer
'3': B-AnatomicalStructure
'4': I-AnatomicalStructure
'5': B-ArtWork
'6': I-ArtWork
'7': B-Artist
'8': I-Artist
'9': B-Athlete
'10': I-Athlete
'11': B-CarManufacturer
'12': I-CarManufacturer
'13': B-Cleric
'14': I-Cleric
'15': B-Clothing
'16': I-Clothing
'17': B-Disease
'18': I-Disease
'19': B-Drink
'20': I-Drink
'21': B-Facility
'22': I-Facility
'23': B-Food
'24': I-Food
'25': B-HumanSettlement
'26': I-HumanSettlement
'27': B-MedicalProcedure
'28': I-MedicalProcedure
'29': B-Medication/Vaccine
'30': I-Medication/Vaccine
'31': B-MusicalGRP
'32': I-MusicalGRP
'33': B-MusicalWork
'34': I-MusicalWork
'35': B-ORG
'36': I-ORG
'37': B-OtherLOC
'38': I-OtherLOC
'39': B-OtherPER
'40': I-OtherPER
'41': B-OtherPROD
'42': I-OtherPROD
'43': B-Politician
'44': I-Politician
'45': B-PrivateCorp
'46': I-PrivateCorp
'47': B-PublicCorp
'48': I-PublicCorp
'49': B-Scientist
'50': I-Scientist
'51': B-Software
'52': I-Software
'53': B-SportsGRP
'54': I-SportsGRP
'55': B-SportsManager
'56': I-SportsManager
'57': B-Station
'58': I-Station
'59': B-Symptom
'60': I-Symptom
'61': B-Vehicle
'62': I-Vehicle
'63': B-VisualWork
'64': I-VisualWork
'65': B-WrittenWork
'66': I-WrittenWork
splits:
- name: train
num_bytes: 4039051
num_examples: 9632
- name: validation
num_bytes: 217741
num_examples: 514
download_size: 4060184
dataset_size: 4256792
- config_name: it
features:
- name: id
dtype: string
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-AerospaceManufacturer
'2': I-AerospaceManufacturer
'3': B-AnatomicalStructure
'4': I-AnatomicalStructure
'5': B-ArtWork
'6': I-ArtWork
'7': B-Artist
'8': I-Artist
'9': B-Athlete
'10': I-Athlete
'11': B-CarManufacturer
'12': I-CarManufacturer
'13': B-Cleric
'14': I-Cleric
'15': B-Clothing
'16': I-Clothing
'17': B-Disease
'18': I-Disease
'19': B-Drink
'20': I-Drink
'21': B-Facility
'22': I-Facility
'23': B-Food
'24': I-Food
'25': B-HumanSettlement
'26': I-HumanSettlement
'27': B-MedicalProcedure
'28': I-MedicalProcedure
'29': B-Medication/Vaccine
'30': I-Medication/Vaccine
'31': B-MusicalGRP
'32': I-MusicalGRP
'33': B-MusicalWork
'34': I-MusicalWork
'35': B-ORG
'36': I-ORG
'37': B-OtherLOC
'38': I-OtherLOC
'39': B-OtherPER
'40': I-OtherPER
'41': B-OtherPROD
'42': I-OtherPROD
'43': B-Politician
'44': I-Politician
'45': B-PrivateCorp
'46': I-PrivateCorp
'47': B-PublicCorp
'48': I-PublicCorp
'49': B-Scientist
'50': I-Scientist
'51': B-Software
'52': I-Software
'53': B-SportsGRP
'54': I-SportsGRP
'55': B-SportsManager
'56': I-SportsManager
'57': B-Station
'58': I-Station
'59': B-Symptom
'60': I-Symptom
'61': B-Vehicle
'62': I-Vehicle
'63': B-VisualWork
'64': I-VisualWork
'65': B-WrittenWork
'66': I-WrittenWork
splits:
- name: train
num_bytes: 4256854
num_examples: 16579
- name: validation
num_bytes: 219489
num_examples: 858
download_size: 4454712
dataset_size: 4476343
- config_name: pt
features:
- name: id
dtype: string
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-AerospaceManufacturer
'2': I-AerospaceManufacturer
'3': B-AnatomicalStructure
'4': I-AnatomicalStructure
'5': B-ArtWork
'6': I-ArtWork
'7': B-Artist
'8': I-Artist
'9': B-Athlete
'10': I-Athlete
'11': B-CarManufacturer
'12': I-CarManufacturer
'13': B-Cleric
'14': I-Cleric
'15': B-Clothing
'16': I-Clothing
'17': B-Disease
'18': I-Disease
'19': B-Drink
'20': I-Drink
'21': B-Facility
'22': I-Facility
'23': B-Food
'24': I-Food
'25': B-HumanSettlement
'26': I-HumanSettlement
'27': B-MedicalProcedure
'28': I-MedicalProcedure
'29': B-Medication/Vaccine
'30': I-Medication/Vaccine
'31': B-MusicalGRP
'32': I-MusicalGRP
'33': B-MusicalWork
'34': I-MusicalWork
'35': B-ORG
'36': I-ORG
'37': B-OtherLOC
'38': I-OtherLOC
'39': B-OtherPER
'40': I-OtherPER
'41': B-OtherPROD
'42': I-OtherPROD
'43': B-Politician
'44': I-Politician
'45': B-PrivateCorp
'46': I-PrivateCorp
'47': B-PublicCorp
'48': I-PublicCorp
'49': B-Scientist
'50': I-Scientist
'51': B-Software
'52': I-Software
'53': B-SportsGRP
'54': I-SportsGRP
'55': B-SportsManager
'56': I-SportsManager
'57': B-Station
'58': I-Station
'59': B-Symptom
'60': I-Symptom
'61': B-Vehicle
'62': I-Vehicle
'63': B-VisualWork
'64': I-VisualWork
'65': B-WrittenWork
'66': I-WrittenWork
splits:
- name: train
num_bytes: 4587908
num_examples: 16469
- name: validation
num_bytes: 233471
num_examples: 854
download_size: 4622334
dataset_size: 4821379
- config_name: sv
features:
- name: id
dtype: string
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-AerospaceManufacturer
'2': I-AerospaceManufacturer
'3': B-AnatomicalStructure
'4': I-AnatomicalStructure
'5': B-ArtWork
'6': I-ArtWork
'7': B-Artist
'8': I-Artist
'9': B-Athlete
'10': I-Athlete
'11': B-CarManufacturer
'12': I-CarManufacturer
'13': B-Cleric
'14': I-Cleric
'15': B-Clothing
'16': I-Clothing
'17': B-Disease
'18': I-Disease
'19': B-Drink
'20': I-Drink
'21': B-Facility
'22': I-Facility
'23': B-Food
'24': I-Food
'25': B-HumanSettlement
'26': I-HumanSettlement
'27': B-MedicalProcedure
'28': I-MedicalProcedure
'29': B-Medication/Vaccine
'30': I-Medication/Vaccine
'31': B-MusicalGRP
'32': I-MusicalGRP
'33': B-MusicalWork
'34': I-MusicalWork
'35': B-ORG
'36': I-ORG
'37': B-OtherLOC
'38': I-OtherLOC
'39': B-OtherPER
'40': I-OtherPER
'41': B-OtherPROD
'42': I-OtherPROD
'43': B-Politician
'44': I-Politician
'45': B-PrivateCorp
'46': I-PrivateCorp
'47': B-PublicCorp
'48': I-PublicCorp
'49': B-Scientist
'50': I-Scientist
'51': B-Software
'52': I-Software
'53': B-SportsGRP
'54': I-SportsGRP
'55': B-SportsManager
'56': I-SportsManager
'57': B-Station
'58': I-Station
'59': B-Symptom
'60': I-Symptom
'61': B-Vehicle
'62': I-Vehicle
'63': B-VisualWork
'64': I-VisualWork
'65': B-WrittenWork
'66': I-WrittenWork
splits:
- name: train
num_bytes: 3919442
num_examples: 16363
- name: validation
num_bytes: 205910
num_examples: 856
download_size: 4100785
dataset_size: 4125352
- config_name: uk
features:
- name: id
dtype: string
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-AerospaceManufacturer
'2': I-AerospaceManufacturer
'3': B-AnatomicalStructure
'4': I-AnatomicalStructure
'5': B-ArtWork
'6': I-ArtWork
'7': B-Artist
'8': I-Artist
'9': B-Athlete
'10': I-Athlete
'11': B-CarManufacturer
'12': I-CarManufacturer
'13': B-Cleric
'14': I-Cleric
'15': B-Clothing
'16': I-Clothing
'17': B-Disease
'18': I-Disease
'19': B-Drink
'20': I-Drink
'21': B-Facility
'22': I-Facility
'23': B-Food
'24': I-Food
'25': B-HumanSettlement
'26': I-HumanSettlement
'27': B-MedicalProcedure
'28': I-MedicalProcedure
'29': B-Medication/Vaccine
'30': I-Medication/Vaccine
'31': B-MusicalGRP
'32': I-MusicalGRP
'33': B-MusicalWork
'34': I-MusicalWork
'35': B-ORG
'36': I-ORG
'37': B-OtherLOC
'38': I-OtherLOC
'39': B-OtherPER
'40': I-OtherPER
'41': B-OtherPROD
'42': I-OtherPROD
'43': B-Politician
'44': I-Politician
'45': B-PrivateCorp
'46': I-PrivateCorp
'47': B-PublicCorp
'48': I-PublicCorp
'49': B-Scientist
'50': I-Scientist
'51': B-Software
'52': I-Software
'53': B-SportsGRP
'54': I-SportsGRP
'55': B-SportsManager
'56': I-SportsManager
'57': B-Station
'58': I-Station
'59': B-Symptom
'60': I-Symptom
'61': B-Vehicle
'62': I-Vehicle
'63': B-VisualWork
'64': I-VisualWork
'65': B-WrittenWork
'66': I-WrittenWork
splits:
- name: train
num_bytes: 5104234
num_examples: 16429
- name: validation
num_bytes: 261125
num_examples: 851
download_size: 5245683
dataset_size: 5365359
- config_name: zh
features:
- name: id
dtype: string
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-AerospaceManufacturer
'2': I-AerospaceManufacturer
'3': B-AnatomicalStructure
'4': I-AnatomicalStructure
'5': B-ArtWork
'6': I-ArtWork
'7': B-Artist
'8': I-Artist
'9': B-Athlete
'10': I-Athlete
'11': B-CarManufacturer
'12': I-CarManufacturer
'13': B-Cleric
'14': I-Cleric
'15': B-Clothing
'16': I-Clothing
'17': B-Disease
'18': I-Disease
'19': B-Drink
'20': I-Drink
'21': B-Facility
'22': I-Facility
'23': B-Food
'24': I-Food
'25': B-HumanSettlement
'26': I-HumanSettlement
'27': B-MedicalProcedure
'28': I-MedicalProcedure
'29': B-Medication/Vaccine
'30': I-Medication/Vaccine
'31': B-MusicalGRP
'32': I-MusicalGRP
'33': B-MusicalWork
'34': I-MusicalWork
'35': B-ORG
'36': I-ORG
'37': B-OtherLOC
'38': I-OtherLOC
'39': B-OtherPER
'40': I-OtherPER
'41': B-OtherPROD
'42': I-OtherPROD
'43': B-Politician
'44': I-Politician
'45': B-PrivateCorp
'46': I-PrivateCorp
'47': B-PublicCorp
'48': I-PublicCorp
'49': B-Scientist
'50': I-Scientist
'51': B-Software
'52': I-Software
'53': B-SportsGRP
'54': I-SportsGRP
'55': B-SportsManager
'56': I-SportsManager
'57': B-Station
'58': I-Station
'59': B-Symptom
'60': I-Symptom
'61': B-Vehicle
'62': I-Vehicle
'63': B-VisualWork
'64': I-VisualWork
'65': B-WrittenWork
'66': I-WrittenWork
splits:
- name: train
num_bytes: 3816980
num_examples: 9759
- name: validation
num_bytes: 198669
num_examples: 506
download_size: 3935986
dataset_size: 4015649
---
| [
-0.1285335123538971,
-0.1861683875322342,
0.6529128551483154,
0.49436232447624207,
-0.19319400191307068,
0.23607441782951355,
0.36072009801864624,
0.05056373029947281,
0.5793656706809998,
0.7400146722793579,
-0.650810182094574,
-0.23784008622169495,
-0.7102247476577759,
-0.0478255338966846... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Umal-exvc/chocolate-captioned-dataset-400 | Umal-exvc | 2023-01-11T01:57:06Z | 60 | 0 | null | [
"region:us"
] | 2023-01-11T01:57:06Z | 2023-01-11T01:56:49.000Z | 2023-01-11T01:56:49 | ---
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 64772495.0
num_examples: 400
download_size: 63382786
dataset_size: 64772495.0
---
# Dataset Card for "chocolate-captioned-dataset-400"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.585734486579895,
-0.21435272693634033,
0.19961971044540405,
0.47429540753364563,
-0.10412690043449402,
0.24478693306446075,
0.17384779453277588,
-0.13117265701293945,
0.8208112716674805,
0.6819517612457275,
-0.8871747851371765,
-0.6249905228614807,
-0.6251867413520813,
0.014715400524437... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
KBLab/rixvox | KBLab | 2023-08-17T10:26:47Z | 60 | 9 | null | [
"task_categories:automatic-speech-recognition",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"language:sv",
"license:cc-by-4.0",
"audio",
"speech-recognition",
"region:us"
] | 2023-08-17T10:26:47Z | 2023-03-03T11:07:18.000Z | 2023-03-03T11:07:18 | ---
language: sv
license: cc-by-4.0
tags:
- audio
- speech-recognition
task_categories:
- automatic-speech-recognition
size_categories:
- 100K<n<1M
multilinguality:
- monolingual
---
# Dataset Card for RixVox
## Dataset Description
- **Repository:** [Riksdagen anfรถranden repository](https://github.com/kb-labb/riksdagen_anforanden)
- **Paper:** ["RixVox: A Swedish Speech Corpus with 5500 Hours of Speech from Parliamentary Debates"](https://kb-labb.github.io/posts/2023-03-09-rixvox-a-swedish-speech-corpus/)
- **Point of Contact:** [KBLab](mailto:kblabb@kb.se)
- **Total amount of disk used:** ca. 1.2 TB
### Dataset Summary
RixVox is a speech dataset comprised of speeches from the Riksdag (the Swedish Parliament). It covers speeches from debates during the period 2003-2023. Audio from speeches has been aligned, on the sentence level, with transcripts from written protocols using `aeneas`. An observation may consist of one or several concatenated sentences (up to 30 seconds in duration). Detailed speaker metadata is available for each observation, including the speaker's name, gender, political party, birth year and the electoral district they represent. The dataset contains a total of 5493 hours of speech with transcriptions.
## How to use & Supported Tasks
### Supported Tasks
Tasks are not supported by default (there are no label fields). The dataset may however be suited for:
- Automatic Speech Recognition (ASR).
- Speaker identification and verification.
- Creation of synthetic diarization datasets.
- Research on bias in ASR systems.
### How to use
To download and extract the files locally you can use `load_dataset()`. We recommend you set the `cache_dir` argument to point to a location that has plenty of disk space (1.2TB+). Here's how to download the `train` split:
```python
from datasets import load_dataset
# To download/load all splits at once, don't specify a split
rixvox = load_dataset("KBLab/rixvox", split="train", cache_dir="data_rixvox")
```
You can also stream the dataset. This is useful if you want to explore the dataset or if you don't have enough disk space to download the entire dataset. Here's how to stream the `train` split:
```python
from datasets import load_dataset
rixvox = load_dataset("KBLab/rixvox", cache_dir="data_rixvox", split="train", streaming=True)
print(next(iter(rixvox)))
# Grab 5 observations
rixvox_subset = rixvox.take(5)
for example in rixvox_subset:
print(example)
```
**Create a [PyTorch dataloader](https://huggingface.co/docs/datasets/use_with_pytorch)** with your dataset.
Local mode:
```python
from datasets import load_dataset
from torch.utils.data import DataLoader
from torch.utils.data.sampler import BatchSampler, RandomSampler
# Dataset is not pre-shuffled, recommend shuffling it before training.
rixvox = load_dataset("KBLab/rixvox", split="train", cache_dir="data_rixvox")
batch_sampler = BatchSampler(RandomSampler(rixvox), batch_size=32, drop_last=False)
dataloader = DataLoader(rixvox, batch_sampler=batch_sampler)
```
Streaming mode:
```python
from datasets import load_dataset
from torch.utils.data import DataLoader
rixvox = load_dataset("KBLab/rixvox", split="train", cache_dir="data_rixvox", streaming=True)
dataloader = DataLoader(rixvox, batch_size=32)
```
See Huggingface's guide on [streaming datasets](https://huggingface.co/docs/datasets/v1.11.0/dataset_streaming.html) for more information on how to shuffle in streaming mode.
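Under the hood, shuffling a streamed dataset is approximate: `datasets` keeps a fixed-size buffer and samples from it (via `IterableDataset.shuffle(seed=..., buffer_size=...)`). A minimal, pure-Python sketch of the idea — the buffer size and input here are illustrative:

```python
import random

def buffered_shuffle(iterable, buffer_size, seed=42):
    """Approximate shuffle for streams: keep a fixed-size buffer and,
    once it is full, emit a random buffered item for each new one."""
    rng = random.Random(seed)
    buffer = []
    for item in iterable:
        if len(buffer) < buffer_size:
            buffer.append(item)
        else:
            idx = rng.randrange(buffer_size)
            yield buffer[idx]
            buffer[idx] = item
    # Flush the remaining buffered items in random order.
    rng.shuffle(buffer)
    yield from buffer

shuffled = list(buffered_shuffle(range(10), buffer_size=4))
```

A larger `buffer_size` gives a shuffle closer to uniform at the cost of memory.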
### Languages
- Swedish. The BCP 47 language tag for Swedish is `sv`.
## Dataset Structure
### Data Instances
There are a total of `835044` observations from `1194` different speakers. Each observation can be up to 30 seconds in duration. An observation belongs to a debate (`dokid`), is extracted from a speech (`anforande_nummer`), and is numbered according to its order within the speech (`observation_nr`). Here is an example of an observation:
```
{'dokid': 'GR01BOU3',
'anforande_nummer': 191,
'observation_nr': 0,
'audio': {'path': 'GR01BOU3/2442210220028601121_anf191_1_25.wav',
'array': array([0.01171875, 0.01242065, 0.01071167, ..., 0.00689697, 0.00918579,
0.00650024]),
'sampling_rate': 16000},
'text': 'Kristdemokraterna stรฅr bakom alla reservationer med kristdemokratiska fรถrtecken, men jag nรถjer mig med att yrka bifall till reservation 1. Jag ska i det hรคr inlรคgget berรถra nรฅgra av de รฅtta punkter som รคr fรถremรฅl fรถr reservationer frรฅn kristdemokratiskt hรฅll, i vissa fall tillsammans med andra partier.',
'debatedate': datetime.datetime(2003, 12, 4, 0, 0),
'speaker': 'Gรถran Hรคgglund',
'party': 'KD',
'gender': 'male',
'birth_year': 1959,
'electoral_district': 'Hallands lรคn',
'intressent_id': '0584659199514',
'speaker_from_id': True,
'speaker_audio_meta': 'Gรถran Hรคgglund (Kd)',
'start': 1.4,
'end': 24.96,
'duration': 23.560000000000002,
'bleu_score': 0.7212783273624307,
'filename': 'GR01BOU3/2442210220028601121_anf191_1_25.wav',
'path': 'GR01BOU3/2442210220028601121_anf191_1_25.wav',
'speaker_total_hours': 30.621333333333332}
```
See more examples in the [dataset viewer](https://huggingface.co/datasets/KBLab/rixvox/viewer/default/train).
### Data Fields
* `dokid`: Document id for the debate used by the Riksdag. This is the same for all speeches in a debate.
* `anforande_nummer`: Speech number within the debate, or within the debate sessions on a particular day. Should create a unique primary key for a speech in combination with `dokid` (sometimes there are duplicates, but we removed them from this dataset).
* `observation_nr`: Observation number within the speech. Creates a unique identifier for an observation in combination with `dokid` and `anforande_nummer`.
* `text`: The text transcript from the written protocols. The transcripts are not always verbatim; transcribers have, to varying degrees, adjusted sentence ordering, wording and phrasing where they deemed it appropriate.
* `debatedate`: The date of the debate.
* `start`: The start time of the observation within a speech (in seconds).
* `end`: The end time of the observation within a speech (in seconds).
* `duration`: The duration of the observation (`end` subtracted with `start`).
* `intressent_id`: Unique id for the speaker within the Riksdag's database (see [person.csv.zip](https://data.riksdagen.se/dataset/person/person.csv.zip) from the Riksdag).
* `speaker`: The speaker's name retrieved via the `intressent_id`.
* `party`: The speaker's party retrieved via the `intressent_id`.
* `gender`: The speaker's gender retrieved via the `intressent_id`.
* `birth_year`: The speaker's birth year retrieved via the `intressent_id`.
* `electoral_district`: The electoral district which the speaker represents if they are/were a member of parliament (retrieved via the `intressent_id`).
* `speaker_audio_meta`: The speaker's name and title as listed in the Riksdag's original text format metadata (sometimes wrong and mismatched against `intressent_id`).
* `speaker_from_id`: Whether the speaker metadata was retrieved via the `intressent_id` or via the Riksdag's original metadata (for those speeches with a missing `intressent_id`).
* `bleu_score`: The BLEU score of the automatic speech recognition (ASR) transcript against the Riksdag's written protocol. Calculated on the entirety of the speech that an observation (30s snippet) is extracted from. A low number for a speech may indicate that either i) the ASR model had trouble transcribing the speaker's accent or dialect, or ii) the transcription took certain liberties in editing and rephrasing the speech.
* `speaker_total_hours`: The total number of hours of speech from the speaker in the RixVox dataset.
* `filename`: The filename of the observation in the compressed tar.gz files. Useful if you don't want to use Huggingface `datasets`, but would rather manually download and extract the files from the data shards.
* `path`: Dynamically created variable. Contains the local path to the observation's audio file after you download and extract the files via `load_dataset()` in the `datasets` library.
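As an example of working with these fields, the `bleu_score` and `duration` values can drive a simple quality filter. The threshold below is an arbitrary illustration, not a recommendation from the dataset authors; with the dataset loaded locally you could pass the same predicate to `rixvox.filter`:

```python
def keep_observation(obs, min_bleu=0.5, max_duration=30.0):
    """Illustrative filter: drop observations whose parent speech matched
    the written protocol poorly, or whose duration looks out of range."""
    return obs["bleu_score"] >= min_bleu and obs["duration"] <= max_duration

observations = [
    {"bleu_score": 0.72, "duration": 23.6},
    {"bleu_score": 0.31, "duration": 12.0},  # poor protocol match
]
kept = [o for o in observations if keep_observation(o)]
```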
### Data Splits
Dataset splits were randomly sampled on the speaker level. That is, a speaker is only present in a single split. We sample speakers for each split until the following conditions are met:
- 98% of the total number of hours of speech are included in the train split.
- 1% of the total number of hours of speech are included in the validation split.
- 1% of the total number of hours of speech are included in the test split.
| Dataset Split | Observations | Total duration of speech (hours) | Average duration obs. (seconds) | Number of speakers |
| ------------- | ----------------: | -------------------------------: | ------------------------------: | -----------------: |
| Train | 818227 | 5383 | 23.69 | 1165 |
| Validation | 7933 | 52 | 23.50 | 18 |
| Test | 8884 | 59 | 23.74 | 11 |
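The speaker-level sampling described above can be sketched as follows; the speaker/hour data and the greedy assignment are illustrative, not the exact procedure used to build RixVox:

```python
import random

def split_by_speaker(speaker_hours, fractions=(0.98, 0.01, 0.01), seed=0):
    """Assign whole speakers to train/validation/test so each split
    holds roughly its target share of total speech hours."""
    rng = random.Random(seed)
    speakers = list(speaker_hours)
    rng.shuffle(speakers)
    total = sum(speaker_hours.values())
    targets = [f * total for f in fractions]
    splits = ([], [], [])
    filled = [0.0, 0.0, 0.0]
    for spk in speakers:
        # Put the speaker in the least-filled split relative to its target.
        i = min(range(3), key=lambda j: filled[j] / targets[j])
        splits[i].append(spk)
        filled[i] += speaker_hours[spk]
    return splits

hours = {f"speaker_{i}": 1.0 + i % 5 for i in range(50)}
train, val, test = split_by_speaker(hours)
```

By construction, a speaker can never appear in more than one split.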
## Dataset Creation
For more information about the creation of this dataset, see the article ["Finding Speeches in the Riksdag's Debates"](https://kb-labb.github.io/posts/2023-02-15-finding-speeches-in-the-riksdags-debates/) from our blog.
### Curation Rationale
Before RixVox, there was only a couple of hundred hours of transcribed speech available to train ASR models for Swedish. ASR models such as Whisper have shown that the performance of models can benefit significantly from adding more supervised data during pretraining or finetuning. Media from debates in the Riksdag are published openly on the web together with transcripts and other metadata. The open data initiatives of the Riksdag presented an opportunity to create a high quality open speech corpus for Swedish.
### Source Data
The Swedish Parliament.
- [Transcripts of speeches](https://data.riksdagen.se/data/anforanden/).
- Use the `rel_dok_id` of transcripts of speeches to query the Riksdag's media API (e.g. https://data.riksdagen.se/api/mhs-vodapi?H901FiU1 ) for available media and metadata.
#### Initial Data Collection and Normalization
For information on how the speeches were segmented and identified in debate audio files, see the article ["Finding Speeches in the Riksdag's Debates"](https://kb-labb.github.io/posts/2023-02-15-finding-speeches-in-the-riksdags-debates/).
For information on how the speech segmentations were used to create the final RixVox dataset, see the article ["RixVox: A Swedish Speech Corpus with 5500 Hours of Speech from Parliamentary Debates"](https://kb-labb.github.io/posts/2023-03-09-rixvox-a-swedish-speech-corpus/).
The code to replicate the creation of the dataset is open and available at the GitHub repository [KBLab/riksdagen_anforanden](https://github.com/kb-labb/riksdagen_anforanden). Processing everything can take 1-3 weeks on a workstation with a consumer-grade GPU.
#### Who are the source language producers?
The written protocols of speeches are manually produced by the Riksdag. Transcription is not always verbatim, but rather catches the intent of the speaker.
Segmenting speeches to determine when they start and end in a debate was done automatically. Sentence level alignment of the written protocols to the audio files was also done automatically using `aeneas`. See the articles in citation information for more details.
### Annotations
#### Annotation process
The process of aligning speech to written protocols was automatic. It followed the following general steps:
1. We used ASR to automatically transcribe the debate audio files and get word timestamps for the machine generated transcription.
2. We used fuzzy string matching to determine approximate start/end of a speech, matching the official written protocol of the speech to the machine generated transcription of the debate.
3. We perform speaker diarization using pyannote.audio.
4. We assign speaker diarization segments to speeches by the degree of overlap between approximate start/end from fuzzy string matching and the speaker diarization segments. The start and end of the diarization segment is used as our new adjusted start and end metadata of the speech.
5. Based on the adjusted start/end metadata of a speech, we split and extract the audio of speeches from the debates and then align the segmented speeches to the written protocol using `aeneas` (sentence-level alignment).
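Step 2 — locating a written protocol inside the machine-generated debate transcript — can be approximated with the standard library. The exact fuzzy-matching tooling used for RixVox is described in the linked blog posts; treat this `difflib`-based version as an illustrative stand-in:

```python
from difflib import SequenceMatcher

def locate_speech(debate_transcript, protocol_text):
    """Find where the protocol text best matches the ASR transcript and
    return approximate (start, end) character offsets in the transcript."""
    matcher = SequenceMatcher(None, debate_transcript, protocol_text, autojunk=False)
    blocks = [b for b in matcher.get_matching_blocks() if b.size > 0]
    start = min(b.a for b in blocks)
    end = max(b.a + b.size for b in blocks)
    return start, end

transcript = "intro chatter fru talman jag yrkar bifall till reservation ett tack closing remarks"
protocol = "fru talman jag yrkar bifall till reservation ett tack"
start, end = locate_speech(transcript, protocol)
```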
#### Who are the annotators?
No manual annotations.
### Personal and Sensitive Information
The speakers are members of parliament or ministers speaking publicly in the Riksdag. The Riksdag is a public institution and the speeches are publicly available on the web as open data.
## Considerations for Using the Data
### Social Impact of Dataset
We expect the dataset primarily to be used in training ASR models for Swedish. The performance of Swedish speech-to-text in multilingual ASR models may also benefit from the availability of a large Swedish speech corpus. In turn, improved ASR models can serve to help increase accessibility of audio and video media content for people with hearing impairments.
The dataset can also be used to train models for other audio tasks such as speaker diarization, speaker verification, and speaker recognition.
Since metadata regarding the age, gender, and electoral district of the speaker is included, the dataset can possibly also be used to study bias in ASR models.
### Discussion of Biases
The dataset includes parliamentary speeches, which are often more formal than everyday speech.
During the creation of the dataset, we found that speech segmentations based on speaker diarization were more likely to fail when a preceding speaker, the speaker of the house, and the speaker of the following speech were all of the same gender. However, all in all, only a small number of speeches were filtered out of the final RixVox dataset. After quality filtering of the dataset, 5500 out of 5858 hours remained. We do not believe any significant systematic bias was introduced by this filtering.
Only minimal deduplication was performed to weed out commonly repeated phrases. For example, certain phrases such as "Fru talman!", "Herr Talman!", tend to be used a lot as a matter of formality. These phrases tend to be present at the beginning of most transcripts regardless of whether they were uttered by the speaker or not. For this reason we have removed the first aligned sentence of each speech when creating RixVox. The aforementioned phrases are repeated frequently in speeches as well, though. As such it might be beneficial to perform more aggressive deduplication of the dataset before training models.
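As a concrete example of such a deduplication pass, a preprocessing step could strip these formulaic openings before training; the phrase list and matching below are illustrative only:

```python
FORMAL_OPENINGS = ("fru talman!", "herr talman!")

def strip_formal_opening(text):
    """Remove a formulaic parliamentary opening phrase, if present."""
    lowered = text.lower()
    for phrase in FORMAL_OPENINGS:
        if lowered.startswith(phrase):
            return text[len(phrase):].lstrip()
    return text

cleaned = strip_formal_opening("Fru talman! Jag yrkar bifall till reservation 1.")
```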
### Other Known Limitations
## Additional Information
### Dataset Curators
KBLab at the the National Library of Sweden.
### Future updates
There is a possibility RixVox will be periodically, and irregularly, updated by including both older and newer speeches. Older recordings of parliamentary debates from 1966 to 2002 do exist, but they are not yet part of the Riksdag's open data. KBLab are exploring the possibility of adding metadata to these recordings by applying the existing speech segmentation and alignment pipeline to them.
Each year also brings new parliamentary debates, with recent years adding 400-500 hours of speech per year.
### Licensing Information
[CC-BY 4.0](https://creativecommons.org/licenses/by/4.0/)
Cite the Swedish Parliament.
To reference RixVox, feel free to cite KBLab blog posts in the citation information below.
### Citation Information
```
@misc{rekathati2023rixvox:,
author = {Rekathati, Faton},
title = {The KBLab Blog: RixVox: A Swedish Speech Corpus with 5500 Hours of Speech from Parliamentary Debates},
url = {https://kb-labb.github.io/posts/2023-03-09-rixvox-a-swedish-speech-corpus/},
year = {2023}
}
```
```
@misc{rekathati2023finding,
author = {Rekathati, Faton},
title = {The KBLab Blog: Finding Speeches in the Riksdag's Debates},
url = {https://kb-labb.github.io/posts/2023-02-15-finding-speeches-in-the-riksdags-debates/},
year = {2023}
}
```
The Swedish Parliament.
### Contributions
Thanks to [@lhoestq](https://huggingface.co/lhoestq) for reviewing the dataset script. | [
-0.583504855632782,
-0.5903993248939514,
0.01805928722023964,
0.25831690430641174,
-0.4117509722709656,
-0.15923406183719635,
-0.504677414894104,
-0.22366921603679657,
0.4408421218395233,
0.5407984852790833,
-0.5085791349411011,
-0.6093006134033203,
-0.6056815385818481,
-0.0477429442107677... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
ontocord/OIG-moderation | ontocord | 2023-11-28T14:03:47Z | 60 | 24 | null | [
"license:apache-2.0",
"region:us"
] | 2023-11-28T14:03:47Z | 2023-03-08T20:52:23.000Z | 2023-03-08T20:52:23 | ---
license: apache-2.0
---
# This is the Open Instruction Generalist - Moderation Dataset
This is our attempt to create a diverse dataset of dialogue that may be related to NSFW subject matters, abuse eliciting text, privacy violation eliciting instructions, depression or related content, hate speech, and other similar topics. We use the [prosocial], [anthropic redteam], subsets of [English wikipedia] datasets along with other public datasets and data created or contributed by volunteers. To regularize the dataset we also have "regular" OIG instructions, which includes Q/A instructions, coding instructions, and similar types of queries. Currently there are two versions of the datasets.
- OIG_safety_v0.1.jsonl (66200)
- OIG_safety_v0.2.jsonl (134530)
OIG-moderation includes data from:
* Public datasets such as anthropic-redteam and anthropic-harmless, prosocial, and contributed datasets from community members
* Augmented toxic data, such as civil comments data converted into instructions, and anthropic-redteam data augmented with prosocial tags
* Data provided by the LAION community that might include NSFW prompt
* Synthetic depression data generated from a public depression bag of words dataset using https://huggingface.co/pszemraj/flan-t5-large-grammar-synthesis.
* A model trained on the OIG-moderation dataset can be used to provide moderation labels, and the bot providers can choose to then block responses from their chatbots based on these labels. If a bot provider's policy for example permits sexual content, but prohibits PII eliciting text, they can hopefully do so with the output of a model trained on this data.
* The tags consist of (a) Base prosocial tags: casual, possibly needs caution, probably needs caution, needs caution, needs intervention and (b) Additional tags: abuse related, personal information related, sexual content, hate.
* An utterance can have more than one tag. For example, a wikipedia article about pornography content might be tagged: needs caution | sexual content.
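Since the tags are joined with `|` when an utterance carries several (as in the `needs caution | sexual content` example above), downstream code typically splits them back into individual labels before training a multi-label classifier. A minimal sketch:

```python
def parse_tags(label_string):
    """Split a combined moderation label like
    'needs caution | sexual content' into individual tags."""
    return [tag.strip() for tag in label_string.split("|") if tag.strip()]

tags = parse_tags("needs caution | sexual content")
```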
## Acknowledgement
* We would like to thank all the following people for their amazing contributions: @Rallio, @Summer, @Iamiakk, @Jue, @yp_yurilee, @Jjmachan, @Coco.han, @Pszemraj, and many others.
* We would like to thank Together.xyz for testing the v0.1 data for effectiveness and their dedication to the open source community.
* We would like to thank AI Horde and user @Db0 for their incredible contribution of filtered data that were flagged as unethical.
## Disclaimer
* These datasets contain synthetic data and in some cases data that includes NSFW subject matter and triggering text such as toxic/offensive/trolling things. If you are concerned about the presence of this type of material in the dataset please make sure you carefully inspect each of the entries and filter appropriately. Our goal is for the model to be as helpful and non-toxic as possible and we are actively evaluating ways to help create models that can detect potentially unwanted or problematic instructions or content.
## Risk Factors
* While we acknowledge that this dataset can be modified to train a model to generate unsafe text, it is important to release this publicly as a resource for both researchers and those building production agents to train detection models.
## BY ACCESSING THIS DATASET YOU AGREE YOU ARE 18 YEARS OLD OR OLDER AND UNDERSTAND THE RISKS OF USING THIS DATASET. | [
-0.3434784412384033,
-1.0348221063613892,
0.2106119990348816,
0.09105110168457031,
-0.35072436928749084,
-0.4796992838382721,
-0.08384197950363159,
-0.3358412981033325,
-0.002764784963801503,
0.837390124797821,
-0.7335614562034607,
-0.8668959140777588,
-0.443230003118515,
0.081503391265869... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
pcuenq/face_synthetics | pcuenq | 2023-03-13T09:37:52Z | 60 | 1 | null | [
"region:us"
] | 2023-03-13T09:37:52Z | 2023-03-12T21:37:41.000Z | 2023-03-12T21:37:41 | ---
dataset_info:
features:
- name: image
dtype: image
- name: image_seg
dtype: image
- name: landmarks
dtype: string
splits:
- name: train
num_bytes: 33730885609.0
num_examples: 100000
download_size: 34096881533
dataset_size: 33730885609.0
---
# Dataset Card for `face_synthetics`
This is a copy of [Microsoft FaceSynthetics dataset](https://github.com/microsoft/FaceSynthetics), uploaded to Hugging Face Datasets for convenience.
Please, refer to the original [license](LICENSE.txt), which we replicate in this repo.
The dataset was uploaded using the following code, which assumes the original `zip` file was uncompressed to `/data/microsoft_face_synthetics`:
```Python
from datasets import Dataset
from pathlib import Path
from PIL import Image
face_synthetics = Path("/data/microsoft_face_synthetics")
def entry_for_id(entry_id):
    if isinstance(entry_id, int):
entry_id = f"{entry_id:06}"
image = Image.open(face_synthetics/f"{entry_id}.png")
image_seg = Image.open(face_synthetics/f"{entry_id}_seg.png")
with open(face_synthetics/f"{entry_id}_ldmks.txt") as f:
landmarks = f.read()
return {
"image": image,
"image_seg": image_seg,
"landmarks": landmarks,
}
def generate_entries():
for x in range(100000):
yield entry_for_id(x)
ds = Dataset.from_generator(generate_entries)
ds.push_to_hub('pcuenq/face_synthetics')
```
Note that `image_seg`, the segmented images, appear to be black because each pixel contains a number between `0` and `18` corresponding to the different categories; see the [original README](https://github.com/microsoft/FaceSynthetics) for details. We haven't created visualization code yet.
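As a starting point for such visualization, each class id (0–18) can be mapped to a color. This is a dependency-free sketch with an arbitrary palette; with PIL you might instead attach a palette directly to the label image, depending on its mode:

```python
import random

def make_palette(n_classes=19, seed=0):
    """Deterministically generate one RGB color per class (class 0 stays black)."""
    rng = random.Random(seed)
    palette = [(0, 0, 0)]
    while len(palette) < n_classes:
        palette.append((rng.randrange(256), rng.randrange(256), rng.randrange(256)))
    return palette

def colorize(label_map, palette):
    """Map a 2D grid of class ids to RGB triples."""
    return [[palette[v] for v in row] for row in label_map]

palette = make_palette()
colored = colorize([[0, 1], [18, 0]], palette)
```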
| [
-0.3752441704273224,
-0.30511537194252014,
0.3251103460788727,
0.40836137533187866,
-0.34851914644241333,
0.13390573859214783,
0.02316749282181263,
-0.37257957458496094,
0.5145421028137207,
0.36085715889930725,
-0.9617076516151428,
-0.632831871509552,
-0.32598379254341125,
0.15046197175979... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
nastyboget/stackmix_hkr | nastyboget | 2023-03-23T18:42:10Z | 60 | 0 | null | [
"task_categories:image-to-text",
"size_categories:100K<n<1M",
"language:ru",
"license:mit",
"region:us"
] | 2023-03-23T18:42:10Z | 2023-03-20T15:41:33.000Z | 2023-03-20T15:41:33 | ---
license: mit
task_categories:
- image-to-text
language:
- ru
size_categories:
- 100K<n<1M
---
Dataset generated from HKR train set using Stackmix
===================================================
Number of images: 300000
Sources:
* [HKR dataset](https://github.com/abdoelsayed2016/HKR_Dataset)
* [Stackmix code](https://github.com/ai-forever/StackMix-OCR)
| [
-0.34388625621795654,
0.09640809893608093,
0.05967537313699722,
-0.02073555439710617,
-0.42404159903526306,
-0.1114320307970047,
0.4828936457633972,
-0.40431615710258484,
-0.07347072660923004,
0.8731136918067932,
-0.40288373827934265,
-0.5247037410736084,
-0.5978504419326782,
0.25312575697... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
ktgiahieu/maccrobat2018_2020 | ktgiahieu | 2023-05-21T10:39:53Z | 60 | 1 | null | [
"license:cc-by-4.0",
"region:us"
] | 2023-05-21T10:39:53Z | 2023-04-15T21:27:11.000Z | 2023-04-15T21:27:11 | ---
license: cc-by-4.0
---
Modified dataset from:
Caufield, J. Harry (2019): MACCROBAT. figshare. Dataset. https://doi.org/10.6084/m9.figshare.9764942.v2
Example training notebook: https://colab.research.google.com/drive/1OzCY782KJSF0FBDS0d1CoMhfp3-RtJMV?usp=sharing
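For token classification with `transformers`, the label list below is typically converted into `id2label`/`label2id` mappings. A sketch with an abridged list (the full set below has 82 entries; ids follow list order):

```python
LABELS = ["B-Activity", "B-Administration", "I-Activity", "I-Administration", "O"]

id2label = dict(enumerate(LABELS))
label2id = {label: i for i, label in enumerate(LABELS)}

# Entity types are the BIO labels with their B-/I- prefixes removed.
entity_types = sorted({label.split("-", 1)[1] for label in LABELS if label != "O"})
```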
Labels:
```
0: B-Activity
1: B-Administration
2: B-Age
3: B-Area
4: B-Biological_attribute
5: B-Biological_structure
6: B-Clinical_event
7: B-Color
8: B-Coreference
9: B-Date
10: B-Detailed_description
11: B-Diagnostic_procedure
12: B-Disease_disorder
13: B-Distance
14: B-Dosage
15: B-Duration
16: B-Family_history
17: B-Frequency
18: B-Height
19: B-History
20: B-Lab_value
21: B-Mass
22: B-Medication
23: B-Nonbiological_location
24: B-Occupation
25: B-Other_entity
26: B-Other_event
27: B-Outcome
28: B-Personal_background
29: B-Qualitative_concept
30: B-Quantitative_concept
31: B-Severity
32: B-Sex
33: B-Shape
34: B-Sign_symptom
35: B-Subject
36: B-Texture
37: B-Therapeutic_procedure
38: B-Time
39: B-Volume
40: B-Weight
41: I-Activity
42: I-Administration
43: I-Age
44: I-Area
45: I-Biological_attribute
46: I-Biological_structure
47: I-Clinical_event
48: I-Color
49: I-Coreference
50: I-Date
51: I-Detailed_description
52: I-Diagnostic_procedure
53: I-Disease_disorder
54: I-Distance
55: I-Dosage
56: I-Duration
57: I-Family_history
58: I-Frequency
59: I-Height
60: I-History
61: I-Lab_value
62: I-Mass
63: I-Medication
64: I-Nonbiological_location
65: I-Occupation
66: I-Other_entity
67: I-Other_event
68: I-Outcome
69: I-Personal_background
70: I-Qualitative_concept
71: I-Quantitative_concept
72: I-Severity
73: I-Shape
74: I-Sign_symptom
75: I-Subject
76: I-Texture
77: I-Therapeutic_procedure
78: I-Time
79: I-Volume
80: I-Weight
81: O
``` | [
-0.11597321927547455,
-0.466423362493515,
0.41360020637512207,
0.2546772062778473,
0.023683931678533554,
-0.211162731051445,
0.16653716564178467,
-0.27518460154533386,
0.7131709456443787,
0.4982708692550659,
-0.5886014699935913,
-0.989685595035553,
-0.9094796776771545,
0.148133784532547,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
AbeShinzo0708/AbeShinzo_voice_dataset | AbeShinzo0708 | 2023-04-17T07:40:01Z | 60 | 1 | null | [
"language:ja",
"ๅฎๅๆไธ",
"AbeShinzo",
"FormerJapanesePrimeMinister",
"voice",
"dataset",
"region:us"
] | 2023-04-17T07:40:01Z | 2023-04-17T07:36:16.000Z | 2023-04-17T07:36:16 | ---
language:
- ja
tags:
- 安倍晋三
- AbeShinzo
- FormerJapanesePrimeMinister
- voice
- dataset
pretty_name: 安倍晋三
--- | [
-0.1285335123538971,
-0.1861683875322342,
0.6529128551483154,
0.49436232447624207,
-0.19319400191307068,
0.23607441782951355,
0.36072009801864624,
0.05056373029947281,
0.5793656706809998,
0.7400146722793579,
-0.650810182094574,
-0.23784008622169495,
-0.7102247476577759,
-0.0478255338966846... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
AmazonScience/xtr-wiki_qa | AmazonScience | 2023-07-24T17:32:38Z | 60 | 1 | null | [
"task_categories:question-answering",
"task_categories:text-retrieval",
"task_ids:open-domain-qa",
"annotations_creators:machine-generated",
"language_creators:found",
"multilinguality:multilingual",
"multilinguality:translation",
"size_categories:100K<n<1M",
"source_datasets:extended|wiki_qa",
"l... | 2023-07-24T17:32:38Z | 2023-05-16T00:03:14.000Z | 2023-05-16T00:03:14 | ---
annotations_creators:
- machine-generated
language:
- ar
- es
- fr
- de
- hi
- it
- ja
- nl
- pt
language_creators:
- found
license_details: https://huggingface.co/datasets/AmazonScience/xtr-wiki_qa/blob/main/LICENSE.md
multilinguality:
- multilingual
- translation
pretty_name: xtr-wiki_qa
size_categories:
- 100K<n<1M
source_datasets:
- extended|wiki_qa
tags:
- as2
- answer sentence selection
- text retrieval
- question answering
task_categories:
- question-answering
- text-retrieval
task_ids:
- open-domain-qa
license: cdla-permissive-2.0
---
# Xtr-WikiQA
## Table of Contents
- [Dataset Card Creation Guide](#dataset-card-creation-guide)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Source Data](#source-data)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [Amazon Science](https://www.amazon.science/publications/cross-lingual-knowledge-distillation-for-answer-sentence-selection-in-low-resource-languages)
- **Paper:** [Cross-Lingual Knowledge Distillation for Answer Sentence Selection in Low-Resource Languages](https://aclanthology.org/2023.findings-acl.885/)
- **Point of Contact:** [Yoshitomo Matsubara](yomtsub@amazon.com)
### Dataset Summary
***Xtr-WikiQA*** is an Answer Sentence Selection (AS2) dataset in 9 non-English languages, proposed in our paper accepted at ACL 2023 (Findings): [**Cross-Lingual Knowledge Distillation for Answer Sentence Selection in Low-Resource Languages**](https://aclanthology.org/2023.findings-acl.885/).
This dataset is based on an English AS2 dataset, WikiQA ([Original](https://msropendata.com/datasets/21032bb1-88bd-4656-9570-3172ae1757f0), [Hugging Face](https://huggingface.co/datasets/wiki_qa)).
For translations, we used [Amazon Translate](https://aws.amazon.com/translate/).
### Languages
- Arabic (ar)
- Spanish (es)
- French (fr)
- German (de)
- Hindi (hi)
- Italian (it)
- Japanese (ja)
- Dutch (nl)
- Portuguese (pt)
File location: [`tsv/`](https://huggingface.co/datasets/AmazonScience/xtr-wiki_qa/tree/main/tsv)
## Dataset Structure
### Data Instances
This is an example instance from the Arabic training split of Xtr-WikiQA dataset.
```
{
"QuestionID": "Q1",
"Question": "ููู ุชุชุดูู ุงููููู ุงูุฌููุฏูุฉุ",
"DocumentID": "D1",
"DocumentTitle": "ููู ุฌููุฏู",
"SentenceID": "D1-0",
"Sentence": "ููู ุฌููุฏู ู
ุบู
ูุฑ ุฌุฒุฆููุง ุนูู ููุฑ ุจูุฑูุชู ู
ูุฑููู ุงูุฌููุฏู.",
"Label": 0
}
```
All the translated instances in the tsv files are listed in the same order as the original (native) instances in the WikiQA dataset.
For example, the 2nd instance in [`tsv/ar-train.tsv`](https://huggingface.co/datasets/AmazonScience/xtr-wiki_qa/blob/main/tsv/ar-train.tsv) (Arabic-translated from English)
corresponds to the 2nd instance in [`WikiQA-train.tsv`](https://msropendata.com/datasets/21032bb1-88bd-4656-9570-3172ae1757f0) (English).
### Data Fields
Each instance (a QA pair) consists of the following fields:
- `QuestionID`: Question ID (str)
- `Question`: Question to be answered (str)
- `DocumentID`: Document ID (str)
- `DocumentTitle`: Document title (str)
- `SentenceID`: Sentence ID (str)
- `Sentence`: Answer sentence in the document (str)
- `Label`: Label indicating whether the sentence correctly answers the question (int, 1: correct, 0: incorrect)
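As a quick sanity check on the field layout above, the files can be read with `pandas`. The inline TSV below is a hypothetical two-row stand-in that only mirrors the column schema; in practice you would point `read_csv` at one of the repository files, e.g. `tsv/ar-train.tsv`:

```python
import io
import pandas as pd

# Hypothetical stand-in with the same columns as the real TSV files;
# swap the StringIO for a path such as "tsv/ar-train.tsv" to use real data.
tsv = (
    "QuestionID\tQuestion\tDocumentID\tDocumentTitle\tSentenceID\tSentence\tLabel\n"
    "Q1\thow are glacier caves formed?\tD1\tGlacier cave\tD1-0\tA partly submerged glacier cave.\t0\n"
    "Q1\thow are glacier caves formed?\tD1\tGlacier cave\tD1-3\tIce is melted by geothermal heat.\t1\n"
)
df = pd.read_csv(io.StringIO(tsv), sep="\t")

# Keep only sentences labeled as correct answers (Label == 1).
positives = df[df["Label"] == 1]
print(len(df), len(positives))  # -> 2 1
```

The same filter applies unchanged to any of the nine language splits, since every file shares this schema.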
### Data Splits
| | | **#Questions** | | | | **#Sentences** | |
|-------------------|------------:|---------------:|---------:|---|----------:|---------------:|---------:|
| | **train** | **dev** | **test** | | **train** | **dev** | **test** |
| **Each language** | 873 | 126 | 243 | | 8,671 | 1,130 | 2,351 |
See [our paper](#citation-information) for more details about the statistics of the datasets.
## Dataset Creation
### Source Data
The source of Xtr-WikiQA dataset is [WikiQA](https://msropendata.com/datasets/21032bb1-88bd-4656-9570-3172ae1757f0).
## Additional Information
### Licensing Information
[CDLA-Permissive-2.0](LICENSE.md)
### Citation Information
```bibtex
@inproceedings{gupta2023cross-lingual,
title={{Cross-Lingual Knowledge Distillation for Answer Sentence Selection in Low-Resource Languages}},
author={Gupta, Shivanshu and Matsubara, Yoshitomo and Chadha, Ankit and Moschitti, Alessandro},
booktitle={Findings of the Association for Computational Linguistics: ACL 2023},
pages={14078--14092},
year={2023}
}
```
### Contributions
- [Shivanshu Gupta](https://huggingface.co/shivanshu)
- [Yoshitomo Matsubara](https://huggingface.co/yoshitomo-matsubara)
- Ankit Chadha
- Alessandro Moschitti | [embedding vector truncated] |
argilla/comparison-data-falcon-with-feedback | argilla | 2023-06-07T14:38:44Z | 60 | 2 | null | [
"size_categories:1K<n<10K",
"rlfh",
"argilla",
"human-feedback",
"region:us"
] | 2023-06-07T14:38:44Z | 2023-06-07T13:54:15.000Z | 2023-06-07T13:54:15 | ---
size_categories: 1K<n<10K
tags:
- rlfh
- argilla
- human-feedback
---
# Dataset Card for comparison-data-falcon-with-feedback
This dataset has been created with [Argilla](https://docs.argilla.io).
As shown in the sections below, this dataset can be loaded into Argilla as explained in [Load with Argilla](#load-with-argilla), or used directly with the `datasets` library in [Load with `datasets`](#load-with-datasets).
## Dataset Description
- **Homepage:** https://argilla.io
- **Repository:** https://github.com/argilla-io/argilla
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset contains:
* A dataset configuration file named `argilla.cfg`, conforming to the Argilla dataset format. This configuration file will be used to configure the dataset when using the `FeedbackDataset.from_huggingface` method in Argilla.
* Dataset records in a format compatible with HuggingFace `datasets`. These records will be loaded automatically when using `FeedbackDataset.from_huggingface` and can be loaded independently using the `datasets` library via `load_dataset`.
* The [annotation guidelines](#annotation-guidelines) that have been used for building and curating the dataset, if they've been defined in Argilla.
### Load with Argilla
To load with Argilla, you'll just need to install Argilla as `pip install argilla --upgrade` and then use the following code:
```python
import argilla as rg
ds = rg.FeedbackDataset.from_huggingface("argilla/comparison-data-falcon-with-feedback")
```
### Load with `datasets`
To load this dataset with `datasets`, you'll just need to install `datasets` as `pip install datasets --upgrade` and then use the following code:
```python
from datasets import load_dataset
ds = load_dataset("argilla/comparison-data-falcon-with-feedback")
```
### Supported Tasks and Leaderboards
This dataset can contain [multiple fields, questions and responses](https://docs.argilla.io/en/latest/guides/llms/conceptual_guides/data_model.html) so it can be used for different NLP tasks, depending on the configuration. The dataset structure is described in the [Dataset Structure section](#dataset-structure).
There are no leaderboards associated with this dataset.
### Languages
[More Information Needed]
## Dataset Structure
### Data in Argilla
The dataset is created in Argilla with: **fields**, **questions**, and **guidelines**.
The **fields** are the dataset records themselves; for the moment, only text fields are supported. These are the ones that will be used to provide responses to the questions.
| Field Name | Title | Type | Required | Markdown |
| ---------- | ----- | ---- | -------- | -------- |
| instruction | User instruction | TextField | True | False |
| response-1 | Response-1 | TextField | True | False |
| response-2 | Response-2 | TextField | True | False |
The **questions** are the questions that will be asked to the annotators. They can be of different types, such as rating, text, single choice, or multiple choice.
| Question Name | Title | Type | Required | Description | Values/Labels |
| ------------- | ----- | ---- | -------- | ----------- | ------------- |
| choose-best | Choose the best response: | RatingQuestion | True | Choose the most helpful, harmless, and truthful response. Select 1 for response-1, 2 for response-2, or discard if both are equally good/bad. | [1, 2] |
Finally, the **guidelines** are just a plain string that can be used to provide instructions to the annotators. Find those in the [annotation guidelines](#annotation-guidelines) section.
### Data Instances
An example of a dataset instance in Argilla looks as follows:
```json
{
"external_id": "518",
"fields": {
"instruction": "What is Depreciation",
"response-1": "Depreciation is the drop in value of an asset due to wear and tear, age and obsolescence (going out of date) as recorded in an organization\u0027s financial records.",
"response-2": "What is Depreciation \u2013 10 Important Facts to Know?\nWhen a business buys a new asset, the purchase price of that asset is depreciated over time to reflect its usage and eventual obsolescence. Depreciation expense can be a tax deductible expense and is usually a non-cash expense reported on a company\u2019s income statement and balance sheet. The amount of depreciation expense a company reports each year is the difference between the original purchase price of the asset and what the current value of that asset might be. Here are 10 important facts to know about depreciation:\n1. Depreciation is a non-cash expense. It is an expense that is reported in a business\u2019s income statement and balance sheet and not a cash flow expense.\n2. Depreciation is an accounting standard and it is required to be disclosed in a business\u2019s financial statements.\n3. The amount of depreciation is usually a tax expense and not a cash expense reported on a company\u2019s income statement"
},
"responses": [
{
"status": "submitted",
"user_id": null,
"values": {
"choose-best": {
"value": 1
}
}
}
]
}
```
While the same record in HuggingFace `datasets` looks as follows:
```json
{
"choose-best": {
"status": [
"submitted"
],
"user_id": [
null
],
"value": [
1
]
},
"external_id": "518",
"instruction": "What is Depreciation",
"response-1": "Depreciation is the drop in value of an asset due to wear and tear, age and obsolescence (going out of date) as recorded in an organization\u0027s financial records.",
"response-2": "What is Depreciation \u2013 10 Important Facts to Know?\nWhen a business buys a new asset, the purchase price of that asset is depreciated over time to reflect its usage and eventual obsolescence. Depreciation expense can be a tax deductible expense and is usually a non-cash expense reported on a company\u2019s income statement and balance sheet. The amount of depreciation expense a company reports each year is the difference between the original purchase price of the asset and what the current value of that asset might be. Here are 10 important facts to know about depreciation:\n1. Depreciation is a non-cash expense. It is an expense that is reported in a business\u2019s income statement and balance sheet and not a cash flow expense.\n2. Depreciation is an accounting standard and it is required to be disclosed in a business\u2019s financial statements.\n3. The amount of depreciation is usually a tax expense and not a cash expense reported on a company\u2019s income statement"
}
```
### Data Fields
Among the dataset fields, we differentiate between the following:
* **Fields:** These are the dataset records themselves; for the moment, only text fields are supported. These are the ones that will be used to provide responses to the questions.
* **instruction** is of type `TextField`.
* **response-1** is of type `TextField`.
* **response-2** is of type `TextField`.
* **Questions:** These are the questions that will be asked to the annotators. They can be of different types, such as rating, text, single choice, or multiple choice.
* **choose-best** is of type `RatingQuestion` with the following allowed values [1, 2], and description "Choose the most helpful, harmless, and truthful response. Select 1 for response-1, 2 for response-2, or discard if both are equally good/bad.".
Additionally, we also have one more field which is optional and is the following:
* **external_id:** This is an optional field that can be used to provide an external ID for the dataset record. This can be useful if you want to link the dataset record to an external resource, such as a database or a file.
### Data Splits
The dataset contains a single split, which is `train`.
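The nested `choose-best` layout shown earlier can be flattened to tally which response annotators preferred. The single hard-coded record below is just the example from this card; in practice you would iterate over `load_dataset("argilla/comparison-data-falcon-with-feedback")["train"]` instead:

```python
from collections import Counter

# The example record from this card, in the HuggingFace `datasets` layout;
# replace [record] with the loaded "train" split to tally the full dataset.
record = {
    "choose-best": {"status": ["submitted"], "user_id": [None], "value": [1]},
    "external_id": "518",
}

counts = Counter()
for rec in [record]:
    answers = rec["choose-best"]
    for status, value in zip(answers["status"], answers["value"]):
        if status == "submitted":  # skip pending/discarded annotations
            counts[f"response-{value}"] += 1

print(counts)  # -> Counter({'response-1': 1})
```

Note that `status`, `user_id`, and `value` are parallel lists because a record can collect responses from multiple annotators.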
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation guidelines
These guidelines are based on the paper [Training Language Models to Follow Instructions with Human Feedback](https://cdn.openai.com/papers/Training_language_models_to_follow_instructions_with_human_feedback.pdf)
**You are given a text-based description of a task, submitted by a user.**
This task description may be in the form of an explicit instruction (e.g. "Write a story about a wise frog."). The task may also be specified indirectly, for example by using several examples of the desired behavior (e.g. given a sequence of movie reviews followed by their sentiment, followed by one movie review without a sentiment, you can assume the task is to predict the sentiment of the final review), or by generating the start of a desired output (e.g. given "There once was a wise frog named Julius," you can assume the task is to continue the story).
**You will also be given several text outputs, intended to help the user with their task.** Your job is to evaluate these outputs to ensure that they are helpful, truthful, and harmless, and select the best according to these criteria.
If **none of the outputs is correct or conforms to the criteria, or if they are exactly equal, use the discard option instead of ranking them**.
For most tasks, being truthful and harmless is more important than being helpful.
By **helpful**, we mean that the output should follow the user's intention and help the user solve their task. Some examples of helpful behavior:
- Writing in clear language.
- Answering the question they meant to ask, even if they mis-asked it.
- Being sensitive to internationality (e.g., "football" shouldn't mean American football, and "the president" doesn't necessarily mean the US president).
- If an instruction is too confusing, asking for clarification and explaining why the instruction is confusing.
- Not giving overly long or rambling answers, or repeating information from the question.
- Not assuming extraneous extra context outside of what's given (besides things like facts about the world), unless that's an implied part of the task. For example, if asked to "write a polite email response to this email: email body," the output shouldn't assume "I can't make it this time, but am free next weekend." But if asked to "Write an email to Socrates," then you can safely assume the output can make up a context.
By **truthful**, we mean that the output contains accurate information and doesn't mislead the user. Some examples of truthful behavior:
- On tasks like summarization, where the output should only use information from input, not making up details that are not part of the input description.
- Not producing clearly false information about the world (e.g., making up facts or promoting conspiracies). For example, the output should not state that Hillary Clinton has served time in prison.
- Avoid generating misleading information or information with questionable authenticity. For example, if the input asks "Why did Hillary Clinton go to jail?", the output should not say "It's not totally clear," but rather should refute the premise of the question.
By **harmless**, we mean that the output should not cause physical, psychological, or social harm to people; damage to or loss of equipment or property; damage to the environment; or harm to institutions or resources necessary to human well-being. Some examples of harmless behavior:
- Treating other humans with kindness, respect, and consideration; not denigrating members of certain groups or using biased language against a particular group.
- Not generating abusive, threatening, or offensive language or promoting violence.
- Not writing sexual or violent content if it's not asked for.
- Not giving bad real-world advice or promoting illegal activity.
Evaluating model outputs may involve making trade-offs between these criteria. These trade-offs will depend on the task.
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] | [embedding vector truncated] |
DFKI-SLT/conll2012_ontonotesv5 | DFKI-SLT | 2023-07-13T15:02:27Z | 60 | 0 | null | [
"region:us"
] | 2023-07-13T15:02:27Z | 2023-07-13T15:02:11.000Z | 2023-07-13T15:02:11 | Entry not found | [embedding vector truncated] |
ziozzang/EverythingLM-data-V2-Ko | ziozzang | 2023-08-23T07:03:47Z | 60 | 8 | null | [
"language:ko",
"license:mit",
"region:us"
] | 2023-08-23T07:03:47Z | 2023-08-23T06:53:09.000Z | 2023-08-23T06:53:09 | ---
license: mit
language:
- ko
---
# Translated into Korean with DeepL
All texts are machine-translated with DeepL.
- Known issue: some data items are missing because of the DeepL plan and the processing method. A very cheap plan was used; all data was merged into a single file and then split back with a small script and some manual work.
- This is a sample/test run of dataset creation with DeepL.
- Original Dataset: totally-not-an-llm/EverythingLM-data-V2
# EverythingLM V2 Dataset
**EverythingLM V2** is a diverse instruct dataset consisting of 1k human-assistant conversations, generated using principles from both evol-instruct and Orca. The dataset encompasses a wide array of topics and interactions.
### Differences from V1:
- All data in V2 is generated by GPT4
- Higher quality dataset generation pipeline:
- More humanlike seed prompts
- Fixed some bugs in the script
- More diverse creative writing
- More diverse seed prompts in general
- Attempt not to overfit the model on complex instructions by occasionally skipping evol
### Cost:
Reproducing this dataset would cost roughly $40.
### Instruction Categories:
- Reasoning
- Creative Writing
- General Knowledge
- Brainstorming
- Search Query
- Coding
- Basic Instruct
We also leverage various system prompts for evol-instruct and for responding to prompts.
This dataset has also been filtered to remove OpenAI alignment.
### How it stands out:
- Long, detailed outputs
- Humanlike creativity
- CoT reasoning
- Complex & challenging tasks
### Plans:
- Train Llama 7b & 13b models (13b model V1 trained)
- Train Llama 70b QLoRA
- Generate V2 of the dataset, with more categories and GPT-4 (DONE) ✅
Included in this repo is the script to generate the dataset. | [embedding vector truncated] |