id stringlengths 2 115 | author stringlengths 2 42 ⌀ | last_modified timestamp[us, tz=UTC] | downloads int64 0 8.87M | likes int64 0 3.84k | paperswithcode_id stringlengths 2 45 ⌀ | tags list | lastModified timestamp[us, tz=UTC] | createdAt stringlengths 24 24 | key stringclasses 1 value | created timestamp[us] | card stringlengths 1 1.01M | embedding list | library_name stringclasses 21 values | pipeline_tag stringclasses 27 values | mask_token null | card_data null | widget_data null | model_index null | config null | transformers_info null | spaces null | safetensors null | transformersInfo null | modelId stringlengths 5 111 ⌀ | embeddings list |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
hf-internal-testing/test-dataset | hf-internal-testing | 2022-09-05T16:10:12Z | 149 | 0 | null | [
"region:us"
] | 2022-09-05T16:10:12Z | 2022-03-02T23:29:22.000Z | 2022-03-02T23:29:22 | Entry not found | [
-0.3227649927139282,
-0.225684255361557,
0.862226128578186,
0.43461498618125916,
-0.5282987952232361,
0.7012963891029358,
0.7915717363357544,
0.07618629932403564,
0.7746025919914246,
0.2563219666481018,
-0.7852816581726074,
-0.2257382869720459,
-0.9104480743408203,
0.5715669393539429,
-0... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
huggingFaceUser02/air21_grp13_inference_results | huggingFaceUser02 | 2021-06-15T19:15:17Z | 149 | 0 | null | [
"region:us"
] | 2021-06-15T19:15:17Z | 2022-03-02T23:29:22.000Z | 2022-03-02T23:29:22 | Entry not found | [
-0.3227649927139282,
-0.225684255361557,
0.862226128578186,
0.43461498618125916,
-0.5282987952232361,
0.7012963891029358,
0.7915717363357544,
0.07618629932403564,
0.7746025919914246,
0.2563219666481018,
-0.7852816581726074,
-0.2257382869720459,
-0.9104480743408203,
0.5715669393539429,
-0... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
huggingFaceUser02/air21_grp13_tokenized_results | huggingFaceUser02 | 2021-06-10T11:00:20Z | 149 | 0 | null | [
"region:us"
] | 2021-06-10T11:00:20Z | 2022-03-02T23:29:22.000Z | 2022-03-02T23:29:22 | Entry not found | [
-0.3227649927139282,
-0.225684255361557,
0.862226128578186,
0.43461498618125916,
-0.5282987952232361,
0.7012963891029358,
0.7915717363357544,
0.07618629932403564,
0.7746025919914246,
0.2563219666481018,
-0.7852816581726074,
-0.2257382869720459,
-0.9104480743408203,
0.5715669393539429,
-0... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
huggingartists/hillsong-worship | huggingartists | 2021-08-30T18:36:51Z | 149 | 0 | null | [
"region:us"
] | 2021-08-30T18:36:51Z | 2022-03-02T23:29:22.000Z | 2022-03-02T23:29:22 | Entry not found | [
-0.3227649927139282,
-0.225684255361557,
0.862226128578186,
0.43461498618125916,
-0.5282987952232361,
0.7012963891029358,
0.7915717363357544,
0.07618629932403564,
0.7746025919914246,
0.2563219666481018,
-0.7852816581726074,
-0.2257382869720459,
-0.9104480743408203,
0.5715669393539429,
-0... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
hyeonduck/your_dataset_name | hyeonduck | 2021-12-16T08:19:27Z | 149 | 0 | null | [
"region:us"
] | 2021-12-16T08:19:27Z | 2022-03-02T23:29:22.000Z | 2022-03-02T23:29:22 | Entry not found | [
-0.3227649927139282,
-0.225684255361557,
0.862226128578186,
0.43461498618125916,
-0.5282987952232361,
0.7012963891029358,
0.7915717363357544,
0.07618629932403564,
0.7746025919914246,
0.2563219666481018,
-0.7852816581726074,
-0.2257382869720459,
-0.9104480743408203,
0.5715669393539429,
-0... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
public-data/sample-images-TADNE | public-data | 2022-01-23T23:03:47Z | 149 | 0 | null | [
"region:us"
] | 2022-01-23T23:03:47Z | 2022-03-02T23:29:22.000Z | 2022-03-02T23:29:22 | Entry not found | [
-0.3227649927139282,
-0.225684255361557,
0.862226128578186,
0.43461498618125916,
-0.5282987952232361,
0.7012963891029358,
0.7915717363357544,
0.07618629932403564,
0.7746025919914246,
0.2563219666481018,
-0.7852816581726074,
-0.2257382869720459,
-0.9104480743408203,
0.5715669393539429,
-0... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
imflash217/github-issues | imflash217 | 2022-02-28T23:47:32Z | 149 | 0 | null | [
"region:us"
] | 2022-02-28T23:47:32Z | 2022-03-02T23:29:22.000Z | 2022-03-02T23:29:22 | Entry not found | [
-0.3227649927139282,
-0.225684255361557,
0.862226128578186,
0.43461498618125916,
-0.5282987952232361,
0.7012963891029358,
0.7915717363357544,
0.07618629932403564,
0.7746025919914246,
0.2563219666481018,
-0.7852816581726074,
-0.2257382869720459,
-0.9104480743408203,
0.5715669393539429,
-0... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
ixxi/my_v1 | ixxi | 2022-02-07T15:39:44Z | 149 | 0 | null | [
"region:us"
] | 2022-02-07T15:39:44Z | 2022-03-02T23:29:22.000Z | 2022-03-02T23:29:22 | Entry not found | [
-0.3227649927139282,
-0.225684255361557,
0.862226128578186,
0.43461498618125916,
-0.5282987952232361,
0.7012963891029358,
0.7915717363357544,
0.07618629932403564,
0.7746025919914246,
0.2563219666481018,
-0.7852816581726074,
-0.2257382869720459,
-0.9104480743408203,
0.5715669393539429,
-0... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
jacobbieker/open-crab-sample | jacobbieker | 2022-02-11T11:56:00Z | 149 | 0 | null | [
"region:us"
] | 2022-02-11T11:56:00Z | 2022-03-02T23:29:22.000Z | 2022-03-02T23:29:22 | astrophysics
astroparticle
simulation
timeseries
point-cloud
# Dataset Card for FACT Open Crab Sample
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** https://factdata.app.tu-dortmund.de/
- **Repository:** [Needs More Information]
- **Paper:** https://iopscience.iop.org/article/10.1088/1748-0221/8/06/P06008/pdf, https://iopscience.iop.org/article/10.1088/1748-0221/9/10/P10012/pdf
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [Needs More Information]
### Dataset Summary
This is a mirror of the Open Crab Sample released by the FACT collaboration. It contains simulations of astroparticle events as seen by the FACT telescope, generated with the CORSIKA simulation program, as well as a few nights of observations of the Crab Nebula taken over 2013 and 2014. The simulation data is provided in two formats: the photon-stream format, and a preprocessed version containing extracted features and point clouds cleaned with DBSCAN at various parameter settings. The observations are raw data only, with no cleaning or feature extraction.
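As a rough illustration of the kind of point-cloud cleaning the summary mentions, the sketch below removes DBSCAN noise points from a toy 2D cloud. This is a minimal pure-Python sketch, not the FACT collaboration's actual pipeline; the points, `eps`, and `min_samples` values are assumptions for illustration only.

```python
# Minimal DBSCAN-style noise removal on a toy point cloud.
# NOT the FACT pipeline; all parameters here are illustrative assumptions.
import math

def dbscan_clean(points, eps=1.0, min_samples=3):
    """Return only the points that belong to some dense cluster
    (i.e. drop DBSCAN noise points)."""
    def neighbors(i):
        return [j for j, q in enumerate(points)
                if math.dist(points[i], q) <= eps]

    labels = [None] * len(points)   # None = unvisited, -1 = noise
    cluster = 0
    for i in range(len(points)):
        if labels[i] is not None:
            continue
        seeds = neighbors(i)
        if len(seeds) < min_samples:
            labels[i] = -1          # provisionally noise
            continue
        labels[i] = cluster
        queue = [j for j in seeds if j != i]
        while queue:
            j = queue.pop()
            if labels[j] == -1:
                labels[j] = cluster  # border point, reclaimed from noise
            if labels[j] is not None:
                continue
            labels[j] = cluster
            j_seeds = neighbors(j)
            if len(j_seeds) >= min_samples:
                queue.extend(j_seeds)  # core point: expand the cluster
        cluster += 1
    return [p for p, lab in zip(points, labels) if lab != -1]

# A tight cluster of 4 points plus one far-away outlier:
cloud = [(0.0, 0.0), (0.1, 0.0), (0.0, 0.1), (0.1, 0.1), (10.0, 10.0)]
cleaned = dbscan_clean(cloud, eps=0.5, min_samples=3)  # outlier dropped
```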
### Supported Tasks and Leaderboards
- 'classification': Classification of simulated events as either hadron or gamma events.
- 'regression': Predicting the initial energy of the simulated events, or the position in the night sky where the original particle originated.
### Languages
[Needs More Information]
## Dataset Structure
### Data Instances
[Needs More Information]
### Data Fields
[Needs More Information]
### Data Splits
[Needs More Information]
## Dataset Creation
### Curation Rationale
The goal of the Open Crab Sample is to open up astroparticle data for exploring different ways of doing analysis.
### Source Data
#### Initial Data Collection and Normalization
The initial simulated data was generated by the CORSIKA simulation program. The observations were taken by the FACT telescope on La Palma between 2013 and 2014. The data is not normalized.
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
The simulations were annotated from the simulation ground truth, while the observations have no ground-truth labels.
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
[Needs More Information]
### Citation Information
[Needs More Information] | [
-0.5636676549911499,
-0.5543457865715027,
0.7857710123062134,
-0.018043089658021927,
-0.5601067543029785,
-0.19208210706710815,
-0.15163755416870117,
-0.27599677443504333,
1.0294989347457886,
0.5962145924568176,
-0.8161333203315735,
-0.5512980222702026,
-0.4201556444168091,
-0.037958286702... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
jakemarcus/MATH | jakemarcus | 2021-09-22T16:00:35Z | 149 | 0 | null | [
"region:us"
] | 2021-09-22T16:00:35Z | 2022-03-02T23:29:22.000Z | 2022-03-02T23:29:22 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622264862060547,
0.43461528420448303,
-0.52829909324646,
0.7012971639633179,
0.7915720343589783,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104477167129517,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
jamol1741/test_dataset | jamol1741 | 2021-09-10T11:12:46Z | 149 | 0 | null | [
"region:us"
] | 2021-09-10T11:12:46Z | 2022-03-02T23:29:22.000Z | 2022-03-02T23:29:22 | test | [
-0.3891207277774811,
-0.9635319709777832,
0.2210875153541565,
0.15607964992523193,
-0.07489927858114243,
0.0734468623995781,
0.25029265880584717,
-0.2309502363204956,
0.24025516211986542,
0.5704401135444641,
-0.4998493492603302,
-0.16452203691005707,
-0.5219650864601135,
0.2751986682415008... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
jel/covid | jel | 2022-02-15T01:34:31Z | 149 | 0 | null | [
"region:us"
] | 2022-02-15T01:34:31Z | 2022-03-02T23:29:22.000Z | 2022-03-02T23:29:22 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622264862060547,
0.43461528420448303,
-0.52829909324646,
0.7012971639633179,
0.7915720343589783,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104477167129517,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
jhqwqq/2 | jhqwqq | 2021-09-29T06:58:22Z | 149 | 0 | null | [
"region:us"
] | 2021-09-29T06:58:22Z | 2022-03-02T23:29:22.000Z | 2022-03-02T23:29:22 | Entry not found | [
-0.3227649927139282,
-0.225684255361557,
0.862226128578186,
0.43461498618125916,
-0.5282987952232361,
0.7012963891029358,
0.7915717363357544,
0.07618629932403564,
0.7746025919914246,
0.2563219666481018,
-0.7852816581726074,
-0.2257382869720459,
-0.9104480743408203,
0.5715669393539429,
-0... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
jianhong/dateset1 | jianhong | 2022-01-18T11:35:45Z | 149 | 0 | null | [
"region:us"
] | 2022-01-18T11:35:45Z | 2022-03-02T23:29:22.000Z | 2022-03-02T23:29:22 | Entry not found | [
-0.3227649927139282,
-0.225684255361557,
0.862226128578186,
0.43461498618125916,
-0.5282987952232361,
0.7012963891029358,
0.7915717363357544,
0.07618629932403564,
0.7746025919914246,
0.2563219666481018,
-0.7852816581726074,
-0.2257382869720459,
-0.9104480743408203,
0.5715669393539429,
-0... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
jianhong/dateset2 | jianhong | 2022-01-18T11:37:44Z | 149 | 0 | null | [
"region:us"
] | 2022-01-18T11:37:44Z | 2022-03-02T23:29:22.000Z | 2022-03-02T23:29:22 | Entry not found | [
-0.3227649927139282,
-0.225684255361557,
0.862226128578186,
0.43461498618125916,
-0.5282987952232361,
0.7012963891029358,
0.7915717363357544,
0.07618629932403564,
0.7746025919914246,
0.2563219666481018,
-0.7852816581726074,
-0.2257382869720459,
-0.9104480743408203,
0.5715669393539429,
-0... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
jiminsun/atc0_demo | jiminsun | 2022-02-24T01:39:29Z | 149 | 0 | null | [
"region:us"
] | 2022-02-24T01:39:29Z | 2022-03-02T23:29:22.000Z | 2022-03-02T23:29:22 | Entry not found | [
-0.3227649927139282,
-0.225684255361557,
0.862226128578186,
0.43461498618125916,
-0.5282987952232361,
0.7012963891029358,
0.7915717363357544,
0.07618629932403564,
0.7746025919914246,
0.2563219666481018,
-0.7852816581726074,
-0.2257382869720459,
-0.9104480743408203,
0.5715669393539429,
-0... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
tner/mit_movie_trivia | tner | 2022-07-18T10:24:52Z | 149 | 2 | null | [
"task_categories:token-classification",
"task_ids:named-entity-recognition",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"language:en",
"license:other",
"region:us"
] | 2022-07-18T10:24:52Z | 2022-07-16T11:12:14.000Z | 2022-07-16T11:12:14 | ---
language:
- en
license:
- other
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
task_categories:
- token-classification
task_ids:
- named-entity-recognition
pretty_name: MIT Movie
---
# Dataset Card for "tner/mit_movie_trivia"
## Dataset Description
- **Repository:** [T-NER](https://github.com/asahi417/tner)
- **Dataset:** MIT Movie
- **Domain:** Movie
- **Number of Entity:** 12
### Dataset Summary
MIT Movie NER dataset, formatted as part of the [TNER](https://github.com/asahi417/tner) project.
- Entity Types: `Actor`, `Plot`, `Opinion`, `Award`, `Year`, `Genre`, `Origin`, `Director`, `Soundtrack`, `Relationship`, `Character_Name`, `Quote`
## Dataset Structure
### Data Instances
An example of `train` looks as follows.
```
{
'tags': [0, 13, 14, 0, 0, 0, 3, 4, 4, 4, 4, 4, 4, 4, 4],
'tokens': ['a', 'steven', 'spielberg', 'film', 'featuring', 'a', 'bluff', 'called', 'devil', 's', 'tower', 'and', 'a', 'spectacular', 'mothership']
}
```
### Label ID
The label2id dictionary can be found [here](https://huggingface.co/datasets/tner/mit_movie_trivia/raw/main/dataset/label.json).
```python
{
"O": 0,
"B-Actor": 1,
"I-Actor": 2,
"B-Plot": 3,
"I-Plot": 4,
"B-Opinion": 5,
"I-Opinion": 6,
"B-Award": 7,
"I-Award": 8,
"B-Year": 9,
"B-Genre": 10,
"B-Origin": 11,
"I-Origin": 12,
"B-Director": 13,
"I-Director": 14,
"I-Genre": 15,
"I-Year": 16,
"B-Soundtrack": 17,
"I-Soundtrack": 18,
"B-Relationship": 19,
"I-Relationship": 20,
"B-Character_Name": 21,
"I-Character_Name": 22,
"B-Quote": 23,
"I-Quote": 24
}
```
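As an illustrative sketch (not part of the original card), the integer `tags` in a record can be decoded back into IOB label strings by inverting the `label2id` dictionary above. The helper `decode_tags` below is a hypothetical name introduced here for illustration:

```python
# Invert the label2id mapping shown above to decode integer tags
# back into IOB label strings (illustrative sketch, not an official API).
label2id = {
    "O": 0, "B-Actor": 1, "I-Actor": 2, "B-Plot": 3, "I-Plot": 4,
    "B-Opinion": 5, "I-Opinion": 6, "B-Award": 7, "I-Award": 8,
    "B-Year": 9, "B-Genre": 10, "B-Origin": 11, "I-Origin": 12,
    "B-Director": 13, "I-Director": 14, "I-Genre": 15, "I-Year": 16,
    "B-Soundtrack": 17, "I-Soundtrack": 18, "B-Relationship": 19,
    "I-Relationship": 20, "B-Character_Name": 21, "I-Character_Name": 22,
    "B-Quote": 23, "I-Quote": 24,
}
id2label = {i: label for label, i in label2id.items()}

def decode_tags(tags):
    """Map a list of integer tag ids to their IOB label strings."""
    return [id2label[t] for t in tags]

# The `tags` from the train example above:
example_tags = [0, 13, 14, 0, 0, 0, 3, 4, 4, 4, 4, 4, 4, 4, 4]
print(decode_tags(example_tags)[:3])  # ['O', 'B-Director', 'I-Director']
```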
### Data Splits
| name |train|validation|test|
|---------|----:|---------:|---:|
|mit_movie_trivia |6816 | 1000| 1953|
| [
-0.56097811460495,
-0.43477874994277954,
0.11098147928714752,
-0.16495878994464874,
-0.3870794475078583,
0.25804051756858826,
0.07164640724658966,
0.1884341984987259,
0.46943217515945435,
0.39201709628105164,
-0.6534345149993896,
-0.8539924621582031,
-0.6491678953170776,
0.1963397711515426... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
lighteval/boolq_helm | lighteval | 2023-05-25T12:28:12Z | 149 | 0 | null | [
"region:us"
] | 2023-05-25T12:28:12Z | 2023-05-04T09:56:35.000Z | 2023-05-04T09:56:35 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622264862060547,
0.43461528420448303,
-0.52829909324646,
0.7012971639633179,
0.7915720343589783,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104477167129517,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
roneneldan/TinyStoriesInstruct | roneneldan | 2023-05-18T21:20:35Z | 149 | 19 | null | [
"region:us"
] | 2023-05-18T21:20:35Z | 2023-05-12T23:44:15.000Z | 2023-05-12T23:44:15 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622264862060547,
0.43461528420448303,
-0.52829909324646,
0.7012971639633179,
0.7915720343589783,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104477167129517,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Ankursingh/openwebtext_10K | Ankursingh | 2023-07-27T01:47:37Z | 149 | 0 | null | [
"license:mpl-2.0",
"region:us"
] | 2023-07-27T01:47:37Z | 2023-07-27T01:43:35.000Z | 2023-07-27T01:43:35 | ---
license: mpl-2.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: val
path: data/val-*
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 49626451.28403377
num_examples: 10000
- name: val
num_bytes: 19885319.02951233
num_examples: 4007
download_size: 41616125
dataset_size: 69511770.31354609
---
| [
-0.1285335123538971,
-0.1861683875322342,
0.6529128551483154,
0.49436232447624207,
-0.19319400191307068,
0.23607441782951355,
0.36072009801864624,
0.05056373029947281,
0.5793656706809998,
0.7400146722793579,
-0.650810182094574,
-0.23784008622169495,
-0.7102247476577759,
-0.0478255338966846... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
argilla/llama-2-banking-fine-tune | argilla | 2023-07-28T06:24:22Z | 149 | 7 | null | [
"size_categories:n<1K",
"rlfh",
"argilla",
"human-feedback",
"region:us"
] | 2023-07-28T06:24:22Z | 2023-07-28T06:24:20.000Z | 2023-07-28T06:24:20 | ---
size_categories: n<1K
tags:
- rlfh
- argilla
- human-feedback
---
# Dataset Card for llama-2-banking-fine-tune
This dataset has been created with [Argilla](https://docs.argilla.io).
As shown in the sections below, this dataset can be loaded into Argilla as explained in [Load with Argilla](#load-with-argilla), or used directly with the `datasets` library in [Load with `datasets`](#load-with-datasets).
## Dataset Description
- **Homepage:** https://argilla.io
- **Repository:** https://github.com/argilla-io/argilla
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset contains:
* A dataset configuration file conforming to the Argilla dataset format named `argilla.yaml`. This configuration file will be used to configure the dataset when using the `FeedbackDataset.from_huggingface` method in Argilla.
* Dataset records in a format compatible with HuggingFace `datasets`. These records will be loaded automatically when using `FeedbackDataset.from_huggingface` and can be loaded independently using the `datasets` library via `load_dataset`.
* The [annotation guidelines](#annotation-guidelines) that have been used for building and curating the dataset, if they've been defined in Argilla.
### Load with Argilla
To load with Argilla, install Argilla with `pip install argilla --upgrade` and then use the following code:
```python
import argilla as rg
ds = rg.FeedbackDataset.from_huggingface("argilla/llama-2-banking-fine-tune")
```
### Load with `datasets`
To load this dataset with `datasets`, install the library with `pip install datasets --upgrade` and then use the following code:
```python
from datasets import load_dataset
ds = load_dataset("argilla/llama-2-banking-fine-tune")
```
### Supported Tasks and Leaderboards
This dataset can contain [multiple fields, questions and responses](https://docs.argilla.io/en/latest/guides/llms/conceptual_guides/data_model.html) so it can be used for different NLP tasks, depending on the configuration. The dataset structure is described in the [Dataset Structure section](#dataset-structure).
There are no leaderboards associated with this dataset.
### Languages
[More Information Needed]
## Dataset Structure
### Data in Argilla
The dataset is created in Argilla with: **fields**, **questions**, **suggestions**, and **guidelines**.
The **fields** are the dataset records themselves; for the moment, only text fields are supported. These are the fields the annotators will see when providing responses to the questions.
| Field Name | Title | Type | Required | Markdown |
| ---------- | ----- | ---- | -------- | -------- |
| request | User request | TextField | True | False |
| response-1 | Assistant response 1 | TextField | True | True |
| response-2 | Assistant response 2 | TextField | True | True |
The **questions** are the questions that will be asked to the annotators. They can be of different types, such as rating, text, single choice, or multiple choice.
| Question Name | Title | Type | Required | Description | Values/Labels |
| ------------- | ----- | ---- | -------- | ----------- | ------------- |
| preference | Order response based on your preference | RankingQuestion | True | 1 = most accurate and helpful, 2 = worst. Ties are allowed. | ['response-1', 'response-2'] |
| correct-response | If both responses are inaccurate, complete the response: | TextQuestion | True | N/A | N/A |
**✨ NEW** Additionally, we also have **suggestions**, which are linked to the existing questions and are named by appending "-suggestion" and "-suggestion-metadata" to the question names; they contain the value(s) of the suggestion and its metadata, respectively. The possible values are the same as in the table above.
Finally, the **guidelines** are just a plain string that can be used to provide instructions to the annotators. Find those in the [annotation guidelines](#annotation-guidelines) section.
### Data Instances
An example of a dataset instance in Argilla looks as follows:
```json
{
"external_id": null,
"fields": {
"request": "I tried to make a transfer but it failed",
"response-1": " Thank you for choosing the Bank of America for your financial needs! I\u0027m so sorry to hear that your transfer failed. Can you please provide me with more details about the transfer, such as the amount and the account it was being transferred from/to? This will help me better understand the issue and assist you in resolving it as quickly as possible. Additionally, have you tried checking your account balance or recent transactions to see if the transfer has been processed? If you have any further questions or concerns, please don\u0027t hesitate to ask. Thank you for trusting the Bank of America with your financial matters!",
"response-2": " Hello there! Thank you for choosing the Bank of America for your financial needs. I\u0027m so sorry to hear that your transfer failed. Can you please provide me with more details about the transfer, such as the amount and the account it was being transferred from/to? Additionally, do you have any error messages or confirmations that you received after attempting the transfer? This information will help me better understand the issue and assist you in resolving it as quickly as possible. Thank you for trusting the Bank of America with your financial matters."
},
"id": null,
"metadata": {},
"responses": [],
"suggestions": [
{
"agent": null,
"question_id": "b80fb550-1add-4ad6-93c9-b403e6342306",
"question_name": "preference",
"score": null,
"type": null,
"value": [
{
"rank": 1,
"value": "response-2"
},
{
"rank": 2,
"value": "response-1"
}
]
}
]
}
```
While the same record in HuggingFace `datasets` looks as follows:
```json
{
"correct-response": null,
"correct-response-suggestion": null,
"correct-response-suggestion-metadata": {
"agent": null,
"score": null,
"type": null
},
"external_id": null,
"metadata": null,
"preference": null,
"preference-suggestion": {
"rank": [
1,
2
],
"value": [
"response-2",
"response-1"
]
},
"preference-suggestion-metadata": {
"agent": null,
"score": null,
"type": null
},
"request": "I tried to make a transfer but it failed",
"response-1": " Thank you for choosing the Bank of America for your financial needs! I\u0027m so sorry to hear that your transfer failed. Can you please provide me with more details about the transfer, such as the amount and the account it was being transferred from/to? This will help me better understand the issue and assist you in resolving it as quickly as possible. Additionally, have you tried checking your account balance or recent transactions to see if the transfer has been processed? If you have any further questions or concerns, please don\u0027t hesitate to ask. Thank you for trusting the Bank of America with your financial matters!",
"response-2": " Hello there! Thank you for choosing the Bank of America for your financial needs. I\u0027m so sorry to hear that your transfer failed. Can you please provide me with more details about the transfer, such as the amount and the account it was being transferred from/to? Additionally, do you have any error messages or confirmations that you received after attempting the transfer? This information will help me better understand the issue and assist you in resolving it as quickly as possible. Thank you for trusting the Bank of America with your financial matters."
}
```
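To make the `preference-suggestion` structure concrete, the sketch below orders the response fields of a record by their suggested ranks. This is an illustrative sketch, not part of the card: the record is a trimmed toy version of the example above, and `ranked_responses` is a hypothetical helper name.

```python
# Illustrative sketch: order the responses of a `datasets`-format record
# by its "preference-suggestion" ranking (field names follow the example
# record above; the record itself is a trimmed toy version).
record = {
    "preference-suggestion": {
        "rank": [1, 2],
        "value": ["response-2", "response-1"],
    },
    "response-1": "first assistant reply",
    "response-2": "second assistant reply",
}

def ranked_responses(rec):
    """Return the response texts ordered from most to least preferred."""
    suggestion = rec["preference-suggestion"]
    # Pair each response name with its rank, then sort by rank ascending.
    ordered = sorted(zip(suggestion["rank"], suggestion["value"]))
    return [rec[name] for _, name in ordered]

best_first = ranked_responses(record)
print(best_first[0])  # "second assistant reply" (response-2 has rank 1)
```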
### Data Fields
Among the dataset fields, we differentiate between the following:
* **Fields:** These are the dataset records themselves; for the moment, only text fields are supported. These are the fields the annotators will see when providing responses to the questions.
* **request** is of type `TextField`.
* **response-1** is of type `TextField`.
* **response-2** is of type `TextField`.
* **Questions:** These are the questions that will be asked to the annotators. They can be of different types, such as `RatingQuestion`, `TextQuestion`, `LabelQuestion`, `MultiLabelQuestion`, and `RankingQuestion`.
* **preference** is of type `RankingQuestion` with the following allowed values ['response-1', 'response-2'], and description "1 = most accurate and helpful, 2 = worst. Ties are allowed.".
* (optional) **correct-response** is of type `TextQuestion`.
* **✨ NEW** **Suggestions:** As of Argilla 1.13.0, the suggestions have been included to provide the annotators with suggestions to ease or assist during the annotation process. Suggestions are linked to the existing questions, are always optional, and contain not just the suggestion itself, but also the metadata linked to it, if applicable.
* (optional) **preference-suggestion** is of type `ranking` with the following allowed values ['response-1', 'response-2'].
* (optional) **correct-response-suggestion** is of type `text`.
Additionally, we also have one more field which is optional and is the following:
* **external_id:** This is an optional field that can be used to provide an external ID for the dataset record. This can be useful if you want to link the dataset record to an external resource, such as a database or a file.
### Data Splits
The dataset contains a single split, which is `train`.
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation guidelines
Please, read the question carefully and try to answer it as accurately as possible.
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] | [
-0.44572335481643677,
-1.030228853225708,
0.29868805408477783,
0.5382071137428284,
-0.24355800449848175,
0.028559960424900055,
0.21203510463237762,
-0.4916195571422577,
0.5267009735107422,
0.789473295211792,
-0.45436692237854004,
-0.6107293963432312,
-0.8126723170280457,
0.1605113446712494... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
gaodrew/roco-65k-256px | gaodrew | 2023-08-05T12:07:37Z | 149 | 0 | null | [
"region:us"
] | 2023-08-05T12:07:37Z | 2023-08-05T11:30:11.000Z | 2023-08-05T11:30:11 | ---
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 675508431.156
num_examples: 65418
download_size: 651136006
dataset_size: 675508431.156
---
# Dataset Card for "roco-65k-256px"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.44690772891044617,
0.019677357748150826,
0.03964144363999367,
0.20556597411632538,
-0.42791447043418884,
-0.05064154043793678,
0.05412302911281586,
-0.3134852945804596,
0.7790176272392273,
0.5813185572624207,
-0.7918979525566101,
-0.8407235145568848,
-0.5956907272338867,
-0.243798971176... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
awhall/aita_21-11_22-10 | awhall | 2023-11-10T05:28:52Z | 149 | 0 | null | [
"license:mit",
"region:us"
] | 2023-11-10T05:28:52Z | 2023-11-10T03:48:37.000Z | 2023-11-10T03:48:37 | ---
license: mit
---
| [
-0.1285335123538971,
-0.1861683875322342,
0.6529128551483154,
0.49436232447624207,
-0.19319400191307068,
0.23607441782951355,
0.36072009801864624,
0.05056373029947281,
0.5793656706809998,
0.7400146722793579,
-0.650810182094574,
-0.23784008622169495,
-0.7102247476577759,
-0.0478255338966846... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
edfews/szdfcszdf | edfews | 2021-04-03T12:14:28Z | 148 | 0 | null | [
"region:us"
] | 2021-04-03T12:14:28Z | 2022-03-02T23:29:22.000Z | 2022-03-02T23:29:22 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622263669967651,
0.43461522459983826,
-0.52829909324646,
0.7012971639633179,
0.7915719747543335,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104475975036621,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
edge2992/rri_short | edge2992 | 2021-12-10T16:01:26Z | 148 | 0 | null | [
"region:us"
] | 2021-12-10T16:01:26Z | 2022-03-02T23:29:22.000Z | 2022-03-02T23:29:22 | Entry not found | [
-0.3227649927139282,
-0.225684255361557,
0.862226128578186,
0.43461498618125916,
-0.5282987952232361,
0.7012963891029358,
0.7915717363357544,
0.07618629932403564,
0.7746025919914246,
0.2563219666481018,
-0.7852816581726074,
-0.2257382869720459,
-0.9104480743408203,
0.5715669393539429,
-0... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
frtna/test | frtna | 2022-01-04T05:09:17Z | 148 | 0 | null | [
"region:us"
] | 2022-01-04T05:09:17Z | 2022-03-02T23:29:22.000Z | 2022-03-02T23:29:22 | Entry not found | [
-0.3227649927139282,
-0.225684255361557,
0.862226128578186,
0.43461498618125916,
-0.5282987952232361,
0.7012963891029358,
0.7915717363357544,
0.07618629932403564,
0.7746025919914246,
0.2563219666481018,
-0.7852816581726074,
-0.2257382869720459,
-0.9104480743408203,
0.5715669393539429,
-0... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
frtna/test2 | frtna | 2022-01-04T05:23:40Z | 148 | 0 | null | [
"region:us"
] | 2022-01-04T05:23:40Z | 2022-03-02T23:29:22.000Z | 2022-03-02T23:29:22 | Entry not found | [
-0.3227649927139282,
-0.225684255361557,
0.862226128578186,
0.43461498618125916,
-0.5282987952232361,
0.7012963891029358,
0.7915717363357544,
0.07618629932403564,
0.7746025919914246,
0.2563219666481018,
-0.7852816581726074,
-0.2257382869720459,
-0.9104480743408203,
0.5715669393539429,
-0... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
gagan3012/vizwiz | gagan3012 | 2022-02-15T20:45:30Z | 148 | 0 | null | [
"license:apache-2.0",
"region:us"
] | 2022-02-15T20:45:30Z | 2022-03-02T23:29:22.000Z | 2022-03-02T23:29:22 | ---
license: apache-2.0
---
| [
-0.12853367626667023,
-0.18616794049739838,
0.6529126763343811,
0.4943627417087555,
-0.19319313764572144,
0.23607443273067474,
0.36071979999542236,
0.05056338757276535,
0.5793654322624207,
0.7400138974189758,
-0.6508103013038635,
-0.23783987760543823,
-0.710224986076355,
-0.047825977206230... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
geekydevu/mlquestions | geekydevu | 2021-11-11T08:11:10Z | 148 | 0 | null | [
"region:us"
] | 2021-11-11T08:11:10Z | 2022-03-02T23:29:22.000Z | 2022-03-02T23:29:22 | Entry not found | [
-0.3227649927139282,
-0.225684255361557,
0.862226128578186,
0.43461498618125916,
-0.5282987952232361,
0.7012963891029358,
0.7915717363357544,
0.07618629932403564,
0.7746025919914246,
0.2563219666481018,
-0.7852816581726074,
-0.2257382869720459,
-0.9104480743408203,
0.5715669393539429,
-0... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
habu24/fdz | habu24 | 2021-09-10T14:47:37Z | 148 | 0 | null | [
"region:us"
] | 2021-09-10T14:47:37Z | 2022-03-02T23:29:22.000Z | 2022-03-02T23:29:22 | Entry not found | [
-0.3227649927139282,
-0.225684255361557,
0.862226128578186,
0.43461498618125916,
-0.5282987952232361,
0.7012963891029358,
0.7915717363357544,
0.07618629932403564,
0.7746025919914246,
0.2563219666481018,
-0.7852816581726074,
-0.2257382869720459,
-0.9104480743408203,
0.5715669393539429,
-0... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
herbievore/test | herbievore | 2021-11-21T14:50:05Z | 148 | 0 | null | [
"region:us"
] | 2021-11-21T14:50:05Z | 2022-03-02T23:29:22.000Z | 2022-03-02T23:29:22 | Entry not found | [
-0.3227649927139282,
-0.225684255361557,
0.862226128578186,
0.43461498618125916,
-0.5282987952232361,
0.7012963891029358,
0.7915717363357544,
0.07618629932403564,
0.7746025919914246,
0.2563219666481018,
-0.7852816581726074,
-0.2257382869720459,
-0.9104480743408203,
0.5715669393539429,
-0... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
hfface/poopi | hfface | 2021-03-20T20:20:31Z | 148 | 0 | null | [
"region:us"
] | 2021-03-20T20:20:31Z | 2022-03-02T23:29:22.000Z | 2022-03-02T23:29:22 | Entry not found | [
-0.3227649927139282,
-0.225684255361557,
0.862226128578186,
0.43461498618125916,
-0.5282987952232361,
0.7012963891029358,
0.7915717363357544,
0.07618629932403564,
0.7746025919914246,
0.2563219666481018,
-0.7852816581726074,
-0.2257382869720459,
-0.9104480743408203,
0.5715669393539429,
-0... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
honghungle/dataset | honghungle | 2021-11-23T08:13:10Z | 148 | 0 | null | [
"region:us"
] | 2021-11-23T08:13:10Z | 2022-03-02T23:29:22.000Z | 2022-03-02T23:29:22 | Entry not found | [
-0.3227649927139282,
-0.225684255361557,
0.862226128578186,
0.43461498618125916,
-0.5282987952232361,
0.7012963891029358,
0.7915717363357544,
0.07618629932403564,
0.7746025919914246,
0.2563219666481018,
-0.7852816581726074,
-0.2257382869720459,
-0.9104480743408203,
0.5715669393539429,
-0... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
huyongquan/d2 | huyongquan | 2021-10-19T11:37:27Z | 148 | 0 | null | [
"region:us"
] | 2021-10-19T11:37:27Z | 2022-03-02T23:29:22.000Z | 2022-03-02T23:29:22 | d2 | [
0.12379376590251923,
-0.5159492492675781,
0.6918704509735107,
0.40814709663391113,
-0.04033201187849045,
0.07225412130355835,
0.7772648334503174,
-0.1529453694820404,
0.1926647424697876,
0.8667584657669067,
-1.132832646369934,
-0.22621433436870575,
-1.2666165828704834,
0.2635475695133209,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
indonesian-nlp/id_personachat | indonesian-nlp | 2021-09-19T05:57:40Z | 148 | 2 | null | [
"region:us"
] | 2021-09-19T05:57:40Z | 2022-03-02T23:29:22.000Z | 2022-03-02T23:29:22 | Entry not found | [
-0.3227649927139282,
-0.225684255361557,
0.862226128578186,
0.43461498618125916,
-0.5282987952232361,
0.7012963891029358,
0.7915717363357544,
0.07618629932403564,
0.7746025919914246,
0.2563219666481018,
-0.7852816581726074,
-0.2257382869720459,
-0.9104480743408203,
0.5715669393539429,
-0... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
jaimin/wav2vec2-large-xlsr-gujarati-demo | jaimin | 2021-03-24T03:41:24Z | 148 | 0 | null | [
"region:us"
] | 2021-03-24T03:41:24Z | 2022-03-02T23:29:22.000Z | 2022-03-02T23:29:22 | Entry not found | [
-0.3227649927139282,
-0.225684255361557,
0.862226128578186,
0.43461498618125916,
-0.5282987952232361,
0.7012963891029358,
0.7915717363357544,
0.07618629932403564,
0.7746025919914246,
0.2563219666481018,
-0.7852816581726074,
-0.2257382869720459,
-0.9104480743408203,
0.5715669393539429,
-0... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
jamescalam/climate-fever-similarity | jamescalam | 2021-12-18T09:24:24Z | 148 | 0 | null | [
"region:us"
] | 2021-12-18T09:24:24Z | 2022-03-02T23:29:22.000Z | 2022-03-02T23:29:22 | Entry not found | [
-0.32276472449302673,
-0.22568407654762268,
0.8622258901596069,
0.4346148371696472,
-0.5282984972000122,
0.7012965679168701,
0.7915717363357544,
0.07618629932403564,
0.7746022939682007,
0.2563222646713257,
-0.785281777381897,
-0.22573848068714142,
-0.9104482531547546,
0.5715669393539429,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
SetFit/amazon_reviews_multi_ja | SetFit | 2022-03-23T15:40:06Z | 148 | 2 | null | [
"region:us"
] | 2022-03-23T15:40:06Z | 2022-03-13T02:46:28.000Z | 2022-03-13T02:46:28 | #amazon reviews multi japanese
This dataset is a port of the official [amazon_reviews_multi dataset](https://huggingface.co/datasets/amazon_reviews_multi) on the Hub. It contains only the Japanese-language version and has been reduced to just 3 columns (plus a 4th, "label_text") that are relevant to the SetFit task. | [
-0.8356197476387024,
-0.49609196186065674,
0.030084650963544846,
0.6328415870666504,
-0.37791064381599426,
0.08074486255645752,
0.00040069568785838783,
-0.5775901675224304,
0.712975263595581,
1.0753153562545776,
-1.1052712202072144,
-0.40902066230773926,
-0.16145940124988556,
0.40222248435... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
GATE-engine/mini_imagenet | GATE-engine | 2023-06-06T11:44:26Z | 148 | 1 | null | [
"region:us"
] | 2023-06-06T11:44:26Z | 2023-06-05T10:59:59.000Z | 2023-06-05T10:59:59 | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype: int64
splits:
- name: train
num_bytes: 2533332667.0
num_examples: 38400
- name: validation
num_bytes: 623452894.0
num_examples: 9600
- name: test
num_bytes: 781497663.0
num_examples: 12000
download_size: 3938112512
dataset_size: 3938283224.0
---
# Dataset Card for "mini_imagenet"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.8033517599105835,
-0.22306756675243378,
0.06610879302024841,
0.0410211905837059,
-0.3355262577533722,
-0.2095605432987213,
0.40586960315704346,
-0.17017284035682678,
1.0622328519821167,
0.4252772629261017,
-0.8024371862411499,
-0.5849301218986511,
-0.6436801552772522,
-0.303883165121078... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
albertvillanova/dummy_libri2mix | albertvillanova | 2021-07-28T09:44:02Z | 147 | 0 | null | [
"region:us"
] | 2021-07-28T09:44:02Z | 2022-03-02T23:29:22.000Z | 2022-03-02T23:29:22 | Entry not found | [
-0.32276472449302673,
-0.22568407654762268,
0.8622258901596069,
0.4346148371696472,
-0.5282984972000122,
0.7012965679168701,
0.7915717363357544,
0.07618629932403564,
0.7746022939682007,
0.2563222646713257,
-0.785281777381897,
-0.22573848068714142,
-0.9104482531547546,
0.5715669393539429,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
kudo-research/mustc-en-es-text-only | kudo-research | 2022-10-22T08:40:43Z | 147 | 0 | null | [
"annotations_creators:other",
"language_creators:other",
"multilinguality:translation",
"size_categories:unknown",
"language:en",
"language:es",
"license:cc-by-nc-nd-4.0",
"region:us"
] | 2022-10-22T08:40:43Z | 2022-03-02T23:29:22.000Z | 2022-03-02T23:29:22 | ---
annotations_creators:
- other
language_creators:
- other
language:
- en
- es
language_bcp47:
- en-US
- es-ES
license:
- cc-by-nc-nd-4.0
multilinguality:
- translation
pretty_name: must-c_en-es_text-only
size_categories:
- unknown
source_datasets: []
task_categories:
- conditional-text-generation
task_ids:
- machine-translation
---
# Dataset Card for kudo-research/mustc-en-es-text-only
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
- [Who are the source language producers?](#who-are-the-source-language-producers)
- [Annotations](#annotations)
- [Annotation process](#annotation-process)
- [Who are the annotators?](#who-are-the-annotators)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://ict.fbk.eu/must-c-release-v1-2/](https://ict.fbk.eu/must-c-release-v1-2/)
- **Repository:** n/a
- **Paper:** [MuST-C: A multilingual corpus for end-to-end speech translation](https://www.sciencedirect.com/science/article/abs/pii/S0885230820300887)
- **Leaderboard:** n/a
- **Point of Contact:** Roldano Cattoni <cattoni@fbk.eu>; Marco Turchi <turchi@fbk.eu>
### Dataset Summary
This dataset is a text-only selection (English-Spanish) from the MuST-C corpus.
MuST-C is a multilingual speech translation corpus whose size and quality will facilitate the training of end-to-end systems for SLT from English into 14 languages (Arabic, Chinese, Czech, Dutch, French, German, Italian, Persian, Portuguese, Romanian, Russian, Spanish, Turkish and Vietnamese).
For each target language, MuST-C comprises several hundred hours of audio recordings from English TED Talks, which are automatically aligned at the sentence level with their manual transcriptions and translations.
### Supported Tasks and Leaderboards
- `machine-translation`: The dataset can be used to train a model for machine-translation.
[More Information Needed]
### Languages
- en-US
- es-ES
## Dataset Structure
### Data Instances
Dataset example:
```
{
"translation": {
"en": "I'll tell you one quick story to illustrate what that's been like for me.",
"es": "Les diré una rápida historia para ilustrar lo que ha sido para mí."
}
}
```
### Data Fields
The fields are:
- `translation`: an object containing two items, constructed as key-value pairs:
- language code (key)
- text (value)
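As a small illustration (a sketch only; the field layout follows the example instance shown under Data Instances, and the helper name `to_pairs` is hypothetical), the `translation` objects can be flattened into parallel sentence pairs for machine-translation training:

```python
# Sketch: flatten MuST-C-style translation objects into (source, target)
# pairs. The example record below mirrors the instance shown above.
examples = [
    {
        "translation": {
            "en": "I'll tell you one quick story to illustrate what that's been like for me.",
            "es": "Les diré una rápida historia para ilustrar lo que ha sido para mí.",
        }
    }
]

def to_pairs(examples, src="en", tgt="es"):
    """Turn a list of translation objects into (source, target) tuples."""
    return [(ex["translation"][src], ex["translation"][tgt]) for ex in examples]

pairs = to_pairs(examples)
```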
### Data Splits
More Information Needed...
| | Train | Valid | Test |
|-------------------------|---------|-------|------|
| Input Sentences | 265,625 | 1316 | 2502 |
| Average Sentence Length | n/a | n/a | n/a |
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
TED Talks
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
[More Information Needed]
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
FBK - Fondazione Bruno Kessler, Trento, Italy
- Roldano Cattoni, Mattia Antonino Di Gangi, Luisa Bentivogli, Matteo Negri, Marco Turchi
### Licensing Information
- TED talks are copyrighted by TED Conference LLC and licensed under a
Creative Commons Attribution-NonCommercial-NoDerivs 4.0
(cfr. https://www.ted.com/about/our-organization/our-policies-terms/ted-talks-usage-policy)
- the MuST-C corpus is released under the same Creative Commons
Attribution-NonCommercial-NoDerivs 4.0 License.
### Citation Information
Bibtex reference:
```
@article{CATTONI2021101155,
title = {MuST-C: A multilingual corpus for end-to-end speech translation},
journal = {Computer Speech & Language},
volume = {66},
pages = {101155},
year = {2021},
issn = {0885-2308},
doi = {https://doi.org/10.1016/j.csl.2020.101155},
url = {https://www.sciencedirect.com/science/article/pii/S0885230820300887},
author = {Roldano Cattoni and Mattia Antonino {Di Gangi} and Luisa Bentivogli and Matteo Negri and Marco Turchi},
keywords = {Spoken language translation, Multilingual corpus},
abstract = {End-to-end spoken language translation (SLT) has recently gained popularity thanks to the advancement of sequence to sequence learning in its two parent tasks: automatic speech recognition (ASR) and machine translation (MT). However, research in the field has to confront with the scarcity of publicly available corpora to train data-hungry neural networks. Indeed, while traditional cascade solutions can build on sizable ASR and MT training data for a variety of languages, the available SLT corpora suitable for end-to-end training are few, typically small and of limited language coverage. We contribute to fill this gap by presenting MuST-C, a large and freely available Multilingual Speech Translation Corpus built from English TED Talks. Its unique features include: i) language coverage and diversity (from English into 14 languages from different families), ii) size (at least 237 hours of transcribed recordings per language, 430 on average), iii) variety of topics and speakers, and iv) data quality. Besides describing the corpus creation methodology and discussing the outcomes of empirical and manual quality evaluations, we present baseline results computed with strong systems on each language direction covered by MuST-C.}
}
```
[DOI available here](https://doi.org/10.1016/j.csl.2020.101155)
### Contributions
Thanks to [@dblandan](https://github.com/dblandan) for adding this dataset.
| [
-0.3604227602481842,
-0.634975254535675,
0.24160028994083405,
0.38594257831573486,
-0.3342880308628082,
0.11849940568208694,
-0.6317835450172424,
-0.38034120202064514,
0.3944561779499054,
0.4745478928089142,
-0.7102238535881042,
-0.9091489315032959,
-0.6495550274848938,
0.33809953927993774... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
ziq/depression_advice | ziq | 2022-06-07T02:48:24Z | 147 | 1 | null | [
"license:mit",
"region:us"
] | 2022-06-07T02:48:24Z | 2022-06-07T02:29:03.000Z | 2022-06-07T02:29:03 | ---
license: mit
---
| [
-0.1285335123538971,
-0.1861683875322342,
0.6529128551483154,
0.49436232447624207,
-0.19319400191307068,
0.23607441782951355,
0.36072009801864624,
0.05056373029947281,
0.5793656706809998,
0.7400146722793579,
-0.650810182094574,
-0.23784008622169495,
-0.7102247476577759,
-0.0478255338966846... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
pkavumba/balanced-copa | pkavumba | 2022-10-03T00:39:01Z | 147 | 0 | null | [
"task_categories:question-answering",
"task_ids:multiple-choice-qa",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:unknown",
"source_datasets:extended|copa",
"language:en",
"license:cc-by-4.0",
"region:us"
] | 2022-10-03T00:39:01Z | 2022-10-03T00:33:09.000Z | 2022-10-03T00:33:09 | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- en
license:
- cc-by-4.0
multilinguality:
- monolingual
pretty_name: BCOPA
size_categories:
- unknown
source_datasets:
- extended|copa
task_categories:
- question-answering
task_ids:
- multiple-choice-qa
---
# Dataset Card for "Balanced COPA"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://balanced-copa.github.io/](https://balanced-copa.github.io/)
- **Repository:** [Balanced COPA](https://github.com/Balanced-COPA/Balanced-COPA)
- **Paper:** [When Choosing Plausible Alternatives, Clever Hans can be Clever](https://aclanthology.org/D19-6004/)
- **Point of Contact:** [@pkavumba](https://github.com/pkavumba)
### Dataset Summary
Bala-COPA: An English language Dataset for Training Robust Commonsense Causal Reasoning Models
The Balanced Choice of Plausible Alternatives dataset is a benchmark for training machine learning models that are robust to superficial cues/spurious correlations. The dataset extends the COPA dataset (Roemmele et al. 2011) with mirrored instances that mitigate token-level superficial cues in the original COPA answers. The superficial cues in the original COPA dataset result from an unbalanced token distribution between the correct and the incorrect answer choices, i.e., some tokens appear more often in the correct choices than in the incorrect ones. Balanced COPA equalizes the token distribution by adding mirrored instances with identical answer choices but different labels.
The details about the creation of Balanced COPA and the implementation of the baselines are available in the paper.
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
- English
## Dataset Structure
### Data Instances
An example of 'validation' looks as follows.
```
{
"id": 1,
"premise": "My body cast a shadow over the grass.",
"choice1": "The sun was rising.",
"choice2": "The grass was cut.",
"question": "cause",
"label": 1,
"mirrored": false,
}
{
"id": 1001,
"premise": "The garden looked well-groomed.",
"choice1": "The sun was rising.",
"choice2": "The grass was cut.",
"question": "cause",
"label": 1,
"mirrored": true,
}
```
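The balancing idea described in the summary can be illustrated with a small, self-contained sketch (the pair below follows the mirrored-instance pattern with opposite labels; it is illustrative, not drawn from the actual data files):

```python
from collections import Counter

# A hypothetical original/mirrored pair in the Balanced COPA scheme:
# identical answer choices but opposite labels, so every token appears
# equally often in correct and incorrect answers.
pair = [
    {"choice1": "The sun was rising.", "choice2": "The grass was cut.", "label": 0},
    {"choice1": "The sun was rising.", "choice2": "The grass was cut.", "label": 1},
]

def token_counts(instances):
    """Count tokens appearing in correct vs. incorrect answer choices."""
    correct, incorrect = Counter(), Counter()
    for inst in instances:
        choices = [inst["choice1"], inst["choice2"]]
        correct.update(choices[inst["label"]].lower().split())
        incorrect.update(choices[1 - inst["label"]].lower().split())
    return correct, incorrect

correct, incorrect = token_counts(pair)
```

After adding the mirrored instance, no token is more frequent in correct answers than in incorrect ones, which is exactly the property that removes the single-token cues exploited by models like BERT.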
### Data Fields
The data fields are the same among all splits.
#### en
- `premise`: a `string` feature.
- `choice1`: a `string` feature.
- `choice2`: a `string` feature.
- `question`: a `string` feature.
- `label`: a `int32` feature.
- `id`: a `int32` feature.
- `mirrored`: a `bool` feature.
### Data Splits
| validation | test |
| ---------: | ---: |
| 1,000 | 500 |
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[Creative Commons Attribution 4.0 International (CC BY 4.0)](https://creativecommons.org/licenses/by/4.0/).
### Citation Information
```
@inproceedings{kavumba-etal-2019-choosing,
title = "When Choosing Plausible Alternatives, Clever Hans can be Clever",
author = "Kavumba, Pride and
Inoue, Naoya and
Heinzerling, Benjamin and
Singh, Keshav and
Reisert, Paul and
Inui, Kentaro",
booktitle = "Proceedings of the First Workshop on Commonsense Inference in Natural Language Processing",
month = nov,
year = "2019",
address = "Hong Kong, China",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/D19-6004",
doi = "10.18653/v1/D19-6004",
pages = "33--42",
abstract = "Pretrained language models, such as BERT and RoBERTa, have shown large improvements in the commonsense reasoning benchmark COPA. However, recent work found that many improvements in benchmarks of natural language understanding are not due to models learning the task, but due to their increasing ability to exploit superficial cues, such as tokens that occur more often in the correct answer than the wrong one. Are BERT{'}s and RoBERTa{'}s good performance on COPA also caused by this? We find superficial cues in COPA, as well as evidence that BERT exploits these cues.To remedy this problem, we introduce Balanced COPA, an extension of COPA that does not suffer from easy-to-exploit single token cues. We analyze BERT{'}s and RoBERTa{'}s performance on original and Balanced COPA, finding that BERT relies on superficial cues when they are present, but still achieves comparable performance once they are made ineffective, suggesting that BERT learns the task to a certain degree when forced to. In contrast, RoBERTa does not appear to rely on superficial cues.",
}
@inproceedings{roemmele2011choice,
title={Choice of plausible alternatives: An evaluation of commonsense causal reasoning},
author={Roemmele, Melissa and Bejan, Cosmin Adrian and Gordon, Andrew S},
booktitle={2011 AAAI Spring Symposium Series},
year={2011},
url={https://people.ict.usc.edu/~gordon/publications/AAAI-SPRING11A.PDF},
}
```
### Contributions
Thanks to [@pkavumba](https://github.com/pkavumba) for adding this dataset.
| [
-0.5213557481765747,
-0.7425644993782043,
0.2243429571390152,
0.40510907769203186,
-0.3031456470489502,
-0.06305272877216339,
-0.30259189009666443,
-0.5609720349311829,
0.4087109863758087,
0.4020010828971863,
-0.6925023198127747,
-0.7082513570785522,
-0.521067202091217,
0.11399725824594498... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
ashraq/tmdb-people-image | ashraq | 2023-04-21T20:02:31Z | 147 | 3 | null | [
"region:us"
] | 2023-04-21T20:02:31Z | 2022-12-02T17:34:52.000Z | 2022-12-02T17:34:52 | ---
dataset_info:
features:
- name: adult
dtype: bool
- name: also_known_as
dtype: string
- name: biography
dtype: string
- name: birthday
dtype: string
- name: deathday
dtype: string
- name: gender
dtype: int64
- name: homepage
dtype: string
- name: id
dtype: int64
- name: imdb_id
dtype: string
- name: known_for_department
dtype: string
- name: name
dtype: string
- name: place_of_birth
dtype: string
- name: popularity
dtype: float64
- name: profile_path
dtype: string
- name: image
dtype: image
splits:
- name: train
num_bytes: 3749610460.6819267
num_examples: 116403
download_size: 3733145768
dataset_size: 3749610460.6819267
---
Data was obtained from [TMDB API](https://developers.themoviedb.org/3) | [
-0.13072019815444946,
-0.7450228929519653,
1.0956419706344604,
0.30528539419174194,
-0.28212252259254456,
0.5292971730232239,
0.6392936706542969,
-0.12810157239437103,
0.5364494919776917,
0.7676789164543152,
-0.82757568359375,
-1.1869150400161743,
-0.2605063021183014,
-0.1777734011411667,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
DavidVivancos/MindBigData2022_MNIST_MU | DavidVivancos | 2022-12-29T22:50:39Z | 147 | 0 | null | [
"license:odbl",
"region:us"
] | 2022-12-29T22:50:39Z | 2022-12-29T22:49:09.000Z | 2022-12-29T22:49:09 | ---
license: odbl
---
| [
-0.1285335123538971,
-0.1861683875322342,
0.6529128551483154,
0.49436232447624207,
-0.19319400191307068,
0.23607441782951355,
0.36072009801864624,
0.05056373029947281,
0.5793656706809998,
0.7400146722793579,
-0.650810182094574,
-0.23784008622169495,
-0.7102247476577759,
-0.0478255338966846... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
cambridgeltl/vsr_random | cambridgeltl | 2023-03-22T17:28:37Z | 147 | 1 | null | [
"task_categories:text-classification",
"task_categories:question-answering",
"size_categories:10K<n<100K",
"language:en",
"license:cc-by-4.0",
"multimodality",
"vision-and-language",
"arxiv:2205.00363",
"region:us"
] | 2023-03-22T17:28:37Z | 2023-03-22T16:27:00.000Z | 2023-03-22T16:27:00 | ---
license: cc-by-4.0
task_categories:
- text-classification
- question-answering
language:
- en
tags:
- multimodality
- vision-and-language
pretty_name: VSR (random split)
size_categories:
- 10K<n<100K
---
# VSR: Visual Spatial Reasoning
This is the **random set** of **VSR**: *Visual Spatial Reasoning* (TACL 2023) [[paper]](https://arxiv.org/abs/2205.00363).
### Usage
```python
from datasets import load_dataset
data_files = {"train": "train.jsonl", "dev": "dev.jsonl", "test": "test.jsonl"}
dataset = load_dataset("cambridgeltl/vsr_random", data_files=data_files)
```
Note that the image files still need to be downloaded separately. See [`data/`](https://github.com/cambridgeltl/visual-spatial-reasoning/tree/master/data) for details.
Go to our [github repo](https://github.com/cambridgeltl/visual-spatial-reasoning) for more introductions.
### Citation
If you find VSR useful:
```bibtex
@article{Liu2022VisualSR,
title={Visual Spatial Reasoning},
author={Fangyu Liu and Guy Edward Toh Emerson and Nigel Collier},
journal={Transactions of the Association for Computational Linguistics},
year={2023},
}
``` | [
-0.4027846157550812,
-0.6635030508041382,
0.6168079972267151,
0.1406962275505066,
-0.25739675760269165,
-0.07703355699777603,
-0.1774817854166031,
-0.2984480857849121,
-0.012923816218972206,
0.3851014971733093,
-0.37087488174438477,
-0.6215378046035767,
-0.3699505925178528,
0.0765286237001... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
nanyy1025/ChangeMyView | nanyy1025 | 2023-11-08T23:46:39Z | 147 | 0 | null | [
"task_categories:text-classification",
"size_categories:1K<n<10K",
"language:en",
"license:mit",
"region:us"
] | 2023-11-08T23:46:39Z | 2023-11-07T07:32:58.000Z | 2023-11-07T07:32:58 | ---
dataset_info:
- config_name: op_task
features:
- name: op_title
dtype: string
- name: op_text
dtype: string
- name: label
dtype: bool
splits:
- name: train
num_bytes: 16468331
num_examples: 10743
- name: test
num_bytes: 2676807
num_examples: 1529
download_size: 11928858
dataset_size: 19145138
- config_name: pair_task
features:
- name: op_title
dtype: string
- name: op_text
dtype: string
- name: label
dtype: bool
- name: arguments
sequence: string
splits:
- name: train
num_bytes: 23867387
num_examples: 6912
- name: test
num_bytes: 5539208
num_examples: 1614
download_size: 11481110
dataset_size: 29406595
configs:
- config_name: op_task
data_files:
- split: train
path: op_task/train-*
- split: test
path: op_task/test-*
- config_name: pair_task
data_files:
- split: train
path: pair_task/train-*
- split: test
path: pair_task/test-*
license: mit
task_categories:
- text-classification
language:
- en
size_categories:
- 1K<n<10K
---
# Dataset Card for "ChangeMyView"
# Abstract
Changing someone's opinion is arguably one of the most important challenges of social interaction. The underlying process proves difficult to study: it is hard to know how someone's opinions are formed and whether and how someone's views shift. Fortunately, ChangeMyView, an active community on Reddit, provides a platform where users present their own opinions and reasoning, invite others to contest them, and acknowledge when the ensuing discussions change their original views. In this work, we study these interactions to understand the mechanisms behind persuasion.
We find that persuasive arguments are characterized by interesting patterns of interaction dynamics, such as participant entry-order and degree of back-and-forth exchange. Furthermore, by comparing similar counterarguments to the same opinion, we show that language factors play an essential role. In particular, the interplay between the language of the opinion holder and that of the counterargument provides highly predictive cues of persuasiveness. Finally, since even in this favorable setting people may not be persuaded, we investigate the problem of determining whether someone's opinion is susceptible to being changed at all. For this more difficult task, we show that stylistic choices in how the opinion is expressed carry predictive power.
# Note
The original dataset was downloaded from [this link](https://chenhaot.com/data/cmv/cmv.tar.bz2) and has been preprocessed. It includes two tasks and can be used to replicate the experiments in the paper "Winning Arguments: Interaction Dynamics and Persuasion Strategies in Good-faith Online Discussions".
# Webpage
For more information, visit the [ChangeMyView webpage](https://chenhaot.com/papers/changemyview.html).
# Citation
```
@inproceedings{tan+etal:16a,
author = {Chenhao Tan and Vlad Niculae and Cristian Danescu-Niculescu-Mizil and Lillian Lee},
title = {Winning Arguments: Interaction Dynamics and Persuasion Strategies in Good-faith Online Discussions},
year = {2016},
booktitle = {Proceedings of WWW}
}
```
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.5927771329879761,
-0.9027031660079956,
0.3729950189590454,
0.17778030037879944,
-0.3898361921310425,
-0.04844122380018234,
-0.22115913033485413,
-0.3159904479980469,
0.5136544704437256,
0.1495828926563263,
-0.6140698790550232,
-0.3994610905647278,
-0.6772686839103699,
0.0714953020215034... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
andreagasparini/librispeech_test_only | andreagasparini | 2022-07-06T17:26:04Z | 146 | 1 | null | [
"region:us"
] | 2022-07-06T17:26:04Z | 2022-07-06T17:13:36.000Z | 2022-07-06T17:13:36 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622264862060547,
0.43461528420448303,
-0.52829909324646,
0.7012971639633179,
0.7915720343589783,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104477167129517,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
kakaobrain/coyo-700m | kakaobrain | 2022-08-30T19:07:52Z | 146 | 81 | null | [
"task_categories:text-to-image",
"task_categories:image-to-text",
"task_categories:zero-shot-classification",
"task_ids:image-captioning",
"annotations_creators:no-annotation",
"language_creators:other",
"multilinguality:monolingual",
"size_categories:100M<n<1B",
"source_datasets:original",
"langu... | 2022-08-30T19:07:52Z | 2022-08-25T15:54:43.000Z | 2022-08-25T15:54:43 |
---
annotations_creators:
- no-annotation
language:
- en
language_creators:
- other
license:
- cc-by-4.0
multilinguality:
- monolingual
pretty_name: COYO-700M
size_categories:
- 100M<n<1B
source_datasets:
- original
tags:
- image-text pairs
task_categories:
- text-to-image
- image-to-text
- zero-shot-classification
task_ids:
- image-captioning
---
# Dataset Card for COYO-700M
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [COYO homepage](https://kakaobrain.com/contents/?contentId=7eca73e3-3089-43cb-b701-332e8a1743fd)
- **Repository:** [COYO repository](https://github.com/kakaobrain/coyo-dataset)
- **Paper:**
- **Leaderboard:**
- **Point of Contact:** [COYO email](coyo@kakaobrain.com)
### Dataset Summary
**COYO-700M** is a large-scale dataset that contains **747M image-text pairs** as well as many other **meta-attributes** that increase its usability for training various models. Our dataset follows a strategy similar to previous vision-and-language datasets, collecting many informative pairs of alt-text and their associated images from HTML documents. We expect COYO to be used to train popular large-scale foundation models,
complementary to other similar datasets. For more details on the data acquisition process, please refer to the technical paper to be released later.
### Supported Tasks and Leaderboards
We empirically validated the quality of the COYO dataset by re-implementing popular models such as [ALIGN](https://arxiv.org/abs/2102.05918), [unCLIP](https://arxiv.org/abs/2204.06125), and [ViT](https://arxiv.org/abs/2010.11929).
We trained these models on COYO-700M or its subsets from scratch, achieving performance competitive with the reported numbers or generated samples in the original papers.
Our pre-trained models and training codes will be released soon along with the technical paper.
### Languages
The texts in the COYO-700M dataset are in English.
## Dataset Structure
### Data Instances
Each instance in COYO-700M represents single image-text pair information with meta-attributes:
```
{
'id': 841814333321,
'url': 'https://blog.dogsof.com/wp-content/uploads/2021/03/Image-from-iOS-5-e1614711641382.jpg',
'text': 'A Pomsky dog sitting and smiling in field of orange flowers',
'width': 1000,
'height': 988,
'image_phash': 'c9b6a7d8469c1959',
'text_length': 59,
'word_count': 11,
'num_tokens_bert': 13,
'num_tokens_gpt': 12,
'num_faces': 0,
'clip_similarity_vitb32': 0.4296875,
'clip_similarity_vitl14': 0.35205078125,
'nsfw_score_opennsfw2': 0.00031447410583496094,
'nsfw_score_gantman': 0.03298913687467575,
'watermark_score': 0.1014641746878624,
'aesthetic_score_laion_v2': 5.435476303100586
}
```
### Data Fields
| name | type | description |
|--------------------------|---------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| id | long | Unique 64-bit integer ID generated by [monotonically_increasing_id()](https://spark.apache.org/docs/3.1.3/api/python/reference/api/pyspark.sql.functions.monotonically_increasing_id.html) |
| url | string | The image URL extracted from the `src` attribute of the `<img>` tag |
| text | string | The text extracted from the `alt` attribute of the `<img>` tag |
| width | integer | The width of the image |
| height | integer | The height of the image |
| image_phash | string | The [perceptual hash(pHash)](http://www.phash.org/) of the image |
| text_length | integer | The length of the text |
| word_count | integer | The number of words separated by spaces. |
| num_tokens_bert | integer | The number of tokens using [BertTokenizer](https://huggingface.co/docs/transformers/model_doc/bert#transformers.BertTokenizer) |
| num_tokens_gpt | integer | The number of tokens using [GPT2TokenizerFast](https://huggingface.co/docs/transformers/model_doc/gpt2#transformers.GPT2TokenizerFast) |
| num_faces | integer | The number of faces in the image detected by [SCRFD](https://insightface.ai/scrfd) |
| clip_similarity_vitb32 | float | The cosine similarity between text and image(ViT-B/32) embeddings by [OpenAI CLIP](https://github.com/openai/CLIP) |
| clip_similarity_vitl14 | float | The cosine similarity between text and image(ViT-L/14) embeddings by [OpenAI CLIP](https://github.com/openai/CLIP) |
| nsfw_score_opennsfw2 | float | The NSFW score of the image by [OpenNSFW2](https://github.com/bhky/opennsfw2) |
| nsfw_score_gantman | float | The NSFW score of the image by [GantMan/NSFW](https://github.com/GantMan/nsfw_model) |
| watermark_score | float | The watermark probability of the image by our internal model |
| aesthetic_score_laion_v2 | float | The aesthetic score of the image by [LAION-Aesthetics-Predictor-V2](https://github.com/christophschuhmann/improved-aesthetic-predictor) |
### Data Splits
Data was not split, since the evaluation was expected to be performed on more widely used downstream task(s).
## Dataset Creation
### Curation Rationale
Similar to most vision-and-language datasets, our primary goal in the data creation process is to collect many pairs of alt-text and image sources in HTML documents crawled from the web. Therefore, we attempted to eliminate uninformative images or texts at minimal cost and to improve our dataset's usability by adding various meta-attributes. Users can use these meta-attributes to sample a subset from COYO-700M and use it to train the desired model. For instance, the *num_faces* attribute could be used to make a subset like *COYO-Faces* and develop a privacy-preserving generative model.
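As an illustration of this kind of metadata-driven subsetting, the sketch below keeps only pairs that pass a few thresholds. The attribute names match the data fields listed below, but the threshold values are assumptions chosen for illustration, not values recommended by the authors.

```python
# Illustrative sketch only: thresholds are assumptions, not official
# recommendations. Attribute names follow the COYO-700M meta-attributes.

def keep_sample(sample: dict) -> bool:
    """Return True if an image-text pair passes the illustrative filters."""
    return (
        sample["num_faces"] == 0                      # privacy: no detected faces
        and sample["clip_similarity_vitb32"] >= 0.3   # reasonable image-text alignment
        and sample["watermark_score"] < 0.5           # likely watermark-free
        and sample["nsfw_score_opennsfw2"] < 0.5      # safe-for-work
    )

example = {
    "num_faces": 0,
    "clip_similarity_vitb32": 0.4296875,
    "watermark_score": 0.1014641746878624,
    "nsfw_score_opennsfw2": 0.00031447410583496094,
}
```

With the Hugging Face `datasets` library, such a predicate could then be applied via `dataset.filter(keep_sample)`.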
### Source Data
#### Initial Data Collection and Normalization
We collected about 10 billion pairs of alt-text and image sources in HTML documents in [CommonCrawl](https://commoncrawl.org/) from Oct. 2020 to Aug. 2021, and eliminated uninformative pairs through image- and/or text-level filtering processes at minimal cost.
**Image Level**
* Included all image formats that [Pillow library](https://pillow.readthedocs.io/en/stable/handbook/image-file-formats.html) can decode. (JPEG, WEBP, PNG, BMP, ...)
* Removed images less than 5KB image size.
* Removed images with an aspect ratio greater than 3.0.
* Removed images with min(width, height) < 200.
* Removed images with a score of [OpenNSFW2](https://github.com/bhky/opennsfw2) or [GantMan/NSFW](https://github.com/GantMan/nsfw_model) higher than 0.5.
* Removed all duplicate images based on the image [pHash](http://www.phash.org/) value from external public datasets.
* ImageNet-1K/21K, Flickr-30K, MS-COCO, CC-3M, CC-12M
**Text Level**
* Collected only English text using [cld3](https://github.com/google/cld3).
* Replaced consecutive whitespace characters with a single whitespace and removed the whitespace before and after the sentence.
(e.g. `"\n \n Load image into Gallery viewer, valentine&#39;s day roses\n \n" → "Load image into Gallery viewer, valentine&#39;s day roses"`)
* Removed texts with a length of 5 or less.
* Removed texts that do not have a noun form.
* Removed texts with less than 3 words or more than 256 words and texts over 1000 in length.
* Removed texts appearing more than 10 times.
(e.g. `“thumbnail for”, “image for”, “picture of”`)
* Removed texts containing NSFW words collected from [profanity_filter](https://github.com/rominf/profanity-filter/blob/master/profanity_filter/data/en_profane_words.txt), [better_profanity](https://github.com/snguyenthanh/better_profanity/blob/master/better_profanity/profanity_wordlist.txt), and [google_twunter_lol](https://gist.github.com/ryanlewis/a37739d710ccdb4b406d).
**Image-Text Level**
* Removed duplicated samples based on (image_phash, text).
(Different text may exist for the same image URL.)
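As a rough sketch (not the authors' actual pipeline code), exact de-duplication on the (image_phash, text) key can be expressed as keeping the first occurrence of each pair:

```python
# Hedged sketch of exact-duplicate removal keyed on (image_phash, text);
# the first occurrence of each key is kept.

def dedup_pairs(samples: list[dict]) -> list[dict]:
    seen = set()
    kept = []
    for s in samples:
        key = (s["image_phash"], s["text"])
        if key not in seen:
            seen.add(key)
            kept.append(s)
    return kept
```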
#### Who are the source language producers?
[Common Crawl](https://commoncrawl.org/) is the data source for COYO-700M.
### Annotations
#### Annotation process
The dataset was built in a fully automated process that did not require human annotation.
#### Who are the annotators?
No human annotation
### Personal and Sensitive Information
#### Disclaimer & Content Warning
The COYO dataset is recommended to be used for research purposes.
Kakao Brain tried to construct a "Safe" dataset when building the COYO dataset. (See [Data Filtering](#source-data) Section) Kakao Brain is constantly making efforts to create more "Safe" datasets.
However, despite these efforts, this large-scale dataset was not hand-screened by humans, due to its very large size (over 700M pairs).
Keep in mind that the unscreened nature of the dataset means that the collected images can include content that is strongly discomforting and disturbing to humans.
The COYO dataset may contain some inappropriate data, and any problems resulting from such data are the full responsibility of the user.
Therefore, it is strongly recommended that this dataset be used only for research. Kakao Brain does not recommend using this dataset as-is, without special processing to remove inappropriate data, to create commercial products.
## Considerations for Using the Data
### Social Impact of Dataset
It will be described in a paper to be released soon.
### Discussion of Biases
It will be described in a paper to be released soon.
### Other Known Limitations
It will be described in a paper to be released soon.
## Additional Information
### Dataset Curators
COYO dataset was released as an open source in the hope that it will be helpful to many research institutes and startups for research purposes. We look forward to contacting us from various places who wish to cooperate with us.
[coyo@kakaobrain.com](mailto:coyo@kakaobrain.com)
### Licensing Information
#### License
The COYO dataset of Kakao Brain is licensed under [CC-BY-4.0 License](https://creativecommons.org/licenses/by/4.0/).
The full license can be found in the [LICENSE.cc-by-4.0 file](./coyo-700m/blob/main/LICENSE.cc-by-4.0).
The dataset includes “Image URL” and “Text” collected from various sites by analyzing Common Crawl data, an open data web crawling project.
The collected data (images and text) is subject to the license to which each content belongs.
#### Obligation to use
While Open Source may be free to use, that does not mean it is free of obligation.
To determine whether your intended use of the COYO dataset is suitable for the CC-BY-4.0 license, please consider the license guide.
If you violate the license, you may be subject to legal action such as the prohibition of use or claim for damages depending on the use.
### Citation Information
If you apply this dataset to any project or research, please cite our code:
```
@misc{kakaobrain2022coyo-700m,
title = {COYO-700M: Image-Text Pair Dataset},
author = {Minwoo Byeon, Beomhee Park, Haecheon Kim, Sungjun Lee, Woonhyuk Baek, Saehoon Kim},
year = {2022},
howpublished = {\url{https://github.com/kakaobrain/coyo-dataset}},
}
```
### Contributions
- Minwoo Byeon ([@mwbyeon](https://github.com/mwbyeon))
- Beomhee Park ([@beomheepark](https://github.com/beomheepark))
- Haecheon Kim ([@HaecheonKim](https://github.com/HaecheonKim))
- Sungjun Lee ([@justhungryman](https://github.com/justHungryMan))
- Woonhyuk Baek ([@wbaek](https://github.com/wbaek))
- Saehoon Kim ([@saehoonkim](https://github.com/saehoonkim))
- and Kakao Brain Large-Scale AI Studio
| [
-0.712491512298584,
-0.7373230457305908,
0.09025337547063828,
0.22189047932624817,
-0.4066471755504608,
-0.26203691959381104,
-0.18940864503383636,
-0.5139219760894775,
0.3315695822238922,
0.30633583664894104,
-0.6559172868728638,
-0.8412361741065979,
-0.5184744596481323,
0.108816631138324... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
bigbio/gad | bigbio | 2022-12-22T15:25:28Z | 146 | 1 | gad | [
"multilinguality:monolingual",
"language:en",
"license:cc-by-4.0",
"region:us"
] | 2022-12-22T15:25:28Z | 2022-09-26T03:36:32.000Z | 2022-09-26T03:36:32 | ---
language:
- en
bigbio_language:
- English
license: cc-by-4.0
multilinguality: monolingual
bigbio_license_shortname: CC_BY_4p0
pretty_name: GAD
homepage: https://geneticassociationdb.nih.gov/
bigbio_pubmed: true
bigbio_public: true
bigbio_tasks:
- TEXT_CLASSIFICATION
paperswithcode_id: gad
---
# Dataset Card for GAD
## Dataset Description
- **Homepage:** https://geneticassociationdb.nih.gov/
- **Pubmed:** True
- **Public:** True
- **Tasks:** TXTCLASS
A corpus identifying associations between genes and diseases by a semi-automatic
annotation procedure based on the Genetic Association Database.
## Note about homepage
The homepage for this dataset is no longer reachable, but the url is recorded here.
Data for this dataset was originally downloaded from a Google Drive
folder (the link used in the [BLURB benchmark data download script](https://microsoft.github.io/BLURB/submit.html)).
However, we host the data on the Hugging Face Hub for more reliable downloads and access.
## Citation Information
```
@article{Bravo2015,
doi = {10.1186/s12859-015-0472-9},
url = {https://doi.org/10.1186/s12859-015-0472-9},
year = {2015},
month = feb,
publisher = {Springer Science and Business Media {LLC}},
volume = {16},
number = {1},
author = {{\`{A}}lex Bravo and Janet Pi{\~{n}}ero and N{\'{u}}ria Queralt-Rosinach and Michael Rautschka and Laura I Furlong},
title = {Extraction of relations between genes and diseases from text and large-scale data analysis: implications for translational research},
journal = {{BMC} Bioinformatics}
}
```
| [
-0.4190123379230499,
-0.9583996534347534,
0.3841128349304199,
0.029283637180924416,
-0.17025059461593628,
0.20772112905979156,
-0.03843570873141289,
-0.6720916628837585,
0.6423609852790833,
0.1521596610546112,
-0.5024784803390503,
-0.9175227284431458,
-0.6587544083595276,
0.136713102459907... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
rahmanfadhil/squad_v2_id | rahmanfadhil | 2023-01-12T11:14:51Z | 146 | 0 | null | [
"region:us"
] | 2023-01-12T11:14:51Z | 2023-01-12T11:01:07.000Z | 2023-01-12T11:01:07 | ---
dataset_info:
features:
- name: id
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
struct:
- name: answer_start
sequence: int32
- name: text
sequence: string
splits:
- name: train
num_bytes: 121632833
num_examples: 130318
- name: validation
num_bytes: 12218827
num_examples: 11858
download_size: 0
dataset_size: 133851660
---
# Dataset Card for "squad_id"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.5371058583259583,
-0.2326512485742569,
0.1137322336435318,
0.40853628516197205,
-0.1724412590265274,
0.34089532494544983,
0.415700227022171,
-0.12936651706695557,
0.7773733139038086,
0.15774542093276978,
-1.3113659620285034,
-0.7777351140975952,
-0.41962388157844543,
-0.0435229912400245... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
martinsinnona/visdecode | martinsinnona | 2023-11-28T18:46:20Z | 146 | 0 | null | [
"region:us"
] | 2023-11-28T18:46:20Z | 2023-06-30T14:39:33.000Z | 2023-06-30T14:39:33 | ---
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 1362584.0
num_examples: 100
- name: test
num_bytes: 884677.0
num_examples: 59
download_size: 1817149
dataset_size: 2247261.0
---
# Dataset Card for "ploty"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.49726602435112,
-0.18321825563907623,
0.34270620346069336,
0.41451799869537354,
-0.15567201375961304,
0.06150895729660988,
0.4590156376361847,
-0.21887226402759552,
1.0951567888259888,
0.6065908074378967,
-0.6789281964302063,
-0.5858474969863892,
-0.7572375535964966,
-0.3683722615242004... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
amlan107/syn_0 | amlan107 | 2023-10-20T13:54:20Z | 146 | 0 | null | [
"region:us"
] | 2023-10-20T13:54:20Z | 2023-10-20T13:54:19.000Z | 2023-10-20T13:54:19 | ---
dataset_info:
features:
- name: bn
dtype: string
- name: ck
dtype: string
splits:
- name: train
num_bytes: 1794536.5235337194
num_examples: 12016
download_size: 839316
dataset_size: 1794536.5235337194
---
# Dataset Card for "syn_0"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.4741959571838379,
-0.2147265076637268,
0.3114033341407776,
0.07158491015434265,
-0.2612222731113434,
-0.06639168411493301,
0.16509190201759338,
-0.3859046995639801,
1.2532916069030762,
0.2838181257247925,
-1.0085963010787964,
-0.7623676061630249,
-0.4875742197036743,
-0.1034950241446495... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
siavava/ai-tech-articles | siavava | 2023-11-19T14:06:56Z | 146 | 1 | null | [
"task_categories:text-generation",
"task_categories:feature-extraction",
"language:en",
"license:mit",
"temporal series data",
"language data",
"doi:10.57967/hf/1324",
"region:us"
] | 2023-11-19T14:06:56Z | 2023-11-05T05:56:04.000Z | 2023-11-05T05:56:04 | ---
language:
- en
license: mit
task_categories:
- text-generation
- feature-extraction
pretty_name: AI/Technology Articles
tags:
- temporal series data
- language data
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: id
dtype: int64
- name: year
dtype: int64
- name: title
dtype: string
- name: url
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 180820047
num_examples: 17092
download_size: 81702923
dataset_size: 180820047
---
# AI/Tech Dataset
This dataset is a collection of AI/tech articles scraped from the web.
It's hosted on [HuggingFace Datasets](https://huggingface.co/datasets/siavava/ai-tech-articles), so it is easier to load in and work with.
## To load the dataset
### 1. Install [HuggingFace Datasets](https://huggingface.co/docs/datasets/installation.html)
```bash
pip install datasets
```
### 2. Load the dataset
```python
from datasets import load_dataset
dataset = load_dataset("siavava/ai-tech-articles")
# optionally, convert it to a pandas dataframe:
df = dataset["train"].to_pandas()
```
You do not need to clone this repo.
HuggingFace will download the dataset for you the first time you load it
and cache it locally, so it does not need to be re-downloaded
(unless it detects a change upstream).
## File Structure
- [`analytics.ipynb`](analytics.ipynb) - Notebook containing some details about the dataset.
- [`example.ipynb`](example.ipynb) - A minimal notebook that loads in the dataset and converts to Pandas.
- [`raw.csv`](raw.csv) - The raw data, in CSV format.
- `data/*.parquet`- compressed [parquet](https://www.databricks.com/glossary/what-is-parquet) containing the data.
- For raw text files, see the [scraper repo](https://github.com/siavava/scrape.hs) on GitHub.
| [
-0.4316302239894867,
-0.6352395415306091,
0.03504965081810951,
0.34825509786605835,
-0.19443291425704956,
0.2461887001991272,
0.038502417504787445,
-0.19848929345607758,
0.7189627289772034,
0.3899078369140625,
-0.673542320728302,
-0.607367753982544,
-0.4250885248184204,
0.19145697355270386... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Langame/starter | Langame | 2022-12-06T18:54:01Z | 145 | 0 | null | [
"task_categories:text-generation",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:en",
"license:mit",
"region:us"
] | 2022-12-06T18:54:01Z | 2022-03-02T23:29:22.000Z | 2022-03-02T23:29:22 | ---
annotations_creators:
- expert-generated
language:
- en
language_creators:
- expert-generated
license:
- mit
multilinguality:
- monolingual
pretty_name: ''
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- text-generation
task_ids: []
---
# Dataset Card for [Dataset Name]
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
[More Information Needed]
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset. | [
-0.47841677069664,
-0.5084842443466187,
0.14602939784526825,
0.278889000415802,
-0.21702472865581512,
0.24832050502300262,
-0.3366999328136444,
-0.375893235206604,
0.6720379590988159,
0.6457639932632446,
-0.9167346358299255,
-1.2200126647949219,
-0.7551794052124023,
0.07273735105991364,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
DDSC/dagw_reddit_filtered_v1.0.0 | DDSC | 2022-11-06T15:30:56Z | 145 | 1 | null | [
"task_categories:text-generation",
"task_ids:language-modeling",
"annotations_creators:no-annotation",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:unknown",
"source_datasets:DDSC/partial-danish-gigaword-no-twitter",
"source_datasets:DDSC/reddit-da",
"language:da... | 2022-11-06T15:30:56Z | 2022-05-11T13:46:39.000Z | 2022-05-11T13:46:39 | ---
annotations_creators:
- no-annotation
language_creators:
- crowdsourced
language:
- da
license:
- cc-by-4.0
multilinguality:
- monolingual
size_categories:
- unknown
source_datasets:
- DDSC/partial-danish-gigaword-no-twitter
- DDSC/reddit-da
task_categories:
- text-generation
task_ids:
- language-modeling
pretty_name: Danish Gigaword Corpus, Reddit (filtered)
language_bcp47:
- da
- da-bornholm
- da-synnejyl
---
# Danish Gigaword Corpus, Reddit (filtered)
*Version*: 1.0.0
*License*: See the respective dataset
This dataset is a variant of the Danish Gigaword [3], which excludes the sections containing
tweets and the modified news contained in danavis20.
Twitter was excluded as it was a sample of a dataset which was available to the authors only.
DanAvis20 (or danavis) was excluded due to preprocessing described in [3] (version 1 on
[arxiv](https://arxiv.org/pdf/2005.03521v1.pdf)), including shuffling of sentences,
pseudonymization of proper nouns, and the replacement of infrequent content words with
statistical cognates, which could lead to sentences such as *"Der er skilsmissesager i
forsikringsselskabet"* (roughly: "There are divorce cases in the insurance company").
Additionally, this dataset includes the [reddit-da](https://huggingface.co/datasets/DDSC/reddit-da) dataset, which contains
1,908,887 documents. The combined dataset has had low-quality text removed using a series
of heuristic filters. Following filtering,
DAGW$_{DFM}$ is deduplicated to remove exact and near-duplicates. For more on data
cleaning, see the processing section below.
This dataset included 1,310,789,818 tokens before filtering, and 833,664,528 (64%) after.
# Dataset information
This is a composite dataset consisting of Danish Gigaword and
[reddit-da](https://huggingface.co/datasets/DDSC/reddit-da). Thus it does not contain its own documentation. For more information, we recommend checking the documentation of the
respective datasets.
### Motivation:
**For what purpose was the dataset created? Who created the dataset? Who funded the
creation of the dataset?**
This dataset was created with the purpose of pre-training Danish language models. It was created by a team of
researchers at the Center for Humanities Computing Aarhus (CHCAA) using a codebase jointly
developed with partners from industry and academia, e.g. KMD, Ekstra Bladet, deepdivr,
and Bristol University. For more on collaborators on this project see
the [GitHub repository](https://github.com/centre-for-humanities-computing/danish-foundation-models).
## Processing
### Quality Filter:
DAGW$_{DFM}$ applies a filter akin to [2]. It keeps documents that:
- Contain at least 2 Danish stopwords. For the stopword list, we use the one used in
SpaCy v.3.1.4.
- Have a mean word length between 3 and 10.
- Have a token length between 50 and 100,000.
- Contain fewer than 5,000,000 characters.
- Among all words, at least 60% have at least one alphabetic character.
- Have a symbol-to-word ratio lower than 10% for hashtags and ellipsis.
- Have fewer than 90% of lines starting with a bullet point.
- Have fewer than 30% of lines ending with an ellipsis.
- Have a low degree of repetitious text:
- Fewer than 30% duplicate lines.
- Fewer than 30% duplicate paragraphs.
- Fewer than 30% of characters are contained within duplicate lines.
- The top 2-4 grams constitute less than 20%, 18%, and 16% of characters, respectively.
- For each document, 5-10 grams which occur more than once constitute less than 15%, 14%, 13%, 12%, 11%, and 10% of
the characters, respectively.
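A minimal sketch of a few of these heuristics (stopword count, mean word length, and alphabetic-character ratio) might look as follows; the stopword set here is a tiny stand-in for the SpaCy v3.1.4 Danish list actually used, and the remaining heuristics are omitted:

```python
# Illustrative re-implementation of three of the heuristics above. The
# stopword set is a small assumed stand-in, not the full SpaCy list.

DANISH_STOPWORDS = {"og", "i", "det", "at", "en", "den", "til", "er", "som"}

def passes_quality_filter(text: str) -> bool:
    words = text.split()
    if not words:
        return False
    # At least 2 Danish stopwords.
    if sum(w.lower() in DANISH_STOPWORDS for w in words) < 2:
        return False
    # Mean word length between 3 and 10.
    mean_len = sum(len(w) for w in words) / len(words)
    if not 3 <= mean_len <= 10:
        return False
    # At least 60% of words contain an alphabetic character.
    alphabetic = sum(any(c.isalpha() for c in w) for w in words)
    if alphabetic / len(words) < 0.6:
        return False
    return True
```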
### Deduplication
The deduplication removed all documents with a 13-gram similarity higher than 80%
following the MinHash algorithm [1] using 128 permutations. The MinHash algorithm is a
probabilistic data structure for approximating the Jaccard similarity between two sets.
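To make the mechanism concrete, here is a minimal stdlib-only sketch of MinHash similarity estimation; this is not the production code, but it mirrors its parameters (word 13-grams, 128 permutations). Each "permutation" is simulated with a salted hash, and the Jaccard similarity of two shingle sets is estimated as the fraction of permutations whose minima agree.

```python
import hashlib

NUM_PERM = 128  # number of simulated permutations, as in the pipeline

def shingles(text: str, n: int = 13) -> set[str]:
    """Word n-grams of a document (13-grams, as in the pipeline)."""
    words = text.split()
    return {" ".join(words[i:i + n]) for i in range(max(1, len(words) - n + 1))}

def minhash_signature(shingle_set: set[str]) -> list[int]:
    """One salted-hash minimum per simulated permutation."""
    signature = []
    for perm in range(NUM_PERM):
        salt = perm.to_bytes(2, "big")
        signature.append(min(
            int.from_bytes(hashlib.blake2b(salt + s.encode(), digest_size=8).digest(), "big")
            for s in shingle_set
        ))
    return signature

def estimated_jaccard(sig_a: list[int], sig_b: list[int]) -> float:
    """Fraction of agreeing minima approximates the Jaccard similarity."""
    return sum(a == b for a, b in zip(sig_a, sig_b)) / NUM_PERM
```

Document pairs whose estimated similarity exceeds 0.8 would then be flagged as near-duplicates.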
# References:
- [1] Broder, Andrei Z. "On the resemblance and containment of documents."
Proceedings. Compression and Complexity of SEQUENCES 1997
(Cat. No. 97TB100171). IEEE, 1997.
- [2] Rae, J. W., Borgeaud, S., Cai, T., Millican, K., Hoffmann, J., Song, F.,
Aslanides, J., Henderson, S., Ring, R., Young, S., Rutherford, E., Hennigan,
T., Menick, J., Cassirer, A., Powell, R., Driessche, G. van den, Hendricks,
L. A., Rauh, M., Huang, P.-S., … Irving, G. (2021).
Scaling Language Models: Methods, Analysis & Insights from Training Gopher.
https://arxiv.org/abs/2112.11446v2
- [3] Strømberg-Derczynski, L., Ciosici, M., Baglini, R., Christiansen, M. H.,
Dalsgaard, J. A., Fusaroli, R., Henrichsen, P. J., Hvingelby, R., Kirkedal, A.,
Kjeldsen, A. S., Ladefoged, C., Nielsen, F. Å., Madsen, J., Petersen, M. L.,
Rystrøm, J. H., & Varab, D. (2021). The Danish Gigaword corpus. Proceedings of the
23rd Nordic Conference on Computational Linguistics (NoDaLiDa), 413–421.
https://aclanthology.org/2021.nodalida-main.46
### Citation
If you wish to cite this work, please see the GitHub page for an up-to-date citation:
https://github.com/centre-for-humanities-computing/danish-foundation-models
| [
-0.5536198616027832,
-0.7467564344406128,
0.3854801654815674,
0.267557829618454,
-0.44270262122154236,
-0.028480686247348785,
-0.3218849003314972,
-0.39102715253829956,
0.4856434762477875,
0.5714398622512817,
-0.36257222294807434,
-0.7833759188652039,
-0.6752520799636841,
0.318580687046051... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
dreamerdeo/finqa | dreamerdeo | 2023-03-06T08:29:39Z | 145 | 2 | null | [
"region:us"
] | 2023-03-06T08:29:39Z | 2023-03-05T08:38:40.000Z | 2023-03-05T08:38:40 | dataset_info:
features:
- name: id
dtype: string
- name: post_text
sequence: string
- name: pre_text
sequence: string
- name: question
dtype: string
- name: answers
dtype: string
- name: table
sequence:
sequence: string
splits:
- name: train
num_bytes: 26984130
num_examples: 6251
- name: validation
num_bytes: 3757103
num_examples: 883
- name: test
num_bytes: 4838430
num_examples: 1147
download_size: 21240722
dataset_size: 35579663
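The `table` feature above is a sequence of sequences of strings, i.e. each table is a list of rows of string cells. A minimal sketch of linearising one for a text model (toy row, not taken from the dataset; the separator scheme is an assumption, not FinQA's own preprocessing):

```python
table = [
    ["", "2019", "2020"],
    ["revenue", "1024", "1310"],
    ["net income", "210", "188"],
]
# Join cells with " | " and rows with " ; " -- one common linearisation scheme.
linearised = " ; ".join(" | ".join(cell for cell in row) for row in table)
```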
| [
-0.8185880780220032,
-0.6971942186355591,
0.07829934358596802,
0.47606703639030457,
-0.5775181651115417,
-0.14187826216220856,
0.1075177863240242,
0.12946970760822296,
0.5690054893493652,
0.6408299207687378,
-0.4653719663619995,
-0.4274206757545471,
-0.6706176996231079,
0.409691721200943,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
bigbio/mqp | bigbio | 2022-12-22T15:45:40Z | 144 | 0 | null | [
"multilinguality:monolingual",
"language:en",
"license:unknown",
"region:us"
] | 2022-12-22T15:45:40Z | 2022-11-13T22:10:07.000Z | 2022-11-13T22:10:07 |
---
language:
- en
bigbio_language:
- English
license: unknown
multilinguality: monolingual
bigbio_license_shortname: UNKNOWN
pretty_name: MQP
homepage: https://github.com/curai/medical-question-pair-dataset
bigbio_pubmed: False
bigbio_public: True
bigbio_tasks:
- SEMANTIC_SIMILARITY
---
# Dataset Card for MQP
## Dataset Description
- **Homepage:** https://github.com/curai/medical-question-pair-dataset
- **Pubmed:** False
- **Public:** True
- **Tasks:** STS
The Medical Question Pairs (MQP) dataset by McCreery et al. (2020) contains pairs of medical questions and paraphrased
versions of each question prepared by a medical professional. Paraphrased versions were labelled as similar (syntactically
dissimilar but contextually similar) or dissimilar (syntactically similar but contextually dissimilar). Labels: 1 = similar, 0 = dissimilar.
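A toy illustration of why surface similarity does not determine the label (the field names here are assumptions for illustration, not the verified schema of the dataset):

```python
def token_overlap(q1, q2):
    # Naive surface similarity: Jaccard overlap of lower-cased tokens.
    a, b = set(q1.lower().split()), set(q2.lower().split())
    return len(a & b) / len(a | b)

# Toy pairs mimicking the description above.
pairs = [
    {"question_1": "why do i get headaches after running",
     "question_2": "what causes head pain following exercise",
     "label": 1},  # syntactically dissimilar, contextually similar
    {"question_1": "why do i get headaches after running",
     "question_2": "does running help with my headaches",
     "label": 0},  # syntactically similar, contextually dissimilar
]
overlaps = [token_overlap(p["question_1"], p["question_2"]) for p in pairs]
# The dissimilar pair shares more surface tokens than the similar one,
# which is exactly what makes this semantic-similarity task non-trivial.
```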
## Citation Information
```
@inproceedings{mccreery2020effective,
    author    = {McCreery, Clara H. and Katariya, Namit and Kannan, Anitha and Chablani, Manish and Amatriain, Xavier},
    title     = {Effective Transfer Learning for Identifying Similar Questions: Matching User Questions to COVID-19 FAQs},
    booktitle = {KDD '20: Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery \& Data Mining},
    pages     = {3458--3465},
    year      = {2020},
    url       = {https://github.com/curai/medical-question-pair-dataset}
}
```
| [
-0.15049469470977783,
-0.5988210439682007,
0.31417202949523926,
-0.2002190500497818,
-0.3037155270576477,
0.10150474309921265,
0.09886907786130905,
-0.2866927683353424,
0.2801983654499054,
0.43562889099121094,
-0.5224310159683228,
-0.49234944581985474,
-0.34089916944503784,
0.1942335665225... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
DDSC/partial-danish-gigaword-small-test-sample | DDSC | 2023-01-09T13:11:16Z | 144 | 0 | null | [
"language:da",
"region:us"
] | 2023-01-09T13:11:16Z | 2023-01-09T13:07:16.000Z | 2023-01-09T13:07:16 | ---
dataset_info:
features:
- name: text
dtype: string
- name: source
dtype: string
- name: doc_id
dtype: string
- name: LICENSE
dtype: string
- name: uri
dtype: string
- name: date_built
dtype: string
splits:
- name: train
num_bytes: 23816547.04337273
num_examples: 2411
download_size: 11686492
dataset_size: 23816547.04337273
language:
- da
pretty_name: Danish Gigaword Test Sample
---
# Dataset Card for "Danish Gigaword Test Sample"
This is a small sample of the dataset `DDSC/partial-danish-gigaword-no-twitter`. It is meant as a small dataset for testing code. It is constructed using the following code:
```python
from datasets import concatenate_datasets, load_dataset
# download dataset from huggingface
dataset = load_dataset("DDSC/partial-danish-gigaword-no-twitter")
# All of the dataset is available in the train split - we can simply:
dataset = dataset["train"]
# downsample it to three domains
legal = dataset.filter(lambda x: x["source"] == "retsinformationdk")
news = dataset.filter(lambda x: x["source"] == "tv2r")
speech = dataset.filter(lambda x: x["source"] == "spont")
# downsample legal and news to 1000 samples each (the speech domain is already small)
legal = legal.select(range(1000))
news = news.select(range(1000))
# combine the three domains
dataset = concatenate_datasets([legal, news, speech])
# upload to hub
dataset.push_to_hub("DDSC/partial-danish-gigaword-small-test-sample")
``` | [
-0.6656700968742371,
-0.4816093444824219,
0.04643535986542702,
0.4361943304538727,
-0.6010194420814514,
0.05728139728307724,
-0.29894277453422546,
-0.20067337155342102,
0.607970118522644,
0.45014822483062744,
-0.7687014937400818,
-0.5103777050971985,
-0.2634011209011078,
0.2135226875543594... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
BelleGroup/school_math_0.25M | BelleGroup | 2023-04-08T03:55:03Z | 144 | 66 | null | [
"task_categories:text2text-generation",
"size_categories:100K<n<1M",
"language:zh",
"license:gpl-3.0",
"region:us"
] | 2023-04-08T03:55:03Z | 2023-04-02T06:57:09.000Z | 2023-04-02T06:57:09 | ---
license: gpl-3.0
task_categories:
- text2text-generation
language:
- zh
size_categories:
- 100K<n<1M
---
# School Math 0.25M
## 内容
包含约25万条由[BELLE](https://github.com/LianjiaTech/BELLE)项目生成的中文数学题数据,包含解题过程。
注意:此数据集是由ChatGPT产生的,未经过严格校验,题目或解题过程可能包含错误。使用过程中请注意这一点。
## 样例
```
{
"instruction": "题目:小华手里有一个装满糖果的袋子,共有12个,小明想知道里面有多少颗糖果,于是他问小华:“你手里的糖果袋子里有偶数个糖果吗?”小华回答:“有,而且多于10颗。”请问小华手里的糖果袋子里最少有几颗糖果?",
"input": "",
"output": "\n由题目可知:小华手里的糖果袋子里有偶数个糖果;\n又知道小华手里的糖果袋子里有多于10颗糖果。\n因为糖果数为偶数,多于10颗,所以糖果数最小必须是12。\n所以小华手里的糖果袋子里最少有12颗糖果。"
}
```
### 字段
```
instruction: 指令
input: 输入(本数据集均为空)
output: 输出
```
## 局限性和使用限制
我们要求开发者仅将我们开源的代码、数据、模型及后续衍生物用于研究目的,不得用于商业,以及其他会对社会带来危害的用途。
由于数据是由*ChatGPT*生成的,未经严格验证,在事实性和其他方面还存在一些不足。因此,在使用此数据集时,请务必注意甄别。
本数据集不代表任何一方的立场、利益或想法,无关任何团体的任何类型的主张。因使用本数据集带来的任何损害、纠纷,本项目的开发者不承担任何责任。
***
# School Math 0.25M
## Contents
Includes approx. 250k Chinese school math quizzes and answers generated by BELLE, with steps to solve the quizzes.
Note: this subset was generated by *ChatGPT* and was not strictly verified. The quizzes or answers might contain errors. Please keep this in mind when using this subset.
## Sample
```
{
"instruction": "题目:小华手里有一个装满糖果的袋子,共有12个,小明想知道里面有多少颗糖果,于是他问小华:“你手里的糖果袋子里有偶数个糖果吗?”小华回答:“有,而且多于10颗。”请问小华手里的糖果袋子里最少有几颗糖果?",
"input": "",
"output": "\n由题目可知:小华手里的糖果袋子里有偶数个糖果;\n又知道小华手里的糖果袋子里有多于10颗糖果。\n因为糖果数为偶数,多于10颗,所以糖果数最小必须是12。\n所以小华手里的糖果袋子里最少有12颗糖果。"
}
```
### Schema
```
instruction: the instruction
input: the input (empty for every record in this dataset)
output: the output
```
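Each record can be assembled into a supervised fine-tuning prompt; a minimal sketch (this prompt template is an assumption for illustration, not necessarily the one used by BELLE):

```python
def to_prompt(record):
    # "input" is empty for every record in this dataset, so it is appended
    # only when present.
    prompt = f"Human: {record['instruction']}"
    if record["input"]:
        prompt += f"\n{record['input']}"
    prompt += "\nAssistant: "
    return prompt, record["output"]

example = {"instruction": "题目:1 + 2 = ?", "input": "", "output": "3"}
prompt, target = to_prompt(example)
```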
## Limitation and Usage Limits
We require that developers use the open-sourced code, data, models, and any other artifacts generated via this project for research purposes only. Commercial use and other potentially harmful use cases are not allowed.
Since this dataset was generated by *ChatGPT* and was not strictly verified, it still has shortcomings regarding factuality and other aspects. When using this dataset, careful inspection is needed.
This dataset does not represent anyone's position, interests, or views, and is not related to any kind of claim by any group. The developers of this project do not assume any responsibility for potential harm caused by the use of this dataset and project. | [
-0.36712339520454407,
-0.8582386374473572,
0.1802322268486023,
0.6791313886642456,
-0.40444591641426086,
-0.2583051025867462,
-0.04328708350658417,
-0.248006209731102,
0.2934579849243164,
0.26774564385414124,
-0.6201363801956177,
-0.8683972358703613,
-0.5895798802375793,
-0.196085989475250... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
togethercomputer/Long-Data-Collections | togethercomputer | 2023-07-26T17:03:50Z | 144 | 62 | null | [
"license:other",
"region:us"
] | 2023-07-26T17:03:50Z | 2023-07-26T07:11:25.000Z | 2023-07-26T07:11:25 | ---
license: other
---
# Dataset Summary
This collection is a compilation of long context datasets, specifically designed for tasks requiring extensive comprehension and inference from large text inputs.
Currently, it encompasses data intended for training a robust base model, which can be found in the pretrain/ directory. Additionally, it includes datasets tailored for specific needs, located in the fine-tune/ directory. These specialized datasets include multi-passage question answering, derived from Natural Questions, and long-context summarization, exemplified by the BookSum dataset.
# Detailed Description
## Pretrain Data
The pretraining data is a collection of diverse datasets utilized to train the AI model. These datasets include a variety of sources that provide a wide range of information, from books to scientific papers, and instruction data. Here's a detailed look at each:
### RedPajama-Book
This dataset is a specific slice of the larger RedPajama-Data-1T. The RedPajama-Book subset specifically focuses on data extracted from books. This broad and diverse range of literary content helps the model to understand and generate text in a wide variety of styles, genres, and topics, and, especially, over a wide range of context lengths.
### RedPajama-ArXiv
The RedPajama-ArXiv dataset is another specific slice of RedPajama-Data-1T. In this dataset, the abstract corresponding to each paper is appended after the paper, providing a summary of the paper's content. This helps the model to leverage the long-range context.
### UL2 Oscar
This dataset is generated with LAION-AI's Open-Instruction-Generalist, asking the model to fill in missing chunks, or complete the text.
### RedPajama
This is a subset of the RedPajama-Data-1T. The RedPajama dataset is a large and diverse dataset that includes a wide variety of data sources. The specific subset used in this case (togethercomputer/RedPajama-Data-1T-Sample) is a representative sample of the larger dataset, providing a broad overview of the types of data included in RedPajama-Data-1T.
### NI
The Materialized Natural Instruction (NI) data is a dataset that focuses on natural language instructions. This dataset has been decontaminated against HELM core scenarios, meaning any data that matches specific scenarios outlined in the HELM core has been removed to avoid bias or overfitting. This dataset aids the model in understanding and generating instructional text.
### P3
The Materialized Public Pool of Prompts (P3) data is a dataset that includes a wide variety of user-generated prompts. This dataset has also been decontaminated against HELM core scenarios. The P3 dataset helps the model in understanding a broad set of user prompts and generating appropriate responses.
### Pile
The Pile dataset is a large and diverse dataset that includes a wide variety of data sources. The specific subset used in this case is a subsample of the larger Pile dataset.
## Fine-tune Data
### Multi-passage QA from Natural Questions:
This dataset is a multi-passage question answering dataset derived from the original Natural Questions (NQ) dataset by Google. The NQ dataset consists of real user queries issued to Google's search engine, paired with high-quality answers. In this derived version, each example consists of a question along with multiple (10-200) Wiki passages, from which the model must infer the correct answer. This dataset is designed to challenge and evaluate models on their ability to handle complex, multi-passage question answering.
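A minimal sketch of packing such an example into a single long-context input (the layout is illustrative; the collection's actual formatting may differ):

```python
def build_input(question, passages, max_chars=16000):
    # Number each passage, concatenate, and place the question last so the
    # model answers after reading the full context.
    context = "\n\n".join(
        f"Passage {i}: {p}" for i, p in enumerate(passages, start=1)
    )
    return f"{context[:max_chars]}\n\nQuestion: {question}\nAnswer:"

prompt = build_input(
    "Which city hosted the 2000 Summer Olympics?",
    ["Sydney hosted the 2000 Summer Olympics.",
     "Athens hosted the 2004 Summer Games."],
)
```

With 10-200 passages per example, the `max_chars` budget (a hypothetical knob here) is what makes such inputs a long-context stress test.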
### BookSum:
BookSum is a dataset for long context summarization. It includes a vast collection of books from various genres, and the task is to generate a coherent and concise summary given a long context from the book. This dataset is designed to test and train models on their ability to understand and summarize long, complex narratives.
# Dataset Limitations and Future Work
While these datasets provide a robust platform for training and evaluating models on long context tasks, they may still contain some limitations. For instance, the datasets might be biased towards the types of questions asked in Google's search engine and the genres of books included in the BookSum dataset. In the future, we plan to expand this collection to include more diverse datasets for a wider range of long context tasks.
# Licensing Information
Please refer to the original sources of the datasets for information on their respective licenses. | [
-0.6303051114082336,
-0.7179576754570007,
0.18674400448799133,
-0.12987783551216125,
-0.2360701560974121,
-0.2516320049762726,
-0.09623841941356659,
-0.27332642674446106,
-0.08925919234752655,
0.7814992070198059,
-0.736831784248352,
-0.589947521686554,
-0.39901936054229736,
0.2340380549430... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
gtfintechlab/fomc_communication | gtfintechlab | 2023-09-12T21:18:49Z | 144 | 1 | null | [
"task_categories:text-classification",
"size_categories:1K<n<10K",
"language:en",
"license:cc-by-nc-4.0",
"finance",
"region:us"
] | 2023-09-12T21:18:49Z | 2023-09-12T21:00:59.000Z | 2023-09-12T21:00:59 | ---
license: cc-by-nc-4.0
task_categories:
- text-classification
language:
- en
tags:
- finance
size_categories:
- 1K<n<10K
---
## Citation and Contact Information
### Cite
Please cite our paper if you use any code, data, or models.
```c
@inproceedings{shah-etal-2023-trillion,
title = "Trillion Dollar Words: A New Financial Dataset, Task {\&} Market Analysis",
author = "Shah, Agam and
Paturi, Suvan and
Chava, Sudheer",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-long.368",
doi = "10.18653/v1/2023.acl-long.368",
pages = "6664--6679",
abstract = "Monetary policy pronouncements by Federal Open Market Committee (FOMC) are a major driver of financial market returns. We construct the largest tokenized and annotated dataset of FOMC speeches, meeting minutes, and press conference transcripts in order to understand how monetary policy influences financial markets. In this study, we develop a novel task of hawkish-dovish classification and benchmark various pre-trained language models on the proposed dataset. Using the best-performing model (RoBERTa-large), we construct a measure of monetary policy stance for the FOMC document release days. To evaluate the constructed measure, we study its impact on the treasury market, stock market, and macroeconomic indicators. Our dataset, models, and code are publicly available on Huggingface and GitHub under CC BY-NC 4.0 license.",
}
```
### Contact Information
Please contact Agam Shah (ashah482[at]gatech[dot]edu) for any issues and questions.
GitHub: [@shahagam4](https://github.com/shahagam4)
Website: [https://shahagam4.github.io/](https://shahagam4.github.io/) | [
-0.3375941216945648,
-0.7327091097831726,
0.4299246072769165,
0.2922547161579132,
-0.022318094968795776,
0.03032718040049076,
-0.5803124904632568,
-0.21489253640174866,
-0.06084994226694107,
0.4145492911338806,
-0.3794272243976593,
-0.6387717127799988,
-0.6526363492012024,
-0.0185312610119... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
yangwang825/sst2-pwws | yangwang825 | 2023-10-09T22:08:55Z | 144 | 0 | null | [
"region:us"
] | 2023-10-09T22:08:55Z | 2023-10-09T21:10:05.000Z | 2023-10-09T21:10:05 | # Stanford Sentiment Treebank - Binary | [
-0.12394005805253983,
-0.22449058294296265,
0.1937677413225174,
0.899057149887085,
-0.5129232406616211,
0.2830127477645874,
0.2150372862815857,
-0.20814304053783417,
0.4294002652168274,
0.2890351116657257,
-0.44846874475479126,
-0.7690048813819885,
-0.8366124629974365,
0.033661291003227234... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
kinianlo/prlang | kinianlo | 2023-10-29T23:18:56Z | 144 | 1 | null | [
"region:us"
] | 2023-10-29T23:18:56Z | 2023-10-21T02:01:27.000Z | 2023-10-21T02:01:27 | ---
dataset_info:
- config_name: conceptnet5_vocabulary_en
features:
- name: word
dtype: string
- name: tag
dtype: string
splits:
- name: train
num_bytes: 123167929
num_examples: 6846008
download_size: 45799508
dataset_size: 123167929
- config_name: wiki_20220301_en_nltk_adjectives
features:
- name: adj_id
dtype: uint32
- name: adj
dtype: string
- name: count
dtype: uint64
splits:
- name: train
num_bytes: 39119443
num_examples: 1323576
download_size: 24403987
dataset_size: 39119443
- config_name: wiki_20220301_en_nltk_nouns
features:
- name: noun_id
dtype: uint32
- name: noun
dtype: string
splits:
- name: train
num_bytes: 12442756.0
num_examples: 676770
download_size: 11115529
dataset_size: 12442756.0
- config_name: wiki_20220301_en_nltk_phrases
features:
- name: phrase_id
dtype: uint32
- name: adj_id
dtype: uint32
- name: noun_id
dtype: uint32
- name: count
dtype: uint64
splits:
- name: train
num_bytes: 207602960
num_examples: 10380148
download_size: 129734024
dataset_size: 207602960
- config_name: wiki_20220301_en_nltk_phrases_with_string
features:
- name: phrase_id
dtype: uint32
- name: adj
dtype: string
- name: noun
dtype: string
- name: count
dtype: uint64
splits:
- name: train
num_bytes: 377124084
num_examples: 10380148
download_size: 172157247
dataset_size: 377124084
- config_name: wiki_20220301_en_nltk_uncased_adjectives
features:
- name: adj_id
dtype: uint32
- name: adj
dtype: string
- name: count
dtype: uint64
splits:
- name: train
num_bytes: 36784396
num_examples: 1235601
download_size: 22724468
dataset_size: 36784396
- config_name: wiki_20220301_en_nltk_uncased_nouns
features:
- name: noun_id
dtype: uint32
- name: noun
dtype: string
- name: count
dtype: uint64
splits:
- name: train
num_bytes: 17153952
num_examples: 647524
download_size: 10809791
dataset_size: 17153952
- config_name: wiki_20220301_en_nltk_uncased_phrases
features:
- name: phrase_id
dtype: uint32
- name: adj_id
dtype: uint32
- name: noun_id
dtype: uint32
- name: count
dtype: uint64
splits:
- name: train
num_bytes: 198626820
num_examples: 9931341
download_size: 124034311
dataset_size: 198626820
- config_name: wiki_20220301_en_nltk_uncased_phrases_clean
features:
- name: phrase_id
dtype: uint32
- name: adj_id
dtype: uint32
- name: noun_id
dtype: uint32
- name: count
dtype: uint64
splits:
- name: train
num_bytes: 67986800
num_examples: 3399340
download_size: 41983842
dataset_size: 67986800
- config_name: wiki_20220301_en_nltk_uncased_phrases_with_string
features:
- name: phrase_id
dtype: uint32
- name: adj
dtype: string
- name: noun
dtype: string
- name: count
dtype: uint64
splits:
- name: train
num_bytes: 361160989
num_examples: 9931341
download_size: 164282553
dataset_size: 361160989
- config_name: wiki_20220301_simple_tags_nltk_adjectives
features:
- name: id
dtype: int32
- name: adjective
dtype: string
- name: count
dtype: int64
splits:
- name: train
num_bytes: 508056
num_examples: 21152
download_size: 351437
dataset_size: 508056
- config_name: wiki_20220301_simple_tags_nltk_contexts
features:
- name: noun1_id
dtype: int64
- name: noun2_id
dtype: int64
- name: noun1_bert_id
dtype: int64
- name: noun2_bert_id
dtype: int64
- name: adjective1_id
dtype: int64
- name: adjective2_id
dtype: int64
- name: schema_id
dtype: int64
- name: sentence
dtype: string
- name: mask_position
dtype: int64
splits:
- name: train
num_bytes: 5162738640
num_examples: 34644320
download_size: 562170983
dataset_size: 5162738640
- config_name: wiki_20220301_simple_tags_nltk_contexts_epsilon
features:
- name: noun1_id
dtype: int64
- name: noun2_id
dtype: int64
- name: adjective1_id
dtype: int64
- name: adjective2_id
dtype: int64
- name: schema_id
dtype: int64
- name: epsilon
dtype: float64
splits:
- name: train
num_bytes: 1662927360
num_examples: 34644320
download_size: 342106520
dataset_size: 1662927360
- config_name: wiki_20220301_simple_tags_nltk_contexts_epsilon_no_intro
features:
- name: noun1_id
dtype: int64
- name: noun2_id
dtype: int64
- name: adjective1_id
dtype: int64
- name: adjective2_id
dtype: int64
- name: schema_id
dtype: int64
- name: epsilon
dtype: float64
splits:
- name: train
num_bytes: 1662927360
num_examples: 34644320
download_size: 337961367
dataset_size: 1662927360
- config_name: wiki_20220301_simple_tags_nltk_contexts_no_intro
features:
- name: noun1_id
dtype: int64
- name: noun2_id
dtype: int64
- name: noun1_bert_id
dtype: int64
- name: noun2_bert_id
dtype: int64
- name: adjective1_id
dtype: int64
- name: adjective2_id
dtype: int64
- name: schema_id
dtype: int64
- name: sentence
dtype: string
- name: mask_position
dtype: int64
splits:
- name: train
num_bytes: 4022762320
num_examples: 34644320
download_size: 285243023
dataset_size: 4022762320
- config_name: wiki_20220301_simple_tags_nltk_filtered_noun_pairs
features:
- name: noun1_id
dtype: int64
- name: noun2_id
dtype: int64
- name: adjectives_id
sequence: int64
splits:
- name: train
num_bytes: 25983240
num_examples: 433054
download_size: 4499602
dataset_size: 25983240
- config_name: wiki_20220301_simple_tags_nltk_noun_pairs
features:
- name: noun1_id
dtype: int32
- name: noun2_id
dtype: int32
- name: adjectives_id
sequence: int32
splits:
- name: train
num_bytes: 125583432
num_examples: 3245260
download_size: 44230314
dataset_size: 125583432
- config_name: wiki_20220301_simple_tags_nltk_nouns
features:
- name: id
dtype: int32
- name: noun
dtype: string
- name: count
dtype: int64
splits:
- name: train
num_bytes: 221774
num_examples: 9521
download_size: 154872
dataset_size: 221774
- config_name: wiki_20220301_simple_tags_nltk_phrases
features:
- name: adjective_id
dtype: int32
- name: noun_id
dtype: int32
- name: count
dtype: int64
splits:
- name: train
num_bytes: 3514128
num_examples: 219633
download_size: 1993091
dataset_size: 3514128
- config_name: wiki_20220301_simple_tags_nltk_scenarios
features:
- name: noun1_id
dtype: uint32
- name: noun2_id
dtype: uint32
- name: adjectives_id
sequence: uint32
- name: epsilons
sequence: float64
splits:
- name: train
num_bytes: 2702256960
num_examples: 51966480
download_size: 553286399
dataset_size: 2702256960
configs:
- config_name: conceptnet5_vocabulary_en
data_files:
- split: train
path: conceptnet5_vocabulary_en/train-*
- config_name: wiki_20220301_en_nltk_adjectives
data_files:
- split: train
path: wiki_20220301_en_nltk_adjectives/train-*
- config_name: wiki_20220301_en_nltk_nouns
data_files:
- split: train
path: wiki_20220301_en_nltk_nouns/train-*
- config_name: wiki_20220301_en_nltk_phrases
data_files:
- split: train
path: wiki_20220301_en_nltk_phrases/train-*
- config_name: wiki_20220301_en_nltk_phrases_with_string
data_files:
- split: train
path: wiki_20220301_en_nltk_phrases_with_string/train-*
- config_name: wiki_20220301_en_nltk_uncased_adjectives
data_files:
- split: train
path: wiki_20220301_en_nltk_uncased_adjectives/train-*
- config_name: wiki_20220301_en_nltk_uncased_nouns
data_files:
- split: train
path: wiki_20220301_en_nltk_uncased_nouns/train-*
- config_name: wiki_20220301_en_nltk_uncased_phrases
data_files:
- split: train
path: wiki_20220301_en_nltk_uncased_phrases/train-*
- config_name: wiki_20220301_en_nltk_uncased_phrases_clean
data_files:
- split: train
path: wiki_20220301_en_nltk_uncased_phrases_clean/train-*
- config_name: wiki_20220301_en_nltk_uncased_phrases_with_string
data_files:
- split: train
path: wiki_20220301_en_nltk_uncased_phrases_with_string/train-*
- config_name: wiki_20220301_simple_tags_nltk_adjectives
data_files:
- split: train
path: wiki_20220301_simple_tags_nltk_adjectives/train-*
- config_name: wiki_20220301_simple_tags_nltk_contexts
data_files:
- split: train
path: wiki_20220301_simple_tags_nltk_contexts/train-*
- config_name: wiki_20220301_simple_tags_nltk_contexts_epsilon
data_files:
- split: train
path: wiki_20220301_simple_tags_nltk_contexts_epsilon/train-*
- config_name: wiki_20220301_simple_tags_nltk_contexts_epsilon_no_intro
data_files:
- split: train
path: wiki_20220301_simple_tags_nltk_contexts_epsilon_no_intro/train-*
- config_name: wiki_20220301_simple_tags_nltk_contexts_no_intro
data_files:
- split: train
path: wiki_20220301_simple_tags_nltk_contexts_no_intro/train-*
- config_name: wiki_20220301_simple_tags_nltk_filtered_noun_pairs
data_files:
- split: train
path: wiki_20220301_simple_tags_nltk_filtered_noun_pairs/train-*
- config_name: wiki_20220301_simple_tags_nltk_noun_pairs
data_files:
- split: train
path: wiki_20220301_simple_tags_nltk_noun_pairs/train-*
- config_name: wiki_20220301_simple_tags_nltk_nouns
data_files:
- split: train
path: wiki_20220301_simple_tags_nltk_nouns/train-*
- config_name: wiki_20220301_simple_tags_nltk_phrases
data_files:
- split: train
path: wiki_20220301_simple_tags_nltk_phrases/train-*
- config_name: wiki_20220301_simple_tags_nltk_scenarios
data_files:
- split: train
path: wiki_20220301_simple_tags_nltk_scenarios/train-*
---
# Dataset Card for "prlang"
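The `*_phrases` configs declared above store integer ids into the adjective and noun tables; a minimal sketch of joining them back to strings (toy rows that mirror the declared schema, not real data):

```python
# Toy lookup tables and phrase rows mirroring the declared schema
# (adj_id/noun_id act as foreign keys into the adjective and noun configs).
adjectives = {0: "red", 1: "tall"}
nouns = {0: "car", 1: "tree"}
phrases = [
    {"phrase_id": 0, "adj_id": 0, "noun_id": 0, "count": 12},
    {"phrase_id": 1, "adj_id": 1, "noun_id": 1, "count": 3},
]
resolved = [
    (adjectives[p["adj_id"]], nouns[p["noun_id"]], p["count"]) for p in phrases
]
```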
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.5404379367828369,
-0.23192952573299408,
0.021447831764817238,
0.45923471450805664,
-0.2761651575565338,
-0.0837741568684578,
-0.09023324400186539,
-0.13410097360610962,
0.7889487147331238,
0.5581076741218567,
-0.7324997782707214,
-0.7823659777641296,
-0.4755600690841675,
-0.132679313421... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
PsiPi/CodeAlpaca_20k_NoBlanks | PsiPi | 2023-10-29T06:06:30Z | 144 | 1 | null | [
"task_categories:text-generation",
"size_categories:10K<n<100K",
"license:cc-by-4.0",
"code",
"region:us"
] | 2023-10-29T06:06:30Z | 2023-10-28T19:33:39.000Z | 2023-10-28T19:33:39 | ---
license: cc-by-4.0
task_categories:
- text-generation
tags:
- code
size_categories:
- 10K<n<100K
---
Just a repost of the upstream dataset with empty ("") records elided | [
-0.12531021237373352,
-0.5053056478500366,
0.5261115431785583,
0.2537934184074402,
-0.5049706697463989,
-0.35091397166252136,
0.38672617077827454,
-0.1649816781282425,
0.8094879388809204,
0.8973783850669861,
-0.8348460793495178,
-0.4097643494606018,
-0.6090119481086731,
0.22707925736904144... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
bigbio/chebi_nactem | bigbio | 2022-12-22T15:44:20Z | 143 | 0 | null | [
"multilinguality:monolingual",
"language:en",
"license:cc-by-4.0",
"region:us"
] | 2022-12-22T15:44:20Z | 2022-11-13T22:07:43.000Z | 2022-11-13T22:07:43 |
---
language:
- en
bigbio_language:
- English
license: cc-by-4.0
multilinguality: monolingual
bigbio_license_shortname: CC_BY_4p0
pretty_name: CHEBI Corpus
homepage: http://www.nactem.ac.uk/chebi
bigbio_pubmed: True
bigbio_public: True
bigbio_tasks:
- NAMED_ENTITY_RECOGNITION
- RELATION_EXTRACTION
---
# Dataset Card for CHEBI Corpus
## Dataset Description
- **Homepage:** http://www.nactem.ac.uk/chebi
- **Pubmed:** True
- **Public:** True
- **Tasks:** NER,RE
The ChEBI corpus contains 199 annotated abstracts and 100 annotated full papers.
All documents in the corpus have been annotated for named entities and relations
between them. In total, our corpus provides over 15,000 named entity annotations
and over 6,000 relations between entities.
## Citation Information
```
@inproceedings{Shardlow2018,
title = {
A New Corpus to Support Text Mining for the Curation of Metabolites in the
{ChEBI} Database
},
author = {
Shardlow, M J and Nguyen, N and Owen, G and O'Donovan, C and Leach, A and
McNaught, J and Turner, S and Ananiadou, S
},
year = 2018,
month = may,
booktitle = {
Proceedings of the Eleventh International Conference on Language Resources
and Evaluation ({LREC} 2018)
},
location = {Miyazaki, Japan},
pages = {280--285},
conference = {
Eleventh International Conference on Language Resources and Evaluation
(LREC 2018)
},
language = {en}
}
```
| [
-0.22618980705738068,
-0.40599456429481506,
0.6221673488616943,
0.07253780961036682,
-0.10811430215835571,
0.021445341408252716,
-0.17855554819107056,
-0.6653991341590881,
0.38762229681015015,
0.6479132175445557,
-0.4892768859863281,
-0.984594464302063,
-0.5186741352081299,
0.4438834786415... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Whispering-GPT/whisper-transcripts-ml-street-talk | Whispering-GPT | 2022-12-02T07:19:28Z | 143 | 0 | null | [
"task_categories:automatic-speech-recognition",
"whisper",
"whispering",
"medium",
"region:us"
] | 2022-12-02T07:19:28Z | 2022-12-01T05:24:10.000Z | 2022-12-01T05:24:10 | ---
task_categories:
- automatic-speech-recognition
dataset_info:
features:
- name: CHANNEL_NAME
dtype: string
- name: URL
dtype: string
- name: TITLE
dtype: string
- name: DESCRIPTION
dtype: string
- name: TRANSCRIPTION
dtype: string
- name: SEGMENTS
dtype: string
splits:
- name: train
num_bytes: 21764632
num_examples: 83
download_size: 10320580
dataset_size: 21764632
tags:
- whisper
- whispering
- medium
---
# Dataset Card for "whisper-transcripts-mlst"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.29509568214416504,
-0.23438496887683868,
0.26289746165275574,
0.3773082494735718,
-0.15831603109836578,
0.23529112339019775,
0.1285397708415985,
-0.14899516105651855,
0.7904952168464661,
0.3464270830154419,
-1.2009329795837402,
-0.9381423592567444,
-0.7077713012695312,
-0.43077459931373... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
language-and-voice-lab/althingi_asr | language-and-voice-lab | 2023-02-24T22:14:42Z | 143 | 0 | null | [
"task_categories:automatic-speech-recognition",
"annotations_creators:machine-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"language:is",
"license:cc-by-4.0",
"icelandic",
"parliamentary speech",
"parlament",
"al... | 2023-02-24T22:14:42Z | 2022-12-09T20:33:28.000Z | 2022-12-09T20:33:28 | ---
annotations_creators:
- machine-generated
language:
- is
language_creators:
- found
license:
- cc-by-4.0
multilinguality:
- monolingual
pretty_name: Althingi Parliamentary Speech
size_categories:
- 100K<n<1M
source_datasets:
- original
tags:
- icelandic
- parliamentary speech
- parlament
- althingi
task_categories:
- automatic-speech-recognition
task_ids: []
---
# Dataset Card for althingi_asr
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Data](#data)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Additional Information](#additional-information)
- [Other Known Limitations](#other-known-limitations)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** Althingi Parliamentary Speech
- **Repository:** [LDC](https://catalog.ldc.upenn.edu/LDC2021S01)
- **Paper:** [Building an ASR corpus using Althingi’s Parliamentary Speeches](https://www.researchgate.net/profile/Jon-Gudnason/publication/319185185_Building_an_ASR_Corpus_Using_Althingi's_Parliamentary_Speeches/links/5d1dbdd3a6fdcc2462bdda0f/Building-an-ASR-Corpus-Using-Althingis-Parliamentary-Speeches.pdf)
- **Point of Contact:** [Jón Guðnason](mailto:jg@ru.is)
### Dataset Summary
Althingi Parliamentary Speech consists of approximately 542 hours of recorded speech from Althingi, the Icelandic Parliament, along with corresponding transcripts, a pronunciation dictionary and two language models. Speeches date from 2005-2016.
This dataset was collected in 2016 by the ASR for Althingi project at [Reykjavik University](https://en.ru.is/) in collaboration with the Althingi speech department. The purpose of that project was to develop an ASR (automatic speech recognition) system for parliamentary speech to replace the procedure of manually transcribing performed speeches.
### Data
The mean speech length is six minutes, with speeches ranging from under one minute to around thirty minutes. The corpus features 197 speakers (105 male, 92 female) and is split into training, development and evaluation sets. The language models are of two types: a pruned trigram model, used in decoding, and an unpruned constant ARPA 5-gram model, used for re-scoring decoding results.
Audio data is presented as single channel 16-bit mp3 files; the majority of these files have a sample rate of 44.1 kHz. Transcripts and other text data are plain text encoded in UTF-8.
### Example Usage
The Althingi Corpus is divided into three splits: train, validation and test. To load the whole dataset:
```python
from datasets import load_dataset
althingi_asr = load_dataset("language-and-voice-lab/althingi_asr")
```
To load a specific split (for example, the validation split) do:
```python
from datasets import load_dataset
althingi_asr = load_dataset("language-and-voice-lab/althingi_asr",split="validation")
```
### Supported Tasks
automatic-speech-recognition: The dataset can be used to train a model for Automatic Speech Recognition (ASR). The model is presented with an audio file and asked to transcribe the audio file to written text. The most common evaluation metric is the word error rate (WER).
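As a rough illustration of the metric, WER can be sketched in pure Python as the word-level edit distance between reference and hypothesis, normalized by the number of reference words (a minimal sketch; evaluations in practice typically use a dedicated library such as `jiwer`, and the example strings below are made up):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edit distance / number of reference words."""
    ref, hyp = reference.split(), hypothesis.split()
    # Dynamic-programming edit distance over words, one row at a time.
    d = list(range(len(hyp) + 1))
    for i in range(1, len(ref) + 1):
        prev, d[0] = d[0], i
        for j in range(1, len(hyp) + 1):
            cur = d[j]
            d[j] = min(d[j] + 1,                             # deletion
                       d[j - 1] + 1,                         # insertion
                       prev + (ref[i - 1] != hyp[j - 1]))    # substitution
            prev = cur
    return d[len(hyp)] / len(ref)

print(wer("og má svo sannarlega segja", "og ma svo segja"))  # 2 errors / 5 words = 0.4
```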
### Languages
The audio is in Icelandic.
## Dataset Structure
### Data Instances
```python
{
'audio_id': 'rad20160602T000219_00083',
'audio': {
'path': '/home/inga/.cache/HuggingFace/datasets/downloads/extracted/52607f9db9e3394263070575d29323213b99a06a996c43d4fe75bca115827d12/dev/EyH/rad20160602T000219/rad20160602T000219_00083.flac',
'array': array([-0.01098633, -0.01489258, -0.01040649, ..., 0.00314331,
0.00186157, 0.00527954], dtype=float32),
'sampling_rate': 16000
},
'speaker_id': 'rad20160602T000219',
'duration': 12.67199993133545,
'normalized_text': 'og má svo sannarlega segja að landslagið sé nokkuð breytt frá því þrjú komma tvö prósent þjóðarinnar töldust vera innflytjendur árið tvö þúsund en nú teljast tíu prósent þjóðarinnar vera fyrsta og önnur kynslóð innflytjenda'
}
```
### Data Fields
* `audio_id` (string) - id of audio segment
* `audio` (datasets.Audio) - a dictionary containing the path to the audio, the decoded audio array, and the sampling rate. In non-streaming mode (default), the path points to the locally extracted audio. In streaming mode, the path is the relative path of an audio inside its archive (as files are not downloaded and extracted locally).
* `speaker_id` (string) - id of speaker
* `duration` (float32) - duration of the audio file in seconds.
* `normalized_text` (string) - normalized audio segment transcription.
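As a sanity check on these fields, `duration` should roughly equal the length of the decoded `audio` array divided by its sampling rate. A minimal sketch on a hypothetical sample (the array and duration values below are made up for illustration, not taken from the corpus):

```python
# Hypothetical sample mirroring the instance layout shown above.
sample = {
    "audio_id": "rad20160602T000219_00083",
    "audio": {"array": [0.0] * 202752, "sampling_rate": 16000},
    "duration": 12.672,
}

# Seconds of audio = number of samples / samples per second.
computed = len(sample["audio"]["array"]) / sample["audio"]["sampling_rate"]
print(computed)  # 12.672
assert abs(computed - sample["duration"]) < 0.01
```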
### Data Splits
The corpus is split into train, evaluation, and test portions. The lengths of the portions are: train = 514h29m, test = 13h52m, evaluation = 14h02m.
To load a specific portion, please see the section "Example Usage" above.
## Additional Information
### Other Known Limitations
"Althingi Parliamentary Speech" by the Language and Voice Laboratory (LVL) at the Reykjavik University is licensed under a Creative Commons Attribution 4.0 International (CC BY 4.0) License with the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
### Licensing Information
[CC-BY-4.0](https://creativecommons.org/licenses/by/4.0/)
### Citation Information
```
@misc{helgadottiralthingi2021,
title={Althingi Parliamentary Speech},
ldc_catalog_no={LDC2021S01},
DOI={https://doi.org/10.35111/695b-6697},
author={Helgadóttir, Inga Rún and Kjaran, Róbert and Nikulásdóttir, Anna Björk and Guðnason, Jón},
  publisher={Reykjavík University},
journal={Linguistic Data Consortium, Philadelphia},
year={2021},
url={https://catalog.ldc.upenn.edu/LDC2021S01},
}
```
### Contributions
This project was made possible through the support of Althingi’s information and publications departments. The authors would like to thank Solveig K. Jónsdóttir, Þorbjörg Árnadóttir and Ingvi Stígsson for their valuable help.
| [
-0.38224226236343384,
-0.5292953848838806,
-0.07540936022996902,
0.03620922565460205,
-0.4069589078426361,
-0.1352873295545578,
-0.48126184940338135,
-0.3218780755996704,
0.22881139814853668,
0.36667004227638245,
-0.309953510761261,
-0.5346649885177612,
-0.5499333739280701,
0.2364294528961... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
nthngdy/bert_dataset_202203 | nthngdy | 2023-01-17T10:10:06Z | 143 | 0 | null | [
"task_categories:text-generation",
"task_categories:fill-mask",
"language:en",
"license:apache-2.0",
"language-modeling",
"masked-language-modeling",
"region:us"
] | 2023-01-17T10:10:06Z | 2023-01-16T14:40:52.000Z | 2023-01-16T14:40:52 | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 24635440616
num_examples: 146707688
download_size: 14651841592
dataset_size: 24635440616
license: apache-2.0
task_categories:
- text-generation
- fill-mask
language:
- en
tags:
- language-modeling
- masked-language-modeling
pretty_name: BERT Dataset (BookCorpus + Wikipedia 03/2022)
---
# Dataset Card for "bert_dataset_202203"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.7664685845375061,
-0.2309035062789917,
0.27741169929504395,
0.42882227897644043,
-0.1989615261554718,
-0.21764130890369415,
0.28621503710746765,
-0.4117991030216217,
0.7897561192512512,
0.48327380418777466,
-1.0617256164550781,
-0.5940335988998413,
-0.431477427482605,
-0.365467131137847... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
KaiLv/UDR_Yelp | KaiLv | 2023-06-21T12:52:51Z | 143 | 0 | null | [
"region:us"
] | 2023-06-21T12:52:51Z | 2023-06-21T12:52:34.000Z | 2023-06-21T12:52:34 | ---
dataset_info:
features:
- name: idx
dtype: int64
- name: label
dtype: int64
- name: sentence
dtype: string
splits:
- name: train
num_bytes: 22696875
num_examples: 30000
- name: test
num_bytes: 2261177
num_examples: 3000
- name: debug
num_bytes: 3745338
num_examples: 5000
download_size: 18407788
dataset_size: 28703390
---
# Dataset Card for "UDR_Yelp"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.3159898817539215,
-0.2332763969898224,
0.2806316018104553,
-0.06183980405330658,
-0.04252012446522713,
-0.05063404515385628,
0.4239482879638672,
-0.17729875445365906,
0.7795320153236389,
0.582156777381897,
-0.8903606534004211,
-0.687609076499939,
-0.21982786059379578,
-0.234940126538276... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
yuzuai/rakuda-questions | yuzuai | 2023-06-23T08:01:35Z | 143 | 3 | null | [
"task_categories:conversational",
"task_categories:question-answering",
"size_categories:n<1K",
"source_datasets:original",
"language:ja",
"license:mit",
"region:us"
] | 2023-06-23T08:01:35Z | 2023-06-23T01:08:52.000Z | 2023-06-23T01:08:52 | ---
license: mit
language:
- ja
pretty_name: Rakuda - Questions for Japanese Models
task_categories:
- conversational
- question-answering
size_categories:
- n<1K
source_datasets:
- original
---
# Rakuda - Questions for Japanese models
**Repository**: [https://github.com/yuzu-ai/japanese-llm-ranking](https://github.com/yuzu-ai/japanese-llm-ranking)
This is a set of 40 questions in Japanese about Japanese-specific topics designed to evaluate the capabilities of AI Assistants in Japanese.
The questions are evenly distributed between four categories: history, society, government, and geography.
Questions in the first three categories are open-ended, while the geography questions are more specific.
Answers to these questions can be used to rank the Japanese abilities of models, in the same way the [vicuna-eval questions](https://lmsys.org/vicuna_eval/) are frequently used to measure the usefulness of assistants.
## Usage
```python
from datasets import load_dataset
dataset = load_dataset("yuzuai/rakuda-questions")
print(dataset)
# => DatasetDict({
# train: Dataset({
# features: ['category', 'question_id', 'text'],
# num_rows: 40
# })
# })
```
| [
-0.7409700751304626,
-0.9379522800445557,
0.5460542440414429,
0.09475305676460266,
-0.13708806037902832,
-0.363832026720047,
-0.355996310710907,
-0.29708436131477356,
0.2932009696960449,
0.5958436131477356,
-0.504711925983429,
-0.5860086679458618,
-0.4042951464653015,
0.2490694373846054,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
iashchak/igor_link_dialogues | iashchak | 2023-11-08T00:46:25Z | 143 | 2 | null | [
"language:ru",
"not-for-all-audiences",
"region:us"
] | 2023-11-08T00:46:25Z | 2023-09-01T19:24:32.000Z | 2023-09-01T19:24:32 | ---
language:
- ru
tags:
- not-for-all-audiences
dataset_info:
features:
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
splits:
- name: train
num_bytes: 34596231.08868832
num_examples: 31516
download_size: 16169467
dataset_size: 34596231.08868832
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
| [
-0.1285335123538971,
-0.1861683875322342,
0.6529128551483154,
0.49436232447624207,
-0.19319400191307068,
0.23607441782951355,
0.36072009801864624,
0.05056373029947281,
0.5793656706809998,
0.7400146722793579,
-0.650810182094574,
-0.23784008622169495,
-0.7102247476577759,
-0.0478255338966846... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
joshswartz/hellaswag_train_d2_wikihow | joshswartz | 2023-11-07T11:41:54Z | 143 | 0 | null | [
"region:us"
] | 2023-11-07T11:41:54Z | 2023-11-07T11:27:01.000Z | 2023-11-07T11:27:01 | ---
dataset_info:
features:
- name: ind
dtype: int32
- name: activity_label
dtype: string
- name: ctx_a
dtype: string
- name: ctx_b
dtype: string
- name: ctx
dtype: string
- name: endings
sequence: string
- name: source_id
dtype: string
- name: split
dtype: string
- name: split_type
dtype: string
- name: label
dtype: string
- name: full_text
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 39854306.15676535
num_examples: 12583
download_size: 23194203
dataset_size: 39854306.15676535
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "hellaswag_train_wikihow_2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.5354622006416321,
-0.09868244081735611,
0.12048743665218353,
0.2487645298242569,
-0.5445643067359924,
-0.292766273021698,
0.14835305511951447,
-0.0016075900057330728,
0.648391604423523,
0.3001948893070221,
-1.0304855108261108,
-0.5568305253982544,
-0.6389672756195068,
-0.403149157762527... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
zxvix/Anatomy_Gray | zxvix | 2023-11-26T10:52:22Z | 143 | 0 | null | [
"region:us"
] | 2023-11-26T10:52:22Z | 2023-11-24T12:53:10.000Z | 2023-11-24T12:53:10 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- config_name: paraphrase
data_files:
- split: train
path: paraphrase/train-*
- config_name: x4
data_files:
- split: train
path: x4/train-*
dataset_info:
- config_name: default
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 2278914
num_examples: 639
download_size: 1004445
dataset_size: 2278914
- config_name: paraphrase
features:
- name: text
dtype: string
- name: original_text
dtype: string
splits:
- name: train
num_bytes: 4359695
num_examples: 639
download_size: 1934418
dataset_size: 4359695
- config_name: x4
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 10429864
num_examples: 2920
download_size: 1808255
dataset_size: 10429864
---
# Dataset Card for "Anatomy_Gray"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.2865196466445923,
-0.08412310481071472,
0.5874561667442322,
0.09185626357793808,
-0.38483867049217224,
-0.14300751686096191,
0.40623265504837036,
-0.15837223827838898,
0.8819935321807861,
0.3259768784046173,
-0.9309656620025635,
-0.804029107093811,
-0.6368144154548645,
-0.30672937631607... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
glitchbench/GlitchBench | glitchbench | 2023-11-25T05:25:03Z | 143 | 1 | null | [
"region:us"
] | 2023-11-25T05:25:03Z | 2023-11-25T05:24:26.000Z | 2023-11-25T05:24:26 | ---
dataset_info:
features:
- name: image
dtype: image
- name: id
dtype: string
- name: reddit
dtype: string
- name: glitch-type
dtype: string
- name: game
dtype: string
- name: source
dtype: string
- name: description
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: validation
num_bytes: 686309290.0
num_examples: 607
download_size: 686303027
dataset_size: 686309290.0
---
# Dataset Card for "GlitchBench"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.5914178490638733,
-0.4222186207771301,
0.30453526973724365,
0.22262734174728394,
-0.22628386318683624,
0.18642012774944305,
0.4530356824398041,
-0.4209285080432892,
1.0001407861709595,
0.39645546674728394,
-0.9424450993537903,
-0.6162673234939575,
-0.44870898127555847,
-0.44389829039573... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
bigbio/chia | bigbio | 2022-12-22T15:44:25Z | 142 | 1 | null | [
"multilinguality:monolingual",
"language:en",
"license:cc-by-4.0",
"region:us"
] | 2022-12-22T15:44:25Z | 2022-11-13T22:07:53.000Z | 2022-11-13T22:07:53 |
---
language:
- en
bigbio_language:
- English
license: cc-by-4.0
multilinguality: monolingual
bigbio_license_shortname: CC_BY_4p0
pretty_name: CHIA
homepage: https://github.com/WengLab-InformaticsResearch/CHIA
bigbio_pubmed: False
bigbio_public: True
bigbio_tasks:
- NAMED_ENTITY_RECOGNITION
- RELATION_EXTRACTION
---
# Dataset Card for CHIA
## Dataset Description
- **Homepage:** https://github.com/WengLab-InformaticsResearch/CHIA
- **Pubmed:** False
- **Public:** True
- **Tasks:** NER,RE
A large annotated corpus of patient eligibility criteria extracted from 1,000
interventional, Phase IV clinical trials registered in ClinicalTrials.gov. This
dataset includes 12,409 annotated eligibility criteria, represented by 41,487
distinctive entities of 15 entity types and 25,017 relationships of 12
relationship types.
## Citation Information
```
@article{kury2020chia,
title = {Chia, a large annotated corpus of clinical trial eligibility criteria},
author = {
    Kury, Fabr{\'\i}cio and Butler, Alex and Yuan, Chi and Fu, Li-heng and
Sun, Yingcheng and Liu, Hao and Sim, Ida and Carini, Simona and Weng,
Chunhua
},
year = 2020,
journal = {Scientific data},
publisher = {Nature Publishing Group},
volume = 7,
number = 1,
pages = {1--11}
}
```
| [
-0.15211103856563568,
-0.3355785608291626,
0.5566981434822083,
0.4624205529689789,
-0.39657092094421387,
-0.17302292585372925,
0.12831971049308777,
-0.5887582898139954,
0.3424457907676697,
0.4429016709327698,
-0.25098007917404175,
-0.826920747756958,
-0.5965725183486938,
0.551830530166626,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
biu-nlp/abstract-sim | biu-nlp | 2023-05-29T09:33:17Z | 142 | 2 | null | [
"region:us"
] | 2023-05-29T09:33:17Z | 2023-05-13T16:43:12.000Z | 2023-05-13T16:43:12 | A dataset of Wikipedia sentences accompanied by valid and invalid abstract descriptions. | [
-0.3750840127468109,
-0.7168946266174316,
0.5023364424705505,
0.16511473059654236,
-0.09550261497497559,
-0.3757340908050537,
-0.06463879346847534,
-0.3121302127838135,
0.3410230875015259,
0.3976600170135498,
-0.5768712759017944,
-0.24082987010478973,
-0.3554874062538147,
0.786430239677429... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
JasperLS/prompt-injections | JasperLS | 2023-05-16T17:16:21Z | 142 | 6 | null | [
"region:us"
] | 2023-05-16T17:16:21Z | 2023-05-16T17:16:15.000Z | 2023-05-16T17:16:15 | ---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype: int64
splits:
- name: train
num_bytes: 71720
num_examples: 546
- name: test
num_bytes: 15981
num_examples: 116
download_size: 51215
dataset_size: 87701
---
# Dataset Card for "deberta-v3-base-injection-dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.4362238347530365,
-0.4642913341522217,
0.4360530972480774,
0.34602388739585876,
-0.3675534427165985,
0.010183866135776043,
0.6049597859382629,
-0.1933138221502304,
0.7048846483230591,
0.5993927717208862,
-0.4807889759540558,
-0.9770994782447815,
-0.7282224893569946,
-0.3842751681804657,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
GATE-engine/fungi | GATE-engine | 2023-06-05T16:36:25Z | 142 | 1 | null | [
"region:us"
] | 2023-06-05T16:36:25Z | 2023-06-05T00:42:00.000Z | 2023-06-05T00:42:00 | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype: int64
splits:
- name: train
num_bytes: 6188400790.875
num_examples: 64449
- name: validation
num_bytes: 1173258274.625
num_examples: 12195
- name: test
num_bytes: 1260333216.5
num_examples: 13116
download_size: 835444680
dataset_size: 8621992282.0
---
# Dataset Card for "fungi"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.41960781812667847,
-0.3959372341632843,
0.3841085135936737,
0.11620794981718063,
-0.2373027503490448,
0.08673427999019623,
0.31831392645835876,
-0.18723908066749573,
1.0239510536193848,
0.618147611618042,
-0.8706645965576172,
-0.9814344048500061,
-0.6618761420249939,
-0.0313792265951633... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
hugcyp/LCSTS | hugcyp | 2023-08-16T02:48:38Z | 142 | 1 | null | [
"region:us"
] | 2023-08-16T02:48:38Z | 2023-08-16T01:59:31.000Z | 2023-08-16T01:59:31 | Entry not found | [
-0.32276472449302673,
-0.22568407654762268,
0.8622258901596069,
0.4346148371696472,
-0.5282984972000122,
0.7012965679168701,
0.7915717363357544,
0.07618629932403564,
0.7746022939682007,
0.2563222646713257,
-0.785281777381897,
-0.22573848068714142,
-0.9104482531547546,
0.5715669393539429,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
FudanSELab/ClassEval | FudanSELab | 2023-09-04T06:35:53Z | 142 | 1 | null | [
"task_categories:text2text-generation",
"size_categories:n<1K",
"language:en",
"license:mit",
"code-generation",
"arxiv:2308.01861",
"region:us"
] | 2023-09-04T06:35:53Z | 2023-09-02T09:28:37.000Z | 2023-09-02T09:28:37 | ---
license: mit
language:
- en
size_categories:
- n<1K
tags:
- code-generation
task_categories:
- text2text-generation
pretty_name: ClassEval
configs:
- config_name: default
data_files:
- split: test
path: "ClassEval_data.json"
---
# Dataset Card for FudanSELab ClassEval
## Dataset Description
- **Repository:** [GitHub Repository](https://github.com/FudanSELab/ClassEval)
- **Paper:** [ClassEval: A Manually-Crafted Benchmark for Evaluating LLMs on Class-level Code Generation](https://arxiv.org/abs/2308.01861)
### Dataset Summary
We manually built ClassEval, a benchmark of 100 class-level Python coding tasks; it consists of 100 classes and 412 methods, with an average of 33.1 test cases per class.
Diversity across the 100 class-level tasks is maintained by spanning a wide spectrum of topics, including Management Systems, Data Formatting, Mathematical Operations, Game Development, File Handling, Database Operations and Natural Language Processing.
The 412 methods have been constructed with diverse dependencies, including (i) Library Dependency, where the methods rely on specific external libraries; (ii) Field Dependency, in which the methods are contingent on class instance variables, or fields; (iii) Method Dependency, where the methods are dependent on other methods within the same class; and (iv) Standalone, wherein the methods operate independently without reliance on fields, other methods, or external libraries.
### Languages
The programming language is Python. The natural language used in the comments and docstrings is English.
## Dataset Structure
```python
from datasets import load_dataset
dataset = load_dataset("FudanSELab/ClassEval")
DatasetDict({
test: Dataset({
features: ['task_id', 'skeleton', 'test', 'solution_code', 'import_statement', 'class_description', 'methods_info',
'class_name', 'test_classes', 'class_constructor', 'fields'],
num_rows: 100
})
})
```
### Data Fields
The specific data fields for each task are delineated as follows:
* task_id: the unique identifier for each task.
* skeleton: the class skeleton, including all input descriptions in our class-level coding tasks.
* test: all test cases for the whole class.
* solution_code: the ground-truth class-level code for each task.
More fine-grained class-level information from the class skeleton, including:
* import_statement: the import statements for each task.
* class_name: the name of the class.
* class_description: a concise description of the purpose and functionality of the class.
* class_constructor: the whole constructor of the class.
* fields: the fields defined in the class_constructor.
Detailed information for each method in the "methods_info" field, including:
* method_name: the method signature.
* method_input: the method contract design, including all input descriptions in the method.
* test_code: the test cases for the method.
* solution_code: the ground-truth method-level code.
* dependencies: the dependency information of the method.
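To illustrate how the nested fields fit together, here is a sketch that groups a task's methods by whether they are standalone (the sample below is synthetic and its `dependencies` layout is an assumption for illustration, not the dataset's exact schema):

```python
from collections import defaultdict

# Synthetic task mirroring the field layout described above.
task = {
    "task_id": "ClassEval_0",
    "class_name": "ExampleClass",
    "methods_info": [
        {"method_name": "def load(self, path):",
         "dependencies": {"Standalone": False, "lib_dependencies": ["os"]}},
        {"method_name": "def reset(self):",
         "dependencies": {"Standalone": True}},
    ],
}

# Bucket each method by its dependency kind.
by_kind = defaultdict(list)
for method in task["methods_info"]:
    kind = "Standalone" if method["dependencies"].get("Standalone") else "Dependent"
    by_kind[kind].append(method["method_name"])

print(dict(by_kind))
```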
### Data Splits
The dataset only consists of a test split with 100 samples.
## Dataset Creation
### Source Data
Manually-crafted
## Additional Information
### Licensing Information
This repository is under the [MIT](https://github.com/FudanSELab/ClassEval/blob/master/LICENSE) license, but the data is distributed under the [CC BY-NC 4.0](https://creativecommons.org/licenses/by-nc/4.0/) license.
### Citation Information
```
@misc{du2023classeval,
title={ClassEval: A Manually-Crafted Benchmark for Evaluating LLMs on Class-level Code Generation},
author={Xueying Du and Mingwei Liu and Kaixin Wang and Hanlin Wang and Junwei Liu and Yixuan Chen and Jiayi Feng and Chaofeng Sha and Xin Peng and Yiling Lou},
year={2023},
eprint={2308.01861},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
### Contributions
Xueying Du xueyingdu21@m.fudan.edu.cn
Mingwei Liu liumingwei@fudan.edu.cn
Kaixin Wang kxwang23@m.fudan.edu.cn
Hanlin Wang wanghanlin23@m.fudan.edu.cn
Junwei Liu jwliu22@m.fudan.edu.cn
Yixuan Chen 23212010005@m.fudan.edu.cn
Jiayi Feng 23210240148@m.fudan.edu.cn
Chaofeng Sha cfsha@fudan.edu.cn
Xin Peng pengxin@fudan.edu.cn
Yiling Lou yilinglou@fudan.edu.cn
| [
-0.5375862121582031,
-0.36726224422454834,
0.10989781469106674,
0.27379506826400757,
0.1287257820367813,
0.022283103317022324,
-0.18025729060173035,
-0.34486857056617737,
-0.2894355058670044,
0.1389322429895401,
-0.4661102294921875,
-0.7934409976005554,
-0.21042919158935547,
0.030809622257... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Equall/mini_cuad | Equall | 2023-11-13T20:06:46Z | 142 | 0 | null | [
"legal",
"region:us"
] | 2023-11-13T20:06:46Z | 2023-11-13T19:46:45.000Z | 2023-11-13T19:46:45 | ---
tags:
- legal
pretty_name: Mini CUAD
---
Sample from cuad | [
-0.21482236683368683,
-0.8684799671173096,
0.33213287591934204,
0.2548478841781616,
0.15836390852928162,
0.5652797818183899,
0.36759063601493835,
-0.06562888622283936,
0.8827913403511047,
0.8829211592674255,
-0.228546142578125,
-0.4727451801300049,
-0.390796035528183,
-0.3865293264389038,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Norod78/microsoft-fluentui-emoji-768 | Norod78 | 2023-07-16T12:13:07Z | 141 | 6 | null | [
"task_categories:text-to-image",
"size_categories:n<10K",
"language:en",
"license:mit",
"emoji",
"fluentui",
"region:us"
] | 2023-07-16T12:13:07Z | 2023-01-01T09:35:07.000Z | 2023-01-01T09:35:07 | ---
language: en
license: mit
size_categories:
- n<10K
task_categories:
- text-to-image
pretty_name: Microsoft FluentUI Emoji 768x768
dataset_info:
features:
- name: text
dtype: string
- name: image
dtype: image
splits:
- name: train
num_bytes: 679617796.94
num_examples: 7564
download_size: 704564297
dataset_size: 679617796.94
tags:
- emoji
- fluentui
---
# Dataset Card for "microsoft-fluentui-emoji-768"
[SVGs and their file names were converted to images and text from Microsoft's fluentui-emoji repo](https://github.com/microsoft/fluentui-emoji) | [
-0.2475263476371765,
-0.22251302003860474,
0.07567678391933441,
-0.012750961817800999,
-0.6084526777267456,
0.1548270583152771,
-0.15096910297870636,
-0.24895471334457397,
0.5003206729888916,
0.7714667916297913,
-0.7677912712097168,
-0.7425941824913025,
-0.4249874949455261,
0.3194983005523... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
ehartford/WizardLM_evol_instruct_V2_196k_unfiltered_merged_split | ehartford | 2023-06-17T21:33:36Z | 141 | 22 | null | [
"region:us"
] | 2023-06-17T21:33:36Z | 2023-06-17T18:55:18.000Z | 2023-06-17T18:55:18 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622263669967651,
0.43461522459983826,
-0.52829909324646,
0.7012971639633179,
0.7915719747543335,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104475975036621,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
LeoLM/wikitext-en-de | LeoLM | 2023-09-28T14:04:12Z | 141 | 1 | null | [
"size_categories:1K<n<10K",
"language:de",
"language:en",
"license:cc-by-3.0",
"arxiv:1609.07843",
"region:us"
] | 2023-09-28T14:04:12Z | 2023-09-28T13:39:48.000Z | 2023-09-28T13:39:48 | ---
license: cc-by-3.0
configs:
- config_name: exzellent_de
data_files: wiki_de_exzellent.parquet
- config_name: featured_en
data_files: wiki_en_featured.parquet
- config_name: exzellent_de_small
data_files: wiki_de_exzellent_small.parquet
- config_name: featured_en_small
data_files: wiki_en_featured_small.parquet
language:
- de
- en
size_categories:
- 1K<n<10K
---
# German+English Wikitext
Wikitext_en_de is a replication of the `wikitext` dataset following the work by [Merity et al. (2016)](https://arxiv.org/abs/1609.07843).
It contains (mostly) all articles that Wikipedia classifies as ["exzellent"](https://de.wikipedia.org/wiki/Wikipedia:Exzellente_Artikel) or ["featured"](https://en.wikipedia.org/wiki/Wikipedia:Featured_articles) and can be used for example for perplexity evaluation.
This dataset was created by first scraping the names of the articles belonging to these categories from Wikipedia. Afterwards, we took a recent Wikipedia
dump ("20230901.de" from [`graelo/wikipedia`](https://huggingface.co/datasets/graelo/wikipedia)) and filtered the articles to only include those on either list.
| Config Name | Num Documents |
|-------------|--------------|
| exzellent_de | 2822 |
| featured_en | 6356 |
| exzellent_de_small | 1024 |
| featured_en_small | 1024 |
The code for creating the datasets is available in this repository ("wikitext_de.py", "wikitext_en.py").
Be aware that this downloads a whole Wikipedia dump, which might take a while depending on your connection. | [
-0.8161483407020569,
-0.6052670478820801,
0.055482421070337296,
0.1291750967502594,
-0.3123522698879242,
-0.12100870162248611,
0.01569223590195179,
-0.5465607643127441,
0.7731102108955383,
0.3735375702381134,
-0.6849552989006042,
-0.3708285987377167,
-0.40467509627342224,
0.408599615097045... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
SinKove/synthetic_mammography_csaw | SinKove | 2023-10-11T21:04:10Z | 141 | 16 | null | [
"task_categories:image-classification",
"size_categories:10K<n<100K",
"license:openrail",
"medical",
"arxiv:2112.01330",
"arxiv:2307.15208",
"doi:10.57967/hf/1254",
"region:us"
] | 2023-10-11T21:04:10Z | 2023-10-11T18:50:12.000Z | 2023-10-11T18:50:12 | ---
task_categories:
- image-classification
tags:
- medical
pretty_name: C
size_categories:
- 10K<n<100K
license: openrail
---
# Dataset Card for Synthetic CSAW 100k Mammograms
## Dataset Description
This is a synthetic mammogram dataset created with the latent diffusion model from the *Generative AI for Medical Imaging: extending the MONAI Framework* paper.
The generative model was trained on the [CSAW-M dataset](https://arxiv.org/abs/2112.01330).
- **Paper:** https://arxiv.org/abs/2307.15208
- **Point of Contact:** walter.diaz_sanz@kcl.ac.uk
### Dataset Summary
### Supported Tasks
Classification of the level of cancer masking in mammograms.
The dataset contains 100k synthetic mammograms with 3 labels:
- "Low masking level" (score <= 2),
- "Medium masking level" (2 < score <= 6),
- "High masking level" (score > 6).
## Dataset Structure
- Images
- CSAW-M Labels
### Data Splits
We did not define data splits.
## Dataset Creation
We generated the synthetic data samples using the diffusion model finetuned on the [CSAW-M dataset](https://arxiv.org/abs/2112.01330).
### Personal and Sensitive Information
Following GDPR "Personal data is any information that relates to an identified or identifiable living individual."
We make sure that there is no "personal data" (re-identifiable information) by filtering with a deep learning model trained to identify patients.
## Considerations for Using the Data
### Social Impact of Dataset
We hope that this dataset can be used to enhance the training of AI models for cancer masking classification.
### Discussion of Biases
There are biases towards specific pathologies.
## Additional Information
### Dataset Curators
### Licensing Information
This dataset is released under the [Open & Responsible AI license ("OpenRAIL")](https://huggingface.co/blog/open_rail)
### Citation Information
Pinaya, W. H., Graham, M. S., Kerfoot, E., Tudosiu, P. D., Dafflon, J., Fernandez, V., ... & Cardoso, M. J. (2023). Generative ai for medical imaging: extending the monai framework. arXiv preprint arXiv:2307.15208.
https://arxiv.org/abs/2307.15208
| [
-0.3051287829875946,
-0.5539669394493103,
0.5946611762046814,
-0.04529089853167534,
-0.262463241815567,
0.11162332445383072,
0.43869128823280334,
-0.2756131887435913,
0.31587284803390503,
0.6625812649726868,
-0.868651807308197,
-0.8791863918304443,
-0.7652053833007812,
-0.17000190913677216... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
bri25yu-temp/playbook | bri25yu-temp | 2023-11-28T20:08:02Z | 141 | 0 | null | [
"region:us"
] | 2023-11-28T20:08:02Z | 2023-11-15T19:16:23.000Z | 2023-11-15T19:16:23 | ---
dataset_info:
- config_name: route
features:
- name: '@type'
dtype: string
- name: name
dtype: string
- name: targetStep
dtype: string
- name: sourceStep
dtype: string
- name: label
dtype: string
- name: isExecuted
dtype: bool
- name: group
dtype: 'null'
- name: uuid
dtype: string
- name: workflow_uuid
dtype: string
splits:
- name: train
num_bytes: 554945
num_examples: 2048
download_size: 0
dataset_size: 554945
- config_name: step
features:
- name: '@type'
dtype: string
- name: name
dtype: string
- name: description
dtype: string
- name: arguments
dtype: string
- name: status
dtype: 'null'
- name: top
dtype: string
- name: left
dtype: string
- name: stepType
dtype: string
- name: group
dtype: 'null'
- name: uuid
dtype: string
- name: stepTypeName
dtype: string
- name: workflow_uuid
dtype: string
splits:
- name: train
num_bytes: 3214416
num_examples: 2614
download_size: 823908
dataset_size: 3214416
- config_name: step_type_eval
features:
- name: ID
dtype: int64
- name: WID
dtype: int64
- name: RID
dtype: int64
- name: PID
dtype: string
- name: Function
dtype: string
- name: Difficulty
dtype: string
- name: Metric
dtype: string
- name: Query
dtype: string
- name: Reference
dtype: string
- name: Context
dtype: string
splits:
- name: train
num_bytes: 33556
num_examples: 200
download_size: 15601
dataset_size: 33556
- config_name: structure_generation_eval
features:
- name: ID
dtype: string
- name: WID
dtype: string
- name: RID
dtype: string
- name: PID
dtype: string
- name: Function
dtype: string
- name: Difficulty
dtype: string
- name: Metric
dtype: string
- name: Query
dtype: string
- name: Reference
dtype: string
- name: Context
dtype: string
splits:
- name: train
num_bytes: 132872
num_examples: 280
download_size: 21623
dataset_size: 132872
- config_name: workflow
features:
- name: '@type'
dtype: string
- name: triggerLimit
dtype: 'null'
- name: name
dtype: string
- name: aliasName
dtype: 'null'
- name: tag
dtype: string
- name: description
dtype: string
- name: isActive
dtype: bool
- name: debug
dtype: bool
- name: singleRecordExecution
dtype: bool
- name: remoteExecutableFlag
dtype: bool
- name: parameters
sequence: string
- name: synchronous
dtype: bool
- name: lastModifyDate
dtype: int64
- name: collection
dtype: string
- name: versions
sequence: 'null'
- name: triggerStep
dtype: string
- name: steps
list:
- name: '@type'
dtype: string
- name: arguments
dtype: string
- name: description
dtype: string
- name: group
dtype: 'null'
- name: left
dtype: string
- name: name
dtype: string
- name: status
dtype: 'null'
- name: stepType
dtype: string
- name: stepTypeName
dtype: string
- name: top
dtype: string
- name: uuid
dtype: string
- name: routes
list:
- name: '@type'
dtype: string
- name: group
dtype: 'null'
- name: isExecuted
dtype: bool
- name: label
dtype: string
- name: name
dtype: string
- name: sourceStep
dtype: string
- name: targetStep
dtype: string
- name: uuid
dtype: string
- name: groups
sequence: 'null'
- name: priority
dtype: string
- name: uuid
dtype: string
- name: id
dtype: int64
- name: owners
sequence: 'null'
- name: isPrivate
dtype: bool
- name: deletedAt
dtype: 'null'
- name: importedBy
list:
- name: apiName
dtype: string
- name: name
dtype: string
- name: version
dtype: string
- name: recordTags
sequence: string
splits:
- name: train
num_bytes: 3861263
num_examples: 621
download_size: 1151037
dataset_size: 3861263
configs:
- config_name: route
data_files:
- split: train
path: route/train-*
- config_name: step
data_files:
- split: train
path: step/train-*
- config_name: step_type_eval
data_files:
- split: train
path: step_type_eval/train-*
- config_name: structure_generation_eval
data_files:
- split: train
path: structure_generation_eval/train-*
- config_name: workflow
data_files:
- split: train
path: workflow/train-*
---
# Dataset Card for "playbook"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.40507227182388306,
-0.16333229839801788,
0.07650811225175858,
0.19845935702323914,
-0.014376120641827583,
0.00978669710457325,
0.2649531662464142,
-0.18156245350837708,
0.7049799561500549,
0.688440203666687,
-0.9303510189056396,
-0.6944013833999634,
-0.4824659824371338,
-0.5630591511726... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Vipitis/Shadertoys | Vipitis | 2023-06-26T19:04:58Z | 140 | 5 | null | [
"task_categories:text-generation",
"task_categories:text-to-image",
"annotations_creators:no-annotation",
"language_creators:machine-generated",
"size_categories:10K<n<100K",
"language:en",
"language:code",
"license:cc-by-nc-sa-3.0",
"code",
"region:us"
] | 2023-06-26T19:04:58Z | 2022-07-24T15:08:41.000Z | 2022-07-24T15:08:41 | ---
annotations_creators:
- no-annotation
language:
- en
- code
language_creators:
- machine-generated
license:
- cc-by-nc-sa-3.0
multilinguality: []
pretty_name: Shadertoys
size_categories:
- 10K<n<100K
source_datasets: []
tags:
- code
task_categories:
- text-generation
- text-to-image
task_ids: []
dataset_info:
features:
- name: num_passes
dtype: int64
- name: has_inputs
dtype: bool
- name: name
dtype: string
- name: type
dtype: string
- name: code
dtype: string
- name: title
dtype: string
- name: description
dtype: string
- name: tags
sequence: string
- name: author
dtype: string
- name: license
dtype: string
- name: source
dtype: string
splits:
- name: train
num_bytes: 162960894
num_examples: 37841
- name: test
num_bytes: 26450429
num_examples: 6617
download_size: 86294414
dataset_size: 189411323
---
# Dataset Card for Shadertoys
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Source Data](#source-data)
- [Licensing Information](#licensing-information)
## Dataset Description
- **Repository:** https://github.com/Vipitis/project (private placeholder)
### Dataset Summary
The Shadertoys dataset contains over 44k renderpasses collected via the Shadertoy.com API. Some shader programs contain multiple renderpasses.
To browse a subset of this dataset, take a look at the [ShaderEval](https://huggingface.co/spaces/Vipitis/ShaderCoder) space. A more fine-grained variant of this dataset is [Shadertoys-fine](https://huggingface.co/datasets/Vipitis/Shadertoys-fine).
### Supported Tasks and Leaderboards
`text-generation`: the dataset can be used to train generative language models for code-completion tasks.
`ShaderEval`: [task1](https://huggingface.co/spaces/Vipitis/ShaderEval) of ShaderEval uses a dataset derived from Shadertoys to test return completion with autoregressive language models.
### Languages
- English (title, description, tags, comments)
- Shadercode (**programming** language), a subset of GLSL specific to Shadertoy.com
## Dataset Structure
### Data Instances
A data point consists of the whole shadercode, some information from the API, as well as additional metadata.
```
{
'num_passes': 1,
'has_inputs': False,
'name': 'Image',
'type': 'image',
'code': '<full code>',
'title': '<title of the shader>',
'description': '<description of the shader>',
'tags': ['tag1','tag2','tag3', ... ],
'license': 'unknown',
'author': '<username>',
'source': 'https://shadertoy.com/view/<shaderID>'
}
```
### Data Fields
- 'num_passes': number of renderpasses the parent shader program has
- 'has_inputs': whether any inputs were used (textures, audio streams, etc.)
- 'name': name of the renderpass, usually Image, Buffer A, Common, etc.
- 'type': type of the renderpass; one of `{'buffer', 'common', 'cubemap', 'image', 'sound'}`
- 'code': the raw code (including comments) of the whole renderpass
- 'title': name of the shader
- 'description': description given for the shader
- 'tags': list of tags assigned to the shader (by its creator); there are more than 10000 unique tags
- 'license': currently in development
- 'author': username of the shader author
- 'source': URL to the shader (not to the specific renderpass)
### Data Splits
Currently available (shuffled):
- train (85.0%)
- test (15.0%)
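The shuffled split can be reproduced on any list of records along these lines (a sketch with an arbitrary seed, not the exact procedure used for this dataset):

```python
import random

def split_train_test(records, test_fraction=0.15, seed=0):
    """Shuffle records deterministically, then slice off a test set."""
    shuffled = list(records)
    random.Random(seed).shuffle(shuffled)
    n_test = round(len(shuffled) * test_fraction)
    return shuffled[n_test:], shuffled[:n_test]

train, test = split_train_test(range(100))
print(len(train), len(test))  # 85 15
```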
## Dataset Creation
Data retrieved starting 2022-07-20
### Source Data
#### Initial Data Collection and Normalization
All data was collected via the [Shadertoy.com API](https://www.shadertoy.com/howto#q2); the collection script iterates over the items in 'renderpass' while adding some of the fields from 'info'.
The code to generate these datasets should be published on the GitHub repository in the near future.
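Until that script is published, the flattening described above can be sketched roughly like this (the JSON field names 'info', 'renderpass', 'username', and 'inputs' are assumptions based on the dataset fields above, not a verified API schema):

```python
def flatten_shader(shader_json, source_url):
    """Turn one API shader object into one record per renderpass (assumed shape)."""
    info = shader_json["info"]
    passes = shader_json["renderpass"]
    records = []
    for rp in passes:
        records.append({
            "num_passes": len(passes),
            "has_inputs": any(len(p.get("inputs", [])) > 0 for p in passes),
            "name": rp["name"],
            "type": rp["type"],
            "code": rp["code"],
            "title": info["name"],
            "description": info["description"],
            "tags": info["tags"],
            "author": info["username"],
            "source": source_url,
        })
    return records

# Hypothetical single-pass shader object for illustration only.
example = {
    "info": {"name": "Example", "description": "demo", "tags": ["2d"], "username": "someone"},
    "renderpass": [{"name": "Image", "type": "image", "code": "void mainImage(){}", "inputs": []}],
}
print(flatten_shader(example, "https://shadertoy.com/view/XXXX")[0]["num_passes"])  # 1
```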
#### Who are the source language producers?
Shadertoy.com contributors who publish their shaders as 'public+API'
## Licensing Information
The default [license for each shader](https://www.shadertoy.com/terms) is CC BY-NC-SA 3.0. However, some shaders might have a different license attached.
The dataset does not currently filter by license but provides a license tag where one is easily recognizable by naive means.
Please check the first comment of each shader program yourself so as not to violate any copyright in downstream use. The main license requires share-alike and attribution.
The author of every record can be found in the 'author' column, but this might not cover further attribution within the code itself or the parents of forked shaders. | [
-0.449830561876297,
-0.2824936807155609,
0.22101765871047974,
0.4437125623226166,
-0.07511206716299057,
0.10667301714420319,
-0.01972327195107937,
-0.6077364087104797,
0.11202492564916611,
0.6055679321289062,
-0.8335103392601013,
-0.8808397650718689,
-0.17642179131507874,
0.003600375959649... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
hezarai/sentiment-dksf | hezarai | 2023-09-02T10:33:35Z | 140 | 0 | null | [
"task_categories:text-classification",
"language:fa",
"region:us"
] | 2023-09-02T10:33:35Z | 2023-02-24T10:39:43.000Z | 2023-02-24T10:39:43 | ---
task_categories:
- text-classification
language:
- fa
pretty_name: Digikala/SnappFood comments sentiment analysis
---
The Sentiment DKSF (Digikala/Snappfood comments) is a dataset for sentiment analysis. | [
-0.5499428510665894,
-0.3381492495536804,
-0.07468611747026443,
0.5917017459869385,
-0.7600679993629456,
0.5531795620918274,
0.409615695476532,
0.3169463276863098,
0.7921077609062195,
0.40768182277679443,
-1.3479405641555786,
-0.5482375621795654,
-0.6778656244277954,
0.3335304260253906,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
multimodalart/facesyntheticsspigacaptioned | multimodalart | 2023-03-23T14:56:28Z | 140 | 15 | null | [
"region:us"
] | 2023-03-23T14:56:28Z | 2023-03-21T02:37:14.000Z | 2023-03-21T02:37:14 | ---
dataset_info:
features:
- name: image
dtype: image
- name: image_seg
dtype: image
- name: landmarks
dtype: string
- name: spiga
sequence:
sequence: float64
- name: spiga_seg
dtype: image
- name: image_caption
dtype: string
splits:
- name: train
num_bytes: 31087489990.0
num_examples: 100000
download_size: 31011261945
dataset_size: 31087489990.0
---
# Dataset Card for "face_synthetics_spiga_captioned"
This is a copy of the [Microsoft FaceSynthetics dataset with SPIGA-calculated landmark annotations](https://huggingface.co/datasets/pcuenq/face_synthetics_spiga), and additional BLIP-generated captions.
For a copy of the original FaceSynthetics dataset with no extra annotations, please refer to [pcuenq/face_synthetics](https://huggingface.co/datasets/pcuenq/face_synthetics).
Here is the code for parsing the dataset and generating the BLIP captions:
```py
from datasets import load_dataset
from transformers import pipeline
dataset_name = "pcuenq/face_synthetics_spiga"
faces = load_dataset(dataset_name)
faces = faces["train"]
captioner = pipeline("image-to-text",model="Salesforce/blip-image-captioning-large", device=0)
def caption_image_data(example):
image = example["image"]
image_caption = captioner(image)[0]['generated_text']
example['image_caption'] = image_caption
return example
faces_proc = faces.map(caption_image_data)
faces_proc.push_to_hub("multimodalart/face_synthetics_spiga_captioned")
```
| [
-0.2614768147468567,
-0.36114218831062317,
0.008773736655712128,
0.7264969944953918,
-0.25658270716667175,
0.04713446646928787,
-0.0715939998626709,
-0.22074110805988312,
0.45496025681495667,
0.6955156922340393,
-1.000292420387268,
-0.3345535695552826,
-0.4884202480316162,
0.24516090750694... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Isotonic/human_assistant_conversation_deduped | Isotonic | 2023-07-05T12:35:56Z | 140 | 4 | null | [
"task_categories:text-generation",
"task_categories:conversational",
"size_categories:100K<n<1M",
"language:en",
"language:es",
"language:zh",
"license:afl-3.0",
"region:us"
] | 2023-07-05T12:35:56Z | 2023-04-11T06:16:00.000Z | 2023-04-11T06:16:00 | ---
license: afl-3.0
dataset_info:
features:
- name: prompt
dtype: string
- name: response
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 1069951715.5157907
num_examples: 586784
- name: test
num_bytes: 133745787.85612378
num_examples: 73349
- name: validation
num_bytes: 133743964.43947384
num_examples: 73348
download_size: 701202899
dataset_size: 1337441467.8113883
task_categories:
- text-generation
- conversational
language:
- en
- es
- zh
size_categories:
- 100K<n<1M
---
# Deduplicated version of Isotonic/human_assistant_conversation
- Deduplicated with a maximum Jaccard similarity of 0.75 | [
-0.38178056478500366,
-0.7313324213027954,
0.1906970888376236,
0.33616942167282104,
-0.5000154972076416,
-0.31995341181755066,
-0.3798592984676361,
-0.5777686238288879,
0.7407305240631104,
0.729888916015625,
-0.47629255056381226,
-0.592673659324646,
-0.2269708812236786,
0.36616694927215576... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
OVAWARE/Minecrafter | OVAWARE | 2023-11-14T22:54:09Z | 140 | 1 | null | [
"region:us"
] | 2023-11-14T22:54:09Z | 2023-05-31T03:53:47.000Z | 2023-05-31T03:53:47 | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: completion
dtype: string
splits:
- name: train
num_bytes: 64316640
num_examples: 20556
download_size: 8604268
dataset_size: 64316640
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "Minecrafter"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.6791496276855469,
-0.3951505720615387,
0.12044677883386612,
0.12061960250139236,
-0.01220613531768322,
0.048905301839113235,
0.26333704590797424,
-0.14275948703289032,
0.5587958097457886,
0.4342688322067261,
-0.9291241765022278,
-0.5769961476325989,
-0.573546826839447,
-0.26718223094940... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
lytang/MeetingBank-transcript | lytang | 2023-07-17T21:05:12Z | 140 | 0 | null | [
"task_categories:summarization",
"license:cc-by-nc-sa-4.0",
"arxiv:2305.17529",
"region:us"
] | 2023-07-17T21:05:12Z | 2023-07-15T18:00:10.000Z | 2023-07-15T18:00:10 | ---
license: cc-by-nc-sa-4.0
task_categories:
- summarization
---
This dataset consists of transcripts from the [MeetingBank dataset](https://meetingbank.github.io/).
**Overview**
MeetingBank is a benchmark dataset created from the city councils of 6 major U.S. cities to supplement existing datasets. It contains 1,366 meetings with over 3,579 hours of video, as well as transcripts, PDF documents of meeting minutes, agendas, and other metadata. On average, a council meeting is 2.6 hours long and its transcript contains over 28k tokens, making it a valuable testbed for meeting summarizers and for extracting structure from meeting videos. The dataset contains 6,892 segment-level summarization instances for training and evaluation.
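As a quick sanity check, the per-meeting average follows directly from the quoted totals:

```python
# Average meeting length from the totals quoted above.
total_meetings = 1366
total_hours = 3579
print(f"{total_hours / total_meetings:.1f} hours per meeting")  # 2.6 hours per meeting
```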
**Acknowledgement**
Please cite the following paper in work that makes use of this dataset:
[MeetingBank: A Benchmark Dataset for Meeting Summarization](https://arxiv.org/abs/2305.17529) \
Yebowen Hu, Tim Ganter, Hanieh Deilamsalehy, Franck Dernoncourt, Hassan Foroosh, Fei Liu \
In main conference of Association for Computational Linguistics (ACL’23), Toronto, Canada.
**Bibtex**
```
@inproceedings{hu-etal-2023-meetingbank,
title = "MeetingBank: A Benchmark Dataset for Meeting Summarization",
author = "Yebowen Hu and Tim Ganter and Hanieh Deilamsalehy and Franck Dernoncourt and Hassan Foroosh and Fei Liu",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (ACL)",
month = July,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
}
```
**Resources**
The MeetingBank dataset will be hosted on Zenodo. The audio files of each meeting will be hosted individually on Hugging Face. The resources include meeting audio, transcripts, the main MeetingBank JSON file, summaries from 6 systems, and human annotations.
**Summary, Segments Transcripts and VideoList:** [zenodo](https://zenodo.org/record/7989108)
**Meeting Audios:** [HuggingFace](https://huggingface.co/datasets/huuuyeah/MeetingBank_Audio)
**Meeting Transcripts:** [HuggingFace](https://huggingface.co/datasets/lytang/MeetingBank-transcript)
Some scripts can be found in github repo [MeetingBank_Utils](https://github.com/YebowenHu/MeetingBank-utils) | [
-0.47949159145355225,
-0.35176804661750793,
0.3210791349411011,
0.12863145768642426,
-0.28138959407806396,
0.035978805273771286,
-0.5419567823410034,
-0.6293753385543823,
0.44763901829719543,
0.36520618200302124,
-0.6591885089874268,
-0.5872882604598999,
-0.3306874632835388,
0.257144063711... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
bri25yu-temp/cve | bri25yu-temp | 2023-11-09T01:27:18Z | 140 | 0 | null | [
"region:us"
] | 2023-11-09T01:27:18Z | 2023-10-23T16:10:43.000Z | 2023-10-23T16:10:43 | ---
dataset_info:
- config_name: cve_search_eval
features:
- name: function_call
dtype: string
- name: reference
sequence: string
- name: count
dtype: int64
- name: results
sequence: string
- name: results_count
dtype: int64
- name: correct
dtype: bool
splits:
- name: train
num_bytes: 5294673
num_examples: 11
download_size: 1905758
dataset_size: 5294673
- config_name: function_calling_retrieval
features:
- name: completion
dtype: string
- name: query
dtype: string
splits:
- name: train
num_bytes: 4395
num_examples: 31
download_size: 0
dataset_size: 4395
- config_name: metadata
features:
- name: Affected products
sequence:
sequence: string
- name: Assigner
dtype: string
- name: CVSS v2 ac insuf info
dtype: bool
- name: CVSS v2 access complexity
dtype: string
- name: CVSS v2 access vector
dtype: string
- name: CVSS v2 authentication
dtype: string
- name: CVSS v2 availability impact
dtype: string
- name: CVSS v2 base score
dtype: float64
- name: CVSS v2 confidentiality impact
dtype: string
- name: CVSS v2 exploitability score
dtype: float64
- name: CVSS v2 impact score
dtype: float64
- name: CVSS v2 integrity impact
dtype: string
- name: CVSS v2 obtain all privilege
dtype: bool
- name: CVSS v2 obtain other privilege
dtype: bool
- name: CVSS v2 obtain user privilege
dtype: bool
- name: CVSS v2 severity
dtype: string
- name: CVSS v2 user interaction required
dtype: bool
- name: CVSS v2 vector string
dtype: string
- name: CVSS v2 version
dtype: string
- name: CVSS v3 attack complexity
dtype: string
- name: CVSS v3 attack vector
dtype: string
- name: CVSS v3 availability impact
dtype: string
- name: CVSS v3 base score
dtype: float64
- name: CVSS v3 base severity
dtype: string
- name: CVSS v3 confidentiality impact
dtype: string
- name: CVSS v3 exploitability score
dtype: float64
- name: CVSS v3 impact score
dtype: float64
- name: CVSS v3 integrity impact
dtype: string
- name: CVSS v3 privileges required
dtype: string
- name: CVSS v3 scope
dtype: string
- name: CVSS v3 user interaction
dtype: string
- name: CVSS v3 vector string
dtype: string
- name: CVSS v3 version
dtype: string
- name: Description
dtype: string
- name: Id
dtype: string
- name: Last modified date
dtype: string
- name: Problem type
struct:
- name: problemtype_data
list:
- name: description
list:
- name: lang
dtype: string
- name: value
dtype: string
- name: Published date
dtype: string
- name: References
struct:
- name: reference_data
list:
- name: name
dtype: string
- name: refsource
dtype: string
- name: tags
sequence: string
- name: url
dtype: string
splits:
- name: train
num_bytes: 280126465
num_examples: 230171
download_size: 60372673
dataset_size: 280126465
- config_name: metadata_with_references
features:
- name: CVSS v2 severity
dtype: string
- name: CVSS v3 base severity
dtype: string
- name: Last modified date
dtype: string
- name: Published date
dtype: string
- name: text_to_search
dtype: string
- name: chunks
list:
- name: Reference URL
dtype: string
- name: text
dtype: string
- name: text_to_embed
dtype: string
- name: CVE URL
dtype: string
- name: CVE ID
dtype: string
splits:
- name: train
num_bytes: 11604379994
num_examples: 229429
download_size: 2349591033
dataset_size: 11604379994
- config_name: references_only
features:
- name: url
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 2719849957
num_examples: 279921
download_size: 867942737
dataset_size: 2719849957
configs:
- config_name: cve_search_eval
data_files:
- split: train
path: cve_search_eval/train-*
- config_name: function_calling_retrieval
data_files:
- split: train
path: function_calling_retrieval/train-*
- config_name: metadata
data_files:
- split: train
path: metadata/train-*
- config_name: metadata_with_references
data_files:
- split: train
path: metadata_with_references/train-*
- config_name: references_only
data_files:
- split: train
path: references_only/train-*
---
# Dataset Card for "cve"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.5735284686088562,
-0.06989867985248566,
0.3423108756542206,
0.1746523529291153,
-0.2838270664215088,
-0.16739551723003387,
0.44576647877693176,
-0.23406261205673218,
0.8119505643844604,
0.6542496681213379,
-0.7689447999000549,
-0.9031947255134583,
-0.5651698708534241,
-0.437425941228866... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
domrachev03/toxic_comments_subset | domrachev03 | 2023-11-01T14:15:07Z | 140 | 0 | null | [
"license:mit",
"region:us"
] | 2023-11-01T14:15:07Z | 2023-10-23T16:14:31.000Z | 2023-10-23T16:14:31 | ---
license: mit
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: reference
dtype: string
- name: translation
dtype: string
- name: similarity
dtype: float64
- name: lenght_diff
dtype: float64
- name: ref_tox
dtype: float64
- name: trn_tox
dtype: float64
splits:
- name: train
num_bytes: 20449737.40323276
num_examples: 156516
- name: test
num_bytes: 2272236.596767238
num_examples: 17391
download_size: 17422773
dataset_size: 22721974.0
---
| [
-0.1285335123538971,
-0.1861683875322342,
0.6529128551483154,
0.49436232447624207,
-0.19319400191307068,
0.23607441782951355,
0.36072009801864624,
0.05056373029947281,
0.5793656706809998,
0.7400146722793579,
-0.650810182094574,
-0.23784008622169495,
-0.7102247476577759,
-0.0478255338966846... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
kira/phi_books_32k | kira | 2023-11-07T21:12:10Z | 140 | 0 | null | [
"region:us"
] | 2023-11-07T21:12:10Z | 2023-11-07T21:11:07.000Z | 2023-11-07T21:11:07 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: content
dtype: string
splits:
- name: train
num_bytes: 2662816001.5737844
num_examples: 23626
- name: test
num_bytes: 2704968.426215645
num_examples: 24
download_size: 866928760
dataset_size: 2665520970.0
---
# Dataset Card for "phi_books_32k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.5091490745544434,
0.1482858806848526,
0.020696189254522324,
0.21620480716228485,
-0.43987661600112915,
-0.19678297638893127,
0.47354933619499207,
-0.15494079887866974,
0.6058918833732605,
0.6242293119430542,
-0.8268873691558838,
-0.8379215002059937,
-0.4674820005893707,
-0.2105596959590... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
keylazy/ark | keylazy | 2023-11-16T18:40:36Z | 140 | 0 | null | [
"region:us"
] | 2023-11-16T18:40:36Z | 2023-11-10T06:38:56.000Z | 2023-11-10T06:38:56 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: evaluation
path: data/evaluation-*
- split: test
path: data/test-*
- split: train_full
path: data/train_full-*
dataset_info:
features:
- name: text1
dtype: string
- name: text2
dtype: string
splits:
- name: train
num_bytes: 246977207
num_examples: 900000
- name: evaluation
num_bytes: 27414347
num_examples: 100000
- name: test
num_bytes: 27471369
num_examples: 100000
- name: train_full
num_bytes: 274391554
num_examples: 1000000
download_size: 189206059
dataset_size: 576254477
---
# Dataset Card for "ark"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.8236438632011414,
-0.3450222313404083,
0.2207813262939453,
0.06301039457321167,
-0.3988795578479767,
0.10948388278484344,
0.35573315620422363,
-0.486983060836792,
0.7207332253456116,
0.6200013756752014,
-0.8104755878448486,
-0.8805983066558838,
-0.5595632195472717,
-0.03185880556702614,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
erbacher/PDEBench-1D | erbacher | 2023-11-22T16:05:42Z | 140 | 0 | null | [
"region:us"
] | 2023-11-22T16:05:42Z | 2023-11-22T13:13:55.000Z | 2023-11-22T13:13:55 | ---
dataset_info:
- config_name: Advection_Sols_beta0.1
features:
- name: parameters
dtype: string
- name: tensor
sequence:
sequence:
sequence: float32
splits:
- name: train
num_bytes: 2079020000
num_examples: 10000
download_size: 1030317301
dataset_size: 2079020000
- config_name: Advection_Sols_beta0.2
features:
- name: parameters
dtype: string
- name: tensor
sequence:
sequence:
sequence: float32
splits:
- name: train
num_bytes: 2079020000
num_examples: 10000
download_size: 1034054442
dataset_size: 2079020000
- config_name: Advection_Sols_beta0.4
features:
- name: parameters
dtype: string
- name: tensor
sequence:
sequence:
sequence: float32
splits:
- name: train
num_bytes: 2079020000
num_examples: 10000
download_size: 1037220772
dataset_size: 2079020000
- config_name: Advection_Sols_beta0.7
features:
- name: parameters
dtype: string
- name: tensor
sequence:
sequence:
sequence: float32
splits:
- name: train
num_bytes: 2079020000
num_examples: 10000
download_size: 1039496575
dataset_size: 2079020000
- config_name: Advection_Sols_beta1.0
features:
- name: parameters
dtype: string
- name: tensor
sequence:
sequence:
sequence: float32
splits:
- name: train
num_bytes: 2079020000
num_examples: 10000
download_size: 1041009183
dataset_size: 2079020000
- config_name: Advection_Sols_beta2.0
features:
- name: parameters
dtype: string
- name: tensor
sequence:
sequence:
sequence: float32
splits:
- name: train
num_bytes: 2079020000
num_examples: 10000
download_size: 1041263590
dataset_size: 2079020000
- config_name: Advection_Sols_beta4.0
features:
- name: parameters
dtype: string
- name: tensor
sequence:
sequence:
sequence: float32
splits:
- name: train
num_bytes: 2079020000
num_examples: 10000
download_size: 1041302186
dataset_size: 2079020000
- config_name: Advection_Sols_beta7.0
features:
- name: parameters
dtype: string
- name: tensor
sequence:
sequence:
sequence: float32
splits:
- name: train
num_bytes: 2079020000
num_examples: 10000
download_size: 1041314010
dataset_size: 2079020000
- config_name: Burgers_Sols_Nu0.001
features:
- name: parameters
dtype: string
- name: tensor
sequence:
sequence:
sequence: float32
splits:
- name: train
num_bytes: 2079000000
num_examples: 10000
download_size: 1025858949
dataset_size: 2079000000
- config_name: Burgers_Sols_Nu0.002
features:
- name: parameters
dtype: string
- name: tensor
sequence:
sequence:
sequence: float32
splits:
- name: train
num_bytes: 2079000000
num_examples: 10000
download_size: 1032095037
dataset_size: 2079000000
- config_name: Burgers_Sols_Nu0.004
features:
- name: parameters
dtype: string
- name: tensor
sequence:
sequence:
sequence: float32
splits:
- name: train
num_bytes: 2079000000
num_examples: 10000
download_size: 1037155483
dataset_size: 2079000000
- config_name: Burgers_Sols_Nu0.01
features:
- name: parameters
dtype: string
- name: tensor
sequence:
sequence:
sequence: float32
splits:
- name: train
num_bytes: 2078990000
num_examples: 10000
download_size: 1040325750
dataset_size: 2078990000
- config_name: Burgers_Sols_Nu0.02
features:
- name: parameters
dtype: string
- name: tensor
sequence:
sequence:
sequence: float32
splits:
- name: train
num_bytes: 2078990000
num_examples: 10000
download_size: 1040663851
dataset_size: 2078990000
- config_name: Burgers_Sols_Nu0.04
features:
- name: parameters
dtype: string
- name: tensor
sequence:
sequence:
sequence: float32
splits:
- name: train
num_bytes: 2078990000
num_examples: 10000
download_size: 1033200091
dataset_size: 2078990000
- config_name: Burgers_Sols_Nu0.1
features:
- name: parameters
dtype: string
- name: tensor
sequence:
sequence:
sequence: float32
splits:
- name: train
num_bytes: 2078980000
num_examples: 10000
download_size: 993568525
dataset_size: 2078980000
- config_name: Burgers_Sols_Nu0.2
features:
- name: parameters
dtype: string
- name: tensor
sequence:
sequence:
sequence: float32
splits:
- name: train
num_bytes: 2078980000
num_examples: 10000
download_size: 945791995
dataset_size: 2078980000
- config_name: Burgers_Sols_Nu0.4
features:
- name: parameters
dtype: string
- name: tensor
sequence:
sequence:
sequence: float32
splits:
- name: train
num_bytes: 2078980000
num_examples: 10000
download_size: 844969978
dataset_size: 2078980000
- config_name: Burgers_Sols_Nu1.0
features:
- name: parameters
dtype: string
- name: tensor
sequence:
sequence:
sequence: float32
splits:
- name: train
num_bytes: 2078980000
num_examples: 10000
download_size: 570861141
dataset_size: 2078980000
- config_name: Burgers_Sols_Nu2.0
features:
- name: parameters
dtype: string
- name: tensor
sequence:
sequence:
sequence: float32
splits:
- name: train
num_bytes: 2078980000
num_examples: 10000
download_size: 313254875
dataset_size: 2078980000
- config_name: Burgers_Sols_Nu4.0
features:
- name: parameters
dtype: string
- name: tensor
sequence:
sequence:
sequence: float32
splits:
- name: train
num_bytes: 2078980000
num_examples: 10000
download_size: 187697612
dataset_size: 2078980000
- config_name: CFD_Rand_Eta0.01_Zeta0.01_periodic
features:
- name: parameters
dtype: string
- name: tensor
sequence:
sequence:
sequence: float32
splits:
- name: train
num_bytes: 2099620000
num_examples: 10000
download_size: 1576405761
dataset_size: 2099620000
- config_name: CFD_Rand_Eta0.1_Zeta0.1_periodic
features:
- name: parameters
dtype: string
- name: tensor
sequence:
sequence:
sequence: float32
splits:
- name: train
num_bytes: 2099600000
num_examples: 10000
download_size: 946984963
dataset_size: 2099600000
- config_name: CFD_Rand_Eta1.e-8_Zeta1.e-8_periodic
features:
- name: parameters
dtype: string
- name: tensor
sequence:
sequence:
sequence: float32
splits:
- name: train
num_bytes: 2099640000
num_examples: 10000
download_size: 1573309616
dataset_size: 2099640000
- config_name: CFD_Rand_Eta1.e-8_Zeta1.e-8_trans
features:
- name: parameters
dtype: string
- name: tensor
sequence:
sequence:
sequence: float32
splits:
- name: train
num_bytes: 2099610000
num_examples: 10000
download_size: 0
dataset_size: 2099610000
- config_name: ReacDiff_Nu0.5_Rho1.0
features:
- name: parameters
dtype: string
- name: tensor
sequence:
sequence:
sequence: float32
splits:
- name: train
num_bytes: 1055010000
num_examples: 10000
download_size: 103983829
dataset_size: 1055010000
- config_name: ReacDiff_Nu0.5_Rho10.0
features:
- name: parameters
dtype: string
- name: tensor
sequence:
sequence:
sequence: float32
splits:
- name: train
num_bytes: 1055020000
num_examples: 10000
download_size: 124933565
dataset_size: 1055020000
- config_name: ReacDiff_Nu0.5_Rho2.0
features:
- name: parameters
dtype: string
- name: tensor
sequence:
sequence:
sequence: float32
splits:
- name: train
num_bytes: 1055010000
num_examples: 10000
download_size: 193004745
dataset_size: 1055010000
- config_name: ReacDiff_Nu0.5_Rho5.0
features:
- name: parameters
dtype: string
- name: tensor
sequence:
sequence:
sequence: float32
splits:
- name: train
num_bytes: 1055010000
num_examples: 10000
download_size: 146090506
dataset_size: 1055010000
- config_name: ReacDiff_Nu1.0_Rho1.0
features:
- name: parameters
dtype: string
- name: tensor
sequence:
sequence:
sequence: float32
splits:
- name: train
num_bytes: 1055010000
num_examples: 10000
download_size: 217153008
dataset_size: 1055010000
- config_name: ReacDiff_Nu1.0_Rho10.0
features:
- name: parameters
dtype: string
- name: tensor
sequence:
sequence:
sequence: float32
splits:
- name: train
num_bytes: 1055020000
num_examples: 10000
download_size: 113039664
dataset_size: 1055020000
- config_name: ReacDiff_Nu1.0_Rho2.0
features:
- name: parameters
dtype: string
- name: tensor
sequence:
sequence:
sequence: float32
splits:
- name: train
num_bytes: 1055010000
num_examples: 10000
download_size: 139659779
dataset_size: 1055010000
- config_name: ReacDiff_Nu1.0_Rho5.0
features:
- name: parameters
dtype: string
- name: tensor
sequence:
sequence:
sequence: float32
splits:
- name: train
num_bytes: 1055010000
num_examples: 10000
download_size: 121216648
dataset_size: 1055010000
- config_name: ReacDiff_Nu2.0_Rho1.0
features:
- name: parameters
dtype: string
- name: tensor
sequence:
sequence:
sequence: float32
splits:
- name: train
num_bytes: 1055010000
num_examples: 10000
download_size: 57854036
dataset_size: 1055010000
- config_name: ReacDiff_Nu2.0_Rho10.0
features:
- name: parameters
dtype: string
- name: tensor
sequence:
sequence:
sequence: float32
splits:
- name: train
num_bytes: 1055020000
num_examples: 10000
download_size: 73754842
dataset_size: 1055020000
- config_name: ReacDiff_Nu2.0_Rho2.0
features:
- name: parameters
dtype: string
- name: tensor
sequence:
sequence:
sequence: float32
splits:
- name: train
num_bytes: 1055010000
num_examples: 10000
download_size: 122071454
dataset_size: 1055010000
- config_name: ReacDiff_Nu2.0_Rho5.0
features:
- name: parameters
dtype: string
- name: tensor
sequence:
sequence:
sequence: float32
splits:
- name: train
num_bytes: 1055010000
num_examples: 10000
download_size: 74329093
dataset_size: 1055010000
- config_name: ReacDiff_Nu5.0_Rho1.0
features:
- name: parameters
dtype: string
- name: tensor
sequence:
sequence:
sequence: float32
splits:
- name: train
num_bytes: 1055010000
num_examples: 10000
download_size: 158789252
dataset_size: 1055010000
- config_name: ReacDiff_Nu5.0_Rho10.0
features:
- name: parameters
dtype: string
- name: tensor
sequence:
sequence:
sequence: float32
splits:
- name: train
num_bytes: 1055020000
num_examples: 10000
download_size: 55445429
dataset_size: 1055020000
- config_name: ReacDiff_Nu5.0_Rho2.0
features:
- name: parameters
dtype: string
- name: tensor
sequence:
sequence:
sequence: float32
splits:
- name: train
num_bytes: 1055010000
num_examples: 10000
download_size: 58220881
dataset_size: 1055010000
- config_name: ReacDiff_Nu5.0_Rho5.0
features:
- name: parameters
dtype: string
- name: tensor
sequence:
sequence:
sequence: float32
splits:
- name: train
num_bytes: 1055010000
num_examples: 10000
download_size: 57392368
dataset_size: 1055010000
configs:
- config_name: Advection_Sols_beta0.1
data_files:
- split: train
path: Advection_Sols_beta0.1/train-*
- config_name: Advection_Sols_beta0.2
data_files:
- split: train
path: Advection_Sols_beta0.2/train-*
- config_name: Advection_Sols_beta0.4
data_files:
- split: train
path: Advection_Sols_beta0.4/train-*
- config_name: Advection_Sols_beta0.7
data_files:
- split: train
path: Advection_Sols_beta0.7/train-*
- config_name: Advection_Sols_beta1.0
data_files:
- split: train
path: Advection_Sols_beta1.0/train-*
- config_name: Advection_Sols_beta2.0
data_files:
- split: train
path: Advection_Sols_beta2.0/train-*
- config_name: Advection_Sols_beta4.0
data_files:
- split: train
path: Advection_Sols_beta4.0/train-*
- config_name: Advection_Sols_beta7.0
data_files:
- split: train
path: Advection_Sols_beta7.0/train-*
- config_name: Burgers_Sols_Nu0.001
data_files:
- split: train
path: Burgers_Sols_Nu0.001/train-*
- config_name: Burgers_Sols_Nu0.002
data_files:
- split: train
path: Burgers_Sols_Nu0.002/train-*
- config_name: Burgers_Sols_Nu0.004
data_files:
- split: train
path: Burgers_Sols_Nu0.004/train-*
- config_name: Burgers_Sols_Nu0.01
data_files:
- split: train
path: Burgers_Sols_Nu0.01/train-*
- config_name: Burgers_Sols_Nu0.02
data_files:
- split: train
path: Burgers_Sols_Nu0.02/train-*
- config_name: Burgers_Sols_Nu0.04
data_files:
- split: train
path: Burgers_Sols_Nu0.04/train-*
- config_name: Burgers_Sols_Nu0.1
data_files:
- split: train
path: Burgers_Sols_Nu0.1/train-*
- config_name: Burgers_Sols_Nu0.2
data_files:
- split: train
path: Burgers_Sols_Nu0.2/train-*
- config_name: Burgers_Sols_Nu0.4
data_files:
- split: train
path: Burgers_Sols_Nu0.4/train-*
- config_name: Burgers_Sols_Nu1.0
data_files:
- split: train
path: Burgers_Sols_Nu1.0/train-*
- config_name: Burgers_Sols_Nu2.0
data_files:
- split: train
path: Burgers_Sols_Nu2.0/train-*
- config_name: Burgers_Sols_Nu4.0
data_files:
- split: train
path: Burgers_Sols_Nu4.0/train-*
- config_name: CFD_Rand_Eta0.01_Zeta0.01_periodic
data_files:
- split: train
path: CFD_Rand_Eta0.01_Zeta0.01_periodic/train-*
- config_name: CFD_Rand_Eta0.1_Zeta0.1_periodic
data_files:
- split: train
path: CFD_Rand_Eta0.1_Zeta0.1_periodic/train-*
- config_name: CFD_Rand_Eta1.e-8_Zeta1.e-8_periodic
data_files:
- split: train
path: CFD_Rand_Eta1.e-8_Zeta1.e-8_periodic/train-*
- config_name: CFD_Rand_Eta1.e-8_Zeta1.e-8_trans
data_files:
- split: train
path: CFD_Rand_Eta1.e-8_Zeta1.e-8_trans/train-*
- config_name: ReacDiff_Nu0.5_Rho1.0
data_files:
- split: train
path: ReacDiff_Nu0.5_Rho1.0/train-*
- config_name: ReacDiff_Nu0.5_Rho10.0
data_files:
- split: train
path: ReacDiff_Nu0.5_Rho10.0/train-*
- config_name: ReacDiff_Nu0.5_Rho2.0
data_files:
- split: train
path: ReacDiff_Nu0.5_Rho2.0/train-*
- config_name: ReacDiff_Nu0.5_Rho5.0
data_files:
- split: train
path: ReacDiff_Nu0.5_Rho5.0/train-*
- config_name: ReacDiff_Nu1.0_Rho1.0
data_files:
- split: train
path: ReacDiff_Nu1.0_Rho1.0/train-*
- config_name: ReacDiff_Nu1.0_Rho10.0
data_files:
- split: train
path: ReacDiff_Nu1.0_Rho10.0/train-*
- config_name: ReacDiff_Nu1.0_Rho2.0
data_files:
- split: train
path: ReacDiff_Nu1.0_Rho2.0/train-*
- config_name: ReacDiff_Nu1.0_Rho5.0
data_files:
- split: train
path: ReacDiff_Nu1.0_Rho5.0/train-*
- config_name: ReacDiff_Nu2.0_Rho1.0
data_files:
- split: train
path: ReacDiff_Nu2.0_Rho1.0/train-*
- config_name: ReacDiff_Nu2.0_Rho10.0
data_files:
- split: train
path: ReacDiff_Nu2.0_Rho10.0/train-*
- config_name: ReacDiff_Nu2.0_Rho2.0
data_files:
- split: train
path: ReacDiff_Nu2.0_Rho2.0/train-*
- config_name: ReacDiff_Nu2.0_Rho5.0
data_files:
- split: train
path: ReacDiff_Nu2.0_Rho5.0/train-*
- config_name: ReacDiff_Nu5.0_Rho1.0
data_files:
- split: train
path: ReacDiff_Nu5.0_Rho1.0/train-*
- config_name: ReacDiff_Nu5.0_Rho10.0
data_files:
- split: train
path: ReacDiff_Nu5.0_Rho10.0/train-*
- config_name: ReacDiff_Nu5.0_Rho2.0
data_files:
- split: train
path: ReacDiff_Nu5.0_Rho2.0/train-*
- config_name: ReacDiff_Nu5.0_Rho5.0
data_files:
- split: train
path: ReacDiff_Nu5.0_Rho5.0/train-*
---
# Dataset Card for "PDEBench-1D"
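The config names listed above encode the physical parameters of each simulation directly in the name (e.g. `Nu`, `Rho`, `beta`, `Eta`, `Zeta`). As a rough sketch, the helper below (our own illustration, not part of PDEBench or 🤗 Datasets) parses those names back into numeric values:

```python
import re

def parse_config_params(config_name: str) -> dict:
    """Extract numeric PDE parameters from a config name,
    e.g. 'ReacDiff_Nu0.5_Rho2.0' -> {'Nu': 0.5, 'Rho': 2.0}."""
    params = {}
    for token in config_name.split("_"):
        # A parameter token is a run of letters followed by a number
        # (possibly in scientific notation, e.g. 'Eta1.e-8').
        m = re.fullmatch(r"([A-Za-z]+)([0-9][0-9.eE+-]*)", token)
        if m:
            params[m.group(1)] = float(m.group(2))
    return params

print(parse_config_params("CFD_Rand_Eta1.e-8_Zeta1.e-8_periodic"))
```

Non-parameter tokens such as `Sols`, `Rand`, `periodic`, or `trans` contain no digits and are simply skipped.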
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.8869379758834839,
-0.37321895360946655,
0.16300925612449646,
0.6254310607910156,
-0.42181915044784546,
-0.13692647218704224,
0.4213698208332062,
0.025671111419796944,
0.9048948287963867,
0.5798101425170898,
-1.0883909463882446,
-1.0653613805770874,
-0.49601107835769653,
-0.2277786731719... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
jfarray/TFM | jfarray | 2022-02-15T06:27:36Z | 139 | 0 | null | [
"region:us"
] | 2022-02-15T06:27:36Z | 2022-03-02T23:29:22.000Z | 2022-03-02T23:29:22 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622264862060547,
0.43461528420448303,
-0.52829909324646,
0.7012971639633179,
0.7915720343589783,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104477167129517,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
bigbio/bionlp_st_2019_bb | bigbio | 2022-12-22T15:44:04Z | 139 | 1 | null | [
"multilinguality:monolingual",
"language:en",
"license:unknown",
"region:us"
] | 2022-12-22T15:44:04Z | 2022-11-13T22:07:17.000Z | 2022-11-13T22:07:17 |
---
language:
- en
bigbio_language:
- English
license: unknown
multilinguality: monolingual
bigbio_license_shortname: UNKNOWN
pretty_name: BioNLP 2019 BB
homepage: https://sites.google.com/view/bb-2019/dataset
bigbio_pubmed: True
bigbio_public: True
bigbio_tasks:
- NAMED_ENTITY_RECOGNITION
- NAMED_ENTITY_DISAMBIGUATION
- RELATION_EXTRACTION
---
# Dataset Card for BioNLP 2019 BB
## Dataset Description
- **Homepage:** https://sites.google.com/view/bb-2019/dataset
- **Pubmed:** True
- **Public:** True
- **Tasks:** NER,NED,RE
The task focuses on the extraction of the locations and phenotypes of
microorganisms from PubMed abstracts and full-text excerpts, and on the
characterization of these entities with respect to reference knowledge
sources (NCBI taxonomy, OntoBiotope ontology). The task is motivated by
the importance of knowledge of biodiversity for fundamental research
and applications in microbiology.
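To make the NER + normalization setup concrete, here is a toy record sketched in the general shape of the BigBIO kb schema (entity mentions with character offsets, normalized against reference sources such as the NCBI taxonomy). The document text, ids, and `db_id` values are invented for illustration, and the field names are reproduced from the BigBIO conventions from memory rather than from this card:

```python
# Toy example: one sentence with a Microorganism and a Habitat mention,
# each anchored by character offsets and linked to a reference source.
text = "Bacillus subtilis is found in soil."
record = {
    "document_id": "doc-0",  # hypothetical id
    "entities": [
        {
            "type": "Microorganism",
            "text": ["Bacillus subtilis"],
            "offsets": [[0, 17]],
            "normalized": [{"db_name": "NCBI_Taxonomy", "db_id": "1423"}],
        },
        {
            "type": "Habitat",
            "text": ["soil"],
            "offsets": [[30, 34]],
            "normalized": [{"db_name": "OntoBiotope", "db_id": "OBT:000000"}],
        },
    ],
}

# Sanity check: each offset span must reproduce its mention text.
for ent in record["entities"]:
    for mention, (start, end) in zip(ent["text"], ent["offsets"]):
        assert text[start:end] == mention
```

A relation-extraction layer on top of such records would then link entity pairs (e.g. the microorganism to its habitat), which is the RE part of the task.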
## Citation Information
```
@inproceedings{bossy-etal-2019-bacteria,
title = "Bacteria Biotope at {B}io{NLP} Open Shared Tasks 2019",
author = "Bossy, Robert and
Del{\'e}ger, Louise and
Chaix, Estelle and
Ba, Mouhamadou and
N{\'e}dellec, Claire",
booktitle = "Proceedings of The 5th Workshop on BioNLP Open Shared Tasks",
month = nov,
year = "2019",
address = "Hong Kong, China",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/D19-5719",
doi = "10.18653/v1/D19-5719",
pages = "121--131",
abstract = "This paper presents the fourth edition of the Bacteria
Biotope task at BioNLP Open Shared Tasks 2019. The task focuses on
the extraction of the locations and phenotypes of microorganisms
from PubMed abstracts and full-text excerpts, and the characterization
of these entities with respect to reference knowledge sources (NCBI
taxonomy, OntoBiotope ontology). The task is motivated by the importance
of the knowledge on biodiversity for fundamental research and applications
in microbiology. The paper describes the different proposed subtasks, the
corpus characteristics, and the challenge organization. We also provide an
analysis of the results obtained by participants, and inspect the evolution
of the results since the last edition in 2016.",
}
```
| [
-0.3413633406162262,
-0.3951989412307739,
0.4969559609889984,
-0.09792938828468323,
-0.470664381980896,
0.016152843832969666,
-0.23363174498081207,
-0.4613889753818512,
0.7181051969528198,
0.4793647229671478,
-0.3632827699184418,
-0.6775923371315002,
-0.4385107457637787,
0.4888719022274017... | null | null | null | null | null | null | null | null | null | null | null | null | null |