id stringlengths 2 115 | author stringlengths 2 42 ⌀ | last_modified timestamp[us, tz=UTC] | downloads int64 0 8.87M | likes int64 0 3.84k | paperswithcode_id stringlengths 2 45 ⌀ | tags list | lastModified timestamp[us, tz=UTC] | createdAt stringlengths 24 24 | key stringclasses 1 value | created timestamp[us] | card stringlengths 1 1.01M | embedding list | library_name stringclasses 21 values | pipeline_tag stringclasses 27 values | mask_token null | card_data null | widget_data null | model_index null | config null | transformers_info null | spaces null | safetensors null | transformersInfo null | modelId stringlengths 5 111 ⌀ | embeddings list |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
dkagramanyan/horoscopes_ru | dkagramanyan | 2022-11-20T22:05:14Z | 32 | 3 | null | [
"task_categories:text-generation",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"language:ru",
"region:us"
] | 2022-11-20T22:05:14Z | 2022-11-20T20:54:21.000Z | 2022-11-20T20:54:21 | ---
annotations_creators: []
language:
- ru
language_creators: []
license: []
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets: []
tags: []
task_categories:
- text-generation
task_ids: []
dataset_info:
features:
- name: date
dtype: string
- name: sign
dtype: string
- name: text
dtype: string
splits:
- name: test
num_bytes: 6532293
num_examples: 6976
- name: train
num_bytes: 62194608
num_examples: 66501
download_size: 31753326
dataset_size: 68726901
---
Horoscopes from the website Rumbler.ru, covering 2004 to 2020.
73477 records in total.
Train dataset size: 66481
Test dataset size: 6996
Test split: 10% | [
-0.5937657356262207,
-0.061379387974739075,
0.010134014301002026,
0.4417031705379486,
-0.42075687646865845,
-0.2718948423862457,
0.08484561741352081,
-0.5159875154495239,
0.639711320400238,
0.6702277660369873,
-0.6885966062545776,
-0.17997261881828308,
-0.8276188969612122,
0.16269589960575... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Rami/multi-label-class-github-issues-text-classification | Rami | 2022-12-02T01:19:08Z | 32 | 0 | null | [
"region:us"
] | 2022-12-02T01:19:08Z | 2022-11-29T16:32:12.000Z | 2022-11-29T16:32:12 | ---
dataset_info:
features:
- name: title
dtype: string
- name: labels
sequence: string
- name: bodyText
dtype: string
splits:
- name: train
num_bytes: 2713984
num_examples: 1556
- name: valid
num_bytes: 1296582
num_examples: 778
- name: test
num_bytes: 1307650
num_examples: 778
download_size: 2328003
dataset_size: 5318216
---
# Dataset Card for "multi-label-class-github-issues-text-classification"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.512814462184906,
-0.28245463967323303,
0.10461784899234772,
0.26754510402679443,
0.11546884477138519,
0.2973122298717499,
0.041682593524456024,
-0.2747389078140259,
0.6760825514793396,
0.24817021191120148,
-0.5897812247276306,
-0.7132539749145508,
-0.7650556564331055,
-0.102960854768753... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Jzuluaga/atco2_corpus_1h | Jzuluaga | 2022-12-05T11:15:31Z | 32 | 1 | null | [
"task_categories:automatic-speech-recognition",
"multilinguality:monolingual",
"language:en",
"audio",
"automatic-speech-recognition",
"en-atc",
"en",
"noisy-speech-recognition",
"speech-recognition",
"arxiv:2211.04054",
"region:us"
] | 2022-12-05T11:15:31Z | 2022-12-05T10:37:25.000Z | 2022-12-05T10:37:25 | ---
dataset_info:
features:
- name: id
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: text
dtype: string
- name: segment_start_time
dtype: float32
- name: segment_end_time
dtype: float32
- name: duration
dtype: float32
splits:
- name: test
num_bytes: 113872168.0
num_examples: 871
download_size: 113467762
dataset_size: 113872168.0
tags:
- audio
- automatic-speech-recognition
- en-atc
- en
- noisy-speech-recognition
- speech-recognition
task_categories:
- automatic-speech-recognition
language:
- en
multilinguality:
- monolingual
---
# Dataset Card for ATCO2 test set corpus (1hr set)
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages and Other Details](#languages-and-other-details)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** [ATCO2 project homepage](https://www.atco2.org/)
- **Repository:** [ATCO2 corpus](https://github.com/idiap/atco2-corpus)
- **Paper:** [ATCO2 corpus: A Large-Scale Dataset for Research on Automatic Speech Recognition and Natural Language Understanding of Air Traffic Control Communications](https://arxiv.org/abs/2211.04054)
### Dataset Summary
The ATCO2 project aims to develop a unique platform for collecting, organizing and pre-processing air-traffic control (voice communication) data from airspace. This project has received funding from the Clean Sky 2 Joint Undertaking (JU) under grant agreement No 864702. The JU receives support from the European Union’s Horizon 2020 research and innovation programme and the Clean Sky 2 JU members other than the Union.
The project collected real-time voice communication between air-traffic controllers and pilots, available either directly through publicly accessible radio-frequency channels or indirectly from air-navigation service providers (ANSPs). In addition to the voice communication data, contextual information is available in the form of metadata (i.e., surveillance data). The dataset consists of two distinct packages:
- A corpus of 5000+ hours of pseudo-transcribed air-traffic control speech collected across different airports (Sion, Bern, Zurich, etc.) in .wav format for speech recognition. Speaker distribution is 90%/10% between males and females, and the group contains both native and non-native speakers of English.
- A corpus of 4 hours of transcribed air-traffic control speech collected across different airports (Sion, Bern, Zurich, etc.) in .wav format for speech recognition. Speaker distribution is 90%/10% between males and females, and the group contains both native and non-native speakers of English. This corpus has been transcribed with orthographic information in XML format, including speaker noise information, SNR values and more.
- A free sample of the 4-hour transcribed data is available on the [ATCO2 project homepage](https://www.atco2.org/data)
### Supported Tasks and Leaderboards
- `automatic-speech-recognition`. Already adapted/fine-tuned models are available here --> [Wav2Vec 2.0 LARGE model](https://huggingface.co/Jzuluaga/wav2vec2-large-960h-lv60-self-en-atc-uwb-atcc-and-atcosim).
### Languages and Other Details
The text and the recordings are in English. For more information, see Table 3 and Table 4 of the [ATCO2 corpus paper](https://arxiv.org/abs/2211.04054).
## Dataset Structure
### Data Fields
- `id (string)`: a unique recording identifier for each example.
- `audio (audio)`: audio data for the given ID.
- `text (string)`: transcript of the file, already normalized. See these repositories for more details: [w2v2-air-traffic](https://github.com/idiap/w2v2-air-traffic) and [bert-text-diarization-atc](https://github.com/idiap/bert-text-diarization-atc)
- `segment_start_time (float32)`: segment start time (normally 0)
- `segment_end_time (float32)`: segment end time
- `duration (float32)`: duration of the recording, computed as `segment_end_time - segment_start_time` (see the loading sketch below)
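Putting the fields above together, the following is a minimal sketch (not part of the official release) that loads the test split and transcribes a single utterance with the fine-tuned Wav2Vec 2.0 checkpoint linked in the Supported Tasks section; it assumes the standard 🤗 `datasets` and `transformers` APIs.
```python
# Minimal sketch, assuming the standard `datasets`/`transformers` APIs.
# Field names follow the list above; the pipeline call is illustrative only.
from datasets import load_dataset
from transformers import pipeline

atco2 = load_dataset("Jzuluaga/atco2_corpus_1h", split="test")

asr = pipeline(
    "automatic-speech-recognition",
    model="Jzuluaga/wav2vec2-large-960h-lv60-self-en-atc-uwb-atcc-and-atcosim",
)

sample = atco2[0]
# `audio` is decoded to a dict with "array" and "sampling_rate" (16 kHz)
prediction = asr(sample["audio"]["array"])
print("reference :", sample["text"])
print("hypothesis:", prediction["text"])
```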
## Additional Information
### Licensing Information
The licensing status of the ATCO2-test-set-1h corpus is described in the file **ATCO2-ASRdataset-v1_beta - End-User Data Agreement** in the data folder. Download the data from the [ATCO2 project homepage](https://www.atco2.org/data).
### Citation Information
Contributors who prepared, processed, normalized and uploaded the dataset to HuggingFace:
```
@article{zuluaga2022how,
    title={How Does Pre-trained Wav2Vec 2.0 Perform on Domain Shifted ASR? An Extensive Benchmark on Air Traffic Control Communications},
author={Zuluaga-Gomez, Juan and Prasad, Amrutha and Nigmatulina, Iuliia and Sarfjoo, Saeed and others},
journal={IEEE Spoken Language Technology Workshop (SLT), Doha, Qatar},
year={2022}
}
@article{zuluaga2022bertraffic,
title={BERTraffic: BERT-based Joint Speaker Role and Speaker Change Detection for Air Traffic Control Communications},
author={Zuluaga-Gomez, Juan and Sarfjoo, Seyyed Saeed and Prasad, Amrutha and others},
journal={IEEE Spoken Language Technology Workshop (SLT), Doha, Qatar},
year={2022}
}
@article{zuluaga2022atco2,
title={ATCO2 corpus: A Large-Scale Dataset for Research on Automatic Speech Recognition and Natural Language Understanding of Air Traffic Control Communications},
author={Zuluaga-Gomez, Juan and Vesel{\`y}, Karel and Sz{\"o}ke, Igor and Motlicek, Petr and others},
journal={arXiv preprint arXiv:2211.04054},
year={2022}
}
```
| [
-0.2912621796131134,
-0.5593081712722778,
0.019656721502542496,
0.14739756286144257,
-0.2916443645954132,
0.18385423719882965,
-0.4034365713596344,
-0.707451343536377,
0.20793995261192322,
0.3852415382862091,
-0.33392247557640076,
-0.5561890006065369,
-0.5492717623710632,
-0.21336017549037... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
tarteel-ai/everyayah | tarteel-ai | 2022-12-09T19:33:08Z | 32 | 6 | tarteel-everyayah | [
"task_categories:automatic-speech-recognition",
"annotations_creators:expert-generated",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"language:ar",
"license:mit",
"region:us"
] | 2022-12-09T19:33:08Z | 2022-12-07T21:53:59.000Z | 2022-12-07T21:53:59 | ---
pretty_name: Tarteel AI - EveryAyah Dataset
dataset_info:
features:
- name: audio
dtype: audio
- name: duration
dtype: float64
- name: text
dtype: string
- name: reciter
dtype: string
splits:
- name: train
num_bytes: 262627688145.3
num_examples: 187785
- name: test
num_bytes: 25156009734.72
num_examples: 23473
- name: validation
num_bytes: 23426886730.218
num_examples: 23474
download_size: 117190597305
dataset_size: 311210584610.23804
annotations_creators:
- expert-generated
language_creators:
- crowdsourced
language:
- ar
license:
- mit
multilinguality:
- monolingual
paperswithcode_id: tarteel-everyayah
size_categories:
- 100K<n<1M
source_datasets:
- original
task_categories:
- automatic-speech-recognition
task_ids: []
train-eval-index:
- config: clean
task: automatic-speech-recognition
task_id: speech_recognition
splits:
train_split: train
eval_split: test
validation_split: validation
col_mapping:
audio: audio
text: text
reciter: text
metrics:
- type: wer
name: WER
- type: cer
name: CER
---
﷽
# Dataset Card for Tarteel AI's EveryAyah Dataset
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [Tarteel AI](https://www.tarteel.ai/)
- **Repository:** [Needs More Information]
- **Point of Contact:** [Mohamed Saad Ibn Seddik](mailto:ms.ibnseddik@tarteel.ai)
### Dataset Summary
This dataset is a collection of Quranic verse recitations by different reciters, paired with their diacritized transcriptions.
### Supported Tasks and Leaderboards
[Needs More Information]
### Languages
The audio is in Arabic.
## Dataset Structure
### Data Instances
A typical data point comprises the audio file `audio` and its transcription `text`.
The `duration` is given in seconds, and the reciter's name is stored in `reciter`.
An example from the dataset is:
```
{
'audio': {
'path': None,
'array': array([ 0. , 0. , 0. , ..., -0.00057983,
-0.00085449, -0.00061035]),
'sampling_rate': 16000
},
'duration': 6.478375,
'text': 'بِسْمِ اللَّهِ الرَّحْمَنِ الرَّحِيمِ',
'reciter': 'abdulsamad'
}
```
### Data Fields
- audio: A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column: `dataset[0]["audio"]` the audio file is automatically decoded and resampled to `dataset.features["audio"].sampling_rate`. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the `"audio"` column, *i.e.* `dataset[0]["audio"]` should **always** be preferred over `dataset["audio"][0]`.
- text: The transcription of the audio file.
- duration: The duration of the audio file.
- reciter: The reciter of the verses.
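As a concrete illustration of the access pattern described above, the following minimal sketch (assuming the standard 🤗 `datasets` API and the split names listed in this card) loads one example and reads its decoded audio:
```python
# Minimal sketch, assuming the standard `datasets` API.
from datasets import load_dataset

ds = load_dataset("tarteel-ai/everyayah", split="test")

# Query the sample index first, then the "audio" key, so that only this
# single file is decoded and resampled.
sample = ds[0]
waveform = sample["audio"]["array"]
sampling_rate = sample["audio"]["sampling_rate"]
print(sample["reciter"], sample["duration"], sample["text"])

# For the full corpus, passing streaming=True to load_dataset avoids
# downloading everything up front.
```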
### Data Splits
| | Train | Test | Validation |
| ----- | ----- | ---- | ---------- |
| dataset | 187785 | 23473 | 23474 |
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
### Licensing Information
[CC BY 4.0](https://creativecommons.org/licenses/by/4.0/)
### Citation Information
```
```
### Contributions
This dataset was created by:
| [
-0.41741177439689636,
-0.5093910098075867,
0.0837107002735138,
0.3968951106071472,
-0.5144365429878235,
0.07279697805643082,
-0.27687931060791016,
-0.25318193435668945,
0.5063637495040894,
0.5609090328216553,
-0.7049366235733032,
-1.220444679260254,
-0.6653955578804016,
0.30449119210243225... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
irds/trec-cast_v1 | irds | 2023-01-05T04:03:19Z | 32 | 1 | null | [
"task_categories:text-retrieval",
"region:us"
] | 2023-01-05T04:03:19Z | 2023-01-05T04:03:14.000Z | 2023-01-05T04:03:14 | ---
pretty_name: '`trec-cast/v1`'
viewer: false
source_datasets: []
task_categories:
- text-retrieval
---
# Dataset Card for `trec-cast/v1`
The `trec-cast/v1` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/trec-cast#trec-cast/v1).
# Data
This dataset provides:
- `docs` (documents, i.e., the corpus); count=38,622,444
This dataset is used by: [`trec-cast_v1_2020`](https://huggingface.co/datasets/irds/trec-cast_v1_2020), [`trec-cast_v1_2020_judged`](https://huggingface.co/datasets/irds/trec-cast_v1_2020_judged)
## Usage
```python
from datasets import load_dataset
docs = load_dataset('irds/trec-cast_v1', 'docs')
for record in docs:
record # {'doc_id': ..., 'text': ...}
```
Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the
data in 🤗 Dataset format.
## Citation Information
```
@inproceedings{Dalton2019Cast,
title={CAsT 2019: The Conversational Assistance Track Overview},
author={Jeffrey Dalton and Chenyan Xiong and Jamie Callan},
booktitle={TREC},
year={2019}
}
```
| [
-0.3557835817337036,
-0.22951601445674896,
0.043760914355516434,
0.10404479503631592,
-0.38338419795036316,
0.07697702944278717,
0.041870128363370895,
-0.14771953225135803,
0.4955301284790039,
0.5367802977561951,
-0.6047117114067078,
-0.8817342519760132,
-0.5472612380981445,
0.253317952156... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
neulab/odex | neulab | 2023-02-10T18:01:34Z | 32 | 8 | null | [
"task_categories:text2text-generation",
"task_categories:text-generation",
"size_categories:n<1K",
"language:en",
"language:es",
"language:ja",
"language:ru",
"license:cc-by-sa-4.0",
"region:us"
] | 2023-02-10T18:01:34Z | 2023-01-06T14:30:00.000Z | 2023-01-06T14:30:00 | ---
license: cc-by-sa-4.0
task_categories:
- text2text-generation
- text-generation
language:
- en
- es
- ja
- ru
size_categories:
- n<1K
---
__ODEX__ is an Open-Domain EXecution-based NL-to-Code generation data benchmark.
It contains 945 samples with a total of 1,707 human-written test cases, covering intents in four different natural languages -- 439 in English, 90 in Spanish, 164 in Japanese, and 252 in Russian.
You can load the dataset by specifying a subset from *en, es, ja, ru* (by default the english subset *en* is loaded):
```python
from datasets import load_dataset
ds = load_dataset("neulab/odex", "ja", split="test")
```
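To work with all four language subsets at once, a minimal sketch built only on the `load_dataset` call above might look like this:
```python
from datasets import load_dataset

# Load every language subset listed above; counts per the card: 439/90/164/252.
subsets = {lang: load_dataset("neulab/odex", lang, split="test") for lang in ("en", "es", "ja", "ru")}
for lang, ds in subsets.items():
    print(lang, len(ds))
```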
If you find our dataset useful, please cite the paper
```
@article{wang2022execution,
title={Execution-Based Evaluation for Open-Domain Code Generation},
  author={Zhiruo Wang and Shuyan Zhou and Daniel Fried and Graham Neubig},
journal={arXiv preprint arXiv:2212.10481},
year={2022}
}
``` | [
-0.4816034734249115,
-0.503476619720459,
0.16739533841609955,
0.5534633994102478,
-0.016090068966150284,
-0.26732325553894043,
-0.15952427685260773,
-0.40275558829307556,
-0.19453750550746918,
0.45273879170417786,
-0.3568849265575409,
-0.8204385042190552,
-0.20612762868404388,
0.7555735707... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
fcakyon/pokemon-classification | fcakyon | 2023-01-14T13:06:55Z | 32 | 1 | null | [
"task_categories:image-classification",
"roboflow",
"roboflow2huggingface",
"Gaming",
"region:us"
] | 2023-01-14T13:06:55Z | 2023-01-14T12:47:57.000Z | 2023-01-14T12:47:57 | ---
task_categories:
- image-classification
tags:
- roboflow
- roboflow2huggingface
- Gaming
---
<div align="center">
<img width="640" alt="fcakyon/pokemon-classification" src="https://huggingface.co/datasets/fcakyon/pokemon-classification/resolve/main/thumbnail.jpg">
</div>
### Dataset Labels
```
['Golbat', 'Machoke', 'Omastar', 'Diglett', 'Lapras', 'Kabuto', 'Persian', 'Weepinbell', 'Golem', 'Dodrio', 'Raichu', 'Zapdos', 'Raticate', 'Magnemite', 'Ivysaur', 'Growlithe', 'Tangela', 'Drowzee', 'Rapidash', 'Venonat', 'Pidgeot', 'Nidorino', 'Porygon', 'Lickitung', 'Rattata', 'Machop', 'Charmeleon', 'Slowbro', 'Parasect', 'Eevee', 'Starmie', 'Staryu', 'Psyduck', 'Dragonair', 'Magikarp', 'Vileplume', 'Marowak', 'Pidgeotto', 'Shellder', 'Mewtwo', 'Farfetchd', 'Kingler', 'Seel', 'Kakuna', 'Doduo', 'Electabuzz', 'Charmander', 'Rhyhorn', 'Tauros', 'Dugtrio', 'Poliwrath', 'Gengar', 'Exeggutor', 'Dewgong', 'Jigglypuff', 'Geodude', 'Kadabra', 'Nidorina', 'Sandshrew', 'Grimer', 'MrMime', 'Pidgey', 'Koffing', 'Ekans', 'Alolan Sandslash', 'Venusaur', 'Snorlax', 'Paras', 'Jynx', 'Chansey', 'Hitmonchan', 'Gastly', 'Kangaskhan', 'Oddish', 'Wigglytuff', 'Graveler', 'Arcanine', 'Clefairy', 'Articuno', 'Poliwag', 'Abra', 'Squirtle', 'Voltorb', 'Ponyta', 'Moltres', 'Nidoqueen', 'Magmar', 'Onix', 'Vulpix', 'Butterfree', 'Krabby', 'Arbok', 'Clefable', 'Goldeen', 'Magneton', 'Dratini', 'Caterpie', 'Jolteon', 'Nidoking', 'Alakazam', 'Dragonite', 'Fearow', 'Slowpoke', 'Weezing', 'Beedrill', 'Weedle', 'Cloyster', 'Vaporeon', 'Gyarados', 'Golduck', 'Machamp', 'Hitmonlee', 'Primeape', 'Cubone', 'Sandslash', 'Scyther', 'Haunter', 'Metapod', 'Tentacruel', 'Aerodactyl', 'Kabutops', 'Ninetales', 'Zubat', 'Rhydon', 'Mew', 'Pinsir', 'Ditto', 'Victreebel', 'Omanyte', 'Horsea', 'Pikachu', 'Blastoise', 'Venomoth', 'Charizard', 'Seadra', 'Muk', 'Spearow', 'Bulbasaur', 'Bellsprout', 'Electrode', 'Gloom', 'Poliwhirl', 'Flareon', 'Seaking', 'Hypno', 'Wartortle', 'Mankey', 'Tentacool', 'Exeggcute', 'Meowth']
```
### Number of Images
```json
{'train': 4869, 'test': 732, 'valid': 1390}
```
### How to Use
- Install [datasets](https://pypi.org/project/datasets/):
```bash
pip install datasets
```
- Load the dataset:
```python
from datasets import load_dataset
ds = load_dataset("fcakyon/pokemon-classification", name="full")
example = ds['train'][0]
```
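As a follow-up sketch, you can inspect the schema and map a label id back to one of the class names listed above; the column name used here (`labels`) is an assumption, so check the printed features for the actual schema.
```python
from datasets import load_dataset

ds = load_dataset("fcakyon/pokemon-classification", name="full")
print(ds)                      # split sizes: train 4869, test 732, valid 1390
print(ds["train"].features)    # exact column names and types

example = ds["train"][0]
label_feature = ds["train"].features["labels"]   # assumed ClassLabel column
print(label_feature.int2str(example["labels"]))  # e.g. "Pikachu"
```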
### Roboflow Dataset Page
[https://universe.roboflow.com/robert-demo-qvail/pokedex/dataset/14](https://universe.roboflow.com/robert-demo-qvail/pokedex/dataset/14?ref=roboflow2huggingface)
### Citation
```
@misc{ pokedex_dataset,
title = { Pokedex Dataset },
type = { Open Source Dataset },
author = { Lance Zhang },
howpublished = { \\url{ https://universe.roboflow.com/robert-demo-qvail/pokedex } },
url = { https://universe.roboflow.com/robert-demo-qvail/pokedex },
journal = { Roboflow Universe },
publisher = { Roboflow },
year = { 2022 },
month = { dec },
note = { visited on 2023-01-14 },
}
```
### License
Public Domain
### Dataset Summary
This dataset was exported via roboflow.com on December 20, 2022 at 5:34 PM GMT
Roboflow is an end-to-end computer vision platform that helps you
* collaborate with your team on computer vision projects
* collect & organize images
* understand unstructured image data
* annotate images and create datasets
* export, train, and deploy computer vision models
* use active learning to improve your dataset over time
It includes 6991 images.
Pokemon are annotated in folder format.
The following pre-processing was applied to each image:
* Auto-orientation of pixel data (with EXIF-orientation stripping)
* Resize to 224x224 (Fit (black edges))
No image augmentation techniques were applied.
| [
-0.6302996873855591,
-0.10036955028772354,
0.12819711863994598,
0.10201653093099594,
-0.09087535738945007,
0.20959120988845825,
0.13623447716236115,
-0.2363623082637787,
0.7788829207420349,
0.19435319304466248,
-0.45647895336151123,
-0.6003285050392151,
-0.7072935104370117,
0.4002576470375... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
IlyaGusev/rulm | IlyaGusev | 2023-03-20T23:53:53Z | 32 | 13 | null | [
"task_categories:text-generation",
"size_categories:10M<n<100M",
"language:ru",
"region:us"
] | 2023-03-20T23:53:53Z | 2023-01-25T18:14:38.000Z | 2023-01-25T18:14:38 | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 78609111353
num_examples: 14811026
- name: test
num_bytes: 397130292
num_examples: 74794
- name: validation
num_bytes: 395354867
num_examples: 74691
download_size: 24170140196
dataset_size: 79401596512
task_categories:
- text-generation
language:
- ru
size_categories:
- 10M<n<100M
---
# Dataset for training Russian language models
Overall size: 75 GB
Scripts: https://github.com/IlyaGusev/rulm/tree/master/data_processing
| Website | Char count (M) | Word count (M) |
|-----------------|---------------|---------------|
| pikabu | 14938 | 2161 |
| lenta | 1008 | 135 |
| stihi | 2994 | 393 |
| stackoverflow | 1073 | 228 |
| habr | 5112 | 753 |
| taiga_fontanka | 419 | 55 |
| librusec | 10149 | 1573 |
| buriy | 2646 | 352 |
| ods_tass | 1908 | 255 |
| wiki | 3473 | 469 |
| math | 987 | 177 |
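Given the ~75 GB size, a minimal loading sketch (assuming the standard 🤗 `datasets` API) would stream the corpus rather than download it up front:
```python
from datasets import load_dataset

# Stream the train split; each example has a single `text` field (see schema above).
rulm = load_dataset("IlyaGusev/rulm", split="train", streaming=True)
for i, example in enumerate(rulm):
    print(example["text"][:200])
    if i == 2:
        break
```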
| [
-0.11131325364112854,
-0.43729162216186523,
0.2376604825258255,
0.1124470978975296,
-0.2508116662502289,
0.09861981123685837,
-0.1276271939277649,
0.10438186675310135,
-0.032986436039209366,
0.3186630308628082,
-0.5921818614006042,
-0.9068577289581299,
-0.6761478781700134,
0.12076359987258... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
ChristophSchuhmann/essays-with-instructions | ChristophSchuhmann | 2023-01-26T21:59:21Z | 32 | 11 | null | [
"license:apache-2.0",
"region:us"
] | 2023-01-26T21:59:21Z | 2023-01-26T21:57:19.000Z | 2023-01-26T21:57:19 | ---
license: apache-2.0
---
| [
-0.1285335123538971,
-0.1861683875322342,
0.6529128551483154,
0.49436232447624207,
-0.19319400191307068,
0.23607441782951355,
0.36072009801864624,
0.05056373029947281,
0.5793656706809998,
0.7400146722793579,
-0.650810182094574,
-0.23784008622169495,
-0.7102247476577759,
-0.0478255338966846... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Cohere/miracl-fr-queries-22-12 | Cohere | 2023-02-06T11:57:25Z | 32 | 0 | null | [
"task_categories:text-retrieval",
"task_ids:document-retrieval",
"annotations_creators:expert-generated",
"multilinguality:multilingual",
"language:fr",
"license:apache-2.0",
"region:us"
] | 2023-02-06T11:57:25Z | 2023-01-31T07:18:50.000Z | 2023-01-31T07:18:50 | ---
annotations_creators:
- expert-generated
language:
- fr
multilinguality:
- multilingual
size_categories: []
source_datasets: []
tags: []
task_categories:
- text-retrieval
license:
- apache-2.0
task_ids:
- document-retrieval
---
# MIRACL (fr) embedded with cohere.ai `multilingual-22-12` encoder
We encoded the [MIRACL dataset](https://huggingface.co/miracl) using the [cohere.ai](https://txt.cohere.ai/multilingual/) `multilingual-22-12` embedding model.
The query embeddings can be found in [Cohere/miracl-fr-queries-22-12](https://huggingface.co/datasets/Cohere/miracl-fr-queries-22-12) and the corpus embeddings can be found in [Cohere/miracl-fr-corpus-22-12](https://huggingface.co/datasets/Cohere/miracl-fr-corpus-22-12).
For the original datasets, see [miracl/miracl](https://huggingface.co/datasets/miracl/miracl) and [miracl/miracl-corpus](https://huggingface.co/datasets/miracl/miracl-corpus).
Dataset info:
> MIRACL 🌍🙌🌏 (Multilingual Information Retrieval Across a Continuum of Languages) is a multilingual retrieval dataset that focuses on search across 18 different languages, which collectively encompass over three billion native speakers around the world.
>
> The corpus for each language is prepared from a Wikipedia dump, where we keep only the plain text and discard images, tables, etc. Each article is segmented into multiple passages using WikiExtractor based on natural discourse units (e.g., `\n\n` in the wiki markup). Each of these passages comprises a "document" or unit of retrieval. We preserve the Wikipedia article title of each passage.
## Embeddings
We compute the embeddings for `title+" "+text` using our `multilingual-22-12` embedding model, a state-of-the-art model for semantic search in 100 languages. If you want to learn more about this model, have a look at [cohere.ai multilingual embedding model](https://txt.cohere.ai/multilingual/).
## Loading the dataset
In [miracl-fr-corpus-22-12](https://huggingface.co/datasets/Cohere/miracl-fr-corpus-22-12) we provide the corpus embeddings. Note that, depending on the selected split, the respective files can be quite large.
You can either load the dataset like this:
```python
from datasets import load_dataset
docs = load_dataset(f"Cohere/miracl-fr-corpus-22-12", split="train")
```
Or you can also stream it without downloading it before:
```python
from datasets import load_dataset
docs = load_dataset(f"Cohere/miracl-fr-corpus-22-12", split="train", streaming=True)
for doc in docs:
docid = doc['docid']
title = doc['title']
text = doc['text']
emb = doc['emb']
```
## Search
Have a look at [miracl-fr-queries-22-12](https://huggingface.co/datasets/Cohere/miracl-fr-queries-22-12) where we provide the query embeddings for the MIRACL dataset.
To search in the documents, you must use the **dot-product**.
Then compare the query embeddings with the corpus embeddings, either via a vector database (recommended) or by computing the dot product directly.
A full search example:
```python
# Attention! For large datasets, this requires a lot of memory to store
# all document embeddings and to compute the dot product scores.
# Only use this for smaller datasets. For large datasets, use a vector DB
from datasets import load_dataset
import torch
#Load documents + embeddings
docs = load_dataset(f"Cohere/miracl-fr-corpus-22-12", split="train")
doc_embeddings = torch.tensor(docs['emb'])
# Load queries
queries = load_dataset(f"Cohere/miracl-fr-queries-22-12", split="dev")
# Select the first query as example
qid = 0
query = queries[qid]
query_embedding = torch.tensor(query['emb']).unsqueeze(0)  # shape (1, dim) for the selected query
# Compute dot score between query embedding and document embeddings
dot_scores = torch.mm(query_embedding, doc_embeddings.transpose(0, 1))
top_k = torch.topk(dot_scores, k=3)
# Print results
print("Query:", query['query'])
for doc_id in top_k.indices[0].tolist():
print(docs[doc_id]['title'])
print(docs[doc_id]['text'])
```
You can get embeddings for new queries using our API:
```python
#Run: pip install cohere
import cohere
co = cohere.Client(f"{api_key}") # You should add your cohere API Key here :))
texts = ['my search query']
response = co.embed(texts=texts, model='multilingual-22-12')
query_embedding = response.embeddings[0] # Get the embedding for the first text
```
## Performance
In the following table we compare the cohere multilingual-22-12 model with Elasticsearch version 8.6.0 lexical search (title and passage indexed as independent fields). Note that Elasticsearch doesn't support all languages that are part of the MIRACL dataset.
We compute nDCG@10 (a ranking-based metric), as well as hit@3: whether at least one relevant document appears in the top-3 results. We find that hit@3 is easier to interpret, as it reports the fraction of queries for which a relevant document is found among the top-3 results.
Note: MIRACL only annotated a small fraction of passages (10 per query) for relevancy. Especially for larger Wikipedias (like English), we often found many more relevant passages. This is known as annotation holes. Real nDCG@10 and hit@3 performance is likely higher than depicted.
| Model | cohere multilingual-22-12 nDCG@10 | cohere multilingual-22-12 hit@3 | ES 8.6.0 nDCG@10 | ES 8.6.0 hit@3 |
|---|---|---|---|---|
| miracl-ar | 64.2 | 75.2 | 46.8 | 56.2 |
| miracl-bn | 61.5 | 75.7 | 49.2 | 60.1 |
| miracl-de | 44.4 | 60.7 | 19.6 | 29.8 |
| miracl-en | 44.6 | 62.2 | 30.2 | 43.2 |
| miracl-es | 47.0 | 74.1 | 27.0 | 47.2 |
| miracl-fi | 63.7 | 76.2 | 51.4 | 61.6 |
| miracl-fr | 46.8 | 57.1 | 17.0 | 21.6 |
| miracl-hi | 50.7 | 62.9 | 41.0 | 48.9 |
| miracl-id | 44.8 | 63.8 | 39.2 | 54.7 |
| miracl-ru | 49.2 | 66.9 | 25.4 | 36.7 |
| **Avg** | 51.7 | 67.5 | 34.7 | 46.0 |
Further languages (not supported by Elasticsearch):
| Model | cohere multilingual-22-12 nDCG@10 | cohere multilingual-22-12 hit@3 |
|---|---|---|
| miracl-fa | 44.8 | 53.6 |
| miracl-ja | 49.0 | 61.0 |
| miracl-ko | 50.9 | 64.8 |
| miracl-sw | 61.4 | 74.5 |
| miracl-te | 67.8 | 72.3 |
| miracl-th | 60.2 | 71.9 |
| miracl-yo | 56.4 | 62.2 |
| miracl-zh | 43.8 | 56.5 |
| **Avg** | 54.3 | 64.6 |
| [
-0.6344762444496155,
-0.8110339045524597,
0.31565192341804504,
0.2613868713378906,
-0.059219710528850555,
-0.06759046018123627,
-0.3108191192150116,
-0.5036748647689819,
0.5457637310028076,
0.21529194712638855,
-0.5575382113456726,
-1.0098727941513062,
-0.7041245698928833,
0.34159016609191... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
AlekseyKorshuk/chai-real-and-synthetic | AlekseyKorshuk | 2023-02-10T13:22:05Z | 32 | 0 | null | [
"region:us"
] | 2023-02-10T13:22:05Z | 2023-02-10T13:21:26.000Z | 2023-02-10T13:21:26 | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: chosen
dtype: string
- name: rejected
dtype: string
splits:
- name: train
num_bytes: 3208000491
num_examples: 134668
- name: test
num_bytes: 171664726
num_examples: 18961
download_size: 1180192490
dataset_size: 3379665217
---
# Dataset Card for "chai-real-and-synthetic"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.5678293704986572,
-0.6291089057922363,
0.279415100812912,
0.26436373591423035,
-0.07531075179576874,
0.20121462643146515,
0.2520116865634918,
-0.5706525444984436,
1.017044186592102,
0.4296548664569855,
-1.0677541494369507,
-0.5631408095359802,
-0.32543209195137024,
-0.1908988058567047,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Multimodal-Fatima/Imagenet1k_sample_validation | Multimodal-Fatima | 2023-02-10T18:05:59Z | 32 | 0 | null | [
"region:us"
] | 2023-02-10T18:05:59Z | 2023-02-10T18:05:33.000Z | 2023-02-10T18:05:33 | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': tench, Tinca tinca
'1': goldfish, Carassius auratus
'2': great white shark, white shark, man-eater, man-eating shark, Carcharodon
carcharias
'3': tiger shark, Galeocerdo cuvieri
'4': hammerhead, hammerhead shark
'5': electric ray, crampfish, numbfish, torpedo
'6': stingray
'7': cock
'8': hen
'9': ostrich, Struthio camelus
'10': brambling, Fringilla montifringilla
'11': goldfinch, Carduelis carduelis
'12': house finch, linnet, Carpodacus mexicanus
'13': junco, snowbird
'14': indigo bunting, indigo finch, indigo bird, Passerina cyanea
'15': robin, American robin, Turdus migratorius
'16': bulbul
'17': jay
'18': magpie
'19': chickadee
'20': water ouzel, dipper
'21': kite
'22': bald eagle, American eagle, Haliaeetus leucocephalus
'23': vulture
'24': great grey owl, great gray owl, Strix nebulosa
'25': European fire salamander, Salamandra salamandra
'26': common newt, Triturus vulgaris
'27': eft
'28': spotted salamander, Ambystoma maculatum
'29': axolotl, mud puppy, Ambystoma mexicanum
'30': bullfrog, Rana catesbeiana
'31': tree frog, tree-frog
'32': tailed frog, bell toad, ribbed toad, tailed toad, Ascaphus trui
'33': loggerhead, loggerhead turtle, Caretta caretta
'34': leatherback turtle, leatherback, leathery turtle, Dermochelys coriacea
'35': mud turtle
'36': terrapin
'37': box turtle, box tortoise
'38': banded gecko
'39': common iguana, iguana, Iguana iguana
'40': American chameleon, anole, Anolis carolinensis
'41': whiptail, whiptail lizard
'42': agama
'43': frilled lizard, Chlamydosaurus kingi
'44': alligator lizard
'45': Gila monster, Heloderma suspectum
'46': green lizard, Lacerta viridis
'47': African chameleon, Chamaeleo chamaeleon
'48': Komodo dragon, Komodo lizard, dragon lizard, giant lizard, Varanus
komodoensis
'49': African crocodile, Nile crocodile, Crocodylus niloticus
'50': American alligator, Alligator mississipiensis
'51': triceratops
'52': thunder snake, worm snake, Carphophis amoenus
'53': ringneck snake, ring-necked snake, ring snake
'54': hognose snake, puff adder, sand viper
'55': green snake, grass snake
'56': king snake, kingsnake
'57': garter snake, grass snake
'58': water snake
'59': vine snake
'60': night snake, Hypsiglena torquata
'61': boa constrictor, Constrictor constrictor
'62': rock python, rock snake, Python sebae
'63': Indian cobra, Naja naja
'64': green mamba
'65': sea snake
'66': horned viper, cerastes, sand viper, horned asp, Cerastes cornutus
'67': diamondback, diamondback rattlesnake, Crotalus adamanteus
'68': sidewinder, horned rattlesnake, Crotalus cerastes
'69': trilobite
'70': harvestman, daddy longlegs, Phalangium opilio
'71': scorpion
'72': black and gold garden spider, Argiope aurantia
'73': barn spider, Araneus cavaticus
'74': garden spider, Aranea diademata
'75': black widow, Latrodectus mactans
'76': tarantula
'77': wolf spider, hunting spider
'78': tick
'79': centipede
'80': black grouse
'81': ptarmigan
'82': ruffed grouse, partridge, Bonasa umbellus
'83': prairie chicken, prairie grouse, prairie fowl
'84': peacock
'85': quail
'86': partridge
'87': African grey, African gray, Psittacus erithacus
'88': macaw
'89': sulphur-crested cockatoo, Kakatoe galerita, Cacatua galerita
'90': lorikeet
'91': coucal
'92': bee eater
'93': hornbill
'94': hummingbird
'95': jacamar
'96': toucan
'97': drake
'98': red-breasted merganser, Mergus serrator
'99': goose
'100': black swan, Cygnus atratus
'101': tusker
'102': echidna, spiny anteater, anteater
'103': platypus, duckbill, duckbilled platypus, duck-billed platypus, Ornithorhynchus
anatinus
'104': wallaby, brush kangaroo
'105': koala, koala bear, kangaroo bear, native bear, Phascolarctos cinereus
'106': wombat
'107': jellyfish
'108': sea anemone, anemone
'109': brain coral
'110': flatworm, platyhelminth
'111': nematode, nematode worm, roundworm
'112': conch
'113': snail
'114': slug
'115': sea slug, nudibranch
'116': chiton, coat-of-mail shell, sea cradle, polyplacophore
'117': chambered nautilus, pearly nautilus, nautilus
'118': Dungeness crab, Cancer magister
'119': rock crab, Cancer irroratus
'120': fiddler crab
'121': king crab, Alaska crab, Alaskan king crab, Alaska king crab, Paralithodes
camtschatica
'122': American lobster, Northern lobster, Maine lobster, Homarus americanus
'123': spiny lobster, langouste, rock lobster, crawfish, crayfish, sea crawfish
'124': crayfish, crawfish, crawdad, crawdaddy
'125': hermit crab
'126': isopod
'127': white stork, Ciconia ciconia
'128': black stork, Ciconia nigra
'129': spoonbill
'130': flamingo
'131': little blue heron, Egretta caerulea
'132': American egret, great white heron, Egretta albus
'133': bittern
'134': crane
'135': limpkin, Aramus pictus
'136': European gallinule, Porphyrio porphyrio
'137': American coot, marsh hen, mud hen, water hen, Fulica americana
'138': bustard
'139': ruddy turnstone, Arenaria interpres
'140': red-backed sandpiper, dunlin, Erolia alpina
'141': redshank, Tringa totanus
'142': dowitcher
'143': oystercatcher, oyster catcher
'144': pelican
'145': king penguin, Aptenodytes patagonica
'146': albatross, mollymawk
'147': grey whale, gray whale, devilfish, Eschrichtius gibbosus, Eschrichtius
robustus
'148': killer whale, killer, orca, grampus, sea wolf, Orcinus orca
'149': dugong, Dugong dugon
'150': sea lion
'151': Chihuahua
'152': Japanese spaniel
'153': Maltese dog, Maltese terrier, Maltese
'154': Pekinese, Pekingese, Peke
'155': Shih-Tzu
'156': Blenheim spaniel
'157': papillon
'158': toy terrier
'159': Rhodesian ridgeback
'160': Afghan hound, Afghan
'161': basset, basset hound
'162': beagle
'163': bloodhound, sleuthhound
'164': bluetick
'165': black-and-tan coonhound
'166': Walker hound, Walker foxhound
'167': English foxhound
'168': redbone
'169': borzoi, Russian wolfhound
'170': Irish wolfhound
'171': Italian greyhound
'172': whippet
'173': Ibizan hound, Ibizan Podenco
'174': Norwegian elkhound, elkhound
'175': otterhound, otter hound
'176': Saluki, gazelle hound
'177': Scottish deerhound, deerhound
'178': Weimaraner
'179': Staffordshire bullterrier, Staffordshire bull terrier
'180': American Staffordshire terrier, Staffordshire terrier, American pit
bull terrier, pit bull terrier
'181': Bedlington terrier
'182': Border terrier
'183': Kerry blue terrier
'184': Irish terrier
'185': Norfolk terrier
'186': Norwich terrier
'187': Yorkshire terrier
'188': wire-haired fox terrier
'189': Lakeland terrier
'190': Sealyham terrier, Sealyham
'191': Airedale, Airedale terrier
'192': cairn, cairn terrier
'193': Australian terrier
'194': Dandie Dinmont, Dandie Dinmont terrier
'195': Boston bull, Boston terrier
'196': miniature schnauzer
'197': giant schnauzer
'198': standard schnauzer
'199': Scotch terrier, Scottish terrier, Scottie
'200': Tibetan terrier, chrysanthemum dog
'201': silky terrier, Sydney silky
'202': soft-coated wheaten terrier
'203': West Highland white terrier
'204': Lhasa, Lhasa apso
'205': flat-coated retriever
'206': curly-coated retriever
'207': golden retriever
'208': Labrador retriever
'209': Chesapeake Bay retriever
'210': German short-haired pointer
'211': vizsla, Hungarian pointer
'212': English setter
'213': Irish setter, red setter
'214': Gordon setter
'215': Brittany spaniel
'216': clumber, clumber spaniel
'217': English springer, English springer spaniel
'218': Welsh springer spaniel
'219': cocker spaniel, English cocker spaniel, cocker
'220': Sussex spaniel
'221': Irish water spaniel
'222': kuvasz
'223': schipperke
'224': groenendael
'225': malinois
'226': briard
'227': kelpie
'228': komondor
'229': Old English sheepdog, bobtail
'230': Shetland sheepdog, Shetland sheep dog, Shetland
'231': collie
'232': Border collie
'233': Bouvier des Flandres, Bouviers des Flandres
'234': Rottweiler
'235': German shepherd, German shepherd dog, German police dog, alsatian
'236': Doberman, Doberman pinscher
'237': miniature pinscher
'238': Greater Swiss Mountain dog
'239': Bernese mountain dog
'240': Appenzeller
'241': EntleBucher
'242': boxer
'243': bull mastiff
'244': Tibetan mastiff
'245': French bulldog
'246': Great Dane
'247': Saint Bernard, St Bernard
'248': Eskimo dog, husky
'249': malamute, malemute, Alaskan malamute
'250': Siberian husky
'251': dalmatian, coach dog, carriage dog
'252': affenpinscher, monkey pinscher, monkey dog
'253': basenji
'254': pug, pug-dog
'255': Leonberg
'256': Newfoundland, Newfoundland dog
'257': Great Pyrenees
'258': Samoyed, Samoyede
'259': Pomeranian
'260': chow, chow chow
'261': keeshond
'262': Brabancon griffon
'263': Pembroke, Pembroke Welsh corgi
'264': Cardigan, Cardigan Welsh corgi
'265': toy poodle
'266': miniature poodle
'267': standard poodle
'268': Mexican hairless
'269': timber wolf, grey wolf, gray wolf, Canis lupus
'270': white wolf, Arctic wolf, Canis lupus tundrarum
'271': red wolf, maned wolf, Canis rufus, Canis niger
'272': coyote, prairie wolf, brush wolf, Canis latrans
'273': dingo, warrigal, warragal, Canis dingo
'274': dhole, Cuon alpinus
'275': African hunting dog, hyena dog, Cape hunting dog, Lycaon pictus
'276': hyena, hyaena
'277': red fox, Vulpes vulpes
'278': kit fox, Vulpes macrotis
'279': Arctic fox, white fox, Alopex lagopus
'280': grey fox, gray fox, Urocyon cinereoargenteus
'281': tabby, tabby cat
'282': tiger cat
'283': Persian cat
'284': Siamese cat, Siamese
'285': Egyptian cat
'286': cougar, puma, catamount, mountain lion, painter, panther, Felis concolor
'287': lynx, catamount
'288': leopard, Panthera pardus
'289': snow leopard, ounce, Panthera uncia
'290': jaguar, panther, Panthera onca, Felis onca
'291': lion, king of beasts, Panthera leo
'292': tiger, Panthera tigris
'293': cheetah, chetah, Acinonyx jubatus
'294': brown bear, bruin, Ursus arctos
'295': American black bear, black bear, Ursus americanus, Euarctos americanus
'296': ice bear, polar bear, Ursus Maritimus, Thalarctos maritimus
'297': sloth bear, Melursus ursinus, Ursus ursinus
'298': mongoose
'299': meerkat, mierkat
'300': tiger beetle
'301': ladybug, ladybeetle, lady beetle, ladybird, ladybird beetle
'302': ground beetle, carabid beetle
'303': long-horned beetle, longicorn, longicorn beetle
'304': leaf beetle, chrysomelid
'305': dung beetle
'306': rhinoceros beetle
'307': weevil
'308': fly
'309': bee
'310': ant, emmet, pismire
'311': grasshopper, hopper
'312': cricket
'313': walking stick, walkingstick, stick insect
'314': cockroach, roach
'315': mantis, mantid
'316': cicada, cicala
'317': leafhopper
'318': lacewing, lacewing fly
'319': dragonfly, darning needle, devil's darning needle, sewing needle,
snake feeder, snake doctor, mosquito hawk, skeeter hawk
'320': damselfly
'321': admiral
'322': ringlet, ringlet butterfly
'323': monarch, monarch butterfly, milkweed butterfly, Danaus plexippus
'324': cabbage butterfly
'325': sulphur butterfly, sulfur butterfly
'326': lycaenid, lycaenid butterfly
'327': starfish, sea star
'328': sea urchin
'329': sea cucumber, holothurian
'330': wood rabbit, cottontail, cottontail rabbit
'331': hare
'332': Angora, Angora rabbit
'333': hamster
'334': porcupine, hedgehog
'335': fox squirrel, eastern fox squirrel, Sciurus niger
'336': marmot
'337': beaver
'338': guinea pig, Cavia cobaya
'339': sorrel
'340': zebra
'341': hog, pig, grunter, squealer, Sus scrofa
'342': wild boar, boar, Sus scrofa
'343': warthog
'344': hippopotamus, hippo, river horse, Hippopotamus amphibius
'345': ox
'346': water buffalo, water ox, Asiatic buffalo, Bubalus bubalis
'347': bison
'348': ram, tup
'349': bighorn, bighorn sheep, cimarron, Rocky Mountain bighorn, Rocky Mountain
sheep, Ovis canadensis
'350': ibex, Capra ibex
'351': hartebeest
'352': impala, Aepyceros melampus
'353': gazelle
'354': Arabian camel, dromedary, Camelus dromedarius
'355': llama
'356': weasel
'357': mink
'358': polecat, fitch, foulmart, foumart, Mustela putorius
'359': black-footed ferret, ferret, Mustela nigripes
'360': otter
'361': skunk, polecat, wood pussy
'362': badger
'363': armadillo
'364': three-toed sloth, ai, Bradypus tridactylus
'365': orangutan, orang, orangutang, Pongo pygmaeus
'366': gorilla, Gorilla gorilla
'367': chimpanzee, chimp, Pan troglodytes
'368': gibbon, Hylobates lar
'369': siamang, Hylobates syndactylus, Symphalangus syndactylus
'370': guenon, guenon monkey
'371': patas, hussar monkey, Erythrocebus patas
'372': baboon
'373': macaque
'374': langur
'375': colobus, colobus monkey
'376': proboscis monkey, Nasalis larvatus
'377': marmoset
'378': capuchin, ringtail, Cebus capucinus
'379': howler monkey, howler
'380': titi, titi monkey
'381': spider monkey, Ateles geoffroyi
'382': squirrel monkey, Saimiri sciureus
'383': Madagascar cat, ring-tailed lemur, Lemur catta
'384': indri, indris, Indri indri, Indri brevicaudatus
'385': Indian elephant, Elephas maximus
'386': African elephant, Loxodonta africana
'387': lesser panda, red panda, panda, bear cat, cat bear, Ailurus fulgens
'388': giant panda, panda, panda bear, coon bear, Ailuropoda melanoleuca
'389': barracouta, snoek
'390': eel
'391': coho, cohoe, coho salmon, blue jack, silver salmon, Oncorhynchus
kisutch
'392': rock beauty, Holocanthus tricolor
'393': anemone fish
'394': sturgeon
'395': gar, garfish, garpike, billfish, Lepisosteus osseus
'396': lionfish
'397': puffer, pufferfish, blowfish, globefish
'398': abacus
'399': abaya
'400': academic gown, academic robe, judge's robe
'401': accordion, piano accordion, squeeze box
'402': acoustic guitar
'403': aircraft carrier, carrier, flattop, attack aircraft carrier
'404': airliner
'405': airship, dirigible
'406': altar
'407': ambulance
'408': amphibian, amphibious vehicle
'409': analog clock
'410': apiary, bee house
'411': apron
'412': ashcan, trash can, garbage can, wastebin, ash bin, ash-bin, ashbin,
dustbin, trash barrel, trash bin
'413': assault rifle, assault gun
'414': backpack, back pack, knapsack, packsack, rucksack, haversack
'415': bakery, bakeshop, bakehouse
'416': balance beam, beam
'417': balloon
'418': ballpoint, ballpoint pen, ballpen, Biro
'419': Band Aid
'420': banjo
'421': bannister, banister, balustrade, balusters, handrail
'422': barbell
'423': barber chair
'424': barbershop
'425': barn
'426': barometer
'427': barrel, cask
'428': barrow, garden cart, lawn cart, wheelbarrow
'429': baseball
'430': basketball
'431': bassinet
'432': bassoon
'433': bathing cap, swimming cap
'434': bath towel
'435': bathtub, bathing tub, bath, tub
'436': beach wagon, station wagon, wagon, estate car, beach waggon, station
waggon, waggon
'437': beacon, lighthouse, beacon light, pharos
'438': beaker
'439': bearskin, busby, shako
'440': beer bottle
'441': beer glass
'442': bell cote, bell cot
'443': bib
'444': bicycle-built-for-two, tandem bicycle, tandem
'445': bikini, two-piece
'446': binder, ring-binder
'447': binoculars, field glasses, opera glasses
'448': birdhouse
'449': boathouse
'450': bobsled, bobsleigh, bob
'451': bolo tie, bolo, bola tie, bola
'452': bonnet, poke bonnet
'453': bookcase
'454': bookshop, bookstore, bookstall
'455': bottlecap
'456': bow
'457': bow tie, bow-tie, bowtie
'458': brass, memorial tablet, plaque
'459': brassiere, bra, bandeau
'460': breakwater, groin, groyne, mole, bulwark, seawall, jetty
'461': breastplate, aegis, egis
'462': broom
'463': bucket, pail
'464': buckle
'465': bulletproof vest
'466': bullet train, bullet
'467': butcher shop, meat market
'468': cab, hack, taxi, taxicab
'469': caldron, cauldron
'470': candle, taper, wax light
'471': cannon
'472': canoe
'473': can opener, tin opener
'474': cardigan
'475': car mirror
'476': carousel, carrousel, merry-go-round, roundabout, whirligig
'477': carpenter's kit, tool kit
'478': carton
'479': car wheel
'480': cash machine, cash dispenser, automated teller machine, automatic
teller machine, automated teller, automatic teller, ATM
'481': cassette
'482': cassette player
'483': castle
'484': catamaran
'485': CD player
'486': cello, violoncello
'487': cellular telephone, cellular phone, cellphone, cell, mobile phone
'488': chain
'489': chainlink fence
'490': chain mail, ring mail, mail, chain armor, chain armour, ring armor,
ring armour
'491': chain saw, chainsaw
'492': chest
'493': chiffonier, commode
'494': chime, bell, gong
'495': china cabinet, china closet
'496': Christmas stocking
'497': church, church building
'498': cinema, movie theater, movie theatre, movie house, picture palace
'499': cleaver, meat cleaver, chopper
'500': cliff dwelling
'501': cloak
'502': clog, geta, patten, sabot
'503': cocktail shaker
'504': coffee mug
'505': coffeepot
'506': coil, spiral, volute, whorl, helix
'507': combination lock
'508': computer keyboard, keypad
'509': confectionery, confectionary, candy store
'510': container ship, containership, container vessel
'511': convertible
'512': corkscrew, bottle screw
'513': cornet, horn, trumpet, trump
'514': cowboy boot
'515': cowboy hat, ten-gallon hat
'516': cradle
'517': crane2
'518': crash helmet
'519': crate
'520': crib, cot
'521': Crock Pot
'522': croquet ball
'523': crutch
'524': cuirass
'525': dam, dike, dyke
'526': desk
'527': desktop computer
'528': dial telephone, dial phone
'529': diaper, nappy, napkin
'530': digital clock
'531': digital watch
'532': dining table, board
'533': dishrag, dishcloth
'534': dishwasher, dish washer, dishwashing machine
'535': disk brake, disc brake
'536': dock, dockage, docking facility
'537': dogsled, dog sled, dog sleigh
'538': dome
'539': doormat, welcome mat
'540': drilling platform, offshore rig
'541': drum, membranophone, tympan
'542': drumstick
'543': dumbbell
'544': Dutch oven
'545': electric fan, blower
'546': electric guitar
'547': electric locomotive
'548': entertainment center
'549': envelope
'550': espresso maker
'551': face powder
'552': feather boa, boa
'553': file, file cabinet, filing cabinet
'554': fireboat
'555': fire engine, fire truck
'556': fire screen, fireguard
'557': flagpole, flagstaff
'558': flute, transverse flute
'559': folding chair
'560': football helmet
'561': forklift
'562': fountain
'563': fountain pen
'564': four-poster
'565': freight car
'566': French horn, horn
'567': frying pan, frypan, skillet
'568': fur coat
'569': garbage truck, dustcart
'570': gasmask, respirator, gas helmet
'571': gas pump, gasoline pump, petrol pump, island dispenser
'572': goblet
'573': go-kart
'574': golf ball
'575': golfcart, golf cart
'576': gondola
'577': gong, tam-tam
'578': gown
'579': grand piano, grand
'580': greenhouse, nursery, glasshouse
'581': grille, radiator grille
'582': grocery store, grocery, food market, market
'583': guillotine
'584': hair slide
'585': hair spray
'586': half track
'587': hammer
'588': hamper
'589': hand blower, blow dryer, blow drier, hair dryer, hair drier
'590': hand-held computer, hand-held microcomputer
'591': handkerchief, hankie, hanky, hankey
'592': hard disc, hard disk, fixed disk
'593': harmonica, mouth organ, harp, mouth harp
'594': harp
'595': harvester, reaper
'596': hatchet
'597': holster
'598': home theater, home theatre
'599': honeycomb
'600': hook, claw
'601': hoopskirt, crinoline
'602': horizontal bar, high bar
'603': horse cart, horse-cart
'604': hourglass
'605': iPod
'606': iron, smoothing iron
'607': jack-o'-lantern
'608': jean, blue jean, denim
'609': jeep, landrover
'610': jersey, T-shirt, tee shirt
'611': jigsaw puzzle
'612': jinrikisha, ricksha, rickshaw
'613': joystick
'614': kimono
'615': knee pad
'616': knot
'617': lab coat, laboratory coat
'618': ladle
'619': lampshade, lamp shade
'620': laptop, laptop computer
'621': lawn mower, mower
'622': lens cap, lens cover
'623': letter opener, paper knife, paperknife
'624': library
'625': lifeboat
'626': lighter, light, igniter, ignitor
'627': limousine, limo
'628': liner, ocean liner
'629': lipstick, lip rouge
'630': Loafer
'631': lotion
'632': loudspeaker, speaker, speaker unit, loudspeaker system, speaker system
'633': loupe, jeweler's loupe
'634': lumbermill, sawmill
'635': magnetic compass
'636': mailbag, postbag
'637': mailbox, letter box
'638': maillot
'639': maillot, tank suit
'640': manhole cover
'641': maraca
'642': marimba, xylophone
'643': mask
'644': matchstick
'645': maypole
'646': maze, labyrinth
'647': measuring cup
'648': medicine chest, medicine cabinet
'649': megalith, megalithic structure
'650': microphone, mike
'651': microwave, microwave oven
'652': military uniform
'653': milk can
'654': minibus
'655': miniskirt, mini
'656': minivan
'657': missile
'658': mitten
'659': mixing bowl
'660': mobile home, manufactured home
'661': Model T
'662': modem
'663': monastery
'664': monitor
'665': moped
'666': mortar
'667': mortarboard
'668': mosque
'669': mosquito net
'670': motor scooter, scooter
'671': mountain bike, all-terrain bike, off-roader
'672': mountain tent
'673': mouse, computer mouse
'674': mousetrap
'675': moving van
'676': muzzle
'677': nail
'678': neck brace
'679': necklace
'680': nipple
'681': notebook, notebook computer
'682': obelisk
'683': oboe, hautboy, hautbois
'684': ocarina, sweet potato
'685': odometer, hodometer, mileometer, milometer
'686': oil filter
'687': organ, pipe organ
'688': oscilloscope, scope, cathode-ray oscilloscope, CRO
'689': overskirt
'690': oxcart
'691': oxygen mask
'692': packet
'693': paddle, boat paddle
'694': paddlewheel, paddle wheel
'695': padlock
'696': paintbrush
'697': pajama, pyjama, pj's, jammies
'698': palace
'699': panpipe, pandean pipe, syrinx
'700': paper towel
'701': parachute, chute
'702': parallel bars, bars
'703': park bench
'704': parking meter
'705': passenger car, coach, carriage
'706': patio, terrace
'707': pay-phone, pay-station
'708': pedestal, plinth, footstall
'709': pencil box, pencil case
'710': pencil sharpener
'711': perfume, essence
'712': Petri dish
'713': photocopier
'714': pick, plectrum, plectron
'715': pickelhaube
'716': picket fence, paling
'717': pickup, pickup truck
'718': pier
'719': piggy bank, penny bank
'720': pill bottle
'721': pillow
'722': ping-pong ball
'723': pinwheel
'724': pirate, pirate ship
'725': pitcher, ewer
'726': plane, carpenter's plane, woodworking plane
'727': planetarium
'728': plastic bag
'729': plate rack
'730': plow, plough
'731': plunger, plumber's helper
'732': Polaroid camera, Polaroid Land camera
'733': pole
'734': police van, police wagon, paddy wagon, patrol wagon, wagon, black
Maria
'735': poncho
'736': pool table, billiard table, snooker table
'737': pop bottle, soda bottle
'738': pot, flowerpot
'739': potter's wheel
'740': power drill
'741': prayer rug, prayer mat
'742': printer
'743': prison, prison house
'744': projectile, missile
'745': projector
'746': puck, hockey puck
'747': punching bag, punch bag, punching ball, punchball
'748': purse
'749': quill, quill pen
'750': quilt, comforter, comfort, puff
'751': racer, race car, racing car
'752': racket, racquet
'753': radiator
'754': radio, wireless
'755': radio telescope, radio reflector
'756': rain barrel
'757': recreational vehicle, RV, R.V.
'758': reel
'759': reflex camera
'760': refrigerator, icebox
'761': remote control, remote
'762': restaurant, eating house, eating place, eatery
'763': revolver, six-gun, six-shooter
'764': rifle
'765': rocking chair, rocker
'766': rotisserie
'767': rubber eraser, rubber, pencil eraser
'768': rugby ball
'769': rule, ruler
'770': running shoe
'771': safe
'772': safety pin
'773': saltshaker, salt shaker
'774': sandal
'775': sarong
'776': sax, saxophone
'777': scabbard
'778': scale, weighing machine
'779': school bus
'780': schooner
'781': scoreboard
'782': screen, CRT screen
'783': screw
'784': screwdriver
'785': seat belt, seatbelt
'786': sewing machine
'787': shield, buckler
'788': shoe shop, shoe-shop, shoe store
'789': shoji
'790': shopping basket
'791': shopping cart
'792': shovel
'793': shower cap
'794': shower curtain
'795': ski
'796': ski mask
'797': sleeping bag
'798': slide rule, slipstick
'799': sliding door
'800': slot, one-armed bandit
'801': snorkel
'802': snowmobile
'803': snowplow, snowplough
'804': soap dispenser
'805': soccer ball
'806': sock
'807': solar dish, solar collector, solar furnace
'808': sombrero
'809': soup bowl
'810': space bar
'811': space heater
'812': space shuttle
'813': spatula
'814': speedboat
'815': spider web, spider's web
'816': spindle
'817': sports car, sport car
'818': spotlight, spot
'819': stage
'820': steam locomotive
'821': steel arch bridge
'822': steel drum
'823': stethoscope
'824': stole
'825': stone wall
'826': stopwatch, stop watch
'827': stove
'828': strainer
'829': streetcar, tram, tramcar, trolley, trolley car
'830': stretcher
'831': studio couch, day bed
'832': stupa, tope
'833': submarine, pigboat, sub, U-boat
'834': suit, suit of clothes
'835': sundial
'836': sunglass
'837': sunglasses, dark glasses, shades
'838': sunscreen, sunblock, sun blocker
'839': suspension bridge
'840': swab, swob, mop
'841': sweatshirt
'842': swimming trunks, bathing trunks
'843': swing
'844': switch, electric switch, electrical switch
'845': syringe
'846': table lamp
'847': tank, army tank, armored combat vehicle, armoured combat vehicle
'848': tape player
'849': teapot
'850': teddy, teddy bear
'851': television, television system
'852': tennis ball
'853': thatch, thatched roof
'854': theater curtain, theatre curtain
'855': thimble
'856': thresher, thrasher, threshing machine
'857': throne
'858': tile roof
'859': toaster
'860': tobacco shop, tobacconist shop, tobacconist
'861': toilet seat
'862': torch
'863': totem pole
'864': tow truck, tow car, wrecker
'865': toyshop
'866': tractor
'867': trailer truck, tractor trailer, trucking rig, rig, articulated lorry,
semi
'868': tray
'869': trench coat
'870': tricycle, trike, velocipede
'871': trimaran
'872': tripod
'873': triumphal arch
'874': trolleybus, trolley coach, trackless trolley
'875': trombone
'876': tub, vat
'877': turnstile
'878': typewriter keyboard
'879': umbrella
'880': unicycle, monocycle
'881': upright, upright piano
'882': vacuum, vacuum cleaner
'883': vase
'884': vault
'885': velvet
'886': vending machine
'887': vestment
'888': viaduct
'889': violin, fiddle
'890': volleyball
'891': waffle iron
'892': wall clock
'893': wallet, billfold, notecase, pocketbook
'894': wardrobe, closet, press
'895': warplane, military plane
'896': washbasin, handbasin, washbowl, lavabo, wash-hand basin
'897': washer, automatic washer, washing machine
'898': water bottle
'899': water jug
'900': water tower
'901': whiskey jug
'902': whistle
'903': wig
'904': window screen
'905': window shade
'906': Windsor tie
'907': wine bottle
'908': wing
'909': wok
'910': wooden spoon
'911': wool, woolen, woollen
'912': worm fence, snake fence, snake-rail fence, Virginia fence
'913': wreck
'914': yawl
'915': yurt
'916': web site, website, internet site, site
'917': comic book
'918': crossword puzzle, crossword
'919': street sign
'920': traffic light, traffic signal, stoplight
'921': book jacket, dust cover, dust jacket, dust wrapper
'922': menu
'923': plate
'924': guacamole
'925': consomme
'926': hot pot, hotpot
'927': trifle
'928': ice cream, icecream
'929': ice lolly, lolly, lollipop, popsicle
'930': French loaf
'931': bagel, beigel
'932': pretzel
'933': cheeseburger
'934': hotdog, hot dog, red hot
'935': mashed potato
'936': head cabbage
'937': broccoli
'938': cauliflower
'939': zucchini, courgette
'940': spaghetti squash
'941': acorn squash
'942': butternut squash
'943': cucumber, cuke
'944': artichoke, globe artichoke
'945': bell pepper
'946': cardoon
'947': mushroom
'948': Granny Smith
'949': strawberry
'950': orange
'951': lemon
'952': fig
'953': pineapple, ananas
'954': banana
'955': jackfruit, jak, jack
'956': custard apple
'957': pomegranate
'958': hay
'959': carbonara
'960': chocolate sauce, chocolate syrup
'961': dough
'962': meat loaf, meatloaf
'963': pizza, pizza pie
'964': potpie
'965': burrito
'966': red wine
'967': espresso
'968': cup
'969': eggnog
'970': alp
'971': bubble
'972': cliff, drop, drop-off
'973': coral reef
'974': geyser
'975': lakeside, lakeshore
'976': promontory, headland, head, foreland
'977': sandbar, sand bar
'978': seashore, coast, seacoast, sea-coast
'979': valley, vale
'980': volcano
'981': ballplayer, baseball player
'982': groom, bridegroom
'983': scuba diver
'984': rapeseed
'985': daisy
'986': yellow lady's slipper, yellow lady-slipper, Cypripedium calceolus,
Cypripedium parviflorum
'987': corn
'988': acorn
'989': hip, rose hip, rosehip
'990': buckeye, horse chestnut, conker
'991': coral fungus
'992': agaric
'993': gyromitra
'994': stinkhorn, carrion fungus
'995': earthstar
'996': hen-of-the-woods, hen of the woods, Polyporus frondosus, Grifola
frondosa
'997': bolete
'998': ear, spike, capitulum
'999': toilet tissue, toilet paper, bathroom tissue
- name: lexicon
sequence: string
- name: id
dtype: int64
splits:
- name: validation
num_bytes: 406246742.0
num_examples: 3000
download_size: 398667087
dataset_size: 406246742.0
---
# Dataset Card for "Imagenet1k_sample_validation"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.5923609733581543,
-0.1341775804758072,
0.0027105258777737617,
0.3385498523712158,
-0.42684051394462585,
-0.28668293356895447,
0.4357210099697113,
-0.10786259174346924,
0.6702897548675537,
0.5896738767623901,
-0.9202130436897278,
-0.8651220798492432,
-0.44333240389823914,
-0.120964944362... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Isamu136/big-animal-dataset-with-embedding | Isamu136 | 2023-02-12T22:42:07Z | 32 | 1 | null | [
"license:mit",
"region:us"
] | 2023-02-12T22:42:07Z | 2023-02-12T21:19:13.000Z | 2023-02-12T21:19:13 | ---
license: mit
dataset_info:
features:
- name: image
dtype: image
- name: caption
dtype: string
- name: l14_embeddings
sequence: float32
- name: moco_vitb_imagenet_embeddings
sequence: float32
- name: moco_vitb_imagenet_embeddings_without_last_layer
sequence: float32
splits:
- name: train
num_bytes: 2125655956.375
num_examples: 62149
download_size: 2238679414
dataset_size: 2125655956.375
---
| [
-0.1285335123538971,
-0.1861683875322342,
0.6529128551483154,
0.49436232447624207,
-0.19319400191307068,
0.23607441782951355,
0.36072009801864624,
0.05056373029947281,
0.5793656706809998,
0.7400146722793579,
-0.650810182094574,
-0.23784008622169495,
-0.7102247476577759,
-0.0478255338966846... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
leminda-ai/s2orc_small | leminda-ai | 2023-03-01T10:57:59Z | 32 | 0 | null | [
"region:us"
] | 2023-03-01T10:57:59Z | 2023-03-01T08:43:20.000Z | 2023-03-01T08:43:20 | ---
dataset_info:
features:
- name: id
dtype: string
- name: title
dtype: string
- name: paperAbstract
dtype: string
- name: entities
sequence: string
- name: s2Url
dtype: string
- name: pdfUrls
sequence: string
- name: s2PdfUrl
dtype: string
- name: authors
list:
- name: name
dtype: string
- name: ids
sequence: string
- name: inCitations
sequence: string
- name: outCitations
sequence: string
- name: fieldsOfStudy
sequence: string
- name: year
dtype: int32
- name: venue
dtype: string
- name: journalName
dtype: string
- name: journalVolume
dtype: string
- name: journalPages
dtype: string
- name: sources
sequence: string
- name: doi
dtype: string
- name: doiUrl
dtype: string
- name: pmid
dtype: string
- name: magId
dtype: string
splits:
- name: train
num_bytes: 1725313131.1503427
num_examples: 889289
download_size: 2180008218
dataset_size: 1725313131.1503427
---
# Dataset Card for "s2orc_small"
A small split of the s2orc dataset; it includes ~900k English papers with abstracts.
See all details in the original dataset card - https://huggingface.co/datasets/allenai/s2orc | [
-0.4122408628463745,
-0.18349285423755646,
0.24626314640045166,
-0.13577069342136383,
-0.35299229621887207,
-0.05579499527812004,
0.10407154262065887,
-0.5712590217590332,
0.8281416296958923,
0.24136324226856232,
-0.5037883520126343,
-0.2860186994075775,
-0.7603103518486023,
0.246852263808... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
totuta/youtube_subs_howto100M | totuta | 2023-03-04T01:38:37Z | 32 | 3 | null | [
"task_categories:conversational",
"size_categories:10M<n<100M",
"language:en",
"license:apache-2.0",
"arxiv:1906.03327",
"region:us"
] | 2023-03-04T01:38:37Z | 2023-03-03T04:56:10.000Z | 2023-03-03T04:56:10 | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: response
dtype: string
- name: source
dtype: string
splits:
- name: train
num_bytes: 1260882571
num_examples: 309136
download_size: 668637627
dataset_size: 1260882571
license: apache-2.0
task_categories:
- conversational
language:
- en
pretty_name: 'YouTube Subtitles of Instructions: HowTo100M'
size_categories:
- 10M<n<100M
---
# Dataset Card for youtube_subs_howto100M
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [HowTo100M homepage](https://www.di.ens.fr/willow/research/howto100m/)
- **Repository:** [HowTo100M repository](https://github.com/antoine77340/howto100m)
- **Paper:** [HowTo100M: Learning a Text-Video Embedding by Watching Hundred Million Narrated Video Clips](https://arxiv.org/abs/1906.03327)
### Dataset Summary
The `youtube_subs_howto100M` dataset is an English-language dataset of instruction-response pairs extracted from 309136 YouTube videos. The dataset was originally inspired by and sourced from the HowTo100M dataset, which was developed for natural language search for video clips.
### Supported Tasks and Leaderboards
- `conversational`: The dataset can be used to train a model to generate long-form responses to instructions (requests). This dataset was originally prepared for the [Open Assistant](https://github.com/LAION-AI/Open-Assistant), an open-source chat-based large language model.
### Languages
Currently, all text in the dataset is in English.
## Dataset Structure
### Data Instances
A typical data point comprises an `instruction`, a `response`, and a `source`.
An example from the youtube_subs_howto100M looks as follows:
```
{"instruction": "Please explain how to remove plaque without going to the dentist 2016", "response": "mineral deposit on teeth is known as tartar or plaque as time passes by the amount of tartar increases and if you don't take care it can cause periodontitis of course the best way to remove tartar is paying a visit to your dentist but another way is to remove plaque at your home in this video you will learn how to remove plaque at home to do so you will need baking soda toothbrush salt you hydrogen peroxide cup you gentle pick you water anti septic mouthwash you step one first mix one tablespoon of bacon soda with TSP of salt into the cup after you at the toothbrush with warm water dip it into the mixture scrub teeth with an in spit continue the same process for five minutes step to mix a cup full with hydrogen peroxide with cup of warm water and rinse your mouth for one minute then spit and rinse with cup of cool water step 3 rub the yellow tartar from teeth with a dental pick be careful not to scrape the gums it may irritate and damage them step 4 rinse mouth with an antiseptic mouthwash and repeat every second day here are some other advice is to help you keep your beautiful smile tomatoes and strawberries tomatoes and strawberries are rich in vitamin C which is excellent for oral health you can rub these fruits directly onto your teeth and let it sit for five minutes this way the tartar buildup will soften cheese being a Swiss or cheddar before meals helps neutralize the acids that involve black creation an ingredient in a cheese works as a barrier agent guava both guava fruit and leaves are considered excellent anti black agents to help remove plaque accumulated on the teeth and gums gloss they have anti-inflammatory and analgesic properties that help reduce swelling and pain in the gums brush your teeth regularly with a soft brush and make vertical movements pay attention on the space between gums and teeth floss regularly consuming spicy food stimulates syllabary glands that way saliva cleans mouth in a natural way five bacteria with an orange peel before going to bed and don't rinse mouth", "source": "YouTube"}
```
### Data Fields
- `instruction`: a request for an explanation.
- `response`: a long text of response sentences, currently not punctuated.
- `source`: the source of the datapoint, currently all `YouTube`.
### Data Splits
The dataset currently does not have train/validation/test splits.
## Dataset Creation
### Curation Rationale
The original HowTo100M dataset was developed for natural language search over video clips, not necessarily for conversational or chat-based training. However, each long monologue response can be regarded as a sequence of answers to a question that can be induced from the video title. Therefore, a good amount of high-quality request-response (long-form) pairs can be extracted from HowTo100M YouTube videos.
Concretely, this dataset is curated like below:
```
for each video in the HowTo100M dataset
if video_title starts with `how to`
add `Please explain` to the title to make an `instruction`
extract subtitles from the video to make a `response`
```
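The pseudocode above can be written out as a small Python function. The sketch below is illustrative only: the `videos` iterable and its `title`/`subtitles` fields are assumed stand-ins, not the actual HowTo100M metadata schema.

```python
# Illustrative sketch of the curation loop; `videos` and its fields are hypothetical.
def build_pairs(videos):
    pairs = []
    for video in videos:
        title = video["title"].strip()
        if title.lower().startswith("how to"):
            pairs.append({
                "instruction": f"Please explain {title}",
                "response": " ".join(video["subtitles"]),  # raw subtitle lines, unpunctuated
                "source": "YouTube",
            })
    return pairs
```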
### Source Data
#### Initial Data Collection and Normalization
Refer to the [Curation Rationale](#curation-rationale)
#### Who are the source language producers?
The language producers are the YouTube users who created the videos in the HowTo100M dataset.
### Annotations
#### Annotation process
Refer to the [Curation Rationale](#curation-rationale)
#### Who are the annotators?
[N/A]
### Personal and Sensitive Information
[N/A]
## Considerations for Using the Data
### Social Impact of Dataset
[N/A]
### Discussion of Biases
[N/A]
### Other Known Limitations
Apache license 2.0
## Additional Information
### Dataset Curators
The youtube_subs_howto100M dataset was created by [@totuta](https://github.com/totuta). The original HowTo100M dataset was created by Antoine Miech, Dimitri Zhukov, Jean-Baptiste Alayrac, Makarand Tapaswi, Ivan Laptev, and Josef Sivic.
### Licensing Information
[N/A]
### Citation Information
@inproceedings{miech19howto100m,
title={How{T}o100{M}: {L}earning a {T}ext-{V}ideo {E}mbedding by {W}atching {H}undred {M}illion {N}arrated {V}ideo {C}lips},
author={Miech, Antoine and Zhukov, Dimitri and Alayrac, Jean-Baptiste and Tapaswi, Makarand and Laptev, Ivan and Sivic, Josef},
booktitle={ICCV},
year={2019},
}
### Contributions
Thanks to [@totuta](https://github.com/totuta) for adding this dataset. | [
-0.5058151483535767,
-1.0779657363891602,
0.13171018660068512,
0.061440277844667435,
-0.08866839855909348,
0.09354382008314133,
-0.3149355351924896,
0.21519744396209717,
0.6285970211029053,
0.4602631628513336,
-0.853480339050293,
-0.6923302412033081,
-0.5074617266654968,
0.0776653513312339... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
mstz/arhythmia | mstz | 2023-04-15T11:37:57Z | 32 | 0 | null | [
"task_categories:tabular-classification",
"size_categories:n<1K",
"language:en",
"license:cc",
"arrhythmia",
"tabular_classification",
"multiclass_classification",
"binary_classification",
"UCI",
"region:us"
] | 2023-04-15T11:37:57Z | 2023-04-05T12:44:38.000Z | 2023-04-05T12:44:38 | ---
language:
- en
tags:
- arrhythmia
- tabular_classification
- multiclass_classification
- binary_classification
- UCI
pretty_name: Arhythmia
size_categories:
- n<1K
task_categories:
- tabular-classification
configs:
- arhytmia
- has_arhytmia
license: cc
---
# Arhythmia
The [Arrhythmia dataset](https://archive.ics.uci.edu/ml/datasets/Arrhythmia) from the [UCI ML repository](https://archive.ics.uci.edu/ml/datasets).
Does the patient have arrhythmia? If so, what type?
# Configurations and tasks
| **Configuration** | **Task** | Description |
|-------------------|---------------------------|---------------------------------------------------------------|
| arhytmia | Multiclass classification | What type of arrhythmia does the patient have? |
| has_arhytmia | Binary classification | Does the patient have arrhythmia? |
# Usage
```python
from datasets import load_dataset
dataset = load_dataset("mstz/arhythmia", "arhythmia")["train"]
```
# Features
Target feature changes according to the selected configuration and is always in last position in the dataset. | [
-0.3667314946651459,
-0.3463653326034546,
0.5066984295845032,
0.24577327072620392,
-0.17591072618961334,
-0.3593645691871643,
-0.007302752695977688,
-0.343082994222641,
0.3391551971435547,
0.5412301421165466,
-0.3182941675186157,
-0.7410598397254944,
-0.7276224493980408,
0.0964678600430488... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
koutch/intro_prog | koutch | 2023-06-05T08:45:02Z | 32 | 0 | null | [
"region:us"
] | 2023-06-05T08:45:02Z | 2023-04-05T14:44:41.000Z | 2023-04-05T14:44:41 | ---
dataset_info:
- config_name: dublin_metadata
features:
- name: assignment_id
dtype: string
- name: func_name
dtype: string
- name: reference_solution
dtype: string
- name: description
dtype: string
- name: test
dtype: string
splits:
- name: train
num_bytes: 18983
num_examples: 36
- name: test
num_bytes: 17403
num_examples: 35
download_size: 41873
dataset_size: 36386
- config_name: singapore_metadata
features:
- name: assignment_id
dtype: string
- name: func_name
dtype: string
- name: reference_solution
dtype: string
- name: description
dtype: string
- name: test
dtype: string
splits:
- name: train
num_bytes: 5577
num_examples: 5
download_size: 6139
dataset_size: 5577
- config_name: dublin_data
features:
- name: submission_id
dtype: int32
- name: func_code
dtype: string
- name: assignment_id
dtype: string
- name: func_name
dtype: string
- name: description
dtype: string
- name: test
dtype: string
- name: correct
dtype: bool
- name: user
dtype: string
- name: academic_year
dtype: int32
splits:
- name: train
num_bytes: 4412068
num_examples: 7486
- name: test
num_bytes: 7737585
num_examples: 14259
download_size: 15756562
dataset_size: 12149653
- config_name: singapore_data
features:
- name: submission_id
dtype: int32
- name: func_code
dtype: string
- name: assignment_id
dtype: string
- name: func_name
dtype: string
- name: description
dtype: string
- name: test
dtype: string
- name: correct
dtype: bool
splits:
- name: train
num_bytes: 5098928
num_examples: 4394
download_size: 5705043
dataset_size: 5098928
- config_name: dublin_repair
features:
- name: submission_id
dtype: int32
- name: func_code
dtype: string
- name: assignment_id
dtype: string
- name: func_name
dtype: string
- name: description
dtype: string
- name: test
dtype: string
- name: annotation
dtype: string
- name: user
dtype: string
- name: academic_year
dtype: int32
splits:
- name: train
num_bytes: 229683
num_examples: 307
- name: test
num_bytes: 1451820
num_examples: 1698
download_size: 1929518
dataset_size: 1681503
- config_name: singapore_repair
features:
- name: submission_id
dtype: int32
- name: func_code
dtype: string
- name: assignment_id
dtype: string
- name: func_name
dtype: string
- name: description
dtype: string
- name: test
dtype: string
- name: annotation
dtype: string
splits:
- name: train
num_bytes: 18979
num_examples: 18
download_size: 21737
dataset_size: 18979
- config_name: newcaledonia_metadata
features:
- name: assignment_id
dtype: string
- name: func_name
dtype: string
- name: reference_solution
dtype: string
- name: description
dtype: string
- name: test
dtype: string
splits:
- name: train
num_bytes: 9053
num_examples: 9
download_size: 9760
dataset_size: 9053
- config_name: newcaledonia_data
features:
- name: submission_id
dtype: int32
- name: func_code
dtype: string
- name: assignment_id
dtype: string
- name: func_name
dtype: string
- name: description
dtype: string
- name: test
dtype: string
- name: correct
dtype: bool
splits:
- name: train
num_bytes: 932024
num_examples: 1201
download_size: 1198518
dataset_size: 932024
---
# Dataset Card for intro_prog
## Dataset Description
### Dataset Summary
IntroProg is a collection of students' submissions to assignments in various introductory programming courses offered at different universities.
Currently, the dataset contains submissions collected from Dublin City University and the National University of Singapore.
#### Dublin
The Dublin programming dataset is composed of students' submissions to introductory programming assignments at Dublin City University.
Students submitted these programs for multiple programming courses over the duration of three academic years.
#### Singapore
The Singapore dataset contains 2442 correct and 1783 buggy program attempts by 361 undergraduate students
taking an introductory Python programming course at NUS (National University of Singapore).
### Supported Tasks and Leaderboards
#### "Metadata": Program synthesis
Similarly to the [Most Basic Python Programs](https://huggingface.co/datasets/mbpp) (mbpp) dataset, this split can be used to evaluate
code generation models.
#### "Data"
The data configuration contains all the submissions as well as an indicator of whether these passed the required test.
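A minimal loading sketch for the two Dublin configurations listed above (the same pattern applies to the Singapore and New Caledonia configurations):

```python
from datasets import load_dataset

# Assignment metadata: reference solutions, descriptions and tests
metadata = load_dataset("koutch/intro_prog", "dublin_metadata", split="train")

# All student submissions, each flagged with whether it passed the tests
data = load_dataset("koutch/intro_prog", "dublin_data", split="train")
buggy = data.filter(lambda row: not row["correct"])
print(f"{len(data)} submissions, {len(buggy)} failing the tests")
```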
#### "repair": Program refinement/repair
The "repair" configuration of each dataset is a subset of the "data" configuration
augmented with educators' annotations on the corrections to the buggy programs.
This configuration can be used for the task of program refinement. In [Computing Education Research](https://faculty.washington.edu/ajko/cer/) (CER),
methods for automatically repairing student programs are used to provide students with feedback and help them debug their code.
#### "bug": Bug classification
[Coming soon]
### Languages
The assignments were written in Python.
## Dataset Structure
One configuration is defined by one source dataset *dublin* or *singapore* and one subconfiguration ("metadata", "data", or "repair"):
* "dublin_metadata"
* "dublin_data"
* "dublin_repair"
* "singapore_metadata"
* "singapore_data"
* "singapore_repair"
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
Some of the fields are configuration specific
* submission_id: a unique number identifying the submission
* user: a unique string identifying the (anonymized) student who submitted the solution
* date: the timestamp at which the grading server received the submission
* func_code: the cleaned code submitted
* func_name: the name of the function that had to be implemented
* assignment_id: the unique (string) identifier of the assignment that had to be completed
* academic_year: the starting year of the academic year (e.g. 2015 for the academic year 2015-2016)
* module: the course/module
* test: a human eval-style string which can be used to execute the submitted solution on the provided test cases (see the sketch after this list)
* description: a description of what the function is supposed to achieve
* correct: whether the solution passed all tests or not
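The `test` field can be paired with `func_code` to re-check a submission, HumanEval-style. The sketch below assumes the test string can be executed directly in the same namespace as the submission (the exact format may differ), and running untrusted student code should of course be sandboxed in practice:

```python
def passes_tests(row) -> bool:
    """Return True if the submission runs its test string without raising."""
    namespace = {}
    try:
        exec(row["func_code"], namespace)  # define the student's function
        exec(row["test"], namespace)       # run the assert-style checks
        return True
    except Exception:
        return False
```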
### Data Splits
#### Dublin
The Dublin dataset is split into a training and a test set. The training set contains the submissions to the assignments
written during the academic years 2015-2016, and 2016-2017, while the test set contains programs written during the academic year 2017-2018.
#### Singapore
The Singapore dataset only contains a training split, which can be used as a test split for evaluating how your feedback
methods perform on an unseen dataset (if, for instance, you train your methods on the Dublin Dataset).
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
#### Dublin
#### Singapore
The data was released under a [GNU Lesser General Public License v3.0](https://github.com/githubhuyang/refactory/blob/master/LICENSE) license
### Citation Information
```
@inproceedings{azcona2019user2code2vec,
title={user2code2vec: Embeddings for Profiling Students Based on Distributional Representations of Source Code},
author={Azcona, David and Arora, Piyush and Hsiao, I-Han and Smeaton, Alan},
booktitle={Proceedings of the 9th International Learning Analytics & Knowledge Conference (LAK’19)},
year={2019},
organization={ACM}
}
@inproceedings{DBLP:conf/edm/CleuziouF21,
author = {Guillaume Cleuziou and
Fr{\'{e}}d{\'{e}}ric Flouvat},
editor = {Sharon I{-}Han Hsiao and
Shaghayegh (Sherry) Sahebi and
Fran{\c{c}}ois Bouchet and
Jill{-}J{\^{e}}nn Vie},
title = {Learning student program embeddings using abstract execution traces},
booktitle = {Proceedings of the 14th International Conference on Educational Data
Mining, {EDM} 2021, virtual, June 29 - July 2, 2021},
publisher = {International Educational Data Mining Society},
year = {2021},
timestamp = {Wed, 09 Mar 2022 16:47:22 +0100},
biburl = {https://dblp.org/rec/conf/edm/CleuziouF21.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
### Contributions
[More Information Needed] | [
-0.4335111677646637,
-0.6794446706771851,
0.19190768897533417,
0.023725321516394615,
0.09887578338384628,
0.1281660944223404,
-0.2890568673610687,
-0.19862589240074158,
0.3072499632835388,
0.3685608208179474,
-0.6186345219612122,
-0.8624795079231262,
-0.28410089015960693,
0.053848631680011... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
mstz/electricity | mstz | 2023-04-16T17:30:58Z | 32 | 1 | null | [
"task_categories:tabular-classification",
"size_categories:10k<n<100K",
"language:en",
"license:cc",
"electricity",
"tabular_classification",
"binary_classification",
"UCI",
"region:us"
] | 2023-04-16T17:30:58Z | 2023-04-10T23:24:07.000Z | 2023-04-10T23:24:07 | ---
language:
- en
tags:
- electricity
- tabular_classification
- binary_classification
- UCI
pretty_name: Electricity
size_categories:
- 10k<n<100K
task_categories:
- tabular-classification
configs:
- electricity
license: cc
---
# Electricity
The [Electricity dataset](https://www.openml.org/search?type=data&sort=runs&id=151&status=active) from the [OpenML repository](https://www.openml.org/).
# Configurations and tasks
| **Configuration** | **Task** | **Description** |
|-------------------|---------------------------|-------------------------|
| electricity | Binary classification | Has the electricity cost gone up?|
# Usage
```python
from datasets import load_dataset
dataset = load_dataset("mstz/electricity", "electricity")["train"]
``` | [
-0.4362109303474426,
-0.21403738856315613,
0.2541596591472626,
0.3103175461292267,
0.013085811398923397,
-0.4722193777561188,
-0.30064621567726135,
-0.09173481166362762,
-0.3210776746273041,
0.6012927293777466,
-0.1802341192960739,
-0.5867458581924438,
-0.3221243917942047,
0.43764144182205... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
mstz/pol | mstz | 2023-04-16T17:58:01Z | 32 | 0 | null | [
"task_categories:tabular-classification",
"size_categories:10k<n<100K",
"language:en",
"license:cc",
"pol",
"tabular_classification",
"binary_classification",
"UCI",
"region:us"
] | 2023-04-16T17:58:01Z | 2023-04-10T23:31:17.000Z | 2023-04-10T23:31:17 | ---
language:
- en
tags:
- pol
- tabular_classification
- binary_classification
- UCI
pretty_name: Pol
size_categories:
- 10k<n<100K
task_categories:
- tabular-classification
configs:
- pol
license: cc
---
# Pol
The [Pol dataset](https://www.openml.org/search?type=data&sort=runs&id=151&status=active) from the [OpenML repository](https://www.openml.org/).
# Configurations and tasks
| **Configuration** | **Task** | **Description** |
|-------------------|---------------------------|-------------------------|
| pol | Binary classification | Has the pol cost gone up?|
# Usage
```python
from datasets import load_dataset
dataset = load_dataset("mstz/pol", "pol")["train"]
``` | [
-0.5099862217903137,
-0.09580668807029724,
0.20247668027877808,
0.11135133355855942,
-0.3185628354549408,
-0.26086169481277466,
-0.48592066764831543,
-0.12154881656169891,
-0.13250349462032318,
0.5937511920928955,
-0.2793523967266083,
-0.6865099668502808,
-0.44796377420425415,
0.3692508339... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
mstz/glass | mstz | 2023-04-16T17:29:45Z | 32 | 0 | null | [
"task_categories:tabular-classification",
"size_categories:n<1k",
"language:en",
"license:cc",
"glass",
"tabular_classification",
"binary_classification",
"UCI",
"region:us"
] | 2023-04-16T17:29:45Z | 2023-04-12T09:53:57.000Z | 2023-04-12T09:53:57 | ---
language:
- en
tags:
- glass
- tabular_classification
- binary_classification
- UCI
pretty_name: Glass evaluation
size_categories:
- n<1k
task_categories:
- tabular-classification
configs:
- glass
- windows
- vehicles
- containers
- tableware
- headlamps
license: cc
---
# Glass
The [Glass dataset](https://archive-beta.ics.uci.edu/dataset/42/glass+identification) from the [UCI repository](https://archive-beta.ics.uci.edu).
Classify the type of glass.
# Configurations and tasks
| **Configuration** | **Task** | **Description** |
|-------------------|---------------------------|--------------------------|
| glass | Multiclass classification | Classify glass type. |
| windows | Binary classification | Is this windows glass? |
| vehicles | Binary classification | Is this vehicles glass? |
| containers | Binary classification | Is this containers glass?|
| tableware | Binary classification | Is this tableware glass? |
| headlamps | Binary classification | Is this headlamps glass? |
# Usage
```python
from datasets import load_dataset
dataset = load_dataset("mstz/glass", "glass")["train"]
``` | [
-0.2698138952255249,
-0.36356544494628906,
0.3874920904636383,
0.2826721668243408,
-0.032789308577775955,
0.03317996859550476,
-0.031438253819942474,
-0.11954358965158463,
-0.10789279639720917,
0.23925814032554626,
-0.43190324306488037,
-0.4874029755592346,
-0.6184852719306946,
0.153726369... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
emozilla/booksum-summary-analysis | emozilla | 2023-07-12T17:43:41Z | 32 | 5 | null | [
"language:en",
"region:us"
] | 2023-07-12T17:43:41Z | 2023-05-25T17:23:56.000Z | 2023-05-25T17:23:56 | ---
language: en
dataset_info:
features:
- name: chapter
dtype: string
- name: text
dtype: string
- name: type
dtype: string
splits:
- name: train
num_bytes: 215494460.97875556
num_examples: 11834
- name: test
num_bytes: 27122769.0
num_examples: 1658
- name: validation
num_bytes: 43846669.0
num_examples: 2234
download_size: 134838536
dataset_size: 286463898.9787556
---
# Dataset Card for "booksum-summary-analysis"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.6505795121192932,
-0.05818226560950279,
0.025931211188435555,
-0.007704560644924641,
-0.24464650452136993,
-0.24215656518936157,
0.3252984285354614,
0.04032689705491066,
0.7776296734809875,
0.5355315208435059,
-0.7747006416320801,
-0.8596508502960205,
-0.5941941142082214,
-0.24998302757... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
argilla/comparison-dataset-dolly-curated-falcon | argilla | 2023-06-02T14:52:40Z | 32 | 0 | null | [
"region:us"
] | 2023-06-02T14:52:40Z | 2023-06-02T11:51:48.000Z | 2023-06-02T11:51:48 | ---
{}
---
## Guidelines
These guidelines are based on the paper [Training Language Models to Follow Instructions with Human Feedback](https://cdn.openai.com/papers/Training_language_models_to_follow_instructions_with_human_feedback.pdf)
**You are given a text-based description of a task, submitted by a user.**
This task description may be in the form of an explicit instruction (e.g. "Write a story about a wise frog."). The task may also be specified indirectly, for example by using several examples of the desired behavior (e.g. given a sequence of movie reviews followed by their sentiment, followed by one movie review without a sentiment, you can assume the task is to predict the sentiment of the final review), or by generating the start of a desired output (e.g. given "There once was a wise frog named Julius," you can assume the task is to continue the story).
**You will also be given several text outputs, intended to help the user with their task.** Your job is to evaluate these outputs to ensure that they are helpful, truthful, and harmless. For most tasks, being truthful and harmless is more important than being helpful.
By **helpful**, we mean that the output should follow the user's intention and help the user solve their task. Some examples of helpful behavior:
- Writing in clear language.
- Answering the question they meant to ask, even if they mis-asked it.
- Being sensitive to internationality (e.g., "football" shouldn't mean American football, and "the president" doesn't necessarily mean the US president).
- If an instruction is too confusing, asking for clarification and explaining why the instruction is confusing.
- Not giving overly long or rambling answers, or repeating information from the question.
- Not assuming extraneous extra context outside of what's given (besides things like facts about the world), unless that's an implied part of the task. For example, if asked to "write a polite email response to this email: email body," the output shouldn't assume "I can't make it this time, but am free next weekend." But if asked to "Write an email to Socrates," then you can safely assume the output can make up a context.
By **truthful**, we mean that the output contains accurate information and doesn't mislead the user. Some examples of truthful behavior:
- On tasks like summarization, where the output should only use information from input, not making up details that are not part of the input description.
- Not producing clearly false information about the world (e.g., making up facts or promoting conspiracies). For example, the output should not state that Hillary Clinton has served time in prison.
- Avoid generating misleading information or information with questionable authenticity. For example, if the input asks "Why did Hillary Clinton go to jail?", the output should not say "It's not totally clear," but rather should refute the premise of the question.
By **harmless**, we mean that the output should not cause physical, psychological, or social harm to people; damage to or loss of equipment or property; damage to the environment; or harm to institutions or resources necessary to human well-being. Some examples of harmless behavior:
- Treating other humans with kindness, respect, and consideration; not denigrating members of certain groups or using biased language against a particular group.
- Not generating abusive, threatening, or offensive language or promoting violence.
- Not writing sexual or violent content if it's not asked for.
- Not giving bad real-world advice or promoting illegal activity.
Evaluating model outputs may involve making trade-offs between these criteria. These trade-offs will depend on the task.
## Fields
* `prompt` is of type <class 'str'>
* `response-1` is of type <class 'str'>
* `response-2` is of type <class 'str'>
## Questions
* `response_ranking` : Select response 1 or 2. Select 3 if no response is suitable.
If you select 3, provide a response using the field below, or discard the record.
Helpful: output follows the user's intention.
Truthful: output contains accurate information and doesn't mislead the user.
Harmless: the output should not cause physical, psychological, or social harm to people, property, environment, or institutions
## Load with Argilla
To load this dataset with Argilla, you'll just need to install Argilla as `pip install argilla --upgrade` and then use the following code:
```python
import argilla as rg
ds = rg.FeedbackDataset.from_huggingface('argilla/comparison-dataset-dolly-curated-falcon')
```
## Load with Datasets
To load this dataset with Datasets, you'll just need to install Datasets as `pip install datasets --upgrade` and then use the following code:
```python
from datasets import load_dataset
ds = load_dataset('argilla/comparison-dataset-dolly-curated-falcon')
```
| [
-0.23159544169902802,
-0.9705104231834412,
0.3074084520339966,
0.3496958613395691,
-0.1436929553747177,
-0.3957500755786896,
-0.042779725044965744,
-0.28415682911872864,
0.08195729553699493,
0.7481971383094788,
-0.7011102437973022,
-0.5171151161193848,
-0.5733419060707092,
0.12078346312046... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
andersonbcdefg/redteaming_eval_pairwise | andersonbcdefg | 2023-06-08T05:51:12Z | 32 | 0 | null | [
"region:us"
] | 2023-06-08T05:51:12Z | 2023-06-08T05:48:52.000Z | 2023-06-08T05:48:52 | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: response_a
dtype: string
- name: response_b
dtype: string
- name: explanation
dtype: string
- name: preferred
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 79844
num_examples: 105
download_size: 0
dataset_size: 79844
---
# Dataset Card for "redteaming_eval_pairwise"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.3679980933666229,
-0.506653904914856,
0.07263664901256561,
0.5199450254440308,
-0.1672389954328537,
0.2828477621078491,
0.19045229256153107,
-0.05367244407534599,
1.016118049621582,
0.40509259700775146,
-0.5684422254562378,
-0.725041389465332,
-0.512459933757782,
-0.11324237287044525,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
dominguesm/wikipedia-ptbr-20230601 | dominguesm | 2023-07-13T12:31:13Z | 32 | 3 | null | [
"language:pt",
"region:us"
] | 2023-07-13T12:31:13Z | 2023-06-17T18:45:12.000Z | 2023-06-17T18:45:12 | ---
language: pt
dataset_info:
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 2424207600
num_examples: 993101
- name: test
num_bytes: 269529120
num_examples: 110345
download_size: 1626930291
dataset_size: 2693736720
---
# Dataset Card for "wikipedia-ptbr-20230601"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.9197243452072144,
-0.24857653677463531,
0.1542305201292038,
0.4110976755619049,
-0.44194138050079346,
-0.11344607174396515,
0.29406872391700745,
-0.09284936636686325,
0.7256175875663757,
0.32346752285957336,
-0.7998464107513428,
-0.6794225573539734,
-0.5766922235488892,
-0.1103593930602... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
NavidVafaei/surnames | NavidVafaei | 2023-06-24T07:05:54Z | 32 | 0 | null | [
"region:us"
] | 2023-06-24T07:05:54Z | 2023-06-24T07:05:13.000Z | 2023-06-24T07:05:13 | Entry not found | [
-0.32276472449302673,
-0.22568407654762268,
0.8622258901596069,
0.4346148371696472,
-0.5282984972000122,
0.7012965679168701,
0.7915717363357544,
0.07618629932403564,
0.7746022939682007,
0.2563222646713257,
-0.785281777381897,
-0.22573848068714142,
-0.9104482531547546,
0.5715669393539429,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
CVdatasets/food101_50 | CVdatasets | 2023-06-29T19:58:54Z | 32 | 0 | null | [
"region:us"
] | 2023-06-29T19:58:54Z | 2023-06-29T14:08:05.000Z | 2023-06-29T14:08:05 | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': apple_pie
'1': baby_back_ribs
'2': beef_tartare
'3': beignets
'4': bruschetta
'5': cannoli
'6': carrot_cake
'7': ceviche
'8': cheesecake
'9': cheese_plate
'10': chicken_curry
'11': chicken_wings
'12': chocolate_cake
'13': chocolate_mousse
'14': cup_cakes
'15': donuts
'16': dumplings
'17': edamame
'18': filet_mignon
'19': fish_and_chips
'20': french_onion_soup
'21': french_toast
'22': fried_calamari
'23': garlic_bread
'24': guacamole
'25': gyoza
'26': hamburger
'27': hot_and_sour_soup
'28': hot_dog
'29': huevos_rancheros
'30': ice_cream
'31': macarons
'32': miso_soup
'33': mussels
'34': nachos
'35': omelette
'36': onion_rings
'37': oysters
'38': pizza
'39': poutine
'40': prime_rib
'41': ravioli
'42': red_velvet_cake
'43': samosa
'44': scallops
'45': spring_rolls
'46': steak
'47': strawberry_shortcake
'48': tiramisu
'49': waffles
splits:
- name: train
num_bytes: 1892100970.0
num_examples: 37500
- name: validation
num_bytes: 628838834.0
num_examples: 12500
download_size: 1091112117
dataset_size: 2520939804.0
---
# Dataset Card for "food101_50"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.507836103439331,
-0.09867709875106812,
0.06390903145074844,
0.19038045406341553,
0.20047476887702942,
-0.0663052424788475,
0.3235296905040741,
-0.11788439005613327,
1.0379457473754883,
0.4370889365673065,
-0.7909632325172424,
-0.657367467880249,
-0.625114917755127,
0.08334456384181976,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
totally-not-an-llm/sharegpt-hyperfiltered-3k | totally-not-an-llm | 2023-07-13T02:17:45Z | 32 | 6 | null | [
"license:apache-2.0",
"region:us"
] | 2023-07-13T02:17:45Z | 2023-07-11T01:54:08.000Z | 2023-07-11T01:54:08 | ---
license: apache-2.0
---
# sharegpt-hyperfiltered-3k
90k ShareGPT convos brought down to ~3k (3243) via language filtering, keyword detection, deduping, and regex. The following things were done (a rough sketch of this pipeline appears after the list):
- Deduplication on first message from human
- Remove non-English convos
- Remove censorship, refusals, and alignment
- Remove incorrect/low-quality answers
- Remove creative tasks
- ChatGPT's creative outputs are very censored and robotic; I think the base model can do better.
- Remove URLs
- Remove cutoffs
- Remove math/reasoning questions
- It sucks without CoT prompting, so this data should be mixed with better reasoning examples like OpenOrca or Dolphin.
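A rough sketch of the kind of pipeline described above. The keyword list, regexes, and the `conversations`/`from`/`value` record layout are illustrative assumptions; the exact filters used for this dataset are not published here.

```python
import re

REFUSAL_MARKERS = ["as an ai language model", "i cannot", "i'm sorry, but"]  # illustrative only

def first_human_turn(convo):
    return next(m["value"] for m in convo["conversations"] if m["from"] == "human")

def keep(convo):
    text = " ".join(m["value"] for m in convo["conversations"])
    if not text.isascii():                               # crude non-English filter
        return False
    if any(k in text.lower() for k in REFUSAL_MARKERS):  # refusals / alignment boilerplate
        return False
    if re.search(r"https?://\S+", text):                 # URLs
        return False
    return True

def dedupe(convos):
    seen, kept = set(), []
    for convo in convos:
        first = first_human_turn(convo)
        if first not in seen:
            seen.add(first)
            kept.append(convo)
    return kept

# usage: filtered = dedupe([c for c in raw_convos if keep(c)])
```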
| [
-1.0935722589492798,
-0.8992806673049927,
0.20239514112472534,
0.14621256291866302,
-0.5299379229545593,
-0.15227051079273224,
-0.24433806538581848,
-0.5703834891319275,
0.07879777997732162,
1.0087836980819702,
-0.6141847968101501,
-0.36647382378578186,
-0.8552340269088745,
0.0750616490840... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Rhasan97/test_data | Rhasan97 | 2023-07-12T18:39:15Z | 32 | 0 | null | [
"license:apache-2.0",
"region:us"
] | 2023-07-12T18:39:15Z | 2023-07-12T18:38:03.000Z | 2023-07-12T18:38:03 | ---
license: apache-2.0
---
| [
-0.12853392958641052,
-0.18616779148578644,
0.6529127955436707,
0.49436280131340027,
-0.19319361448287964,
0.23607419431209564,
0.36072003841400146,
0.050563063472509384,
0.579365611076355,
0.7400140762329102,
-0.6508104205131531,
-0.23783954977989197,
-0.7102249264717102,
-0.0478260256350... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Gustrd/dolly-15k-libretranslate-pt | Gustrd | 2023-07-18T02:04:29Z | 32 | 4 | null | [
"task_categories:question-answering",
"task_categories:summarization",
"size_categories:10K<n<100K",
"language:pt",
"license:cc-by-sa-3.0",
"region:us"
] | 2023-07-18T02:04:29Z | 2023-07-13T12:30:13.000Z | 2023-07-13T12:30:13 | ---
license: cc-by-sa-3.0
task_categories:
- question-answering
- summarization
language:
- pt
size_categories:
- 10K<n<100K
---
# Summary
databricks-dolly-15k ( https://huggingface.co/datasets/databricks/databricks-dolly-15k/ ) is an open source dataset of instruction-following records generated by thousands of Databricks employees in several of the behavioral categories outlined in the InstructGPT paper, including brainstorming, classification, closed QA, generation, information extraction, open QA, and summarization.
This is a Portuguese translation done with LibreTranslate ( https://github.com/LibreTranslate/LibreTranslate ).
This dataset can be used for any purpose, whether academic or commercial, under the terms of the Creative Commons Attribution-ShareAlike 3.0 Unported License.
Supported Tasks:
- Training LLMs
- Synthetic Data Generation
- Data Augmentation
Languages: Portuguese
Version: 1.0
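A minimal loading sketch; the split name and the original dolly-15k column layout (instruction, context, response, category) are assumed to carry over from the source dataset.

```python
from datasets import load_dataset

ds = load_dataset("Gustrd/dolly-15k-libretranslate-pt", split="train")
print(ds[0])  # expected keys: instruction, context, response, category (assumed)
```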
---
# Original Readme
Dataset Overview
databricks-dolly-15k is a corpus of more than 15,000 records generated by thousands of Databricks employees to enable large language models to exhibit the magical interactivity of ChatGPT. Databricks employees were invited to create prompt / response pairs in each of eight different instruction categories, including the seven outlined in the InstructGPT paper, as well as an open-ended free-form category. The contributors were instructed to avoid using information from any source on the web with the exception of Wikipedia (for particular subsets of instruction categories), and explicitly instructed to avoid using generative AI in formulating instructions or responses. Examples of each behavior were provided to motivate the types of questions and instructions appropriate to each category.
Halfway through the data generation process, contributors were given the option of answering questions posed by other contributors. They were asked to rephrase the original question and only select questions they could be reasonably expected to answer correctly.
For certain categories contributors were asked to provide reference texts copied from Wikipedia. Reference text (indicated by the context field in the actual dataset) may contain bracketed Wikipedia citation numbers (e.g. [42]) which we recommend users remove for downstream applications.
Intended Uses
While immediately valuable for instruction fine tuning large language models, as a corpus of human-generated instruction prompts, this dataset also presents a valuable opportunity for synthetic data generation in the methods outlined in the Self-Instruct paper. For example, contributor--generated prompts could be submitted as few-shot examples to a large open language model to generate a corpus of millions of examples of instructions in each of the respective InstructGPT categories.
Likewise, both the instructions and responses present fertile ground for data augmentation. A paraphrasing model might be used to restate each prompt or short responses, with the resulting text associated to the respective ground-truth sample. Such an approach might provide a form of regularization on the dataset that could allow for more robust instruction-following behavior in models derived from these synthetic datasets.
Dataset
Purpose of Collection
As part of our continuing commitment to open source, Databricks developed what is, to the best of our knowledge, the first open source, human-generated instruction corpus specifically designed to enable large language models to exhibit the magical interactivity of ChatGPT. Unlike other datasets that are limited to non-commercial use, this dataset can be used, modified, and extended for any purpose, including academic or commercial applications.
Sources
Human-generated data: Databricks employees were invited to create prompt / response pairs in each of eight different instruction categories.
Wikipedia: For instruction categories that require an annotator to consult a reference text (information extraction, closed QA, summarization) contributors selected passages from Wikipedia for particular subsets of instruction categories. No guidance was given to annotators as to how to select the target passages.
Annotator Guidelines
To create a record, employees were given a brief description of the annotation task as well as examples of the types of prompts typical of each annotation task. Guidelines were succinct by design so as to encourage a high task completion rate, possibly at the cost of rigorous compliance to an annotation rubric that concretely and reliably operationalizes the specific task. Caveat emptor.
The annotation guidelines for each of the categories are as follows:
Creative Writing: Write a question or instruction that requires a creative, open-ended written response. The instruction should be reasonable to ask of a person with general world knowledge and should not require searching. In this task, your prompt should give very specific instructions to follow. Constraints, instructions, guidelines, or requirements all work, and the more of them the better.
Closed QA: Write a question or instruction that requires factually correct response based on a passage of text from Wikipedia. The question can be complex and can involve human-level reasoning capabilities, but should not require special knowledge. To create a question for this task include both the text of the question as well as the reference text in the form.
Open QA: Write a question that can be answered using general world knowledge or at most a single search. This task asks for opinions and facts about the world at large and does not provide any reference text for consultation.
Summarization: Give a summary of a paragraph from Wikipedia. Please don't ask questions that will require more than 3-5 minutes to answer. To create a question for this task include both the text of the question as well as the reference text in the form.
Information Extraction: These questions involve reading a paragraph from Wikipedia and extracting information from the passage. Everything required to produce an answer (e.g. a list, keywords etc) should be included in the passages. To create a question for this task include both the text of the question as well as the reference text in the form.
Classification: These prompts contain lists or examples of entities to be classified, e.g. movie reviews, products, etc. In this task the text or list of entities under consideration is contained in the prompt (e.g. there is no reference text.). You can choose any categories for classification you like, the more diverse the better.
Brainstorming: Think up lots of examples in response to a question asking to brainstorm ideas.
Personal or Sensitive Data
This dataset contains public information (e.g., some information from Wikipedia). To our knowledge, there are no private person’s personal identifiers or sensitive information.
Known Limitations
Wikipedia is a crowdsourced corpus and the contents of this dataset may reflect the bias, factual errors and topical focus found in Wikipedia
Some annotators may not be native English speakers
Annotator demographics and subject matter may reflect the makeup of Databricks employees
--- | [
-0.4128796458244324,
-1.0411674976348877,
0.1763751357793808,
0.25832128524780273,
-0.11719150096178055,
-0.1040940061211586,
-0.25467443466186523,
-0.1644556075334549,
-0.0044354223646223545,
0.4907342195510864,
-0.6176873445510864,
-0.5764173865318298,
-0.23640704154968262,
0.34442701935... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
katielink/med_qa | katielink | 2023-08-31T00:55:31Z | 32 | 1 | null | [
"license:mit",
"region:us"
] | 2023-08-31T00:55:31Z | 2023-07-22T22:56:09.000Z | 2023-07-22T22:56:09 | ---
license: mit
---
| [
-0.1285335123538971,
-0.1861683875322342,
0.6529128551483154,
0.49436232447624207,
-0.19319400191307068,
0.23607441782951355,
0.36072009801864624,
0.05056373029947281,
0.5793656706809998,
0.7400146722793579,
-0.650810182094574,
-0.23784008622169495,
-0.7102247476577759,
-0.0478255338966846... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
pvduy/arena_synth | pvduy | 2023-08-02T16:02:03Z | 32 | 0 | null | [
"region:us"
] | 2023-08-02T16:02:03Z | 2023-08-02T16:01:59.000Z | 2023-08-02T16:01:59 | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: selected
dtype: string
- name: rejected
dtype: string
splits:
- name: train
num_bytes: 53190421
num_examples: 29851
- name: test
num_bytes: 14269380
num_examples: 8000
download_size: 36514341
dataset_size: 67459801
---
# Dataset Card for "arena_synth"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.6791266202926636,
-0.3554321825504303,
0.3577691912651062,
0.2649043798446655,
-0.04687168076634407,
0.0803084447979927,
0.302766352891922,
-0.11994019895792007,
0.7521111965179443,
0.3599916696548462,
-0.9251846075057983,
-0.7248145341873169,
-0.30583828687667847,
-0.1080455332994461,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
lionelchg/dolly_closed_qa | lionelchg | 2023-08-09T18:27:56Z | 32 | 1 | null | [
"region:us"
] | 2023-08-09T18:27:56Z | 2023-08-07T23:30:03.000Z | 2023-08-07T23:30:03 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: instruction
dtype: string
- name: context
dtype: string
- name: response
dtype: string
- name: category
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 4661240.764805415
num_examples: 1684
- name: test
num_bytes: 246348.23519458546
num_examples: 89
download_size: 2899694
dataset_size: 4907589.0
---
# Dataset Card for "dolly_closed_qa"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.2750625014305115,
-0.3152807354927063,
0.1158432811498642,
0.12171061336994171,
-0.17611923813819885,
-0.15413111448287964,
0.5002150535583496,
0.12821532785892487,
0.8441863059997559,
0.7573384642601013,
-0.9300277829170227,
-0.9414968490600586,
-0.2412548065185547,
-0.1915929317474365... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
imvladikon/QAmeleon | imvladikon | 2023-08-13T19:36:48Z | 32 | 0 | null | [
"task_categories:question-answering",
"size_categories:10K<n<100K",
"language:ar",
"language:bn",
"language:fi",
"language:id",
"language:ko",
"language:ru",
"language:sw",
"language:te",
"license:cc-by-4.0",
"arxiv:2211.08264",
"region:us"
] | 2023-08-13T19:36:48Z | 2023-08-13T19:29:03.000Z | 2023-08-13T19:29:03 | ---
language:
- ar
- bn
- fi
- id
- ko
- ru
- sw
- te
license: cc-by-4.0
size_categories:
- 10K<n<100K
task_categories:
- question-answering
dataset_info:
- config_name: ar
features:
- name: language
dtype: string
- name: question
dtype: string
- name: answer
dtype: string
- name: passage
dtype: string
splits:
- name: train
num_bytes: 4773335
num_examples: 6966
download_size: 0
dataset_size: 4773335
- config_name: bn
features:
- name: language
dtype: string
- name: question
dtype: string
- name: answer
dtype: string
- name: passage
dtype: string
splits:
- name: train
num_bytes: 6458441
num_examples: 6084
download_size: 0
dataset_size: 6458441
- config_name: default
features:
- name: language
dtype: string
- name: question
dtype: string
- name: answer
dtype: string
- name: passage
dtype: string
splits:
- name: train
num_bytes: 32190633
num_examples: 47173
download_size: 16811173
dataset_size: 32190633
- config_name: fi
features:
- name: language
dtype: string
- name: question
dtype: string
- name: answer
dtype: string
- name: passage
dtype: string
splits:
- name: train
num_bytes: 2158030
num_examples: 5028
download_size: 0
dataset_size: 2158030
- config_name: id
features:
- name: language
dtype: string
- name: question
dtype: string
- name: answer
dtype: string
- name: passage
dtype: string
splits:
- name: train
num_bytes: 2635540
num_examples: 6797
download_size: 0
dataset_size: 2635540
- config_name: ko
features:
- name: language
dtype: string
- name: question
dtype: string
- name: answer
dtype: string
- name: passage
dtype: string
splits:
- name: train
num_bytes: 5074624
num_examples: 6471
download_size: 0
dataset_size: 5074624
- config_name: ru
features:
- name: language
dtype: string
- name: question
dtype: string
- name: answer
dtype: string
- name: passage
dtype: string
splits:
- name: train
num_bytes: 3952632
num_examples: 5557
download_size: 0
dataset_size: 3952632
- config_name: sw
features:
- name: language
dtype: string
- name: question
dtype: string
- name: answer
dtype: string
- name: passage
dtype: string
splits:
- name: train
num_bytes: 2113909
num_examples: 5597
download_size: 0
dataset_size: 2113909
- config_name: te
features:
- name: language
dtype: string
- name: question
dtype: string
- name: answer
dtype: string
- name: passage
dtype: string
splits:
- name: train
num_bytes: 5024122
num_examples: 4673
download_size: 0
dataset_size: 5024122
configs:
- config_name: ar
data_files:
- split: train
path: ar/train-*
- config_name: bn
data_files:
- split: train
path: bn/train-*
- config_name: default
data_files:
- split: train
path: data/train-*
- config_name: fi
data_files:
- split: train
path: fi/train-*
- config_name: id
data_files:
- split: train
path: id/train-*
- config_name: ko
data_files:
- split: train
path: ko/train-*
- config_name: ru
data_files:
- split: train
path: ru/train-*
- config_name: sw
data_files:
- split: train
path: sw/train-*
- config_name: te
data_files:
- split: train
path: te/train-*
---
# Dataset Card for "QAmeleon"
QAmeleon introduces synthetic multilingual QA data in 8 languages, created with PaLM-540B, a large language model. The data was generated by prompt tuning PaLM with only five examples per language. We use the synthetic data to finetune downstream QA models, leading to improved accuracy in comparison to English-only and translation-based baselines.
Data available at https://storage.googleapis.com/qameleon/qamelon_pt_accepted.csv
More details can be found in the [QAmeleon: Multilingual QA with Only 5 Examples](https://arxiv.org/abs/2211.08264) which can be cited as follows:
```
@misc{agrawal2022qameleon,
title={QAmeleon: Multilingual QA with Only 5 Examples},
author={Priyanka Agrawal and Chris Alberti and Fantine Huot and Joshua Maynez and Ji Ma and Sebastian Ruder and Kuzman Ganchev and Dipanjan Das and Mirella Lapata},
year={2022},
eprint={2211.08264},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
This dataset contains a total of 47,173 question-answer instances across 8 languages; the per-language counts are as follows.
|Language | Count |
|---------|------:|
|ar |6966 |
|bn |6084 |
|fi |5028 |
|id |6797 |
|ko |6471 |
|ru |5557 |
|sw |5597 |
|te |4673 |
|**Total** |**47173**|
The QAmeleon dataset is released under the [CC-BY 4.0](https://creativecommons.org/licenses/by/4.0/) license.
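Each language listed above is exposed as its own config (see the YAML metadata of this card); a minimal loading sketch with 🤗 Datasets:
```python
from datasets import load_dataset

# Load the Arabic subset; any of ar, bn, fi, id, ko, ru, sw, te can be used
qameleon_ar = load_dataset("imvladikon/QAmeleon", "ar", split="train")

# Each row carries `language`, `question`, `answer` and `passage` fields
print(qameleon_ar[0]["question"])
print(qameleon_ar[0]["answer"])
```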
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.535263180732727,
-0.5689687132835388,
0.20994840562343597,
0.12691345810890198,
-0.14306876063346863,
0.11955150961875916,
-0.17186953127384186,
-0.4678724706172943,
0.38967400789260864,
0.5912100672721863,
-0.5643091201782227,
-0.7278550267219543,
-0.09350769966840744,
0.43796458840370... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
CATIE-AQ/orange_sum_fr_prompt_summarization | CATIE-AQ | 2023-10-11T12:24:23Z | 32 | 0 | null | [
"task_categories:summarization",
"annotations_creators:found",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:orange_sum",
"language:fr",
"license:cc-by-sa-4.0",
"DFP",
"french prompts",
"region:us"
] | 2023-10-11T12:24:23Z | 2023-08-21T14:45:07.000Z | 2023-08-21T14:45:07 | ---
language:
- fr
license: cc-by-sa-4.0
size_categories:
- 100K<n<1M
task_categories:
- summarization
tags:
- DFP
- french prompts
annotations_creators:
- found
language_creators:
- found
multilinguality:
- monolingual
source_datasets:
- orange_sum
---
# orange_sum_fr_prompt_summarization
## Summary
**orange_sum_fr_prompt_summarization** is a subset of the [**Dataset of French Prompts (DFP)**](https://huggingface.co/datasets/CATIE-AQ/DFP).
It contains **683,228** rows that can be used for a summarization task.
The original data (without prompts) comes from the dataset [orange_sum](https://huggingface.co/datasets/orange_sum) by Eddine et al.
A list of prompts (see below) was then applied in order to build the input and target columns and thus obtain the same format as the [xP3](https://huggingface.co/datasets/bigscience/xP3) dataset by Muennighoff et al.
## Prompts used
### List
28 prompts were created for this dataset. The logic applied consists of proposing each prompt in three forms: an impersonal (infinitive) form, the informal 'tu' form (tutoiement) and the formal 'vous' form (vouvoiement).
```
'Résumer le texte suivant : "'+document+'"',
'Résume le texte suivant : "'+document+'"',
'Résumez le texte suivant : "'+document+'"',
'Résumer le texte suivant en quelques mots : "'+document+'"',
'Résume le texte suivant en quelques mots : "'+document+'"',
'Résumez le texte suivant en quelques mots : "'+document+'"',
"Condenser le texte à l'essentiel :" +document,
"Condense le texte à l'essentiel :" +document,
"Condensez le texte à l'essentiel :" +document,
'"'+document+' Rédiger un résumé du texte ci-dessus :',
'"'+document+' Rédige un résumé du texte ci-dessus :',
'"'+document+' Rédigez un résumé du texte ci-dessus :',
'Premièrement, lire le texte ci-dessous. \n\n "'+document+'"\n\n Maintenant, rédiger un court résumé.',
'Premièrement, lis le texte ci-dessous. \n\n "'+document+'"\n\n Maintenant, rédige un court résumé.',
'Premièrement, lisez le texte ci-dessous. \n\n "'+document+'"\n\n Maintenant, rédigez un court résumé.',
'Article : "'+document+'"/n Résumé : ',
'"'+document+' Comment reformuler cela en quelques mots ?',
'"'+document+' Comment peux-tu reformuler cela en quelques mots ?',
'"'+document+' Comment pouvez-vous reformuler cela en quelques mots ?',
'Résumer ce document : "'+document+'" Résumé :',
'Résume ce document : "'+document+'" Résumé :',
'Résumez ce document : "'+document+'" Résumé :',
'"'+document+' Compte tenu du document ci-dessus, écrire une phrase pour le résumer :',
'"'+document+' Compte tenu du document ci-dessus, écris une phrase pour le résumer :',
'"'+document+' Compte tenu du document ci-dessus, écrivez une phrase pour le résumer :',
'"'+document+' Rédiger un résumé du texte ci-dessus : ',
'"'+document+' Rédige un résumé du texte ci-dessus : ',
'"'+document+' Rédigez un résumé du texte ci-dessus : '
```
### Features used in the prompts
In the prompt list above, `document` and `targets` have been constructed from:
```
orange_sum = load_dataset('orange_sum','abstract')
document = orange_sum['train'][i]['text']
targets = orange_sum['train'][i]['summary']
```
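For illustration, this is roughly how one of the templates above turns an `orange_sum` example into an `inputs`/`targets` pair (a sketch only; the exact assembly script is not part of this card):
```
from datasets import load_dataset

orange_sum = load_dataset('orange_sum', 'abstract')

i = 0
document = orange_sum['train'][i]['text']
targets = orange_sum['train'][i]['summary']

# Apply one of the 28 templates listed above
inputs = 'Résumer le texte suivant : "' + document + '"'
print({'inputs': inputs, 'targets': targets})
```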
# Splits
- `train` with 599,228 samples
- `valid` with 42,000 samples
- `test` with 42,000 samples
# How to use?
```
from datasets import load_dataset
dataset = load_dataset("CATIE-AQ/orange_sum_fr_prompt_summarization")
```
# Citation
## Original data
> @article{eddine2020barthez,
title={BARThez: a Skilled Pretrained French Sequence-to-Sequence Model},
author={Eddine, Moussa Kamal and Tixier, Antoine J-P and Vazirgiannis, Michalis},
journal={arXiv preprint arXiv:2010.12321},
year={2020}
}
## This Dataset
> @misc {centre_aquitain_des_technologies_de_l'information_et_electroniques_2023,
author = { {Centre Aquitain des Technologies de l'Information et Electroniques} },
title = { DFP (Revision 1d24c09) },
year = 2023,
url = { https://huggingface.co/datasets/CATIE-AQ/DFP },
doi = { 10.57967/hf/1200 },
publisher = { Hugging Face }
}
## License
CC-BY-SA-4.0 | [
-0.20519442856311798,
-0.3684827983379364,
0.48465216159820557,
0.6128813624382019,
-0.12582358717918396,
-0.1302635818719864,
-0.07353479415178299,
0.1607370525598526,
0.460103839635849,
0.5324563384056091,
-0.7473490238189697,
-0.7011473178863525,
-0.5388069748878479,
0.5931873321533203,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
philschmid/meta-shepherd-human-data | philschmid | 2023-08-23T14:38:43Z | 32 | 3 | null | [
"license:cc-by-nc-4.0",
"region:us"
] | 2023-08-23T14:38:43Z | 2023-08-23T14:36:13.000Z | 2023-08-23T14:36:13 | ---
dataset_info:
features:
- name: id
dtype: int64
- name: dataset
dtype: string
- name: question
dtype: string
- name: answer
dtype: string
- name: feedback
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 2588873
num_examples: 1317
download_size: 1319368
dataset_size: 2588873
license: cc-by-nc-4.0
---
# Dataset Card for "meta-shepherd-human-data"
Original Dataset: https://github.com/facebookresearch/Shepherd
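A minimal loading sketch (the split and column names follow the `dataset_info` block above; this assumes the data was pushed in the standard 🤗 Datasets format):
```python
from datasets import load_dataset

shepherd = load_dataset("philschmid/meta-shepherd-human-data", split="train")

# Columns: id, dataset, question, answer, feedback, text
print(shepherd[0]["question"])
print(shepherd[0]["feedback"])
```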
## Example
```bash
### Question: Where on the planet would you expect a bald eagle to live?
Here are the options:
Option 1: colorado
Option 2: outside
Option 3: protection
Option 4: zoo exhibit
Option 5: world
Please choose the correct option and justify your choice:
### Answer: Bald eagles are found throughout most of North America, from Alaska and Canada south to Mexico. They are also found in some parts of Europe and Asia. In the United States, bald eagles are usually seen near large bodies of water where they hunt for fish. They build their nests high up in tall trees near rivers, lakes, marshes, and coastal areas. Because of hunting and loss of habitat, bald eagle populations declined dramatically during the twentieth century. Conservation efforts have helped increase their numbers, and they were removed from the U.S. Endangered Species List in 2007.
### Feedback: The answer adds extra information, last 2 sentences are unnecessary. It does not state which options specifically are where bald eagles might live, like outside, Colorado, Zoo Exhibit, and World.
``` | [
-0.7740611433982849,
-0.7563286423683167,
0.5850846767425537,
0.22140203416347504,
-0.262776643037796,
-0.13362881541252136,
0.2000606209039688,
-0.4922613799571991,
0.564554750919342,
0.5488649606704712,
-0.9775041341781616,
-0.7827737331390381,
-0.38124215602874756,
0.38392069935798645,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
mlabonne/Evol-Instruct-Python-26k | mlabonne | 2023-08-25T16:29:36Z | 32 | 4 | null | [
"region:us"
] | 2023-08-25T16:29:36Z | 2023-08-25T13:25:34.000Z | 2023-08-25T13:25:34 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: output
dtype: string
- name: instruction
dtype: string
splits:
- name: train
num_bytes: 39448413.53337422
num_examples: 26588
download_size: 22381182
dataset_size: 39448413.53337422
---
# Evol-Instruct-Python-26k
Filtered version of the [`nickrosh/Evol-Instruct-Code-80k-v1`](https://huggingface.co/datasets/nickrosh/Evol-Instruct-Code-80k-v1) dataset that only keeps Python code (26,588 samples). You can find a smaller version of it here [`mlabonne/Evol-Instruct-Python-1k`](https://huggingface.co/datasets/mlabonne/Evol-Instruct-Python-1k).
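The filtering script itself is not included here; a minimal sketch of one plausible way to reproduce a Python-only subset from the parent dataset (the keep condition is an assumption, not necessarily the original criterion):
```python
from datasets import load_dataset

full = load_dataset("nickrosh/Evol-Instruct-Code-80k-v1", split="train")

# Keep rows whose answer contains a fenced Python code block (assumed heuristic)
python_only = full.filter(lambda row: "```python" in row["output"])
print(len(python_only))
```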
Here is the distribution of the number of tokens in each row (instruction + output) using Llama's tokenizer:
 | [
-0.3207612633705139,
-0.4751726984977722,
0.11660826206207275,
0.3001834452152252,
-0.5573354363441467,
-0.09188243746757507,
0.12438415735960007,
-0.18595488369464874,
0.7040348052978516,
0.5393690466880798,
-0.5674087405204773,
-0.7265970706939697,
-0.36755940318107605,
0.370246559381485... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Siddish/change-my-view-subreddit-cleaned | Siddish | 2023-09-02T16:00:46Z | 32 | 0 | null | [
"task_categories:text-generation",
"size_categories:1K<n<10K",
"language:en",
"region:us"
] | 2023-09-02T16:00:46Z | 2023-09-02T15:35:51.000Z | 2023-09-02T15:35:51 | ---
task_categories:
- text-generation
language:
- en
pretty_name: Opinionated LLM with r/CMV
size_categories:
- 1K<n<10K
---
# Opinionated LLM | [
-0.2589404881000519,
-0.327580064535141,
0.7224818468093872,
0.4730435907840729,
-0.6643589735031128,
0.3700020909309387,
0.14124444127082825,
0.11667556315660477,
0.4535891115665436,
0.8351244926452637,
-0.2220725268125534,
-0.5905371308326721,
-0.9601101875305176,
-0.024362921714782715,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
codefuse-ai/CodeExercise-Python-27k | codefuse-ai | 2023-10-23T08:02:40Z | 32 | 43 | null | [
"license:cc-by-nc-sa-4.0",
"arxiv:2303.17760",
"region:us"
] | 2023-10-23T08:02:40Z | 2023-09-07T07:47:24.000Z | 2023-09-07T07:47:24 | ---
license: cc-by-nc-sa-4.0
viewer: false
---
# Dataset Card for CodeFuse-CodeExercise-Python-27k
<div align='center'>

[[中文]](#chinese) [[English]](#english)
</div>
<a id="english"></a>
### Dataset Description
This dataset consists of 27K Python programming exercises (in English), covering hundreds of Python-related topics including basic syntax and data structures, algorithm applications, database queries, machine learning, and more.
*Please note that this dataset was generated with the help of ChatGPT (gpt-3.5-turbo) and [Camel](https://arxiv.org/abs/2303.17760), and has not undergone strict validation. There may be errors or semantic duplicates in the questions or answers, so please use it with caution.*
## Field Structure
```
{
"dataset": #Dataset name,
"id": #Sample ID,
"chat_rounds": [
{
"role": "human",
"content": #instruction,
"chat_round_id": 0
},
{
"role": "bot",
"content": #output,
"chat_round_id": 1
}]
}
```
### Examples
```
{
"data_name": "CodeExercise-Py",
"id": 27219,
"chat_rounds": [
{
"role": "human",
"content": "Write a Python program to rotate an array by a given number of steps.",
"chat_round_id": 0
},
{
"role": "bot",
"content": "Sure! Here's a Python program that rotates an array by a given number of steps.\n\n```python\ndef rotate_array(arr, steps):\n \"\"\"\n Rotate an array by the given number of steps.\n\n Parameters:\n arr (list): The input array.\n steps (int): The number of steps to rotate the array. Positive steps rotate it to the right,\n while negative steps rotate it to the left.\n\n Returns:\n list: The rotated array.\n\n Examples:\n >>> rotate_array([1, 2, 3, 4, 5], 2)\n [4, 5, 1, 2, 3]\n\n >>> rotate_array([1, 2, 3, 4, 5], -1)\n [2, 3, 4, 5, 1]\n \"\"\"\n arr_length = len(arr)\n steps = steps % arr_length # Handle steps larger than array length\n\n if steps == 0: # No rotation needed\n return arr\n\n rotated_arr = arr[-steps:] + arr[:-steps] # Split the list and concatenate in the desired order\n\n return rotated_arr\n\n\n# Example usage\narr = [1, 2, 3, 4, 5]\nsteps = 2\nrotated_arr = rotate_array(arr, steps)\nprint(rotated_arr)\n```\n\nThe `rotate_array()` function takes an input array `arr` and the number of steps `steps` to rotate the array. It first calculates the actual number of rotations by performing a modulo operation on `steps` with the length of the input array. This ensures that even if `steps` is larger than the array length, we always perform the appropriate number of rotations.\n\nIf `steps` is 0, the function returns the input array as no rotation is needed.\n\nOtherwise, the function splits the input array into two parts based on the number of steps: the last `steps` elements (using negative indexing) and the remaining elements. These two parts are then concatenated in the desired order to obtain the rotated array.\n\nFinally, the program demonstrates an example usage by rotating an array `[1, 2, 3, 4, 5]` by 2 steps (to the right) and printing the resulting rotated array.",
"chat_round_id": 1
}]
}
```
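For supervised fine-tuning, a record in this structure is typically flattened into a single (instruction, response) pair; below is a minimal sketch in plain Python (the JSONL file name is only an assumption about how the records are stored locally):
```
import json

def to_pair(record):
    """Map one chat_rounds record to an (instruction, response) tuple."""
    rounds = {r["role"]: r["content"] for r in record["chat_rounds"]}
    return rounds["human"], rounds["bot"]

# Hypothetical file name -- adjust to wherever the records are stored
with open("CodeExercise-Python-27k.jsonl", encoding="utf-8") as f:
    for line in f:
        instruction, response = to_pair(json.loads(line))
        # ... feed (instruction, response) into a fine-tuning pipeline
        break
```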
## Creation

* Step 1: Curate a seed set of Python knowledge points.
* Step 2: Embed each seed into a fixed task template to obtain a "Task Prompt", which instructs ChatGPT to generate exercise questions related to the given knowledge point.
* Step 3: Use Camel to refine the "Task Prompt" obtained in step 2, in order to achieve more accurate and diverse descriptions.
* Step 4: Input the obtained Task Prompt into ChatGPT to generate exercise questions (instructions) corresponding to the knowledge point.
* Step 5: For each exercise question (instruction), leverage ChatGPT to generate the corresponding answer.
* Step 6: Assemble each question with its answer and remove duplicates.
<a id="chinese"></a>
## 简介
该数据集由2.7万道Python编程练习题(英文)组成,覆盖基础语法与数据结构、算法应用、数据库查询、机器学习等数百个Python相关知识点。
注意:该数据集是借助ChatGPT (gpt-turbo-3.5)和[Camel](https://arxiv.org/abs/2303.17760)生成,未经严格校验,题目或答案可能存在错误或语义重复,使用时请注意。
## 数据字段
```
{
"dataset": #数据集名称,
"id": #样本ID,
"chat_rounds": [
{
"role": "human",
"content": #指令内容,
"chat_round_id": 0
},
{
"role": "bot",
"content": #答案内容,
"chat_round_id": 1
}]
}
```
### 样例
```
{
"data_name": "CodeExercise-Py",
"id": 27219,
"chat_rounds": [
{
"role": "human",
"content": "Write a Python program to rotate an array by a given number of steps.",
"chat_round_id": 0
},
{
"role": "bot",
"content": "Sure! Here's a Python program that rotates an array by a given number of steps.\n\n```python\ndef rotate_array(arr, steps):\n \"\"\"\n Rotate an array by the given number of steps.\n\n Parameters:\n arr (list): The input array.\n steps (int): The number of steps to rotate the array. Positive steps rotate it to the right,\n while negative steps rotate it to the left.\n\n Returns:\n list: The rotated array.\n\n Examples:\n >>> rotate_array([1, 2, 3, 4, 5], 2)\n [4, 5, 1, 2, 3]\n\n >>> rotate_array([1, 2, 3, 4, 5], -1)\n [2, 3, 4, 5, 1]\n \"\"\"\n arr_length = len(arr)\n steps = steps % arr_length # Handle steps larger than array length\n\n if steps == 0: # No rotation needed\n return arr\n\n rotated_arr = arr[-steps:] + arr[:-steps] # Split the list and concatenate in the desired order\n\n return rotated_arr\n\n\n# Example usage\narr = [1, 2, 3, 4, 5]\nsteps = 2\nrotated_arr = rotate_array(arr, steps)\nprint(rotated_arr)\n```\n\nThe `rotate_array()` function takes an input array `arr` and the number of steps `steps` to rotate the array. It first calculates the actual number of rotations by performing a modulo operation on `steps` with the length of the input array. This ensures that even if `steps` is larger than the array length, we always perform the appropriate number of rotations.\n\nIf `steps` is 0, the function returns the input array as no rotation is needed.\n\nOtherwise, the function splits the input array into two parts based on the number of steps: the last `steps` elements (using negative indexing) and the remaining elements. These two parts are then concatenated in the desired order to obtain the rotated array.\n\nFinally, the program demonstrates an example usage by rotating an array `[1, 2, 3, 4, 5]` by 2 steps (to the right) and printing the resulting rotated array.",
"chat_round_id": 1
}]
}
```
## 数据生成过程

* 第一步: 整理Python知识点,作为初始种子集
* 第二步:将每个种子嵌入到固定的任务模版中,获得固定模版的"Task Prompt",该任务模版的主题是提示ChatGPT生成给定知识点的练习题问题。
* 第三步:调用Camel对第二步获得的"Task Prompt"进行润色,以获得更加描述准确且多样的Task Prompt
* 第四步:将获得的Task Prompt输入给ChatGPT,令其生成对应知识点的练习题问题(指令)
* 第五步:对每个练习题问题(指令),借助ChatGPT生成对应的问题答案
* 第六步:组装每个问题和其答案,并进行去重操作 | [
-0.2542975842952728,
-0.7324808239936829,
0.17486310005187988,
0.5346266031265259,
-0.12590280175209045,
0.005745033733546734,
0.11076480150222778,
-0.061722852289676666,
0.456715852022171,
0.48796260356903076,
-0.9903713464736938,
-0.4777999222278595,
-0.5212146043777466,
-0.1142611652612... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
rb05751/reuters_articles | rb05751 | 2023-09-09T17:03:54Z | 32 | 1 | null | [
"license:cc",
"region:us"
] | 2023-09-09T17:03:54Z | 2023-09-09T17:03:24.000Z | 2023-09-09T17:03:24 | ---
license: cc
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
dataset_info:
features:
- name: title
dtype: string
- name: body
dtype: string
splits:
- name: train
num_bytes: 12503434
num_examples: 15000
- name: validation
num_bytes: 4272675
num_examples: 5000
- name: test
num_bytes: 1709070
num_examples: 2000
download_size: 10790292
dataset_size: 18485179
---
| [
-0.1285335123538971,
-0.1861683875322342,
0.6529128551483154,
0.49436232447624207,
-0.19319400191307068,
0.23607441782951355,
0.36072009801864624,
0.05056373029947281,
0.5793656706809998,
0.7400146722793579,
-0.650810182094574,
-0.23784008622169495,
-0.7102247476577759,
-0.0478255338966846... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
DDSC/angry-tweets-binary | DDSC | 2023-10-11T08:25:19Z | 32 | 0 | null | [
"region:us"
] | 2023-10-11T08:25:19Z | 2023-10-11T08:25:11.000Z | 2023-10-11T08:25:11 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype: string
splits:
- name: train
num_bytes: 269093.3579427623
num_examples: 1559
- name: test
num_bytes: 120444.7564469914
num_examples: 684
download_size: 273118
dataset_size: 389538.1143897537
---
# Dataset Card for "angry-tweets-binary"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.3430401682853699,
-0.3163929283618927,
0.16368544101715088,
0.581035852432251,
-0.45407426357269287,
0.35955026745796204,
0.0797288715839386,
-0.2311580777168274,
0.9878876209259033,
0.19828253984451294,
-0.8579373955726624,
-0.7900488376617432,
-0.8945122361183167,
-0.27491825819015503... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
librarian-bots/model_cards_with_metadata | librarian-bots | 2023-11-28T03:30:38Z | 32 | 4 | null | [
"task_categories:text-retrieval",
"size_categories:100K<n<1M",
"ethics",
"region:us"
] | 2023-11-28T03:30:38Z | 2023-10-12T21:50:53.000Z | 2023-10-12T21:50:53 | ---
size_categories:
- 100K<n<1M
task_categories:
- text-retrieval
pretty_name: Hugging Face Hub Model Cards
dataset_info:
features:
- name: id
dtype: string
- name: author
dtype: string
- name: last_modified
dtype: timestamp[us, tz=UTC]
- name: downloads
dtype: int64
- name: likes
dtype: int64
- name: library_name
dtype: string
- name: tags
sequence: string
- name: pipeline_tag
dtype: string
- name: mask_token
dtype: 'null'
- name: card_data
dtype: 'null'
- name: widget_data
dtype: 'null'
- name: model_index
dtype: 'null'
- name: config
dtype: 'null'
- name: transformers_info
dtype: 'null'
- name: spaces
dtype: 'null'
- name: safetensors
dtype: 'null'
- name: lastModified
dtype: timestamp[us, tz=UTC]
- name: transformersInfo
dtype: 'null'
- name: createdAt
dtype: string
- name: modelId
dtype: string
- name: created
dtype: timestamp[us]
- name: card
dtype: string
splits:
- name: train
num_bytes: 699393436
num_examples: 411392
download_size: 230638284
dataset_size: 699393436
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
tags:
- ethics
---
# Dataset Card for Hugging Face Hub Model Cards
This dataset consists of [model cards](https://huggingface.co/docs/hub/model-cards) for models hosted on the Hugging Face Hub. The model cards are created by the community and provide information about the model, its performance, its intended uses, and more.
This dataset is updated on a daily basis and includes publicly available models on the Hugging Face Hub.
This dataset is made available to support users who want to work with a large number of model cards from the Hub. We hope it will support research on model cards and their use, but the format of this dataset may not be useful for all use cases. If there are other features that you would like to see included in this dataset, please open a new [discussion](https://huggingface.co/datasets/librarian-bots/model_cards_with_metadata/discussions/new).
## Dataset Details
### Dataset Description
- **Curated by:** Daniel van Strien
- **Language(s) (NLP):** Model cards on the Hugging Face Hub are predominantly in English but may include other languages.
## Uses
There are a number of potential uses for this dataset including:
- text mining to find common themes in model cards
- analysis of the model card format/content
- topic modelling of model cards
- analysis of the model card metadata
- training language models on model cards
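A minimal sketch of loading the dataset for the kinds of analysis listed above (the column names follow the `dataset_info` section of this card):
```python
from collections import Counter

from datasets import load_dataset

cards = load_dataset("librarian-bots/model_cards_with_metadata", split="train")

# Example: average card length and the most common declared libraries
lengths = [len(card) for card in cards["card"] if card]
print(f"{sum(lengths) / len(lengths):.0f} characters per card on average")
print(Counter(cards["library_name"]).most_common(10))
```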
### Out-of-Scope Use
[More Information Needed]
## Dataset Structure
This dataset has a single split.
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
The dataset was created to assist people in working with model cards. In particular it was created to support research in the area of model cards and their use. It is possible to use the Hugging Face Hub API or client library to download model cards and this option may be preferable if you have a very specific use case or require a different format.
### Source Data
The source data is `README.md` files for models hosted on the Hugging Face Hub. We do not include any other supplementary files that may be included in the model card directory.
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
The data is downloaded using a CRON job on a daily basis.
#### Who are the source data producers?
The source data producers are the creators of the model cards on the Hugging Face Hub. This includes a broad variety of people from the community ranging from large companies to individual researchers. We do not gather any information about who created the model card in this repository although this information can be gathered from the Hugging Face Hub API.
### Annotations [optional]
There are no additional annotations in this dataset beyond the model card content.
#### Annotation process
N/A
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
N/A
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
We make no effort to anonymize the data. Whilst we don't expect the majority of model cards to contain personal or sensitive information, it is possible that some model cards may contain this information. Model cards may also link to websites or email addresses.
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
Model cards are created by the community and we do not have any control over the content of the model cards. We do not review the content of the model cards and we do not make any claims about the accuracy of the information in the model cards.
Some model cards will themselves discuss bias and sometimes this is done by providing examples of bias in either the training data or the responses provided by the model. As a result this dataset may contain examples of bias.
Whilst we do not directly download any images linked to in the model cards, some model cards may include images. Some of these images may not be suitable for all audiences.
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation
No formal citation is required for this dataset but if you use this dataset in your work, please include a link to this dataset page.
## Dataset Card Authors
[@davanstrien](https://huggingface.co/davanstrien)
## Dataset Card Contact
[@davanstrien](https://huggingface.co/davanstrien) | [
-0.5058304667472839,
-0.763305127620697,
0.07016774266958237,
0.3242640495300293,
-0.31316879391670227,
-0.3353743553161621,
-0.030868543311953545,
-0.7930014729499817,
0.5100361704826355,
0.6733485460281372,
-0.9044010043144226,
-0.8369362950325012,
-0.5351480841636658,
-0.018319036811590... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Revankumar/News_room | Revankumar | 2023-10-16T17:53:44Z | 32 | 0 | null | [
"license:mit",
"region:us"
] | 2023-10-16T17:53:44Z | 2023-10-16T17:49:35.000Z | 2023-10-16T17:49:35 | ---
license: mit
---
| [
-0.1285335123538971,
-0.1861683875322342,
0.6529128551483154,
0.49436232447624207,
-0.19319400191307068,
0.23607441782951355,
0.36072009801864624,
0.05056373029947281,
0.5793656706809998,
0.7400146722793579,
-0.650810182094574,
-0.23784008622169495,
-0.7102247476577759,
-0.0478255338966846... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
SinclairSchneider/time_magazine | SinclairSchneider | 2023-10-18T14:32:37Z | 32 | 0 | null | [
"size_categories:10K<n<100K",
"language:en",
"license:unknown",
"news",
"newspaper",
"time magazine",
"politics",
"Business",
"Entertainment",
"Health",
"History",
"Ideas",
"Politics",
"Press-room",
"Tech",
"US",
"World",
"region:us"
] | 2023-10-18T14:32:37Z | 2023-10-18T14:28:11.000Z | 2023-10-18T14:28:11 | ---
license: unknown
dataset_info:
features:
- name: category
dtype: string
- name: title
dtype: string
- name: content
dtype: string
splits:
- name: train
num_bytes: 117207060
num_examples: 38189
download_size: 72886361
dataset_size: 117207060
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
language:
- en
tags:
- news
- newspaper
- time magazine
- politics
- Business
- Entertainment
- Health
- History
- Ideas
- Politics
- Press-room
- Tech
- US
- World
size_categories:
- 10K<n<100K
--- | [
-0.1285335123538971,
-0.1861683875322342,
0.6529128551483154,
0.49436232447624207,
-0.19319400191307068,
0.23607441782951355,
0.36072009801864624,
0.05056373029947281,
0.5793656706809998,
0.7400146722793579,
-0.650810182094574,
-0.23784008622169495,
-0.7102247476577759,
-0.0478255338966846... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
pranjalipathre/pybullet_img2img | pranjalipathre | 2023-11-20T23:48:03Z | 32 | 0 | null | [
"region:us"
] | 2023-11-20T23:48:03Z | 2023-10-22T20:51:00.000Z | 2023-10-22T20:51:00 | ---
dataset_info:
- config_name: video_01
features:
- name: original_image
dtype: image
- name: edit_prompt
dtype: string
- name: edited_image
dtype: image
splits:
- name: train
num_bytes: 56941968
num_examples: 155250
download_size: 1933908595
dataset_size: 56941968
- config_name: video_02
features:
- name: original_image
dtype: image
- name: edit_prompt
dtype: string
- name: edited_image
dtype: image
splits:
- name: train
num_bytes: 4519128
num_examples: 12384
download_size: 218193233
dataset_size: 4519128
- config_name: video_03
features:
- name: original_image
dtype: image
- name: edit_prompt
dtype: string
- name: edited_image
dtype: image
splits:
- name: train
num_bytes: 945864
num_examples: 2592
download_size: 151063963
dataset_size: 945864
- config_name: video_04
features:
- name: original_image
dtype: image
- name: edit_prompt
dtype: string
- name: edited_image
dtype: image
splits:
- name: train
num_bytes: 945864
num_examples: 2592
download_size: 151063963
dataset_size: 945864
---
| [
-0.1285335123538971,
-0.1861683875322342,
0.6529128551483154,
0.49436232447624207,
-0.19319400191307068,
0.23607441782951355,
0.36072009801864624,
0.05056373029947281,
0.5793656706809998,
0.7400146722793579,
-0.650810182094574,
-0.23784008622169495,
-0.7102247476577759,
-0.0478255338966846... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
zhaospei/cmg-history | zhaospei | 2023-11-01T11:53:00Z | 32 | 0 | null | [
"region:us"
] | 2023-11-01T11:53:00Z | 2023-11-01T11:52:25.000Z | 2023-11-01T11:52:25 | Entry not found | [
-0.3227647542953491,
-0.22568407654762268,
0.8622258901596069,
0.4346148371696472,
-0.5282984972000122,
0.7012965083122253,
0.7915717959403992,
0.07618629932403564,
0.7746022343635559,
0.2563222348690033,
-0.785281777381897,
-0.22573848068714142,
-0.9104482531547546,
0.5715669393539429,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
zhaospei/cmg_allinone | zhaospei | 2023-11-26T16:29:20Z | 32 | 0 | null | [
"region:us"
] | 2023-11-26T16:29:20Z | 2023-11-04T13:28:46.000Z | 2023-11-04T13:28:46 | Entry not found | [
-0.3227647542953491,
-0.22568407654762268,
0.8622258901596069,
0.4346148371696472,
-0.5282984972000122,
0.7012965083122253,
0.7915717959403992,
0.07618629932403564,
0.7746022343635559,
0.2563222348690033,
-0.785281777381897,
-0.22573848068714142,
-0.9104482531547546,
0.5715669393539429,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
ShrinivasSK/en_hi_1 | ShrinivasSK | 2023-11-04T19:22:36Z | 32 | 0 | null | [
"region:us"
] | 2023-11-04T19:22:36Z | 2023-11-04T19:11:45.000Z | 2023-11-04T19:11:45 | ---
dataset_info:
features:
- name: idx
dtype: int64
- name: src
dtype: string
- name: tgt
dtype: string
splits:
- name: train
num_bytes: 6435127.8
num_examples: 18000
- name: test
num_bytes: 715014.2
num_examples: 2000
download_size: 3824291
dataset_size: 7150142.0
---
# Dataset Card for "data_hi_1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.665655255317688,
-0.48133575916290283,
0.048888470977544785,
0.24327699840068817,
-0.1738320291042328,
-0.18452398478984833,
0.45269906520843506,
-0.1470186412334442,
1.031241536140442,
0.6372324824333191,
-0.8481755256652832,
-0.8859544396400452,
-0.5943018198013306,
-0.270817607641220... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
ShrinivasSK/en_hi_2 | ShrinivasSK | 2023-11-04T19:22:44Z | 32 | 0 | null | [
"region:us"
] | 2023-11-04T19:22:44Z | 2023-11-04T19:22:37.000Z | 2023-11-04T19:22:37 | ---
dataset_info:
features:
- name: idx
dtype: int64
- name: src
dtype: string
- name: tgt
dtype: string
splits:
- name: train
num_bytes: 6356499.3
num_examples: 18000
- name: test
num_bytes: 706277.7
num_examples: 2000
download_size: 3784127
dataset_size: 7062777.0
---
# Dataset Card for "data_hi_2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.4948629140853882,
-0.45586276054382324,
0.11188199371099472,
0.2531011402606964,
-0.18126672506332397,
-0.17407850921154022,
0.44797345995903015,
-0.33761677145957947,
0.8207170963287354,
0.5743870735168457,
-0.6618408560752869,
-0.6673980355262756,
-0.6112933158874512,
-0.3541568219661... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
ShrinivasSK/en_kn_1 | ShrinivasSK | 2023-11-04T19:22:57Z | 32 | 0 | null | [
"region:us"
] | 2023-11-04T19:22:57Z | 2023-11-04T19:22:51.000Z | 2023-11-04T19:22:51 | ---
dataset_info:
features:
- name: idx
dtype: int64
- name: src
dtype: string
- name: tgt
dtype: string
splits:
- name: train
num_bytes: 4017735.9
num_examples: 18000
- name: test
num_bytes: 446415.1
num_examples: 2000
download_size: 2392888
dataset_size: 4464151.0
---
# Dataset Card for "data_kn_1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.6124740242958069,
-0.3230134844779968,
0.1267068237066269,
0.2440515011548996,
-0.4841180741786957,
-0.09303653985261917,
0.44638875126838684,
-0.006061634048819542,
0.9096124172210693,
0.5194351077079773,
-0.9018147587776184,
-0.9272319674491882,
-0.6822711825370789,
-0.274235814809799... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
ShrinivasSK/en_kn_2 | ShrinivasSK | 2023-11-04T19:23:04Z | 32 | 0 | null | [
"region:us"
] | 2023-11-04T19:23:04Z | 2023-11-04T19:22:57.000Z | 2023-11-04T19:22:57 | ---
dataset_info:
features:
- name: idx
dtype: int64
- name: src
dtype: string
- name: tgt
dtype: string
splits:
- name: train
num_bytes: 4039020.0
num_examples: 18000
- name: test
num_bytes: 448780.0
num_examples: 2000
download_size: 2402763
dataset_size: 4487800.0
---
# Dataset Card for "data_kn_2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.43438395857810974,
-0.28652989864349365,
0.19067643582820892,
0.23858070373535156,
-0.4767703711986542,
-0.05737752467393875,
0.4248073697090149,
-0.2007405310869217,
0.6965741515159607,
0.4620165526866913,
-0.7047337293624878,
-0.7156741619110107,
-0.7239853143692017,
-0.35847860574722... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
ShrinivasSK/en_kn_3 | ShrinivasSK | 2023-11-04T19:23:10Z | 32 | 0 | null | [
"region:us"
] | 2023-11-04T19:23:10Z | 2023-11-04T19:23:05.000Z | 2023-11-04T19:23:05 | ---
dataset_info:
features:
- name: idx
dtype: int64
- name: src
dtype: string
- name: tgt
dtype: string
splits:
- name: train
num_bytes: 3976936.2
num_examples: 18000
- name: test
num_bytes: 441881.8
num_examples: 2000
download_size: 2363947
dataset_size: 4418818.0
---
# Dataset Card for "data_kn_3"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.5622180104255676,
-0.24818755686283112,
0.3583858907222748,
0.2911073565483093,
-0.38462597131729126,
-0.1148880273103714,
0.5023107528686523,
-0.2135789692401886,
0.7167090773582458,
0.5482510924339294,
-0.6716360449790955,
-0.8764476180076599,
-0.5911083817481995,
-0.13586696982383728... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
ShrinivasSK/en_mr_1 | ShrinivasSK | 2023-11-04T19:23:18Z | 32 | 0 | null | [
"region:us"
] | 2023-11-04T19:23:18Z | 2023-11-04T19:23:11.000Z | 2023-11-04T19:23:11 | ---
dataset_info:
features:
- name: idx
dtype: int64
- name: src
dtype: string
- name: tgt
dtype: string
splits:
- name: train
num_bytes: 4557684.6
num_examples: 18000
- name: test
num_bytes: 506409.4
num_examples: 2000
download_size: 2671507
dataset_size: 5064094.0
---
# Dataset Card for "data_mr_1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.6589109301567078,
-0.3237012028694153,
0.08911033719778061,
0.16609269380569458,
-0.2531290650367737,
-0.12985549867153168,
0.4456336796283722,
0.005899033974856138,
1.0337673425674438,
0.49160927534103394,
-1.0502339601516724,
-0.7559027075767517,
-0.7084401249885559,
-0.18422359228134... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
ShrinivasSK/en_te_1 | ShrinivasSK | 2023-11-04T19:23:24Z | 32 | 0 | null | [
"region:us"
] | 2023-11-04T19:23:24Z | 2023-11-04T19:23:19.000Z | 2023-11-04T19:23:19 | ---
dataset_info:
features:
- name: idx
dtype: int64
- name: src
dtype: string
- name: tgt
dtype: string
splits:
- name: train
num_bytes: 4048999.2
num_examples: 18000
- name: test
num_bytes: 449888.8
num_examples: 2000
download_size: 2422001
dataset_size: 4498888.0
---
# Dataset Card for "data_te_1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.5756809115409851,
-0.4282711148262024,
0.1735430210828781,
0.18184584379196167,
-0.2759554386138916,
-0.07687260210514069,
0.36858242750167847,
-0.04109238088130951,
0.9743333458900452,
0.38458535075187683,
-0.9356188178062439,
-0.9084233045578003,
-0.6069813370704651,
-0.25699508190155... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
ShrinivasSK/en_te_2 | ShrinivasSK | 2023-11-04T19:23:31Z | 32 | 0 | null | [
"region:us"
] | 2023-11-04T19:23:31Z | 2023-11-04T19:23:25.000Z | 2023-11-04T19:23:25 | ---
dataset_info:
features:
- name: idx
dtype: int64
- name: src
dtype: string
- name: tgt
dtype: string
splits:
- name: train
num_bytes: 4108551.3
num_examples: 18000
- name: test
num_bytes: 456505.7
num_examples: 2000
download_size: 2453776
dataset_size: 4565057.0
---
# Dataset Card for "data_te_2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.39144477248191833,
-0.38868558406829834,
0.23785404860973358,
0.19173982739448547,
-0.27177268266677856,
-0.05323084071278572,
0.3452489376068115,
-0.22898350656032562,
0.7540086507797241,
0.3214482069015503,
-0.7459965348243713,
-0.6982852816581726,
-0.6398873925209045,
-0.348675668239... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
ShrinivasSK/hi_kn | ShrinivasSK | 2023-11-04T19:28:20Z | 32 | 0 | null | [
"region:us"
] | 2023-11-04T19:28:20Z | 2023-11-04T19:28:12.000Z | 2023-11-04T19:28:12 | ---
dataset_info:
features:
- name: source
dtype: string
- name: target
dtype: string
splits:
- name: train
num_bytes: 5132190.6
num_examples: 18000
- name: test
num_bytes: 570243.4
num_examples: 2000
download_size: 2596240
dataset_size: 5702434.0
---
# Dataset Card for "hi-kn"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.7795679569244385,
-0.41146668791770935,
0.11612948775291443,
0.10174185782670975,
-0.24633394181728363,
-0.057183362543582916,
0.27233436703681946,
-0.288110613822937,
1.0492496490478516,
0.6258800029754639,
-0.7668389081954956,
-0.8444231748580933,
-0.5869137048721313,
-0.2339470982551... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
ShrinivasSK/hi_te | ShrinivasSK | 2023-11-04T19:28:34Z | 32 | 0 | null | [
"region:us"
] | 2023-11-04T19:28:34Z | 2023-11-04T19:28:29.000Z | 2023-11-04T19:28:29 | ---
dataset_info:
features:
- name: source
dtype: string
- name: target
dtype: string
splits:
- name: train
num_bytes: 5294133.0
num_examples: 18000
- name: test
num_bytes: 588237.0
num_examples: 2000
download_size: 2685371
dataset_size: 5882370.0
---
# Dataset Card for "hi-te"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.7693906426429749,
-0.4673941731452942,
0.18351125717163086,
0.1234576627612114,
-0.10032518953084946,
-0.13172054290771484,
0.21259164810180664,
-0.3751440942287445,
1.1141974925994873,
0.5921779274940491,
-0.7225726842880249,
-0.7771574258804321,
-0.5834619998931885,
-0.216998234391212... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
autumnjohnson/project_ceti | autumnjohnson | 2023-11-08T04:32:12Z | 32 | 0 | null | [
"region:us"
] | 2023-11-08T04:32:12Z | 2023-11-05T22:29:20.000Z | 2023-11-05T22:29:20 | ---
dataset_info:
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
splits:
- name: train
num_bytes: 29115.75
num_examples: 7058
download_size: 0
dataset_size: 29115.75
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
| [
-0.12853392958641052,
-0.18616779148578644,
0.6529127955436707,
0.49436280131340027,
-0.19319361448287964,
0.23607419431209564,
0.36072003841400146,
0.050563063472509384,
0.579365611076355,
0.7400140762329102,
-0.6508104205131531,
-0.23783954977989197,
-0.7102249264717102,
-0.0478260256350... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
ShengLiang-HH/sentences_es | ShengLiang-HH | 2023-11-23T10:44:04Z | 32 | 0 | null | [
"task_categories:text-classification",
"size_categories:100K<n<1M",
"language:es",
"region:us"
] | 2023-11-23T10:44:04Z | 2023-11-06T16:59:28.000Z | 2023-11-06T16:59:28 | ---
task_categories:
- text-classification
language:
- es
size_categories:
- 100K<n<1M
--- | [
-0.12853392958641052,
-0.18616779148578644,
0.6529127955436707,
0.49436280131340027,
-0.19319361448287964,
0.23607419431209564,
0.36072003841400146,
0.050563063472509384,
0.579365611076355,
0.7400140762329102,
-0.6508104205131531,
-0.23783954977989197,
-0.7102249264717102,
-0.0478260256350... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
ShrinivasSK/hi_en_2 | ShrinivasSK | 2023-11-06T19:07:04Z | 32 | 0 | null | [
"region:us"
] | 2023-11-06T19:07:04Z | 2023-11-06T18:54:29.000Z | 2023-11-06T18:54:29 | ---
dataset_info:
features:
- name: idx
dtype: int64
- name: tgt
dtype: string
- name: src
dtype: string
splits:
- name: train
num_bytes: 6376404.6
num_examples: 18000
- name: test
num_bytes: 708489.4
num_examples: 2000
download_size: 3796444
dataset_size: 7084894.0
---
# Dataset Card for "hi_en_2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.6303144693374634,
-0.3502988815307617,
0.11569955199956894,
0.3079986572265625,
-0.17602574825286865,
-0.2464597374200821,
0.26142042875289917,
-0.47310706973075867,
0.9270923733711243,
0.6110357642173767,
-0.6890026330947876,
-0.6933183073997498,
-0.619550347328186,
-0.2052309215068817... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
ShrinivasSK/hi_en_3 | ShrinivasSK | 2023-11-06T19:07:15Z | 32 | 0 | null | [
"region:us"
] | 2023-11-06T19:07:15Z | 2023-11-06T18:54:34.000Z | 2023-11-06T18:54:34 | ---
dataset_info:
features:
- name: idx
dtype: int64
- name: tgt
dtype: string
- name: src
dtype: string
splits:
- name: train
num_bytes: 6366803.4
num_examples: 18000
- name: test
num_bytes: 707422.6
num_examples: 2000
download_size: 3789240
dataset_size: 7074226.0
---
# Dataset Card for "hi_en_3"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.7411299347877502,
-0.3029508888721466,
0.26857444643974304,
0.33315929770469666,
-0.08014331758022308,
-0.2869759500026703,
0.3380031883716583,
-0.4803028702735901,
0.9238393902778625,
0.6980891823768616,
-0.6516099572181702,
-0.8515555262565613,
-0.48362910747528076,
0.0061036366969347... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
ShrinivasSK/kn_en_2 | ShrinivasSK | 2023-11-06T19:07:36Z | 32 | 0 | null | [
"region:us"
] | 2023-11-06T19:07:36Z | 2023-11-06T18:54:43.000Z | 2023-11-06T18:54:43 | ---
dataset_info:
features:
- name: idx
dtype: int64
- name: tgt
dtype: string
- name: src
dtype: string
splits:
- name: train
num_bytes: 3982082.4
num_examples: 18000
- name: test
num_bytes: 442453.6
num_examples: 2000
download_size: 2369798
dataset_size: 4424536.0
---
# Dataset Card for "kn_en_2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.5635228157043457,
-0.21262836456298828,
0.22840364277362823,
0.28965118527412415,
-0.5076987147331238,
-0.13066382706165314,
0.25543850660324097,
-0.3529297411441803,
0.7752323746681213,
0.5025032758712769,
-0.7124478220939636,
-0.7388049364089966,
-0.750071108341217,
-0.208125144243240... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
ShrinivasSK/kn_en_3 | ShrinivasSK | 2023-11-06T19:07:47Z | 32 | 0 | null | [
"region:us"
] | 2023-11-06T19:07:47Z | 2023-11-06T18:54:47.000Z | 2023-11-06T18:54:47 | ---
dataset_info:
features:
- name: idx
dtype: int64
- name: tgt
dtype: string
- name: src
dtype: string
splits:
- name: train
num_bytes: 3978395.1
num_examples: 18000
- name: test
num_bytes: 442043.9
num_examples: 2000
download_size: 2367278
dataset_size: 4420439.0
---
# Dataset Card for "kn_en_3"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.6854434609413147,
-0.1701439768075943,
0.38944143056869507,
0.32265836000442505,
-0.41167181730270386,
-0.1885228306055069,
0.33952513337135315,
-0.3663474917411804,
0.7875069379806519,
0.5883365869522095,
-0.6649122834205627,
-0.9032849669456482,
-0.6012956500053406,
0.0159115605056285... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
ShrinivasSK/te_en_1 | ShrinivasSK | 2023-11-06T19:08:08Z | 32 | 0 | null | [
"region:us"
] | 2023-11-06T19:08:08Z | 2023-11-06T18:54:57.000Z | 2023-11-06T18:54:57 | ---
dataset_info:
features:
- name: idx
dtype: int64
- name: tgt
dtype: string
- name: src
dtype: string
splits:
- name: train
num_bytes: 4096206.9
num_examples: 18000
- name: test
num_bytes: 455134.1
num_examples: 2000
download_size: 2442401
dataset_size: 4551341.0
---
# Dataset Card for "te_en_1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.6569553017616272,
-0.3469671308994293,
0.2226521372795105,
0.23831132054328918,
-0.28596705198287964,
-0.11838687211275101,
0.20450453460216522,
-0.1658371090888977,
1.0482441186904907,
0.3653298616409302,
-0.9106462597846985,
-0.921967625617981,
-0.61533123254776,
-0.14028748869895935,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
ShrinivasSK/te_en_2 | ShrinivasSK | 2023-11-06T19:08:19Z | 32 | 0 | null | [
"region:us"
] | 2023-11-06T19:08:19Z | 2023-11-06T18:55:01.000Z | 2023-11-06T18:55:01 | ---
dataset_info:
features:
- name: idx
dtype: int64
- name: tgt
dtype: string
- name: src
dtype: string
splits:
- name: train
num_bytes: 4065421.5
num_examples: 18000
- name: test
num_bytes: 451713.5
num_examples: 2000
download_size: 2431811
dataset_size: 4517135.0
---
# Dataset Card for "te_en_2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.49197283387184143,
-0.3096269965171814,
0.2695859670639038,
0.2478407621383667,
-0.29095935821533203,
-0.11117404699325562,
0.19057169556617737,
-0.3323875963687897,
0.8422784209251404,
0.31630969047546387,
-0.7393078207969666,
-0.713324248790741,
-0.6550636291503906,
-0.243454098701477... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
ShrinivasSK/te_en_3 | ShrinivasSK | 2023-11-06T19:12:56Z | 32 | 0 | null | [
"region:us"
] | 2023-11-06T19:12:56Z | 2023-11-06T18:55:06.000Z | 2023-11-06T18:55:06 | ---
dataset_info:
features:
- name: idx
dtype: int64
- name: tgt
dtype: string
- name: src
dtype: string
splits:
- name: train
num_bytes: 4067874.0
num_examples: 18000
- name: test
num_bytes: 451986.0
num_examples: 2000
download_size: 2432870
dataset_size: 4519860.0
---
# Dataset Card for "te_en_3"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.6145793199539185,
-0.26703470945358276,
0.41661620140075684,
0.2673983871936798,
-0.1972319632768631,
-0.16037824749946594,
0.2680835723876953,
-0.3492750823497772,
0.8525338768959045,
0.3989774286746979,
-0.6955982446670532,
-0.8772882223129272,
-0.5066382884979248,
-0.0288655795156955... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
ShrinivasSK/hi_kn_1 | ShrinivasSK | 2023-11-06T19:22:17Z | 32 | 0 | null | [
"region:us"
] | 2023-11-06T19:22:17Z | 2023-11-06T19:22:07.000Z | 2023-11-06T19:22:07 | ---
dataset_info:
features:
- name: source
dtype: string
- name: target
dtype: string
splits:
- name: train
num_bytes: 5155860.6
num_examples: 18000
- name: test
num_bytes: 572873.4
num_examples: 2000
download_size: 2612672
dataset_size: 5728734.0
---
# Dataset Card for "hi_kn_1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.7010003924369812,
-0.39800021052360535,
0.022935545071959496,
0.20773456990718842,
-0.3309517204761505,
-0.19412194192409515,
0.39629676938056946,
-0.14103995263576508,
1.0287230014801025,
0.6435109376907349,
-0.7935466170310974,
-0.9278016686439514,
-0.6304095983505249,
-0.244204208254... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
tomaarsen/setfit-absa-semeval-restaurants | tomaarsen | 2023-11-16T10:37:20Z | 32 | 0 | null | [
"region:us"
] | 2023-11-16T10:37:20Z | 2023-11-09T15:14:48.000Z | 2023-11-09T15:14:48 | ---
dataset_info:
features:
- name: text
dtype: string
- name: span
dtype: string
- name: label
dtype: string
- name: ordinal
dtype: int64
splits:
- name: train
num_bytes: 490223
num_examples: 3693
- name: test
num_bytes: 138187
num_examples: 1134
download_size: 193352
dataset_size: 628410
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
# Dataset Card for "tomaarsen/setfit-absa-semeval-restaurants"
### Dataset Summary
This dataset contains the manually annotated restaurant reviews from SemEval-2014 Task 4, in the format understood by [SetFit](https://github.com/huggingface/setfit) ABSA.
For more details, see https://aclanthology.org/S14-2004/
### Data Instances
An example of "train" looks as follows.
```json
{"text": "But the staff was so horrible to us.", "span": "staff", "label": "negative", "ordinal": 0}
{"text": "To be completely fair, the only redeeming factor was the food, which was above average, but couldn't make up for all the other deficiencies of Teodora.", "span": "food", "label": "positive", "ordinal": 0}
{"text": "The food is uniformly exceptional, with a very capable kitchen which will proudly whip up whatever you feel like eating, whether it's on the menu or not.", "span": "food", "label": "positive", "ordinal": 0}
{"text": "The food is uniformly exceptional, with a very capable kitchen which will proudly whip up whatever you feel like eating, whether it's on the menu or not.", "span": "kitchen", "label": "positive", "ordinal": 0}
{"text": "The food is uniformly exceptional, with a very capable kitchen which will proudly whip up whatever you feel like eating, whether it's on the menu or not.", "span": "menu", "label": "neutral", "ordinal": 0}
```
### Data Fields
The data fields are the same among all splits.
- `text`: a `string` feature.
- `span`: a `string` feature showing the aspect span from the text.
- `label`: a `string` feature showing the polarity of the aspect span.
- `ordinal`: an `int64` feature showing the n-th occurrence of the span in the text. This is useful if the span occurs multiple times within the same text.
### Data Splits
| name |train|test|
|---------|----:|---:|
|tomaarsen/setfit-absa-semeval-restaurants|3693|1134|
### Training ABSA models using SetFit ABSA
To train using this dataset, first install the SetFit library:
```bash
pip install setfit
```
You can then use the following script as a guideline for training an ABSA model on this dataset:
```python
from setfit import AbsaModel, AbsaTrainer, TrainingArguments
from datasets import load_dataset
from transformers import EarlyStoppingCallback
# You can initialize an AbsaModel using one or two SentenceTransformer models, or two ABSA models
model = AbsaModel.from_pretrained("sentence-transformers/all-MiniLM-L6-v2")
# The training/eval dataset must have `text`, `span`, `polarity`, and `ordinal` columns
dataset = load_dataset("tomaarsen/setfit-absa-semeval-restaurants")
train_dataset = dataset["train"]
eval_dataset = dataset["test"]
args = TrainingArguments(
output_dir="models",
use_amp=True,
batch_size=256,
eval_steps=50,
save_steps=50,
load_best_model_at_end=True,
)
trainer = AbsaTrainer(
model,
args=args,
train_dataset=train_dataset,
eval_dataset=eval_dataset,
callbacks=[EarlyStoppingCallback(early_stopping_patience=5)],
)
trainer.train()
metrics = trainer.evaluate(eval_dataset)
print(metrics)
trainer.push_to_hub("tomaarsen/setfit-absa-restaurants")
```
You can then run inference like so:
```python
from setfit import AbsaModel
# Download from Hub and run inference
model = AbsaModel.from_pretrained(
"tomaarsen/setfit-absa-restaurants-aspect",
"tomaarsen/setfit-absa-restaurants-polarity",
)
# Run inference
preds = model([
"The best pizza outside of Italy and really tasty.",
"The food here is great but the service is terrible",
])
```
### Citation Information
```bibtex
@inproceedings{pontiki-etal-2014-semeval,
title = "{S}em{E}val-2014 Task 4: Aspect Based Sentiment Analysis",
author = "Pontiki, Maria and
Galanis, Dimitris and
Pavlopoulos, John and
Papageorgiou, Harris and
Androutsopoulos, Ion and
Manandhar, Suresh",
editor = "Nakov, Preslav and
Zesch, Torsten",
booktitle = "Proceedings of the 8th International Workshop on Semantic Evaluation ({S}em{E}val 2014)",
month = aug,
year = "2014",
address = "Dublin, Ireland",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/S14-2004",
doi = "10.3115/v1/S14-2004",
pages = "27--35",
}
```
| [
-0.35950931906700134,
-0.7867863178253174,
0.41268256306648254,
0.11453639715909958,
-0.08111542463302612,
-0.4616025686264038,
-0.1518770158290863,
-0.23681233823299408,
0.4580283761024475,
0.6696370840072632,
-0.6647242307662964,
-0.6812894344329834,
-0.28970322012901306,
0.5078085660934... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
mengmengmmm/B2F_medium | mengmengmmm | 2023-11-13T13:27:21Z | 32 | 0 | null | [
"region:us"
] | 2023-11-13T13:27:21Z | 2023-11-13T13:26:49.000Z | 2023-11-13T13:26:49 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622263669967651,
0.43461522459983826,
-0.52829909324646,
0.7012971639633179,
0.7915719747543335,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104475975036621,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
mengmengmmm/tlc | mengmengmmm | 2023-11-13T13:52:54Z | 32 | 0 | null | [
"region:us"
] | 2023-11-13T13:52:54Z | 2023-11-13T13:52:21.000Z | 2023-11-13T13:52:21 | Entry not found | [
-0.3227647542953491,
-0.22568407654762268,
0.8622258901596069,
0.4346148371696472,
-0.5282984972000122,
0.7012965083122253,
0.7915717959403992,
0.07618629932403564,
0.7746022343635559,
0.2563222348690033,
-0.785281777381897,
-0.22573848068714142,
-0.9104482531547546,
0.5715669393539429,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
hippocrates/clinical_ner_train | hippocrates | 2023-11-13T19:52:52Z | 32 | 0 | null | [
"region:us"
] | 2023-11-13T19:52:52Z | 2023-11-13T19:52:44.000Z | 2023-11-13T19:52:44 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: valid
path: data/valid-*
- split: test
path: data/test-*
dataset_info:
features:
- name: id
dtype: string
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 8527346
num_examples: 10661
- name: valid
num_bytes: 4951299
num_examples: 6254
- name: test
num_bytes: 5307591
num_examples: 6806
download_size: 5455050
dataset_size: 18786236
---
# Dataset Card for "clinical_ner_train"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.391435831785202,
-0.09383192658424377,
0.25289273262023926,
0.06740710884332657,
0.04897579178214073,
-0.19163845479488373,
0.26464584469795227,
-0.06304778903722763,
0.8568159937858582,
0.4683998227119446,
-0.7623781561851501,
-0.7072309255599976,
-0.7789531946182251,
-0.19420155882835... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Raven47/FLAN-Intent | Raven47 | 2023-11-14T06:49:42Z | 32 | 0 | null | [
"region:us"
] | 2023-11-14T06:49:42Z | 2023-11-14T06:48:55.000Z | 2023-11-14T06:48:55 | Entry not found | [
-0.3227647542953491,
-0.22568407654762268,
0.8622258901596069,
0.4346148371696472,
-0.5282984972000122,
0.7012965083122253,
0.7915717959403992,
0.07618629932403564,
0.7746022343635559,
0.2563222348690033,
-0.785281777381897,
-0.22573848068714142,
-0.9104482531547546,
0.5715669393539429,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
lhallee/dl_ten_reg | lhallee | 2023-11-15T18:36:05Z | 32 | 0 | null | [
"region:us"
] | 2023-11-15T18:36:05Z | 2023-11-15T18:36:03.000Z | 2023-11-15T18:36:03 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: valid
path: data/valid-*
- split: test
path: data/test-*
dataset_info:
features:
- name: seqs
dtype: string
- name: labels
dtype: int64
splits:
- name: train
num_bytes: 4494546
num_examples: 8743
- name: valid
num_bytes: 1132147
num_examples: 2190
- name: test
num_bytes: 1487111
num_examples: 2745
download_size: 7016840
dataset_size: 7113804
---
# Dataset Card for "dl_ten_reg"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.657219648361206,
-0.3426154553890228,
0.25137463212013245,
0.3109773099422455,
-0.23794235289096832,
-0.03259780630469322,
0.21886879205703735,
-0.23572151362895966,
0.856349527835846,
0.42430412769317627,
-0.8962038159370422,
-0.8056426644325256,
-0.4672256112098694,
0.0452494770288467... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
GotItAI/rocketbook_gpt4_responses_combined | GotItAI | 2023-11-21T09:46:10Z | 32 | 0 | null | [
"region:us"
] | 2023-11-21T09:46:10Z | 2023-11-21T07:47:02.000Z | 2023-11-21T07:47:02 | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: messages
list:
- name: content
dtype: string
- name: role
dtype: string
- name: prompt_id
dtype: int64
splits:
- name: train
num_bytes: 16879635
num_examples: 1459
- name: test
num_bytes: 6182180
num_examples: 530
download_size: 0
dataset_size: 23061815
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
# Dataset Card for "rocketbook_gpt4_responses_combined"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.5285739898681641,
-0.12987709045410156,
0.05053757503628731,
0.3738754093647003,
-0.17305448651313782,
0.28139540553092957,
0.22117581963539124,
-0.12961862981319427,
0.5805678367614746,
0.25665682554244995,
-0.8448389768600464,
-0.5418693423271179,
-0.4773770868778229,
-0.1651019304990... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Giacinta/label | Giacinta | 2023-11-24T05:57:25Z | 32 | 0 | null | [
"task_categories:text-classification",
"size_categories:n<1K",
"language:en",
"license:apache-2.0",
"region:us"
] | 2023-11-24T05:57:25Z | 2023-11-22T16:26:40.000Z | 2023-11-22T16:26:40 | ---
license: apache-2.0
task_categories:
- text-classification
language:
- en
pretty_name: ai
size_categories:
- n<1K
configs:
- config_name: default
data_files:
- split: train
path: filter.csv
---
| [
-0.1285335123538971,
-0.1861683875322342,
0.6529128551483154,
0.49436232447624207,
-0.19319400191307068,
0.23607441782951355,
0.36072009801864624,
0.05056373029947281,
0.5793656706809998,
0.7400146722793579,
-0.650810182094574,
-0.23784008622169495,
-0.7102247476577759,
-0.0478255338966846... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
KrafterDen/SISUS_V1 | KrafterDen | 2023-11-25T19:05:22Z | 32 | 0 | null | [
"license:llama2",
"region:us"
] | 2023-11-25T19:05:22Z | 2023-11-24T12:59:38.000Z | 2023-11-24T12:59:38 | ---
license: llama2
---
| [
-0.1285335123538971,
-0.1861683875322342,
0.6529128551483154,
0.49436232447624207,
-0.19319400191307068,
0.23607441782951355,
0.36072009801864624,
0.05056373029947281,
0.5793656706809998,
0.7400146722793579,
-0.650810182094574,
-0.23784008622169495,
-0.7102247476577759,
-0.0478255338966846... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
M-A-D/DarijaBridge | M-A-D | 2023-11-26T14:17:11Z | 32 | 1 | null | [
"task_categories:translation",
"size_categories:1M<n<10M",
"language:ar",
"language:en",
"license:apache-2.0",
"region:us"
] | 2023-11-26T14:17:11Z | 2023-11-26T14:08:17.000Z | 2023-11-26T14:08:17 | ---
dataset_info:
features:
- name: sentence
dtype: string
- name: translation
dtype: string
- name: translated
dtype: bool
- name: corrected
dtype: bool
- name: correction
dtype: string
- name: quality
dtype: int64
- name: metadata
struct:
- name: config
dtype: string
- name: dataset
dtype: string
- name: language
dtype: string
- name: split
dtype: string
- name: template
dtype: string
splits:
- name: train
num_bytes: 343412514
num_examples: 1235091
download_size: 133902523
dataset_size: 343412514
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
license: apache-2.0
language:
- ar
- en
task_categories:
- translation
pretty_name: DarijaBridge
size_categories:
- 1M<n<10M
---
# DarijaBridge Dataset Card
### General Information
- **Dataset Name:** DarijaBridge
- **Version:** 1.0
- **Creator:** MAD-Community
- **Language:** Darija (Moroccan Arabic) and English
- **Total Tokens:** 41,845,467 (in 'sentence' column)
- **Task:** Machine Translation
### Dataset Summary
DarijaBridge is a community-driven bilingual corpus designed for machine translation tasks between Darija (Moroccan Arabic) and English. Created by MAD-Community, it encompasses a wide range of the Moroccan "dialects" and colloquial expressions, reflecting the linguistic diversity of Morocco. The dataset is particularly valuable for developing and fine-tuning leading MT models like NLLB, improving translation accuracy and cultural relevance.
### Intended Use
This dataset is intended for use in machine translation research and applications, especially for those focusing on underrepresented languages and dialects like Darija. It's suitable for training models to translate between English and Darija and can be a crucial resource for linguistic studies and fostering cross-cultural communication.
## Data Collection and Preparation
### Data Source
The data in DarijaBridge has been contributed by the MAD-Community, comprising native Darija speakers and language experts. Contributions are ongoing, and the dataset is regularly updated with new translations and linguistic input.
### Methodology
Data is collected through community contributions, ensuring a diverse representation of dialects and usage. Each sentence in Darija is paired with its English translation, reviewed and corrected by language experts and expert models (like GPT-4) for accuracy.
## Dataset Structure
### Data Fields
- `sentence`: Contains the original sentence in Darija.
- `translation`: Contains the corresponding English translation of the Darija sentence.
- `quality`: Indicates the quality of the text in the sentence column (1 for high quality).
- `metadata`: Includes additional information like language, dialect, source, etc.
### Data Splits
The dataset is currently not split into standard training, validation, and test sets. Users are encouraged to create splits as per their specific research or application needs.
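A minimal sketch of creating such splits with the `datasets` library (the 10% test size and the seed are arbitrary illustrative choices, not part of the dataset):

```python
from datasets import load_dataset

# The corpus ships as a single "train" split; carve out a held-out set yourself.
darija = load_dataset("M-A-D/DarijaBridge", split="train")
splits = darija.train_test_split(test_size=0.1, seed=42)
train_ds, test_ds = splits["train"], splits["test"]
print(len(train_ds), len(test_ds))
```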
## Additional Information
### Limitations and Bias
As the dataset is community-contributed, there may be variations in translation quality and style. Efforts are made to standardize and review translations, but users should be aware of potential inconsistencies.
### Licensing Information
The DarijaBridge dataset is provided under the Apache 2.0 license. | [
-0.43911078572273254,
-0.4077867269515991,
-0.014605659060180187,
0.5060927271842957,
-0.33981215953826904,
-0.013474730774760246,
0.0721200481057167,
-0.11759474128484726,
0.5669653415679932,
0.8923514485359192,
-0.4193154275417328,
-0.9956061244010925,
-0.9954584240913391,
0.328668147325... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
merve/poetry | merve | 2022-10-25T09:50:55Z | 31 | 14 | null | [
"region:us"
] | 2022-10-25T09:50:55Z | 2022-03-02T23:29:22.000Z | 2022-03-02T23:29:22 | # Dataset Card for poetry
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-instances)
- [Data Splits](#data-instances)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** poetryfoundation.com
- **Repository:** https://www.kaggle.com/ishnoor/poetry-analysis-with-machine-learning
- **Paper:** [Needs More Information]
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [Needs More Information]
### Dataset Summary
It contains poems on the subjects Love, Nature, and Mythology & Folklore, belonging to two periods: Renaissance and Modern.
### Supported Tasks and Leaderboards
[Needs More Information]
### Languages
[Needs More Information]
## Dataset Structure
### Data Instances
[Needs More Information]
### Data Fields
Has 5 columns:
- Content
- Author
- Poem name
- Age
- Type
### Data Splits
Only training set
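A minimal sketch of loading that single split and inspecting the columns (the column name casing is not guaranteed to match the list above, so the sketch prints whatever the loaded split reports):

```python
from datasets import load_dataset

# The corpus ships only a train split; inspect its column names and one row.
poetry = load_dataset("merve/poetry", split="train")
print(poetry.column_names)
print(poetry[0])
```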
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
[Needs More Information]
### Citation Information
[Needs More Information]
---
annotations_creators:
- found
language_creators:
- found
language:
- en
license:
- unknown
multilinguality:
- monolingual
pretty_name: poetry
size_categories:
- unknown
source_datasets:
- original
task_categories:
- text-classification
task_ids: []
--- | [
-0.2599271833896637,
-0.3756997883319855,
0.11623905599117279,
0.25091513991355896,
-0.4237174987792969,
-0.07187794148921967,
-0.44194790720939636,
-0.5536515116691589,
0.5407798886299133,
0.668186366558075,
-0.9050719141960144,
-1.100799322128296,
-0.8266456723213196,
0.10681562125682831... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
yxchar/citation_intent-tlm | yxchar | 2021-11-04T23:47:24Z | 31 | 2 | null | [
"region:us"
] | 2021-11-04T23:47:24Z | 2022-03-02T23:29:22.000Z | 2022-03-02T23:29:22 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622264862060547,
0.43461528420448303,
-0.52829909324646,
0.7012971639633179,
0.7915720343589783,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104477167129517,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
yxchar/sciie-tlm | yxchar | 2021-11-05T02:04:05Z | 31 | 0 | null | [
"region:us"
] | 2021-11-05T02:04:05Z | 2022-03-02T23:29:22.000Z | 2022-03-02T23:29:22 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622264862060547,
0.43461528420448303,
-0.52829909324646,
0.7012971639633179,
0.7915720343589783,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104477167129517,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Khedesh/PeymaNER | Khedesh | 2022-03-11T11:30:13Z | 31 | 1 | null | [
"region:us"
] | 2022-03-11T11:30:13Z | 2022-03-11T11:18:49.000Z | 2022-03-11T11:18:49 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622264862060547,
0.43461528420448303,
-0.52829909324646,
0.7012971639633179,
0.7915720343589783,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104477167129517,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Khedesh/ParsTwiNER | Khedesh | 2022-03-11T16:25:50Z | 31 | 0 | null | [
"region:us"
] | 2022-03-11T16:25:50Z | 2022-03-11T16:22:48.000Z | 2022-03-11T16:22:48 | Entry not found | [
-0.3227649927139282,
-0.225684255361557,
0.862226128578186,
0.43461498618125916,
-0.5282987952232361,
0.7012963891029358,
0.7915717363357544,
0.07618629932403564,
0.7746025919914246,
0.2563219666481018,
-0.7852816581726074,
-0.2257382869720459,
-0.9104480743408203,
0.5715669393539429,
-0... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
ebrigham/NOS-news | ebrigham | 2022-03-24T15:04:53Z | 31 | 0 | null | [
"region:us"
] | 2022-03-24T15:04:53Z | 2022-03-24T14:35:47.000Z | 2022-03-24T14:35:47 | Entry not found | [
-0.3227649927139282,
-0.225684255361557,
0.862226128578186,
0.43461498618125916,
-0.5282987952232361,
0.7012963891029358,
0.7915717363357544,
0.07618629932403564,
0.7746025919914246,
0.2563219666481018,
-0.7852816581726074,
-0.2257382869720459,
-0.9104480743408203,
0.5715669393539429,
-0... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
DrishtiSharma/Anime-Face-Dataset | DrishtiSharma | 2022-04-11T00:04:37Z | 31 | 3 | null | [
"region:us"
] | 2022-04-11T00:04:37Z | 2022-04-10T23:51:23.000Z | 2022-04-10T23:51:23 | Entry not found | [
-0.3227649927139282,
-0.225684255361557,
0.862226128578186,
0.43461498618125916,
-0.5282987952232361,
0.7012963891029358,
0.7915717363357544,
0.07618629932403564,
0.7746025919914246,
0.2563219666481018,
-0.7852816581726074,
-0.2257382869720459,
-0.9104480743408203,
0.5715669393539429,
-0... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
saattrupdan/doc-nli | saattrupdan | 2022-04-26T18:44:14Z | 31 | 3 | null | [
"region:us"
] | 2022-04-26T18:44:14Z | 2022-04-26T18:32:39.000Z | 2022-04-26T18:32:39 | Entry not found | [
-0.3227649927139282,
-0.225684255361557,
0.862226128578186,
0.43461498618125916,
-0.5282987952232361,
0.7012963891029358,
0.7915717363357544,
0.07618629932403564,
0.7746025919914246,
0.2563219666481018,
-0.7852816581726074,
-0.2257382869720459,
-0.9104480743408203,
0.5715669393539429,
-0... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
bigscience-data/roots_zh-tw_wikipedia | bigscience-data | 2022-12-12T12:09:12Z | 31 | 3 | null | [
"language:zh",
"license:cc-by-sa-3.0",
"region:us"
] | 2022-12-12T12:09:12Z | 2022-05-18T09:20:00.000Z | 2022-05-18T09:20:00 | ---
language: zh
language_bcp47:
- zh-TW
license: cc-by-sa-3.0
extra_gated_prompt: 'By accessing this dataset, you agree to abide by the BigScience
Ethical Charter. The charter can be found at:
https://hf.co/spaces/bigscience/ethical-charter'
extra_gated_fields:
I have read and agree to abide by the BigScience Ethical Charter: checkbox
---
ROOTS Subset: roots_zh-tw_wikipedia
# wikipedia
- Dataset uid: `wikipedia`
### Description
### Homepage
### Licensing
### Speaker Locations
### Sizes
- 3.2299 % of total
- 4.2071 % of en
- 5.6773 % of ar
- 3.3416 % of fr
- 5.2815 % of es
- 12.4852 % of ca
- 0.4288 % of zh
- 0.4286 % of zh
- 5.4743 % of indic-bn
- 8.9062 % of indic-ta
- 21.3313 % of indic-te
- 4.4845 % of pt
- 4.0493 % of indic-hi
- 11.3163 % of indic-ml
- 22.5300 % of indic-ur
- 4.4902 % of vi
- 16.9916 % of indic-kn
- 24.7820 % of eu
- 11.6241 % of indic-mr
- 9.8749 % of id
- 9.3489 % of indic-pa
- 9.4767 % of indic-gu
- 24.1132 % of indic-as
- 5.3309 % of indic-or
### BigScience processing steps
#### Filters applied to: en
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_1024
#### Filters applied to: ar
- filter_wiki_user_titles
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: fr
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_1024
#### Filters applied to: es
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_1024
#### Filters applied to: ca
- filter_wiki_user_titles
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_1024
#### Filters applied to: zh
#### Filters applied to: zh
#### Filters applied to: indic-bn
- filter_wiki_user_titles
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: indic-ta
- filter_wiki_user_titles
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: indic-te
- filter_wiki_user_titles
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: pt
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: indic-hi
- filter_wiki_user_titles
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: indic-ml
- filter_wiki_user_titles
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: indic-ur
- filter_wiki_user_titles
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: vi
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: indic-kn
- filter_wiki_user_titles
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: eu
- filter_wiki_user_titles
- dedup_document
- filter_remove_empty_docs
#### Filters applied to: indic-mr
- filter_wiki_user_titles
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: id
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: indic-pa
- filter_wiki_user_titles
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: indic-gu
- filter_wiki_user_titles
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: indic-as
- filter_wiki_user_titles
- dedup_document
- filter_remove_empty_docs
#### Filters applied to: indic-or
- filter_wiki_user_titles
- dedup_document
- filter_remove_empty_docs
| [
-0.692057192325592,
-0.5948865413665771,
0.36528050899505615,
0.180470809340477,
-0.2105587124824524,
-0.07773711532354355,
-0.2304721474647522,
-0.16339321434497833,
0.6963963508605957,
0.3377370238304138,
-0.829664409160614,
-0.9287978410720825,
-0.6951425075531006,
0.47541752457618713,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
mariosasko/test_push_to_hub | mariosasko | 2022-06-12T13:20:34Z | 31 | 0 | null | [
"region:us"
] | 2022-06-12T13:20:34Z | 2022-05-24T17:58:17.000Z | 2022-05-24T17:58:17 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622263669967651,
0.43461522459983826,
-0.52829909324646,
0.7012971639633179,
0.7915719747543335,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104475975036621,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
ziwenyd/AVATAR | ziwenyd | 2022-07-27T08:27:28Z | 31 | 0 | null | [
"region:us"
] | 2022-07-27T08:27:28Z | 2022-07-27T07:30:08.000Z | 2022-07-27T07:30:08 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622263669967651,
0.43461522459983826,
-0.52829909324646,
0.7012971639633179,
0.7915719747543335,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104475975036621,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
GateNLP/broad_twitter_corpus | GateNLP | 2022-07-01T15:46:36Z | 31 | 1 | broad-twitter-corpus | [
"task_categories:token-classification",
"task_ids:named-entity-recognition",
"annotations_creators:crowdsourced",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"language:en",
"license:cc-by-4.0",
"region:us"
] | 2022-07-01T15:46:36Z | 2022-08-15T10:47:44.000Z | 2022-08-15T10:47:44 | ---
annotations_creators:
- crowdsourced
language_creators:
- found
language:
- en
license:
- cc-by-4.0
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
source_datasets:
- original
task_categories:
- token-classification
task_ids:
- named-entity-recognition
paperswithcode_id: broad-twitter-corpus
pretty_name: Broad Twitter Corpus
---
# Dataset Card for broad_twitter_corpus
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-instances)
- [Data Splits](#data-instances)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** [https://github.com/GateNLP/broad_twitter_corpus](https://github.com/GateNLP/broad_twitter_corpus)
- **Repository:** [https://github.com/GateNLP/broad_twitter_corpus](https://github.com/GateNLP/broad_twitter_corpus)
- **Paper:** [http://www.aclweb.org/anthology/C16-1111](http://www.aclweb.org/anthology/C16-1111)
- **Leaderboard:** [Named Entity Recognition on Broad Twitter Corpus](https://paperswithcode.com/sota/named-entity-recognition-on-broad-twitter)
- **Point of Contact:** [Leon Derczynski](https://github.com/leondz)
### Dataset Summary
This is the Broad Twitter corpus, a dataset of tweets collected over stratified times, places and social uses. The goal is to represent a broad range of activities, giving a dataset more representative of the language used in this hardest of social media formats to process. Further, the BTC is annotated for named entities.
See the paper, [Broad Twitter Corpus: A Diverse Named Entity Recognition Resource](http://www.aclweb.org/anthology/C16-1111), for details.
### Supported Tasks and Leaderboards
* Named Entity Recognition
* On PWC: [Named Entity Recognition on Broad Twitter Corpus](https://paperswithcode.com/sota/named-entity-recognition-on-broad-twitter)
### Languages
English from UK, US, Australia, Canada, Ireland, New Zealand; `bcp47:en`
## Dataset Structure
### Data Instances
Feature |Count
---|---:
Documents |9 551
Tokens |165 739
Person entities |5 271
Location entities |3 114
Organization entities |3 732
### Data Fields
Each tweet contains an ID, a list of tokens, and a list of NER tags
- `id`: a `string` feature.
- `tokens`: a `list` of `strings`
- `ner_tags`: a `list` of class IDs (`int`s) representing the NER class (see the short sketch after the ID listing below):
```
0: O
1: B-PER
2: I-PER
3: B-ORG
4: I-ORG
5: B-LOC
6: I-LOC
```
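A minimal sketch of mapping the integer `ner_tags` back to their string labels using the ID listing above. It assumes the corpus can be loaded from this Hub repo with the `datasets` library and that a `train` split exists; adjust the split name if it differs.

```python
from datasets import load_dataset

# Label table copied from the ID listing above.
ID2LABEL = ["O", "B-PER", "I-PER", "B-ORG", "I-ORG", "B-LOC", "I-LOC"]

btc = load_dataset("GateNLP/broad_twitter_corpus", split="train")  # assumed split name
example = btc[0]
for token, tag_id in zip(example["tokens"], example["ner_tags"]):
    print(f"{token}\t{ID2LABEL[tag_id]}")
```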
### Data Splits
Section|Region|Collection period|Description|Annotators|Tweet count
---|---|---|---|---|---:
A | UK| 2012.01| General collection |Expert| 1000
B |UK |2012.01-02 |Non-directed tweets |Expert |2000
E |Global| 2014.07| Related to MH17 disaster| Crowd & expert |200
F |Stratified |2009-2014| Twitterati |Crowd & expert |2000
G |Stratified| 2011-2014| Mainstream news| Crowd & expert| 2351
H |Non-UK| 2014 |General collection |Crowd & expert |2000
The most varied parts of the BTC are sections F and H. However, each of the remaining four sections has some specific readily-identifiable bias. So, we propose that one uses half of section H for evaluation and leaves the other half in the training data. Section H should be partitioned in the order of the JSON-format lines. Note that the CoNLL-format data is readily reconstructible from the JSON format, which is the authoritative data format from which others are derived.
**Test**: Section F
**Development**: Section H (the paper says "second half of Section H" but ordinality could be ambiguous, so it all goes in. Bonne chance)
**Training**: everything else
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
Creative Commons Attribution 4.0 International (CC BY 4.0)
### Citation Information
```
@inproceedings{derczynski2016broad,
title={Broad twitter corpus: A diverse named entity recognition resource},
author={Derczynski, Leon and Bontcheva, Kalina and Roberts, Ian},
booktitle={Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers},
pages={1169--1179},
year={2016}
}
```
### Contributions
Author-added dataset [@leondz](https://github.com/leondz)
| [
-0.571506679058075,
-0.6783424019813538,
0.22221536934375763,
0.2693931758403778,
-0.2752426564693451,
0.2584401071071625,
-0.5791414976119995,
-0.5375292897224426,
0.6024042367935181,
0.26595789194107056,
-0.5979828834533691,
-0.9988105893135071,
-0.6679815649986267,
0.14541885256767273,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
priyank-m/SROIE_2019_text_recognition | priyank-m | 2022-08-27T21:38:24Z | 31 | 2 | null | [
"task_categories:image-to-text",
"task_ids:image-captioning",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"language:en",
"text-recognition",
"recognition",
"region:us"
] | 2022-08-27T21:38:24Z | 2022-08-27T20:56:31.000Z | 2022-08-27T20:56:31 | ---
annotations_creators: []
language:
- en
language_creators: []
license: []
multilinguality:
- monolingual
pretty_name: SROIE_2019_text_recognition
size_categories:
- 10K<n<100K
source_datasets: []
tags:
- text-recognition
- recognition
task_categories:
- image-to-text
task_ids:
- image-captioning
---
We prepared this dataset using the Scanned Receipts OCR and Information Extraction (SROIE) dataset.
The SROIE dataset contains 973 scanned receipts in English.
Cropping the bounding boxes from each of the receipts to generate this text-recognition dataset resulted in 33,626 images for the train set and 18,704 images for the test set.
The text annotations for all the images inside a split are stored in a metadata.jsonl file.
usage:
from datasets import load_dataset
data = load_dataset("priyank-m/SROIE_2019_text_recognition")
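A rough sketch of accessing one example after loading. The `text` column name is an assumption based on the usual metadata.jsonl convention for text-recognition datasets; check the reported column names if it differs.

```python
from datasets import load_dataset

data = load_dataset("priyank-m/SROIE_2019_text_recognition")
sample = data["train"][0]
print(data["train"].column_names)  # typically an "image" column plus the annotation column
print(sample.get("text"))          # assumed transcription field name from metadata.jsonl
```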
source of raw SROIE dataset:
https://www.kaggle.com/datasets/urbikn/sroie-datasetv2 | [
-0.2145131230354309,
0.06661824882030487,
0.14214187860488892,
0.011775446124374866,
-0.564387321472168,
-0.269732803106308,
0.05018056929111481,
-0.6932142972946167,
0.3889056146144867,
0.8510403037071228,
-0.5372226238250732,
-0.3139785826206207,
-0.6132219433784485,
0.599510133266449,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
zishuod/pokemon-icons | zishuod | 2022-09-24T15:35:39Z | 31 | 2 | null | [
"task_categories:image-classification",
"license:mit",
"pokemon",
"region:us"
] | 2022-09-24T15:35:39Z | 2022-09-24T15:12:08.000Z | 2022-09-24T15:12:08 | ---
annotations_creators: []
language: []
language_creators: []
license:
- mit
multilinguality: []
pretty_name: pokemon-icons
size_categories: []
source_datasets: []
tags:
- pokemon
task_categories:
- image-classification
task_ids: []
---
# Dataset Card for pokemon-icons
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
Pokemon Icons. Most of them are collected and cropped from screenshots captured in Pokémon Sword and Shield.
### Supported Tasks and Leaderboards
Image classification | [
-0.47401848435401917,
-0.24664179980754852,
0.08400889486074448,
0.2094624638557434,
-0.28344640135765076,
0.20860180258750916,
0.02527589537203312,
-0.27369770407676697,
0.9945322275161743,
0.5554587841033936,
-1.0348048210144043,
-0.9376322031021118,
-0.6298837065696716,
0.23159021139144... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
ysharma/short_jokes_embeds | ysharma | 2022-09-28T21:27:36Z | 31 | 2 | null | [
"region:us"
] | 2022-09-28T21:27:36Z | 2022-09-28T20:30:34.000Z | 2022-09-28T20:30:34 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622264862060547,
0.43461528420448303,
-0.52829909324646,
0.7012971639633179,
0.7915720343589783,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104477167129517,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
vesteinn/FC3 | vesteinn | 2023-03-23T15:51:34Z | 31 | 1 | null | [
"language:fo",
"license:cc",
"region:us"
] | 2023-03-23T15:51:34Z | 2022-09-30T12:09:39.000Z | 2022-09-30T12:09:39 | ---
license: cc
language:
- fo
pretty_name: FC3
---
This is the Faroese Common Crawl corpus, the largest dataset of monolingual Faroese text. It was extracted from the Common Crawl.
If you find this dataset useful, please cite
```
@inproceedings{snaebjarnarson-etal-2023-transfer,
title = "{T}ransfer to a Low-Resource Language via Close Relatives: The Case Study on Faroese",
author = "Snæbjarnarson, Vésteinn and
Simonsen, Annika and
Glavaš, Goran and
Vulić, Ivan",
booktitle = "Proceedings of the 24th Nordic Conference on Computational Linguistics (NoDaLiDa)",
month = "may 22--24",
year = "2023",
address = "Tórshavn, Faroe Islands",
publisher = {Link{\"o}ping University Electronic Press, Sweden},
}
``` | [
-0.30328115820884705,
-0.33026567101478577,
0.305109441280365,
0.06970402598381042,
-0.3995097875595093,
-0.12400413304567337,
-0.4057167172431946,
-0.3993832468986511,
0.7658126354217529,
0.6238425374031067,
-0.5812488794326782,
-0.5901422500610352,
-0.3474313020706177,
0.5341500043869019... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
arbml/Arabic_RC | arbml | 2022-10-05T12:58:05Z | 31 | 1 | null | [
"region:us"
] | 2022-10-05T12:58:05Z | 2022-10-05T12:58:00.000Z | 2022-10-05T12:58:00 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622264862060547,
0.43461528420448303,
-0.52829909324646,
0.7012971639633179,
0.7915720343589783,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104477167129517,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
brendenc/celeb-identities | brendenc | 2022-10-09T02:33:12Z | 31 | 0 | null | [
"region:us"
] | 2022-10-09T02:33:12Z | 2022-10-09T02:31:19.000Z | 2022-10-09T02:31:19 | This is a small dataset containing celebrity faces. This dataset was created for educational purposes and is far too small for any sort of model training. However, these images can be used for demo examples or other educational purposes. | [
-0.6049986481666565,
-0.533681333065033,
-0.21821318566799164,
-0.05224870145320892,
-0.06253568083047867,
-0.11097712069749832,
0.20828336477279663,
-0.054067276418209076,
0.5594276785850525,
0.7035158276557922,
-0.9865406155586243,
-0.0943635031580925,
-0.4488174319267273,
0.023765172809... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
arbml/ArSAS | arbml | 2022-10-15T22:51:35Z | 31 | 1 | null | [
"region:us"
] | 2022-10-15T22:51:35Z | 2022-10-15T22:51:23.000Z | 2022-10-15T22:51:23 | ---
dataset_info:
features:
- name: '#Tweet_ID'
dtype: string
- name: Tweet_text
dtype: string
- name: Topic
dtype: string
- name: Sentiment_label_confidence
dtype: string
- name: Speech_act_label
dtype: string
- name: Speech_act_label_confidence
dtype: string
- name: label
dtype:
class_label:
names:
0: Negative
1: Neutral
2: Positive
3: Mixed
splits:
- name: train
num_bytes: 6147723
num_examples: 19897
download_size: 2998319
dataset_size: 6147723
---
# Dataset Card for "ArSAS"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.6332305073738098,
-0.08003160357475281,
0.397285521030426,
0.1806136965751648,
-0.17891322076320648,
0.0009671934531070292,
0.7022755146026611,
-0.23288103938102722,
0.9767109751701355,
0.43502387404441833,
-0.7318122386932373,
-0.706161618232727,
-0.4791927933692932,
-0.258531987667083... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
bigbio/n2c2_2018_track2 | bigbio | 2022-12-22T15:46:01Z | 31 | 3 | null | [
"multilinguality:monolingual",
"language:en",
"license:other",
"region:us"
] | 2022-12-22T15:46:01Z | 2022-11-13T22:10:49.000Z | 2022-11-13T22:10:49 |
---
language:
- en
bigbio_language:
- English
license: other
multilinguality: monolingual
bigbio_license_shortname: DUA
pretty_name: n2c2 2018 ADE
homepage: https://portal.dbmi.hms.harvard.edu/projects/n2c2-nlp/
bigbio_pubmed: False
bigbio_public: False
bigbio_tasks:
- NAMED_ENTITY_RECOGNITION
- RELATION_EXTRACTION
---
# Dataset Card for n2c2 2018 ADE
## Dataset Description
- **Homepage:** https://portal.dbmi.hms.harvard.edu/projects/n2c2-nlp/
- **Pubmed:** False
- **Public:** False
- **Tasks:** NER,RE
The National NLP Clinical Challenges (n2c2), organized in 2018, continued the
legacy of i2b2 (Informatics for Biology and the Bedside), adding 2 new tracks and 2
new sets of data to the shared tasks organized since 2006. Track 2 of 2018
n2c2 shared tasks focused on the extraction of medications, with their signature
information, and adverse drug events (ADEs) from clinical narratives.
This track built on our previous medication challenge, but added a special focus on ADEs.
ADEs are injuries resulting from a medical intervention related to a drugs and
can include allergic reactions, drug interactions, overdoses, and medication errors.
Collectively, ADEs are estimated to account for 30% of all hospital adverse
events; however, ADEs are preventable. Identifying potential drug interactions,
overdoses, allergies, and errors at the point of care and alerting the caregivers of
potential ADEs can improve health delivery, reduce the risk of ADEs, and improve health
outcomes.
A step in this direction requires processing narratives of clinical records
that often elaborate on the medications given to a patient, as well as the known
allergies, reactions, and adverse events of the patient. Extraction of this information
from narratives complements the structured medication information that can be
obtained from prescriptions, allowing a more thorough assessment of potential ADEs
before they happen.
The 2018 n2c2 shared task Track 2, hereon referred to as the ADE track,
tackled these natural language processing tasks in 3 different steps,
which we refer to as tasks:
1. Concept Extraction: identification of concepts related to medications,
their signature information, and ADEs
2. Relation Classification: linking the previously mentioned concepts to
their medication by identifying relations on gold standard concepts
3. End-to-End: building end-to-end systems that process raw narrative text
to discover concepts and find relations of those concepts to their medications
Shared tasks provide a venue for head-to-head comparison of systems developed
for the same task and on the same data, allowing researchers to identify the state
of the art in a particular task, learn from it, and build on it.
## Citation Information
```
@article{DBLP:journals/jamia/HenryBFSU20,
author = {
Sam Henry and
Kevin Buchan and
Michele Filannino and
Amber Stubbs and
Ozlem Uzuner
},
title = {2018 n2c2 shared task on adverse drug events and medication extraction
in electronic health records},
journal = {J. Am. Medical Informatics Assoc.},
volume = {27},
number = {1},
pages = {3--12},
year = {2020},
url = {https://doi.org/10.1093/jamia/ocz166},
doi = {10.1093/jamia/ocz166},
timestamp = {Sat, 30 May 2020 19:53:56 +0200},
biburl = {https://dblp.org/rec/journals/jamia/HenryBFSU20.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
| [
0.022800801321864128,
-0.5508396029472351,
0.6379913687705994,
0.05779650807380676,
0.0379415787756443,
0.009119994938373566,
-0.07196526229381561,
-0.9627037048339844,
0.3597988188266754,
0.7047105431556702,
-0.6141536831855774,
-0.6458863019943237,
-0.9038103818893433,
0.1191126182675361... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
sagnikrayc/snli-cf-kaushik | sagnikrayc | 2022-11-21T22:34:23Z | 31 | 0 | null | [
"task_categories:text-classification",
"task_ids:natural-language-inference",
"task_ids:multi-input-text-classification",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:extended|snli",
"language:en",
... | 2022-11-21T22:34:23Z | 2022-11-21T18:28:12.000Z | 2022-11-21T18:28:12 | ---
annotations_creators:
- crowdsourced
language_creators:
- crowdsourced
language:
- en
license:
- cc-by-4.0
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- extended|snli
task_categories:
- text-classification
task_ids:
- natural-language-inference
- multi-input-text-classification
pretty_name: Counterfactual Instances for Stanford Natural Language Inference
dataset_info:
features:
- name: premise
dtype: string
- name: hypothesis
dtype: string
- name: label
dtype: string
splits:
- name: train
num_bytes: 1771712
num_examples: 8300
- name: validation
num_bytes: 217479
num_examples: 1000
- name: test
num_bytes: 437468
num_examples: 2000
---
# Dataset Card for Counterfactually Augmented SNLI
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
## Dataset Description
- **Repository:** [Learning the Difference that Makes a Difference with Counterfactually-Augmented Data](https://github.com/acmi-lab/counterfactually-augmented-data)
- **Paper:** [Learning the Difference that Makes a Difference with Counterfactually-Augmented Data](https://openreview.net/forum?id=Sklgs0NFvr)
- **Point of Contact:** [Sagnik Ray Choudhury](mailto:sagnikrayc@gmail.com)
### Dataset Summary
The SNLI corpus (version 1.0) is a collection of 570k human-written English sentence pairs manually labeled for balanced classification with the labels entailment, contradiction, and neutral, supporting the task of natural language inference (NLI), also known as recognizing textual entailment (RTE). In the ICLR 2020 paper [Learning the Difference that Makes a Difference with Counterfactually-Augmented Data](https://openreview.net/forum?id=Sklgs0NFvr), Kaushik et. al. provided a dataset with counterfactual perturbations on the SNLI and IMDB data. This repository contains the original and counterfactual perturbations for the SNLI data, which was generated after processing the original data from [here](https://github.com/acmi-lab/counterfactually-augmented-data).
### Languages
The language in the dataset is English as spoken by users of the website Flickr and as spoken by crowdworkers from Amazon Mechanical Turk. The BCP-47 code for English is en.
## Dataset Structure
### Data Instances
For each instance, there is:
- a string for the premise,
- a string for the hypothesis,
- a label: (entailment, contradiction, neutral)
- a type: this tells whether the data point is the original SNLI data point or a counterfactual perturbation.
- an idx. The ids correspond to the original id in the SNLI data. For example, if the original SNLI instance was `4626192243.jpg#3r1e`, there will be 5 data points as follows:
```json lines
{
"idx": "4626192243.jpg#3r1e-orig",
"premise": "A man with a beard is talking on the cellphone and standing next to someone who is lying down on the street.",
"hypothesis": "A man is prone on the street while another man stands next to him.",
"label": "entailment",
"type": "original"
}
{
"idx": "4626192243.jpg#3r1e-cf-0",
"premise": "A man with a beard is talking on the cellphone and standing next to someone who is lying down on the street.",
"hypothesis": "A man is talking to his wife on the cellphone.",
"label": "neutral",
"type": "cf"
}
{
"idx": "4626192243.jpg#3r1e-cf-1",
"premise": "A man with a beard is talking on the cellphone and standing next to someone who is on the street.",
"hypothesis": "A man is prone on the street while another man stands next to him.",
"label": "neutral",
"type": "cf"
}
{
"idx": "4626192243.jpg#3r1e-cf-2",
"premise": "A man with a beard is talking on the cellphone and standing next to someone who is sitting on the street.",
"hypothesis": "A man is prone on the street while another man stands next to him.",
"label": "contradiction",
"_type": "cf"
}
{
"idx": "4626192243.jpg#3r1e-cf-3",
"premise": "A man with a beard is talking on the cellphone and standing next to someone who is lying down on the street.",
"hypothesis": "A man is alone on the street.",
"label": "contradiction",
"type": "cf"
}
```
### Data Splits
Following SNLI, this dataset also has 3 splits: _train_, _validation_, and _test_. The original paper says this:
```aidl
RP and RH, each comprised of 3332 pairs in train, 400 in validation, and 800 in test, leading to a total of 6664 pairs in train, 800 in validation, and 1600 in test in the revised dataset.
```
This means for _train_, there are 1666 original SNLI instances, and each has 4 counterfactual perturbations (from premise and hypothesis edit), leading to a total of 1666*5 = 8330 _train_ data points in this dataset. Similarly, _validation_ and _test_ has 200 and 400 original SNLI instances respectively, consequently 1000 and 2000 instances in total.
| Dataset Split | Number of Instances in Split |
|---------------|------------------------------|
| Train | 8,330 |
| Validation | 1,000 |
| Test | 2,000 |
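A minimal sketch of checking the split sizes (and, if the `type` column described above is exposed by the loaded dataset, the original-to-counterfactual ratio):

```python
from datasets import load_dataset

cf_snli = load_dataset("sagnikrayc/snli-cf-kaushik")
for split_name, ds in cf_snli.items():
    print(split_name, len(ds))
    if "type" in ds.column_names:
        originals = sum(1 for t in ds["type"] if t == "original")
        print("  originals:", originals, "counterfactuals:", len(ds) - originals)
```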
| [
-0.4337857961654663,
-0.9672380089759827,
0.351211816072464,
0.08820619434118271,
-0.103676937520504,
-0.18943625688552856,
-0.0004446125531103462,
-0.4999558925628662,
0.7711176872253418,
0.3401515781879425,
-0.6102473735809326,
-0.19939051568508148,
-0.3246592581272125,
0.198585957288742... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Javtor/rock-paper-scissors | Javtor | 2022-11-25T01:55:34Z | 31 | 0 | null | [
"region:us"
] | 2022-11-25T01:55:34Z | 2022-11-25T01:45:06.000Z | 2022-11-25T01:45:06 | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
0: paper
1: rock
2: scissors
splits:
- name: test
num_bytes: 29457688.0
num_examples: 372
- name: train
num_bytes: 196585089.6
num_examples: 2520
download_size: 229783612
dataset_size: 226042777.6
---
# Dataset Card for "rock-paper-scissors"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.4084841310977936,
-0.49334028363227844,
0.19449517130851746,
0.15525799989700317,
-0.21372586488723755,
0.061588142067193985,
0.21224719285964966,
-0.1319834291934967,
0.9437364935874939,
0.4533318281173706,
-0.9716587662696838,
-0.7153252363204956,
-0.613480269908905,
-0.31403407454490... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
lewtun/corgi | lewtun | 2022-12-19T08:45:20Z | 31 | 2 | null | [
"region:us"
] | 2022-12-19T08:45:20Z | 2022-12-19T08:44:51.000Z | 2022-12-19T08:44:51 | ---
dataset_info:
features:
- name: image
dtype: image
splits:
- name: train
num_bytes: 5590698.0
num_examples: 5
download_size: 5591635
dataset_size: 5590698.0
---
# Dataset Card for "corgi"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.7695200443267822,
-0.28606852889060974,
-0.1534671187400818,
0.3567982614040375,
-0.17417274415493011,
-0.03080780990421772,
0.2905576825141907,
-0.37911802530288696,
0.8739396929740906,
0.20980653166770935,
-0.9708335995674133,
-0.5532788634300232,
-0.4742720425128937,
-0.2021950185298... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
AresEkb/prof_standards_sbert_large_mt_nlu_ru | AresEkb | 2023-01-11T12:47:13Z | 31 | 0 | null | [
"size_categories:100K<n<1M",
"language:ru",
"region:us"
] | 2023-01-11T12:47:13Z | 2023-01-08T12:04:10.000Z | 2023-01-08T12:04:10 | ---
language:
- ru
dataset_info:
- config_name: domains
features:
- name: reg_number
dtype: string
- name: standard_name
dtype: string
- name: name
dtype: string
- name: purpose
dtype: string
- name: embeddings
sequence: float32
splits:
- name: train
num_bytes: 7293978
num_examples: 1510
download_size: 7789662
dataset_size: 7293978
- config_name: generalized_functions
features:
- name: generalized_function_id
dtype: string
- name: reg_number
dtype: string
- name: name
dtype: string
- name: embeddings
sequence: float32
splits:
- name: train
num_bytes: 24536711
num_examples: 5520
download_size: 26728782
dataset_size: 24536711
- config_name: jobs
features:
- name: generalized_function_id
dtype: string
- name: reg_number
dtype: string
- name: name
dtype: string
- name: embeddings
sequence: float32
splits:
- name: train
num_bytes: 64746734
num_examples: 14991
download_size: 68906153
dataset_size: 64746734
- config_name: particular_functions
features:
- name: generalized_function_id
dtype: string
- name: particular_function_id
dtype: string
- name: reg_number
dtype: string
- name: name
dtype: string
- name: embeddings
sequence: float32
splits:
- name: train
num_bytes: 83618997
num_examples: 18730
download_size: 89697328
dataset_size: 83618997
- config_name: actions
features:
- name: generalized_function_id
dtype: string
- name: particular_function_id
dtype: string
- name: reg_number
dtype: string
- name: name
dtype: string
- name: embeddings
sequence: float32
splits:
- name: train
num_bytes: 642320840
num_examples: 143024
download_size: 680158888
dataset_size: 642320840
- config_name: skills
features:
- name: generalized_function_id
dtype: string
- name: particular_function_id
dtype: string
- name: reg_number
dtype: string
- name: name
dtype: string
- name: embeddings
sequence: float32
splits:
- name: train
num_bytes: 724280125
num_examples: 161473
download_size: 747889457
dataset_size: 724280125
- config_name: knowledges
features:
- name: generalized_function_id
dtype: string
- name: particular_function_id
dtype: string
- name: reg_number
dtype: string
- name: name
dtype: string
- name: embeddings
sequence: float32
splits:
- name: train
num_bytes: 1041374369
num_examples: 234283
download_size: 1022695670
dataset_size: 1041374369
pretty_name: Professional Standards
size_categories:
- 100K<n<1M
--- | [
-0.1285335123538971,
-0.1861683875322342,
0.6529128551483154,
0.49436232447624207,
-0.19319400191307068,
0.23607441782951355,
0.36072009801864624,
0.05056373029947281,
0.5793656706809998,
0.7400146722793579,
-0.650810182094574,
-0.23784008622169495,
-0.7102247476577759,
-0.0478255338966846... | null | null | null | null | null | null | null | null | null | null | null | null | null |