id stringlengths 2 115 | author stringlengths 2 42 ⌀ | last_modified timestamp[us, tz=UTC] | downloads int64 0 8.87M | likes int64 0 3.84k | paperswithcode_id stringlengths 2 45 ⌀ | tags list | lastModified timestamp[us, tz=UTC] | createdAt stringlengths 24 24 | key stringclasses 1 value | created timestamp[us] | card stringlengths 1 1.01M | embedding list | library_name stringclasses 21 values | pipeline_tag stringclasses 27 values | mask_token null | card_data null | widget_data null | model_index null | config null | transformers_info null | spaces null | safetensors null | transformersInfo null | modelId stringlengths 5 111 ⌀ | embeddings list |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Kirili4ik/yandex_jobs | Kirili4ik | 2022-09-03T17:55:00Z | 17 | 4 | climate-fever | [
"task_categories:text-generation",
"task_categories:summarization",
"task_categories:multiple-choice",
"task_ids:language-modeling",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:n<1K",
"source_datasets:original",
"language:ru",... | 2022-09-03T17:55:00Z | 2022-09-03T17:22:02.000Z | 2022-09-03T17:22:02 | ---
annotations_creators:
- expert-generated
language:
- ru
language_creators:
- found
license:
- unknown
multilinguality:
- monolingual
paperswithcode_id: climate-fever
pretty_name: yandex_jobs
size_categories:
- n<1K
source_datasets:
- original
tags:
- vacancies
- jobs
- ru
- yandex
task_categories:
- text-generation
- summarization
- multiple-choice
task_ids:
- language-modeling
---
# Dataset Card for Yandex_Jobs
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Contributions](#contributions)
## Dataset Description
### Dataset Summary
This is a dataset of more than 600 IT vacancies in Russian, parsed from the Telegram channel https://t.me/ya_jobs. All the texts are fully structured, with no missing values.
### Supported Tasks and Leaderboards
- `text-generation` using the `Raw text` column.
- `summarization`: generating the `Header` from the full vacancy text.
- `multiple-choice`: predicting the hashtags (choosing several from all hashtags available in the dataset); see the loading sketch below.
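A minimal sketch of framing these setups with the `datasets` library; the `train` split name is an assumption, and field names follow the [Data Fields](#data-fields) section below:
```python
from datasets import load_dataset

# The "train" split name is an assumption about this repo.
ds = load_dataset("Kirili4ik/yandex_jobs", split="train")

# Summarization: (document, summary) pairs from the raw post and its title.
pairs = [(ex["Raw text"], ex["Header"]) for ex in ds]

# Multiple choice: the label space is the set of all hashtags in the dataset.
all_tags = sorted({tag for ex in ds for tag in ex["Hashtags"].split()})
```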
### Languages
The text in the dataset is only in Russian. The associated BCP-47 code is `ru`.
## Dataset Structure
### Data Instances
The data is parsed from vacancies of the Russian IT company [Yandex](https://ya.ru/).
An example from the set looks as follows:
```
{'Header': 'Разработчик интерфейсов в группу разработки спецпроектов',
'Emoji': '🎳',
'Description': 'Конструктор лендингов — это инструмент Яндекса, который позволяет пользователям создавать лендинги и турбо-лендинги для Яндекс.Директа. Турбо — режим ускоренной загрузки страниц для показа на мобильных. У нас современный стек, смелые планы и высокая динамика.\nМы ищем опытного и открытого новому фронтенд-разработчика.',
'Requirements': '• отлично знаете JavaScript
• разрабатывали на Node.js, применяли фреймворк Express
• умеете создавать веб-приложения на React + Redux
• знаете HTML и CSS, особенности их отображения в браузерах',
'Tasks': '• разрабатывать интерфейсы',
'Pluses': '• писали интеграционные, модульные, функциональные или браузерные тесты
• умеете разворачивать и администрировать веб-сервисы: собирать Docker-образы, настраивать мониторинги, выкладывать в облачные системы, отлаживать в продакшене
• работали с реляционными БД PostgreSQL',
'Hashtags': '#фронтенд #турбо #JS',
'Link': 'https://ya.cc/t/t7E3UsmVSKs6L',
'Raw text': 'Разработчик интерфейсов в группу разработки спецпроектов🎳
Конструктор лендингов — это инструмент Яндекса, который позволяет пользователям создавать лендинги и турбо-лендинги для Яндекс.Директа. Турбо — режим ускоренной загрузки страниц для показа на мобильных. У нас современный стек, смелые планы и высокая динамика.
Мы ищем опытного и открытого новому фронтенд-разработчика.
Мы ждем, что вы:
• отлично знаете JavaScript
• разрабатывали на Node.js, применяли фреймворк Express
• умеете создавать веб-приложения на React + Redux
• знаете HTML и CSS, особенности их отображения в браузерах
Что нужно делать:
• разрабатывать интерфейсы
Будет плюсом, если вы:
• писали интеграционные, модульные, функциональные или браузерные тесты
• умеете разворачивать и администрировать веб-сервисы: собирать Docker-образы, настраивать мониторинги, выкладывать в облачные системы, отлаживать в продакшене
• работали с реляционными БД PostgreSQL
https://ya.cc/t/t7E3UsmVSKs6L
#фронтенд #турбо #JS'
}
```
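The `Raw text` field is composed from the other fields. A hedged sketch of that composition, inferred only from the single example above (the Russian section headers are assumptions about the general template):
```python
def build_raw_text(ex: dict) -> str:
    """Reassemble the channel post from the structured fields."""
    return "\n".join([
        ex["Header"] + ex["Emoji"],  # title followed by its emoji
        ex["Description"],
        "Мы ждем, что вы:",          # "We expect that you:"
        ex["Requirements"],
        "Что нужно делать:",         # "What you will do:"
        ex["Tasks"],
        "Будет плюсом, если вы:",    # "It is a plus if you:"
        ex["Pluses"],
        ex["Link"],
        ex["Hashtags"],
    ])
```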
### Data Fields
- `Header`: a string with the position title (str)
- `Emoji`: the emoji used at the end of the position title (usually associated with the position) (str)
- `Description`: a short description of the vacancy (str)
- `Requirements`: a few required technologies/programming languages/experience items (str)
- `Tasks`: examples of the tasks of the job position (str)
- `Pluses`: a few points that are great for the applicant to have (technologies/experience/etc.) (str)
- `Hashtags`: hashtags associated with the job (usually programming languages) (str)
- `Link`: a link to the full job description (there may be more information there, but it is not checked) (str)
- `Raw text`: the raw text with all the formatting from the channel, composed from the other fields (str)
### Data Splits
There are not enough examples yet to split the data into train/test/validation sets, in my opinion.
## Dataset Creation
The data was downloaded and parsed from the Telegram channel https://t.me/ya_jobs on 03.09.2022. All unparsed examples and those missing any field were deleted (going from 1600 vacancies to only 600 without any missing fields such as emojis or links).
## Considerations for Using the Data
These vacancies come from a single IT company (Yandex). They can therefore be quite specific and probably cannot be generalized to vacancies in general, or even to IT vacancies in general.
## Contributions
- **Point of Contact and Author:** [Kirill Gelvan](telegram: @kirili4ik) | [
-0.5282834768295288,
-0.6446081399917603,
0.38719087839126587,
0.11431456357240677,
-0.4549707770347595,
0.1851181834936142,
-0.0739523395895958,
-0.22938233613967896,
0.8808434009552002,
0.11429755389690399,
-0.7314744591712952,
-1.0360755920410156,
-0.290056973695755,
-0.1039827689528465... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
fusing/wikiart_captions | fusing | 2022-09-23T11:50:28Z | 17 | 3 | null | [
"region:us"
] | 2022-09-23T11:50:28Z | 2022-09-03T22:44:00.000Z | 2022-09-03T22:44:00 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622263669967651,
0.43461522459983826,
-0.52829909324646,
0.7012971639633179,
0.7915719747543335,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104475975036621,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
victor/autotrain-data-satellite-image-classification | victor | 2022-09-05T09:30:13Z | 17 | 1 | null | [
"task_categories:image-classification",
"region:us"
] | 2022-09-05T09:30:13Z | 2022-09-05T08:58:49.000Z | 2022-09-05T08:58:49 | ---
task_categories:
- image-classification
---
# AutoTrain Dataset for project: satellite-image-classification
## Dataset Description
This dataset has been automatically processed by AutoTrain for project satellite-image-classification.
### Languages
The BCP-47 code for the dataset's language is unk.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"image": "<256x256 CMYK PIL image>",
"target": 0
},
{
"image": "<256x256 CMYK PIL image>",
"target": 0
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"image": "Image(decode=True, id=None)",
"target": "ClassLabel(num_classes=1, names=['cloudy'], id=None)"
}
```
### Dataset Splits
This dataset is split into train and validation splits. The split sizes are as follows:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 1200 |
| valid | 300 |
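A minimal loading sketch, assuming the repo's data files resolve automatically with the `datasets` library and that split names match the table above:
```python
from datasets import load_dataset

ds = load_dataset("victor/autotrain-data-satellite-image-classification")

sample = ds["train"][0]
print(sample["image"])   # a 256x256 CMYK PIL image
print(sample["target"])  # class index, e.g. 0 for "cloudy"
```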
| [
-0.6490024328231812,
0.25595349073410034,
0.09875813871622086,
0.2557413876056671,
-0.5018361210823059,
0.2920009195804596,
-0.24754925072193146,
-0.3205113112926483,
-0.1442556232213974,
0.4424184262752533,
-0.6170730590820312,
-0.8327760696411133,
-0.5955919027328491,
0.06239977106451988... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
cjvt/solar3 | cjvt | 2022-10-21T07:35:45Z | 17 | 0 | null | [
"task_categories:text2text-generation",
"task_categories:other",
"annotations_creators:expert-generated",
"language_creators:other",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:sl",
"license:cc-by-nc-sa-4.0",
"gram... | 2022-10-21T07:35:45Z | 2022-09-07T09:16:23.000Z | 2022-09-07T09:16:23 | ---
annotations_creators:
- expert-generated
language_creators:
- other
language:
- sl
license:
- cc-by-nc-sa-4.0
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
- 1K<n<10K
source_datasets:
- original
task_categories:
- text2text-generation
- other
task_ids: []
pretty_name: solar3
tags:
- grammatical-error-correction
- other-token-classification-of-text-errors
---
# Dataset Card for solar3
### Dataset Summary
Šolar* is a developmental corpus of 5485 school texts (e.g., essays), written by students in Slovenian secondary schools
(age 15-19) and pupils in the 7th-9th grade of primary school (13-15), with a small percentage also from the 6th grade.
Part of the corpus (1516 texts) is annotated with teachers' corrections using a system of labels described in the
document available at https://www.clarin.si/repository/xmlui/bitstream/handle/11356/1589/Smernice-za-oznacevanje-korpusa-Solar_V1.1.pdf (in Slovenian).
(*) Pronounce "š" as "sh" in "shoe".
By default the dataset is provided at **sentence-level** (125867 instances): each instance contains a source (the original) and a target (the corrected) sentence. Note that either the source or the target sentence in an instance may be missing - this usually happens when a source sentence is marked as redundant or when a new sentence is added by the teacher. Additionally, a source or a target sentence may appear in multiple instances - for example, this happens when one sentence gets divided into multiple sentences.
There is also an option to aggregate the instances at the **document-level** or **paragraph-level**
by explicitly providing the correct config:
```
datasets.load_dataset("cjvt/solar3", "paragraph_level")`
datasets.load_dataset("cjvt/solar3", "document_level")`
```
### Supported Tasks and Leaderboards
Error correction, e.g., at token/sequence level, as token/sequence classification or text2text generation.
### Languages
Slovenian.
## Dataset Structure
### Data Instances
A sample instance from the dataset:
```json
{
'id_doc': 'solar1',
'doc_title': 'KUS-G-slo-1-GO-E-2009-10001',
'is_manually_validated': True,
'src_tokens': ['”', 'Ne', 'da', 'sovražim', ',', 'da', 'ljubim', 'sem', 'na', 'svetu', '”', ',', 'izreče', 'Antigona', 'v', 'bran', 'kralju', 'Kreonu', 'za', 'svoje', 'nasprotno', 'mišljenje', 'pred', 'smrtjo', '.'],
'src_ling_annotations': {
# truncated for conciseness
'lemma': ['”', 'ne', 'da', 'sovražiti', ...],
'ana': ['mte:U', 'mte:L', 'mte:Vd', ...],
'msd': ['UPosTag=PUNCT', 'UPosTag=PART|Polarity=Neg', 'UPosTag=SCONJ', ...],
'ne_tag': [..., 'O', 'B-PER', 'O', ...],
'space_after': [False, True, True, False, ...]
},
'tgt_tokens': ['„', 'Ne', 'da', 'sovražim', ',', 'da', 'ljubim', 'sem', 'na', 'svetu', ',', '”', 'izreče', 'Antigona', 'sebi', 'v', 'bran', 'kralju', 'Kreonu', 'za', 'svoje', 'nasprotno', 'mišljenje', 'pred', 'smrtjo', '.'],
# omitted for conciseness, the format is the same as in 'src_ling_annotations'
'tgt_ling_annotations': {...},
'corrections': [
{'idx_src': [0], 'idx_tgt': [0], 'corr_types': ['Z/LOČ/nerazvrščeno']},
{'idx_src': [10, 11], 'idx_tgt': [10, 11], 'corr_types': ['Z/LOČ/nerazvrščeno']},
{'idx_src': [], 'idx_tgt': [14], 'corr_types': ['O/KAT/povratnost']}
]
}
```
The instance represents corrections in the document 'solar1' (`id_doc`), which were manually assigned/validated (`is_manually_validated`). More concretely, the source sentence contains three errors (as indicated by the three elements in `corrections`):
- a punctuation change: '”' -> '„';
- a punctuation change: ['”', ','] -> [',', '”'] (i.e. comma inside the quote, not outside);
- addition of a new word: 'sebi'.
### Data Fields
- `id_doc`: a string containing the identifying name of the document in which the sentence appears;
- `doc_title`: a string containing the assigned document title;
- `is_manually_validated`: a bool indicating whether the document in which the sentence appears was reviewed by a teacher;
- `src_tokens`: words in the source sentence (`[]` if there is no source sentence);
- `src_ling_annotations`: a dict containing the lemmas (key `"lemma"`), morphosyntactic descriptions using UD (key `"msd"`) and JOS/MULTEXT-East (key `"ana"`) specification, named entity tags encoded using IOB2 (key `"ne_tag"`) for the source tokens (**automatically annotated**), and spacing information (key `"space_after"`), i.e. whether there is a whitespace after each token;
- `tgt_tokens`: words in the target sentence (`[]` if there is no target sentence);
- `tgt_ling_annotations`: a dict containing the lemmas (key `"lemma"`), morphosyntactic descriptions using UD (key `"msd"`) and JOS/MULTEXT-East (key `"ana"`) specification, named entity tags encoded using IOB2 (key `"ne_tag"`) for the target tokens (**automatically annotated**), and spacing information (key `"space_after"`), i.e. whether there is a whitespace after each token;
- `corrections`: a list of the corrections, with each correction represented with a dictionary, containing the indices of the source tokens involved (`idx_src`), target tokens involved (`idx_tgt`), and the categories of the corrections made (`corr_types`). Please note that there can be multiple assigned categories for one annotated correction, in which case `len(corr_types) > 1`.
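A minimal sketch of walking the `corrections` structure to pair source and target tokens; the `train` split name is an assumption:
```python
from datasets import load_dataset

# Sentence-level instances are the default config.
ds = load_dataset("cjvt/solar3", split="train")

ex = ds[0]
for corr in ex["corrections"]:
    src = [ex["src_tokens"][i] for i in corr["idx_src"]]  # [] for added words
    tgt = [ex["tgt_tokens"][i] for i in corr["idx_tgt"]]  # [] for removed words
    print(src, "->", tgt, corr["corr_types"])
```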
## Dataset Creation
The Developmental corpus Šolar consists of 5,485 texts written by students in Slovenian secondary schools (age 15-19) and pupils in the 7th-9th grade of primary school (13-15), with a small percentage also from the 6th grade. The information on school (elementary or secondary), subject, level (grade or year), type of text, region, and date of production is provided for each text. School essays form the majority of the corpus while other material includes texts created during lessons, such as text recapitulations or descriptions, examples of formal applications, etc.
Part of the corpus (1516 texts) is annotated with teachers' corrections using a system of labels described in the attached document (in Slovenian). Teacher corrections were part of the original files and reflect real classroom situations of essay marking. Corrections were then inserted into texts by annotators and subsequently categorized. Due to the annotations being gathered in a practical (i.e. classroom) setting, only the most relevant errors may sometimes be annotated, e.g., not all incorrectly placed commas are annotated if there is a bigger issue in the text.
## Additional Information
### Dataset Curators
Špela Arhar Holdt; et al. (please see http://hdl.handle.net/11356/1589 for the full list)
### Licensing Information
CC BY-NC-SA 4.0.
### Citation Information
```
@misc{solar3,
title = {Developmental corpus {\v S}olar 3.0},
author = {Arhar Holdt, {\v S}pela and Rozman, Tadeja and Stritar Ku{\v c}uk, Mojca and Krek, Simon and Krap{\v s} Vodopivec, Irena and Stabej, Marko and Pori, Eva and Goli, Teja and Lavri{\v c}, Polona and Laskowski, Cyprian and Kocjan{\v c}i{\v c}, Polonca and Klemenc, Bojan and Krsnik, Luka and Kosem, Iztok},
url = {http://hdl.handle.net/11356/1589},
note = {Slovenian language resource repository {CLARIN}.{SI}},
year = {2022}
}
```
### Contributions
Thanks to [@matejklemen](https://github.com/matejklemen) for adding this dataset.
| [
-0.21324068307876587,
-0.6184303760528564,
0.35535597801208496,
0.23039090633392334,
-0.14030350744724274,
-0.1309417486190796,
-0.42633122205734253,
-0.19894468784332275,
0.18741917610168457,
0.5160306096076965,
-0.565756618976593,
-1.0025557279586792,
-0.5607335567474365,
0.5103782415390... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
autoevaluate/autoeval-eval-glue-mrpc-9038ab-1509054845 | autoevaluate | 2022-09-19T14:49:33Z | 17 | 0 | null | [
"autotrain",
"evaluation",
"region:us"
] | 2022-09-19T14:49:33Z | 2022-09-19T14:49:04.000Z | 2022-09-19T14:49:04 | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- glue
eval_info:
task: natural_language_inference
model: JeremiahZ/roberta-base-mrpc
metrics: []
dataset_name: glue
dataset_config: mrpc
dataset_split: validation
col_mapping:
text1: sentence1
text2: sentence2
target: label
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Natural Language Inference
* Model: JeremiahZ/roberta-base-mrpc
* Dataset: glue
* Config: mrpc
* Split: validation
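The raw prediction files can be fetched directly from the Hub; a minimal sketch using `huggingface_hub`:
```python
from huggingface_hub import snapshot_download

# repo_type="dataset" is required because this is a dataset repository.
local_dir = snapshot_download(
    repo_id="autoevaluate/autoeval-eval-glue-mrpc-9038ab-1509054845",
    repo_type="dataset",
)
print(local_dir)  # local path containing the prediction files
```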
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@JeremiahZ](https://huggingface.co/JeremiahZ) for evaluating this model. | [
-0.41470879316329956,
-0.431429386138916,
0.26317670941352844,
0.1350131630897522,
0.07749743014574051,
-0.112503282725811,
-0.1332995742559433,
-0.4135621190071106,
0.15182487666606903,
0.5398889780044556,
-1.0552570819854736,
-0.2660587430000305,
-0.6862477660179138,
0.04163185507059097,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
khalidx199/k199 | khalidx199 | 2022-09-28T16:49:21Z | 17 | 0 | null | [
"license:apache-2.0",
"region:us"
] | 2022-09-28T16:49:21Z | 2022-09-28T16:47:43.000Z | 2022-09-28T16:47:43 | ---
license: apache-2.0
---
| [
-0.12853392958641052,
-0.18616779148578644,
0.6529127955436707,
0.49436280131340027,
-0.19319361448287964,
0.23607419431209564,
0.36072003841400146,
0.050563063472509384,
0.579365611076355,
0.7400140762329102,
-0.6508104205131531,
-0.23783954977989197,
-0.7102249264717102,
-0.0478260256350... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
ayesha08/pake-freelancer-dataset | ayesha08 | 2022-09-28T19:54:04Z | 17 | 0 | null | [
"region:us"
] | 2022-09-28T19:54:04Z | 2022-09-28T18:42:15.000Z | 2022-09-28T18:42:15 | Entry not found | [
-0.32276472449302673,
-0.22568407654762268,
0.8622258901596069,
0.4346148371696472,
-0.5282984972000122,
0.7012965679168701,
0.7915717363357544,
0.07618629932403564,
0.7746022939682007,
0.2563222646713257,
-0.785281777381897,
-0.22573848068714142,
-0.9104482531547546,
0.5715669393539429,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Kitbutows/OLS | Kitbutows | 2022-09-29T01:24:35Z | 17 | 0 | null | [
"region:us"
] | 2022-09-29T01:24:35Z | 2022-09-29T01:20:46.000Z | 2022-09-29T01:20:46 | Entry not found | [
-0.32276472449302673,
-0.22568407654762268,
0.8622258901596069,
0.4346148371696472,
-0.5282984972000122,
0.7012965679168701,
0.7915717363357544,
0.07618629932403564,
0.7746022939682007,
0.2563222646713257,
-0.785281777381897,
-0.22573848068714142,
-0.9104482531547546,
0.5715669393539429,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
badmaiky/images | badmaiky | 2022-09-29T20:22:21Z | 17 | 0 | null | [
"license:openrail",
"region:us"
] | 2022-09-29T20:22:21Z | 2022-09-29T20:20:42.000Z | 2022-09-29T20:20:42 | ---
license: openrail
---
| [
-0.12853367626667023,
-0.18616794049739838,
0.6529126763343811,
0.4943627417087555,
-0.19319313764572144,
0.23607443273067474,
0.36071979999542236,
0.05056338757276535,
0.5793654322624207,
0.7400138974189758,
-0.6508103013038635,
-0.23783987760543823,
-0.710224986076355,
-0.047825977206230... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
arbml/emoji_sentiment_lexicon | arbml | 2022-11-03T14:11:13Z | 17 | 0 | null | [
"region:us"
] | 2022-11-03T14:11:13Z | 2022-10-05T22:08:15.000Z | 2022-10-05T22:08:15 | Entry not found | [
-0.3227649927139282,
-0.225684255361557,
0.862226128578186,
0.43461498618125916,
-0.5282987952232361,
0.7012963891029358,
0.7915717363357544,
0.07618629932403564,
0.7746025919914246,
0.2563219666481018,
-0.7852816581726074,
-0.2257382869720459,
-0.9104480743408203,
0.5715669393539429,
-0... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
davanstrien/news_nav_test | davanstrien | 2022-10-06T07:00:11Z | 17 | 0 | null | [
"region:us"
] | 2022-10-06T07:00:11Z | 2022-10-06T07:00:02.000Z | 2022-10-06T07:00:02 | Entry not found | [
-0.3227649927139282,
-0.225684255361557,
0.862226128578186,
0.43461498618125916,
-0.5282987952232361,
0.7012963891029358,
0.7915717363357544,
0.07618629932403564,
0.7746025919914246,
0.2563219666481018,
-0.7852816581726074,
-0.2257382869720459,
-0.9104480743408203,
0.5715669393539429,
-0... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
arbml/Rewayatech | arbml | 2022-11-03T15:07:56Z | 17 | 0 | null | [
"region:us"
] | 2022-11-03T15:07:56Z | 2022-10-06T13:46:17.000Z | 2022-10-06T13:46:17 | Entry not found | [
-0.3227649927139282,
-0.225684255361557,
0.862226128578186,
0.43461498618125916,
-0.5282987952232361,
0.7012963891029358,
0.7915717363357544,
0.07618629932403564,
0.7746025919914246,
0.2563219666481018,
-0.7852816581726074,
-0.2257382869720459,
-0.9104480743408203,
0.5715669393539429,
-0... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
autoevaluate/autoeval-eval-inverse-scaling__hindsight-neglect-10shot-inverse-scali-383fe9-1695459606 | autoevaluate | 2022-10-08T13:27:32Z | 17 | 0 | null | [
"autotrain",
"evaluation",
"region:us"
] | 2022-10-08T13:27:32Z | 2022-10-08T13:23:51.000Z | 2022-10-08T13:23:51 | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- inverse-scaling/hindsight-neglect-10shot
eval_info:
task: text_zero_shot_classification
model: inverse-scaling/opt-125m_eval
metrics: []
dataset_name: inverse-scaling/hindsight-neglect-10shot
dataset_config: inverse-scaling--hindsight-neglect-10shot
dataset_split: train
col_mapping:
text: prompt
classes: classes
target: answer_index
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: inverse-scaling/opt-125m_eval
* Dataset: inverse-scaling/hindsight-neglect-10shot
* Config: inverse-scaling--hindsight-neglect-10shot
* Split: train
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@MicPie](https://huggingface.co/MicPie) for evaluating this model. | [
-0.31532540917396545,
-0.3453640043735504,
0.3547316789627075,
0.12090998142957687,
-0.012137934565544128,
-0.2826985716819763,
-0.03918003663420677,
-0.39210546016693115,
0.07222352176904678,
0.37282124161720276,
-0.9972697496414185,
-0.2147030234336853,
-0.7010112404823303,
-0.0221445746... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
autoevaluate/autoeval-eval-phpthinh__exampletx-constructive-7f6ba0-1708559815 | autoevaluate | 2022-10-10T05:25:28Z | 17 | 0 | null | [
"autotrain",
"evaluation",
"region:us"
] | 2022-10-10T05:25:28Z | 2022-10-10T05:11:37.000Z | 2022-10-10T05:11:37 | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- phpthinh/exampletx
eval_info:
task: text_zero_shot_classification
model: bigscience/bloom-3b
metrics: []
dataset_name: phpthinh/exampletx
dataset_config: constructive
dataset_split: test
col_mapping:
text: text
classes: classes
target: target
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: bigscience/bloom-3b
* Dataset: phpthinh/exampletx
* Config: constructive
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@phpthinh](https://huggingface.co/phpthinh) for evaluating this model. | [
-0.23774395883083344,
-0.38993799686431885,
0.5601527690887451,
0.2286241501569748,
-0.06814421713352203,
-0.11144193261861801,
-0.036944806575775146,
-0.458942711353302,
0.016140591353178024,
0.3069542348384857,
-0.9326968193054199,
-0.33131471276283264,
-0.6260182857513428,
-0.0106717664... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
autoevaluate/autoeval-eval-phpthinh__examplei-all-929d48-1748861032 | autoevaluate | 2022-10-13T19:34:07Z | 17 | 0 | null | [
"autotrain",
"evaluation",
"region:us"
] | 2022-10-13T19:34:07Z | 2022-10-13T15:48:24.000Z | 2022-10-13T15:48:24 | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- phpthinh/examplei
eval_info:
task: text_zero_shot_classification
model: bigscience/bloom-7b1
metrics: ['f1']
dataset_name: phpthinh/examplei
dataset_config: all
dataset_split: test
col_mapping:
text: text
classes: classes
target: target
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: bigscience/bloom-7b1
* Dataset: phpthinh/examplei
* Config: all
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@phpthinh](https://huggingface.co/phpthinh) for evaluating this model. | [
-0.2484564632177353,
-0.349831223487854,
0.5092028379440308,
0.15676434338092804,
-0.11482428014278412,
-0.11899495869874954,
-0.002425889251753688,
-0.4377375841140747,
0.05566902831196785,
0.3438701629638672,
-0.9659997820854187,
-0.32683995366096497,
-0.6868129372596741,
0.0208363384008... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
autoevaluate/autoeval-eval-phpthinh__examplei-mismatch-1389aa-1748961035 | autoevaluate | 2022-10-13T15:53:05Z | 17 | 0 | null | [
"autotrain",
"evaluation",
"region:us"
] | 2022-10-13T15:53:05Z | 2022-10-13T15:48:34.000Z | 2022-10-13T15:48:34 | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- phpthinh/examplei
eval_info:
task: text_zero_shot_classification
model: bigscience/bloom-1b1
metrics: ['f1']
dataset_name: phpthinh/examplei
dataset_config: mismatch
dataset_split: test
col_mapping:
text: text
classes: classes
target: target
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: bigscience/bloom-1b1
* Dataset: phpthinh/examplei
* Config: mismatch
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@phpthinh](https://huggingface.co/phpthinh) for evaluating this model. | [
-0.228274405002594,
-0.34975874423980713,
0.46491801738739014,
0.19825966656208038,
-0.05502980202436447,
-0.11917959898710251,
0.055974576622247696,
-0.4125039279460907,
0.045401301234960556,
0.32959455251693726,
-1.018726110458374,
-0.2907922863960266,
-0.6646762490272522,
-0.00828501489... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
ratishsp/newshead | ratishsp | 2022-10-14T07:42:08Z | 17 | 0 | null | [
"license:mit",
"region:us"
] | 2022-10-14T07:42:08Z | 2022-10-14T06:05:56.000Z | 2022-10-14T06:05:56 | ---
license: mit
---
| [
-0.12853392958641052,
-0.18616779148578644,
0.6529127955436707,
0.49436280131340027,
-0.19319361448287964,
0.23607419431209564,
0.36072003841400146,
0.050563063472509384,
0.579365611076355,
0.7400140762329102,
-0.6508104205131531,
-0.23783954977989197,
-0.7102249264717102,
-0.0478260256350... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
AndyChiang/dgen | AndyChiang | 2022-10-14T14:19:16Z | 17 | 0 | null | [
"task_categories:fill-mask",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"language:en",
"license:mit",
"cloze",
"sciq",
"mcql",
"ai2 science questions",
"region:us"
] | 2022-10-14T14:19:16Z | 2022-10-14T12:56:15.000Z | 2022-10-14T12:56:15 | ---
pretty_name: dgen
multilinguality:
- monolingual
language:
- en
license:
- mit
size_categories:
- 1K<n<10K
tags:
- cloze
- sciq
- mcql
- ai2 science questions
task_categories:
- fill-mask
---
# dgen
**DGen** is a cloze-question dataset covering multiple domains, including science, vocabulary, common sense, and trivia. It is compiled from a wide variety of datasets, including SciQ, MCQL, AI2 Science Questions, etc. Details of the DGen dataset are shown below.
| DGen dataset | Train | Valid | Test | Total |
| ----------------------- | ----- | ----- | ---- | ----- |
| **Number of questions** | 2321 | 300 | 259 | 2880 |
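A minimal loading sketch, assuming the repo ships data files that the `datasets` library can resolve (the actual split keys may differ from the table's capitalized names):
```python
from datasets import load_dataset

dgen = load_dataset("AndyChiang/dgen")
print({split: len(dgen[split]) for split in dgen})
```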
Source: https://github.com/DRSY/DGen | [
-0.8497971892356873,
-0.8358722925186157,
0.4259520173072815,
0.07888099551200867,
-0.2590773403644562,
0.2244681566953659,
0.021182652562856674,
-0.038214944303035736,
0.05795878916978836,
0.28923824429512024,
-0.9183307886123657,
-0.8882948160171509,
-0.489296555519104,
0.357117950916290... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
anisub/movie-poster-generator-demo | anisub | 2022-10-18T19:09:05Z | 17 | 0 | null | [
"region:us"
] | 2022-10-18T19:09:05Z | 2022-10-14T15:41:12.000Z | 2022-10-14T15:41:12 | Entry not found | [
-0.32276472449302673,
-0.22568407654762268,
0.8622258901596069,
0.4346148371696472,
-0.5282984972000122,
0.7012965679168701,
0.7915717363357544,
0.07618629932403564,
0.7746022939682007,
0.2563222646713257,
-0.785281777381897,
-0.22573848068714142,
-0.9104482531547546,
0.5715669393539429,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
svnfs/depth-of-field | svnfs | 2022-11-13T23:33:39Z | 17 | 0 | null | [
"task_categories:image-classification",
"task_categories:image-segmentation",
"annotations_creators:Stavros Niafas",
"license:apache-2.0",
"region:us"
] | 2022-11-13T23:33:39Z | 2022-10-15T13:57:29.000Z | 2022-10-15T13:57:29 | ---
license: apache-2.0
annotations_creators:
- Stavros Niafas
sample_number:
- 1200
class_number:
- 2
image_size:
- (200,300,3)
source_dataset:
- unsplash
task_categories:
- image-classification
- image-segmentation
dataset_info:
- config_name: depth-of-field
features:
- name: image
dtype: string
- name: class
dtype:
class_label:
names:
0: bokeh
1: no-bokeh
- config_name: default
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
0: '0'
1: '1'
splits:
- name: train
num_bytes: 192150
num_examples: 1200
download_size: 38792692
dataset_size: 192150
---
## Dataset Summary
The Depth-of-Field (DoF) dataset comprises 1200 annotated images, binary-annotated as with (0) or without (1) a bokeh effect, i.e., shallow or deep depth of field. It is forked from the [Unsplash 25K](https://github.com/unsplash/datasets) dataset.
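A minimal loading sketch, assuming the default config described in the card metadata (an `image` feature plus a binary `label`, with 0 = bokeh and 1 = no-bokeh per the class names above):
```python
from datasets import load_dataset

dof = load_dataset("svnfs/depth-of-field", split="train")

sample = dof[0]
print(sample["image"])  # a (200, 300, 3) image per the card metadata
print(sample["label"])  # 0 = bokeh, 1 = no-bokeh
```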
## Dataset Description
- **Repository:** [https://github.com/sniafas/photography-style-analysis](https://github.com/sniafas/photography-style-analysis)
- **Paper:** [Photography Style Analysis using Machine Learning](https://www.researchgate.net/publication/355917312_Photography_Style_Analysis_using_Machine_Learning)
### Citation Information
```
@article{sniafas2021,
title={DoF: An image dataset for depth of field classification},
author={Niafas, Stavros},
doi= {10.13140/RG.2.2.29880.62722},
url= {https://www.researchgate.net/publication/364356051_DoF_depth_of_field_datase},
year={2021}
}
```
Note that each DoF dataset has its own citation. Please see the source to
get the correct citation for each contained dataset. | [
-0.6206698417663574,
-0.6592693328857422,
0.3847881257534027,
0.22449299693107605,
-0.366958349943161,
-0.0007377744186669588,
0.235091432929039,
-0.5326377153396606,
0.06083410605788231,
0.48076552152633667,
-0.8007912039756775,
-0.8847912549972534,
-0.6495502591133118,
0.2237941771745681... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
ghoumrassi/clothes_sample | ghoumrassi | 2022-10-15T18:07:22Z | 17 | 3 | null | [
"region:us"
] | 2022-10-15T18:07:22Z | 2022-10-15T15:50:15.000Z | 2022-10-15T15:50:15 | ---
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 20078406.0
num_examples: 990
download_size: 0
dataset_size: 20078406.0
---
# Dataset Card for "clothes_sample"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.46835997700691223,
-0.18909083306789398,
0.05977121740579605,
0.17503038048744202,
-0.34510934352874756,
-0.12371086329221725,
0.31201472878456116,
-0.29041188955307007,
0.7637213468551636,
0.4980863630771637,
-1.0756694078445435,
-0.7995503544807434,
-0.5982512831687927,
-0.25050789117... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
pythonist/PubMedQA | pythonist | 2022-11-10T10:15:08Z | 17 | 0 | null | [
"region:us"
] | 2022-11-10T10:15:08Z | 2022-10-16T12:11:07.000Z | 2022-10-16T12:11:07 | ---
train-eval-index:
- config: pythonist--PubMedQA
task: question-answering
task_id: extractive_question_answering
splits:
eval_split: train
col_mapping:
id: answers.answer_start
---
| [
-0.12853392958641052,
-0.18616779148578644,
0.6529127955436707,
0.49436280131340027,
-0.19319361448287964,
0.23607419431209564,
0.36072003841400146,
0.050563063472509384,
0.579365611076355,
0.7400140762329102,
-0.6508104205131531,
-0.23783954977989197,
-0.7102249264717102,
-0.0478260256350... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Boglinger/stableDiffusion | Boglinger | 2022-12-04T10:46:10Z | 17 | 0 | null | [
"region:us"
] | 2022-12-04T10:46:10Z | 2022-10-18T11:15:07.000Z | 2022-10-18T11:15:07 | Entry not found | [
-0.3227647542953491,
-0.22568407654762268,
0.8622258901596069,
0.4346148371696472,
-0.5282984972000122,
0.7012965083122253,
0.7915717959403992,
0.07618629932403564,
0.7746022343635559,
0.2563222348690033,
-0.785281777381897,
-0.22573848068714142,
-0.9104482531547546,
0.5715669393539429,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
nbroad/small_arxiv_classification | nbroad | 2022-10-18T23:29:38Z | 17 | 1 | null | [
"region:us"
] | 2022-10-18T23:29:38Z | 2022-10-18T23:26:49.000Z | 2022-10-18T23:26:49 | Entry not found | [
-0.3227647542953491,
-0.22568407654762268,
0.8622258901596069,
0.4346148371696472,
-0.5282984972000122,
0.7012965083122253,
0.7915717959403992,
0.07618629932403564,
0.7746022343635559,
0.2563222348690033,
-0.785281777381897,
-0.22573848068714142,
-0.9104482531547546,
0.5715669393539429,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
takiholadi/kill-me-please-dataset | takiholadi | 2022-10-19T15:35:00Z | 17 | 2 | null | [
"task_categories:text-generation",
"task_categories:text-classification",
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:ru",
"stories",
"website",
"region:us"
] | 2022-10-19T15:35:00Z | 2022-10-19T14:18:28.000Z | 2022-10-19T14:18:28 | ---
annotations_creators:
- no-annotation
language_creators:
- found
language:
- ru
multilinguality:
- monolingual
pretty_name: Kill-Me-Please Dataset
size_categories:
- 10K<n<100K
source_datasets:
- original
tags:
- stories
- website
task_categories:
- text-generation
- text-classification
---
# Dataset Card for Kill-Me-Please Dataset
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
## Dataset Description
- **Repository:** [github pet project repo](https://github.com/takiholadi/generative-kill-me-please)
### Dataset Summary
It is a Russian-language dataset containing just over 30k unique stories written by users of https://killpls.me between March 2009 and October 2022. The resource was blocked by Roskomnadzor, so consider the text-generation task if you want more stories.
### Languages
ru-RU
## Dataset Structure
### Data Instances
Here is an example of instance:
```
{'text': 'По глупости удалил всю 10 летнюю базу. Восстановлению не подлежит. Мне конец. КМП!'
'tags': 'техника'
'votes': 2914
'url': 'https://killpls.me/story/616'
'datetime': '4 июля 2009, 23:20'}
```
### Data Fields
- `text`: a string containing the body of the story
- `tags`: a string containing comma-separated tags in a multi-label setup; the full set of tags (except for one empty-tagged record) is: `внешность`, `деньги`, `друзья`, `здоровье`, `отношения`, `работа`, `разное`, `родители`, `секс`, `семья`, `техника`, `учеба`
- `votes`: an integer sum of upvotes/downvotes
- `url`: a string containing the URL the story was web-scraped from
- `datetime`: a string containing the datetime when the story was written
### Data Splits
The dataset has two multi-label stratified splits: train and test.
| Dataset Split | Number of Instances in Split |
| ------------- | ------------------------------------------- |
| Train | 27,321 |
| Test | 2,772 |
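A minimal sketch of loading the splits and parsing the multi-label `tags` field (comma-separated, per the Data Fields section above):
```python
from datasets import load_dataset

kmp = load_dataset("takiholadi/kill-me-please-dataset")

ex = kmp["train"][0]
tags = [t.strip() for t in ex["tags"].split(",")]
print(ex["votes"], tags, ex["url"])
```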
| [
-0.2971348166465759,
-0.6876623630523682,
0.30152517557144165,
0.22973456978797913,
-0.6214577555656433,
0.3545992970466614,
-0.1203298270702362,
-0.1277076154947281,
0.4814538061618805,
0.2782432436943054,
-0.8384251594543457,
-1.0122203826904297,
-0.4007635712623596,
0.21955320239067078,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Tugay/clickbait-spoiling | Tugay | 2022-10-19T19:08:59Z | 17 | 0 | null | [
"region:us"
] | 2022-10-19T19:08:59Z | 2022-10-19T18:50:16.000Z | 2022-10-19T18:50:16 | Data for the SemEval 2023 clickbait spoiling task | [
-0.2534414529800415,
-0.4630570113658905,
0.6378317475318909,
0.6088246703147888,
-0.5318514108657837,
-0.42818400263786316,
0.358589768409729,
-0.30787667632102966,
0.5521170496940613,
1.0129393339157104,
-0.8310441970825195,
-0.3448500633239746,
-0.6092085838317871,
0.2502537667751312,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
michellejieli/friends_dataset | michellejieli | 2022-10-23T13:21:12Z | 17 | 1 | null | [
"language:en",
"distilroberta",
"sentiment",
"emotion",
"twitter",
"reddit",
"region:us"
] | 2022-10-23T13:21:12Z | 2022-10-22T20:37:03.000Z | 2022-10-22T20:37:03 | ---
language: "en"
tags:
- distilroberta
- sentiment
- emotion
- twitter
- reddit
---
# Dataset Card for friends_data
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
The Friends dataset consists of speech-based dialogue from the Friends TV sitcom. It is extracted from the [SocialNLP EmotionX 2019 challenge](https://sites.google.com/view/emotionx2019/datasets).
### Supported Tasks and Leaderboards
text-classification, sentiment-classification: The dataset is mainly used to predict a sentiment label given text input.
### Languages
The utterances are in English.
## Dataset Structure
### Data Instances
A data point containing text and the corresponding label.
An example from the friends_dataset looks like this:
{
'text': 'Well! Well! Well! Joey Tribbiani! So you came back huh?',
'label': 'surprise'
}
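A minimal loading sketch, assuming the repo's data files resolve to a default `train` split:
```python
from datasets import load_dataset

friends = load_dataset("michellejieli/friends_dataset", split="train")
print(friends[0]["text"], "->", friends[0]["label"])
```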
### Data Fields
The fields include a text column and a corresponding emotion label.
## Dataset Creation
### Curation Rationale
The dataset contains 1000 English-language dialogues originally in JSON files. The JSON file contains an array of dialogue objects. Each dialogue object is an array of line objects, and each line object contains speaker, utterance, emotion, and annotation strings.
{
"speaker": "Chandler",
"utterance": "My duties? All right.",
"emotion": "surprise",
"annotation": "2000030"
}
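A hedged sketch of flattening that structure into (utterance, emotion) rows; the file name is hypothetical:
```python
import json

with open("friends_train.json", encoding="utf-8") as f:
    dialogues = json.load(f)  # an array of dialogues, each an array of line objects

rows = [(line["utterance"], line["emotion"])
        for dialogue in dialogues
        for line in dialogue]
```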
Utterance and emotion were extracted from the original files into a CSV file. The dataset was cleaned to remove non-neutral labels. This dataset was created to be used in fine-tuning an emotion sentiment classifier that can be useful to teach individuals with autism how to read facial expressions. | [
-0.4921102821826935,
-0.5608368515968323,
0.24239154160022736,
0.3791166841983795,
-0.11263129115104675,
0.2028200626373291,
-0.30033546686172485,
-0.3473711311817169,
0.6476755142211914,
0.42130935192108154,
-0.9529741406440735,
-1.0033098459243774,
-0.5690205693244934,
0.3123444318771362... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
autoevaluate/autoeval-eval-jeffdshen__neqa2_8shot-jeffdshen__neqa2_8shot-959823-1853063405 | autoevaluate | 2022-10-24T00:35:42Z | 17 | 0 | null | [
"autotrain",
"evaluation",
"region:us"
] | 2022-10-24T00:35:42Z | 2022-10-23T21:00:15.000Z | 2022-10-23T21:00:15 | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- jeffdshen/neqa2_8shot
eval_info:
task: text_zero_shot_classification
model: inverse-scaling/opt-30b_eval
metrics: []
dataset_name: jeffdshen/neqa2_8shot
dataset_config: jeffdshen--neqa2_8shot
dataset_split: train
col_mapping:
text: prompt
classes: classes
target: answer_index
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: inverse-scaling/opt-30b_eval
* Dataset: jeffdshen/neqa2_8shot
* Config: jeffdshen--neqa2_8shot
* Split: train
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@jeffdshen](https://huggingface.co/jeffdshen) for evaluating this model. | [
-0.39342498779296875,
-0.3254022002220154,
0.3292590379714966,
-0.023393772542476654,
-0.015315333381295204,
-0.15021836757659912,
-0.006164696533232927,
-0.3398238718509674,
0.015685100108385086,
0.45594388246536255,
-0.9688515663146973,
-0.2314349263906479,
-0.6527381539344788,
-0.013326... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
autoevaluate/autoeval-eval-jeffdshen__redefine_math0_8shot-jeffdshen__redefine_mat-1c694b-1853263415 | autoevaluate | 2022-10-23T21:32:52Z | 17 | 0 | null | [
"autotrain",
"evaluation",
"region:us"
] | 2022-10-23T21:32:52Z | 2022-10-23T21:29:13.000Z | 2022-10-23T21:29:13 | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- jeffdshen/redefine_math0_8shot
eval_info:
task: text_zero_shot_classification
model: inverse-scaling/opt-125m_eval
metrics: []
dataset_name: jeffdshen/redefine_math0_8shot
dataset_config: jeffdshen--redefine_math0_8shot
dataset_split: train
col_mapping:
text: prompt
classes: classes
target: answer_index
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: inverse-scaling/opt-125m_eval
* Dataset: jeffdshen/redefine_math0_8shot
* Config: jeffdshen--redefine_math0_8shot
* Split: train
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@jeffdshen](https://huggingface.co/jeffdshen) for evaluating this model. | [
-0.35186511278152466,
-0.31135550141334534,
0.3487303555011749,
-0.05203960835933685,
-0.04765312373638153,
-0.17582622170448303,
-0.05472104996442795,
-0.3280197083950043,
0.07457190752029419,
0.38557374477386475,
-0.9223843812942505,
-0.2255771905183792,
-0.7402662038803101,
0.0282994825... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
autoevaluate/autoeval-eval-jeffdshen__redefine_math0_8shot-jeffdshen__redefine_mat-1c694b-1853263416 | autoevaluate | 2022-10-23T21:45:45Z | 17 | 0 | null | [
"autotrain",
"evaluation",
"region:us"
] | 2022-10-23T21:45:45Z | 2022-10-23T21:39:05.000Z | 2022-10-23T21:39:05 | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- jeffdshen/redefine_math0_8shot
eval_info:
task: text_zero_shot_classification
model: inverse-scaling/opt-350m_eval
metrics: []
dataset_name: jeffdshen/redefine_math0_8shot
dataset_config: jeffdshen--redefine_math0_8shot
dataset_split: train
col_mapping:
text: prompt
classes: classes
target: answer_index
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: inverse-scaling/opt-350m_eval
* Dataset: jeffdshen/redefine_math0_8shot
* Config: jeffdshen--redefine_math0_8shot
* Split: train
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@jeffdshen](https://huggingface.co/jeffdshen) for evaluating this model. | [
-0.3680446445941925,
-0.2877988815307617,
0.3633836805820465,
-0.055012788623571396,
-0.03404691442847252,
-0.17315512895584106,
-0.051488444209098816,
-0.3235826790332794,
0.05055670812726021,
0.3988751173019409,
-0.9230550527572632,
-0.2122790664434433,
-0.7216994166374207,
0.02377491071... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
autoevaluate/autoeval-eval-jeffdshen__redefine_math0_8shot-jeffdshen__redefine_mat-1c694b-1853263419 | autoevaluate | 2022-10-23T22:49:05Z | 17 | 0 | null | [
"autotrain",
"evaluation",
"region:us"
] | 2022-10-23T22:49:05Z | 2022-10-23T21:51:58.000Z | 2022-10-23T21:51:58 | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- jeffdshen/redefine_math0_8shot
eval_info:
task: text_zero_shot_classification
model: inverse-scaling/opt-6.7b_eval
metrics: []
dataset_name: jeffdshen/redefine_math0_8shot
dataset_config: jeffdshen--redefine_math0_8shot
dataset_split: train
col_mapping:
text: prompt
classes: classes
target: answer_index
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: inverse-scaling/opt-6.7b_eval
* Dataset: jeffdshen/redefine_math0_8shot
* Config: jeffdshen--redefine_math0_8shot
* Split: train
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@jeffdshen](https://huggingface.co/jeffdshen) for evaluating this model. | [
-0.34724757075309753,
-0.30743852257728577,
0.3248664140701294,
-0.050107765942811966,
-0.05015983432531357,
-0.1992424577474594,
-0.038518600165843964,
-0.3552335500717163,
0.07745670527219772,
0.40746670961380005,
-0.9351286888122559,
-0.17940732836723328,
-0.7260019183158875,
0.02394778... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Nkare/joni | Nkare | 2022-10-25T16:37:00Z | 17 | 0 | null | [
"region:us"
] | 2022-10-25T16:37:00Z | 2022-10-24T18:13:57.000Z | 2022-10-24T18:13:57 | Entry not found | [
-0.32276472449302673,
-0.22568407654762268,
0.8622258901596069,
0.4346148371696472,
-0.5282984972000122,
0.7012965679168701,
0.7915717363357544,
0.07618629932403564,
0.7746022939682007,
0.2563222646713257,
-0.785281777381897,
-0.22573848068714142,
-0.9104482531547546,
0.5715669393539429,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
autoevaluate/autoeval-eval-lener_br-lener_br-bd0c63-1886364291 | autoevaluate | 2022-10-26T04:40:21Z | 17 | 0 | null | [
"autotrain",
"evaluation",
"region:us"
] | 2022-10-26T04:40:21Z | 2022-10-26T04:39:24.000Z | 2022-10-26T04:39:24 | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- lener_br
eval_info:
task: entity_extraction
model: Luciano/xlm-roberta-base-finetuned-lener_br-finetuned-lener-br
metrics: []
dataset_name: lener_br
dataset_config: lener_br
dataset_split: test
col_mapping:
tokens: tokens
tags: ner_tags
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Token Classification
* Model: Luciano/xlm-roberta-base-finetuned-lener_br-finetuned-lener-br
* Dataset: lener_br
* Config: lener_br
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@Luciano](https://huggingface.co/Luciano) for evaluating this model. | [
-0.42135387659072876,
-0.24480068683624268,
0.17800083756446838,
0.06286046653985977,
-0.07464687526226044,
-0.1365196406841278,
0.05496469512581825,
-0.45578673481941223,
0.2309413105249405,
0.3735761046409607,
-0.8423155546188354,
-0.21027469635009766,
-0.6249731779098511,
-0.01600302569... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
edbeeching/sample_factory_videos | edbeeching | 2022-11-04T08:00:27Z | 17 | 1 | null | [
"license:mit",
"region:us"
] | 2022-11-04T08:00:27Z | 2022-10-26T13:55:56.000Z | 2022-10-26T13:55:56 | ---
license: mit
---
| [
-0.12853392958641052,
-0.18616779148578644,
0.6529127955436707,
0.49436280131340027,
-0.19319361448287964,
0.23607419431209564,
0.36072003841400146,
0.050563063472509384,
0.579365611076355,
0.7400140762329102,
-0.6508104205131531,
-0.23783954977989197,
-0.7102249264717102,
-0.0478260256350... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
nbtpj/BioNLP2021 | nbtpj | 2023-01-02T02:11:44Z | 17 | 0 | null | [
"region:us"
] | 2023-01-02T02:11:44Z | 2022-11-01T01:51:49.000Z | 2022-11-01T01:51:49 | # BioNLP2021 dataset (Task2)
___
Data fields:
* text (str): source text; Sections and Articles (train_mul subset only) are separated by <SAS>; single documents are separated by <DOC>; sentences are separated by <SS>
* summ_abs, summ_ext (str): abstractive and extractive summaries, whose sentences are separated by <SS>
* question (str): the question, whose sentences are separated by <SS>
* key (str): key in the original dataset (used for submitting) | [
-0.2052045464515686,
-0.5982050895690918,
0.30156826972961426,
0.42745038866996765,
-0.4433349668979645,
0.27139347791671753,
0.012563562951982021,
-0.42654484510421753,
0.36125096678733826,
0.6028347611427307,
-0.8846448659896851,
-0.30980297923088074,
-0.5008522868156433,
0.5254918336868... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
dhmeltzer/goodreads_train | dhmeltzer | 2022-11-02T04:16:00Z | 17 | 0 | null | [
"region:us"
] | 2022-11-02T04:16:00Z | 2022-11-02T04:14:58.000Z | 2022-11-02T04:14:58 | ---
dataset_info:
features:
- name: rating
dtype: int64
- name: review_text
dtype: string
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
splits:
- name: train
num_bytes: 1893978314
num_examples: 900000
download_size: 928071460
dataset_size: 1893978314
---
# Dataset Card for "goodreads_train"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.5192952752113342,
0.032754767686128616,
0.04393536597490311,
0.1712011992931366,
-0.3598676025867462,
-0.24363090097904205,
0.40306681394577026,
-0.2124292254447937,
0.8001459240913391,
0.41237232089042664,
-0.8877400755882263,
-0.49175480008125305,
-0.6276795268058777,
-0.2157000005245... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
lexlms/lex_files_preprocessed | lexlms | 2023-05-10T16:01:44Z | 17 | 3 | null | [
"task_categories:text-generation",
"task_categories:fill-mask",
"task_ids:language-modeling",
"task_ids:masked-language-modeling",
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1M<n<10M",
"source_datasets:extended",
"language:en",
... | 2023-05-10T16:01:44Z | 2022-11-07T17:27:54.000Z | 2022-11-07T17:27:54 | ---
annotations_creators:
- no-annotation
language_creators:
- found
language:
- en
license:
- cc-by-nc-sa-4.0
multilinguality:
- monolingual
size_categories:
- 1M<n<10M
source_datasets:
- extended
task_categories:
- text-generation
- fill-mask
task_ids:
- language-modeling
- masked-language-modeling
pretty_name: LexFiles
configs:
- eu_legislation
- eu_court_cases
- uk_legislation
- uk_court_cases
- us_legislation
- us_court_cases
- us_contracts
- canadian_legislation
- canadian_court_cases
- indian_court_cases
---
# Dataset Card for "LexFiles"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Dataset Specifications](#supported-tasks-and-leaderboards)
## Dataset Description
- **Homepage:** https://github.com/coastalcph/lexlms
- **Repository:** https://github.com/coastalcph/lexlms
- **Paper:** https://arxiv.org/abs/xxx
- **Point of Contact:** [Ilias Chalkidis](mailto:ilias.chalkidis@di.ku.dk)
### Dataset Summary
**Disclaimer: This is a pre-processed version of the LexFiles corpus (https://huggingface.co/datasets/lexlms/lexfiles), where documents are pre-split into chunks of 512 tokens.**
LeXFiles is a new, diverse English multinational legal corpus that we created, including 11 distinct sub-corpora that cover legislation and case law from six primarily English-speaking legal systems (EU, CoE, Canada, US, UK, India).
The corpus contains approx. 19 billion tokens. In comparison, the "Pile of Law" corpus released by Henderson et al. (2022) comprises 32 billion tokens in total, where the majority (26/30) of its sub-corpora come from the United States of America (USA); hence that corpus as a whole is biased towards the US legal system in general, and the federal or state jurisdiction in particular, to a significant extent.
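A minimal loading sketch: config names follow the `configs` list in the card metadata, streaming avoids downloading the multi-billion-token corpus up front, and the `train` split name is an assumption:
```python
from datasets import load_dataset

lex = load_dataset(
    "lexlms/lex_files_preprocessed",
    "uk_legislation",
    split="train",
    streaming=True,
)
print(next(iter(lex)))  # first pre-chunked (512-token) document
```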
### Dataset Specifications
| Corpus | Corpus alias | Documents | Tokens | Pct. | Sampl. (a=0.5) | Sampl. (a=0.2) |
|-----------------------------------|----------------------|-----------|--------|--------|----------------|----------------|
| EU Legislation | `eu-legislation` | 93.7K | 233.7M | 1.2% | 5.0% | 8.0% |
| EU Court Decisions | `eu-court-cases` | 29.8K | 178.5M | 0.9% | 4.3% | 7.6% |
| ECtHR Decisions | `ecthr-cases` | 12.5K | 78.5M | 0.4% | 2.9% | 6.5% |
| UK Legislation | `uk-legislation` | 52.5K | 143.6M | 0.7% | 3.9% | 7.3% |
| UK Court Decisions | `uk-court-cases` | 47K | 368.4M | 1.9% | 6.2% | 8.8% |
| Indian Court Decisions | `indian-court-cases` | 34.8K | 111.6M | 0.6% | 3.4% | 6.9% |
| Canadian Legislation | `canadian-legislation` | 6K | 33.5M | 0.2% | 1.9% | 5.5% |
| Canadian Court Decisions | `canadian-court-cases` | 11.3K | 33.1M | 0.2% | 1.8% | 5.4% |
| U.S. Court Decisions [1] | `court-listener` | 4.6M | 11.4B | 59.2% | 34.7% | 17.5% |
| U.S. Legislation | `us-legislation` | 518 | 1.4B | 7.4% | 12.3% | 11.5% |
| U.S. Contracts | `us-contracts` | 622K | 5.3B | 27.3% | 23.6% | 15.0% |
| Total | `lexlms/lexfiles` | 5.8M | 18.8B | 100% | 100% | 100% |
[1] We consider only U.S. Court Decisions from 1965 onwards (cf. post Civil Rights Act), as a hard threshold for cases relying on severely outdated and in many cases harmful law standards. The rest of the corpora include more recent documents.
[2] Sampling (Sampl.) ratios are computed following the exponential sampling introduced by Lample et al. (2019).
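For reference, the exponential sampling scheme of Lample et al. (2019) rescales each corpus proportion p_i to p_i^a / Σ_j p_j^a, so a smaller a flattens the distribution towards low-resource sub-corpora. A minimal sketch:
```python
# Sketch of exponential (temperature-based) sampling, as in Lample et al. (2019).
# With alpha < 1, small corpora are up-sampled relative to their raw token share.
def exponential_sampling(token_counts, alpha=0.5):
    total = sum(token_counts)
    proportions = [count / total for count in token_counts]
    weights = [p ** alpha for p in proportions]
    z = sum(weights)
    return [w / z for w in weights]
```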
Additional corpora not considered for pre-training, since they do not represent factual legal knowledge.
| Corpus | Corpus alias | Documents | Tokens |
|----------------------------------------|------------------------|-----------|--------|
| Legal web pages from C4 | `legal-c4` | 284K | 340M |
### Citation
[*Ilias Chalkidis\*, Nicolas Garneau\*, Catalina E.C. Goanta, Daniel Martin Katz, and Anders Søgaard.*
*LeXFiles and LegalLAMA: Facilitating English Multinational Legal Language Model Development.*
*2023. In the Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics. Toronto, Canada.*](https://aclanthology.org/xxx/)
```
@inproceedings{chalkidis-garneau-etal-2023-lexlms,
title = {{LeXFiles and LegalLAMA: Facilitating English Multinational Legal Language Model Development}},
author = "Chalkidis*, Ilias and
Garneau*, Nicolas and
Goanta, Catalina and
Katz, Daniel Martin and
Søgaard, Anders",
booktitle = "Proceedings of the 61h Annual Meeting of the Association for Computational Linguistics",
month = june,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/xxx",
}
``` | [
-0.38116520643234253,
-0.19511601328849792,
0.5826772451400757,
0.09800706803798676,
-0.4101264178752899,
0.23528316617012024,
-0.1719287931919098,
-0.371135950088501,
0.39792051911354065,
0.4412245750427246,
-0.32732078433036804,
-1.0059083700180054,
-0.6317269206047058,
0.041466653347015... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
nielsr/image-segmentation-toy-data | nielsr | 2022-11-08T15:08:25Z | 17 | 0 | null | [
"region:us"
] | 2022-11-08T15:08:25Z | 2022-11-08T14:55:04.000Z | 2022-11-08T14:55:04 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622263669967651,
0.43461522459983826,
-0.52829909324646,
0.7012971639633179,
0.7915719747543335,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104475975036621,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
zhangxinran/lolita-dress-ENG | zhangxinran | 2022-11-12T00:43:03Z | 17 | 1 | null | [
"region:us"
] | 2022-11-12T00:43:03Z | 2022-11-12T00:24:35.000Z | 2022-11-12T00:24:35 | ---
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 533036535.0
num_examples: 744
download_size: 530749245
dataset_size: 533036535.0
---
# Dataset Card for "lolita-dress-ENG"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.5170811414718628,
-0.4760371744632721,
0.13745592534542084,
0.5061501264572144,
-0.24814385175704956,
-0.1845509558916092,
0.31560078263282776,
-0.36681443452835083,
0.975427508354187,
0.6606783270835876,
-0.83579021692276,
-0.9544935822486877,
-0.5863108038902283,
-0.30789273977279663,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
bigbio/meddocan | bigbio | 2022-12-22T15:45:24Z | 17 | 1 | null | [
"multilinguality:monolingual",
"language:es",
"license:cc-by-4.0",
"region:us"
] | 2022-12-22T15:45:24Z | 2022-11-13T22:09:29.000Z | 2022-11-13T22:09:29 |
---
language:
- es
bigbio_language:
- Spanish
license: cc-by-4.0
multilinguality: monolingual
bigbio_license_shortname: CC_BY_4p0
pretty_name: MEDDOCAN
homepage: https://temu.bsc.es/meddocan/
bigbio_pubmed: False
bigbio_public: True
bigbio_tasks:
- NAMED_ENTITY_RECOGNITION
---
# Dataset Card for MEDDOCAN
## Dataset Description
- **Homepage:** https://temu.bsc.es/meddocan/
- **Pubmed:** False
- **Public:** True
- **Tasks:** NER
MEDDOCAN: Medical Document Anonymization Track
This dataset is designed for the MEDDOCAN task, sponsored by Plan de Impulso de las Tecnologías del Lenguaje.
It is a manually classified collection of 1,000 clinical case reports derived from the Spanish Clinical Case Corpus (SPACCC), enriched with PHI expressions.
The annotation of the entire set of entity mentions was carried out by expert annotators and includes 29 entity types relevant for the anonymization of medical documents. 22 of these annotation types are actually present in the corpus: TERRITORIO, FECHAS, EDAD_SUJETO_ASISTENCIA, NOMBRE_SUJETO_ASISTENCIA, NOMBRE_PERSONAL_SANITARIO, SEXO_SUJETO_ASISTENCIA, CALLE, PAIS, ID_SUJETO_ASISTENCIA, CORREO, ID_TITULACION_PERSONAL_SANITARIO, ID_ASEGURAMIENTO, HOSPITAL, FAMILIARES_SUJETO_ASISTENCIA, INSTITUCION, ID_CONTACTO_ASISTENCIAL, NUMERO_TELEFONO, PROFESION, NUMERO_FAX, OTROS_SUJETO_ASISTENCIA, CENTRO_SALUD, ID_EMPLEO_PERSONAL_SANITARIO
For further information, please visit https://temu.bsc.es/meddocan/ or send an email to encargo-pln-life@bsc.es
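A minimal loading sketch, assuming the BigBIO suite's usual `<dataset>_bigbio_kb` config naming and KB-schema field names:
```python
from datasets import load_dataset
# Config and field names are assumptions based on the BigBIO KB schema.
ds = load_dataset("bigbio/meddocan", name="meddocan_bigbio_kb")
example = ds["train"][0]
for entity in example["entities"]:
    print(entity["type"], entity["text"], entity["offsets"])
```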
## Citation Information
```
@inproceedings{marimon2019automatic,
title={Automatic De-identification of Medical Texts in Spanish: the MEDDOCAN Track, Corpus, Guidelines, Methods and Evaluation of Results.},
author={Marimon, Montserrat and Gonzalez-Agirre, Aitor and Intxaurrondo, Ander and Rodriguez, Heidy and Martin, Jose Lopez and Villegas, Marta and Krallinger, Martin},
booktitle={IberLEF@ SEPLN},
pages={618--638},
year={2019}
}
```
| [
-0.2734445035457611,
-0.5499144792556763,
0.509647011756897,
0.313149094581604,
-0.40589889883995056,
0.18469423055648804,
-0.0691872090101242,
-0.5587357878684998,
0.5446596145629883,
0.7493284940719604,
-0.47344306111335754,
-1.0782511234283447,
-0.8170539736747742,
0.6520822048187256,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
bigbio/ntcir_13_medweb | bigbio | 2022-12-22T15:46:09Z | 17 | 0 | null | [
"multilinguality:multilingual",
"language:en",
"language:zh",
"language:ja",
"license:cc-by-4.0",
"region:us"
] | 2022-12-22T15:46:09Z | 2022-11-13T22:11:06.000Z | 2022-11-13T22:11:06 |
---
language:
- en
- zh
- ja
bigbio_language:
- English
- Chinese
- Japanese
license: cc-by-4.0
multilinguality: multilingual
bigbio_license_shortname: CC_BY_4p0
pretty_name: NTCIR-13 MedWeb
homepage: http://research.nii.ac.jp/ntcir/permission/ntcir-13/perm-en-MedWeb.html
bigbio_pubmed: False
bigbio_public: False
bigbio_tasks:
- TRANSLATION
- TEXT_CLASSIFICATION
---
# Dataset Card for NTCIR-13 MedWeb
## Dataset Description
- **Homepage:** http://research.nii.ac.jp/ntcir/permission/ntcir-13/perm-en-MedWeb.html
- **Pubmed:** False
- **Public:** False
- **Tasks:** TRANSL,TXTCLASS
The NTCIR-13 MedWeb (Medical Natural Language Processing for Web Documents) task requires a multi-label classification in which labels for eight diseases/symptoms must be assigned to each tweet. Given pseudo-tweets, the output is a Positive (p) or Negative (n) label for each of the eight diseases/symptoms. The achievements of this task can almost be directly applied to a fundamental engine for actual applications.
This task provides pseudo-Twitter messages in a cross-language and multi-label corpus, covering three languages (Japanese, English, and Chinese) and annotated with eight labels: influenza, diarrhea/stomachache, hay fever, cough/sore throat, headache, fever, runny nose, and cold.
For more information, see:
http://research.nii.ac.jp/ntcir/permission/ntcir-13/perm-en-MedWeb.html
As this dataset also provides a parallel corpus of pseudo-tweets for English, Japanese, and Chinese, it can also be used to train translation models between these three languages.
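A hypothetical sketch of turning the eight p/n labels of one pseudo-tweet into a multi-hot vector; the label field names below are illustrative, not taken from the release:
```python
# Hypothetical field names; the actual release may store labels differently.
LABELS = ["influenza", "diarrhea", "hay_fever", "cough",
          "headache", "fever", "runny_nose", "cold"]
def to_multi_hot(example):
    # "p" marks a positive mention of the disease/symptom, "n" a negative one.
    return [1 if example[label] == "p" else 0 for label in LABELS]
```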
## Citation Information
```
@article{wakamiya2017overview,
author = {Shoko Wakamiya and Mizuki Morita and Yoshinobu Kano and Tomoko Ohkuma and Eiji Aramaki},
title = {Overview of the NTCIR-13 MedWeb Task},
journal = {Proceedings of the 13th NTCIR Conference on Evaluation of Information Access Technologies (NTCIR-13)},
year = {2017},
url = {
http://research.nii.ac.jp/ntcir/workshop/OnlineProceedings13/pdf/ntcir/01-NTCIR13-OV-MEDWEB-WakamiyaS.pdf
},
}
```
| [
-0.13733407855033875,
-0.33442801237106323,
0.24486775696277618,
0.438631534576416,
-0.28310897946357727,
0.15812663733959198,
-0.1868978887796402,
-0.7122049927711487,
0.46637848019599915,
0.1627146154642105,
-0.5413659811019897,
-0.7529997825622559,
-0.7177658677101135,
0.576919436454773... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
bigbio/sciq | bigbio | 2022-12-22T15:46:48Z | 17 | 1 | null | [
"multilinguality:monolingual",
"language:en",
"license:cc-by-nc-3.0",
"region:us"
] | 2022-12-22T15:46:48Z | 2022-11-13T22:12:14.000Z | 2022-11-13T22:12:14 |
---
language:
- en
bigbio_language:
- English
license: cc-by-nc-3.0
multilinguality: monolingual
bigbio_license_shortname: CC_BY_NC_3p0
pretty_name: SciQ
homepage: https://allenai.org/data/sciq
bigbio_pubmed: False
bigbio_public: True
bigbio_tasks:
- QUESTION_ANSWERING
---
# Dataset Card for SciQ
## Dataset Description
- **Homepage:** https://allenai.org/data/sciq
- **Pubmed:** False
- **Public:** True
- **Tasks:** QA
The SciQ dataset contains 13,679 crowdsourced science exam questions about Physics, Chemistry and Biology, among others. The questions are in multiple-choice format with 4 answer options each. For most questions, an additional paragraph with supporting evidence for the correct answer is provided.
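A hedged sketch of assembling one question into a four-option prompt; the field names (`question`, `correct_answer`, `distractor1`–`distractor3`, `support`) follow the original AllenAI release and are an assumption for this mirror:
```python
# Field names are assumptions based on the original AllenAI SciQ release.
def format_example(ex):
    options = [ex["correct_answer"], ex["distractor1"],
               ex["distractor2"], ex["distractor3"]]
    lines = [ex["question"]] + [f"{chr(65 + i)}. {opt}" for i, opt in enumerate(options)]
    if ex.get("support"):  # supporting paragraph, present for most questions
        lines.append(f"Evidence: {ex['support']}")
    return "\n".join(lines)
```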
## Citation Information
```
@inproceedings{welbl-etal-2017-crowdsourcing,
title = "Crowdsourcing Multiple Choice Science Questions",
author = "Welbl, Johannes and
Liu, Nelson F. and
Gardner, Matt",
booktitle = "Proceedings of the 3rd Workshop on Noisy User-generated Text",
month = sep,
year = "2017",
address = "Copenhagen, Denmark",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/W17-4413",
doi = "10.18653/v1/W17-4413",
pages = "94--106",
}
```
| [
-0.1490931212902069,
-0.3548431098461151,
0.5177581310272217,
0.26336413621902466,
-0.14285078644752502,
-0.021244563162326813,
0.08211661875247955,
-0.19007058441638947,
0.2708456814289093,
0.35131874680519104,
-0.5094089508056641,
-0.401366651058197,
-0.3355874717235565,
0.56980711221694... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
bigbio/tmvar_v3 | bigbio | 2023-02-17T14:55:58Z | 17 | 1 | null | [
"multilinguality:monolingual",
"language:en",
"license:unknown",
"arxiv:2204.03637",
"region:us"
] | 2023-02-17T14:55:58Z | 2022-11-13T22:12:35.000Z | 2022-11-13T22:12:35 |
---
language:
- en
bigbio_language:
- English
license: unknown
multilinguality: monolingual
bigbio_license_shortname: UNKNOWN
pretty_name: tmVar v3
homepage: https://www.ncbi.nlm.nih.gov/research/bionlp/Tools/tmvar/
bigbio_pubmed: True
bigbio_public: True
bigbio_tasks:
- NAMED_ENTITY_RECOGNITION
- NAMED_ENTITY_DISAMBIGUATION
---
# Dataset Card for tmVar v3
## Dataset Description
- **Homepage:** https://www.ncbi.nlm.nih.gov/research/bionlp/Tools/tmvar/
- **Pubmed:** True
- **Public:** True
- **Tasks:** NER,NED
This dataset contains 500 PubMed articles manually annotated with mutation mentions of various kinds and dbSNP normalizations for each of them. In addition, it contains variant normalization options such as allele-specific identifiers from the ClinGen Allele Registry. It can be used for NER and NED tasks. This dataset does NOT have splits.
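A minimal sketch of inspecting mutation mentions and their normalizations, assuming the BigBIO suite's `tmvar_v3_bigbio_kb` config name and KB-schema fields:
```python
from datasets import load_dataset
# Config and field names are assumptions based on the BigBIO KB schema.
ds = load_dataset("bigbio/tmvar_v3", name="tmvar_v3_bigbio_kb", split="train")
for entity in ds[0]["entities"]:
    # `normalized` holds database links, e.g. dbSNP or ClinGen Allele Registry ids
    print(entity["type"], entity["text"], entity["normalized"])
```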
## Citation Information
```
@misc{https://doi.org/10.48550/arxiv.2204.03637,
title = {tmVar 3.0: an improved variant concept recognition and normalization tool},
author = {
Wei, Chih-Hsuan and Allot, Alexis and Riehle, Kevin and Milosavljevic,
Aleksandar and Lu, Zhiyong
},
year = 2022,
publisher = {arXiv},
doi = {10.48550/ARXIV.2204.03637},
url = {https://arxiv.org/abs/2204.03637},
copyright = {Creative Commons Attribution 4.0 International},
keywords = {
Computation and Language (cs.CL), FOS: Computer and information sciences,
FOS: Computer and information sciences
}
}
```
| [
-0.24242499470710754,
-0.46921995282173157,
0.30910128355026245,
0.036728955805301666,
-0.45541858673095703,
-0.011264016851782799,
-0.3077648878097534,
-0.2730877101421356,
0.1681852787733078,
0.5556016564369202,
-0.44901272654533386,
-0.8660101294517517,
-0.6759452819824219,
0.5542879700... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
research-backup/semeval2012_relational_similarity_v7 | research-backup | 2022-11-20T11:49:41Z | 17 | 0 | null | [
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"language:en",
"license:other",
"region:us"
] | 2022-11-20T11:49:41Z | 2022-11-20T11:42:11.000Z | 2022-11-20T11:42:11 | ---
language:
- en
license:
- other
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
pretty_name: SemEval2012 task 2 Relational Similarity
---
# Dataset Card for "relbert/semeval2012_relational_similarity_V7"
## Dataset Description
- **Repository:** [RelBERT](https://github.com/asahi417/relbert)
- **Paper:** [https://aclanthology.org/S12-1047/](https://aclanthology.org/S12-1047/)
- **Dataset:** SemEval2012: Relational Similarity
### Dataset Summary
***IMPORTANT***: This is the same dataset as [relbert/semeval2012_relational_similarity](https://huggingface.co/datasets/relbert/semeval2012_relational_similarity),
but with a different dataset construction.
Relational similarity dataset from [SemEval2012 task 2](https://aclanthology.org/S12-1047/), compiled to fine-tune [RelBERT](https://github.com/asahi417/relbert) model.
The dataset contains a list of positive and negative word pairs from 89 pre-defined relations.
The relation types are constructed on top of the following 10 parent relation types.
```shell
{
1: "Class Inclusion", # Hypernym
2: "Part-Whole", # Meronym, Substance Meronym
3: "Similar", # Synonym, Co-hypornym
4: "Contrast", # Antonym
5: "Attribute", # Attribute, Event
6: "Non Attribute",
7: "Case Relation",
8: "Cause-Purpose",
9: "Space-Time",
10: "Representation"
}
```
Each of the parent relations is further grouped into child relation types; their definitions can be found [here](https://drive.google.com/file/d/0BzcZKTSeYL8VenY0QkVpZVpxYnc/view?resourcekey=0-ZP-UARfJj39PcLroibHPHw).
## Dataset Structure
### Data Instances
An example of `train` looks as follows.
```
{
'relation_type': '8d',
'positives': [ [ "breathe", "live" ], [ "study", "learn" ], [ "speak", "communicate" ], ... ]
'negatives': [ [ "starving", "hungry" ], [ "clean", "bathe" ], [ "hungry", "starving" ], ... ]
}
```
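A minimal sketch of flattening one relation record into labeled pairs for contrastive fine-tuning; the dataset id is taken from this card and the fields follow the instance shown above:
```python
from datasets import load_dataset
# Minimal sketch; field names follow the data instance shown above.
ds = load_dataset("research-backup/semeval2012_relational_similarity_v7", split="train")
record = ds[0]
pairs = ([(head, tail, 1) for head, tail in record["positives"]]
         + [(head, tail, 0) for head, tail in record["negatives"]])
```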
### Data Splits
| name |train|validation|
|---------|----:|---------:|
|semeval2012_relational_similarity| 89 | 89|
### Number of Positive/Negative Word-pairs in each Split
| | positives | negatives |
|:------------------------------------------|------------:|------------:|
| ('1', 'parent', 'train') | 110 | 680 |
| ('10', 'parent', 'train') | 60 | 730 |
| ('10a', 'child', 'train') | 10 | 1655 |
| ('10a', 'child_prototypical', 'train') | 246 | 2438 |
| ('10b', 'child', 'train') | 10 | 1656 |
| ('10b', 'child_prototypical', 'train') | 234 | 2027 |
| ('10c', 'child', 'train') | 10 | 1658 |
| ('10c', 'child_prototypical', 'train') | 210 | 2030 |
| ('10d', 'child', 'train') | 10 | 1659 |
| ('10d', 'child_prototypical', 'train') | 198 | 1766 |
| ('10e', 'child', 'train') | 10 | 1661 |
| ('10e', 'child_prototypical', 'train') | 174 | 1118 |
| ('10f', 'child', 'train') | 10 | 1659 |
| ('10f', 'child_prototypical', 'train') | 198 | 1766 |
| ('1a', 'child', 'train') | 10 | 1655 |
| ('1a', 'child_prototypical', 'train') | 246 | 2192 |
| ('1b', 'child', 'train') | 10 | 1655 |
| ('1b', 'child_prototypical', 'train') | 246 | 2192 |
| ('1c', 'child', 'train') | 10 | 1658 |
| ('1c', 'child_prototypical', 'train') | 210 | 2030 |
| ('1d', 'child', 'train') | 10 | 1653 |
| ('1d', 'child_prototypical', 'train') | 270 | 2540 |
| ('1e', 'child', 'train') | 10 | 1661 |
| ('1e', 'child_prototypical', 'train') | 174 | 1031 |
| ('2', 'parent', 'train') | 100 | 690 |
| ('2a', 'child', 'train') | 10 | 1654 |
| ('2a', 'child_prototypical', 'train') | 258 | 2621 |
| ('2b', 'child', 'train') | 10 | 1658 |
| ('2b', 'child_prototypical', 'train') | 210 | 1610 |
| ('2c', 'child', 'train') | 10 | 1656 |
| ('2c', 'child_prototypical', 'train') | 234 | 2144 |
| ('2d', 'child', 'train') | 10 | 1659 |
| ('2d', 'child_prototypical', 'train') | 198 | 1667 |
| ('2e', 'child', 'train') | 10 | 1658 |
| ('2e', 'child_prototypical', 'train') | 210 | 1925 |
| ('2f', 'child', 'train') | 10 | 1658 |
| ('2f', 'child_prototypical', 'train') | 210 | 2240 |
| ('2g', 'child', 'train') | 10 | 1653 |
| ('2g', 'child_prototypical', 'train') | 270 | 2405 |
| ('2h', 'child', 'train') | 10 | 1658 |
| ('2h', 'child_prototypical', 'train') | 210 | 1925 |
| ('2i', 'child', 'train') | 10 | 1660 |
| ('2i', 'child_prototypical', 'train') | 186 | 1706 |
| ('2j', 'child', 'train') | 10 | 1659 |
| ('2j', 'child_prototypical', 'train') | 198 | 1964 |
| ('3', 'parent', 'train') | 80 | 710 |
| ('3a', 'child', 'train') | 10 | 1658 |
| ('3a', 'child_prototypical', 'train') | 210 | 1925 |
| ('3b', 'child', 'train') | 10 | 1658 |
| ('3b', 'child_prototypical', 'train') | 210 | 2240 |
| ('3c', 'child', 'train') | 10 | 1657 |
| ('3c', 'child_prototypical', 'train') | 222 | 1979 |
| ('3d', 'child', 'train') | 10 | 1655 |
| ('3d', 'child_prototypical', 'train') | 246 | 2315 |
| ('3e', 'child', 'train') | 10 | 1664 |
| ('3e', 'child_prototypical', 'train') | 138 | 1268 |
| ('3f', 'child', 'train') | 10 | 1658 |
| ('3f', 'child_prototypical', 'train') | 210 | 2345 |
| ('3g', 'child', 'train') | 10 | 1663 |
| ('3g', 'child_prototypical', 'train') | 150 | 1340 |
| ('3h', 'child', 'train') | 10 | 1659 |
| ('3h', 'child_prototypical', 'train') | 198 | 1964 |
| ('4', 'parent', 'train') | 80 | 710 |
| ('4a', 'child', 'train') | 10 | 1658 |
| ('4a', 'child_prototypical', 'train') | 210 | 2240 |
| ('4b', 'child', 'train') | 10 | 1662 |
| ('4b', 'child_prototypical', 'train') | 162 | 1163 |
| ('4c', 'child', 'train') | 10 | 1657 |
| ('4c', 'child_prototypical', 'train') | 222 | 2201 |
| ('4d', 'child', 'train') | 10 | 1665 |
| ('4d', 'child_prototypical', 'train') | 126 | 749 |
| ('4e', 'child', 'train') | 10 | 1657 |
| ('4e', 'child_prototypical', 'train') | 222 | 2423 |
| ('4f', 'child', 'train') | 10 | 1660 |
| ('4f', 'child_prototypical', 'train') | 186 | 1892 |
| ('4g', 'child', 'train') | 10 | 1654 |
| ('4g', 'child_prototypical', 'train') | 258 | 2492 |
| ('4h', 'child', 'train') | 10 | 1657 |
| ('4h', 'child_prototypical', 'train') | 222 | 2312 |
| ('5', 'parent', 'train') | 90 | 700 |
| ('5a', 'child', 'train') | 10 | 1655 |
| ('5a', 'child_prototypical', 'train') | 246 | 2315 |
| ('5b', 'child', 'train') | 10 | 1661 |
| ('5b', 'child_prototypical', 'train') | 174 | 1640 |
| ('5c', 'child', 'train') | 10 | 1658 |
| ('5c', 'child_prototypical', 'train') | 210 | 1925 |
| ('5d', 'child', 'train') | 10 | 1654 |
| ('5d', 'child_prototypical', 'train') | 258 | 2363 |
| ('5e', 'child', 'train') | 10 | 1661 |
| ('5e', 'child_prototypical', 'train') | 174 | 1640 |
| ('5f', 'child', 'train') | 10 | 1658 |
| ('5f', 'child_prototypical', 'train') | 210 | 2135 |
| ('5g', 'child', 'train') | 10 | 1660 |
| ('5g', 'child_prototypical', 'train') | 186 | 1892 |
| ('5h', 'child', 'train') | 10 | 1654 |
| ('5h', 'child_prototypical', 'train') | 258 | 2750 |
| ('5i', 'child', 'train') | 10 | 1655 |
| ('5i', 'child_prototypical', 'train') | 246 | 2561 |
| ('6', 'parent', 'train') | 80 | 710 |
| ('6a', 'child', 'train') | 10 | 1654 |
| ('6a', 'child_prototypical', 'train') | 258 | 2492 |
| ('6b', 'child', 'train') | 10 | 1658 |
| ('6b', 'child_prototypical', 'train') | 210 | 2135 |
| ('6c', 'child', 'train') | 10 | 1656 |
| ('6c', 'child_prototypical', 'train') | 234 | 2495 |
| ('6d', 'child', 'train') | 10 | 1659 |
| ('6d', 'child_prototypical', 'train') | 198 | 2261 |
| ('6e', 'child', 'train') | 10 | 1658 |
| ('6e', 'child_prototypical', 'train') | 210 | 2135 |
| ('6f', 'child', 'train') | 10 | 1657 |
| ('6f', 'child_prototypical', 'train') | 222 | 2090 |
| ('6g', 'child', 'train') | 10 | 1657 |
| ('6g', 'child_prototypical', 'train') | 222 | 1979 |
| ('6h', 'child', 'train') | 10 | 1654 |
| ('6h', 'child_prototypical', 'train') | 258 | 2621 |
| ('7', 'parent', 'train') | 80 | 710 |
| ('7a', 'child', 'train') | 10 | 1655 |
| ('7a', 'child_prototypical', 'train') | 246 | 2561 |
| ('7b', 'child', 'train') | 10 | 1662 |
| ('7b', 'child_prototypical', 'train') | 162 | 1082 |
| ('7c', 'child', 'train') | 10 | 1658 |
| ('7c', 'child_prototypical', 'train') | 210 | 1715 |
| ('7d', 'child', 'train') | 10 | 1655 |
| ('7d', 'child_prototypical', 'train') | 246 | 2561 |
| ('7e', 'child', 'train') | 10 | 1659 |
| ('7e', 'child_prototypical', 'train') | 198 | 1568 |
| ('7f', 'child', 'train') | 10 | 1657 |
| ('7f', 'child_prototypical', 'train') | 222 | 1757 |
| ('7g', 'child', 'train') | 10 | 1660 |
| ('7g', 'child_prototypical', 'train') | 186 | 1148 |
| ('7h', 'child', 'train') | 10 | 1655 |
| ('7h', 'child_prototypical', 'train') | 246 | 1946 |
| ('8', 'parent', 'train') | 80 | 710 |
| ('8a', 'child', 'train') | 10 | 1655 |
| ('8a', 'child_prototypical', 'train') | 246 | 2192 |
| ('8b', 'child', 'train') | 10 | 1662 |
| ('8b', 'child_prototypical', 'train') | 162 | 1487 |
| ('8c', 'child', 'train') | 10 | 1657 |
| ('8c', 'child_prototypical', 'train') | 222 | 1757 |
| ('8d', 'child', 'train') | 10 | 1656 |
| ('8d', 'child_prototypical', 'train') | 234 | 1910 |
| ('8e', 'child', 'train') | 10 | 1658 |
| ('8e', 'child_prototypical', 'train') | 210 | 1610 |
| ('8f', 'child', 'train') | 10 | 1657 |
| ('8f', 'child_prototypical', 'train') | 222 | 1868 |
| ('8g', 'child', 'train') | 10 | 1662 |
| ('8g', 'child_prototypical', 'train') | 162 | 839 |
| ('8h', 'child', 'train') | 10 | 1655 |
| ('8h', 'child_prototypical', 'train') | 246 | 2315 |
| ('9', 'parent', 'train') | 90 | 700 |
| ('9a', 'child', 'train') | 10 | 1655 |
| ('9a', 'child_prototypical', 'train') | 246 | 1946 |
| ('9b', 'child', 'train') | 10 | 1657 |
| ('9b', 'child_prototypical', 'train') | 222 | 2090 |
| ('9c', 'child', 'train') | 10 | 1662 |
| ('9c', 'child_prototypical', 'train') | 162 | 596 |
| ('9d', 'child', 'train') | 10 | 1660 |
| ('9d', 'child_prototypical', 'train') | 186 | 1985 |
| ('9e', 'child', 'train') | 10 | 1661 |
| ('9e', 'child_prototypical', 'train') | 174 | 1901 |
| ('9f', 'child', 'train') | 10 | 1659 |
| ('9f', 'child_prototypical', 'train') | 198 | 1766 |
| ('9g', 'child', 'train') | 10 | 1655 |
| ('9g', 'child_prototypical', 'train') | 246 | 2069 |
| ('9h', 'child', 'train') | 10 | 1656 |
| ('9h', 'child_prototypical', 'train') | 234 | 2261 |
| ('9i', 'child', 'train') | 10 | 1660 |
| ('9i', 'child_prototypical', 'train') | 186 | 1613 |
| ('AtLocation', 'N/A', 'validation') | 960 | 4646 |
| ('CapableOf', 'N/A', 'validation') | 536 | 4734 |
| ('Causes', 'N/A', 'validation') | 194 | 4738 |
| ('CausesDesire', 'N/A', 'validation') | 40 | 4730 |
| ('CreatedBy', 'N/A', 'validation') | 4 | 3554 |
| ('DefinedAs', 'N/A', 'validation') | 4 | 1182 |
| ('Desires', 'N/A', 'validation') | 56 | 4732 |
| ('HasA', 'N/A', 'validation') | 168 | 4772 |
| ('HasFirstSubevent', 'N/A', 'validation') | 4 | 3554 |
| ('HasLastSubevent', 'N/A', 'validation') | 10 | 4732 |
| ('HasPrerequisite', 'N/A', 'validation') | 450 | 4744 |
| ('HasProperty', 'N/A', 'validation') | 266 | 4766 |
| ('HasSubevent', 'N/A', 'validation') | 330 | 4768 |
| ('IsA', 'N/A', 'validation') | 816 | 4688 |
| ('MadeOf', 'N/A', 'validation') | 48 | 4726 |
| ('MotivatedByGoal', 'N/A', 'validation') | 50 | 4736 |
| ('PartOf', 'N/A', 'validation') | 82 | 4742 |
| ('ReceivesAction', 'N/A', 'validation') | 52 | 4726 |
| ('SymbolOf', 'N/A', 'validation') | 4 | 1184 |
| ('UsedFor', 'N/A', 'validation') | 660 | 4760 |
### Citation Information
```
@inproceedings{jurgens-etal-2012-semeval,
title = "{S}em{E}val-2012 Task 2: Measuring Degrees of Relational Similarity",
author = "Jurgens, David and
Mohammad, Saif and
Turney, Peter and
Holyoak, Keith",
booktitle = "*{SEM} 2012: The First Joint Conference on Lexical and Computational Semantics {--} Volume 1: Proceedings of the main conference and the shared task, and Volume 2: Proceedings of the Sixth International Workshop on Semantic Evaluation ({S}em{E}val 2012)",
month = "7-8 " # jun,
year = "2012",
address = "Montr{\'e}al, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/S12-1047",
pages = "356--364",
}
``` | [
-0.48818808794021606,
-0.2162976711988449,
0.12537243962287903,
0.7185695171356201,
-0.16870473325252533,
-0.09107744693756104,
0.21694207191467285,
-0.28836190700531006,
0.6592179536819458,
0.166190505027771,
-0.8185762166976929,
-0.7488557696342468,
-0.7682684659957886,
0.458125680685043... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Tomaszek12/Sebek | Tomaszek12 | 2022-11-20T12:20:33Z | 17 | 0 | null | [
"region:us"
] | 2022-11-20T12:20:33Z | 2022-11-20T12:18:31.000Z | 2022-11-20T12:18:31 | Entry not found | [
-0.32276472449302673,
-0.22568407654762268,
0.8622258901596069,
0.4346148371696472,
-0.5282984972000122,
0.7012965679168701,
0.7915717363357544,
0.07618629932403564,
0.7746022939682007,
0.2563222646713257,
-0.785281777381897,
-0.22573848068714142,
-0.9104482531547546,
0.5715669393539429,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
autoevaluate/autoeval-staging-eval-project-6598b244-9392-4c7f-a1a9-2f5ffa8b50f8-3230 | autoevaluate | 2022-11-20T19:54:25Z | 17 | 0 | null | [
"autotrain",
"evaluation",
"region:us"
] | 2022-11-20T19:54:25Z | 2022-11-20T19:53:49.000Z | 2022-11-20T19:53:49 | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- glue
eval_info:
task: binary_classification
model: autoevaluate/binary-classification
metrics: ['matthews_correlation']
dataset_name: glue
dataset_config: sst2
dataset_split: validation
col_mapping:
text: sentence
target: label
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Binary Text Classification
* Model: autoevaluate/binary-classification
* Dataset: glue
* Config: sst2
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model. | [
-0.20361605286598206,
-0.33383142948150635,
0.2989133596420288,
0.17618133127689362,
-0.16354314982891083,
0.03615495190024376,
0.020895475521683693,
-0.39217695593833923,
0.12184618413448334,
0.3618122935295105,
-0.9186378717422485,
-0.21669870615005493,
-0.770520806312561,
-0.01348786149... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
mesolitica/translated-funpedia | mesolitica | 2022-11-21T03:29:02Z | 17 | 0 | null | [
"region:us"
] | 2022-11-21T03:29:02Z | 2022-11-21T03:28:27.000Z | 2022-11-21T03:28:27 | Entry not found | [
-0.32276472449302673,
-0.22568407654762268,
0.8622258901596069,
0.4346148371696472,
-0.5282984972000122,
0.7012965679168701,
0.7915717363357544,
0.07618629932403564,
0.7746022939682007,
0.2563222646713257,
-0.785281777381897,
-0.22573848068714142,
-0.9104482531547546,
0.5715669393539429,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
autoevaluate/autoeval-eval-futin__feed-top_en-246167-2175069950 | autoevaluate | 2022-11-21T05:06:31Z | 17 | 0 | null | [
"autotrain",
"evaluation",
"region:us"
] | 2022-11-21T05:06:31Z | 2022-11-21T04:36:23.000Z | 2022-11-21T04:36:23 | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- futin/feed
eval_info:
task: text_zero_shot_classification
model: bigscience/bloom-560m
metrics: []
dataset_name: futin/feed
dataset_config: top_en
dataset_split: test
col_mapping:
text: text
classes: classes
target: target
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: bigscience/bloom-560m
* Dataset: futin/feed
* Config: top_en
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@futin](https://huggingface.co/futin) for evaluating this model. | [
-0.22009021043777466,
-0.3456071615219116,
0.4595124423503876,
0.07217951118946075,
0.06695989519357681,
-0.1767612099647522,
-0.04301054775714874,
-0.43762657046318054,
0.03550883010029793,
0.3361496031284332,
-0.9327805638313293,
-0.2751677334308624,
-0.7346486449241638,
-0.0131801590323... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
autoevaluate/autoeval-eval-futin__feed-top_en-c0540d-2175569970 | autoevaluate | 2022-11-21T19:57:40Z | 17 | 0 | null | [
"autotrain",
"evaluation",
"region:us"
] | 2022-11-21T19:57:40Z | 2022-11-21T06:03:07.000Z | 2022-11-21T06:03:07 | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- futin/feed
eval_info:
task: text_zero_shot_classification
model: facebook/opt-30b
metrics: []
dataset_name: futin/feed
dataset_config: top_en
dataset_split: test
col_mapping:
text: text
classes: classes
target: target
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: facebook/opt-30b
* Dataset: futin/feed
* Config: top_en
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@futin](https://huggingface.co/futin) for evaluating this model. | [
-0.32898572087287903,
-0.46417295932769775,
0.31621411442756653,
0.060802049934864044,
0.05403139814734459,
-0.1034909188747406,
0.021400699391961098,
-0.4478943645954132,
0.12637652456760406,
0.37975919246673584,
-1.0317225456237793,
-0.24290066957473755,
-0.6624175906181335,
-0.037953857... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
autoevaluate/autoeval-eval-futin__feed-top_en-c0540d-2175569976 | autoevaluate | 2022-11-21T07:06:54Z | 17 | 0 | null | [
"autotrain",
"evaluation",
"region:us"
] | 2022-11-21T07:06:54Z | 2022-11-21T07:00:49.000Z | 2022-11-21T07:00:49 | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- futin/feed
eval_info:
task: text_zero_shot_classification
model: facebook/opt-125m
metrics: []
dataset_name: futin/feed
dataset_config: top_en
dataset_split: test
col_mapping:
text: text
classes: classes
target: target
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: facebook/opt-125m
* Dataset: futin/feed
* Config: top_en
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@futin](https://huggingface.co/futin) for evaluating this model. | [
-0.3194386661052704,
-0.4667455554008484,
0.32447490096092224,
0.04079205542802811,
0.014352676458656788,
-0.12786929309368134,
0.0008485601283609867,
-0.4530557692050934,
0.1469322144985199,
0.3741290271282196,
-1.0121642351150513,
-0.23736107349395752,
-0.6979309320449829,
-0.03130475804... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
DTU54DL/common-native | DTU54DL | 2022-11-30T05:41:32Z | 17 | 0 | acronym-identification | [
"task_categories:token-classification",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"license:mit",
"region:us"
] | 2022-11-30T05:41:32Z | 2022-11-29T13:46:08.000Z | 2022-11-29T13:46:08 | ---
annotations_creators:
- expert-generated
language:
- en
language_creators:
- found
license:
- mit
multilinguality:
- monolingual
paperswithcode_id: acronym-identification
pretty_name: Acronym Identification Dataset
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- token-classification
task_ids:
- token-classification-other-acronym-identification
train-eval-index:
- col_mapping:
labels: tags
tokens: tokens
config: default
splits:
eval_split: test
task: token-classification
task_id: entity_extraction
dataset_info:
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: sentence
dtype: string
- name: accent
dtype: string
splits:
- name: train
num_bytes: 419902426.3910719
num_examples: 10000
- name: test
num_bytes: 41430604.33704293
num_examples: 994
download_size: 440738761
dataset_size: 461333030.72811484
---
# Dataset Card for [Dataset Name]
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
[More Information Needed]
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset. | [
-0.47841677069664,
-0.5084842443466187,
0.14602938294410706,
0.278889000415802,
-0.21702472865581512,
0.24832050502300262,
-0.3366999328136444,
-0.3758932054042816,
0.6720380783081055,
0.6457639932632446,
-0.9167346358299255,
-1.2200127840042114,
-0.7551794052124023,
0.07273735105991364,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Tristan/olm-test-normal-dedup | Tristan | 2022-11-30T00:33:45Z | 17 | 0 | null | [
"region:us"
] | 2022-11-30T00:33:45Z | 2022-11-30T00:04:23.000Z | 2022-11-30T00:04:23 | ---
dataset_info:
features:
- name: text
dtype: string
- name: url
dtype: string
- name: crawl_timestamp
dtype: float64
splits:
- name: train
num_bytes: 211642596.0
num_examples: 40900
download_size: 128804894
dataset_size: 211642596.0
---
# Dataset Card for "olm-test-normal-dedup"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.7157204747200012,
-0.537764847278595,
0.07070962339639664,
-0.09485992044210434,
-0.18297386169433594,
-0.39509856700897217,
0.09583737701177597,
-0.05766260623931885,
0.5860602259635925,
0.7026835083961487,
-0.5498329997062683,
-0.8560361862182617,
-0.4715239405632019,
-0.1468596309423... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
kmyoo/cnn-dailymail-v1-tiny | kmyoo | 2022-12-02T14:00:12Z | 17 | 0 | null | [
"region:us"
] | 2022-12-02T14:00:12Z | 2022-12-02T13:59:35.000Z | 2022-12-02T13:59:35 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622264862060547,
0.43461528420448303,
-0.52829909324646,
0.7012971639633179,
0.7915720343589783,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104477167129517,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
adrienheymans/imdb-movie-genres | adrienheymans | 2022-12-02T17:49:10Z | 17 | 0 | null | [
"region:us"
] | 2022-12-02T17:49:10Z | 2022-12-02T17:44:56.000Z | 2022-12-02T17:44:56 | ---
dataset_info:
features:
- name: title
dtype: string
- name: text
dtype: string
- name: genre
dtype: string
- name: label
dtype: int64
splits:
- name: train
num_bytes: 35392128
num_examples: 54214
- name: test
num_bytes: 35393614
num_examples: 54200
download_size: 46358637
dataset_size: 70785742
---
# Dataset Card for "imdb-movie-genres"
IMDb (an acronym for Internet Movie Database) is an online database of information related to films, television programs, home videos, video games, and streaming content online – including cast, production crew and personal biographies, plot summaries, trivia, ratings, and fan and critical reviews. An additional fan feature, message boards, was abandoned in February 2017. Originally a fan-operated website, the database is now owned and operated by IMDb.com, Inc., a subsidiary of Amazon.
As of December 2020, IMDb has approximately 7.5 million titles (including episodes) and 10.4 million personalities in its database, as well as 83 million registered users.
IMDb began as a movie database on the Usenet group "rec.arts.movies" in 1990 and moved to the web in 1993.
## Provenance: [ftp://ftp.fu-berlin.de/pub/misc/movies/database/](ftp://ftp.fu-berlin.de/pub/misc/movies/database/)
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-1.1251980066299438,
-0.6029261946678162,
0.19587765634059906,
0.13669586181640625,
-0.49379590153694153,
0.27720576524734497,
0.23665419220924377,
0.024103745818138123,
0.6615870594978333,
0.5861393213272095,
-1.1486427783966064,
-0.5341231226921082,
-0.47651219367980957,
0.22897171974182... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
nbtpj/multi-context-long-answer-dataset | nbtpj | 2022-12-05T02:44:15Z | 17 | 4 | null | [
"region:us"
] | 2022-12-05T02:44:15Z | 2022-12-05T02:40:17.000Z | 2022-12-05T02:40:17 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622264862060547,
0.43461528420448303,
-0.52829909324646,
0.7012971639633179,
0.7915720343589783,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104477167129517,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
graphs-datasets/CIFAR10 | graphs-datasets | 2023-02-07T16:37:24Z | 17 | 1 | null | [
"task_categories:graph-ml",
"license:mit",
"arxiv:2003.00982",
"region:us"
] | 2023-02-07T16:37:24Z | 2022-12-08T09:59:00.000Z | 2022-12-08T09:59:00 | ---
license: mit
task_categories:
- graph-ml
---
# Dataset Card for CIFAR10
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [External Use](#external-use)
- [PyGeometric](#pygeometric)
- [Dataset Structure](#dataset-structure)
- [Data Properties](#data-properties)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **[Homepage](https://github.com/graphdeeplearning/benchmarking-gnns)**
- **Paper:**: (see citation)
### Dataset Summary
The `CIFAR10` dataset consists of 45,000 images in 10 classes, represented as graphs.
### Supported Tasks and Leaderboards
`CIFAR10` should be used for multiclass graph classification.
## External Use
### PyGeometric
To load in PyGeometric, do the following:
```python
from datasets import load_dataset
from torch_geometric.data import Data
from torch_geometric.loader import DataLoader
dataset_hf = load_dataset("graphs-datasets/CIFAR10")
# For the train set (replace by valid or test as needed).
# Each row is a dict of graph fields, so unpack it into the Data constructor;
# list-valued fields may still need converting to tensors (see the sketch below).
dataset_pg_list = [Data(**graph) for graph in dataset_hf["train"]]
dataset_pg = DataLoader(dataset_pg_list)
```
## Dataset Structure
### Data Properties
| property | value |
|---|---|
| #graphs | 45,000 |
| average #nodes | 117.6 |
| average #edges | 941.2 |
### Data Fields
Each row of a given file is a graph, with:
- `node_feat` (list: #nodes x #node-features): nodes
- `edge_index` (list: 2 x #edges): pairs of nodes constituting edges
- `edge_attr` (list: #edges x #edge-features): for the aforementioned edges, contains their features
- `y` (list: #labels): contains the number of labels available to predict
- `num_nodes` (int): number of nodes of the graph
- `pos` (list: 2 x #node): positional information of each node
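A hedged sketch of materializing one row into a PyGeometric `Data` object with proper tensors; field names come from the list above, while the dtype choices are assumptions:
```python
import torch
from torch_geometric.data import Data
def row_to_data(row):
    # Field names follow the list above; dtypes are assumptions.
    return Data(
        x=torch.tensor(row["node_feat"], dtype=torch.float),
        edge_index=torch.tensor(row["edge_index"], dtype=torch.long),
        edge_attr=torch.tensor(row["edge_attr"], dtype=torch.float),
        y=torch.tensor(row["y"], dtype=torch.long),
        pos=torch.tensor(row["pos"], dtype=torch.float),
    )
```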
### Data Splits
This data is split. It comes from the PyGeometric version of the dataset.
## Additional Information
### Licensing Information
The dataset has been released under MIT license.
### Citation Information
```
@article{DBLP:journals/corr/abs-2003-00982,
author = {Vijay Prakash Dwivedi and
Chaitanya K. Joshi and
Thomas Laurent and
Yoshua Bengio and
Xavier Bresson},
title = {Benchmarking Graph Neural Networks},
journal = {CoRR},
volume = {abs/2003.00982},
year = {2020},
url = {https://arxiv.org/abs/2003.00982},
eprinttype = {arXiv},
eprint = {2003.00982},
timestamp = {Sat, 23 Jan 2021 01:14:30 +0100},
biburl = {https://dblp.org/rec/journals/corr/abs-2003-00982.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
``` | [
-0.47402340173721313,
-0.42680999636650085,
0.03837963566184044,
0.11404665559530258,
-0.09572147578001022,
-0.13756722211837769,
-0.14215721189975739,
-0.4987863302230835,
0.4023367762565613,
0.07421721518039703,
-0.4072889983654022,
-0.6747075915336609,
-0.5171065926551819,
-0.0151833305... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Arch4ngel/untitled_goose_game | Arch4ngel | 2023-01-07T20:00:06Z | 17 | 0 | null | [
"region:us"
] | 2023-01-07T20:00:06Z | 2023-01-07T19:53:41.000Z | 2023-01-07T19:53:41 | ---
dataset_info:
features:
- name: image
dtype: image
splits:
- name: train
num_bytes: 1487961.0
num_examples: 15
download_size: 1461841
dataset_size: 1487961.0
---
# Dataset Card for "untitled_goose_game"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.36716219782829285,
-0.36156654357910156,
0.045299120247364044,
0.5328845977783203,
-0.11888778954744339,
-0.19162322580814362,
0.23734396696090698,
-0.2644733786582947,
0.8049226403236389,
0.5232880115509033,
-0.9995244145393372,
-0.8218182921409607,
-0.4432111084461212,
-0.213632330298... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
RamAnanth1/talkrl-podcast | RamAnanth1 | 2023-01-12T20:46:26Z | 17 | 0 | null | [
"task_categories:text-classification",
"task_categories:text-generation",
"task_categories:summarization",
"size_categories:n<1K",
"language:en",
"region:us"
] | 2023-01-12T20:46:26Z | 2023-01-10T23:09:01.000Z | 2023-01-10T23:09:01 | ---
dataset_info:
features:
- name: title
dtype: string
- name: summary
dtype: string
- name: link
dtype: string
- name: transcript
dtype: string
- name: segments
list:
- name: end
dtype: float64
- name: start
dtype: float64
- name: text
dtype: string
splits:
- name: train
num_bytes: 4845076
num_examples: 39
download_size: 2633561
dataset_size: 4845076
task_categories:
- text-classification
- text-generation
- summarization
language:
- en
size_categories:
- n<1K
pretty_name: TalkRL Podcast
---
# Dataset Card for "talkrl-podcast"
This dataset is sourced from the [TalkRL Podcast website](https://www.talkrl.com/) and contains English transcripts of wonderful TalkRL podcast episodes. The transcripts were generated using OpenAI's base Whisper model. | [
-0.1163327619433403,
-0.4494730830192566,
-0.13823503255844116,
0.267117440700531,
-0.28350818157196045,
0.054812368005514145,
-0.18076655268669128,
-0.5013034343719482,
0.4499063193798065,
0.38295844197273254,
-1.0547313690185547,
-0.697201132774353,
-0.24346476793289185,
-0.3688987493515... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
keremberke/pothole-segmentation | keremberke | 2023-01-15T18:38:49Z | 17 | 2 | null | [
"task_categories:image-segmentation",
"roboflow",
"roboflow2huggingface",
"Construction",
"Self Driving",
"Transportation",
"Damage Risk",
"region:us"
] | 2023-01-15T18:38:49Z | 2023-01-15T18:38:37.000Z | 2023-01-15T18:38:37 | ---
task_categories:
- image-segmentation
tags:
- roboflow
- roboflow2huggingface
- Construction
- Self Driving
- Transportation
- Damage Risk
---
<div align="center">
<img width="640" alt="keremberke/pothole-segmentation" src="https://huggingface.co/datasets/keremberke/pothole-segmentation/resolve/main/thumbnail.jpg">
</div>
### Dataset Labels
```
['pothole']
```
### Number of Images
```json
{'test': 5, 'train': 80, 'valid': 5}
```
### How to Use
- Install [datasets](https://pypi.org/project/datasets/):
```bash
pip install datasets
```
- Load the dataset:
```python
from datasets import load_dataset
ds = load_dataset("keremberke/pothole-segmentation", name="full")
example = ds['train'][0]
```
### Roboflow Dataset Page
[https://universe.roboflow.com/imacs-pothole-detection-wo8mu/pothole-detection-irkz9/dataset/4](https://universe.roboflow.com/imacs-pothole-detection-wo8mu/pothole-detection-irkz9/dataset/4?ref=roboflow2huggingface)
### Citation
```
@misc{ pothole-detection-irkz9_dataset,
title = { Pothole Detection Dataset },
type = { Open Source Dataset },
author = { IMACS Pothole Detection },
howpublished = { \\url{ https://universe.roboflow.com/imacs-pothole-detection-wo8mu/pothole-detection-irkz9 } },
url = { https://universe.roboflow.com/imacs-pothole-detection-wo8mu/pothole-detection-irkz9 },
journal = { Roboflow Universe },
publisher = { Roboflow },
year = { 2023 },
month = { jan },
note = { visited on 2023-01-15 },
}
```
### License
CC BY 4.0
### Dataset Summary
This dataset was exported via roboflow.com on January 15, 2023 at 6:38 PM GMT
Roboflow is an end-to-end computer vision platform that helps you
* collaborate with your team on computer vision projects
* collect & organize images
* understand and search unstructured image data
* annotate, and create datasets
* export, train, and deploy computer vision models
* use active learning to improve your dataset over time
For state of the art Computer Vision training notebooks you can use with this dataset,
visit https://github.com/roboflow/notebooks
To find over 100k other datasets and pre-trained models, visit https://universe.roboflow.com
The dataset includes 90 images.
Pothole are annotated in COCO format.
The following pre-processing was applied to each image:
No image augmentation techniques were applied.
| [
-0.4935385286808014,
-0.5537006855010986,
0.5939487814903259,
0.06918984651565552,
-0.43953776359558105,
-0.07259587943553925,
-0.04248494654893875,
-0.4219527244567871,
0.23847775161266327,
0.29346221685409546,
-0.531484842300415,
-0.8279646039009094,
-0.5751659870147705,
0.08056991547346... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
keremberke/indoor-scene-classification | keremberke | 2023-01-16T21:04:18Z | 17 | 0 | null | [
"task_categories:image-classification",
"roboflow",
"roboflow2huggingface",
"Retail",
"Pest Control",
"Benchmark",
"region:us"
] | 2023-01-16T21:04:18Z | 2023-01-16T20:56:17.000Z | 2023-01-16T20:56:17 | ---
task_categories:
- image-classification
tags:
- roboflow
- roboflow2huggingface
- Retail
- Pest Control
- Benchmark
---
<div align="center">
<img width="640" alt="keremberke/indoor-scene-classification" src="https://huggingface.co/datasets/keremberke/indoor-scene-classification/resolve/main/thumbnail.jpg">
</div>
### Dataset Labels
```
['meeting_room', 'cloister', 'stairscase', 'restaurant', 'hairsalon', 'children_room', 'dining_room', 'lobby', 'museum', 'laundromat', 'computerroom', 'grocerystore', 'hospitalroom', 'buffet', 'office', 'warehouse', 'garage', 'bookstore', 'florist', 'locker_room', 'inside_bus', 'subway', 'fastfood_restaurant', 'auditorium', 'studiomusic', 'airport_inside', 'pantry', 'restaurant_kitchen', 'casino', 'movietheater', 'kitchen', 'waitingroom', 'artstudio', 'toystore', 'kindergarden', 'trainstation', 'bedroom', 'mall', 'corridor', 'bar', 'classroom', 'shoeshop', 'dentaloffice', 'videostore', 'laboratorywet', 'tv_studio', 'church_inside', 'operating_room', 'jewelleryshop', 'bathroom', 'clothingstore', 'closet', 'winecellar', 'livingroom', 'nursery', 'gameroom', 'inside_subway', 'deli', 'bakery', 'library', 'prisoncell', 'gym', 'concert_hall', 'greenhouse', 'elevator', 'poolinside', 'bowling']
```
### Number of Images
```json
{'train': 10885, 'test': 1558, 'valid': 3128}
```
### How to Use
- Install [datasets](https://pypi.org/project/datasets/):
```bash
pip install datasets
```
- Load the dataset:
```python
from datasets import load_dataset
ds = load_dataset("keremberke/indoor-scene-classification", name="full")
example = ds['train'][0]
```
### Roboflow Dataset Page
[https://universe.roboflow.com/popular-benchmarks/mit-indoor-scene-recognition/dataset/5](https://universe.roboflow.com/popular-benchmarks/mit-indoor-scene-recognition/dataset/5?ref=roboflow2huggingface)
### Citation
```
```
### License
MIT
### Dataset Summary
This dataset was exported via roboflow.com on October 24, 2022 at 4:09 AM GMT
Roboflow is an end-to-end computer vision platform that helps you
* collaborate with your team on computer vision projects
* collect & organize images
* understand unstructured image data
* annotate, and create datasets
* export, train, and deploy computer vision models
* use active learning to improve your dataset over time
It includes 15571 images.
Indoor-scenes are annotated in folder format.
The following pre-processing was applied to each image:
* Auto-orientation of pixel data (with EXIF-orientation stripping)
* Resize to 416x416 (Stretch)
No image augmentation techniques were applied.
| [
-0.49430936574935913,
-0.2299012690782547,
0.10922157019376755,
-0.16037721931934357,
-0.04208289086818695,
-0.059826191514730453,
-0.010453283786773682,
-0.46457308530807495,
-0.0158503670245409,
0.0745692327618599,
-0.6070703268051147,
-0.9228799939155579,
-0.5181469917297363,
0.22608490... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Xieyiyiyi/ceshi0119 | Xieyiyiyi | 2023-01-28T02:48:32Z | 17 | 0 | superglue | [
"task_categories:text-classification",
"task_categories:token-classification",
"task_categories:question-answering",
"task_ids:natural-language-inference",
"task_ids:word-sense-disambiguation",
"task_ids:coreference-resolution",
"task_ids:extractive-qa",
"annotations_creators:expert-generated",
"lan... | 2023-01-28T02:48:32Z | 2023-01-17T10:08:24.000Z | 2023-01-17T10:08:24 | ---
annotations_creators:
- expert-generated
language_creators:
- other
language:
- en
license:
- unknown
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- extended|other
task_categories:
- text-classification
- token-classification
- question-answering
task_ids:
- natural-language-inference
- word-sense-disambiguation
- coreference-resolution
- extractive-qa
paperswithcode_id: superglue
pretty_name: SuperGLUE
tags:
- superglue
- NLU
- natural language understanding
dataset_info:
- config_name: boolq
features:
- name: question
dtype: string
- name: passage
dtype: string
- name: idx
dtype: int32
- name: label
dtype:
class_label:
names:
'0': 'False'
'1': 'True'
splits:
- name: test
num_bytes: 2107997
num_examples: 3245
- name: train
num_bytes: 6179206
num_examples: 9427
- name: validation
num_bytes: 2118505
num_examples: 3270
download_size: 4118001
dataset_size: 10405708
- config_name: cb
features:
- name: premise
dtype: string
- name: hypothesis
dtype: string
- name: idx
dtype: int32
- name: label
dtype:
class_label:
names:
'0': entailment
'1': contradiction
'2': neutral
splits:
- name: test
num_bytes: 93660
num_examples: 250
- name: train
num_bytes: 87218
num_examples: 250
- name: validation
num_bytes: 21894
num_examples: 56
download_size: 75482
dataset_size: 202772
- config_name: copa
features:
- name: premise
dtype: string
- name: choice1
dtype: string
- name: choice2
dtype: string
- name: question
dtype: string
- name: idx
dtype: int32
- name: label
dtype:
class_label:
names:
'0': choice1
'1': choice2
splits:
- name: test
num_bytes: 60303
num_examples: 500
- name: train
num_bytes: 49599
num_examples: 400
- name: validation
num_bytes: 12586
num_examples: 100
download_size: 43986
dataset_size: 122488
- config_name: multirc
features:
- name: paragraph
dtype: string
- name: question
dtype: string
- name: answer
dtype: string
- name: idx
struct:
- name: paragraph
dtype: int32
- name: question
dtype: int32
- name: answer
dtype: int32
- name: label
dtype:
class_label:
names:
'0': 'False'
'1': 'True'
splits:
- name: test
num_bytes: 14996451
num_examples: 9693
- name: train
num_bytes: 46213579
num_examples: 27243
- name: validation
num_bytes: 7758918
num_examples: 4848
download_size: 1116225
dataset_size: 68968948
- config_name: record
features:
- name: passage
dtype: string
- name: query
dtype: string
- name: entities
sequence: string
- name: entity_spans
sequence:
- name: text
dtype: string
- name: start
dtype: int32
- name: end
dtype: int32
- name: answers
sequence: string
- name: idx
struct:
- name: passage
dtype: int32
- name: query
dtype: int32
splits:
- name: train
num_bytes: 179232052
num_examples: 100730
- name: validation
num_bytes: 17479084
num_examples: 10000
- name: test
num_bytes: 17200575
num_examples: 10000
download_size: 51757880
dataset_size: 213911711
- config_name: rte
features:
- name: premise
dtype: string
- name: hypothesis
dtype: string
- name: idx
dtype: int32
- name: label
dtype:
class_label:
names:
'0': entailment
'1': not_entailment
splits:
- name: test
num_bytes: 975799
num_examples: 3000
- name: train
num_bytes: 848745
num_examples: 2490
- name: validation
num_bytes: 90899
num_examples: 277
download_size: 750920
dataset_size: 1915443
- config_name: wic
features:
- name: word
dtype: string
- name: sentence1
dtype: string
- name: sentence2
dtype: string
- name: start1
dtype: int32
- name: start2
dtype: int32
- name: end1
dtype: int32
- name: end2
dtype: int32
- name: idx
dtype: int32
- name: label
dtype:
class_label:
names:
'0': 'False'
'1': 'True'
splits:
- name: test
num_bytes: 180593
num_examples: 1400
- name: train
num_bytes: 665183
num_examples: 5428
- name: validation
num_bytes: 82623
num_examples: 638
download_size: 396213
dataset_size: 928399
- config_name: wsc
features:
- name: text
dtype: string
- name: span1_index
dtype: int32
- name: span2_index
dtype: int32
- name: span1_text
dtype: string
- name: span2_text
dtype: string
- name: idx
dtype: int32
- name: label
dtype:
class_label:
names:
'0': 'False'
'1': 'True'
splits:
- name: test
num_bytes: 31572
num_examples: 146
- name: train
num_bytes: 89883
num_examples: 554
- name: validation
num_bytes: 21637
num_examples: 104
download_size: 32751
dataset_size: 143092
- config_name: wsc.fixed
features:
- name: text
dtype: string
- name: span1_index
dtype: int32
- name: span2_index
dtype: int32
- name: span1_text
dtype: string
- name: span2_text
dtype: string
- name: idx
dtype: int32
- name: label
dtype:
class_label:
names:
'0': 'False'
'1': 'True'
splits:
- name: test
num_bytes: 31568
num_examples: 146
- name: train
num_bytes: 89883
num_examples: 554
- name: validation
num_bytes: 21637
num_examples: 104
download_size: 32751
dataset_size: 143088
- config_name: axb
features:
- name: sentence1
dtype: string
- name: sentence2
dtype: string
- name: idx
dtype: int32
- name: label
dtype:
class_label:
names:
'0': entailment
'1': not_entailment
splits:
- name: test
num_bytes: 238392
num_examples: 1104
download_size: 33950
dataset_size: 238392
- config_name: axg
features:
- name: premise
dtype: string
- name: hypothesis
dtype: string
- name: idx
dtype: int32
- name: label
dtype:
class_label:
names:
'0': entailment
'1': not_entailment
splits:
- name: test
num_bytes: 53581
num_examples: 356
download_size: 10413
dataset_size: 53581
---
# Dataset Card for "super_glue"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://github.com/google-research-datasets/boolean-questions](https://github.com/google-research-datasets/boolean-questions)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 55.66 MB
- **Size of the generated dataset:** 238.01 MB
- **Total amount of disk used:** 293.67 MB
### Dataset Summary
SuperGLUE (https://super.gluebenchmark.com/) is a new benchmark styled after
GLUE with a new set of more difficult language understanding tasks, improved
resources, and a new public leaderboard.
BoolQ (Boolean Questions, Clark et al., 2019a) is a QA task where each example consists of a short
passage and a yes/no question about the passage. The questions are provided anonymously and
unsolicited by users of the Google search engine, and afterwards paired with a paragraph from a
Wikipedia article containing the answer. Following the original work, we evaluate with accuracy.
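As a quick orientation, the snippet below is a minimal sketch of loading the BoolQ config with the 🤗 `datasets` library; exact arguments can vary across library versions (newer releases may require `trust_remote_code=True` for script-based datasets).
```python
# Minimal sketch: load the BoolQ config of SuperGLUE with 🤗 datasets
# (assumes `pip install datasets`).
from datasets import load_dataset

boolq = load_dataset("super_glue", "boolq")
print(boolq)  # splits mirror the YAML above: 9427 train / 3270 validation / 3245 test

example = boolq["train"][0]
print(example["question"])
print(example["passage"][:200])
print(example["label"])  # 0 = False, 1 = True
```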
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Structure
### Data Instances
#### axb
- **Size of downloaded dataset files:** 0.03 MB
- **Size of the generated dataset:** 0.23 MB
- **Total amount of disk used:** 0.26 MB
An example of 'test' looks as follows.
```
```
#### axg
- **Size of downloaded dataset files:** 0.01 MB
- **Size of the generated dataset:** 0.05 MB
- **Total amount of disk used:** 0.06 MB
An example of 'test' looks as follows.
```
```
#### boolq
- **Size of downloaded dataset files:** 3.93 MB
- **Size of the generated dataset:** 9.92 MB
- **Total amount of disk used:** 13.85 MB
An example of 'train' looks as follows.
```
```
#### cb
- **Size of downloaded dataset files:** 0.07 MB
- **Size of the generated dataset:** 0.19 MB
- **Total amount of disk used:** 0.27 MB
An example of 'train' looks as follows.
```
```
#### copa
- **Size of downloaded dataset files:** 0.04 MB
- **Size of the generated dataset:** 0.12 MB
- **Total amount of disk used:** 0.16 MB
An example of 'train' looks as follows.
```
```
### Data Fields
The data fields are the same among all splits.
#### axb
- `sentence1`: a `string` feature.
- `sentence2`: a `string` feature.
- `idx`: an `int32` feature.
- `label`: a classification label, with possible values including `entailment` (0), `not_entailment` (1).
#### axg
- `premise`: a `string` feature.
- `hypothesis`: a `string` feature.
- `idx`: an `int32` feature.
- `label`: a classification label, with possible values including `entailment` (0), `not_entailment` (1).
#### boolq
- `question`: a `string` feature.
- `passage`: a `string` feature.
- `idx`: an `int32` feature.
- `label`: a classification label, with possible values including `False` (0), `True` (1).
#### cb
- `premise`: a `string` feature.
- `hypothesis`: a `string` feature.
- `idx`: an `int32` feature.
- `label`: a classification label, with possible values including `entailment` (0), `contradiction` (1), `neutral` (2).
#### copa
- `premise`: a `string` feature.
- `choice1`: a `string` feature.
- `choice2`: a `string` feature.
- `question`: a `string` feature.
- `idx`: an `int32` feature.
- `label`: a classification label, with possible values including `choice1` (0), `choice2` (1).
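As a hedged sketch of how these `class_label` fields behave with the 🤗 `datasets` library (integer values with attached string names):
```python
# Sketch: class_label fields are integers with attached string names
# (assumes the `datasets` library; exact load arguments may vary by version).
from datasets import load_dataset

copa = load_dataset("super_glue", "copa", split="train")
label_feature = copa.features["label"]

print(label_feature.names)               # ['choice1', 'choice2']
print(label_feature.int2str(0))          # 'choice1'
print(label_feature.str2int("choice2"))  # 1
```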
### Data Splits
#### axb
| |test|
|---|---:|
|axb|1104|
#### axg
| |test|
|---|---:|
|axg| 356|
#### boolq
| |train|validation|test|
|-----|----:|---------:|---:|
|boolq| 9427| 3270|3245|
#### cb
| |train|validation|test|
|---|----:|---------:|---:|
|cb | 250| 56| 250|
#### copa
| |train|validation|test|
|----|----:|---------:|---:|
|copa| 400| 100| 500|
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@inproceedings{clark2019boolq,
title={BoolQ: Exploring the Surprising Difficulty of Natural Yes/No Questions},
  author={Clark, Christopher and Lee, Kenton and Chang, Ming-Wei and Kwiatkowski, Tom and Collins, Michael and Toutanova, Kristina},
booktitle={NAACL},
year={2019}
}
@article{wang2019superglue,
title={SuperGLUE: A Stickier Benchmark for General-Purpose Language Understanding Systems},
author={Wang, Alex and Pruksachatkun, Yada and Nangia, Nikita and Singh, Amanpreet and Michael, Julian and Hill, Felix and Levy, Omer and Bowman, Samuel R},
journal={arXiv preprint arXiv:1905.00537},
year={2019}
}
```

Note that each SuperGLUE dataset has its own citation. Please see the source to get the correct citation for each contained dataset.
### Contributions
Thanks to [@thomwolf](https://github.com/thomwolf), [@lewtun](https://github.com/lewtun), [@patrickvonplaten](https://github.com/patrickvonplaten) for adding this dataset. | [
-0.577597439289093,
-0.6432895064353943,
0.11570750176906586,
-0.01703517697751522,
-0.12759791314601898,
-0.097799152135849,
-0.1336895078420639,
-0.434673011302948,
0.5041906237602234,
0.42067599296569824,
-0.7235814929008484,
-0.7790929675102234,
-0.38170403242111206,
0.0965930745005607... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
matchbench/rel-heter | matchbench | 2023-01-23T13:54:35Z | 17 | 0 | null | [
"region:us"
] | 2023-01-23T13:54:35Z | 2023-01-18T14:43:30.000Z | 2023-01-18T14:43:30 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622263669967651,
0.43461522459983826,
-0.52829909324646,
0.7012971639633179,
0.7915719747543335,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104475975036621,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
KTH/hungarian-single-speaker-tts | KTH | 2023-01-22T13:11:38Z | 17 | 3 | null | [
"task_categories:text-to-speech",
"task_categories:other",
"annotations_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:hu",
"license:cc0-1.0",
"arxiv:1903.11269",
"region:us"
] | 2023-01-22T13:11:38Z | 2023-01-21T12:03:09.000Z | 2023-01-21T12:03:09 | ---
dataset_info:
features:
- name: id
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 22050
- name: original_text
dtype: string
- name: text
dtype: string
- name: duration
dtype: float64
splits:
- name: train
num_bytes: 3173032948.2
num_examples: 4515
download_size: 0
dataset_size: 3173032948.2
annotations_creators:
- expert-generated
language:
- hu
license: cc0-1.0
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- text-to-speech
- other
task_ids: []
---
# Dataset Card for CSS10 Hungarian: Single Speaker Speech Dataset
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-instances)
- [Data Splits](#data-instances)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [Hungarian Single Speaker Speech Dataset](https://www.kaggle.com/datasets/bryanpark/hungarian-single-speaker-speech-dataset)
- **Repository:** [CSS10](https://github.com/kyubyong/css10)
- **Paper:** [CSS10: A Collection of Single Speaker Speech Datasets for 10 Languages](https://arxiv.org/abs/1903.11269)
### Dataset Summary
The corpus consists of recordings by a single speaker: 4515 segments extracted from a single LibriVox audiobook.
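For orientation, a minimal sketch of loading one example with the 🤗 `datasets` library (assuming `datasets[audio]` is installed; per the schema above, the audio column decodes at 22050 Hz):
```python
# Minimal sketch (assumes `pip install "datasets[audio]"`): stream one
# segment of the 22050 Hz single-speaker corpus.
from datasets import load_dataset

ds = load_dataset("KTH/hungarian-single-speaker-tts", split="train", streaming=True)
sample = next(iter(ds))

print(sample["text"])          # normalized transcript
print(sample["duration"])      # segment length in seconds
audio = sample["audio"]        # decoded lazily on access
print(audio["sampling_rate"], len(audio["array"]))
```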
### Supported Tasks and Leaderboards
[Needs More Information]
### Languages
The audio is in Hungarian.
## Dataset Structure
[Needs More Information]
### Data Instances
[Needs More Information]
### Data Fields
[Needs More Information]
### Data Splits
[Needs More Information]
## Dataset Creation
### Curation Rationale
CSS10 is a collection of single speaker speech datasets for 10 languages. Each of them consists of audio files recorded by a single volunteer and their aligned text sourced from LibriVox.
### Source Data
#### Initial Data Collection and Normalization
[Egri csillagok](https://librivox.org/egri-csillagok-by-geza-gardonyi/),
read by Diana Majlinger.
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
Kyubyong Park & Tommy Mulc
### Licensing Information
[CC0: Public Domain](https://creativecommons.org/publicdomain/zero/1.0/)
### Citation Information
```
@article{park2019css10,
title={CSS10: A Collection of Single Speaker Speech Datasets for 10 Languages},
author={Park, Kyubyong and Mulc, Thomas},
journal={Interspeech},
year={2019}
}
```
### Contributions
[Needs More Information] | [
-0.6015537977218628,
-0.4992436468601227,
-0.183222234249115,
0.3143112361431122,
-0.16144271194934845,
0.04115105792880058,
-0.791921079158783,
-0.2463976889848709,
0.573926568031311,
0.5146570205688477,
-0.8058168292045593,
-1.020490050315857,
-0.3534441888332367,
0.1315491497516632,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
fathyshalab/atis_intents | fathyshalab | 2023-01-23T18:25:53Z | 17 | 0 | null | [
"region:us"
] | 2023-01-23T18:25:53Z | 2023-01-23T18:19:03.000Z | 2023-01-23T18:19:03 | ---
dataset_info:
features:
- name: label text
dtype: string
- name: text
dtype: string
- name: label
dtype: int64
splits:
- name: train
num_bytes: 448812
num_examples: 4834
- name: test
num_bytes: 69352
num_examples: 800
download_size: 157677
dataset_size: 518164
---
# Dataset Card for "atis_intents"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.3317960202693939,
-0.05529149994254112,
0.4542587399482727,
0.16012261807918549,
-0.0966121181845665,
-0.2642708122730255,
0.2893419861793518,
-0.16361606121063232,
1.0658860206604004,
0.5719395875930786,
-0.9475190043449402,
-0.8420353531837463,
-0.5277147889137268,
-0.3502730429172516... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
plncmm/wl-abbreviation | plncmm | 2023-01-23T18:45:03Z | 17 | 0 | null | [
"license:cc-by-nc-4.0",
"region:us"
] | 2023-01-23T18:45:03Z | 2023-01-23T18:43:15.000Z | 2023-01-23T18:43:15 | ---
license: cc-by-nc-4.0
---
| [
-0.12853392958641052,
-0.18616779148578644,
0.6529127955436707,
0.49436280131340027,
-0.19319361448287964,
0.23607419431209564,
0.36072003841400146,
0.050563063472509384,
0.579365611076355,
0.7400140762329102,
-0.6508104205131531,
-0.23783954977989197,
-0.7102249264717102,
-0.0478260256350... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Piro17/balancednumber-affecthqnet-fer2013 | Piro17 | 2023-02-10T15:48:19Z | 17 | 0 | null | [
"region:us"
] | 2023-02-10T15:48:19Z | 2023-02-10T15:46:56.000Z | 2023-02-10T15:46:56 | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': anger
'1': disgust
'2': fear
'3': happy
'4': neutral
'5': sad
'6': surprise
splits:
- name: train
num_bytes: 40414185.188
num_examples: 21343
download_size: 1835629540
dataset_size: 40414185.188
---
# Dataset Card for "dataset-balanced-affecthqnet-fer2013"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.5886436104774475,
-0.26511603593826294,
-0.0005134461680427194,
0.61745285987854,
-0.07985790818929672,
-0.15527507662773132,
0.394509494304657,
-0.13693849742412567,
1.0516940355300903,
0.3669983446598053,
-1.0291357040405273,
-0.407826691865921,
-0.5087429285049438,
-0.039085175842046... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Multimodal-Fatima/Imagenet1k_sample_train | Multimodal-Fatima | 2023-02-10T18:05:32Z | 17 | 0 | null | [
"region:us"
] | 2023-02-10T18:05:32Z | 2023-02-10T18:05:04.000Z | 2023-02-10T18:05:04 | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': tench, Tinca tinca
'1': goldfish, Carassius auratus
'2': great white shark, white shark, man-eater, man-eating shark, Carcharodon
carcharias
'3': tiger shark, Galeocerdo cuvieri
'4': hammerhead, hammerhead shark
'5': electric ray, crampfish, numbfish, torpedo
'6': stingray
'7': cock
'8': hen
'9': ostrich, Struthio camelus
'10': brambling, Fringilla montifringilla
'11': goldfinch, Carduelis carduelis
'12': house finch, linnet, Carpodacus mexicanus
'13': junco, snowbird
'14': indigo bunting, indigo finch, indigo bird, Passerina cyanea
'15': robin, American robin, Turdus migratorius
'16': bulbul
'17': jay
'18': magpie
'19': chickadee
'20': water ouzel, dipper
'21': kite
'22': bald eagle, American eagle, Haliaeetus leucocephalus
'23': vulture
'24': great grey owl, great gray owl, Strix nebulosa
'25': European fire salamander, Salamandra salamandra
'26': common newt, Triturus vulgaris
'27': eft
'28': spotted salamander, Ambystoma maculatum
'29': axolotl, mud puppy, Ambystoma mexicanum
'30': bullfrog, Rana catesbeiana
'31': tree frog, tree-frog
'32': tailed frog, bell toad, ribbed toad, tailed toad, Ascaphus trui
'33': loggerhead, loggerhead turtle, Caretta caretta
'34': leatherback turtle, leatherback, leathery turtle, Dermochelys coriacea
'35': mud turtle
'36': terrapin
'37': box turtle, box tortoise
'38': banded gecko
'39': common iguana, iguana, Iguana iguana
'40': American chameleon, anole, Anolis carolinensis
'41': whiptail, whiptail lizard
'42': agama
'43': frilled lizard, Chlamydosaurus kingi
'44': alligator lizard
'45': Gila monster, Heloderma suspectum
'46': green lizard, Lacerta viridis
'47': African chameleon, Chamaeleo chamaeleon
'48': Komodo dragon, Komodo lizard, dragon lizard, giant lizard, Varanus
komodoensis
'49': African crocodile, Nile crocodile, Crocodylus niloticus
'50': American alligator, Alligator mississipiensis
'51': triceratops
'52': thunder snake, worm snake, Carphophis amoenus
'53': ringneck snake, ring-necked snake, ring snake
'54': hognose snake, puff adder, sand viper
'55': green snake, grass snake
'56': king snake, kingsnake
'57': garter snake, grass snake
'58': water snake
'59': vine snake
'60': night snake, Hypsiglena torquata
'61': boa constrictor, Constrictor constrictor
'62': rock python, rock snake, Python sebae
'63': Indian cobra, Naja naja
'64': green mamba
'65': sea snake
'66': horned viper, cerastes, sand viper, horned asp, Cerastes cornutus
'67': diamondback, diamondback rattlesnake, Crotalus adamanteus
'68': sidewinder, horned rattlesnake, Crotalus cerastes
'69': trilobite
'70': harvestman, daddy longlegs, Phalangium opilio
'71': scorpion
'72': black and gold garden spider, Argiope aurantia
'73': barn spider, Araneus cavaticus
'74': garden spider, Aranea diademata
'75': black widow, Latrodectus mactans
'76': tarantula
'77': wolf spider, hunting spider
'78': tick
'79': centipede
'80': black grouse
'81': ptarmigan
'82': ruffed grouse, partridge, Bonasa umbellus
'83': prairie chicken, prairie grouse, prairie fowl
'84': peacock
'85': quail
'86': partridge
'87': African grey, African gray, Psittacus erithacus
'88': macaw
'89': sulphur-crested cockatoo, Kakatoe galerita, Cacatua galerita
'90': lorikeet
'91': coucal
'92': bee eater
'93': hornbill
'94': hummingbird
'95': jacamar
'96': toucan
'97': drake
'98': red-breasted merganser, Mergus serrator
'99': goose
'100': black swan, Cygnus atratus
'101': tusker
'102': echidna, spiny anteater, anteater
'103': platypus, duckbill, duckbilled platypus, duck-billed platypus, Ornithorhynchus
anatinus
'104': wallaby, brush kangaroo
'105': koala, koala bear, kangaroo bear, native bear, Phascolarctos cinereus
'106': wombat
'107': jellyfish
'108': sea anemone, anemone
'109': brain coral
'110': flatworm, platyhelminth
'111': nematode, nematode worm, roundworm
'112': conch
'113': snail
'114': slug
'115': sea slug, nudibranch
'116': chiton, coat-of-mail shell, sea cradle, polyplacophore
'117': chambered nautilus, pearly nautilus, nautilus
'118': Dungeness crab, Cancer magister
'119': rock crab, Cancer irroratus
'120': fiddler crab
'121': king crab, Alaska crab, Alaskan king crab, Alaska king crab, Paralithodes
camtschatica
'122': American lobster, Northern lobster, Maine lobster, Homarus americanus
'123': spiny lobster, langouste, rock lobster, crawfish, crayfish, sea crawfish
'124': crayfish, crawfish, crawdad, crawdaddy
'125': hermit crab
'126': isopod
'127': white stork, Ciconia ciconia
'128': black stork, Ciconia nigra
'129': spoonbill
'130': flamingo
'131': little blue heron, Egretta caerulea
'132': American egret, great white heron, Egretta albus
'133': bittern
'134': crane
'135': limpkin, Aramus pictus
'136': European gallinule, Porphyrio porphyrio
'137': American coot, marsh hen, mud hen, water hen, Fulica americana
'138': bustard
'139': ruddy turnstone, Arenaria interpres
'140': red-backed sandpiper, dunlin, Erolia alpina
'141': redshank, Tringa totanus
'142': dowitcher
'143': oystercatcher, oyster catcher
'144': pelican
'145': king penguin, Aptenodytes patagonica
'146': albatross, mollymawk
'147': grey whale, gray whale, devilfish, Eschrichtius gibbosus, Eschrichtius
robustus
'148': killer whale, killer, orca, grampus, sea wolf, Orcinus orca
'149': dugong, Dugong dugon
'150': sea lion
'151': Chihuahua
'152': Japanese spaniel
'153': Maltese dog, Maltese terrier, Maltese
'154': Pekinese, Pekingese, Peke
'155': Shih-Tzu
'156': Blenheim spaniel
'157': papillon
'158': toy terrier
'159': Rhodesian ridgeback
'160': Afghan hound, Afghan
'161': basset, basset hound
'162': beagle
'163': bloodhound, sleuthhound
'164': bluetick
'165': black-and-tan coonhound
'166': Walker hound, Walker foxhound
'167': English foxhound
'168': redbone
'169': borzoi, Russian wolfhound
'170': Irish wolfhound
'171': Italian greyhound
'172': whippet
'173': Ibizan hound, Ibizan Podenco
'174': Norwegian elkhound, elkhound
'175': otterhound, otter hound
'176': Saluki, gazelle hound
'177': Scottish deerhound, deerhound
'178': Weimaraner
'179': Staffordshire bullterrier, Staffordshire bull terrier
'180': American Staffordshire terrier, Staffordshire terrier, American pit
bull terrier, pit bull terrier
'181': Bedlington terrier
'182': Border terrier
'183': Kerry blue terrier
'184': Irish terrier
'185': Norfolk terrier
'186': Norwich terrier
'187': Yorkshire terrier
'188': wire-haired fox terrier
'189': Lakeland terrier
'190': Sealyham terrier, Sealyham
'191': Airedale, Airedale terrier
'192': cairn, cairn terrier
'193': Australian terrier
'194': Dandie Dinmont, Dandie Dinmont terrier
'195': Boston bull, Boston terrier
'196': miniature schnauzer
'197': giant schnauzer
'198': standard schnauzer
'199': Scotch terrier, Scottish terrier, Scottie
'200': Tibetan terrier, chrysanthemum dog
'201': silky terrier, Sydney silky
'202': soft-coated wheaten terrier
'203': West Highland white terrier
'204': Lhasa, Lhasa apso
'205': flat-coated retriever
'206': curly-coated retriever
'207': golden retriever
'208': Labrador retriever
'209': Chesapeake Bay retriever
'210': German short-haired pointer
'211': vizsla, Hungarian pointer
'212': English setter
'213': Irish setter, red setter
'214': Gordon setter
'215': Brittany spaniel
'216': clumber, clumber spaniel
'217': English springer, English springer spaniel
'218': Welsh springer spaniel
'219': cocker spaniel, English cocker spaniel, cocker
'220': Sussex spaniel
'221': Irish water spaniel
'222': kuvasz
'223': schipperke
'224': groenendael
'225': malinois
'226': briard
'227': kelpie
'228': komondor
'229': Old English sheepdog, bobtail
'230': Shetland sheepdog, Shetland sheep dog, Shetland
'231': collie
'232': Border collie
'233': Bouvier des Flandres, Bouviers des Flandres
'234': Rottweiler
'235': German shepherd, German shepherd dog, German police dog, alsatian
'236': Doberman, Doberman pinscher
'237': miniature pinscher
'238': Greater Swiss Mountain dog
'239': Bernese mountain dog
'240': Appenzeller
'241': EntleBucher
'242': boxer
'243': bull mastiff
'244': Tibetan mastiff
'245': French bulldog
'246': Great Dane
'247': Saint Bernard, St Bernard
'248': Eskimo dog, husky
'249': malamute, malemute, Alaskan malamute
'250': Siberian husky
'251': dalmatian, coach dog, carriage dog
'252': affenpinscher, monkey pinscher, monkey dog
'253': basenji
'254': pug, pug-dog
'255': Leonberg
'256': Newfoundland, Newfoundland dog
'257': Great Pyrenees
'258': Samoyed, Samoyede
'259': Pomeranian
'260': chow, chow chow
'261': keeshond
'262': Brabancon griffon
'263': Pembroke, Pembroke Welsh corgi
'264': Cardigan, Cardigan Welsh corgi
'265': toy poodle
'266': miniature poodle
'267': standard poodle
'268': Mexican hairless
'269': timber wolf, grey wolf, gray wolf, Canis lupus
'270': white wolf, Arctic wolf, Canis lupus tundrarum
'271': red wolf, maned wolf, Canis rufus, Canis niger
'272': coyote, prairie wolf, brush wolf, Canis latrans
'273': dingo, warrigal, warragal, Canis dingo
'274': dhole, Cuon alpinus
'275': African hunting dog, hyena dog, Cape hunting dog, Lycaon pictus
'276': hyena, hyaena
'277': red fox, Vulpes vulpes
'278': kit fox, Vulpes macrotis
'279': Arctic fox, white fox, Alopex lagopus
'280': grey fox, gray fox, Urocyon cinereoargenteus
'281': tabby, tabby cat
'282': tiger cat
'283': Persian cat
'284': Siamese cat, Siamese
'285': Egyptian cat
'286': cougar, puma, catamount, mountain lion, painter, panther, Felis concolor
'287': lynx, catamount
'288': leopard, Panthera pardus
'289': snow leopard, ounce, Panthera uncia
'290': jaguar, panther, Panthera onca, Felis onca
'291': lion, king of beasts, Panthera leo
'292': tiger, Panthera tigris
'293': cheetah, chetah, Acinonyx jubatus
'294': brown bear, bruin, Ursus arctos
'295': American black bear, black bear, Ursus americanus, Euarctos americanus
'296': ice bear, polar bear, Ursus Maritimus, Thalarctos maritimus
'297': sloth bear, Melursus ursinus, Ursus ursinus
'298': mongoose
'299': meerkat, mierkat
'300': tiger beetle
'301': ladybug, ladybeetle, lady beetle, ladybird, ladybird beetle
'302': ground beetle, carabid beetle
'303': long-horned beetle, longicorn, longicorn beetle
'304': leaf beetle, chrysomelid
'305': dung beetle
'306': rhinoceros beetle
'307': weevil
'308': fly
'309': bee
'310': ant, emmet, pismire
'311': grasshopper, hopper
'312': cricket
'313': walking stick, walkingstick, stick insect
'314': cockroach, roach
'315': mantis, mantid
'316': cicada, cicala
'317': leafhopper
'318': lacewing, lacewing fly
'319': dragonfly, darning needle, devil's darning needle, sewing needle,
snake feeder, snake doctor, mosquito hawk, skeeter hawk
'320': damselfly
'321': admiral
'322': ringlet, ringlet butterfly
'323': monarch, monarch butterfly, milkweed butterfly, Danaus plexippus
'324': cabbage butterfly
'325': sulphur butterfly, sulfur butterfly
'326': lycaenid, lycaenid butterfly
'327': starfish, sea star
'328': sea urchin
'329': sea cucumber, holothurian
'330': wood rabbit, cottontail, cottontail rabbit
'331': hare
'332': Angora, Angora rabbit
'333': hamster
'334': porcupine, hedgehog
'335': fox squirrel, eastern fox squirrel, Sciurus niger
'336': marmot
'337': beaver
'338': guinea pig, Cavia cobaya
'339': sorrel
'340': zebra
'341': hog, pig, grunter, squealer, Sus scrofa
'342': wild boar, boar, Sus scrofa
'343': warthog
'344': hippopotamus, hippo, river horse, Hippopotamus amphibius
'345': ox
'346': water buffalo, water ox, Asiatic buffalo, Bubalus bubalis
'347': bison
'348': ram, tup
'349': bighorn, bighorn sheep, cimarron, Rocky Mountain bighorn, Rocky Mountain
sheep, Ovis canadensis
'350': ibex, Capra ibex
'351': hartebeest
'352': impala, Aepyceros melampus
'353': gazelle
'354': Arabian camel, dromedary, Camelus dromedarius
'355': llama
'356': weasel
'357': mink
'358': polecat, fitch, foulmart, foumart, Mustela putorius
'359': black-footed ferret, ferret, Mustela nigripes
'360': otter
'361': skunk, polecat, wood pussy
'362': badger
'363': armadillo
'364': three-toed sloth, ai, Bradypus tridactylus
'365': orangutan, orang, orangutang, Pongo pygmaeus
'366': gorilla, Gorilla gorilla
'367': chimpanzee, chimp, Pan troglodytes
'368': gibbon, Hylobates lar
'369': siamang, Hylobates syndactylus, Symphalangus syndactylus
'370': guenon, guenon monkey
'371': patas, hussar monkey, Erythrocebus patas
'372': baboon
'373': macaque
'374': langur
'375': colobus, colobus monkey
'376': proboscis monkey, Nasalis larvatus
'377': marmoset
'378': capuchin, ringtail, Cebus capucinus
'379': howler monkey, howler
'380': titi, titi monkey
'381': spider monkey, Ateles geoffroyi
'382': squirrel monkey, Saimiri sciureus
'383': Madagascar cat, ring-tailed lemur, Lemur catta
'384': indri, indris, Indri indri, Indri brevicaudatus
'385': Indian elephant, Elephas maximus
'386': African elephant, Loxodonta africana
'387': lesser panda, red panda, panda, bear cat, cat bear, Ailurus fulgens
'388': giant panda, panda, panda bear, coon bear, Ailuropoda melanoleuca
'389': barracouta, snoek
'390': eel
'391': coho, cohoe, coho salmon, blue jack, silver salmon, Oncorhynchus
kisutch
'392': rock beauty, Holocanthus tricolor
'393': anemone fish
'394': sturgeon
'395': gar, garfish, garpike, billfish, Lepisosteus osseus
'396': lionfish
'397': puffer, pufferfish, blowfish, globefish
'398': abacus
'399': abaya
'400': academic gown, academic robe, judge's robe
'401': accordion, piano accordion, squeeze box
'402': acoustic guitar
'403': aircraft carrier, carrier, flattop, attack aircraft carrier
'404': airliner
'405': airship, dirigible
'406': altar
'407': ambulance
'408': amphibian, amphibious vehicle
'409': analog clock
'410': apiary, bee house
'411': apron
'412': ashcan, trash can, garbage can, wastebin, ash bin, ash-bin, ashbin,
dustbin, trash barrel, trash bin
'413': assault rifle, assault gun
'414': backpack, back pack, knapsack, packsack, rucksack, haversack
'415': bakery, bakeshop, bakehouse
'416': balance beam, beam
'417': balloon
'418': ballpoint, ballpoint pen, ballpen, Biro
'419': Band Aid
'420': banjo
'421': bannister, banister, balustrade, balusters, handrail
'422': barbell
'423': barber chair
'424': barbershop
'425': barn
'426': barometer
'427': barrel, cask
'428': barrow, garden cart, lawn cart, wheelbarrow
'429': baseball
'430': basketball
'431': bassinet
'432': bassoon
'433': bathing cap, swimming cap
'434': bath towel
'435': bathtub, bathing tub, bath, tub
'436': beach wagon, station wagon, wagon, estate car, beach waggon, station
waggon, waggon
'437': beacon, lighthouse, beacon light, pharos
'438': beaker
'439': bearskin, busby, shako
'440': beer bottle
'441': beer glass
'442': bell cote, bell cot
'443': bib
'444': bicycle-built-for-two, tandem bicycle, tandem
'445': bikini, two-piece
'446': binder, ring-binder
'447': binoculars, field glasses, opera glasses
'448': birdhouse
'449': boathouse
'450': bobsled, bobsleigh, bob
'451': bolo tie, bolo, bola tie, bola
'452': bonnet, poke bonnet
'453': bookcase
'454': bookshop, bookstore, bookstall
'455': bottlecap
'456': bow
'457': bow tie, bow-tie, bowtie
'458': brass, memorial tablet, plaque
'459': brassiere, bra, bandeau
'460': breakwater, groin, groyne, mole, bulwark, seawall, jetty
'461': breastplate, aegis, egis
'462': broom
'463': bucket, pail
'464': buckle
'465': bulletproof vest
'466': bullet train, bullet
'467': butcher shop, meat market
'468': cab, hack, taxi, taxicab
'469': caldron, cauldron
'470': candle, taper, wax light
'471': cannon
'472': canoe
'473': can opener, tin opener
'474': cardigan
'475': car mirror
'476': carousel, carrousel, merry-go-round, roundabout, whirligig
'477': carpenter's kit, tool kit
'478': carton
'479': car wheel
'480': cash machine, cash dispenser, automated teller machine, automatic
teller machine, automated teller, automatic teller, ATM
'481': cassette
'482': cassette player
'483': castle
'484': catamaran
'485': CD player
'486': cello, violoncello
'487': cellular telephone, cellular phone, cellphone, cell, mobile phone
'488': chain
'489': chainlink fence
'490': chain mail, ring mail, mail, chain armor, chain armour, ring armor,
ring armour
'491': chain saw, chainsaw
'492': chest
'493': chiffonier, commode
'494': chime, bell, gong
'495': china cabinet, china closet
'496': Christmas stocking
'497': church, church building
'498': cinema, movie theater, movie theatre, movie house, picture palace
'499': cleaver, meat cleaver, chopper
'500': cliff dwelling
'501': cloak
'502': clog, geta, patten, sabot
'503': cocktail shaker
'504': coffee mug
'505': coffeepot
'506': coil, spiral, volute, whorl, helix
'507': combination lock
'508': computer keyboard, keypad
'509': confectionery, confectionary, candy store
'510': container ship, containership, container vessel
'511': convertible
'512': corkscrew, bottle screw
'513': cornet, horn, trumpet, trump
'514': cowboy boot
'515': cowboy hat, ten-gallon hat
'516': cradle
'517': crane2
'518': crash helmet
'519': crate
'520': crib, cot
'521': Crock Pot
'522': croquet ball
'523': crutch
'524': cuirass
'525': dam, dike, dyke
'526': desk
'527': desktop computer
'528': dial telephone, dial phone
'529': diaper, nappy, napkin
'530': digital clock
'531': digital watch
'532': dining table, board
'533': dishrag, dishcloth
'534': dishwasher, dish washer, dishwashing machine
'535': disk brake, disc brake
'536': dock, dockage, docking facility
'537': dogsled, dog sled, dog sleigh
'538': dome
'539': doormat, welcome mat
'540': drilling platform, offshore rig
'541': drum, membranophone, tympan
'542': drumstick
'543': dumbbell
'544': Dutch oven
'545': electric fan, blower
'546': electric guitar
'547': electric locomotive
'548': entertainment center
'549': envelope
'550': espresso maker
'551': face powder
'552': feather boa, boa
'553': file, file cabinet, filing cabinet
'554': fireboat
'555': fire engine, fire truck
'556': fire screen, fireguard
'557': flagpole, flagstaff
'558': flute, transverse flute
'559': folding chair
'560': football helmet
'561': forklift
'562': fountain
'563': fountain pen
'564': four-poster
'565': freight car
'566': French horn, horn
'567': frying pan, frypan, skillet
'568': fur coat
'569': garbage truck, dustcart
'570': gasmask, respirator, gas helmet
'571': gas pump, gasoline pump, petrol pump, island dispenser
'572': goblet
'573': go-kart
'574': golf ball
'575': golfcart, golf cart
'576': gondola
'577': gong, tam-tam
'578': gown
'579': grand piano, grand
'580': greenhouse, nursery, glasshouse
'581': grille, radiator grille
'582': grocery store, grocery, food market, market
'583': guillotine
'584': hair slide
'585': hair spray
'586': half track
'587': hammer
'588': hamper
'589': hand blower, blow dryer, blow drier, hair dryer, hair drier
'590': hand-held computer, hand-held microcomputer
'591': handkerchief, hankie, hanky, hankey
'592': hard disc, hard disk, fixed disk
'593': harmonica, mouth organ, harp, mouth harp
'594': harp
'595': harvester, reaper
'596': hatchet
'597': holster
'598': home theater, home theatre
'599': honeycomb
'600': hook, claw
'601': hoopskirt, crinoline
'602': horizontal bar, high bar
'603': horse cart, horse-cart
'604': hourglass
'605': iPod
'606': iron, smoothing iron
'607': jack-o'-lantern
'608': jean, blue jean, denim
'609': jeep, landrover
'610': jersey, T-shirt, tee shirt
'611': jigsaw puzzle
'612': jinrikisha, ricksha, rickshaw
'613': joystick
'614': kimono
'615': knee pad
'616': knot
'617': lab coat, laboratory coat
'618': ladle
'619': lampshade, lamp shade
'620': laptop, laptop computer
'621': lawn mower, mower
'622': lens cap, lens cover
'623': letter opener, paper knife, paperknife
'624': library
'625': lifeboat
'626': lighter, light, igniter, ignitor
'627': limousine, limo
'628': liner, ocean liner
'629': lipstick, lip rouge
'630': Loafer
'631': lotion
'632': loudspeaker, speaker, speaker unit, loudspeaker system, speaker system
'633': loupe, jeweler's loupe
'634': lumbermill, sawmill
'635': magnetic compass
'636': mailbag, postbag
'637': mailbox, letter box
'638': maillot
'639': maillot, tank suit
'640': manhole cover
'641': maraca
'642': marimba, xylophone
'643': mask
'644': matchstick
'645': maypole
'646': maze, labyrinth
'647': measuring cup
'648': medicine chest, medicine cabinet
'649': megalith, megalithic structure
'650': microphone, mike
'651': microwave, microwave oven
'652': military uniform
'653': milk can
'654': minibus
'655': miniskirt, mini
'656': minivan
'657': missile
'658': mitten
'659': mixing bowl
'660': mobile home, manufactured home
'661': Model T
'662': modem
'663': monastery
'664': monitor
'665': moped
'666': mortar
'667': mortarboard
'668': mosque
'669': mosquito net
'670': motor scooter, scooter
'671': mountain bike, all-terrain bike, off-roader
'672': mountain tent
'673': mouse, computer mouse
'674': mousetrap
'675': moving van
'676': muzzle
'677': nail
'678': neck brace
'679': necklace
'680': nipple
'681': notebook, notebook computer
'682': obelisk
'683': oboe, hautboy, hautbois
'684': ocarina, sweet potato
'685': odometer, hodometer, mileometer, milometer
'686': oil filter
'687': organ, pipe organ
'688': oscilloscope, scope, cathode-ray oscilloscope, CRO
'689': overskirt
'690': oxcart
'691': oxygen mask
'692': packet
'693': paddle, boat paddle
'694': paddlewheel, paddle wheel
'695': padlock
'696': paintbrush
'697': pajama, pyjama, pj's, jammies
'698': palace
'699': panpipe, pandean pipe, syrinx
'700': paper towel
'701': parachute, chute
'702': parallel bars, bars
'703': park bench
'704': parking meter
'705': passenger car, coach, carriage
'706': patio, terrace
'707': pay-phone, pay-station
'708': pedestal, plinth, footstall
'709': pencil box, pencil case
'710': pencil sharpener
'711': perfume, essence
'712': Petri dish
'713': photocopier
'714': pick, plectrum, plectron
'715': pickelhaube
'716': picket fence, paling
'717': pickup, pickup truck
'718': pier
'719': piggy bank, penny bank
'720': pill bottle
'721': pillow
'722': ping-pong ball
'723': pinwheel
'724': pirate, pirate ship
'725': pitcher, ewer
'726': plane, carpenter's plane, woodworking plane
'727': planetarium
'728': plastic bag
'729': plate rack
'730': plow, plough
'731': plunger, plumber's helper
'732': Polaroid camera, Polaroid Land camera
'733': pole
'734': police van, police wagon, paddy wagon, patrol wagon, wagon, black
Maria
'735': poncho
'736': pool table, billiard table, snooker table
'737': pop bottle, soda bottle
'738': pot, flowerpot
'739': potter's wheel
'740': power drill
'741': prayer rug, prayer mat
'742': printer
'743': prison, prison house
'744': projectile, missile
'745': projector
'746': puck, hockey puck
'747': punching bag, punch bag, punching ball, punchball
'748': purse
'749': quill, quill pen
'750': quilt, comforter, comfort, puff
'751': racer, race car, racing car
'752': racket, racquet
'753': radiator
'754': radio, wireless
'755': radio telescope, radio reflector
'756': rain barrel
'757': recreational vehicle, RV, R.V.
'758': reel
'759': reflex camera
'760': refrigerator, icebox
'761': remote control, remote
'762': restaurant, eating house, eating place, eatery
'763': revolver, six-gun, six-shooter
'764': rifle
'765': rocking chair, rocker
'766': rotisserie
'767': rubber eraser, rubber, pencil eraser
'768': rugby ball
'769': rule, ruler
'770': running shoe
'771': safe
'772': safety pin
'773': saltshaker, salt shaker
'774': sandal
'775': sarong
'776': sax, saxophone
'777': scabbard
'778': scale, weighing machine
'779': school bus
'780': schooner
'781': scoreboard
'782': screen, CRT screen
'783': screw
'784': screwdriver
'785': seat belt, seatbelt
'786': sewing machine
'787': shield, buckler
'788': shoe shop, shoe-shop, shoe store
'789': shoji
'790': shopping basket
'791': shopping cart
'792': shovel
'793': shower cap
'794': shower curtain
'795': ski
'796': ski mask
'797': sleeping bag
'798': slide rule, slipstick
'799': sliding door
'800': slot, one-armed bandit
'801': snorkel
'802': snowmobile
'803': snowplow, snowplough
'804': soap dispenser
'805': soccer ball
'806': sock
'807': solar dish, solar collector, solar furnace
'808': sombrero
'809': soup bowl
'810': space bar
'811': space heater
'812': space shuttle
'813': spatula
'814': speedboat
'815': spider web, spider's web
'816': spindle
'817': sports car, sport car
'818': spotlight, spot
'819': stage
'820': steam locomotive
'821': steel arch bridge
'822': steel drum
'823': stethoscope
'824': stole
'825': stone wall
'826': stopwatch, stop watch
'827': stove
'828': strainer
'829': streetcar, tram, tramcar, trolley, trolley car
'830': stretcher
'831': studio couch, day bed
'832': stupa, tope
'833': submarine, pigboat, sub, U-boat
'834': suit, suit of clothes
'835': sundial
'836': sunglass
'837': sunglasses, dark glasses, shades
'838': sunscreen, sunblock, sun blocker
'839': suspension bridge
'840': swab, swob, mop
'841': sweatshirt
'842': swimming trunks, bathing trunks
'843': swing
'844': switch, electric switch, electrical switch
'845': syringe
'846': table lamp
'847': tank, army tank, armored combat vehicle, armoured combat vehicle
'848': tape player
'849': teapot
'850': teddy, teddy bear
'851': television, television system
'852': tennis ball
'853': thatch, thatched roof
'854': theater curtain, theatre curtain
'855': thimble
'856': thresher, thrasher, threshing machine
'857': throne
'858': tile roof
'859': toaster
'860': tobacco shop, tobacconist shop, tobacconist
'861': toilet seat
'862': torch
'863': totem pole
'864': tow truck, tow car, wrecker
'865': toyshop
'866': tractor
'867': trailer truck, tractor trailer, trucking rig, rig, articulated lorry,
semi
'868': tray
'869': trench coat
'870': tricycle, trike, velocipede
'871': trimaran
'872': tripod
'873': triumphal arch
'874': trolleybus, trolley coach, trackless trolley
'875': trombone
'876': tub, vat
'877': turnstile
'878': typewriter keyboard
'879': umbrella
'880': unicycle, monocycle
'881': upright, upright piano
'882': vacuum, vacuum cleaner
'883': vase
'884': vault
'885': velvet
'886': vending machine
'887': vestment
'888': viaduct
'889': violin, fiddle
'890': volleyball
'891': waffle iron
'892': wall clock
'893': wallet, billfold, notecase, pocketbook
'894': wardrobe, closet, press
'895': warplane, military plane
'896': washbasin, handbasin, washbowl, lavabo, wash-hand basin
'897': washer, automatic washer, washing machine
'898': water bottle
'899': water jug
'900': water tower
'901': whiskey jug
'902': whistle
'903': wig
'904': window screen
'905': window shade
'906': Windsor tie
'907': wine bottle
'908': wing
'909': wok
'910': wooden spoon
'911': wool, woolen, woollen
'912': worm fence, snake fence, snake-rail fence, Virginia fence
'913': wreck
'914': yawl
'915': yurt
'916': web site, website, internet site, site
'917': comic book
'918': crossword puzzle, crossword
'919': street sign
'920': traffic light, traffic signal, stoplight
'921': book jacket, dust cover, dust jacket, dust wrapper
'922': menu
'923': plate
'924': guacamole
'925': consomme
'926': hot pot, hotpot
'927': trifle
'928': ice cream, icecream
'929': ice lolly, lolly, lollipop, popsicle
'930': French loaf
'931': bagel, beigel
'932': pretzel
'933': cheeseburger
'934': hotdog, hot dog, red hot
'935': mashed potato
'936': head cabbage
'937': broccoli
'938': cauliflower
'939': zucchini, courgette
'940': spaghetti squash
'941': acorn squash
'942': butternut squash
'943': cucumber, cuke
'944': artichoke, globe artichoke
'945': bell pepper
'946': cardoon
'947': mushroom
'948': Granny Smith
'949': strawberry
'950': orange
'951': lemon
'952': fig
'953': pineapple, ananas
'954': banana
'955': jackfruit, jak, jack
'956': custard apple
'957': pomegranate
'958': hay
'959': carbonara
'960': chocolate sauce, chocolate syrup
'961': dough
'962': meat loaf, meatloaf
'963': pizza, pizza pie
'964': potpie
'965': burrito
'966': red wine
'967': espresso
'968': cup
'969': eggnog
'970': alp
'971': bubble
'972': cliff, drop, drop-off
'973': coral reef
'974': geyser
'975': lakeside, lakeshore
'976': promontory, headland, head, foreland
'977': sandbar, sand bar
'978': seashore, coast, seacoast, sea-coast
'979': valley, vale
'980': volcano
'981': ballplayer, baseball player
'982': groom, bridegroom
'983': scuba diver
'984': rapeseed
'985': daisy
'986': yellow lady's slipper, yellow lady-slipper, Cypripedium calceolus,
Cypripedium parviflorum
'987': corn
'988': acorn
'989': hip, rose hip, rosehip
'990': buckeye, horse chestnut, conker
'991': coral fungus
'992': agaric
'993': gyromitra
'994': stinkhorn, carrion fungus
'995': earthstar
'996': hen-of-the-woods, hen of the woods, Polyporus frondosus, Grifola
frondosa
'997': bolete
'998': ear, spike, capitulum
'999': toilet tissue, toilet paper, bathroom tissue
- name: lexicon
sequence: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 349126026.0
num_examples: 3000
download_size: 340943693
dataset_size: 349126026.0
---
# Dataset Card for "Imagenet1k_sample_train"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.6339205503463745,
0.09070248156785965,
-0.0800275057554245,
0.3159215450286865,
-0.40184491872787476,
-0.2941700518131256,
0.3848956823348999,
-0.10289342701435089,
0.8468663096427917,
0.5025452375411987,
-0.9196581840515137,
-0.6871848106384277,
-0.6538602113723755,
-0.3423089385032654... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
KnutJaegersberg/IPTC-topic-classifier-labels | KnutJaegersberg | 2023-02-12T12:50:34Z | 17 | 1 | null | [
"region:us"
] | 2023-02-12T12:50:34Z | 2023-02-12T12:49:59.000Z | 2023-02-12T12:49:59 | Entry not found | [
-0.32276472449302673,
-0.22568407654762268,
0.8622258901596069,
0.4346148371696472,
-0.5282984972000122,
0.7012965679168701,
0.7915717363357544,
0.07618629932403564,
0.7746022939682007,
0.2563222646713257,
-0.785281777381897,
-0.22573848068714142,
-0.9104482531547546,
0.5715669393539429,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
LFBMS/class_dataset_real_donut_train_val | LFBMS | 2023-02-17T12:54:59Z | 17 | 0 | null | [
"region:us"
] | 2023-02-17T12:54:59Z | 2023-02-17T12:54:48.000Z | 2023-02-17T12:54:48 | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': bilanz_h
'1': bilanz_v
'2': guv
'3': kontennachweis_bilanz
'4': kontennachweis_guv
'5': other
'6': text
- name: ground_truth
dtype: string
splits:
- name: train
num_bytes: 294898200.8863026
num_examples: 1005
- name: test
num_bytes: 32864277.113697402
num_examples: 112
download_size: 307756703
dataset_size: 327762478.0
---
# Dataset Card for "class_dataset_real_donut_train_val"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.37028732895851135,
-0.28249090909957886,
0.13268181681632996,
-0.02867397665977478,
0.00173900555819273,
0.2749340236186981,
0.268808513879776,
0.03799424692988396,
0.7405475974082947,
0.4784441292285919,
-0.7258842587471008,
-0.4616943299770355,
-0.5837969183921814,
-0.3232901394367218... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
carexl8/telegram_he_ru | carexl8 | 2023-04-07T11:23:47Z | 17 | 0 | null | [
"region:us"
] | 2023-04-07T11:23:47Z | 2023-02-25T11:55:59.000Z | 2023-02-25T11:55:59 | ---
dataset_info:
features:
- name: id
dtype: string
- name: name
dtype: string
- name: time
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: language tags
sequence: int64
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 30629039
num_examples: 43336
download_size: 8829228
dataset_size: 30629039
---
# Dataset Card for "telegram_he_ru"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.3509250283241272,
-0.27824893593788147,
-0.06770341098308563,
0.4995283782482147,
-0.4138025641441345,
0.07848278433084488,
0.08874082565307617,
-0.22433152794837952,
0.9164149165153503,
0.39650389552116394,
-0.8669653534889221,
-0.8956001996994019,
-0.7102057933807373,
-0.2823605835437... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Jacobvs/CelebrityTweets | Jacobvs | 2023-03-02T23:01:59Z | 17 | 0 | null | [
"region:us"
] | 2023-03-02T23:01:59Z | 2023-03-02T23:01:12.000Z | 2023-03-02T23:01:12 | Entry not found | [
-0.32276472449302673,
-0.22568407654762268,
0.8622258901596069,
0.4346148371696472,
-0.5282984972000122,
0.7012965679168701,
0.7915717363357544,
0.07618629932403564,
0.7746022939682007,
0.2563222646713257,
-0.785281777381897,
-0.22573848068714142,
-0.9104482531547546,
0.5715669393539429,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
IlyaGusev/yandex_q_full | IlyaGusev | 2023-03-07T20:30:24Z | 17 | 1 | null | [
"region:us"
] | 2023-03-07T20:30:24Z | 2023-03-06T18:17:41.000Z | 2023-03-06T18:17:41 | ---
dataset_info:
features:
- name: id
dtype: string
- name: id2
dtype: int64
- name: title
dtype: string
- name: text_plain
dtype: string
- name: text_html
dtype: string
- name: author
dtype: string
- name: negative_votes
dtype: int32
- name: positive_votes
dtype: int32
- name: quality
dtype: int8
- name: views
dtype: uint64
- name: votes
dtype: int32
- name: approved_answer
dtype: string
- name: timestamp
dtype: uint64
- name: tags
sequence: string
- name: answers
sequence:
- name: id
dtype: string
- name: id2
dtype: int64
- name: text_plain
dtype: string
- name: text_html
dtype: string
- name: author
dtype: string
- name: negative_votes
dtype: int32
- name: positive_votes
dtype: int32
- name: votes
dtype: int32
- name: quality
dtype: int8
- name: views
dtype: uint64
- name: reposts
dtype: int32
- name: timestamp
dtype: uint64
splits:
- name: train
num_bytes: 5468460217
num_examples: 1297670
download_size: 1130317937
dataset_size: 5468460217
---
Based on https://huggingface.co/datasets/its5Q/yandex-q, parsed from full.jsonl.gz.
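For orientation, a minimal sketch of inspecting the nested `answers` field with the 🤗 `datasets` library; streaming mode and the dict-of-parallel-lists view of `sequence` structs are assumptions that may vary with library version.
```python
# Minimal sketch (assumes the `datasets` library): stream one record and
# walk the nested `answers` sequence described in the schema above.
from datasets import load_dataset

ds = load_dataset("IlyaGusev/yandex_q_full", split="train", streaming=True)
record = next(iter(ds))

print(record["title"])
print(record["approved_answer"][:200])

# A sequence of named fields is typically exposed as a dict of parallel lists.
answers = record["answers"]
for text, votes in zip(answers["text_plain"], answers["votes"]):
    print(votes, text[:80])
```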
| [
-0.36909177899360657,
-0.3590656518936157,
0.6782532930374146,
0.4008903503417969,
-0.12084157019853592,
-0.11872879415750504,
0.0033704815432429314,
-0.5347347259521484,
0.7812456488609314,
0.5734042525291443,
-1.1155205965042114,
-1.065049171447754,
-0.26916804909706116,
0.09844050556421... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
whyoke/segmentation_drone | whyoke | 2023-03-11T18:26:58Z | 17 | 1 | null | [
"region:us"
] | 2023-03-11T18:26:58Z | 2023-03-11T18:19:16.000Z | 2023-03-11T18:19:16 | ---
dataset_info:
features:
- name: image
dtype: image
- name: annotation
dtype: image
splits:
- name: train
num_bytes: 469141459.0
num_examples: 350
- name: annotation
num_bytes: 53547177.0
num_examples: 40
download_size: 522729573
dataset_size: 522688636.0
---
# Dataset Card for "segmentation_drone"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.7723647952079773,
-0.11876165866851807,
0.05506753548979759,
0.056513626128435135,
-0.357744038105011,
0.2755674719810486,
0.4898149371147156,
-0.22456134855747223,
0.88181471824646,
0.3713266849517822,
-0.7152172327041626,
-0.7354573607444763,
-0.4633946716785431,
-0.4112663269042969,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
AnonymousSub/MedQuAD_Context_Question_Answer_Triples_TWO | AnonymousSub | 2023-03-14T18:17:38Z | 17 | 5 | null | [
"region:us"
] | 2023-03-14T18:17:38Z | 2023-03-14T18:17:35.000Z | 2023-03-14T18:17:35 | ---
dataset_info:
features:
- name: Contexts
dtype: string
- name: Questions
dtype: string
- name: Answers
dtype: string
splits:
- name: train
num_bytes: 190839732
num_examples: 47441
download_size: 21760499
dataset_size: 190839732
---
# Dataset Card for "MedQuAD_Context_Question_Answer_Triples_TWO"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.5476153492927551,
-0.5508464574813843,
0.38098084926605225,
0.21115359663963318,
-0.1891442984342575,
-0.15179872512817383,
0.34573230147361755,
-0.1589037925004959,
0.6213654279708862,
0.6381193399429321,
-0.6552448272705078,
-0.5171641111373901,
-0.4463723301887512,
-0.181440547108650... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
paulofinardi/OIG_small_chip2_portuguese_brasil | paulofinardi | 2023-03-19T23:16:11Z | 17 | 8 | null | [
"task_categories:conversational",
"task_categories:text2text-generation",
"language:pt",
"region:us"
] | 2023-03-19T23:16:11Z | 2023-03-19T22:45:05.000Z | 2023-03-19T22:45:05 | ---
dataset_info:
features:
- name: user
dtype: string
- name: chip2
dtype: string
splits:
- name: train
num_examples: 210289
task_categories:
- conversational
- text2text-generation
language:
- pt
---
# Dataset Card for "OIG_small_chip2_portuguese_brasil"
This dataset is a translation into Brazilian Portuguese of [0-hero/OIG-small-chip2](https://huggingface.co/datasets/0-hero/OIG-small-chip2).
The data was translated with the *MarianMT* model and the [Helsinki-NLP/opus-mt-en-ROMANCE](https://huggingface.co/Helsinki-NLP/opus-mt-en-ROMANCE) weights.
Full details for replicating the translation are here: [translation_notebook](https://github.com/finardi/tutos/blob/master/translate_Laion_OIG.ipynb)
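For context, the sketch below shows what a MarianMT translation pass with those weights might look like; the `>>pt_br<<` target-language prefix is an assumption about this multilingual checkpoint, so consult the linked notebook for the exact pipeline.
```python
# Hedged sketch of a MarianMT translation pass with the weights named above.
# The ">>pt_br<<" target prefix is an assumption; see the linked notebook
# for the setup actually used to build this dataset.
from transformers import MarianMTModel, MarianTokenizer

name = "Helsinki-NLP/opus-mt-en-ROMANCE"
tokenizer = MarianTokenizer.from_pretrained(name)
model = MarianMTModel.from_pretrained(name)

batch = tokenizer([">>pt_br<< How are you today?"], return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```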
---
license: apache-2.0
--- | [
0.0026486574206501245,
-0.533791184425354,
0.11930423229932785,
0.47484198212623596,
-0.6358034610748291,
-0.25094133615493774,
-0.23319239914417267,
-0.665015459060669,
0.4710208475589752,
0.6244562864303589,
-0.4523356556892395,
-0.658862829208374,
-0.674868106842041,
0.0276249460875988,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
somosnlp/somos-clean-alpaca-es | somosnlp | 2023-04-05T15:00:28Z | 17 | 12 | null | [
"region:us"
] | 2023-04-05T15:00:28Z | 2023-03-24T13:09:28.000Z | 2023-03-24T13:09:28 | ---
dataset_info:
features:
- name: text
dtype: 'null'
- name: inputs
struct:
- name: 1-instruction
dtype: string
- name: 2-input
dtype: string
- name: 3-output
dtype: string
- name: prediction
list:
- name: label
dtype: string
- name: score
dtype: float64
- name: prediction_agent
dtype: 'null'
- name: annotation
dtype: 'null'
- name: annotation_agent
dtype: 'null'
- name: vectors
struct:
- name: input
sequence: float64
- name: instruction
sequence: float64
- name: output
sequence: float64
- name: multi_label
dtype: bool
- name: explanation
dtype: 'null'
- name: id
dtype: string
- name: metadata
struct:
- name: tr-flag-1-instruction
dtype: bool
- name: tr-flag-2-input
dtype: bool
- name: tr-flag-3-output
dtype: bool
- name: status
dtype: string
- name: event_timestamp
dtype: timestamp[us]
- name: metrics
dtype: 'null'
splits:
- name: train
num_bytes: 985217294
num_examples: 51942
download_size: 651888026
dataset_size: 985217294
---
# Dataset Card for "somos-clean-alpaca-es"
This dataset is a translation of the Clean Alpaca dataset into Spanish and serves as the reference for the collaborative effort to clean and improve the dataset during the [Somos NLP Hackathon 2023](https://somosnlp.org/hackathon). *Note: it is not necessary to participate in the hackathon to contribute to this task.*
The more people and teams participate, the higher the quality of the final dataset, and therefore of the LLM we train. Join us!
Here is how to participate:
> **[Explainer video (10 mins) | Daniel @Argilla](https://www.youtube.com/watch?v=Q-2qsvOEgnA)**
> **[Article "Help improve Spanish AI LLMs in 7 simple steps" | Carlos @Platzi](https://platzi.com/blog/ayuda-a-mejorar-los-llm-en-espanol-en-7-sencillos-pasos/)**
We are at your disposal in the **[#alpaca-es channel](https://discord.com/invite/my8w7JUxZR)** of the Somos NLP Discord server.
## 🔥 The challenge
The steps and rules for participating are described below:
1. This dataset must be used as the starting point, keeping both the `ids` and the structure. This makes later cross-validation and programmatic improvement of the final dataset possible.
2. The dataset is in an Argilla-compatible format. Every team or person who wants to participate can work with their own Argilla instance. An easy way to get started is to duplicate the Space we created for the challenge; the section below explains how.
3. Argilla can be used to validate and label records manually, as well as through searches and semantic similarity from the UI. Examples of the query language are provided on this page, but we recommend consulting [the usage guide](https://docs.argilla.io/en/latest/guides/query_datasets.html).
4. Human validation is necessary to guarantee final quality, but programmatic cleanups can also be performed where they are more efficient. In any case, for the experiment to succeed, the proposed labels must be used even if the dataset is modified programmatically.
5. Records must not be deleted from the dataset; if a record is invalid, mark it with a label (for example `BAD INPUT`) or with the status `discard`.
6. Before starting to annotate, it is necessary to read the [annotation guide](guia-de-anotacion.md) in full.
The result of the challenge will be one dataset per person or team containing the original dataset partially labeled, and optionally other versions/subsets of the dataset with corrected, improved, or augmented data. In those cases it is advisable to keep a separate dataset with the original ids.
At the end we will combine all the labeled versions to obtain a high-quality dataset.
## ✅ How to start labeling
To label the dataset you have to:
1. Launch your Argilla Space following [this link](https://huggingface.co/spaces/somosnlp/somos-alpaca-es?duplicate=true). This will guide you through creating an Argilla instance on the Hub that automatically loads the dataset (see screenshot below). **IMPORTANT**: the Space must be Public so the labeled data can be read from Python. Loading can take up to 10 minutes; you can check the logs to confirm the data is being loaded.
2. **IMPORTANT:** If you want to sync the validated data with the Hub so annotations are not lost when the Space restarts, you must configure two secrets (in the Space Settings): `HF_TOKEN`, which is [your write token](https://huggingface.co/settings/tokens), and `HUB_DATASET_NAME`, the dataset where you want to save it; be sure to include the organization or user followed by a / and the dataset name, for example `juanmartinez/somos-clean-alpaca-es-validations` or `miempresa/somos-clean-alpaca-es-validations`.
3. The username and password are `argilla` / `1234`. While your Argilla Space loads the dataset, you can use the time to read the annotation guides.
4. Although the annotated dataset will in principle be synced, we recommend opening Colab or a local notebook and periodically saving the dataset to a dataset on the Hub (in your personal space or your organization). For this, see the section on how to save the dataset to the Hub.
It is recommended to check the Space log for errors when configuring the `HF_TOKEN` and `HUB_DATASET_NAME` secrets.

## 🚀 Deploying Argilla locally or on a cloud server
For teams that have the time and want to deploy a version with more compute capacity and stability than Spaces, [here is a how-to guide](https://docs.argilla.io/en/latest/getting_started/installation/deployments/deployments.html).
Once it is installed, the data must be uploaded with [this notebook](https://colab.research.google.com/drive/1KyikSFeJe6_lQNs-9cHveIOGM99ENha9#scrollTo=jbfdRoRVXTW6).
## ✍️ Annotation guides
Before starting to annotate, it is necessary to read the [annotation guide](guia-de-anotacion.md) in full.
## 💾 IMPORTANT: Save the dataset to the Hub periodically
Although the Space is configured to sync with a Hub dataset of your choice, for extra safety it is recommended to save a copy of the dataset to the Hub by running the following code. You need to log in from Python using `from huggingface_hub import notebook_login` or pass the token directly when calling push_to_hub:
```python
import argilla as rg

# Use rg.init() to set the API_URL (the direct URL of your Argilla Space) and API_KEY
rg.init(
    api_url="https://tu-space-de-argilla.hf.space",
    api_key="team.apikey"
)

# Read the dataset with validations from Argilla
rg_dataset = rg.load("somos-clean-alpaca-es-team", query="status:Validated")

# Convert to datasets format
dataset = rg_dataset.to_datasets()

# Publish to the Hub; you can use any dataset name you choose
dataset.push_to_hub("somos-clean-alpaca-es", token="YOUR WRITE TOKEN FROM HUB SETTINGS. NOT NEEDED IF YOU LOGGED IN")
```
Once this is done, you can retrieve the dataset and load it back into Argilla with the "How to load the dataset in Argilla" notebook.
## 🔎 Example queries and labeling tips
It is recommended to start by exploring and labeling the dataset sequentially to understand its structure and identify patterns.
Once that is done, we recommend combining it with the following tools:
### Using the search bar
It works with keywords as well as regular expressions, wildcards, and boolean expressions; see [the usage guide](https://docs.argilla.io/en/latest/guides/query_datasets.html).
One useful feature is the ability to search only in certain fields, using the syntax `inputs.field_name:"query"`:
For example, `inputs.1-instruction:"Crear una página"` would find all records with this text in the instruction.
Moreover, this can be combined with boolean expressions to search several fields: `inputs.1-instruction:"Crear una página" AND inputs.3-output:"html"`
Another example:
Finding instruction sentences in English: `inputs.1-instruction:Edit the following sentence` finds more than 100 invalid instructions.
### Find similar
When we find interesting or erroneous patterns in a record and field, we can use the find similar button to retrieve similar examples thanks to embedding-based similarity search.
### Bulk labeling
If we find a very clear pattern, we can review the examples faster and annotate them in bulk using the top bar, below the search box. If there are many examples, the number of records per page can be increased. In any case, it is recommended to review the examples.
## ✨ Hackathon Somos NLP 2023
- It is not necessary to participate in the hackathon to join this collaborative task.
- Teams participating in the hackathon can use their labeled version of this dataset for their project.
- Labeled versions of this dataset will be eligible for the honorable mention for best labeled dataset.
## 🙌 Acknowledgements
Many thanks
to `versae` from the BERTIN project for translating the dataset,
to `dvilasuero` and `nataliaElv` from Argilla for writing the documentation and answering all the participants' questions,
to `alarcon7a` from Platzi for writing the blog post, and
to `mariagrandury` from Somos NLP for coordinating and integrating the challenge into the hackathon.
When we combine the versions and create the final dataset, we will credit everyone who took part in this effort 🤗 | [
-0.6292793154716492,
-0.6936998963356018,
0.1495274007320404,
0.33576616644859314,
-0.3132304847240448,
-0.277256041765213,
-0.013432001695036888,
-0.42713719606399536,
0.5649349689483643,
0.5055875182151794,
-0.7089945673942566,
-0.8851485848426819,
-0.3559120297431946,
0.5271333456039429... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
s-nlp/en_paradetox_toxicity | s-nlp | 2023-09-08T08:37:06Z | 17 | 1 | null | [
"task_categories:text-classification",
"language:en",
"license:openrail++",
"region:us"
] | 2023-09-08T08:37:06Z | 2023-03-24T14:24:58.000Z | 2023-03-24T14:24:58 | ---
license: openrail++
task_categories:
- text-classification
language:
- en
---
# ParaDetox: Detoxification with Parallel Data (English). Toxicity Task Results
This repository contains information about **Toxicity Task** markup from [English Paradetox dataset](https://huggingface.co/datasets/s-nlp/paradetox) collection pipeline.
The original paper ["ParaDetox: Detoxification with Parallel Data"](https://aclanthology.org/2022.acl-long.469/) was presented at ACL 2022 main conference.
## ParaDetox Collection Pipeline
The ParaDetox Dataset collection was done via [Yandex.Toloka](https://toloka.yandex.com/) crowdsource platform. The collection was done in three steps:
* *Task 1:* **Generation of Paraphrases**: The first crowdsourcing task asks users to eliminate toxicity in a given sentence while keeping the content.
* *Task 2:* **Content Preservation Check**: We show users the generated paraphrases along with their original variants and ask them to indicate if they have close meanings.
* *Task 3:* **Toxicity Check**: Finally, we check if the workers succeeded in removing toxicity.
Specifically, this repo contains the results of **Task 3: Toxicity Check**; only samples with a markup confidence >= 90 are included.
Each example consists of a text and a label indicating whether the text is toxic.
In total, the dataset contains 26,507 samples, of which only a minority (4,009) are toxic.
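A minimal loading sketch (the split name is an assumption; print a record to inspect the actual field names):
```python
from datasets import load_dataset

# Split name "train" is assumed; the printed record shows the actual schema.
ds = load_dataset("s-nlp/en_paradetox_toxicity", split="train")
print(ds[0])
```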
## Citation
```
@inproceedings{logacheva-etal-2022-paradetox,
title = "{P}ara{D}etox: Detoxification with Parallel Data",
author = "Logacheva, Varvara and
Dementieva, Daryna and
Ustyantsev, Sergey and
Moskovskiy, Daniil and
Dale, David and
Krotova, Irina and
Semenov, Nikita and
Panchenko, Alexander",
booktitle = "Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = may,
year = "2022",
address = "Dublin, Ireland",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.acl-long.469",
pages = "6804--6818",
abstract = "We present a novel pipeline for the collection of parallel data for the detoxification task. We collect non-toxic paraphrases for over 10,000 English toxic sentences. We also show that this pipeline can be used to distill a large existing corpus of paraphrases to get toxic-neutral sentence pairs. We release two parallel corpora which can be used for the training of detoxification models. To the best of our knowledge, these are the first parallel datasets for this task.We describe our pipeline in detail to make it fast to set up for a new language or domain, thus contributing to faster and easier development of new parallel resources.We train several detoxification models on the collected data and compare them with several baselines and state-of-the-art unsupervised approaches. We conduct both automatic and manual evaluations. All models trained on parallel data outperform the state-of-the-art unsupervised models by a large margin. This suggests that our novel datasets can boost the performance of detoxification systems.",
}
```
## Contacts
For any questions, please contact: Daryna Dementieva (dardem96@gmail.com) | [
-0.044320207089185715,
-0.3778759241104126,
0.7243964076042175,
0.25260892510414124,
-0.23727382719516754,
-0.06320714950561523,
-0.03104429505765438,
-0.024580128490924835,
0.18538913130760193,
0.774158775806427,
-0.3778817653656006,
-0.9711666703224182,
-0.5501412153244019,
0.50619310140... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
bigbio/cardiode | bigbio | 2023-04-05T01:14:13Z | 17 | 4 | null | [
"multilinguality:monolingual",
"language:ger",
"license:other",
"region:us"
] | 2023-04-05T01:14:13Z | 2023-04-01T16:40:12.000Z | 2023-04-01T16:40:12 | ---
language:
- ger
bigbio_language:
- German
license: other
multilinguality: monolingual
pretty_name: CARDIO:DE
homepage: https://heidata.uni-heidelberg.de/dataset.xhtml?persistentId=doi:10.11588/data/AFYQDY
bigbio_pubmed: false
bigbio_public: false
bigbio_tasks:
- NAMED_ENTITY_RECOGNITION
---
# Dataset Card for CARDIO.DE
## Dataset Description
- **Homepage:** https://heidata.uni-heidelberg.de/dataset.xhtml?persistentId=doi:10.11588/data/AFYQDY
- **Pubmed:** False
- **Public:** False
- **Tasks:** NER
We present CARDIO:DE, the first freely available and distributable large German clinical corpus from the cardiovascular domain. CARDIO:DE encompasses 500 clinical routine German doctor’s letters from Heidelberg University Hospital, which were manually annotated. Our prospective study design complies well with current data protection regulations and allows us to keep the original structure of clinical documents consistent. In order to ease access to our corpus, we manually de-identified all letters. To enable various information extraction tasks the temporal information in the documents was preserved. We added two high-quality manual annotation layers to CARDIO:DE, (1) medication information and (2) CDA-compliant section classes.
## Citation Information
```
@data{data/AFYQDY_2022,
author = {Christoph Dieterich},
publisher = {heiDATA},
title = {{CARDIO:DE}},
year = {2022},
version = {V5},
doi = {10.11588/data/AFYQDY},
url = {https://doi.org/10.11588/data/AFYQDY}
}
```
| [
-0.49792009592056274,
-0.4985542893409729,
0.4272090792655945,
0.0840631052851677,
-0.42644375562667847,
-0.13179582357406616,
-0.31429433822631836,
-0.5343561172485352,
0.5225762128829956,
0.5272760987281799,
-0.49588873982429504,
-1.077012538909912,
-0.7766409516334534,
0.247133150696754... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Jane016/whisper2 | Jane016 | 2023-04-04T04:08:00Z | 17 | 0 | null | [
"region:us"
] | 2023-04-04T04:08:00Z | 2023-04-03T15:25:16.000Z | 2023-04-03T15:25:16 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622264862060547,
0.43461528420448303,
-0.52829909324646,
0.7012971639633179,
0.7915720343589783,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104477167129517,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
mstz/arcene | mstz | 2023-04-17T08:46:30Z | 17 | 0 | null | [
"task_categories:tabular-classification",
"size_categories:n<1K",
"language:en",
"arcene",
"tabular_classification",
"binary_classification",
"UCI",
"region:us"
] | 2023-04-17T08:46:30Z | 2023-04-17T08:36:34.000Z | 2023-04-17T08:36:34 | ---
language:
- en
tags:
- arcene
- tabular_classification
- binary_classification
- UCI
pretty_name: Arcene
size_categories:
- n<1K
task_categories: # Full list at https://github.com/huggingface/hub-docs/blob/main/js/src/lib/interfaces/Types.ts
- tabular-classification
configs:
- arcene
---
# Arcene
The [Arcene dataset](https://archive-beta.ics.uci.edu/dataset/167/arcene) from the [UCI repository](https://archive-beta.ics.uci.edu/).
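A minimal loading sketch (the `arcene` config name comes from the `configs` entry in the frontmatter above):
```python
from datasets import load_dataset

# Config name "arcene" is taken from the card's frontmatter.
ds = load_dataset("mstz/arcene", "arcene")
print(ds)
```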
| [
-0.4735133647918701,
-0.04363301396369934,
0.3537457585334778,
0.057042237371206284,
0.23310486972332,
0.054270144551992416,
0.29461896419525146,
-0.07905701547861099,
0.5713917016983032,
0.9592052698135376,
-0.6582970023155212,
-0.6669056415557861,
-0.29162725806236267,
-0.127253144979476... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
jordyvl/rvl_cdip_easyocr | jordyvl | 2023-10-20T18:43:34Z | 17 | 0 | rvl-cdip | [
"task_categories:image-classification",
"task_ids:multi-class-image-classification",
"annotations_creators:found",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:extended|iit_cdip",
"language:en",
"license:other",
"arxiv:1502.07058",
"regi... | 2023-10-20T18:43:34Z | 2023-04-19T10:51:31.000Z | 2023-04-19T10:51:31 | ---
annotations_creators:
- found
language_creators:
- found
language:
- en
license:
- other
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
source_datasets:
- extended|iit_cdip
task_categories:
- image-classification
task_ids:
- multi-class-image-classification
paperswithcode_id: rvl-cdip
pretty_name: RVL-CDIP-EasyOCR
dataset_info:
features:
- name: id
dtype: string
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': letter
'1': form
'2': email
'3': handwritten
'4': advertisement
'5': scientific report
'6': scientific publication
'7': specification
'8': file folder
'9': news article
'10': budget
'11': invoice
'12': presentation
'13': questionnaire
'14': resume
'15': memo
- name: words
sequence: string
- name: boxes
sequence:
sequence: int32
---
# Dataset Card for RVL-CDIP
## Extension
The data loader supports loading EasyOCR output files together with the images.
The OCR data is not included under `../data`; it is available upon request via email (<firstname@contract.fit>).
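A minimal loading sketch, assuming the OCR files have been obtained as described above (the split name and field access are illustrative):
```python
from datasets import load_dataset

# Requires the EasyOCR files described above; the "test" split name is illustrative.
ds = load_dataset("jordyvl/rvl_cdip_easyocr", split="test")
sample = ds[0]
print(sample["label"], sample["words"][:5], sample["boxes"][:5])
```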
## Table of Contents
- [Dataset Card for RVL-CDIP](#dataset-card-for-rvl-cdip)
- [Extension](#extension)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
- [Who are the source language producers?](#who-are-the-source-language-producers)
- [Annotations](#annotations)
- [Annotation process](#annotation-process)
- [Who are the annotators?](#who-are-the-annotators)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [The RVL-CDIP Dataset](https://www.cs.cmu.edu/~aharley/rvl-cdip/)
- **Repository:**
- **Paper:** [Evaluation of Deep Convolutional Nets for Document Image Classification and Retrieval](https://arxiv.org/abs/1502.07058)
- **Leaderboard:** [RVL-CDIP leaderboard](https://paperswithcode.com/dataset/rvl-cdip)
- **Point of Contact:** [Adam W. Harley](mailto:aharley@cmu.edu)
### Dataset Summary
The RVL-CDIP (Ryerson Vision Lab Complex Document Information Processing) dataset consists of 400,000 grayscale images in 16 classes, with 25,000 images per class. There are 320,000 training images, 40,000 validation images, and 40,000 test images. The images are sized so their largest dimension does not exceed 1000 pixels.
### Supported Tasks and Leaderboards
- `image-classification`: The goal of this task is to classify a given document into one of 16 classes representing document types (letter, form, etc.). The leaderboard for this task is available [here](https://paperswithcode.com/sota/document-image-classification-on-rvl-cdip).
### Languages
All the classes and documents use English as their primary language.
## Dataset Structure
### Data Instances
A sample from the training set is provided below :
```
{
'image': <PIL.TiffImagePlugin.TiffImageFile image mode=L size=754x1000 at 0x7F9A5E92CA90>,
'label': 15
}
```
### Data Fields
- `image`: A `PIL.Image.Image` object containing a document.
- `label`: an `int` classification label.
<details>
<summary>Class Label Mappings</summary>
```json
{
"0": "letter",
"1": "form",
"2": "email",
"3": "handwritten",
"4": "advertisement",
"5": "scientific report",
"6": "scientific publication",
"7": "specification",
"8": "file folder",
"9": "news article",
"10": "budget",
"11": "invoice",
"12": "presentation",
"13": "questionnaire",
"14": "resume",
"15": "memo"
}
```
</details>
### Data Splits
| |train|test|validation|
|----------|----:|----:|---------:|
|# of examples|320000|40000|40000|
The dataset was split in proportions similar to those of ImageNet.
- 320000 images were used for training,
- 40000 images for validation, and
- 40000 images for testing.
## Dataset Creation
### Curation Rationale
From the paper:
> This work makes available a new labelled subset of the IIT-CDIP collection, containing 400,000
document images across 16 categories, useful for training new CNNs for document analysis.
### Source Data
#### Initial Data Collection and Normalization
The same as in the IIT-CDIP collection.
#### Who are the source language producers?
The same as in the IIT-CDIP collection.
### Annotations
#### Annotation process
The same as in the IIT-CDIP collection.
#### Who are the annotators?
The same as in the IIT-CDIP collection.
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
The dataset was curated by the authors - Adam W. Harley, Alex Ufkes, and Konstantinos G. Derpanis.
### Licensing Information
RVL-CDIP is a subset of IIT-CDIP, which came from the [Legacy Tobacco Document Library](https://www.industrydocuments.ucsf.edu/tobacco/), for which license information can be found [here](https://www.industrydocuments.ucsf.edu/help/copyright/).
### Citation Information
```bibtex
@inproceedings{harley2015icdar,
title = {Evaluation of Deep Convolutional Nets for Document Image Classification and Retrieval},
author = {Adam W Harley and Alex Ufkes and Konstantinos G Derpanis},
booktitle = {International Conference on Document Analysis and Recognition ({ICDAR})},
year = {2015}
}
```
### Contributions
Thanks to [@dnaveenr](https://github.com/dnaveenr) for adding this dataset. | [
-0.4858223497867584,
-0.3182796239852905,
0.04235317185521126,
-0.011790435761213303,
-0.10234390199184418,
0.05878404900431633,
-0.40183714032173157,
-0.5287516117095947,
-0.19495846331119537,
0.4855820834636688,
-0.31231412291526794,
-0.838367223739624,
-0.9007103443145752,
0.13075827062... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
CM/codexglue_code2text_go | CM | 2023-04-22T01:51:07Z | 17 | 0 | null | [
"region:us"
] | 2023-04-22T01:51:07Z | 2023-04-22T01:50:51.000Z | 2023-04-22T01:50:51 | ---
dataset_info:
features:
- name: id
dtype: int32
- name: repo
dtype: string
- name: path
dtype: string
- name: func_name
dtype: string
- name: original_string
dtype: string
- name: language
dtype: string
- name: code
dtype: string
- name: code_tokens
sequence: string
- name: docstring
dtype: string
- name: docstring_tokens
sequence: string
- name: sha
dtype: string
- name: url
dtype: string
splits:
- name: train
num_bytes: 342243143
num_examples: 167288
- name: validation
num_bytes: 13721860
num_examples: 7325
- name: test
num_bytes: 16328406
num_examples: 8122
download_size: 121340474
dataset_size: 372293409
---
# Dataset Card for "codexglue_code2text_go"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.291058212518692,
-0.17607247829437256,
0.23682616651058197,
0.34206777811050415,
-0.14693568646907806,
0.013320467434823513,
-0.09364547580480576,
-0.26093560457229614,
0.6065089106559753,
0.7163185477256775,
-0.775477409362793,
-0.9051144123077393,
-0.556359052658081,
-0.43220123648643... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
bjoernp/tagesschau-2018-2023 | bjoernp | 2023-04-27T09:04:08Z | 17 | 5 | null | [
"size_categories:10K<n<100K",
"language:de",
"region:us"
] | 2023-04-27T09:04:08Z | 2023-04-27T07:49:50.000Z | 2023-04-27T07:49:50 | ---
dataset_info:
features:
- name: date
dtype: string
- name: headline
dtype: string
- name: short_headline
dtype: string
- name: short_text
dtype: string
- name: article
dtype: string
- name: link
dtype: string
splits:
- name: train
num_bytes: 107545823
num_examples: 21847
download_size: 63956047
dataset_size: 107545823
language:
- de
size_categories:
- 10K<n<100K
---
# Tagesschau Archive Article Dataset
A scrape of Tagesschau.de articles from 01.01.2018 to 26.04.2023. Find all source code in [github.com/bjoernpl/tagesschau](https://github.com/bjoernpl/tagesschau).
## Dataset Information
CSV structure:
| Field | Description |
| --- | --- |
| `date` | Date of the article |
| `headline` | Title of the article |
| `short_headline` | A short headline / Context |
| `short_text` | A brief summary of the article |
| `article` | The full text of the article |
| `link` | The link to the article on tagesschau.de |
Size:
The final dataset (2018-today) contains 225202 articles from 1942 days. Of these articles, only
21848 are unique (Tagesschau often keeps articles in circulation for ~1 month). The total download
size is ~65 MB.
Cleaning:
- Duplicate articles are removed
- Articles with empty text are removed
- Articles with empty short_texts are removed
- Articles, headlines and short_headlines are stripped of leading and trailing whitespace
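A rough pandas sketch of these steps (the file name and exact column handling are assumptions):
```python
import pandas as pd

df = pd.read_csv("tagesschau_articles.csv")  # hypothetical file name

# Strip leading/trailing whitespace from the text fields.
for col in ["article", "headline", "short_headline"]:
    df[col] = df[col].str.strip()

# Drop articles with empty text or empty summaries.
df = df[df["article"].str.len() > 0]
df = df[df["short_text"].str.len() > 0]

# Remove duplicate articles (Tagesschau recirculates articles for ~1 month).
df = df.drop_duplicates(subset=["article"])
```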
More details in [`clean.py`](https://github.com/bjoernpl/tagesschau/blob/main/clean.py). | [
-0.34876182675361633,
-0.47414055466651917,
0.20429441332817078,
0.39966267347335815,
-0.5119518041610718,
-0.10446599125862122,
0.0053046285174787045,
-0.4877557158470154,
0.7383233904838562,
0.2448204904794693,
-0.690104603767395,
-0.5679154992103577,
-0.22963404655456543,
0.288866102695... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
alxfgh/PubChem10M_SELFIES_Tokenized | alxfgh | 2023-04-30T23:59:13Z | 17 | 0 | null | [
"size_categories:1M<n<10M",
"source_datasets:PubChem10M",
"chemistry",
"molecules",
"selfies",
"smiles",
"region:us"
] | 2023-04-30T23:59:13Z | 2023-04-30T20:51:17.000Z | 2023-04-30T20:51:17 | ---
pretty_name: PubChem10M_Selfies_Tokenized
size_categories:
- 1M<n<10M
source_datasets:
- PubChem10M
tags:
- chemistry
- molecules
- selfies
- smiles
---
<a href="https://github.com/alxfgh/LLM-Guided-GA/blob/main/SELFIES%20Tokenizer.ipynb">Custom cl100k</a> tokenized version of <a href="https://huggingface.co/datasets/alxfgh/PubChem10M_SELFIES">PubChem10M_SELFIES</a>. | [
-0.6763701438903809,
-0.4376947581768036,
0.4249155819416046,
0.3880729079246521,
-0.4506915509700775,
0.1108795627951622,
0.19084089994430542,
-0.28529584407806396,
1.208423376083374,
0.4021461009979248,
-0.9991332292556763,
-1.0397124290466309,
-0.4257339835166931,
0.05240102484822273,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
0x22almostEvil/tatoeba-mt-llama-only | 0x22almostEvil | 2023-05-10T09:14:37Z | 17 | 0 | null | [
"task_categories:translation",
"size_categories:1M<n<10M",
"language:en",
"language:ru",
"language:de",
"language:uk",
"language:sv",
"language:sr",
"language:sl",
"language:ro",
"language:pt",
"language:pl",
"language:nl",
"language:it",
"language:hu",
"language:hr",
"language:fr",
... | 2023-05-10T09:14:37Z | 2023-05-08T15:42:22.000Z | 2023-05-08T15:42:22 | ---
license: cc-by-2.0
task_categories:
- translation
language:
- en
- ru
- de
- uk
- sv
- sr
- sl
- ro
- pt
- pl
- nl
- it
- hu
- hr
- fr
- es
- da
- cs
- ca
- bg
tags:
- tatoeba
- Translation
pretty_name: tatoeba-mt-llama-only
size_categories:
- 1M<n<10M
---
# Dataset Card for multilingual Tatoeba translations with ~3M entries (LLaMA-supported languages only)
### Dataset Summary
~3M entries. A more user-friendly version of the original dataset that combines all entries into a single file (LLaMA-supported languages only):
https://huggingface.co/datasets/Helsinki-NLP/tatoeba_mt | [
-0.45118364691734314,
-0.23263545334339142,
0.22947771847248077,
0.7190463542938232,
-0.9199318289756775,
0.13315452635288239,
-0.24983030557632446,
-0.6973377466201782,
0.8794448971748352,
0.7457180023193359,
-0.4165729582309723,
-0.8978345394134521,
-0.7013809680938721,
0.528337240219116... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
h2oai/h2ogpt-oig-oasst1-instruct-cleaned-v3 | h2oai | 2023-05-09T04:58:54Z | 17 | 6 | null | [
"language:en",
"license:apache-2.0",
"gpt",
"llm",
"large language model",
"open-source",
"region:us"
] | 2023-05-09T04:58:54Z | 2023-05-09T03:08:38.000Z | 2023-05-09T03:08:38 | ---
license: apache-2.0
language:
- en
thumbnail: https://h2o.ai/etc.clientlibs/h2o/clientlibs/clientlib-site/resources/images/favicon.ico
tags:
- gpt
- llm
- large language model
- open-source
---
# h2oGPT Data Card
## Summary
H2O.ai's `h2ogpt-oig-oasst1-instruct-cleaned-v3` is an open-source instruct-type dataset for fine-tuning of large language models, licensed for commercial use.
- Number of rows: `269406`
- Number of columns: `4`
- Column names: `['input', 'source', 'prompt_type', 'id']`
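A minimal loading sketch (the `train` split name is assumed):
```python
from datasets import load_dataset

ds = load_dataset("h2oai/h2ogpt-oig-oasst1-instruct-cleaned-v3", split="train")
print(len(ds))          # expected: 269406
print(ds.column_names)  # expected: ['input', 'source', 'prompt_type', 'id']
```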
## Source
- [Original LAION OIG Dataset](https://github.com/LAION-AI/Open-Instruction-Generalist)
- [LAION OIG data detoxed and filtered down by scripts in h2oGPT repository](https://github.com/h2oai/h2ogpt/blob/main/FINETUNE.md#high-quality-oig-based-instruct-data)
- [Original Open Assistant data in tree structure](https://huggingface.co/datasets/OpenAssistant/oasst1)
- [This flattened dataset created by script in h2oGPT repository](https://github.com/h2oai/h2ogpt/blob/6728938a262d3eb5e8db1f252bbcd7de838da452/create_data.py#L1415)
| [
-0.17223839461803436,
-0.6267262101173401,
0.14749519526958466,
-0.20185191929340363,
-0.11061247438192368,
-0.16707922518253326,
0.03510070592164993,
-0.3044261634349823,
-0.1647903174161911,
0.4887266457080841,
-0.27227306365966797,
-0.7049868702888489,
-0.2250065952539444,
-0.2188464105... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
diffusers-parti-prompts/karlo-v1 | diffusers-parti-prompts | 2023-05-17T16:49:02Z | 17 | 0 | null | [
"region:us"
] | 2023-05-17T16:49:02Z | 2023-05-14T22:06:00.000Z | 2023-05-14T22:06:00 | ---
dataset_info:
features:
- name: Prompt
dtype: string
- name: Category
dtype: string
- name: Challenge
dtype: string
- name: Note
dtype: string
- name: images
dtype: image
- name: model_name
dtype: string
- name: seed
dtype: int64
splits:
- name: train
num_bytes: 161180147.0
num_examples: 1632
download_size: 161038543
dataset_size: 161180147.0
---
# Images of Parti Prompts for "karlo-v1"
Code that was used to get the results:
```py
from diffusers import DiffusionPipeline
import torch

# Load the Karlo v1 alpha unCLIP pipeline in half precision.
pipe = DiffusionPipeline.from_pretrained("kakaobrain/karlo-v1-alpha", torch_dtype=torch.float16)
pipe.to("cuda")

prompt = ""  # a parti prompt
# Fixed seed so the generated images are reproducible.
generator = torch.Generator("cuda").manual_seed(0)
image = pipe(prompt, prior_num_inference_steps=50, decoder_num_inference_steps=100, generator=generator).images[0]
``` | [
-0.32521551847457886,
-0.43539899587631226,
0.7413290739059448,
0.2723459005355835,
-0.7142024636268616,
-0.37114399671554565,
0.30617430806159973,
0.17740267515182495,
0.18477551639080048,
0.2772114872932434,
-0.7561108469963074,
-0.6977056264877319,
-0.8114995360374451,
0.301128149032592... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
ztphs980/taptap_datasets | ztphs980 | 2023-05-23T12:32:37Z | 17 | 2 | null | [
"language:en",
"license:mit",
"arxiv:2305.09696",
"region:us"
] | 2023-05-23T12:32:37Z | 2023-05-20T14:34:39.000Z | 2023-05-20T14:34:39 | ---
license: mit
language:
- en
---
This repository contains a total of 483 tabular datasets with meaningful column names collected from OpenML, UCI, and Kaggle platforms. The last column of each dataset is the label column. For more details, please refer to our paper https://arxiv.org/abs/2305.09696.
You can use the [code](https://github.com/ZhangTP1996/TapTap/blob/master/load_pretraining_datasets.py) to load all the datasets into a dictionary of pd.DataFrame.
An example script can be found below:
```python
from datasets import load_dataset
import pandas as pd
import numpy as np

data = {}
dataset = load_dataset(path='ztphs980/taptap_datasets')
dataset = dataset['train'].to_dict()
# Each table is stored as a stringified dict; rebuild it as a pd.DataFrame.
for table_name, table in zip(dataset['dataset_name'], dataset['table']):
    table = pd.DataFrame.from_dict(eval(table, {'nan': np.nan}))
    data[table_name] = table
``` | [
-0.5299504399299622,
-0.04457170143723488,
0.23418140411376953,
0.2204159051179886,
0.009127071127295494,
-0.10819374769926071,
-0.19338124990463257,
0.19266709685325623,
0.26525765657424927,
0.7184552550315857,
-0.16258305311203003,
-0.8443765640258789,
-0.23777787387371063,
0.33598572015... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
juletxara/xnli_mt | juletxara | 2023-07-21T10:21:37Z | 17 | 0 | xnli | [
"language:en",
"region:us"
] | 2023-07-21T10:21:37Z | 2023-05-23T11:00:18.000Z | 2023-05-23T11:00:18 | ---
language:
- en
paperswithcode_id: xnli
pretty_name: Cross-lingual Natural Language Inference
dataset_info:
- config_name: nllb-200-distilled-600M
features:
- name: premise
dtype: string
- name: hypothesis
dtype: string
- name: label
dtype:
class_label:
names:
'0': entailment
'1': neutral
'2': contradiction
splits:
- name: ar
num_bytes: 851225
num_examples: 5010
- name: bg
num_bytes: 860275
num_examples: 5010
- name: de
num_bytes: 852016
num_examples: 5010
- name: el
num_bytes: 852043
num_examples: 5010
- name: es
num_bytes: 862194
num_examples: 5010
- name: fr
num_bytes: 861464
num_examples: 5010
- name: hi
num_bytes: 839337
num_examples: 5010
- name: ru
num_bytes: 860117
num_examples: 5010
- name: sw
num_bytes: 829257
num_examples: 5010
- name: th
num_bytes: 845834
num_examples: 5010
- name: tr
num_bytes: 840611
num_examples: 5010
- name: ur
num_bytes: 829009
num_examples: 5010
- name: vi
num_bytes: 846643
num_examples: 5010
- name: zh
num_bytes: 851646
num_examples: 5010
download_size: 11040341
dataset_size: 11881671
- config_name: nllb-200-distilled-1.3B
features:
- name: premise
dtype: string
- name: hypothesis
dtype: string
- name: label
dtype:
class_label:
names:
'0': entailment
'1': neutral
'2': contradiction
splits:
- name: ar
num_bytes: 851205
num_examples: 5010
- name: bg
num_bytes: 857938
num_examples: 5010
- name: de
num_bytes: 849800
num_examples: 5010
- name: el
num_bytes: 849820
num_examples: 5010
- name: es
num_bytes: 860984
num_examples: 5010
- name: fr
num_bytes: 862545
num_examples: 5010
- name: hi
num_bytes: 848151
num_examples: 5010
- name: ru
num_bytes: 858069
num_examples: 5010
- name: sw
num_bytes: 830347
num_examples: 5010
- name: th
num_bytes: 841814
num_examples: 5010
- name: tr
num_bytes: 840738
num_examples: 5010
- name: ur
num_bytes: 828996
num_examples: 5010
- name: vi
num_bytes: 848990
num_examples: 5010
- name: zh
num_bytes: 855461
num_examples: 5010
download_size: 11043528
dataset_size: 11884858
- config_name: nllb-200-1.3B
features:
- name: premise
dtype: string
- name: hypothesis
dtype: string
- name: label
dtype:
class_label:
names:
'0': entailment
'1': neutral
'2': contradiction
splits:
- name: ar
num_bytes: 855256
num_examples: 5010
- name: bg
num_bytes: 861195
num_examples: 5010
- name: de
num_bytes: 854679
num_examples: 5010
- name: el
num_bytes: 852766
num_examples: 5010
- name: es
num_bytes: 863689
num_examples: 5010
- name: fr
num_bytes: 868360
num_examples: 5010
- name: hi
num_bytes: 846414
num_examples: 5010
- name: ru
num_bytes: 865308
num_examples: 5010
- name: sw
num_bytes: 830998
num_examples: 5010
- name: th
num_bytes: 846171
num_examples: 5010
- name: tr
num_bytes: 845907
num_examples: 5010
- name: ur
num_bytes: 838279
num_examples: 5010
- name: vi
num_bytes: 848249
num_examples: 5010
- name: zh
num_bytes: 846116
num_examples: 5010
download_size: 11082057
dataset_size: 11923387
- config_name: nllb-200-3.3B
features:
- name: premise
dtype: string
- name: hypothesis
dtype: string
- name: label
dtype:
class_label:
names:
'0': entailment
'1': neutral
'2': contradiction
splits:
- name: ar
num_bytes: 863302
num_examples: 5010
- name: bg
num_bytes: 863677
num_examples: 5010
- name: de
num_bytes: 857147
num_examples: 5010
- name: el
num_bytes: 856383
num_examples: 5010
- name: es
num_bytes: 866137
num_examples: 5010
- name: fr
num_bytes: 871853
num_examples: 5010
- name: hi
num_bytes: 857305
num_examples: 5010
- name: ru
num_bytes: 869523
num_examples: 5010
- name: sw
num_bytes: 839567
num_examples: 5010
- name: th
num_bytes: 850312
num_examples: 5010
- name: tr
num_bytes: 851657
num_examples: 5010
- name: ur
num_bytes: 832903
num_examples: 5010
- name: vi
num_bytes: 856479
num_examples: 5010
- name: zh
num_bytes: 853093
num_examples: 5010
download_size: 11148008
dataset_size: 11989338
- config_name: xglm-564M
features:
- name: premise
dtype: string
- name: hypothesis
dtype: string
- name: label
dtype:
class_label:
names:
'0': entailment
'1': neutral
'2': contradiction
splits:
- name: ar
num_bytes: 789329
num_examples: 5010
- name: bg
num_bytes: 846003
num_examples: 5010
- name: de
num_bytes: 781577
num_examples: 5010
- name: el
num_bytes: 1069000
num_examples: 5010
- name: es
num_bytes: 852488
num_examples: 5010
- name: fr
num_bytes: 860951
num_examples: 5010
- name: hi
num_bytes: 849698
num_examples: 5010
- name: ru
num_bytes: 898706
num_examples: 5010
- name: sw
num_bytes: 842743
num_examples: 5010
- name: th
num_bytes: 1098847
num_examples: 5010
- name: tr
num_bytes: 788523
num_examples: 5010
- name: ur
num_bytes: 786383
num_examples: 5010
- name: vi
num_bytes: 827304
num_examples: 5010
- name: zh
num_bytes: 1083312
num_examples: 5010
download_size: 11533534
dataset_size: 12374864
- config_name: xglm-1.7B
features:
- name: premise
dtype: string
- name: hypothesis
dtype: string
- name: label
dtype:
class_label:
names:
'0': entailment
'1': neutral
'2': contradiction
splits:
- name: ar
num_bytes: 788487
num_examples: 5010
- name: bg
num_bytes: 863627
num_examples: 5010
- name: de
num_bytes: 824591
num_examples: 5010
- name: el
num_bytes: 870729
num_examples: 5010
- name: es
num_bytes: 856025
num_examples: 5010
- name: fr
num_bytes: 877381
num_examples: 5010
- name: hi
num_bytes: 973947
num_examples: 5010
- name: ru
num_bytes: 840252
num_examples: 5010
- name: sw
num_bytes: 784472
num_examples: 5010
- name: th
num_bytes: 821323
num_examples: 5010
- name: tr
num_bytes: 747863
num_examples: 5010
- name: ur
num_bytes: 855280
num_examples: 5010
- name: vi
num_bytes: 807745
num_examples: 5010
- name: zh
num_bytes: 801384
num_examples: 5010
download_size: 10871776
dataset_size: 11713106
- config_name: xglm-2.9B
features:
- name: premise
dtype: string
- name: hypothesis
dtype: string
- name: label
dtype:
class_label:
names:
'0': entailment
'1': neutral
'2': contradiction
splits:
- name: ar
num_bytes: 791983
num_examples: 5010
- name: bg
num_bytes: 856898
num_examples: 5010
- name: de
num_bytes: 833316
num_examples: 5010
- name: el
num_bytes: 859152
num_examples: 5010
- name: es
num_bytes: 875232
num_examples: 5010
- name: fr
num_bytes: 880335
num_examples: 5010
- name: hi
num_bytes: 754460
num_examples: 5010
- name: ru
num_bytes: 839486
num_examples: 5010
- name: sw
num_bytes: 807832
num_examples: 5010
- name: th
num_bytes: 792237
num_examples: 5010
- name: tr
num_bytes: 744151
num_examples: 5010
- name: ur
num_bytes: 763715
num_examples: 5010
- name: vi
num_bytes: 825575
num_examples: 5010
- name: zh
num_bytes: 803580
num_examples: 5010
download_size: 10586622
dataset_size: 11427952
- config_name: xglm-4.5B
features:
- name: premise
dtype: string
- name: hypothesis
dtype: string
- name: label
dtype:
class_label:
names:
'0': entailment
'1': neutral
'2': contradiction
splits:
- name: ar
num_bytes: 825461
num_examples: 5010
- name: bg
num_bytes: 861124
num_examples: 5010
- name: de
num_bytes: 847007
num_examples: 5010
- name: el
num_bytes: 875762
num_examples: 5010
- name: es
num_bytes: 871840
num_examples: 5010
- name: fr
num_bytes: 882720
num_examples: 5010
- name: hi
num_bytes: 826770
num_examples: 5010
- name: ru
num_bytes: 865706
num_examples: 5010
- name: sw
num_bytes: 807688
num_examples: 5010
- name: th
num_bytes: 827077
num_examples: 5010
- name: tr
num_bytes: 836039
num_examples: 5010
- name: ur
num_bytes: 799881
num_examples: 5010
- name: vi
num_bytes: 846648
num_examples: 5010
- name: zh
num_bytes: 836279
num_examples: 5010
download_size: 10968672
dataset_size: 11810002
- config_name: xglm-7.5B
features:
- name: premise
dtype: string
- name: hypothesis
dtype: string
- name: label
dtype:
class_label:
names:
'0': entailment
'1': neutral
'2': contradiction
splits:
- name: ar
num_bytes: 818748
num_examples: 5010
- name: bg
num_bytes: 853616
num_examples: 5010
- name: de
num_bytes: 833462
num_examples: 5010
- name: el
num_bytes: 860997
num_examples: 5010
- name: es
num_bytes: 855814
num_examples: 5010
- name: fr
num_bytes: 859597
num_examples: 5010
- name: hi
num_bytes: 788540
num_examples: 5010
- name: ru
num_bytes: 846308
num_examples: 5010
- name: sw
num_bytes: 813638
num_examples: 5010
- name: th
num_bytes: 793438
num_examples: 5010
- name: tr
num_bytes: 753138
num_examples: 5010
- name: ur
num_bytes: 811513
num_examples: 5010
- name: vi
num_bytes: 829040
num_examples: 5010
- name: zh
num_bytes: 823480
num_examples: 5010
download_size: 10699999
dataset_size: 11541329
- config_name: bloom-560m
features:
- name: premise
dtype: string
- name: hypothesis
dtype: string
- name: label
dtype:
class_label:
names:
'0': entailment
'1': neutral
'2': contradiction
splits:
- name: ar
num_bytes: 793192
num_examples: 5010
- name: bg
num_bytes: 1293032
num_examples: 5026
- name: de
num_bytes: 853267
num_examples: 5011
- name: el
num_bytes: 853650
num_examples: 5028
- name: es
num_bytes: 790401
num_examples: 5019
- name: fr
num_bytes: 785706
num_examples: 5022
- name: hi
num_bytes: 815413
num_examples: 5020
- name: ru
num_bytes: 1119100
num_examples: 5035
- name: sw
num_bytes: 1283629
num_examples: 5010
- name: th
num_bytes: 1927388
num_examples: 5010
- name: tr
num_bytes: 1136397
num_examples: 5010
- name: ur
num_bytes: 806534
num_examples: 5050
- name: vi
num_bytes: 810195
num_examples: 5033
- name: zh
num_bytes: 895087
num_examples: 5013
download_size: 13312268
dataset_size: 14162991
- config_name: bloom-1b1
features:
- name: premise
dtype: string
- name: hypothesis
dtype: string
- name: label
dtype:
class_label:
names:
'0': entailment
'1': neutral
'2': contradiction
splits:
- name: ar
num_bytes: 772035
num_examples: 5010
- name: bg
num_bytes: 838287
num_examples: 5010
- name: de
num_bytes: 816688
num_examples: 5010
- name: el
num_bytes: 757902
num_examples: 5010
- name: es
num_bytes: 811192
num_examples: 5010
- name: fr
num_bytes: 823552
num_examples: 5010
- name: hi
num_bytes: 755051
num_examples: 5010
- name: ru
num_bytes: 802154
num_examples: 5010
- name: sw
num_bytes: 769220
num_examples: 5010
- name: th
num_bytes: 855265
num_examples: 5010
- name: tr
num_bytes: 1009235
num_examples: 5010
- name: ur
num_bytes: 784984
num_examples: 5010
- name: vi
num_bytes: 798443
num_examples: 5010
- name: zh
num_bytes: 795561
num_examples: 5010
download_size: 10548239
dataset_size: 11389569
- config_name: bloom-1b7
features:
- name: premise
dtype: string
- name: hypothesis
dtype: string
- name: label
dtype:
class_label:
names:
'0': entailment
'1': neutral
'2': contradiction
splits:
- name: ar
num_bytes: 817013
num_examples: 5010
- name: bg
num_bytes: 803575
num_examples: 5010
- name: de
num_bytes: 811977
num_examples: 5010
- name: el
num_bytes: 768757
num_examples: 5010
- name: es
num_bytes: 834218
num_examples: 5010
- name: fr
num_bytes: 844544
num_examples: 5010
- name: hi
num_bytes: 780516
num_examples: 5010
- name: ru
num_bytes: 856927
num_examples: 5010
- name: sw
num_bytes: 745814
num_examples: 5010
- name: th
num_bytes: 930774
num_examples: 5010
- name: tr
num_bytes: 871417
num_examples: 5010
- name: ur
num_bytes: 751069
num_examples: 5010
- name: vi
num_bytes: 814194
num_examples: 5010
- name: zh
num_bytes: 790631
num_examples: 5010
download_size: 10580096
dataset_size: 11421426
- config_name: bloom-3b
features:
- name: premise
dtype: string
- name: hypothesis
dtype: string
- name: label
dtype:
class_label:
names:
'0': entailment
'1': neutral
'2': contradiction
splits:
- name: ar
num_bytes: 819238
num_examples: 5010
- name: bg
num_bytes: 822686
num_examples: 5010
- name: de
num_bytes: 850318
num_examples: 5010
- name: el
num_bytes: 809037
num_examples: 5010
- name: es
num_bytes: 850349
num_examples: 5010
- name: fr
num_bytes: 855581
num_examples: 5010
- name: hi
num_bytes: 797905
num_examples: 5010
- name: ru
num_bytes: 861096
num_examples: 5010
- name: sw
num_bytes: 767209
num_examples: 5010
- name: th
num_bytes: 820321
num_examples: 5010
- name: tr
num_bytes: 881668
num_examples: 5010
- name: ur
num_bytes: 810843
num_examples: 5010
- name: vi
num_bytes: 828926
num_examples: 5010
- name: zh
num_bytes: 793476
num_examples: 5010
download_size: 10727323
dataset_size: 11568653
- config_name: bloom-7b1
features:
- name: premise
dtype: string
- name: hypothesis
dtype: string
- name: label
dtype:
class_label:
names:
'0': entailment
'1': neutral
'2': contradiction
splits:
- name: ar
num_bytes: 834767
num_examples: 5010
- name: bg
num_bytes: 848921
num_examples: 5010
- name: de
num_bytes: 827646
num_examples: 5010
- name: el
num_bytes: 886001
num_examples: 5010
- name: es
num_bytes: 859775
num_examples: 5010
- name: fr
num_bytes: 863548
num_examples: 5010
- name: hi
num_bytes: 814484
num_examples: 5010
- name: ru
num_bytes: 860392
num_examples: 5010
- name: sw
num_bytes: 811380
num_examples: 5010
- name: th
num_bytes: 775738
num_examples: 5010
- name: tr
num_bytes: 747961
num_examples: 5010
- name: ur
num_bytes: 836727
num_examples: 5010
- name: vi
num_bytes: 836042
num_examples: 5010
- name: zh
num_bytes: 814866
num_examples: 5010
download_size: 10776918
dataset_size: 11618248
- config_name: llama-7B
features:
- name: premise
dtype: string
- name: hypothesis
dtype: string
- name: label
dtype:
class_label:
names:
'0': entailment
'1': neutral
'2': contradiction
splits:
- name: ar
num_bytes: 792437
num_examples: 5010
- name: bg
num_bytes: 855365
num_examples: 5010
- name: de
num_bytes: 844453
num_examples: 5010
- name: el
num_bytes: 864748
num_examples: 5010
- name: es
num_bytes: 871358
num_examples: 5010
- name: fr
num_bytes: 882671
num_examples: 5010
- name: hi
num_bytes: 791631
num_examples: 5010
- name: ru
num_bytes: 853745
num_examples: 5010
- name: sw
num_bytes: 753655
num_examples: 5010
- name: th
num_bytes: 787365
num_examples: 5010
- name: tr
num_bytes: 814193
num_examples: 5010
- name: ur
num_bytes: 811987
num_examples: 5010
- name: vi
num_bytes: 807334
num_examples: 5010
- name: zh
num_bytes: 841441
num_examples: 5010
download_size: 10731053
dataset_size: 11572383
- config_name: llama-13B
features:
- name: premise
dtype: string
- name: hypothesis
dtype: string
- name: label
dtype:
class_label:
names:
'0': entailment
'1': neutral
'2': contradiction
splits:
- name: ar
num_bytes: 833799
num_examples: 5010
- name: bg
num_bytes: 850755
num_examples: 5010
- name: de
num_bytes: 842498
num_examples: 5010
- name: el
num_bytes: 853859
num_examples: 5010
- name: es
num_bytes: 865884
num_examples: 5010
- name: fr
num_bytes: 872326
num_examples: 5010
- name: hi
num_bytes: 803350
num_examples: 5010
- name: ru
num_bytes: 850066
num_examples: 5010
- name: sw
num_bytes: 785595
num_examples: 5010
- name: th
num_bytes: 794461
num_examples: 5010
- name: tr
num_bytes: 789769
num_examples: 5010
- name: ur
num_bytes: 813459
num_examples: 5010
- name: vi
num_bytes: 783219
num_examples: 5010
- name: zh
num_bytes: 828885
num_examples: 5010
download_size: 10726595
dataset_size: 11567925
- config_name: RedPajama-INCITE-Base-3B-v1
features:
- name: premise
dtype: string
- name: hypothesis
dtype: string
- name: label
dtype:
class_label:
names:
'0': entailment
'1': neutral
'2': contradiction
splits:
- name: ar
num_bytes: 815395
num_examples: 5010
- name: bg
num_bytes: 870568
num_examples: 5010
- name: de
num_bytes: 830593
num_examples: 5010
- name: el
num_bytes: 887938
num_examples: 5010
- name: es
num_bytes: 866523
num_examples: 5010
- name: fr
num_bytes: 880668
num_examples: 5010
- name: hi
num_bytes: 871126
num_examples: 5010
- name: ru
num_bytes: 875379
num_examples: 5010
- name: sw
num_bytes: 775459
num_examples: 5010
- name: th
num_bytes: 829562
num_examples: 5010
- name: tr
num_bytes: 813161
num_examples: 5010
- name: ur
num_bytes: 812296
num_examples: 5010
- name: vi
num_bytes: 824340
num_examples: 5010
- name: zh
num_bytes: 892427
num_examples: 5010
download_size: 11004105
dataset_size: 11845435
- config_name: RedPajama-INCITE-7B-Base
features:
- name: premise
dtype: string
- name: hypothesis
dtype: string
- name: label
dtype:
class_label:
names:
'0': entailment
'1': neutral
'2': contradiction
splits:
- name: ar
num_bytes: 789074
num_examples: 5010
- name: bg
num_bytes: 870916
num_examples: 5010
- name: de
num_bytes: 845436
num_examples: 5010
- name: el
num_bytes: 850780
num_examples: 5010
- name: es
num_bytes: 875677
num_examples: 5010
- name: fr
num_bytes: 880989
num_examples: 5010
- name: hi
num_bytes: 751526
num_examples: 5010
- name: ru
num_bytes: 881090
num_examples: 5010
- name: sw
num_bytes: 746100
num_examples: 5010
- name: th
num_bytes: 685496
num_examples: 5010
- name: tr
num_bytes: 770359
num_examples: 5010
- name: ur
num_bytes: 708810
num_examples: 5010
- name: vi
num_bytes: 735197
num_examples: 5010
- name: zh
num_bytes: 848461
num_examples: 5010
download_size: 10398581
dataset_size: 11239911
- config_name: llama-30B
features:
- name: premise
dtype: string
- name: hypothesis
dtype: string
- name: label
dtype:
class_label:
names:
'0': entailment
'1': neutral
'2': contradiction
splits:
- name: ar
num_bytes: 860301
num_examples: 5010
- name: bg
num_bytes: 863946
num_examples: 5010
- name: de
num_bytes: 858009
num_examples: 5010
- name: el
num_bytes: 874347
num_examples: 5010
- name: es
num_bytes: 875007
num_examples: 5010
- name: fr
num_bytes: 884764
num_examples: 5010
- name: hi
num_bytes: 846950
num_examples: 5010
- name: ru
num_bytes: 869708
num_examples: 5010
- name: sw
num_bytes: 857197
num_examples: 5010
- name: th
num_bytes: 847402
num_examples: 5010
- name: tr
num_bytes: 825879
num_examples: 5010
- name: ur
num_bytes: 860074
num_examples: 5010
- name: vi
num_bytes: 862456
num_examples: 5010
- name: zh
num_bytes: 849263
num_examples: 5010
download_size: 11193973
dataset_size: 12035303
- config_name: open_llama_3b
features:
- name: premise
dtype: string
- name: hypothesis
dtype: string
- name: label
dtype:
class_label:
names:
'0': entailment
'1': neutral
'2': contradiction
splits:
- name: ar
num_bytes: 705142
num_examples: 5010
- name: bg
num_bytes: 875604
num_examples: 5010
- name: de
num_bytes: 851525
num_examples: 5010
- name: el
num_bytes: 739635
num_examples: 5010
- name: es
num_bytes: 866291
num_examples: 5010
- name: fr
num_bytes: 880556
num_examples: 5010
- name: hi
num_bytes: 392659
num_examples: 5010
- name: ru
num_bytes: 876933
num_examples: 5010
- name: sw
num_bytes: 738299
num_examples: 5010
- name: th
num_bytes: 1273724
num_examples: 5010
- name: tr
num_bytes: 769184
num_examples: 5010
- name: ur
num_bytes: 739162
num_examples: 5010
- name: vi
num_bytes: 701661
num_examples: 5010
- name: zh
num_bytes: 878129
num_examples: 5010
download_size: 10447174
dataset_size: 11288504
- config_name: open_llama_7b
features:
- name: premise
dtype: string
- name: hypothesis
dtype: string
- name: label
dtype:
class_label:
names:
'0': entailment
'1': neutral
'2': contradiction
splits:
- name: ar
num_bytes: 765568
num_examples: 5010
- name: bg
num_bytes: 860978
num_examples: 5010
- name: de
num_bytes: 839878
num_examples: 5010
- name: el
num_bytes: 790038
num_examples: 5010
- name: es
num_bytes: 862624
num_examples: 5010
- name: fr
num_bytes: 871243
num_examples: 5010
- name: hi
num_bytes: 328421
num_examples: 5010
- name: ru
num_bytes: 867424
num_examples: 5010
- name: sw
num_bytes: 784318
num_examples: 5010
- name: th
num_bytes: 1133537
num_examples: 5010
- name: tr
num_bytes: 770420
num_examples: 5010
- name: ur
num_bytes: 739842
num_examples: 5010
- name: vi
num_bytes: 767095
num_examples: 5010
- name: zh
num_bytes: 840369
num_examples: 5010
download_size: 10380425
dataset_size: 11221755
- config_name: open_llama_13b
features:
- name: premise
dtype: string
- name: hypothesis
dtype: string
- name: label
dtype:
class_label:
names:
'0': entailment
'1': neutral
'2': contradiction
splits:
- name: ar
num_bytes: 855506
num_examples: 5010
- name: bg
num_bytes: 860868
num_examples: 5010
- name: de
num_bytes: 845896
num_examples: 5010
- name: el
num_bytes: 789495
num_examples: 5010
- name: es
num_bytes: 874595
num_examples: 5010
- name: fr
num_bytes: 883531
num_examples: 5010
- name: hi
num_bytes: 349430
num_examples: 5010
- name: ru
num_bytes: 860441
num_examples: 5010
- name: sw
num_bytes: 819611
num_examples: 5010
- name: th
num_bytes: 1249012
num_examples: 5010
- name: tr
num_bytes: 813974
num_examples: 5010
- name: ur
num_bytes: 775914
num_examples: 5010
- name: vi
num_bytes: 826589
num_examples: 5010
- name: zh
num_bytes: 828483
num_examples: 5010
download_size: 10792015
dataset_size: 11633345
- config_name: xgen-7b-4k-base
features:
- name: premise
dtype: string
- name: hypothesis
dtype: string
- name: label
dtype:
class_label:
names:
'0': entailment
'1': neutral
'2': contradiction
splits:
- name: ar
num_bytes: 815916
num_examples: 5010
- name: bg
num_bytes: 866698
num_examples: 5010
- name: de
num_bytes: 845296
num_examples: 5010
- name: el
num_bytes: 873279
num_examples: 5010
- name: es
num_bytes: 867614
num_examples: 5010
- name: fr
num_bytes: 878177
num_examples: 5010
- name: hi
num_bytes: 795679
num_examples: 5010
- name: ru
num_bytes: 870241
num_examples: 5010
- name: sw
num_bytes: 815925
num_examples: 5010
- name: th
num_bytes: 680865
num_examples: 5010
- name: tr
num_bytes: 808508
num_examples: 5010
- name: ur
num_bytes: 755658
num_examples: 5010
- name: vi
num_bytes: 798616
num_examples: 5010
- name: zh
num_bytes: 839810
num_examples: 5010
download_size: 10670952
dataset_size: 11512282
- config_name: xgen-7b-8k-base
features:
- name: premise
dtype: string
- name: hypothesis
dtype: string
- name: label
dtype:
class_label:
names:
'0': entailment
'1': neutral
'2': contradiction
splits:
- name: ar
num_bytes: 822039
num_examples: 5010
- name: bg
num_bytes: 866105
num_examples: 5010
- name: de
num_bytes: 834487
num_examples: 5010
- name: el
num_bytes: 871714
num_examples: 5010
- name: es
num_bytes: 863765
num_examples: 5010
- name: fr
num_bytes: 874570
num_examples: 5010
- name: hi
num_bytes: 811916
num_examples: 5010
- name: ru
num_bytes: 863980
num_examples: 5010
- name: sw
num_bytes: 801837
num_examples: 5010
- name: th
num_bytes: 773394
num_examples: 5010
- name: tr
num_bytes: 812359
num_examples: 5010
- name: ur
num_bytes: 762615
num_examples: 5010
- name: vi
num_bytes: 845558
num_examples: 5010
- name: zh
num_bytes: 840984
num_examples: 5010
download_size: 10803993
dataset_size: 11645323
- config_name: xgen-7b-8k-inst
features:
- name: premise
dtype: string
- name: hypothesis
dtype: string
- name: label
dtype:
class_label:
names:
'0': entailment
'1': neutral
'2': contradiction
splits:
- name: ar
num_bytes: 852293
num_examples: 5010
- name: bg
num_bytes: 877290
num_examples: 5010
- name: de
num_bytes: 843890
num_examples: 5010
- name: el
num_bytes: 900388
num_examples: 5010
- name: es
num_bytes: 871938
num_examples: 5010
- name: fr
num_bytes: 883776
num_examples: 5010
- name: hi
num_bytes: 819611
num_examples: 5010
- name: ru
num_bytes: 871868
num_examples: 5010
- name: sw
num_bytes: 903297
num_examples: 5010
- name: th
num_bytes: 781456
num_examples: 5010
- name: tr
num_bytes: 888386
num_examples: 5010
- name: ur
num_bytes: 835512
num_examples: 5010
- name: vi
num_bytes: 881933
num_examples: 5010
- name: zh
num_bytes: 886819
num_examples: 5010
download_size: 11257127
dataset_size: 12098457
- config_name: open_llama_7b_v2
features:
- name: premise
dtype: string
- name: hypothesis
dtype: string
- name: label
dtype:
class_label:
names:
'0': entailment
'1': neutral
'2': contradiction
splits:
- name: ar
num_bytes: 799618
num_examples: 5010
- name: bg
num_bytes: 864517
num_examples: 5010
- name: de
num_bytes: 844605
num_examples: 5010
- name: el
num_bytes: 867881
num_examples: 5010
- name: es
num_bytes: 872871
num_examples: 5010
- name: fr
num_bytes: 883623
num_examples: 5010
- name: hi
num_bytes: 821085
num_examples: 5010
- name: ru
num_bytes: 875313
num_examples: 5010
- name: sw
num_bytes: 810855
num_examples: 5010
- name: th
num_bytes: 756931
num_examples: 5010
- name: tr
num_bytes: 832938
num_examples: 5010
- name: ur
num_bytes: 776355
num_examples: 5010
- name: vi
num_bytes: 841205
num_examples: 5010
- name: zh
num_bytes: 836994
num_examples: 5010
download_size: 10843461
dataset_size: 11684791
- config_name: polylm-1.7b
features:
- name: premise
dtype: string
- name: hypothesis
dtype: string
- name: label
dtype:
class_label:
names:
'0': entailment
'1': neutral
'2': contradiction
splits:
- name: ar
num_bytes: 840312
num_examples: 5010
- name: bg
num_bytes: 766907
num_examples: 5010
- name: de
num_bytes: 846775
num_examples: 5010
- name: el
num_bytes: 985392
num_examples: 5010
- name: es
num_bytes: 850661
num_examples: 5010
- name: fr
num_bytes: 872488
num_examples: 5010
- name: hi
num_bytes: 947295
num_examples: 5010
- name: ru
num_bytes: 823812
num_examples: 5010
- name: sw
num_bytes: 639344
num_examples: 5010
- name: th
num_bytes: 873714
num_examples: 5010
- name: tr
num_bytes: 882916
num_examples: 5010
- name: ur
num_bytes: 707398
num_examples: 5010
- name: vi
num_bytes: 837592
num_examples: 5010
- name: zh
num_bytes: 811983
num_examples: 5010
download_size: 10845259
dataset_size: 11686589
- config_name: polylm-13b
features:
- name: premise
dtype: string
- name: hypothesis
dtype: string
- name: label
dtype:
class_label:
names:
'0': entailment
'1': neutral
'2': contradiction
splits:
- name: ar
num_bytes: 856622
num_examples: 5010
- name: bg
num_bytes: 872936
num_examples: 5010
- name: de
num_bytes: 853814
num_examples: 5010
- name: el
num_bytes: 792171
num_examples: 5010
- name: es
num_bytes: 867823
num_examples: 5010
- name: fr
num_bytes: 876800
num_examples: 5010
- name: hi
num_bytes: 825863
num_examples: 5010
- name: ru
num_bytes: 876390
num_examples: 5010
- name: sw
num_bytes: 659651
num_examples: 5010
- name: th
num_bytes: 848574
num_examples: 5010
- name: tr
num_bytes: 801914
num_examples: 5010
- name: ur
num_bytes: 750495
num_examples: 5010
- name: vi
num_bytes: 847699
num_examples: 5010
- name: zh
num_bytes: 823542
num_examples: 5010
download_size: 10712964
dataset_size: 11554294
- config_name: polylm-multialpaca-13b
features:
- name: premise
dtype: string
- name: hypothesis
dtype: string
- name: label
dtype:
class_label:
names:
'0': entailment
'1': neutral
'2': contradiction
splits:
- name: ar
num_bytes: 832229
num_examples: 5010
- name: bg
num_bytes: 873130
num_examples: 5010
- name: de
num_bytes: 846302
num_examples: 5010
- name: el
num_bytes: 846617
num_examples: 5010
- name: es
num_bytes: 861183
num_examples: 5010
- name: fr
num_bytes: 863929
num_examples: 5010
- name: hi
num_bytes: 938018
num_examples: 5010
- name: ru
num_bytes: 866081
num_examples: 5010
- name: sw
num_bytes: 802054
num_examples: 5010
- name: th
num_bytes: 836126
num_examples: 5010
- name: tr
num_bytes: 799768
num_examples: 5010
- name: ur
num_bytes: 909124
num_examples: 5010
- name: vi
num_bytes: 842588
num_examples: 5010
- name: zh
num_bytes: 823529
num_examples: 5010
download_size: 11099348
dataset_size: 11940678
- config_name: open_llama_3b_v2
features:
- name: premise
dtype: string
- name: hypothesis
dtype: string
- name: label
dtype:
class_label:
names:
'0': entailment
'1': neutral
'2': contradiction
splits:
- name: ar
num_bytes: 692849
num_examples: 5010
- name: bg
num_bytes: 852675
num_examples: 5010
- name: de
num_bytes: 835619
num_examples: 5010
- name: el
num_bytes: 834201
num_examples: 5010
- name: es
num_bytes: 873160
num_examples: 5010
- name: fr
num_bytes: 881098
num_examples: 5010
- name: hi
num_bytes: 726395
num_examples: 5010
- name: ru
num_bytes: 853657
num_examples: 5010
- name: sw
num_bytes: 690930
num_examples: 5010
- name: th
num_bytes: 724712
num_examples: 5010
- name: tr
num_bytes: 755625
num_examples: 5010
- name: ur
num_bytes: 753648
num_examples: 5010
- name: vi
num_bytes: 795981
num_examples: 5010
- name: zh
num_bytes: 844200
num_examples: 5010
download_size: 10273420
dataset_size: 11114750
- config_name: Llama-2-7b-hf
features:
- name: premise
dtype: string
- name: hypothesis
dtype: string
- name: label
dtype:
class_label:
names:
'0': entailment
'1': neutral
'2': contradiction
splits:
- name: ar
num_bytes: 833964
num_examples: 5010
- name: bg
num_bytes: 867408
num_examples: 5010
- name: de
num_bytes: 852305
num_examples: 5010
- name: el
num_bytes: 859363
num_examples: 5010
- name: es
num_bytes: 880162
num_examples: 5010
- name: fr
num_bytes: 886400
num_examples: 5010
- name: hi
num_bytes: 802665
num_examples: 5010
- name: ru
num_bytes: 868568
num_examples: 5010
- name: sw
num_bytes: 775118
num_examples: 5010
- name: th
num_bytes: 774722
num_examples: 5010
- name: tr
num_bytes: 810268
num_examples: 5010
- name: ur
num_bytes: 786428
num_examples: 5010
- name: vi
num_bytes: 841904
num_examples: 5010
- name: zh
num_bytes: 837126
num_examples: 5010
download_size: 10835071
dataset_size: 11676401
- config_name: Llama-2-13b-hf
features:
- name: premise
dtype: string
- name: hypothesis
dtype: string
- name: label
dtype:
class_label:
names:
'0': entailment
'1': neutral
'2': contradiction
splits:
- name: ar
num_bytes: 838926
num_examples: 5010
- name: bg
num_bytes: 864619
num_examples: 5010
- name: de
num_bytes: 847106
num_examples: 5010
- name: el
num_bytes: 858400
num_examples: 5010
- name: es
num_bytes: 873274
num_examples: 5010
- name: fr
num_bytes: 878414
num_examples: 5010
- name: hi
num_bytes: 819446
num_examples: 5010
- name: ru
num_bytes: 864307
num_examples: 5010
- name: sw
num_bytes: 821998
num_examples: 5010
- name: th
num_bytes: 812673
num_examples: 5010
- name: tr
num_bytes: 812102
num_examples: 5010
- name: ur
num_bytes: 831111
num_examples: 5010
- name: vi
num_bytes: 838971
num_examples: 5010
- name: zh
num_bytes: 835539
num_examples: 5010
download_size: 10955556
dataset_size: 11796886
- config_name: Llama-2-7b-chat-hf
features:
- name: premise
dtype: string
- name: hypothesis
dtype: string
- name: label
dtype:
class_label:
names:
'0': entailment
'1': neutral
'2': contradiction
splits:
- name: ar
num_bytes: 948578
num_examples: 5010
- name: bg
num_bytes: 776309
num_examples: 5010
- name: de
num_bytes: 725534
num_examples: 5010
- name: el
num_bytes: 956805
num_examples: 5010
- name: es
num_bytes: 631915
num_examples: 5010
- name: fr
num_bytes: 534372
num_examples: 5010
- name: hi
num_bytes: 960220
num_examples: 5010
- name: ru
num_bytes: 535448
num_examples: 5010
- name: sw
num_bytes: 1001740
num_examples: 5010
- name: th
num_bytes: 995206
num_examples: 5010
- name: tr
num_bytes: 865992
num_examples: 5010
- name: ur
num_bytes: 864017
num_examples: 5010
- name: vi
num_bytes: 246890
num_examples: 5010
- name: zh
num_bytes: 538232
num_examples: 5010
download_size: 9739928
dataset_size: 10581258
- config_name: Llama-2-13b-chat-hf
features:
- name: premise
dtype: string
- name: hypothesis
dtype: string
- name: label
dtype:
class_label:
names:
'0': entailment
'1': neutral
'2': contradiction
splits:
- name: ar
num_bytes: 932439
num_examples: 5010
- name: bg
num_bytes: 877857
num_examples: 5010
- name: de
num_bytes: 859893
num_examples: 5010
- name: el
num_bytes: 910487
num_examples: 5010
- name: es
num_bytes: 872553
num_examples: 5010
- name: fr
num_bytes: 879291
num_examples: 5010
- name: hi
num_bytes: 987002
num_examples: 5010
- name: ru
num_bytes: 887918
num_examples: 5010
- name: sw
num_bytes: 1021074
num_examples: 5010
- name: th
num_bytes: 1054387
num_examples: 5010
- name: tr
num_bytes: 900761
num_examples: 5010
- name: ur
num_bytes: 1099374
num_examples: 5010
- name: vi
num_bytes: 884472
num_examples: 5010
- name: zh
num_bytes: 882394
num_examples: 5010
download_size: 12208572
dataset_size: 13049902
---
# Dataset Card for "xnli"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://www.nyu.edu/projects/bowman/xnli/](https://www.nyu.edu/projects/bowman/xnli/)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 7.74 GB
- **Size of the generated dataset:** 3.23 GB
- **Total amount of disk used:** 10.97 GB
### Dataset Summary
XNLI is a subset of a few thousand examples from MNLI that has been translated
into 14 different languages (some of them relatively low-resource). As with MNLI, the goal is
to predict textual entailment (does sentence A imply, contradict, or remain neutral
toward sentence B?) as a classification task (given two sentences, predict one of three
labels).
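A minimal loading sketch; the `xnli` id below refers to the canonical XNLI dataset on the Hub and is an assumption, since the configs of this particular card are named after evaluated models:
```python
from datasets import load_dataset

# Load the Arabic portion of XNLI (canonical "xnli" id assumed).
xnli_ar = load_dataset("xnli", "ar", split="validation")

example = xnli_ar[0]
print(example["premise"])     # sentence A
print(example["hypothesis"])  # sentence B
print(example["label"])       # 0 = entailment, 1 = neutral, 2 = contradiction
```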
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Structure
### Data Instances
#### all_languages
- **Size of downloaded dataset files:** 483.96 MB
- **Size of the generated dataset:** 1.61 GB
- **Total amount of disk used:** 2.09 GB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"hypothesis": "{\"language\": [\"ar\", \"bg\", \"de\", \"el\", \"en\", \"es\", \"fr\", \"hi\", \"ru\", \"sw\", \"th\", \"tr\", \"ur\", \"vi\", \"zh\"], \"translation\": [\"احد اع...",
"label": 0,
"premise": "{\"ar\": \"واحدة من رقابنا ستقوم بتنفيذ تعليماتك كلها بكل دقة\", \"bg\": \"един от нашите номера ще ви даде инструкции .\", \"de\": \"Eine ..."
}
```
#### ar
- **Size of downloaded dataset files:** 483.96 MB
- **Size of the generated dataset:** 109.32 MB
- **Total amount of disk used:** 593.29 MB
An example of 'validation' looks as follows.
```
{
"hypothesis": "اتصل بأمه حالما أوصلته حافلة المدرسية.",
"label": 1,
"premise": "وقال، ماما، لقد عدت للمنزل."
}
```
#### bg
- **Size of downloaded dataset files:** 483.96 MB
- **Size of the generated dataset:** 128.32 MB
- **Total amount of disk used:** 612.28 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"hypothesis": "\"губиш нещата на следното ниво , ако хората си припомнят .\"...",
"label": 0,
"premise": "\"по време на сезона и предполагам , че на твоето ниво ще ги загубиш на следващото ниво , ако те решат да си припомнят отбора на ..."
}
```
#### de
- **Size of downloaded dataset files:** 483.96 MB
- **Size of the generated dataset:** 86.17 MB
- **Total amount of disk used:** 570.14 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"hypothesis": "Man verliert die Dinge auf die folgende Ebene , wenn sich die Leute erinnern .",
"label": 0,
"premise": "\"Du weißt , während der Saison und ich schätze , auf deiner Ebene verlierst du sie auf die nächste Ebene , wenn sie sich entschl..."
}
```
#### el
- **Size of downloaded dataset files:** 483.96 MB
- **Size of the generated dataset:** 142.30 MB
- **Total amount of disk used:** 626.26 MB
An example of 'validation' looks as follows.
```
This example was too long and was cropped:
{
"hypothesis": "\"Τηλεφώνησε στη μαμά του μόλις το σχολικό λεωφορείο τον άφησε.\"...",
"label": 1,
"premise": "Και είπε, Μαμά, έφτασα στο σπίτι."
}
```
### Data Fields
The data fields are the same among all splits.
#### all_languages
- `premise`: a multilingual `string` variable, with possible languages including `ar`, `bg`, `de`, `el`, `en`.
- `hypothesis`: a multilingual `string` variable, with possible languages including `ar`, `bg`, `de`, `el`, `en`.
- `label`: a classification label, with possible values including `entailment` (0), `neutral` (1), `contradiction` (2).
#### ar
- `premise`: a `string` feature.
- `hypothesis`: a `string` feature.
- `label`: a classification label, with possible values including `entailment` (0), `neutral` (1), `contradiction` (2).
#### bg
- `premise`: a `string` feature.
- `hypothesis`: a `string` feature.
- `label`: a classification label, with possible values including `entailment` (0), `neutral` (1), `contradiction` (2).
#### de
- `premise`: a `string` feature.
- `hypothesis`: a `string` feature.
- `label`: a classification label, with possible values including `entailment` (0), `neutral` (1), `contradiction` (2).
#### el
- `premise`: a `string` feature.
- `hypothesis`: a `string` feature.
- `label`: a classification label, with possible values including `entailment` (0), `neutral` (1), `contradiction` (2).
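The integer labels can be mapped back to their string names through the `ClassLabel` feature; a small sketch, again assuming the canonical `xnli` id:
```python
from datasets import load_dataset

ds = load_dataset("xnli", "de", split="test")

# `label` is a ClassLabel feature, so ids convert back to names.
label_feature = ds.features["label"]
print(label_feature.names)       # ['entailment', 'neutral', 'contradiction']
print(label_feature.int2str(1))  # 'neutral'
```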
### Data Splits
| name |train |validation|test|
|-------------|-----:|---------:|---:|
|all_languages|392702| 2490|5010|
|ar |392702| 2490|5010|
|bg |392702| 2490|5010|
|de |392702| 2490|5010|
|el |392702| 2490|5010|
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@InProceedings{conneau2018xnli,
author = {Conneau, Alexis
and Rinott, Ruty
and Lample, Guillaume
and Williams, Adina
and Bowman, Samuel R.
and Schwenk, Holger
and Stoyanov, Veselin},
title = {XNLI: Evaluating Cross-lingual Sentence Representations},
booktitle = {Proceedings of the 2018 Conference on Empirical Methods
in Natural Language Processing},
year = {2018},
publisher = {Association for Computational Linguistics},
location = {Brussels, Belgium},
}
```
### Contributions
Thanks to [@lewtun](https://github.com/lewtun), [@mariamabarham](https://github.com/mariamabarham), [@thomwolf](https://github.com/thomwolf), [@lhoestq](https://github.com/lhoestq), [@patrickvonplaten](https://github.com/patrickvonplaten) for adding this dataset. | [
-0.621026337146759,
-0.5387721657752991,
0.18572315573692322,
0.04372932016849518,
-0.17417824268341064,
-0.11081359535455704,
-0.4572189748287201,
-0.45329204201698303,
0.669642448425293,
0.4429357051849365,
-0.84357088804245,
-0.854694664478302,
-0.47620365023612976,
0.28663021326065063,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
jlvdoorn/atcosim | jlvdoorn | 2023-06-29T14:36:14Z | 17 | 1 | null | [
"language:en",
"air traffic management",
"automatic speech recognition",
"natural language processing",
"atcosim",
"atm",
"asr",
"nlp",
"doi:10.57967/hf/1378",
"region:us"
] | 2023-06-29T14:36:14Z | 2023-05-29T10:21:24.000Z | 2023-05-29T10:21:24 | ---
language:
- en
tags:
- air traffic management
- automatic speech recognition
- natural language processing
- atcosim
- atm
- asr
- nlp
pretty_name: ATCOSIM
dataset_info:
features:
- name: audio
dtype: audio
- name: text
dtype: string
splits:
- name: train
num_bytes: 1929508254.0
num_examples: 7646
- name: validation
num_bytes: 480869258.0
num_examples: 1913
download_size: 2399337867
dataset_size: 2410377512.0
---
This is an air traffic management (ATM) dataset for automatic speech recognition. The original source of the data is the [ATCOSIM](https://www.spsc.tugraz.at/databases-and-tools/atcosim-air-traffic-control-simulation-speech-corpus.html) project.
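A minimal loading sketch, assuming the `jlvdoorn/atcosim` id of this card; resampling to 16 kHz is our choice, matching what most ASR models expect:
```python
from datasets import Audio, load_dataset

ds = load_dataset("jlvdoorn/atcosim", split="train")

# Resample the audio column to 16 kHz for typical ASR models.
ds = ds.cast_column("audio", Audio(sampling_rate=16_000))

sample = ds[0]
print(sample["text"])                    # transcript
print(sample["audio"]["sampling_rate"])  # 16000
print(sample["audio"]["array"].shape)    # raw waveform as a NumPy array
``` | [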
-0.404488742351532,
-0.7511106729507446,
0.1796424388885498,
-0.34972068667411804,
-0.2505021393299103,
0.07861478626728058,
0.3075745403766632,
-0.2563815712928772,
0.384866327047348,
1.0475488901138306,
-0.5134222507476807,
-0.2091234028339386,
-0.4032212793827057,
-0.1833329200744629,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
tchebonenko/MedicalTranscriptions | tchebonenko | 2023-05-29T19:39:18Z | 17 | 5 | null | [
"region:us"
] | 2023-05-29T19:39:18Z | 2023-05-29T19:04:30.000Z | 2023-05-29T19:04:30 | # Medical Transcriptions
Medical transcription data scraped from mtsamples.com
### Content
This dataset contains sample medical transcriptions for various medical specialties.
<br>
More information can be found [here](https://www.kaggle.com/datasets/tboyle10/medicaltranscriptions?resource=download)
Due to data availability, only transcripts for the following medical specialties were selected for model training (a filtering sketch follows the list):
- Surgery
- Cardiovascular / Pulmonary
- Orthopedic
- Radiology
- General Medicine
- Gastroenterology
- Neurology
- Obstetrics / Gynecology
- Urology
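A hypothetical filtering sketch; the `medical_specialty` column name follows the original Kaggle CSV and is an assumption about this repository's schema:
```python
from datasets import load_dataset

# The "medical_specialty" column name follows the original Kaggle CSV
# and is an assumption; adjust it to the actual schema of this repo.
ds = load_dataset("tchebonenko/MedicalTranscriptions", split="train")

surgery = ds.filter(lambda row: (row["medical_specialty"] or "").strip() == "Surgery")
print(len(surgery), "surgery transcripts")
```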
---
**task_categories:**
- text-classification
- feature-extraction
**language:** en <br>
**tags:** medical <br>
**size_categories:** 1K<n<10K | [
0.2120019793510437,
-0.4248557686805725,
0.48422738909721375,
0.011296307668089867,
-0.34398549795150757,
0.14025171101093292,
0.0726139023900032,
-0.27148300409317017,
0.698678731918335,
0.7600706219673157,
-0.7318382859230042,
-0.8749189972877502,
-0.8000651597976685,
0.36404573917388916... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
recwizard/redial | recwizard | 2023-10-02T02:32:06Z | 17 | 0 | null | [
"size_categories:10K<n<100K",
"language:en",
"recommendation",
"conversational recommendation",
"sentiment analysis",
"arxiv:1812.07617",
"region:us"
] | 2023-10-02T02:32:06Z | 2023-06-03T06:23:40.000Z | 2023-06-03T06:23:40 | ---
dataset_info:
- config_name: SA
features:
- name: movieId
dtype: int32
- name: movieName
dtype: string
- name: messages
sequence: string
- name: senders
sequence: int32
- name: form
sequence: int32
splits:
- name: train
num_bytes: 33174059
num_examples: 41370
- name: validation
num_bytes: 8224594
num_examples: 10329
- name: test
num_bytes: 5151856
num_examples: 6952
download_size: 32552755
dataset_size: 46550509
- config_name: rec
features:
- name: movieIds
sequence: int32
- name: messages
sequence: string
- name: senders
sequence: int32
splits:
- name: train
num_bytes: 6064195
num_examples: 8004
- name: validation
num_bytes: 1511644
num_examples: 2002
- name: test
num_bytes: 937739
num_examples: 1342
download_size: 4812520
dataset_size: 8513578
- config_name: autorec
features:
- name: movieIds
sequence: int32
- name: ratings
sequence: float32
splits:
- name: train
num_bytes: 350688
num_examples: 7840
- name: validation
num_bytes: 87496
num_examples: 1966
- name: test
num_bytes: 58704
num_examples: 1321
download_size: 32552755
dataset_size: 496888
config_names:
- SA
- rec
- autorec
tags:
- recommendation
- conversational recommendation
- sentiment analysis
language:
- en
pretty_name: ReDIAL
size_categories:
- 10K<n<100K
---
# Dataset Card for ReDIAL
## Dataset Description
- **Homepage:**
- **Repository:** [RecBot](https://github.com/McAuley-Lab/RecBot)
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This is an adapted version of the [original redial dataset](https://huggingface.co/datasets/re_dial), restructured to support the different tasks in our project [RecBot](https://github.com/McAuley-Lab/RecBot).
The redial dataset provides over 10,000 conversations centered around movie recommendations. It was released in the paper ["Towards Deep Conversational Recommendations"](https://arxiv.org/abs/1812.07617) at NeurIPS 2018.
### Supported Tasks and Leaderboards
1. Sentiment analysis: use the `SA` config.
2. Recommendation: use the `autorec` config.
3. Conversational recommendation: use the `rec` config. A loading sketch for all three configs follows.
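The config and split names below come from the metadata above:
```python
from datasets import load_dataset

# Each task has its own config.
sa = load_dataset("recwizard/redial", "SA", split="test")            # sentiment analysis
rec = load_dataset("recwizard/redial", "rec", split="test")          # conversational rec.
autorec = load_dataset("recwizard/redial", "autorec", split="test")  # rating vectors

print(sa[0]["movieName"], sa[0]["form"])
```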
### Languages
English
## Dataset Structure
### Data Instances
#### SA
An example of 'test' looks as follows.
```
{
"movieId": 111776,
"movieName": "Super Troopers",
"messages": [
"Hi I am looking for a movie like @111776",
"You should watch @151656",
"Is that a great one? I have never seen it. I have seen @192131\nI mean @134643",
"Yes @151656 is very funny and so is @94688",
"It sounds like I need to check them out",
"yes you will enjoy them",
"I appreciate your time. I will need to check those out. Are there any others you would recommend?",
"yes @101794",
"Thank you i will watch that too",
"and also @91481",
"Thanks for the suggestions.",
"you are welcome\nand also @124771",
"thanks goodbye"
],
"senders": [1, -1, 1, -1, 1, -1, 1, -1, 1, -1, 1, -1, 1],
"form": [0, 1, 1, 0, 1, 1]
}
```
#### rec
An example of 'test' looks as follows.
```
{
'movieIds': [111776, 91481, 151656, 134643, 192131, 124771, 94688, 101794],
'messages': ['Hi I am looking for a movie like @111776',
'You should watch @151656',
'Is that a great one? I have never seen it. I have seen @192131\nI mean @134643',
'Yes @151656 is very funny and so is @94688',
'It sounds like I need to check them out',
'yes you will enjoy them',
'I appreciate your time. I will need to check those out. Are there any others you would recommend?',
'yes @101794',
'Thank you i will watch that too',
'and also @91481',
'Thanks for the suggestions.',
'you are welcome\nand also @124771',
'thanks goodbye'],
'senders': [1, -1, 1, -1, 1, -1, 1, -1, 1, -1, 1, -1, 1]
}
```
#### autorec
An example of 'test' looks as follows.
```
{
"movieIds": [
111776,
151656,
134643,
192131,
94688
],
"ratings": [
1.0,
1.0,
1.0,
1.0,
1.0
]
}
```
### Data Fields
#### SA
- movieId: the movie's ID in the [MovieLens](https://grouplens.org/datasets/movielens/latest/) dataset.
- movieName: the movie's name.
- messages: a list of strings. The conversation messages related to the movie. Note that one conversation can contain multiple movies; the conversation messages are repeated for each movie as a separate sample.
- senders: a list of 1 or -1 values with the same length as messages. Each element indicates whether the message at the same index is from the initiatorWorker (1) or the respondentWorkerId (-1).
- form: a list generated as [init_q[movieId]["suggested"], init_q[movieId]["seen"], init_q[movieId]["liked"], resp_q[movieId]["suggested"], resp_q[movieId]["seen"], resp_q[movieId]["liked"]], where init_q holds the initiator's questionnaire answers and resp_q the respondent's (see the decoding sketch below).
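An illustrative helper (not part of the dataset) that unpacks a `form` list into the two questionnaires:
```python
# Hypothetical helper: `form` packs six questionnaire answers for one movie,
# first the initiator's, then the respondent's.
def decode_form(form):
    keys = ("suggested", "seen", "liked")
    return {
        "initiator": dict(zip(keys, form[:3])),
        "respondent": dict(zip(keys, form[3:])),
    }

print(decode_form([0, 1, 1, 0, 1, 1]))
# {'initiator': {'suggested': 0, 'seen': 1, 'liked': 1},
#  'respondent': {'suggested': 0, 'seen': 1, 'liked': 1}}
```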
#### rec
- movieIds: a list of movie ids in a conversation.
- messages: a list of strings; see the SA config for details.
- senders: a list of 1 or -1 values; see the SA config for details.
#### autorec
- movieIds: a list of movie ids in a conversation.
- ratings: a list of 0 or 1 values with the same length as movieIds. Each element indicates the initiator's "liked" value for the movie.
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] | [
-0.53328537940979,
-0.6023889183998108,
0.0001411505218129605,
0.11130353063344955,
-0.38885337114334106,
-0.0731530413031578,
-0.07326652854681015,
-0.021717289462685585,
0.6307110786437988,
0.621807873249054,
-0.9169294834136963,
-0.6383697986602783,
-0.5112169981002808,
0.13376519083976... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
tianyang/repobench-r | tianyang | 2023-06-17T03:06:46Z | 17 | 1 | null | [
"task_categories:text-retrieval",
"task_ids:document-retrieval",
"language_creators:found",
"multilinguality:multilingual",
"source_datasets:original",
"language:code",
"license:cc-by-nc-nd-4.0",
"arxiv:2306.03091",
"region:us"
] | 2023-06-17T03:06:46Z | 2023-06-06T00:52:55.000Z | 2023-06-06T00:52:55 | ---
language_creators:
- found
language:
- code
license:
- cc-by-nc-nd-4.0
multilinguality:
- multilingual
pretty_name: RepoBench-Retrieval
source_datasets:
- original
task_categories:
- text-retrieval
task_ids:
- document-retrieval
---
# Dataset Card for RepoBench-R
## Dataset Description
- **Homepage:** https://github.com/Leolty/repobench
- **Paper:** https://arxiv.org/abs/2306.03091
## Dataset Summary
**RepoBench-R (Retrieval)** is a subtask of **RepoBench**([GitHub](https://github.com/Leolty/repobench), [arXiv](https://arxiv.org/abs/2306.03091)), targeting the retrieval component of a repository-level auto-completion system, focusing on retrieving the most relevant code snippet from a project repository for next-line
code prediction.
## Settings
- `cff`: short for cross_file_first, indicating that the cross-file module in the next line is used in the current file for the first time.
- `cfr`: short for cross_file_random, indicating that the cross-file module in the next line has already been used earlier in the current file.
## Supported Tasks
The dataset has 4 subsets:
- `python_cff`: python dataset with `cff` setting.
- `python_cfr`: python dataset with `cfr` setting.
- `java_cff`: java dataset with `cff` setting.
- `java_cfr`: java dataset with `cfr` setting.
Each subset has 4 splits:
- `train_easy`: training set with easy difficulty, where the number of code snippets in the context \\(k\\) satisfies \\( 5 \leq k < 10 \\).
- `train_hard`: training set with hard difficulty, where the number of code snippets in the context \\(k\\) satisfies \\( k \geq 10 \\).
- `test_easy`: testing set with easy difficulty.
- `test_hard`: testing set with hard difficulty.
## Loading Data
For example, if you want to load the `test` `cross_file_first` `python` dataset with `easy` difficulty, you can use the following code:
```python
from datasets import load_dataset
dataset = load_dataset("tianyang/repobench-r", "python_cff", split="test_easy")
```
> Note: The `split` argument is optional. If not provided, the entire dataset (train and test splits at both easy and hard difficulty levels) will be loaded.
## Dataset Structure
```json
{
"repo_name": "repository name of the data point",
"file_path": "path/to/file",
"context": [
"snippet 1",
"snippet 2",
// ...
"snippet k"
],
"import_statement": "all import statements in the file",
"gold_snippet_idex": 2, // the index of the gold snippet in the context list, 0~k-1
"code": "the code for next-line prediction",
"next_line": "the next line of the code"
}
```
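As a rough illustration of the retrieval task, the sketch below ranks the context snippets against the in-file code with a simple lexical similarity and measures how often the top-ranked snippet matches the gold index. The `SequenceMatcher` scorer is our placeholder baseline, not part of RepoBench; the field names follow the structure above:
```python
from difflib import SequenceMatcher

from datasets import load_dataset

ds = load_dataset("tianyang/repobench-r", "python_cff", split="test_easy")

def top_snippet(sample):
    # Rank candidate snippets by lexical similarity to the in-file code.
    scores = [
        SequenceMatcher(None, snippet, sample["code"]).ratio()
        for snippet in sample["context"]
    ]
    return scores.index(max(scores))

subset = ds.select(range(100))
hits = sum(top_snippet(s) == s["gold_snippet_index"] for s in subset)
print(f"acc@1 on {len(subset)} samples: {hits / len(subset):.2%}")
```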
## Licensing Information
CC BY-NC-ND 4.0
## Citation Information
```bibtex
@misc{liu2023repobench,
title={RepoBench: Benchmarking Repository-Level Code Auto-Completion Systems},
author={Tianyang Liu and Canwen Xu and Julian McAuley},
year={2023},
eprint={2306.03091},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
## Contributions
Thanks to [@Leolty](https://github.com/Leolty) for adding this dataset. | [
-0.3846701681613922,
-0.23234061896800995,
-0.07699193805456161,
0.12779651582241058,
-0.14715959131717682,
0.07399580627679825,
-0.32800528407096863,
-0.13337171077728271,
0.19480980932712555,
0.43929150700569153,
-0.6517921090126038,
-0.6464641094207764,
-0.3953614830970764,
0.4025219678... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
musabg/wikipedia-tr-summarization | musabg | 2023-06-13T04:29:02Z | 17 | 3 | null | [
"task_categories:summarization",
"size_categories:100K<n<1M",
"language:tr",
"region:us"
] | 2023-06-13T04:29:02Z | 2023-06-06T13:39:57.000Z | 2023-06-06T13:39:57 | ---
dataset_info:
features:
- name: text
dtype: string
- name: summary
dtype: string
splits:
- name: train
num_bytes: 324460408.0479985
num_examples: 119110
- name: validation
num_bytes: 17077006.95200153
num_examples: 6269
download_size: 216029002
dataset_size: 341537415
task_categories:
- summarization
language:
- tr
pretty_name: Wikipedia Turkish Summarization
size_categories:
- 100K<n<1M
---
# Wikipedia Turkish Summarization Dataset
## Dataset Description
This is a Turkish summarization dataset 🇹🇷 prepared from the 2023 Wikipedia dump. The dataset has been cleaned, tokenized, and summarized using the Hugging Face Wikipedia dataset cleaning script, custom cleaning scripts, and OpenAI's gpt-3.5-turbo API.
### Data Source
- Wikipedia's latest Turkish dump (2023 version) 🌐
### Features
- text: string (The original text extracted from Wikipedia articles 📖)
- summary: string (The generated summary of the original text 📝)
### Data Splits
| Split | Num Bytes | Num Examples |
|------------|--------------------|--------------|
| train | 324,460,408.048 | 119,110 |
| validation | 17,077,006.952 | 6,269 |
### Download Size
- 216,029,002 bytes
### Dataset Size
- 341,537,415 bytes
## Data Preparation
### Data Collection
1. The latest Turkish Wikipedia dump was downloaded 📥.
2. Huggingface Wikipedia dataset cleaner script was used to clean the text 🧹.
3. A custom script was used to further clean the text, removing sections like "Kaynakca" (References) and other irrelevant information 🛠️.
### Tokenization
The dataset was tokenized using Google's MT5 tokenizer (a sketch of the filtering rule follows the list). The following criteria were applied:
- Articles with a token count between 300 and 900 were selected ✔️.
- Articles with less than 300 tokens were ignored ❌.
- For articles with more than 900 tokens, only the first 900 tokens ending with a paragraph were selected 🔍.
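A sketch of this filtering rule, assuming the `google/mt5-base` checkpoint for the tokenizer and newline-separated paragraphs (both our assumptions):
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("google/mt5-base")

def select_span(text, min_tokens=300, max_tokens=900):
    n_tokens = len(tokenizer(text).input_ids)
    if n_tokens < min_tokens:
        return None  # too short: drop the article
    if n_tokens <= max_tokens:
        return text  # keep the whole article
    # Too long: keep the longest prefix of whole paragraphs within the budget.
    kept, used = [], 0
    for paragraph in text.split("\n"):
        p_tokens = len(tokenizer(paragraph).input_ids)
        if used + p_tokens > max_tokens:
            break
        kept.append(paragraph)
        used += p_tokens
    return "\n".join(kept) or None
```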
### Summarization
The generated raw texts were summarized using OpenAI's gpt-3.5-turbo API 🤖.
## Dataset Usage
This dataset can be used for various natural language processing tasks 👩💻, such as text summarization, machine translation, and language modeling in the Turkish language.
Example usage:
```python
from datasets import load_dataset
# Load the dataset
dataset = load_dataset("musabg/wikipedia-tr-summarization")
# Access the data
train_data = dataset["train"]
validation_data = dataset["validation"]
# Iterate through the data
for example in train_data:
text = example["text"]
summary = example["summary"]
# Process the data as needed
```
Please make sure to cite the dataset as follows 📝:
```bibtex
@misc{musabg2023wikipediatrsummarization,
author = {Musab Gultekin},
title = {Wikipedia Turkish Summarization Dataset},
year = {2023},
publisher = {HuggingFace},
howpublished = {\url{https://huggingface.co/datasets/musabg/wikipedia-tr-summarization}},
}
```
---
## Wikipedia Turkish Summarization Dataset
This is a Turkish summarization dataset prepared from the 2023 Wikipedia dump. The dataset has been cleaned, tokenized, and summarized using the Hugging Face Wikipedia dataset cleaning script, custom cleaning scripts, and OpenAI's gpt-3.5-turbo API.
### Data Source
- Wikipedia's most recent Turkish dump (2023 version)
### Features
- text: string (the original text extracted from Wikipedia articles)
- summary: string (the generated summary of the original text)
### Data Splits
| Split      | Num Bytes          | Num Examples |
|------------|--------------------|--------------|
| train | 324,460,408.048 | 119,110 |
| validation | 17,077,006.952 | 6,269 |
### Download Size
- 216,029,002 bytes
### Dataset Size
- 341,537,415 bytes
## Data Preparation
### Data Collection
1. The most recent Turkish Wikipedia dump was downloaded.
2. The Hugging Face Wikipedia dataset cleaning script was used to clean the text.
3. A custom script was used to remove sections such as "Kaynakça" (References) and other irrelevant information.
### Tokenization
The dataset was tokenized using Google's MT5 tokenizer. The following criteria were applied:
- Articles with between 300 and 900 tokens were selected.
- Articles with fewer than 300 tokens were discarded.
- For articles with more than 900 tokens, only the first 900 tokens ending with a paragraph were kept.
### Summarization
The generated raw texts were summarized using OpenAI's gpt-3.5-turbo API.
## Dataset Usage
This dataset can be used for various natural language processing tasks in the Turkish language, such as text summarization, machine translation, and language modeling.
Example usage:
```python
from datasets import load_dataset

# Load the dataset
dataset = load_dataset("musabg/wikipedia-tr-summarization")

# Access the data
train_data = dataset["train"]
validation_data = dataset["validation"]

# Iterate through the data
for example in train_data:
    text = example["text"]
    summary = example["summary"]
    # Process the data as needed
``` | [
-0.796615719795227,
-0.6901629567146301,
0.008339167572557926,
0.2691335678100586,
-0.5154228806495667,
-0.45367762446403503,
-0.279781311750412,
-0.30081674456596375,
0.6909938454627991,
0.2644665241241455,
-0.4519103169441223,
-0.5818055868148804,
-0.5642393827438354,
0.40793555974960327... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
zaanind/sinhala_englsih_parrel_corpus | zaanind | 2023-10-30T02:41:44Z | 17 | 2 | null | [
"task_categories:translation",
"size_categories:10K<n<100K",
"language:si",
"language:en",
"license:gpl",
"region:us"
] | 2023-10-30T02:41:44Z | 2023-06-06T15:27:51.000Z | 2023-06-06T15:27:51 | ---
language:
- si
- en
license: gpl
size_categories:
- 10K<n<100K
task_categories:
- translation
pretty_name: Zoom Eng-Si Nmt Dataset
dataset_info:
features:
- name: english
dtype: string
- name: sinhala
dtype: string
splits:
- name: train
num_bytes: 8516909
num_examples: 80684
download_size: 4162588
dataset_size: 8516909
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
Follow me on : https://facebook.com/zaanind | https://github.com/zaanind
Contact : zaanind@gmail.com | https://m.me/zaanind | https://t.me/zaanind
Dataset Name: Eng-Sinhala Translation Dataset
Description: This dataset contains approximately 80,000 lines of English-Sinhala translation pairs. It can be used to train models for machine translation tasks and other natural language processing applications.
Data License: GPL (GNU General Public License). Please ensure that you comply with the terms and conditions of the GPL when using the dataset.
Note: Given the dataset's large size, some sentence pairs may be translated incorrectly. It is important to ensure the quality and accuracy of the data for training purposes; consider performing data cleaning and validation to improve the reliability of your model.
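A minimal loading sketch, using the feature names from the metadata above:
```python
from datasets import load_dataset

ds = load_dataset("zaanind/sinhala_englsih_parrel_corpus", split="train")

pair = ds[0]
print(pair["english"])  # English side of the pair
print(pair["sinhala"])  # Sinhala side of the pair
```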
Mission
Our mission is to improve the quality of open-source English to Sinhala machine translation. This dataset, consisting of roughly 80,000 translation pairs, is a step in that direction.
Special Thanks:
We extend our gratitude to the data collected and cleared by the Zoom.lk subtitles team, whose contributions have been invaluable in making this dataset possible.
Please feel free to reach out if you have any questions, suggestions, or would like to collaborate on further improving this dataset or machine translation models. Your support is greatly appreciated!
(Contact : zaanind@gmail.com | https://m.me/zaanind | https://t.me/zaanind) | [
-0.03808235004544258,
-0.15735027194023132,
-0.15004053711891174,
0.3276505172252655,
-0.6789054870605469,
-0.3066871464252472,
-0.5716555714607239,
-0.031011659651994705,
0.25361454486846924,
0.6273561716079712,
-0.7595507502555847,
-0.6424543857574463,
-0.5087896585464478,
0.298885345458... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
jlvdoorn/atco2-asr-atcosim | jlvdoorn | 2023-07-07T07:06:05Z | 17 | 1 | null | [
"task_categories:automatic-speech-recognition",
"language:en",
"air traffic control",
"automatic speech recognition",
"natural language processing",
"atc",
"asr",
"nlp",
"atco2",
"atcosim",
"doi:10.57967/hf/1379",
"region:us"
] | 2023-07-07T07:06:05Z | 2023-06-14T13:08:14.000Z | 2023-06-14T13:08:14 | ---
dataset_info:
features:
- name: audio
dtype: audio
- name: text
dtype: string
- name: info
dtype: string
splits:
- name: train
num_bytes: 2029124649.948
num_examples: 8092
- name: validation
num_bytes: 508032748.446
num_examples: 2026
download_size: 2524947331
dataset_size: 2537157398.394
task_categories:
- automatic-speech-recognition
language:
- en
tags:
- air traffic control
- automatic speech recognition
- natural language processing
- atc
- asr
- nlp
- atco2
- atcosim
pretty_name: ATCO2-ASR-ATCOSIM
---
# Dataset Card for "atco2-asr-atcosim"
This is a dataset constructed from two datasets: [ATCO2-ASR](https://huggingface.co/datasets/jlvdoorn/atco2-asr) and [ATCOSIM](https://huggingface.co/datasets/jlvdoorn/atcosim).
It is divided into 80% train and 20% validation splits by selecting files at random. Some of the files have additional information, provided in the 'info' column.
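A minimal loading sketch, using the features and splits from the metadata above:
```python
from datasets import load_dataset

ds = load_dataset("jlvdoorn/atco2-asr-atcosim")
print(ds)  # train (~80%) and validation (~20%) splits

sample = ds["train"][0]
print(sample["text"])  # transcript
print(sample["info"])  # extra metadata, present for some files
``` | [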
-0.29637473821640015,
-0.2363722324371338,
-0.11085106432437897,
-0.03251848742365837,
-0.34417951107025146,
0.3749813437461853,
0.10317469388246536,
-0.26942965388298035,
0.6076123714447021,
0.7514299750328064,
-0.6152468323707581,
-0.5032138824462891,
-0.491260826587677,
-0.2097898274660... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
adrianhenkel/lucidprots_full_data | adrianhenkel | 2023-06-15T17:12:22Z | 17 | 2 | null | [
"region:us"
] | 2023-06-15T17:12:22Z | 2023-06-15T16:58:30.000Z | 2023-06-15T16:58:30 | ---
dataset_info:
features:
- name: input_id_x
sequence: int64
- name: input_id_y
sequence: int64
splits:
- name: train
num_bytes: 65665021040
num_examples: 17070828
- name: test
num_bytes: 1131744
num_examples: 474
- name: valid
num_bytes: 4840024
num_examples: 1259
download_size: 5082803946
dataset_size: 65670992808
---
# Dataset Card for "lucidprots_full_data"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.5007917881011963,
-0.27229204773902893,
0.2914455831050873,
0.2553827166557312,
-0.33355578780174255,
-0.058859217911958694,
0.08357071876525879,
-0.1760757714509964,
1.1198079586029053,
0.7598430514335632,
-0.6702360510826111,
-0.7900586128234863,
-0.4153609871864319,
-0.23214006423950... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
patimus-prime/strain_selection | patimus-prime | 2023-06-28T00:58:15Z | 17 | 0 | null | [
"license:mit",
"region:us"
] | 2023-06-28T00:58:15Z | 2023-06-28T00:51:38.000Z | 2023-06-28T00:51:38 | ---
license: mit
---
| [
-0.1285335123538971,
-0.1861683875322342,
0.6529128551483154,
0.49436232447624207,
-0.19319400191307068,
0.23607441782951355,
0.36072009801864624,
0.05056373029947281,
0.5793656706809998,
0.7400146722793579,
-0.650810182094574,
-0.23784008622169495,
-0.7102247476577759,
-0.0478255338966846... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Falah/eye-disease-dataset | Falah | 2023-07-02T12:33:39Z | 17 | 0 | null | [
"region:us"
] | 2023-07-02T12:33:39Z | 2023-06-30T17:25:26.000Z | 2023-06-30T17:25:26 | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': Bulging_Eyes
'1': Cataracts
'2': Crossed_Eyes
'3': Glaucoma
'4': Uveitis
splits:
- name: train
num_bytes: 2558487.0
num_examples: 383
download_size: 0
dataset_size: 2558487.0
---
# Eye Disease Dataset
## Description
The Eye Disease Dataset is a collection of images related to various eye diseases. It provides a valuable resource for training and evaluating computer vision models for eye disease detection and classification. The dataset includes images representing five different eye disease classes: Bulging Eyes, Cataracts, Crossed Eyes, Glaucoma, and Uveitis.
## Dataset Details
- Dataset Name: Falah/eye-disease-dataset
- Number of Rows: 383
- Class Labels:
- '0': Bulging Eyes
- '1': Cataracts
- '2': Crossed Eyes
- '3': Glaucoma
- '4': Uveitis
## Usage
### Installation
You can work with the dataset through the Hugging Face Datasets library, installed with:
```bash
pip install datasets
```
### Accessing the Dataset
To access the Eye Disease Dataset, you can use the following Python code:
```python
from datasets import load_dataset
# Load the dataset
dataset = load_dataset("falah/eye-disease-dataset")
```
### Dataset Structure
The dataset consists of a collection of images, each labeled with a specific eye disease class. The images are stored in a directory structure where each class has its own subdirectory. The directory structure is as follows:
```
├── Bulging_Eyes
│ ├── image1.jpg
│ ├── image2.jpg
│ └── ...
├── Cataracts
│ ├── image1.jpg
│ ├── image2.jpg
│ └── ...
├── Crossed_Eyes
│ ├── image1.jpg
│ ├── image2.jpg
│ └── ...
├── Glaucoma
│ ├── image1.jpg
│ ├── image2.jpg
│ └── ...
└── Uveitis
├── image1.jpg
├── image2.jpg
└── ...
```
### Example Usage
Here's an example of how to load and visualize the Eye Disease Dataset:
```python
import matplotlib.pyplot as plt
# Load the dataset
dataset = load_dataset("falah/eye-disease-dataset")
# Display the first image and its label
image = dataset["train"][0]["image"]
label = dataset["train"][0]["label"]
plt.imshow(image)
plt.title(f"Class Label: {label}")
plt.axis("off")
plt.show()
```
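To turn the integer label into a class name, the `ClassLabel` feature exposes the mapping:
```python
from datasets import load_dataset

dataset = load_dataset("Falah/eye-disease-dataset")
example = dataset["train"][0]

# ClassLabel stores the mapping between integer ids and class names.
label_names = dataset["train"].features["label"].names
print(label_names[example["label"]])  # e.g. "Glaucoma"
```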
## Citation
If you use the Eye Disease Dataset in your research or project, please consider citing it as:
```
@dataset{falah/eye-disease-dataset,
author = {Falah},
title = {Eye Disease Dataset},
year = {2023},
publisher = {Hugging Face},
version = {1.0.0},
  url = {https://huggingface.co/datasets/Falah/eye-disease-dataset}
}
```
## License
The Eye Disease Dataset is available under the [Creative Commons Attribution 4.0 International (CC BY 4.0)](https://creativecommons.org/licenses/by/4.0/legalcode) license. | [
-0.22359158098697662,
-0.6314511299133301,
0.1660987138748169,
0.10003270953893661,
-0.26791536808013916,
-0.38541242480278015,
0.3320305049419403,
-0.401902437210083,
0.4734375774860382,
0.5439513921737671,
-0.2521266043186188,
-0.8648474812507629,
-0.38989225029945374,
0.2713455855846405... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
causal-lm/instructions-ko | causal-lm | 2023-07-24T05:54:16Z | 17 | 1 | null | [
"language:ko",
"region:us"
] | 2023-07-24T05:54:16Z | 2023-07-02T06:42:03.000Z | 2023-07-02T06:42:03 | ---
language: ko
dataset_info:
features:
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 71817534.51580903
num_examples: 112104
- name: validation
num_bytes: 8026314.24732017
num_examples: 12429
download_size: 43862664
dataset_size: 79843848.7631292
---
# Dataset Card for "instructions-ko"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.4781802296638489,
-0.3737752139568329,
0.5506542921066284,
0.3366071581840515,
-0.28451430797576904,
-0.17704446613788605,
0.2671875059604645,
0.06697295606136322,
0.7037659883499146,
0.663975179195404,
-1.1856969594955444,
-0.9539188146591187,
-0.48553499579429626,
-0.2542062997817993,... | null | null | null | null | null | null | null | null | null | null | null | null | null |