id stringlengths 2 115 | author stringlengths 2 42 ⌀ | last_modified timestamp[us, tz=UTC] | downloads int64 0 8.87M | likes int64 0 3.84k | paperswithcode_id stringlengths 2 45 ⌀ | tags list | lastModified timestamp[us, tz=UTC] | createdAt stringlengths 24 24 | key stringclasses 1 value | created timestamp[us] | card stringlengths 1 1.01M | embedding list | library_name stringclasses 21 values | pipeline_tag stringclasses 27 values | mask_token null | card_data null | widget_data null | model_index null | config null | transformers_info null | spaces null | safetensors null | transformersInfo null | modelId stringlengths 5 111 ⌀ | embeddings list |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
autoevaluate/autoeval-eval-lener_br-lener_br-c186f5-1776861660 | autoevaluate | 2022-10-16T12:52:21Z | 14 | 0 | null | [
"autotrain",
"evaluation",
"region:us"
] | 2022-10-16T12:52:21Z | 2022-10-16T12:48:37.000Z | 2022-10-16T12:48:37 | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- lener_br
eval_info:
task: entity_extraction
model: Luciano/bertimbau-large-lener_br
metrics: []
dataset_name: lener_br
dataset_config: lener_br
dataset_split: train
col_mapping:
tokens: tokens
tags: ner_tags
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Token Classification
* Model: Luciano/bertimbau-large-lener_br
* Dataset: lener_br
* Config: lener_br
* Split: train
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@Luciano](https://huggingface.co/Luciano) for evaluating this model. | [
-0.4842439889907837,
-0.17540961503982544,
0.22767120599746704,
0.15290184319019318,
-0.13036516308784485,
-0.1539730280637741,
-0.023989088833332062,
-0.48563578724861145,
0.31852301955223083,
0.2830922603607178,
-0.848848819732666,
-0.18277674913406372,
-0.6188963651657104,
-0.0609755218... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
ChustekUlises/fernets | ChustekUlises | 2022-10-20T20:08:13Z | 14 | 0 | null | [
"region:us"
] | 2022-10-20T20:08:13Z | 2022-10-20T20:04:55.000Z | 2022-10-20T20:04:55 | Entry not found | [
-0.32276472449302673,
-0.22568407654762268,
0.8622258901596069,
0.4346148371696472,
-0.5282984972000122,
0.7012965679168701,
0.7915717363357544,
0.07618629932403564,
0.7746022939682007,
0.2563222646713257,
-0.785281777381897,
-0.22573848068714142,
-0.9104482531547546,
0.5715669393539429,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
hidude561/jeopardy | hidude561 | 2022-10-23T20:22:03Z | 14 | 0 | null | [
"region:us"
] | 2022-10-23T20:22:03Z | 2022-10-23T20:20:41.000Z | 2022-10-23T20:20:41 | Entry not found | [
-0.32276472449302673,
-0.22568407654762268,
0.8622258901596069,
0.4346148371696472,
-0.5282984972000122,
0.7012965679168701,
0.7915717363357544,
0.07618629932403564,
0.7746022939682007,
0.2563222646713257,
-0.785281777381897,
-0.22573848068714142,
-0.9104482531547546,
0.5715669393539429,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
autoevaluate/autoeval-eval-jeffdshen__redefine_math2_8shot-jeffdshen__redefine_mat-af4c71-1853163411 | autoevaluate | 2022-10-23T21:54:20Z | 14 | 0 | null | [
"autotrain",
"evaluation",
"region:us"
] | 2022-10-23T21:54:20Z | 2022-10-23T21:20:01.000Z | 2022-10-23T21:20:01 | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- jeffdshen/redefine_math2_8shot
eval_info:
task: text_zero_shot_classification
model: inverse-scaling/opt-6.7b_eval
metrics: []
dataset_name: jeffdshen/redefine_math2_8shot
dataset_config: jeffdshen--redefine_math2_8shot
dataset_split: train
col_mapping:
text: prompt
classes: classes
target: answer_index
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: inverse-scaling/opt-6.7b_eval
* Dataset: jeffdshen/redefine_math2_8shot
* Config: jeffdshen--redefine_math2_8shot
* Split: train
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@jeffdshen](https://huggingface.co/jeffdshen) for evaluating this model. | [
-0.31241345405578613,
-0.31446710228919983,
0.3172299563884735,
-0.0261039100587368,
-0.045823823660612106,
-0.20395804941654205,
-0.046192750334739685,
-0.345258504152298,
0.04719577357172966,
0.41330987215042114,
-0.9372439980506897,
-0.1558273583650589,
-0.7341708540916443,
-0.019625009... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
joshtobin/malicious_urls | joshtobin | 2022-10-23T23:28:01Z | 14 | 0 | null | [
"region:us"
] | 2022-10-23T23:28:01Z | 2022-10-23T23:02:35.000Z | 2022-10-23T23:02:35 | ---
dataset_info:
features:
- name: url_len
dtype: int64
- name: abnormal_url
dtype: int64
- name: https
dtype: int64
- name: digits
dtype: int64
- name: letters
dtype: int64
- name: shortening_service
dtype: int64
- name: ip_address
dtype: int64
- name: '@'
dtype: int64
- name: '?'
dtype: int64
- name: '-'
dtype: int64
- name: '='
dtype: int64
- name: .
dtype: int64
- name: '#'
dtype: int64
- name: '%'
dtype: int64
- name: +
dtype: int64
- name: $
dtype: int64
- name: '!'
dtype: int64
- name: '*'
dtype: int64
- name: ','
dtype: int64
- name: //
dtype: int64
splits:
- name: train
num_bytes: 32000
num_examples: 200
download_size: 9837
dataset_size: 32000
---
# Dataset Card for "malicious_urls"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.20268316566944122,
-0.5651168823242188,
0.10649766772985458,
0.260479211807251,
-0.16614139080047607,
0.016737768426537514,
0.3813410699367523,
-0.229069322347641,
0.585889995098114,
0.8438838124275208,
-0.6740897297859192,
-0.8809869885444641,
-0.5233674645423889,
-0.09414977580308914,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
mathemakitten/winobias_antistereotype_dev_cot | mathemakitten | 2022-10-25T03:32:16Z | 14 | 0 | null | [
"region:us"
] | 2022-10-25T03:32:16Z | 2022-10-25T03:32:00.000Z | 2022-10-25T03:32:00 | Entry not found | [
-0.3227647542953491,
-0.22568407654762268,
0.8622258901596069,
0.4346148371696472,
-0.5282984972000122,
0.7012965083122253,
0.7915717959403992,
0.07618629932403564,
0.7746022343635559,
0.2563222348690033,
-0.785281777381897,
-0.22573848068714142,
-0.9104482531547546,
0.5715669393539429,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
laion/laion1b-nolang-vit-h-14-embeddings | laion | 2022-12-20T19:20:40Z | 14 | 1 | null | [
"region:us"
] | 2022-12-20T19:20:40Z | 2022-10-26T01:46:20.000Z | 2022-10-26T01:46:20 | Entry not found | [
-0.3227647542953491,
-0.22568407654762268,
0.8622258901596069,
0.4346148371696472,
-0.5282984972000122,
0.7012965083122253,
0.7915717959403992,
0.07618629932403564,
0.7746022343635559,
0.2563222348690033,
-0.785281777381897,
-0.22573848068714142,
-0.9104482531547546,
0.5715669393539429,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
chattermill/fabsa | chattermill | 2022-11-01T19:51:01Z | 14 | 3 | null | [
"license:mit",
"region:us"
] | 2022-11-01T19:51:01Z | 2022-10-26T17:53:24.000Z | 2022-10-26T17:53:24 | ---
license: mit
---
| [
-0.12853392958641052,
-0.18616779148578644,
0.6529127955436707,
0.49436280131340027,
-0.19319361448287964,
0.23607419431209564,
0.36072003841400146,
0.050563063472509384,
0.579365611076355,
0.7400140762329102,
-0.6508104205131531,
-0.23783954977989197,
-0.7102249264717102,
-0.0478260256350... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
arbml/AKEC | arbml | 2022-11-02T14:55:00Z | 14 | 0 | null | [
"region:us"
] | 2022-11-02T14:55:00Z | 2022-11-02T14:54:42.000Z | 2022-11-02T14:54:42 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622264862060547,
0.43461528420448303,
-0.52829909324646,
0.7012971639633179,
0.7915720343589783,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104477167129517,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
ficsort/SzegedNER | ficsort | 2022-11-02T15:56:22Z | 14 | 0 | null | [
"task_categories:token-classification",
"task_ids:named-entity-recognition",
"annotations_creators:expert-generated",
"language_creators:other",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:hu",
"hungarian",
"szeged",
"ner",
"region:us"
] | 2022-11-02T15:56:22Z | 2022-11-02T15:46:47.000Z | 2022-11-02T15:46:47 | ---
annotations_creators:
- expert-generated
language:
- hu
language_creators:
- other
license: []
multilinguality:
- monolingual
paperswithcode_id: null
pretty_name: SzegedNER
size_categories:
- 1K<n<10K
source_datasets:
- original
tags:
- hungarian
- szeged
- ner
task_categories:
- token-classification
task_ids:
- named-entity-recognition
---
# Introduction
The recognition and classification of proper nouns and names in plain text is of key importance in Natural Language Processing (NLP) as it has a beneficial effect on the performance of various types of applications, including Information Extraction, Machine Translation, Syntactic Parsing/Chunking, etc.
## Corpus of Business Newswire Texts (business)
The Named Entity Corpus for Hungarian is a subcorpus of the Szeged Treebank, which contains full syntactic annotations done manually by linguist experts. A significant part of these texts has been annotated with Named Entity class labels in line with the annotation standards used on the CoNLL-2003 shared task.
Statistical data on Named Entities occurring in the corpus:
```
       | tokens | phrases
------ | ------ | -------
non NE | 200067 |
PER | 1921 | 982
ORG | 20433 | 10533
LOC | 1501 | 1294
MISC | 2041 | 1662
```
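From the token and phrase counts above one can derive the average entity length per category. A quick illustrative computation (counts copied from the table; plain Python, no dependencies):

```python
# Entity type -> (token count, phrase count), as listed in the table above.
stats = {
    "PER": (1921, 982),
    "ORG": (20433, 10533),
    "LOC": (1501, 1294),
    "MISC": (2041, 1662),
}

for entity, (tokens, phrases) in stats.items():
    # Average number of tokens spanned by one annotated phrase.
    print(f"{entity}: {tokens / phrases:.2f} tokens per phrase")
```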
### Reference
> György Szarvas, Richárd Farkas, László Felföldi, András Kocsor, János Csirik: Highly accurate Named Entity corpus for Hungarian. International Conference on Language Resources and Evaluation 2006, Genova (Italy)
## Criminal NE corpus (criminal)
The Hungarian National Corpus and its Heti Világgazdaság (HVG) subcorpus provided the basis for corpus text selection: articles related to the topic of financially liable offences were selected and annotated for the categories person, organization, location and miscellaneous.
There are two annotated versions of the corpus. When preparing the tag-for-meaning annotation, our linguists took into consideration the context in which the Named Entity under investigation occurred, thus, it was not the primary sense of the Named Entity that determined the tag (e.g. Manchester=LOC) but its contextual reference (e.g. Manchester won the Premier League=ORG). As for tag-for-tag annotation, these cases were not differentiated: tags were always given on the basis of the primary sense.
Statistical data on Named Entities occurring in the corpus:
```
       | tag-for-meaning | tag-for-tag
------ | --------------- | -----------
non NE | 200067 |
PER | 8101 | 8121
ORG | 8782 | 9480
LOC | 5049 | 5391
MISC | 1917 | 854
```
## Metadata
```yaml
dataset_info:
- config_name: business
  features:
  - name: id
    dtype: string
  - name: tokens
    sequence: string
  - name: ner_tags
    sequence:
      class_label:
        names:
          0: O
          1: B-PER
          2: I-PER
          3: B-ORG
          4: I-ORG
          5: B-LOC
          6: I-LOC
          7: B-MISC
          8: I-MISC
  - name: document_id
    dtype: string
  - name: sentence_id
    dtype: string
  splits:
  - name: original
    num_bytes: 4452207
    num_examples: 9573
  - name: test
    num_bytes: 856798
    num_examples: 1915
  - name: train
    num_bytes: 3171931
    num_examples: 6701
  - name: validation
    num_bytes: 423478
    num_examples: 957
  download_size: 0
  dataset_size: 8904414
- config_name: criminal
  features:
  - name: id
    dtype: string
  - name: tokens
    sequence: string
  - name: ner_tags
    sequence:
      class_label:
        names:
          0: O
          1: B-PER
          2: I-PER
          3: B-ORG
          4: I-ORG
          5: B-LOC
          6: I-LOC
          7: B-MISC
          8: I-MISC
  - name: document_id
    dtype: string
  - name: sentence_id
    dtype: string
  splits:
  - name: original
    num_bytes: 2807970
    num_examples: 5375
  - name: test
    num_bytes: 520959
    num_examples: 1089
  - name: train
    num_bytes: 1989662
    num_examples: 3760
  - name: validation
    num_bytes: 297349
    num_examples: 526
  download_size: 0
  dataset_size: 5615940
```
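The `ner_tags` feature stores integer class-label ids following the BIO scheme listed in the metadata. A minimal decoding sketch (plain Python; the example sentence and its tags are hypothetical, only the id-to-label mapping comes from the metadata):

```python
# Id-to-label mapping as declared in the dataset_info above (BIO scheme).
NER_LABELS = [
    "O", "B-PER", "I-PER", "B-ORG", "I-ORG",
    "B-LOC", "I-LOC", "B-MISC", "I-MISC",
]

def decode_tags(tag_ids):
    """Map a sequence of class-label ids back to their string names."""
    return [NER_LABELS[i] for i in tag_ids]

# Hypothetical four-token sentence with the first token tagged as a location.
print(decode_tags([5, 0, 0, 0]))  # ['B-LOC', 'O', 'O', 'O']
```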
| [
-0.6016581654548645,
-0.6270418167114258,
0.3140124976634979,
0.13182257115840912,
-0.13815104961395264,
-0.10012032836675644,
-0.3576200306415558,
-0.6477452516555786,
0.23143912851810455,
0.37834012508392334,
-0.13514849543571472,
-0.6752381324768066,
-0.5188816785812378,
0.4641273021697... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
JoBeer/eclassCorpus | JoBeer | 2023-01-07T12:35:44Z | 14 | 0 | null | [
"region:us"
] | 2023-01-07T12:35:44Z | 2022-11-05T11:10:39.000Z | 2022-11-05T11:10:39 | ---
dataset_info:
features:
- name: did
dtype: int64
- name: query
dtype: string
- name: name
dtype: string
- name: datatype
dtype: string
- name: unit
dtype: string
- name: IRDI
dtype: string
- name: metalabel
dtype: int64
splits:
- name: train
num_bytes: 137123
num_examples: 672
download_size: 48203
dataset_size: 137123
---
# Dataset Card for "eclassCorpus"
This dataset consists of names and descriptions of ECLASS-standard pump properties. It can be used to evaluate models on the task of matching paraphrases to the ECLASS-standard pump properties based on their semantics. | [
-0.4798245131969452,
-0.2840127646923065,
-0.08320658653974533,
-0.1419675052165985,
-0.6073609590530396,
0.09598939120769501,
0.25496619939804077,
-0.19330759346485138,
0.20571961998939514,
0.8056101202964783,
-0.6789194941520691,
-0.6411996483802795,
-0.3030739426612854,
0.04842458665370... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
JoBeer/eclassQuery | JoBeer | 2023-01-07T12:34:03Z | 14 | 0 | null | [
"task_categories:sentence-similarity",
"size_categories:1K<n<10K",
"language:en",
"region:us"
] | 2023-01-07T12:34:03Z | 2022-11-05T11:14:01.000Z | 2022-11-05T11:14:01 | ---
dataset_info:
features:
- name: did
dtype: int64
- name: query
dtype: string
- name: name
dtype: string
- name: duplicate_id
dtype: int64
- name: metalabel
dtype: int64
splits:
- name: train
num_bytes: 147176
num_examples: 1040
- name: eval
num_bytes: 100846
num_examples: 671
download_size: 113268
dataset_size: 248022
task_categories:
- sentence-similarity
language:
- en
size_categories:
- 1K<n<10K
---
# Dataset Card for "eclassQuery"
This dataset consists of paraphrases of ECLASS-standard pump properties. It can be used to evaluate models on the task of matching these paraphrases to the actual ECLASS-standard pump properties based on their semantics. | [
-0.37287086248397827,
-0.37527355551719666,
0.03712647780776024,
-0.20337817072868347,
-0.6450185179710388,
-0.03901350125670433,
0.3384905159473419,
-0.17615248262882233,
0.10280191153287888,
1.0019932985305786,
-0.47181007266044617,
-0.7139757871627808,
-0.1454993188381195,
-0.0795709267... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Short-Answer-Feedback/saf_legal_domain_german | Short-Answer-Feedback | 2023-03-31T11:47:38Z | 14 | 2 | null | [
"task_categories:text2text-generation",
"annotations_creators:expert-generated",
"language_creators:other",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:de",
"license:cc-by-4.0",
"short answer feedback",
"legal domain",
"region:us"
] | 2023-03-31T11:47:38Z | 2022-11-09T10:35:55.000Z | 2022-11-09T10:35:55 | ---
pretty_name: SAF - Legal Domain - German
annotations_creators:
- expert-generated
language:
- de
language_creators:
- other
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
tags:
- short answer feedback
- legal domain
task_categories:
- text2text-generation
dataset_info:
features:
- name: id
dtype: string
- name: question
dtype: string
- name: reference_answer
dtype: string
- name: provided_answer
dtype: string
- name: answer_feedback
dtype: string
- name: verification_feedback
dtype: string
- name: error_class
dtype: string
- name: score
dtype: float64
splits:
- name: train
num_bytes: 2142112
num_examples: 1596
- name: validation
num_bytes: 550206
num_examples: 400
- name: test_unseen_answers
num_bytes: 301087
num_examples: 221
- name: test_unseen_questions
num_bytes: 360616
num_examples: 275
download_size: 484808
dataset_size: 3354021
license: cc-by-4.0
---
# Dataset Card for "saf_legal_domain_german"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Additional Information](#additional-information)
- [Contributions](#contributions)
## Dataset Description
### Dataset Summary
This Short Answer Feedback (SAF) dataset contains 19 German questions in the domain of German social law (with reference answers). The idea of constructing a bilingual (English and German) short answer dataset as a way to remedy the lack of content-focused feedback datasets was introduced in [Your Answer is Incorrect... Would you like to know why? Introducing a Bilingual Short Answer Feedback Dataset](https://aclanthology.org/2022.acl-long.587) (Filighera et al., ACL 2022). Please refer to [saf_micro_job_german](https://huggingface.co/datasets/Short-Answer-Feedback/saf_micro_job_german) and [saf_communication_networks_english](https://huggingface.co/datasets/Short-Answer-Feedback/saf_communication_networks_english) for similarly constructed datasets that can be used for SAF tasks.
### Supported Tasks and Leaderboards
- `short_answer_feedback`: The dataset can be used to train a Text2Text Generation model from HuggingFace transformers in order to generate automatic short answer feedback.
### Languages
The questions, reference answers, provided answers and the answer feedback in the dataset are written in German.
## Dataset Structure
### Data Instances
An example of an entry of the training split looks as follows.
```
{
"id": "1",
"question": "Ist das eine Frage?",
"reference_answer": "Ja, das ist eine Frage.",
"provided_answer": "Ich bin mir sicher, dass das eine Frage ist.",
"answer_feedback": "Korrekt.",
"verification_feedback": "Correct",
"error_class": "Keine",
"score": 1
}
```
### Data Fields
The data fields are the same among all splits.
- `id`: a `string` feature (UUID4 in HEX format).
- `question`: a `string` feature representing a question.
- `reference_answer`: a `string` feature representing a reference answer to the question.
- `provided_answer`: a `string` feature representing an answer that was provided for a particular question.
- `answer_feedback`: a `string` feature representing the feedback given to the provided answers.
- `verification_feedback`: a `string` feature representing an automatic labeling of the score. It can be `Correct` (`score` = 1), `Incorrect` (`score` = 0) or `Partially correct` (all intermediate scores).
- `error_class`: a `string` feature representing the type of error identified in the case of a not completely correct answer.
- `score`: a `float64` feature (between 0 and 1) representing the score given to the provided answer.
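The relationship between `score` and `verification_feedback` described above can be sketched as a small helper (illustrative only, not part of the dataset):

```python
def verification_feedback(score: float) -> str:
    """Label a score per the rule above: 1 -> Correct, 0 -> Incorrect,
    any intermediate value -> Partially correct."""
    if score == 1.0:
        return "Correct"
    if score == 0.0:
        return "Incorrect"
    return "Partially correct"

print(verification_feedback(1.0))  # Correct
print(verification_feedback(0.5))  # Partially correct
```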
### Data Splits
The dataset comprises four data splits.
- `train`: used for training, contains a set of questions and the provided answers to them.
- `validation`: used for validation, contains a set of questions and the provided answers to them (derived from the original training set).
- `test_unseen_answers`: used for testing, contains unseen answers to the questions present in the `train` split.
- `test_unseen_questions`: used for testing, contains unseen questions that do not appear in the `train` split.
| Split |train|validation|test_unseen_answers|test_unseen_questions|
|-------------------|----:|---------:|------------------:|--------------------:|
|Number of instances| 1596| 400| 221| 275|
## Additional Information
### Contributions
Thanks to [@JohnnyBoy2103](https://github.com/JohnnyBoy2103) for adding this dataset. | [
-0.706130862236023,
-0.7929450273513794,
0.16362076997756958,
0.36053746938705444,
-0.10096663981676102,
-0.10587462037801743,
-0.3126392662525177,
-0.26977258920669556,
0.36781173944473267,
0.5473155975341797,
-1.0703259706497192,
-0.6003760099411011,
-0.4334091544151306,
0.34154722094535... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
AnonymousSub/recipe1m_vit_base_embeddings | AnonymousSub | 2022-11-12T20:06:36Z | 14 | 0 | null | [
"region:us"
] | 2022-11-12T20:06:36Z | 2022-11-12T20:05:54.000Z | 2022-11-12T20:05:54 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622263669967651,
0.43461522459983826,
-0.52829909324646,
0.7012971639633179,
0.7915719747543335,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104475975036621,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Javtor/biomedical-topic-categorization-validation | Javtor | 2022-11-13T03:58:45Z | 14 | 0 | null | [
"region:us"
] | 2022-11-13T03:58:45Z | 2022-11-13T03:52:44.000Z | 2022-11-13T03:52:44 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622263669967651,
0.43461522459983826,
-0.52829909324646,
0.7012971639633179,
0.7915719747543335,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104475975036621,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
bigbio/cellfinder | bigbio | 2022-12-22T15:44:19Z | 14 | 1 | null | [
"multilinguality:monolingual",
"language:en",
"license:cc-by-sa-3.0",
"region:us"
] | 2022-12-22T15:44:19Z | 2022-11-13T22:07:39.000Z | 2022-11-13T22:07:39 |
---
language:
- en
bigbio_language:
- English
license: cc-by-sa-3.0
multilinguality: monolingual
bigbio_license_shortname: CC_BY_SA_3p0
pretty_name: CellFinder
homepage: https://www.informatik.hu-berlin.de/de/forschung/gebiete/wbi/resources/cellfinder/
bigbio_pubmed: True
bigbio_public: True
bigbio_tasks:
- NAMED_ENTITY_RECOGNITION
---
# Dataset Card for CellFinder
## Dataset Description
- **Homepage:** https://www.informatik.hu-berlin.de/de/forschung/gebiete/wbi/resources/cellfinder/
- **Pubmed:** True
- **Public:** True
- **Tasks:** NER
The CellFinder project aims to create a stem cell data repository by linking information from existing public databases and by performing text mining on the research literature. The first version of the corpus is composed of 10 full-text documents containing more than 2,100 sentences, 65,000 tokens and 5,200 entity annotations. The corpus has been annotated with six types of entities (anatomical parts, cell components, cell lines, cell types, genes/proteins and species) with an overall inter-annotator agreement of around 80%.
See: https://www.informatik.hu-berlin.de/de/forschung/gebiete/wbi/resources/cellfinder/
## Citation Information
```
@inproceedings{neves2012annotating,
title = {Annotating and evaluating text for stem cell research},
author = {Neves, Mariana and Damaschun, Alexander and Kurtz, Andreas and Leser, Ulf},
year = 2012,
booktitle = {
Proceedings of the Third Workshop on Building and Evaluation Resources for
Biomedical Text Mining\ (BioTxtM 2012) at Language Resources and Evaluation
(LREC). Istanbul, Turkey
},
pages = {16--23},
organization = {Citeseer}
}
```
| [
-0.22206968069076538,
-0.21483734250068665,
0.23309051990509033,
0.3301146626472473,
-0.38673293590545654,
0.11119193583726883,
0.11780685186386108,
-0.461123526096344,
0.1514742076396942,
0.28260499238967896,
-0.6710581183433533,
-1.0180554389953613,
-0.3457651138305664,
0.462225824594497... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
bigbio/mediqa_nli | bigbio | 2022-12-22T15:45:31Z | 14 | 0 | null | [
"multilinguality:monolingual",
"language:en",
"license:other",
"region:us"
] | 2022-12-22T15:45:31Z | 2022-11-13T22:09:39.000Z | 2022-11-13T22:09:39 |
---
language:
- en
bigbio_language:
- English
license: other
multilinguality: monolingual
bigbio_license_shortname: PHYSIONET_LICENSE_1p5
pretty_name: MEDIQA NLI
homepage: https://physionet.org/content/mednli-bionlp19/1.0.1/
bigbio_pubmed: False
bigbio_public: False
bigbio_tasks:
- TEXTUAL_ENTAILMENT
---
# Dataset Card for MEDIQA NLI
## Dataset Description
- **Homepage:** https://physionet.org/content/mednli-bionlp19/1.0.1/
- **Pubmed:** False
- **Public:** False
- **Tasks:** TE
Natural Language Inference (NLI) is the task of determining whether a given hypothesis can be
inferred from a given premise. Also known as Recognizing Textual Entailment (RTE), this task has
enjoyed popularity among researchers for some time. However, almost all datasets for this task
focused on open domain data such as news texts, blogs, and so on. To address this gap, the MedNLI
dataset was created for language inference in the medical domain. MedNLI is a derived dataset with
data sourced from MIMIC-III v1.4. In order to stimulate research for this problem, a shared task on
Medical Inference and Question Answering (MEDIQA) was organized at the workshop for biomedical
natural language processing (BioNLP) 2019. The dataset provided herein is a test set of 405 premise
hypothesis pairs for the NLI challenge in the MEDIQA shared task. Participants of the shared task
are expected to use the MedNLI data for development of their models and this dataset was used as an
unseen dataset for scoring each participant submission.
## Citation Information
```
@misc{https://doi.org/10.13026/gtv4-g455,
title = {MedNLI for Shared Task at ACL BioNLP 2019},
author = {Shivade, Chaitanya},
year = 2019,
publisher = {physionet.org},
doi = {10.13026/GTV4-G455},
url = {https://physionet.org/content/mednli-bionlp19/}
}
```
| [
-0.06343677639961243,
-0.5851808190345764,
0.47133955359458923,
0.2934090793132782,
-0.02932831272482872,
-0.2599785625934601,
-0.03547905758023262,
-0.4789934456348419,
0.4510432481765747,
0.4512697756290436,
-0.8327675461769104,
-0.5884477496147156,
-0.27072176337242126,
0.42590317130088... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
bigbio/multi_xscience | bigbio | 2022-12-22T15:45:44Z | 14 | 1 | null | [
"multilinguality:monolingual",
"language:en",
"license:mit",
"arxiv:2010.14235",
"region:us"
] | 2022-12-22T15:45:44Z | 2022-11-13T22:10:18.000Z | 2022-11-13T22:10:18 |
---
language:
- en
bigbio_language:
- English
license: mit
multilinguality: monolingual
bigbio_license_shortname: MIT
pretty_name: Multi-XScience
homepage: https://github.com/yaolu/Multi-XScience
bigbio_pubmed: False
bigbio_public: True
bigbio_tasks:
- PARAPHRASING
- SUMMARIZATION
---
# Dataset Card for Multi-XScience
## Dataset Description
- **Homepage:** https://github.com/yaolu/Multi-XScience
- **Pubmed:** False
- **Public:** True
- **Tasks:** PARA,SUM
Multi-document summarization is a challenging task for which few large-scale datasets exist.
We propose Multi-XScience, a large-scale multi-document summarization dataset created from scientific articles.
Multi-XScience introduces a challenging multi-document summarization task: writing the related-work section
of a paper based on its abstract and the articles it references. Our work is inspired by extreme summarization,
a dataset construction protocol that favours abstractive modeling approaches. Descriptive statistics and
empirical results---using several state-of-the-art models trained on the Multi-XScience dataset---reveal that Multi-XScience is well suited for abstractive models.
## Citation Information
```
@misc{https://doi.org/10.48550/arxiv.2010.14235,
doi = {10.48550/ARXIV.2010.14235},
url = {https://arxiv.org/abs/2010.14235},
author = {Lu, Yao and Dong, Yue and Charlin, Laurent},
keywords = {Computation and Language (cs.CL), Artificial Intelligence (cs.AI), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Multi-XScience: A Large-scale Dataset for Extreme Multi-document Summarization of Scientific Articles},
publisher = {arXiv},
year = {2020},
copyright = {arXiv.org perpetual, non-exclusive license}
}
```
| [
-0.16487444937229156,
-0.27621695399284363,
0.37447986006736755,
0.006003924645483494,
-0.1389216184616089,
0.1394302248954773,
-0.20874038338661194,
-0.4264749586582184,
0.45084577798843384,
0.24971401691436768,
-0.44160985946655273,
-0.5519263744354248,
-0.5897809863090515,
0.26516905426... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
bigbio/scielo | bigbio | 2022-12-22T15:46:40Z | 14 | 1 | null | [
"multilinguality:multilingual",
"language:en",
"language:es",
"language:pt",
"license:cc-by-4.0",
"region:us"
] | 2022-12-22T15:46:40Z | 2022-11-13T22:12:07.000Z | 2022-11-13T22:12:07 |
---
language:
- en
- es
- pt
bigbio_language:
- English
- Spanish
- Portuguese
license: cc-by-4.0
multilinguality: multilingual
bigbio_license_shortname: CC_BY_4p0
pretty_name: SciELO
homepage: https://sites.google.com/view/felipe-soares/datasets#h.p_92uSCyAjWSRB
bigbio_pubmed: False
bigbio_public: True
bigbio_tasks:
- TRANSLATION
---
# Dataset Card for SciELO
## Dataset Description
- **Homepage:** https://sites.google.com/view/felipe-soares/datasets#h.p_92uSCyAjWSRB
- **Pubmed:** False
- **Public:** True
- **Tasks:** TRANSL
A parallel corpus of full-text scientific articles collected from Scielo database in the following languages: English, Portuguese and Spanish. The corpus is sentence aligned for all language pairs, as well as trilingual aligned for a small subset of sentences. Alignment was carried out using the Hunalign algorithm.
## Citation Information
```
@inproceedings{soares2018large,
title = {A Large Parallel Corpus of Full-Text Scientific Articles},
author = {Soares, Felipe and Moreira, Viviane and Becker, Karin},
year = 2018,
booktitle = {
Proceedings of the Eleventh International Conference on Language Resources
and Evaluation (LREC-2018)
}
}
```
| [
0.16614852845668793,
-0.17357733845710754,
0.4656078517436981,
0.745596706867218,
-0.34621745347976685,
0.2031286507844925,
-0.28786784410476685,
-0.49579721689224243,
0.49233636260032654,
0.3918009102344513,
-0.4485955834388733,
-0.7933021187782288,
-0.5372648239135742,
0.6539798378944397... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
severo/mnist | severo | 2022-11-03T16:46:54Z | 14 | 0 | mnist | [
"task_categories:image-classification",
"task_ids:multi-class-image-classification",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:extended|other-nist",
"language:en",
"license:mit",
"region:us"
] | 2022-11-03T16:46:54Z | 2022-11-17T16:33:16.000Z | 2022-11-17T16:33:16 | ---
annotations_creators:
- expert-generated
language_creators:
- found
language:
- en
license:
- mit
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- extended|other-nist
task_categories:
- image-classification
task_ids:
- multi-class-image-classification
paperswithcode_id: mnist
pretty_name: MNIST
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
0: '0'
1: '1'
2: '2'
3: '3'
4: '4'
5: '5'
6: '6'
7: '7'
8: '8'
9: '9'
config_name: mnist
splits:
- name: test
num_bytes: 2916440
num_examples: 10000
- name: train
num_bytes: 17470848
num_examples: 60000
download_size: 11594722
dataset_size: 20387288
---
# Dataset Card for MNIST
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** http://yann.lecun.com/exdb/mnist/
- **Repository:**
- **Paper:** MNIST handwritten digit database by Yann LeCun, Corinna Cortes, and CJ Burges
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
The MNIST dataset consists of 70,000 28x28 black-and-white images of handwritten digits extracted from two NIST databases. There are 60,000 images in the training set and 10,000 images in the test set, with one class per digit for a total of 10 classes and 7,000 images (6,000 training images and 1,000 test images) per class.
Half of the images were drawn by Census Bureau employees and the other half by high school students (this split is evenly distributed across the training and test sets).
### Supported Tasks and Leaderboards
- `image-classification`: The goal of this task is to classify a given image of a handwritten digit into one of 10 classes representing integer values from 0 to 9, inclusive. The leaderboard is available [here](https://paperswithcode.com/sota/image-classification-on-mnist).
### Languages
English
## Dataset Structure
### Data Instances
A data point comprises an image and its label:
```
{
'image': <PIL.PngImagePlugin.PngImageFile image mode=L size=28x28 at 0x276021F6DD8>,
'label': 5
}
```
### Data Fields
- `image`: A `PIL.Image.Image` object containing the 28x28 image. Note that when accessing the image column: `dataset[0]["image"]` the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. Thus it is important to first query the sample index before the `"image"` column, *i.e.* `dataset[0]["image"]` should **always** be preferred over `dataset["image"][0]`
- `label`: an integer between 0 and 9 representing the digit.
### Data Splits
The data is split into a training set and a test set. All the images in the test set were drawn by different individuals than those in the training set. The training set contains 60,000 images and the test set 10,000 images.
## Dataset Creation
### Curation Rationale
The MNIST database was created to provide a testbed for people wanting to try pattern recognition methods or machine learning algorithms while spending minimal effort on preprocessing and formatting. Images of the original dataset (NIST) were in two groups, one consisting of images drawn by Census Bureau employees and one consisting of images drawn by high school students. In NIST, the training set was built by grouping all the images of the Census Bureau employees, and the test set was built by grouping the images from the high school students.
The goal in building MNIST was to have a training and test set following the same distributions, so the training set contains 30,000 images drawn by Census Bureau employees and 30,000 images drawn by high school students, and the test set contains 5,000 images of each group. The curators took care to make sure all the images in the test set were drawn by different individuals than the images in the training set.
### Source Data
#### Initial Data Collection and Normalization
The original images from NIST were size normalized to fit a 20x20 pixel box while preserving their aspect ratio. The resulting images contain grey levels (i.e., pixels don't simply have a value of black and white, but a level of greyness from 0 to 255) as a result of the anti-aliasing technique used by the normalization algorithm. The images were then centered in a 28x28 image by computing the center of mass of the pixels, and translating the image so as to position this point at the center of the 28x28 field.
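The centering step described above can be sketched in plain Python. This is an illustrative reconstruction, not the original preprocessing code; `center_of_mass` and `place_centered` are hypothetical helper names introduced here:

```python
# Illustrative sketch of the center-of-mass centering step described above.
# A 20x20 glyph is pasted into a 28x28 field, shifted so that its
# intensity-weighted center of mass lands on the center of the field.

def center_of_mass(img):
    """Return the (row, col) intensity-weighted centroid of a 2D grid."""
    total = sum(sum(row) for row in img)
    r = sum(i * sum(row) for i, row in enumerate(img)) / total
    c = sum(j * v for row in img for j, v in enumerate(row)) / total
    return r, c

def place_centered(glyph, field_size=28):
    """Paste `glyph` into a field_size x field_size canvas, translated so
    that its center of mass sits at the canvas center."""
    cr, cc = center_of_mass(glyph)
    # Integer offset that moves the glyph centroid onto the canvas center.
    dr = round((field_size - 1) / 2 - cr)
    dc = round((field_size - 1) / 2 - cc)
    canvas = [[0] * field_size for _ in range(field_size)]
    for i, row in enumerate(glyph):
        for j, v in enumerate(row):
            r, c = i + dr, j + dc
            if 0 <= r < field_size and 0 <= c < field_size:
                canvas[r][c] = v
    return canvas
```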
#### Who are the source language producers?
Half of the source images were drawn by Census Bureau employees, half by high school students. According to the dataset curator, the images from the first group are more easily recognizable.
### Annotations
#### Annotation process
The images were not annotated after their creation: the image creators annotated their images with the corresponding label after drawing them.
#### Who are the annotators?
Same as the source data creators.
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
Chris Burges, Corinna Cortes and Yann LeCun
### Licensing Information
MIT Licence
### Citation Information
```
@article{lecun2010mnist,
title={MNIST handwritten digit database},
author={LeCun, Yann and Cortes, Corinna and Burges, CJ},
journal={ATT Labs [Online]. Available: http://yann.lecun.com/exdb/mnist},
volume={2},
year={2010}
}
```
### Contributions
Thanks to [@sgugger](https://github.com/sgugger) for adding this dataset. | [
-0.48108309507369995,
-0.35934439301490784,
0.08701934665441513,
0.06516963988542557,
-0.4660346806049347,
0.06910192966461182,
-0.07527599483728409,
-0.4236864447593689,
0.5878740549087524,
0.6485567092895508,
-0.4841615855693817,
-0.7754755616188049,
-0.6777386665344238,
0.18783693015575... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
chenrm/illusion-cards | chenrm | 2022-11-20T19:14:34Z | 14 | 0 | null | [
"region:us"
] | 2022-11-20T19:14:34Z | 2022-11-20T16:56:31.000Z | 2022-11-20T16:56:31 | ---
dataset_info:
features:
- name: image
dtype: image
splits:
- name: train
num_bytes: 41920616810.06
num_examples: 73190
download_size: 37899199783
dataset_size: 41920616810.06
---
# Dataset Card for "illusion-cards"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.5649044513702393,
-0.3542952835559845,
0.1893540769815445,
0.24168452620506287,
-0.16430975496768951,
-0.0174077320843935,
0.4239949882030487,
-0.4352949559688568,
1.163879632949829,
0.5485642552375793,
-0.7526400089263916,
-0.553478479385376,
-0.583672821521759,
-0.3715920150279999,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
autoevaluate/autoeval-eval-futin__feed-sen_en-2f01d7-2175769991 | autoevaluate | 2022-11-21T10:32:30Z | 14 | 0 | null | [
"autotrain",
"evaluation",
"region:us"
] | 2022-11-21T10:32:30Z | 2022-11-21T10:04:58.000Z | 2022-11-21T10:04:58 | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- futin/feed
eval_info:
task: text_zero_shot_classification
model: facebook/opt-1.3b
metrics: []
dataset_name: futin/feed
dataset_config: sen_en
dataset_split: test
col_mapping:
text: text
classes: classes
target: target
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: facebook/opt-1.3b
* Dataset: futin/feed
* Config: sen_en
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@futin](https://huggingface.co/futin) for evaluating this model. | [
-0.2938690185546875,
-0.46765968203544617,
0.31711429357528687,
0.03542553260922432,
0.0473528727889061,
-0.18207986652851105,
0.01109559740871191,
-0.4732665419578552,
0.13633346557617188,
0.3728671669960022,
-1.0195077657699585,
-0.21140234172344208,
-0.6411057114601135,
-0.0165381096303... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
dnwalkup/db_regularization_images | dnwalkup | 2022-11-23T10:58:35Z | 14 | 0 | null | [
"license:other",
"region:us"
] | 2022-11-23T10:58:35Z | 2022-11-21T10:29:16.000Z | 2022-11-21T10:29:16 | ---
license: other
---
| [
-0.12853392958641052,
-0.18616779148578644,
0.6529127955436707,
0.49436280131340027,
-0.19319361448287964,
0.23607419431209564,
0.36072003841400146,
0.050563063472509384,
0.579365611076355,
0.7400140762329102,
-0.6508104205131531,
-0.23783954977989197,
-0.7102249264717102,
-0.0478260256350... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
kojima-r/birddb_small2 | kojima-r | 2022-11-21T12:22:41Z | 14 | 0 | null | [
"region:us"
] | 2022-11-21T12:22:41Z | 2022-11-21T12:18:24.000Z | 2022-11-21T12:18:24 | ---
dataset_info:
features:
- name: audio
dtype: audio
splits:
- name: train
num_bytes: 1011384430.775
num_examples: 77501
download_size: 2139041561
dataset_size: 1011384430.775
---
# Dataset Card for "birddb_small2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.5575853586196899,
-0.20409716665744781,
0.0007361288880929351,
0.18377630412578583,
-0.21667802333831787,
-0.3926927149295807,
0.2699219286441803,
-0.4120357036590576,
0.6884723901748657,
0.25896182656288147,
-0.7115408778190613,
-0.4969741404056549,
-0.47822871804237366,
-0.04822873696... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
DTU54DL/common-voice | DTU54DL | 2022-11-21T22:28:56Z | 14 | 0 | acronym-identification | [
"task_categories:token-classification",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"license:mit",
"region:us"
] | 2022-11-21T22:28:56Z | 2022-11-21T17:32:49.000Z | 2022-11-21T17:32:49 | ---
annotations_creators:
- expert-generated
language:
- en
language_creators:
- found
license:
- mit
multilinguality:
- monolingual
paperswithcode_id: acronym-identification
pretty_name: Acronym Identification Dataset
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- token-classification
task_ids:
- token-classification-other-acronym-identification
train-eval-index:
- col_mapping:
labels: tags
tokens: tokens
config: default
splits:
eval_split: test
task: token-classification
task_id: entity_extraction
---
# Dataset Card for [Dataset Name]
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
[More Information Needed]
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset.
| [
-0.47841677069664,
-0.5084842443466187,
0.14602938294410706,
0.278889000415802,
-0.2170247584581375,
0.24832050502300262,
-0.3366999626159668,
-0.375893235206604,
0.6720380187034607,
0.6457639932632446,
-0.9167346358299255,
-1.2200126647949219,
-0.7551794052124023,
0.07273735105991364,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Twitter/SignedGraphs | Twitter | 2022-11-22T03:32:19Z | 14 | 0 | null | [
"license:cc-by-4.0",
"arxiv:2201.11675",
"region:us"
] | 2022-11-22T03:32:19Z | 2022-11-21T20:08:09.000Z | 2022-11-21T20:08:09 | ---
license: cc-by-4.0
---
# Learning Stance Embeddings from Signed Social Graphs
[](http://makeapullrequest.com)
[](https://arxiv.org/abs/2201.11675)
This repo contains the datasets from our paper [Learning Stance Embeddings from Signed Social Graphs](https://arxiv.org/abs/2201.11675). <br />
[[PDF]](https://arxiv.org/pdf/2201.11675.pdf)
[[HuggingFace Datasets]](https://huggingface.co/Twitter)
<a rel="license" href="http://creativecommons.org/licenses/by/4.0/"><img alt="Creative Commons License" style="border-width:0" src="https://i.creativecommons.org/l/by/4.0/88x31.png" /></a><br />This work is licensed under a <a rel="license" href="http://creativecommons.org/licenses/by/4.0/">Creative Commons Attribution 4.0 International License</a>.
## Overview
A key challenge in social network analysis is understanding the position, or stance, of people in the graph on a large set of topics. In such social graphs, modeling (dis)agreement patterns across a range of correlated topics may be beneficial. For example, disagreement on one topic may make disagreement (or agreement) more likely for related topics.
We open source **two Twitter signed, topical graph datasets**. One dataset, **TwitterSG**, labels (dis)agreements using engagements between users via tweets to derive topic-informed, signed edges. The other, **BirdwatchSG**, leverages community reports on misinformation and misleading content.
## Datasets
### TwitterSG
Twitter Signed Graph, or TwitterSG, is a signed, directed, edge-attributed graph of users, drawn from Twitter interactions. TwitterSG contains 753,944 nodes (users), 200 topics and 12,848,093 edges. It is the largest publicly available user-to-user signed social graph (∼6x larger than the Epinions graph).
A positive edge exists from user 𝐴 to user 𝐵 if user 𝐴 liked a tweet posted by user 𝐵. A negative edge exists from user 𝐴 to user 𝐵 if user 𝐴 expressed opposition towards user 𝐵’s tweet, e.g., by replying *I disagree with you*. The full list of opposition keywords is specified [here](https://github.com/lejohnyjohn/learning-stance-embeddings-from-signed-social-graphs/tree/main/datasets). The topic of an edge from user 𝐴 to user 𝐵 is determined by the topic of user 𝐵’s tweet.
Tweets' topics were inferred with a topic classifier used in production by Twitter. The topics provided in the dataset are all related to sports (e.g., sports teams, players, managers, or events), and the tweets related to these interactions were published between 20th May (Ice Hockey World Championships) and 8th August 2021 (closing date of the 2020 Tokyo Olympic Games).
9.6% of edges are negative (opposition) and 90.4% are positive. There may be several edges between two nodes (several interactions, several topics). The data format is displayed below.
| source_idx | target_idx | topic_idx | topic | rating |
| ------------- | ------------- | ---------- | ------ | ---- |
| 1 | 6 | 19 | Copa America | +1 |
| 1 | 6 | 97 | NFL | -1 |
| 4 | 5 | 23 |Kylian Mbappe | +1 |
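As a rough sketch of how rows like those above might be consumed in code (illustrative only; the field names mirror the table columns, and `negative_fraction` is a hypothetical helper, not part of any released tooling):

```python
# Minimal sketch: represent signed, topic-attributed edges and compute the
# share of opposition edges. The sample rows are taken from the table above.
from collections import namedtuple

SignedEdge = namedtuple("SignedEdge", "source_idx target_idx topic_idx topic rating")

edges = [
    SignedEdge(1, 6, 19, "Copa America", +1),
    SignedEdge(1, 6, 97, "NFL", -1),
    SignedEdge(4, 5, 23, "Kylian Mbappe", +1),
]

def negative_fraction(edges):
    """Fraction of edges expressing opposition (rating == -1)."""
    return sum(e.rating == -1 for e in edges) / len(edges)
```

Note that, as the card states, the same ordered pair of users can appear on several edges with different topics, so per-pair aggregation must not assume uniqueness.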
### BirdwatchSG
Birdwatch Signed Graph, or BirdwatchSG, is a signed, directed, edge-attributed graph of users, drawn from note ratings on the Birdwatch pilot. The graph contains 2,987 nodes (users), 1,020 topics and 441,986 edges.
[Birdwatch pilot](https://blog.twitter.com/en_us/topics/product/2021/introducing-birdwatch-a-community-based-approach-to-misinformation) was launched by Twitter in January 2021 in the USA to address misleading information on the platform, in a community-driven fashion: the Birdwatch participants can identify information they believe is misleading in tweets and write notes that provide informative context. They can also rate the helpfulness (either *helpful*, *somewhat helpful*, or *not helpful*) of notes added by other contributors. All Birdwatch contributions are publicly available on the [Birdwatch site](https://twitter.github.io/birdwatch/) for anyone in the USA.
Using Birdwatch data from January to July 2021, a positive (negative) edge is created from participant 𝑈1 to 𝑈2 if participant 𝑈1 rated a note written by participant 𝑈2 as *helpful* (*not helpful*). The *somewhat helpful* ratings were filtered out. The topic associated with an edge is the topic inferred from the tweet the note refers to.
36.9% of edges are negative (opposition) and 63.1% are positive. There may be several edges between two nodes (several interactions, several topics).
| source_idx | target_idx | topic_idx | topic | rating |
| ------------- | ------------- | ---------- | ------ | ---- |
| 10 | 6 | 443 | US Politics | +1 |
| 7 | 14 | 12 | Ted Cruz | -1 |
| 1 | 11 | 1003 | COVID-19 | +1 |
## Citation
If you use our datasets in your work, please cite the following:
```bib
@article{pougue2022learning,
title={Learning Stance Embeddings from Signed Social Graphs},
author={Pougu{\'e}-Biyong, John and Gupta, Akshay and Haghighi, Aria and El-Kishky, Ahmed},
journal={arXiv preprint arXiv:2201.11675},
year={2022}
}
``` | [
-0.2131824940443039,
-0.5591050386428833,
0.4240386486053467,
0.22818920016288757,
-0.54451584815979,
0.15814466774463654,
0.09476609528064728,
-0.45024532079696655,
0.7309881448745728,
0.008493268862366676,
-0.5744766592979431,
-0.9137645363807678,
-0.8354966044425964,
-0.0965360030531883... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Javtor/biomedical-topic-categorization-2022only-cased | Javtor | 2022-12-10T23:56:01Z | 14 | 0 | null | [
"region:us"
] | 2022-12-10T23:56:01Z | 2022-11-22T22:29:27.000Z | 2022-11-22T22:29:27 | ---
dataset_info:
features:
- name: Title/Abstract
dtype: string
- name: T001
dtype: int64
- name: T002
dtype: int64
- name: T004
dtype: int64
- name: T005
dtype: int64
- name: T007
dtype: int64
- name: T008
dtype: int64
- name: T010
dtype: int64
- name: T011
dtype: int64
- name: T012
dtype: int64
- name: T013
dtype: int64
- name: T014
dtype: int64
- name: T015
dtype: int64
- name: T016
dtype: int64
- name: T017
dtype: int64
- name: T018
dtype: int64
- name: T019
dtype: int64
- name: T020
dtype: int64
- name: T022
dtype: int64
- name: T023
dtype: int64
- name: T024
dtype: int64
- name: T025
dtype: int64
- name: T026
dtype: int64
- name: T028
dtype: int64
- name: T029
dtype: int64
- name: T030
dtype: int64
- name: T031
dtype: int64
- name: T032
dtype: int64
- name: T033
dtype: int64
- name: T034
dtype: int64
- name: T037
dtype: int64
- name: T038
dtype: int64
- name: T039
dtype: int64
- name: T040
dtype: int64
- name: T041
dtype: int64
- name: T042
dtype: int64
- name: T043
dtype: int64
- name: T044
dtype: int64
- name: T045
dtype: int64
- name: T046
dtype: int64
- name: T047
dtype: int64
- name: T048
dtype: int64
- name: T049
dtype: int64
- name: T050
dtype: int64
- name: T051
dtype: int64
- name: T052
dtype: int64
- name: T053
dtype: int64
- name: T054
dtype: int64
- name: T055
dtype: int64
- name: T056
dtype: int64
- name: T057
dtype: int64
- name: T058
dtype: int64
- name: T059
dtype: int64
- name: T060
dtype: int64
- name: T061
dtype: int64
- name: T062
dtype: int64
- name: T063
dtype: int64
- name: T064
dtype: int64
- name: T065
dtype: int64
- name: T066
dtype: int64
- name: T067
dtype: int64
- name: T068
dtype: int64
- name: T069
dtype: int64
- name: T070
dtype: int64
- name: T071
dtype: int64
- name: T072
dtype: int64
- name: T073
dtype: int64
- name: T074
dtype: int64
- name: T075
dtype: int64
- name: T077
dtype: int64
- name: T078
dtype: int64
- name: T079
dtype: int64
- name: T080
dtype: int64
- name: T081
dtype: int64
- name: T082
dtype: int64
- name: T083
dtype: int64
- name: T085
dtype: int64
- name: T086
dtype: int64
- name: T087
dtype: int64
- name: T089
dtype: int64
- name: T090
dtype: int64
- name: T091
dtype: int64
- name: T092
dtype: int64
- name: T093
dtype: int64
- name: T094
dtype: int64
- name: T095
dtype: int64
- name: T096
dtype: int64
- name: T097
dtype: int64
- name: T098
dtype: int64
- name: T099
dtype: int64
- name: T100
dtype: int64
- name: T101
dtype: int64
- name: T102
dtype: int64
- name: T103
dtype: int64
- name: T104
dtype: int64
- name: T109
dtype: int64
- name: T114
dtype: int64
- name: T116
dtype: int64
- name: T120
dtype: int64
- name: T121
dtype: int64
- name: T122
dtype: int64
- name: T123
dtype: int64
- name: T125
dtype: int64
- name: T126
dtype: int64
- name: T127
dtype: int64
- name: T129
dtype: int64
- name: T130
dtype: int64
- name: T131
dtype: int64
- name: T167
dtype: int64
- name: T168
dtype: int64
- name: T169
dtype: int64
- name: T170
dtype: int64
- name: T171
dtype: int64
- name: T184
dtype: int64
- name: T185
dtype: int64
- name: T190
dtype: int64
- name: T191
dtype: int64
- name: T192
dtype: int64
- name: T194
dtype: int64
- name: T195
dtype: int64
- name: T196
dtype: int64
- name: T197
dtype: int64
- name: T200
dtype: int64
- name: T201
dtype: int64
- name: T204
dtype: int64
splits:
- name: train
num_bytes: 399873192.8393
num_examples: 183374
- name: test
num_bytes: 133291791.16070004
num_examples: 61125
download_size: 178851848
dataset_size: 533164984.0
---
# Dataset Card for "biomedical-topic-categorization-validation-cased"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.32198554277420044,
-0.4887152314186096,
0.35588234663009644,
0.22943368554115295,
-0.39582571387290955,
0.20206516981124878,
0.3376479744911194,
0.21426084637641907,
0.8878397345542908,
0.5498420596122742,
-0.6429483890533447,
-0.9823623895645142,
-0.4974001348018646,
-0.190131694078445... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
TTian/feedback-prize-tokenized-dataset-2021 | TTian | 2022-11-25T02:39:38Z | 14 | 0 | null | [
"region:us"
] | 2022-11-25T02:39:38Z | 2022-11-25T02:39:29.000Z | 2022-11-25T02:39:29 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622263669967651,
0.43461522459983826,
-0.52829909324646,
0.7012971639633179,
0.7915719747543335,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104475975036621,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
DTU54DL/common-accent-proc | DTU54DL | 2022-11-30T20:41:55Z | 14 | 0 | acronym-identification | [
"task_categories:token-classification",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"license:mit",
"region:us"
] | 2022-11-30T20:41:55Z | 2022-11-30T13:24:08.000Z | 2022-11-30T13:24:08 | ---
annotations_creators:
- expert-generated
language:
- en
language_creators:
- found
license:
- mit
multilinguality:
- monolingual
paperswithcode_id: acronym-identification
pretty_name: Acronym Identification Dataset
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- token-classification
task_ids:
- token-classification-other-acronym-identification
train-eval-index:
- col_mapping:
labels: tags
tokens: tokens
config: default
splits:
eval_split: test
task: token-classification
task_id: entity_extraction
dataset_info:
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: sentence
dtype: string
- name: accent
dtype: string
- name: input_features
sequence:
sequence: float32
- name: labels
sequence: int64
splits:
- name: train
num_bytes: 11534718760.0
num_examples: 10000
- name: test
num_bytes: 518496848.0
num_examples: 451
download_size: 3935975243
dataset_size: 12053215608.0
---
# Dataset Card for [Dataset Name]
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
[More Information Needed]
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset. | [
-0.47841677069664,
-0.5084842443466187,
0.14602938294410706,
0.278889000415802,
-0.21702472865581512,
0.24832050502300262,
-0.3366999328136444,
-0.3758932054042816,
0.6720380783081055,
0.6457639932632446,
-0.9167346358299255,
-1.2200127840042114,
-0.7551794052124023,
0.07273735105991364,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
neuralcatcher/hateful_memes | neuralcatcher | 2022-12-01T07:08:59Z | 14 | 2 | null | [
"arxiv:2005.04790",
"region:us"
] | 2022-12-01T07:08:59Z | 2022-12-01T03:49:06.000Z | 2022-12-01T03:49:06 | # The Hateful Memes Challenge README
The Hateful Memes Challenge is a dataset and benchmark created by Facebook AI to drive and measure progress on multimodal reasoning and understanding. The task focuses on detecting hate speech in multimodal memes.
Please see the paper for further details:
[The Hateful Memes Challenge: Detecting Hate Speech in Multimodal Memes
D. Kiela, H. Firooz, A. Mohan, V. Goswami, A. Singh, P. Ringshia, D. Testuggine](
https://arxiv.org/abs/2005.04790)
For more details, see also the website:
https://hatefulmemeschallenge.com
# Dataset details
The files for this folder are arranged as follows:
img/ - the PNG images
train.jsonl - the training set
dev_seen.jsonl - the "seen" development set
test_seen.jsonl - the "seen" test set
dev_unseen.jsonl - the "unseen" development set
test_unseen.jsonl - the "unseen" test set
The "seen" dataset was presented in the NeurIPS paper; the "unseen" dev and test sets were released as part of the NeurIPS 2020 competition.
The .jsonl format contains one JSON-encoded example per line, each of which has the following fields:
‘text’ - the text occurring in the meme
‘img’ - the path to the image in the img/ directory
‘label’ - the label for the meme (0=not-hateful, 1=hateful), provided for train and dev
The metric to use is AUROC. You may also report accuracy in addition, since this is arguably more interpretable. To compute these metrics, we recommend the roc_auc_score and accuracy_score methods in sklearn.metrics, with default settings.
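As an illustrative sketch (not official evaluation code), the `.jsonl` records described above can be parsed with the standard library, and AUROC can be computed via the rank-sum (Mann-Whitney U) formulation, which matches `sklearn.metrics.roc_auc_score` when scores are untied; the sklearn functions the card recommends remain the reference implementation:

```python
# Parse .jsonl lines with the fields described above and compute AUROC
# without external dependencies. The sample records are made up for
# illustration; real lines carry the same 'text', 'img', and 'label' fields.
import json

lines = [
    '{"id": 1, "img": "img/01.png", "text": "example meme text", "label": 1}',
    '{"id": 2, "img": "img/02.png", "text": "another meme", "label": 0}',
]
examples = [json.loads(line) for line in lines]

def auroc(labels, scores):
    """AUROC via the rank-sum formulation (assumes untied scores)."""
    order = sorted(range(len(scores)), key=lambda i: scores[i])
    ranks = {i: r + 1 for r, i in enumerate(order)}  # 1-based ranks
    pos = [i for i, y in enumerate(labels) if y == 1]
    neg = [i for i, y in enumerate(labels) if y == 0]
    rank_sum = sum(ranks[i] for i in pos)
    u = rank_sum - len(pos) * (len(pos) + 1) / 2
    return u / (len(pos) * len(neg))
```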
# Getting started
To get started working on this dataset, there's an easy-to-use "starter kit" available in MMF: https://github.com/facebookresearch/mmf/tree/master/projects/hateful_memes.
# Note on Annotator Accuracy
As is to be expected with a dataset of this size and nature, some of the examples in the training set have been misclassified. We are not claiming that our dataset labels are completely accurate, or even that all annotators would agree on a particular label. Misclassifications, although possible, should be very rare in the dev and seen test set, however, and we will take extra care with the unseen test set.
As a reminder, the annotations collected for this dataset were not collected using Facebook annotators and we did not employ Facebook’s hate speech policy. As such, the dataset labels do not in any way reflect Facebook’s official stance on this matter.
# License
The dataset is licensed under the terms in the `LICENSE.txt` file.
# Image Attribution
If you wish to display example memes in your paper, please provide the following attribution:
*Image is a compilation of assets, including ©Getty Image.*
# Citations
If you wish to cite this work, please use the following BiBTeX:
```
@inproceedings{Kiela:2020hatefulmemes,
author = {Kiela, Douwe and Firooz, Hamed and Mohan, Aravind and Goswami, Vedanuj and Singh, Amanpreet and Ringshia, Pratik and Testuggine, Davide},
booktitle = {Advances in Neural Information Processing Systems},
editor = {H. Larochelle and M. Ranzato and R. Hadsell and M. F. Balcan and H. Lin},
pages = {2611--2624},
publisher = {Curran Associates, Inc.},
title = {The Hateful Memes Challenge: Detecting Hate Speech in Multimodal Memes},
url = {https://proceedings.neurips.cc/paper/2020/file/1b84c4cee2b8b3d823b30e2d604b1878-Paper.pdf},
volume = {33},
year = {2020}
}
```
# Contact
If you have any questions or comments on the dataset, please contact hatefulmemeschallenge@fb.com or one of the authors.
| [
-0.4650436341762543,
-0.8006029725074768,
-0.09297613799571991,
0.20180580019950867,
-0.12985393404960632,
0.22801616787910461,
-0.16581179201602936,
-0.5734763145446777,
0.2996506690979004,
0.1246902272105217,
-0.7721275091171265,
-0.4038280248641968,
-0.7527027726173401,
0.13555696606636... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
m-aliabbas/idrak_splitted_amy_2 | m-aliabbas | 2022-12-01T16:38:38Z | 14 | 0 | null | [
"region:us"
] | 2022-12-01T16:38:38Z | 2022-12-01T11:37:06.000Z | 2022-12-01T11:37:06 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622264862060547,
0.43461528420448303,
-0.52829909324646,
0.7012971639633179,
0.7915720343589783,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104477167129517,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
lmqg/qg_tweetqa | lmqg | 2022-12-02T19:11:42Z | 14 | 0 | null | [
"task_categories:text-generation",
"task_ids:language-modeling",
"multilinguality:monolingual",
"size_categories:1k<n<10K",
"source_datasets:tweet_qa",
"language:en",
"license:cc-by-sa-4.0",
"question-generation",
"arxiv:2210.03992",
"region:us"
] | 2022-12-02T19:11:42Z | 2022-12-02T18:53:49.000Z | 2022-12-02T18:53:49 | ---
license: cc-by-sa-4.0
pretty_name: TweetQA for question generation
language: en
multilinguality: monolingual
size_categories: 1k<n<10K
source_datasets: tweet_qa
task_categories:
- text-generation
task_ids:
- language-modeling
tags:
- question-generation
---
# Dataset Card for "lmqg/qg_tweetqa"
## Dataset Description
- **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation)
- **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992)
- **Point of Contact:** [Asahi Ushio](http://asahiushio.com/)
### Dataset Summary
This is the question & answer generation dataset based on the [tweet_qa](https://huggingface.co/datasets/tweet_qa). The test set of the original data is not publicly released, so we randomly sampled test questions from the training set.
### Supported Tasks and Leaderboards
* `question-answer-generation`: The dataset is assumed to be used to train a model for question & answer generation.
Success on this task is typically measured by achieving a high BLEU4/METEOR/ROUGE-L/BERTScore/MoverScore (see our paper for more detail).
### Languages
English (en)
## Dataset Structure
An example of 'train' looks as follows.
```
{
'answer': 'vine',
'paragraph_question': 'question: what site does the link take you to?, context:5 years in 5 seconds. Darren Booth (@darbooth) January 25, 2013',
'question': 'what site does the link take you to?',
'paragraph': '5 years in 5 seconds. Darren Booth (@darbooth) January 25, 2013'
}
```
The data fields are the same among all splits.
- `answer`: a `string` feature.
- `paragraph_question`: a `string` feature.
- `question`: a `string` feature.
- `paragraph`: a `string` feature.
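As the example shows, the `paragraph_question` field concatenates the question with its context. A minimal sketch of assembling a model input in that format (the helper name is ours, not part of the dataset):

```python
def build_model_input(question: str, paragraph: str) -> str:
    # Mirror the paragraph_question format shown in the example above.
    return f"question: {question}, context:{paragraph}"

model_input = build_model_input(
    "what site does the link take you to?",
    "5 years in 5 seconds. Darren Booth (@darbooth) January 25, 2013",
)
```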
## Data Splits
|train|validation|test |
|----:|---------:|----:|
|9489 | 1086| 1203|
## Citation Information
```
@inproceedings{ushio-etal-2022-generative,
title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration",
author = "Ushio, Asahi and
Alva-Manchego, Fernando and
Camacho-Collados, Jose",
booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2022",
address = "Abu Dhabi, U.A.E.",
publisher = "Association for Computational Linguistics",
}
``` | [
-0.4311399459838867,
-1.0217983722686768,
0.3141946494579315,
0.07140640914440155,
-0.26910173892974854,
0.03313000500202179,
-0.14616245031356812,
-0.18164756894111633,
0.14486289024353027,
0.27950775623321533,
-0.9385127425193787,
-0.6357155442237854,
-0.24354970455169678,
0.131130367517... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
stacked-summaries/stacked-xsum-1024 | stacked-summaries | 2023-10-08T23:34:15Z | 14 | 1 | null | [
"task_categories:summarization",
"size_categories:100K<n<1M",
"source_datasets:xsum",
"language:en",
"license:apache-2.0",
"stacked summaries",
"xsum",
"doi:10.57967/hf/0390",
"region:us"
] | 2023-10-08T23:34:15Z | 2022-12-04T00:47:30.000Z | 2022-12-04T00:47:30 | ---
language:
- en
license: apache-2.0
size_categories:
- 100K<n<1M
source_datasets:
- xsum
task_categories:
- summarization
pretty_name: 'Stacked XSUM: 1024 tokens max'
tags:
- stacked summaries
- xsum
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
dataset_info:
features:
- name: document
dtype: string
- name: summary
dtype: string
- name: id
dtype: int64
- name: chapter_length
dtype: int64
- name: summary_length
dtype: int64
- name: is_stacked
dtype: bool
splits:
- name: train
num_bytes: 918588672
num_examples: 320939
- name: validation
num_bytes: 51154057
num_examples: 17935
- name: test
num_bytes: 51118088
num_examples: 17830
download_size: 653378162
dataset_size: 1020860817
---
# stacked-xsum-1024
a "stacked" version of `xsum`
1. Original Dataset: copy of the base dataset
2. Stacked Rows: The original dataset is processed by stacking rows based on certain criteria:
- Maximum Input Length: The maximum length for input sequences is 1024 tokens in the longt5 model tokenizer.
- Maximum Output Length: The maximum length for output sequences is also 1024 tokens in the longt5 model tokenizer.
3. Special Token: The dataset utilizes the `[NEXT_CONCEPT]` token to indicate a new topic **within** the same summary. It is recommended to explicitly add this special token to your model's tokenizer before training, ensuring that it is recognized and processed correctly during downstream usage.
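Downstream, a stacked summary can be split back into its per-topic parts on this marker; a minimal sketch (the helper name is ours):

```python
NEXT_CONCEPT = "[NEXT_CONCEPT]"

def split_stacked_summary(summary: str) -> list[str]:
    # Split a stacked summary into its per-topic segments.
    return [part.strip() for part in summary.split(NEXT_CONCEPT) if part.strip()]

parts = split_stacked_summary(
    "First article summary. [NEXT_CONCEPT] Second article summary."
)
```

For training, the token can be registered on a Hugging Face tokenizer with `tokenizer.add_special_tokens({"additional_special_tokens": ["[NEXT_CONCEPT]"]})`, followed by resizing the model's token embeddings.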
## updates
- dec 3: upload initial version
- dec 4: upload v2 with basic data quality fixes (i.e. the `is_stacked` column)
- dec 5 0500: upload v3, which has a pre-randomised order and drops duplicate document+summary rows
## stats

## dataset details
see the repo `.log` file for more details.
train input
```python
[2022-12-05 01:05:17] INFO:root:INPUTS - basic stats - train
[2022-12-05 01:05:17] INFO:root:{'num_columns': 5,
'num_rows': 204045,
'num_unique_target': 203107,
'num_unique_text': 203846,
'summary - average chars': 125.46,
'summary - average tokens': 30.383719277610332,
'text input - average chars': 2202.42,
'text input - average tokens': 523.9222230390355}
```
stacked train:
```python
[2022-12-05 04:47:01] INFO:root:stacked 181719 rows, 22326 rows were ineligible
[2022-12-05 04:47:02] INFO:root:dropped 64825 duplicate rows, 320939 rows remain
[2022-12-05 04:47:02] INFO:root:shuffling output with seed 323
[2022-12-05 04:47:03] INFO:root:STACKED - basic stats - train
[2022-12-05 04:47:04] INFO:root:{'num_columns': 6,
'num_rows': 320939,
'num_unique_chapters': 320840,
'num_unique_summaries': 320101,
'summary - average chars': 199.89,
'summary - average tokens': 46.29925001324239,
'text input - average chars': 2629.19,
'text input - average tokens': 621.541532814647}
```
## Citation
If you find this useful in your work, please consider citing us.
```
@misc {stacked_summaries_2023,
author = { {Stacked Summaries: Karim Foda and Peter Szemraj} },
title = { stacked-xsum-1024 (Revision 2d47220) },
year = 2023,
url = { https://huggingface.co/datasets/stacked-summaries/stacked-xsum-1024 },
doi = { 10.57967/hf/0390 },
publisher = { Hugging Face }
}
``` | [
-0.410433292388916,
-0.17482134699821472,
0.016300249844789505,
0.36238715052604675,
-0.19903695583343506,
-0.007457035128027201,
-0.05301159992814064,
-0.20086927711963654,
0.6081695556640625,
0.5386316180229187,
-0.4635339081287384,
-0.565356969833374,
-0.6917433142662048,
0.107767164707... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
lucadiliello/bookcorpusopen | lucadiliello | 2022-12-04T19:09:30Z | 14 | 1 | null | [
"region:us"
] | 2022-12-04T19:09:30Z | 2022-12-04T19:05:51.000Z | 2022-12-04T19:05:51 | ---
dataset_info:
features:
- name: text
dtype: string
- name: title
dtype: string
splits:
- name: train
num_bytes: 6643459928
num_examples: 17868
download_size: 3940589290
dataset_size: 6643459928
---
# Dataset Card for "bookcorpusopen"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.4406247138977051,
-0.022922726348042488,
-0.1536819487810135,
0.2310679703950882,
-0.18042290210723877,
0.1028476282954216,
0.24871660768985748,
-0.23031127452850342,
0.6688352227210999,
0.7127107977867126,
-1.0162855386734009,
-0.8907992839813232,
-0.4848952889442444,
-0.25207024812698... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
cakiki/arxiv-pyserini | cakiki | 2022-12-07T15:31:56Z | 14 | 0 | null | [
"region:us"
] | 2022-12-07T15:31:56Z | 2022-12-07T13:27:11.000Z | 2022-12-07T13:27:11 | ---
dataset_info:
features:
- name: id
dtype: string
- name: submitter
dtype: string
- name: authors
dtype: string
- name: title
dtype: string
- name: comments
dtype: string
- name: journal-ref
dtype: string
- name: doi
dtype: string
- name: report-no
dtype: string
- name: categories
dtype: string
- name: license
dtype: string
- name: abstract
dtype: string
- name: versions
list:
- name: created
dtype: string
- name: version
dtype: string
- name: update_date
dtype: string
- name: authors_parsed
sequence:
sequence: string
splits:
- name: train
num_bytes: 3217788413
num_examples: 2171090
download_size: 1801274080
dataset_size: 3217788413
---
# Dataset Card for "arxiv-pyserini"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.5200200080871582,
-0.17727190256118774,
0.16816921532154083,
0.15757343173027039,
-0.3255957365036011,
-0.09941145777702332,
0.28940242528915405,
0.012559507973492146,
0.7377196550369263,
0.46652406454086304,
-0.32434529066085815,
-0.6550180315971375,
-0.6676816940307617,
-0.23806923627... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
liuyanchen1015/MULTI_VALUE_mnli_negative_concord | liuyanchen1015 | 2022-12-12T01:40:42Z | 14 | 0 | null | [
"region:us"
] | 2022-12-12T01:40:42Z | 2022-12-12T01:40:25.000Z | 2022-12-12T01:40:25 | ---
dataset_info:
features:
- name: premise
dtype: string
- name: hypothesis
dtype: string
- name: label
dtype: int64
- name: idx
dtype: int64
- name: score
dtype: int64
splits:
- name: dev_matched
num_bytes: 266322
num_examples: 1192
- name: dev_mismatched
num_bytes: 272492
num_examples: 1203
- name: test_matched
num_bytes: 255310
num_examples: 1140
- name: test_mismatched
num_bytes: 282595
num_examples: 1214
- name: train
num_bytes: 11140889
num_examples: 49529
download_size: 7640308
dataset_size: 12217608
---
# Dataset Card for "MULTI_VALUE_mnli_negative_concord"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.7260438203811646,
-0.15746179223060608,
0.2039034068584442,
0.2865937650203705,
-0.399352103471756,
-0.14123781025409698,
0.2882903516292572,
-0.15692199766635895,
0.9632675647735596,
0.363128125667572,
-0.710831344127655,
-0.8782411813735962,
-0.5854387283325195,
-0.1847914159297943,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Flonixcorn/SVEmbed | Flonixcorn | 2022-12-28T21:08:09Z | 14 | 1 | null | [
"license:cc0-1.0",
"region:us"
] | 2022-12-28T21:08:09Z | 2022-12-14T17:43:58.000Z | 2022-12-14T17:43:58 | ---
license: cc0-1.0
---
### This is v3 of my Sideview embedding
Here you can download all saved steps.
Personally, I recommend going up in steps of 1000 from 2000, depending on whether you want more or less style.
*REMEMBER:*
to use the embedding, it needs to be in your Auto1111 embeddings folder, and you need to use its name in your prompt; see the Civitai page for more info.
some example prompts to use:
a man with a mohawk and a yellow scarf on his head and a yellow background with a black and yellow design, art by flonixsdviewv3
a man with a mask on his face and a city in the background with blue lines and a orange background with a circle, art by flonixsdviewv3
a man with dreadlocks and a gas mask on his face, with a red and black background, art by flonixsdviewv3
### More Images on the Civit.ai page https://civitai.com/models/1373/flonixs-side-view
https://civitai.com/models/1373/flonixs-side-view
<img src="https://s3.amazonaws.com/moonup/production/uploads/1671040720337-63383cdec6295341204b2ade.png" width="100%"/>
<img src="https://s3.amazonaws.com/moonup/production/uploads/1671040772203-63383cdec6295341204b2ade.png" width="100%"/>
<img src="https://s3.amazonaws.com/moonup/production/uploads/1671040828365-63383cdec6295341204b2ade.png" width="100%"/>
<img src="https://s3.amazonaws.com/moonup/production/uploads/1671040891116-63383cdec6295341204b2ade.png" width="100%"/>
<img src="https://s3.amazonaws.com/moonup/production/uploads/1671040930692-63383cdec6295341204b2ade.png" width="100%"/> | [
-0.8314897418022156,
-0.255326509475708,
0.6303844451904297,
0.3817523121833801,
-0.42479607462882996,
-0.027350623160600662,
0.24431212246418,
0.01668715290725231,
1.0297077894210815,
0.7682734727859497,
-0.7690897583961487,
-0.6026240587234497,
-0.3828742802143097,
0.06431686133146286,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
JammyMachina/improved_4bars | JammyMachina | 2022-12-14T22:28:36Z | 14 | 0 | null | [
"region:us"
] | 2022-12-14T22:28:36Z | 2022-12-14T22:24:44.000Z | 2022-12-14T22:24:44 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622264862060547,
0.43461528420448303,
-0.52829909324646,
0.7012971639633179,
0.7915720343589783,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104477167129517,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
laion/laion2b-multi-vit-l-14-embeddings | laion | 2022-12-16T17:53:54Z | 14 | 0 | null | [
"region:us"
] | 2022-12-16T17:53:54Z | 2022-12-15T23:33:02.000Z | 2022-12-15T23:33:02 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622264862060547,
0.43461528420448303,
-0.52829909324646,
0.7012971639633179,
0.7915720343589783,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104477167129517,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Dahoas/synthetic-hh-rlhf-prompts | Dahoas | 2022-12-19T16:16:22Z | 14 | 0 | null | [
"region:us"
] | 2022-12-19T16:16:22Z | 2022-12-19T16:15:30.000Z | 2022-12-19T16:15:30 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622264862060547,
0.43461528420448303,
-0.52829909324646,
0.7012971639633179,
0.7915720343589783,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104477167129517,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
carlosejimenez/bookcorpus_filtered_len_17_simcse_retrieval_top32__source_tranch_16__target_tranch_12__from_120 | carlosejimenez | 2023-01-04T02:54:44Z | 14 | 0 | null | [
"region:us"
] | 2023-01-04T02:54:44Z | 2023-01-04T02:54:18.000Z | 2023-01-04T02:54:18 | Entry not found | [
-0.32276472449302673,
-0.22568407654762268,
0.8622258901596069,
0.4346148371696472,
-0.5282984972000122,
0.7012965679168701,
0.7915717363357544,
0.07618629932403564,
0.7746022939682007,
0.2563222646713257,
-0.785281777381897,
-0.22573848068714142,
-0.9104482531547546,
0.5715669393539429,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
irds/clinicaltrials_2021_trec-ct-2022 | irds | 2023-01-05T02:54:20Z | 14 | 1 | null | [
"task_categories:text-retrieval",
"source_datasets:irds/clinicaltrials_2021",
"region:us"
] | 2023-01-05T02:54:20Z | 2023-01-05T02:54:14.000Z | 2023-01-05T02:54:14 | ---
pretty_name: '`clinicaltrials/2021/trec-ct-2022`'
viewer: false
source_datasets: ['irds/clinicaltrials_2021']
task_categories:
- text-retrieval
---
# Dataset Card for `clinicaltrials/2021/trec-ct-2022`
The `clinicaltrials/2021/trec-ct-2022` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/clinicaltrials#clinicaltrials/2021/trec-ct-2022).
# Data
This dataset provides:
- `queries` (i.e., topics); count=50
- For `docs`, use [`irds/clinicaltrials_2021`](https://huggingface.co/datasets/irds/clinicaltrials_2021)
## Usage
```python
from datasets import load_dataset
queries = load_dataset('irds/clinicaltrials_2021_trec-ct-2022', 'queries')
for record in queries:
record # {'query_id': ..., 'text': ...}
```
Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the
data in 🤗 Dataset format.
| [
-0.14923706650733948,
-0.2831323444843292,
0.11592356860637665,
0.33459988236427307,
-0.44577640295028687,
-0.051676955074071884,
0.030500680208206177,
-0.10064782202243805,
0.46204107999801636,
0.6612485647201538,
-0.622791051864624,
-0.9008182883262634,
-0.43337786197662354,
0.4487956762... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
canola78/narration | canola78 | 2023-01-05T12:05:29Z | 14 | 0 | null | [
"region:us"
] | 2023-01-05T12:05:29Z | 2023-01-05T11:56:19.000Z | 2023-01-05T11:56:19 | Entry not found | [
-0.32276472449302673,
-0.22568407654762268,
0.8622258901596069,
0.4346148371696472,
-0.5282984972000122,
0.7012965679168701,
0.7915717363357544,
0.07618629932403564,
0.7746022939682007,
0.2563222646713257,
-0.785281777381897,
-0.22573848068714142,
-0.9104482531547546,
0.5715669393539429,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
cwinkler/green_patents | cwinkler | 2023-01-08T09:16:25Z | 14 | 1 | null | [
"task_categories:text-classification",
"size_categories:1K<n<10K",
"language:en",
"region:us"
] | 2023-01-08T09:16:25Z | 2023-01-06T06:12:33.000Z | 2023-01-06T06:12:33 | ---
language:
- en
size_categories:
- 1K<n<10K
task_categories:
- text-classification
---
# Green patents dataset
- num_rows: 9145
- features: [title, label]
- label: 0, 1
The dataset contains patent titles that are labeled as 1 (="green") and 0 (="not green").
"green" patents titles were gathered by searching for CPC class "Y02" with Google Patents (query: "status:APPLICATION type:PATENT (Y02) country:EP,US", 05/01/2023).
"not green" patents titles are derived from the [HUPD dataset](https://huggingface.co/datasets/HUPD/hupd) (random choice of 5000 titles). We could not find any patents in HUPD assigned to any CPC class starting with "Y". | [
-0.08471710979938507,
-0.33368054032325745,
0.3389207720756531,
0.2619916498661041,
-0.21111150085926056,
0.01918390765786171,
0.3930151164531708,
-0.3191933333873749,
0.39440375566482544,
0.5067777037620544,
-0.5871904492378235,
-0.8958174586296082,
-0.5564831495285034,
-0.175903990864753... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Cohere/wikipedia-22-12-ar-embeddings | Cohere | 2023-03-22T16:52:28Z | 14 | 2 | null | [
"task_categories:text-retrieval",
"task_ids:document-retrieval",
"annotations_creators:expert-generated",
"multilinguality:multilingual",
"language:ar",
"license:apache-2.0",
"region:us"
] | 2023-03-22T16:52:28Z | 2023-01-14T02:00:24.000Z | 2023-01-14T02:00:24 | ---
annotations_creators:
- expert-generated
language:
- ar
multilinguality:
- multilingual
size_categories: []
source_datasets: []
tags: []
task_categories:
- text-retrieval
license:
- apache-2.0
task_ids:
- document-retrieval
---
# Wikipedia (ar) embedded with cohere.ai `multilingual-22-12` encoder
We encoded [Wikipedia (ar)](https://ar.wikipedia.org) using the [cohere.ai](https://txt.cohere.ai/multilingual/) `multilingual-22-12` embedding model.
To get an overview of how this dataset was created and pre-processed, have a look at [Cohere/wikipedia-22-12](https://huggingface.co/datasets/Cohere/wikipedia-22-12).
## Embeddings
We compute for `title+" "+text` the embeddings using our `multilingual-22-12` embedding model, a state-of-the-art model that works for semantic search in 100 languages. If you want to learn more about this model, have a look at [cohere.ai multilingual embedding model](https://txt.cohere.ai/multilingual/).
## Further languages
We provide embeddings of Wikipedia in many different languages:
[ar](https://huggingface.co/datasets/Cohere/wikipedia-22-12-ar-embeddings), [de](https://huggingface.co/datasets/Cohere/wikipedia-22-12-de-embeddings), [en](https://huggingface.co/datasets/Cohere/wikipedia-22-12-en-embeddings), [es](https://huggingface.co/datasets/Cohere/wikipedia-22-12-es-embeddings), [fr](https://huggingface.co/datasets/Cohere/wikipedia-22-12-fr-embeddings), [hi](https://huggingface.co/datasets/Cohere/wikipedia-22-12-hi-embeddings), [it](https://huggingface.co/datasets/Cohere/wikipedia-22-12-it-embeddings), [ja](https://huggingface.co/datasets/Cohere/wikipedia-22-12-ja-embeddings), [ko](https://huggingface.co/datasets/Cohere/wikipedia-22-12-ko-embeddings), [simple english](https://huggingface.co/datasets/Cohere/wikipedia-22-12-simple-embeddings), [zh](https://huggingface.co/datasets/Cohere/wikipedia-22-12-zh-embeddings),
You can find the Wikipedia datasets without embeddings at [Cohere/wikipedia-22-12](https://huggingface.co/datasets/Cohere/wikipedia-22-12).
## Loading the dataset
You can either load the dataset like this:
```python
from datasets import load_dataset
docs = load_dataset(f"Cohere/wikipedia-22-12-ar-embeddings", split="train")
```
Or you can also stream it without downloading it before:
```python
from datasets import load_dataset
docs = load_dataset(f"Cohere/wikipedia-22-12-ar-embeddings", split="train", streaming=True)
for doc in docs:
docid = doc['id']
title = doc['title']
text = doc['text']
emb = doc['emb']
```
## Search
A full search example:
```python
#Run: pip install cohere datasets
from datasets import load_dataset
import torch
import cohere
co = cohere.Client("<<COHERE_API_KEY>>") # Add your cohere API key from www.cohere.com
#Load at max 1000 documents + embeddings
max_docs = 1000
docs_stream = load_dataset(f"Cohere/wikipedia-22-12-ar-embeddings", split="train", streaming=True)
docs = []
doc_embeddings = []
for doc in docs_stream:
docs.append(doc)
doc_embeddings.append(doc['emb'])
if len(docs) >= max_docs:
break
doc_embeddings = torch.tensor(doc_embeddings)
query = 'Who founded Youtube'
response = co.embed(texts=[query], model='multilingual-22-12')
query_embedding = response.embeddings
query_embedding = torch.tensor(query_embedding)
# Compute dot score between query embedding and document embeddings
dot_scores = torch.mm(query_embedding, doc_embeddings.transpose(0, 1))
top_k = torch.topk(dot_scores, k=3)
# Print results
print("Query:", query)
for doc_id in top_k.indices[0].tolist():
print(docs[doc_id]['title'])
print(docs[doc_id]['text'], "\n")
```
## Performance
You can find performance on the MIRACL dataset (a semantic search evaluation dataset) here: [miracl-en-queries-22-12#performance](https://huggingface.co/datasets/Cohere/miracl-en-queries-22-12#performance) | [
-0.7233262062072754,
-0.699908971786499,
0.1615678369998932,
0.0015330845490098,
-0.1732831597328186,
-0.09171734750270844,
-0.3083530366420746,
-0.2537130117416382,
0.6140792965888977,
-0.032153163105249405,
-0.5026401281356812,
-0.8694170117378235,
-0.65177321434021,
0.21612268686294556,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Cohere/wikipedia-22-12-ja-embeddings | Cohere | 2023-03-22T16:55:06Z | 14 | 1 | null | [
"task_categories:text-retrieval",
"task_ids:document-retrieval",
"multilinguality:multilingual",
"language:ja",
"license:apache-2.0",
"region:us"
] | 2023-03-22T16:55:06Z | 2023-01-14T03:52:53.000Z | 2023-01-14T03:52:53 | ---
language:
- ja
multilinguality:
- multilingual
size_categories: []
source_datasets: []
tags: []
task_categories:
- text-retrieval
license:
- apache-2.0
task_ids:
- document-retrieval
---
# Wikipedia (ja) embedded with cohere.ai `multilingual-22-12` encoder
We encoded [Wikipedia (ja)](https://ja.wikipedia.org) using the [cohere.ai](https://txt.cohere.ai/multilingual/) `multilingual-22-12` embedding model.
To get an overview of how this dataset was created and pre-processed, have a look at [Cohere/wikipedia-22-12](https://huggingface.co/datasets/Cohere/wikipedia-22-12).
## Embeddings
We compute for `title+" "+text` the embeddings using our `multilingual-22-12` embedding model, a state-of-the-art model that works for semantic search in 100 languages. If you want to learn more about this model, have a look at [cohere.ai multilingual embedding model](https://txt.cohere.ai/multilingual/).
## Further languages
We provide embeddings of Wikipedia in many different languages:
[ar](https://huggingface.co/datasets/Cohere/wikipedia-22-12-ar-embeddings), [de](https://huggingface.co/datasets/Cohere/wikipedia-22-12-de-embeddings), [en](https://huggingface.co/datasets/Cohere/wikipedia-22-12-en-embeddings), [es](https://huggingface.co/datasets/Cohere/wikipedia-22-12-es-embeddings), [fr](https://huggingface.co/datasets/Cohere/wikipedia-22-12-fr-embeddings), [hi](https://huggingface.co/datasets/Cohere/wikipedia-22-12-hi-embeddings), [it](https://huggingface.co/datasets/Cohere/wikipedia-22-12-it-embeddings), [ja](https://huggingface.co/datasets/Cohere/wikipedia-22-12-ja-embeddings), [ko](https://huggingface.co/datasets/Cohere/wikipedia-22-12-ko-embeddings), [simple english](https://huggingface.co/datasets/Cohere/wikipedia-22-12-simple-embeddings), [zh](https://huggingface.co/datasets/Cohere/wikipedia-22-12-zh-embeddings),
You can find the Wikipedia datasets without embeddings at [Cohere/wikipedia-22-12](https://huggingface.co/datasets/Cohere/wikipedia-22-12).
## Loading the dataset
You can either load the dataset like this:
```python
from datasets import load_dataset
docs = load_dataset(f"Cohere/wikipedia-22-12-ja-embeddings", split="train")
```
Or you can also stream it without downloading it before:
```python
from datasets import load_dataset
docs = load_dataset(f"Cohere/wikipedia-22-12-ja-embeddings", split="train", streaming=True)
for doc in docs:
docid = doc['id']
title = doc['title']
text = doc['text']
emb = doc['emb']
```
## Search
A full search example:
```python
#Run: pip install cohere datasets
from datasets import load_dataset
import torch
import cohere
co = cohere.Client("<<COHERE_API_KEY>>") # Add your cohere API key from www.cohere.com
#Load at max 1000 documents + embeddings
max_docs = 1000
docs_stream = load_dataset(f"Cohere/wikipedia-22-12-ja-embeddings", split="train", streaming=True)
docs = []
doc_embeddings = []
for doc in docs_stream:
docs.append(doc)
doc_embeddings.append(doc['emb'])
if len(docs) >= max_docs:
break
doc_embeddings = torch.tensor(doc_embeddings)
query = 'Who founded Youtube'
response = co.embed(texts=[query], model='multilingual-22-12')
query_embedding = response.embeddings
query_embedding = torch.tensor(query_embedding)
# Compute dot score between query embedding and document embeddings
dot_scores = torch.mm(query_embedding, doc_embeddings.transpose(0, 1))
top_k = torch.topk(dot_scores, k=3)
# Print results
print("Query:", query)
for doc_id in top_k.indices[0].tolist():
print(docs[doc_id]['title'])
print(docs[doc_id]['text'], "\n")
```
## Performance
You can find performance on the MIRACL dataset (a semantic search evaluation dataset) here: [miracl-en-queries-22-12#performance](https://huggingface.co/datasets/Cohere/miracl-en-queries-22-12#performance) | [
-0.7120575308799744,
-0.7149612903594971,
0.16682295501232147,
0.012469463981688023,
-0.1729092001914978,
-0.09575177729129791,
-0.3336113691329956,
-0.2625402808189392,
0.6146823763847351,
-0.010888385586440563,
-0.5280740261077881,
-0.8672659993171692,
-0.6386200785636902,
0.224195346236... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
EgilKarlsen/CSIC | EgilKarlsen | 2023-08-12T21:27:59Z | 14 | 0 | null | [
"region:us"
] | 2023-08-12T21:27:59Z | 2023-01-17T15:26:30.000Z | 2023-01-17T15:26:30 | ---
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
- split: train
path: data/train-*
- split: validation
path: data/validation-*
dataset_info:
features:
- name: log
dtype: string
- name: label
dtype: string
- name: id
dtype: int64
splits:
- name: test
num_bytes: 4890697
num_examples: 10000
- name: train
num_bytes: 17076222
num_examples: 35000
- name: validation
num_bytes: 2448080
num_examples: 5000
download_size: 5582880
dataset_size: 24414999
---
# Dataset Card for "CSIC"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.5883678793907166,
0.037391383200883865,
0.4526729881763458,
0.45415353775024414,
-0.08917872607707977,
0.1932094395160675,
0.45127665996551514,
-0.1216266006231308,
0.852849543094635,
0.47351306676864624,
-0.9378855228424072,
-0.9601448178291321,
-0.5158910751342773,
-0.1263891309499740... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
heziyevv/small_wiki_news_books | heziyevv | 2023-01-28T11:53:30Z | 14 | 0 | null | [
"license:mit",
"region:us"
] | 2023-01-28T11:53:30Z | 2023-01-28T07:12:43.000Z | 2023-01-28T07:12:43 | ---
license: mit
---
| [
-0.1285335123538971,
-0.1861683875322342,
0.6529128551483154,
0.49436232447624207,
-0.19319400191307068,
0.23607441782951355,
0.36072009801864624,
0.05056373029947281,
0.5793656706809998,
0.7400146722793579,
-0.650810182094574,
-0.23784008622169495,
-0.7102247476577759,
-0.0478255338966846... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
qwedsacf/competition_math | qwedsacf | 2023-01-28T20:28:01Z | 14 | 6 | null | [
"task_categories:text2text-generation",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"license:mit",
"explanation-generation",
"arxiv:2103.03874",
"region:us"
... | 2023-01-28T20:28:01Z | 2023-01-28T18:44:57.000Z | 2023-01-28T18:44:57 | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- en
license:
- mit
multilinguality:
- monolingual
pretty_name: Mathematics Aptitude Test of Heuristics (MATH)
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- text2text-generation
task_ids: []
tags:
- explanation-generation
---
# Dataset Card for Mathematics Aptitude Test of Heuristics (MATH) dataset
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://github.com/hendrycks/math
- **Repository:** https://github.com/hendrycks/math
- **Paper:** https://arxiv.org/pdf/2103.03874.pdf
- **Leaderboard:** N/A
- **Point of Contact:** Dan Hendrycks
### Dataset Summary
The Mathematics Aptitude Test of Heuristics (MATH) dataset consists of problems
from mathematics competitions, including the AMC 10, AMC 12, AIME, and more.
Each problem in MATH has a full step-by-step solution, which can be used to teach
models to generate answer derivations and explanations.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
A data instance consists of a competition math problem and its step-by-step solution written in LaTeX and natural language. The step-by-step solution contains the final answer enclosed in LaTeX's `\boxed{}` command.
An example from the dataset is:
```
{'problem': 'A board game spinner is divided into three parts labeled $A$, $B$ and $C$. The probability of the spinner landing on $A$ is $\\frac{1}{3}$ and the probability of the spinner landing on $B$ is $\\frac{5}{12}$. What is the probability of the spinner landing on $C$? Express your answer as a common fraction.',
'level': 'Level 1',
'type': 'Counting & Probability',
'solution': 'The spinner is guaranteed to land on exactly one of the three regions, so we know that the sum of the probabilities of it landing in each region will be 1. If we let the probability of it landing in region $C$ be $x$, we then have the equation $1 = \\frac{5}{12}+\\frac{1}{3}+x$, from which we have $x=\\boxed{\\frac{1}{4}}$.'}
```
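As a sanity check, the arithmetic in this sample solution can be verified with exact rational arithmetic (a quick sketch, not part of the dataset tooling):

```python
from fractions import Fraction

p_a = Fraction(1, 3)    # P(spinner lands on A)
p_b = Fraction(5, 12)   # P(spinner lands on B)
p_c = 1 - p_a - p_b     # the three probabilities must sum to 1
print(p_c)              # 1/4
```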
### Data Fields
* `problem`: The competition math problem.
* `solution`: The step-by-step solution.
* `level`: The problem's difficulty level from 'Level 1' to 'Level 5', where a subject's easiest problems for humans are assigned to 'Level 1' and a subject's hardest problems are assigned to 'Level 5'.
* `type`: The subject of the problem: Algebra, Counting & Probability, Geometry, Intermediate Algebra, Number Theory, Prealgebra and Precalculus.
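Since every `solution` embeds the final answer in `\boxed{...}`, a small brace-matching helper (a sketch, not part of the dataset tooling) can pull it out:

```python
def extract_boxed_answer(solution: str):
    """Return the contents of the last \\boxed{...} in a solution, or None."""
    start = solution.rfind(r"\boxed{")
    if start == -1:
        return None
    i = start + len(r"\boxed{")
    depth = 1
    out = []
    while i < len(solution):
        c = solution[i]
        if c == "{":
            depth += 1
        elif c == "}":
            depth -= 1
            if depth == 0:          # matching brace of \boxed{ found
                return "".join(out)
        out.append(c)
        i += 1
    return None                     # unbalanced braces

print(extract_boxed_answer(r"so $x=\boxed{\frac{1}{4}}$."))  # \frac{1}{4}
```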
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
https://github.com/hendrycks/math/blob/main/LICENSE
### Citation Information
```bibtex
@article{hendrycksmath2021,
title={Measuring Mathematical Problem Solving With the MATH Dataset},
author={Dan Hendrycks
and Collin Burns
and Saurav Kadavath
and Akul Arora
and Steven Basart
and Eric Tang
and Dawn Song
and Jacob Steinhardt},
journal={arXiv preprint arXiv:2103.03874},
year={2021}
}
``` | [
-0.5363626480102539,
-0.6644977331161499,
0.25022175908088684,
0.327332466840744,
-0.09369288384914398,
0.12248176336288452,
-0.2506449818611145,
0.00711820600554347,
0.369283527135849,
0.22746224701404572,
-0.7152879238128662,
-0.644523024559021,
-0.6915560364723206,
0.07681454718112946,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
carlosejimenez/wikitext-103-block-size-1024 | carlosejimenez | 2023-01-31T01:12:32Z | 14 | 0 | null | [
"region:us"
] | 2023-01-31T01:12:32Z | 2023-01-29T19:39:45.000Z | 2023-01-29T19:39:45 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622263669967651,
0.43461522459983826,
-0.52829909324646,
0.7012971639633179,
0.7915719747543335,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104475975036621,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
clip-benchmark/wds_flickr8k | clip-benchmark | 2023-01-31T00:28:28Z | 14 | 0 | null | [
"region:us"
] | 2023-01-31T00:28:28Z | 2023-01-31T00:28:14.000Z | 2023-01-31T00:28:14 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622263669967651,
0.43461522459983826,
-0.52829909324646,
0.7012971639633179,
0.7915719747543335,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104475975036621,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Cohere/miracl-ja-corpus-22-12 | Cohere | 2023-02-06T11:57:11Z | 14 | 0 | null | [
"task_categories:text-retrieval",
"task_ids:document-retrieval",
"annotations_creators:expert-generated",
"multilinguality:multilingual",
"language:ja",
"license:apache-2.0",
"region:us"
] | 2023-02-06T11:57:11Z | 2023-01-31T08:42:35.000Z | 2023-01-31T08:42:35 | ---
annotations_creators:
- expert-generated
language:
- ja
multilinguality:
- multilingual
size_categories: []
source_datasets: []
tags: []
task_categories:
- text-retrieval
license:
- apache-2.0
task_ids:
- document-retrieval
---
# MIRACL (ja) embedded with cohere.ai `multilingual-22-12` encoder
We encoded the [MIRACL dataset](https://huggingface.co/miracl) using the [cohere.ai](https://txt.cohere.ai/multilingual/) `multilingual-22-12` embedding model.
The query embeddings can be found in [Cohere/miracl-ja-queries-22-12](https://huggingface.co/datasets/Cohere/miracl-ja-queries-22-12) and the corpus embeddings can be found in [Cohere/miracl-ja-corpus-22-12](https://huggingface.co/datasets/Cohere/miracl-ja-corpus-22-12).
For the original datasets, see [miracl/miracl](https://huggingface.co/datasets/miracl/miracl) and [miracl/miracl-corpus](https://huggingface.co/datasets/miracl/miracl-corpus).
Dataset info:
> MIRACL 🌍🙌🌏 (Multilingual Information Retrieval Across a Continuum of Languages) is a multilingual retrieval dataset that focuses on search across 18 different languages, which collectively encompass over three billion native speakers around the world.
>
> The corpus for each language is prepared from a Wikipedia dump, where we keep only the plain text and discard images, tables, etc. Each article is segmented into multiple passages using WikiExtractor based on natural discourse units (e.g., `\n\n` in the wiki markup). Each of these passages comprises a "document" or unit of retrieval. We preserve the Wikipedia article title of each passage.
## Embeddings
We compute for `title+" "+text` the embeddings using our `multilingual-22-12` embedding model, a state-of-the-art model that works for semantic search in 100 languages. If you want to learn more about this model, have a look at [cohere.ai multilingual embedding model](https://txt.cohere.ai/multilingual/).
## Loading the dataset
In [miracl-ja-corpus-22-12](https://huggingface.co/datasets/Cohere/miracl-ja-corpus-22-12) we provide the corpus embeddings. Note, depending on the selected split, the respective files can be quite large.
You can either load the dataset like this:
```python
from datasets import load_dataset
docs = load_dataset(f"Cohere/miracl-ja-corpus-22-12", split="train")
```
Or you can stream it without downloading it first:
```python
from datasets import load_dataset
docs = load_dataset(f"Cohere/miracl-ja-corpus-22-12", split="train", streaming=True)
for doc in docs:
docid = doc['docid']
title = doc['title']
text = doc['text']
emb = doc['emb']
```
## Search
Have a look at [miracl-ja-queries-22-12](https://huggingface.co/datasets/Cohere/miracl-ja-queries-22-12) where we provide the query embeddings for the MIRACL dataset.
To search in the documents, you must use the **dot product** as the similarity function: compare the query embeddings with the document embeddings, either via a vector database (recommended) or by computing the dot product directly.
A full search example:
```python
# Attention! For large datasets, this requires a lot of memory to store
# all document embeddings and to compute the dot product scores.
# Only use this for smaller datasets. For large datasets, use a vector DB
from datasets import load_dataset
import torch
#Load documents + embeddings
docs = load_dataset(f"Cohere/miracl-ja-corpus-22-12", split="train")
doc_embeddings = torch.tensor(docs['emb'])
# Load queries
queries = load_dataset(f"Cohere/miracl-ja-queries-22-12", split="dev")
# Select the first query as example
qid = 0
query = queries[qid]
query_embedding = torch.tensor([query['emb']])  # shape (1, dim): only the selected query
# Compute dot score between query embedding and document embeddings
dot_scores = torch.mm(query_embedding, doc_embeddings.transpose(0, 1))
top_k = torch.topk(dot_scores, k=3)
# Print results
print("Query:", query['query'])
for doc_id in top_k.indices[0].tolist():
print(docs[doc_id]['title'])
print(docs[doc_id]['text'])
```
You can get embeddings for new queries using our API:
```python
#Run: pip install cohere
import cohere
co = cohere.Client(f"{api_key}") # You should add your cohere API Key here :))
texts = ['my search query']
response = co.embed(texts=texts, model='multilingual-22-12')
query_embedding = response.embeddings[0] # Get the embedding for the first text
```
## Performance
In the following table we compare the cohere multilingual-22-12 model with Elasticsearch version 8.6.0 lexical search (title and passage indexed as independent fields). Note that Elasticsearch doesn't support all languages that are part of the MIRACL dataset.
We compute nDCG@10 (a ranking-based metric) as well as hit@3: whether at least one relevant document appears in the top-3 results. We find hit@3 easier to interpret, as it gives the share of queries for which a relevant document is found among the top-3 results.
Note: MIRACL only annotated a small fraction of passages (10 per query) for relevancy. Especially for larger Wikipedias (like English), we often found many more relevant passages. This is known as annotation holes. Real nDCG@10 and hit@3 performance is likely higher than reported here.
| Model | cohere multilingual-22-12 nDCG@10 | cohere multilingual-22-12 hit@3 | ES 8.6.0 nDCG@10 | ES 8.6.0 hit@3 |
|---|---|---|---|---|
| miracl-ar | 64.2 | 75.2 | 46.8 | 56.2 |
| miracl-bn | 61.5 | 75.7 | 49.2 | 60.1 |
| miracl-de | 44.4 | 60.7 | 19.6 | 29.8 |
| miracl-en | 44.6 | 62.2 | 30.2 | 43.2 |
| miracl-es | 47.0 | 74.1 | 27.0 | 47.2 |
| miracl-fi | 63.7 | 76.2 | 51.4 | 61.6 |
| miracl-fr | 46.8 | 57.1 | 17.0 | 21.6 |
| miracl-hi | 50.7 | 62.9 | 41.0 | 48.9 |
| miracl-id | 44.8 | 63.8 | 39.2 | 54.7 |
| miracl-ru | 49.2 | 66.9 | 25.4 | 36.7 |
| **Avg** | 51.7 | 67.5 | 34.7 | 46.0 |
Further languages (not supported by Elasticsearch):
| Model | cohere multilingual-22-12 nDCG@10 | cohere multilingual-22-12 hit@3 |
|---|---|---|
| miracl-fa | 44.8 | 53.6 |
| miracl-ja | 49.0 | 61.0 |
| miracl-ko | 50.9 | 64.8 |
| miracl-sw | 61.4 | 74.5 |
| miracl-te | 67.8 | 72.3 |
| miracl-th | 60.2 | 71.9 |
| miracl-yo | 56.4 | 62.2 |
| miracl-zh | 43.8 | 56.5 |
| **Avg** | 54.3 | 64.6 |
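The per-query metrics behind these tables can be computed from a ranked relevance list; a minimal sketch (not the official MIRACL evaluation code, which handles graded judgments and pooling):

```python
import math

def ndcg_at_k(ranked_rels, k=10):
    """nDCG@k for one query; ranked_rels[i] is the relevance (here 0/1)
    of the document returned at rank i."""
    dcg = sum(r / math.log2(i + 2) for i, r in enumerate(ranked_rels[:k]))
    ideal = sorted(ranked_rels, reverse=True)
    idcg = sum(r / math.log2(i + 2) for i, r in enumerate(ideal[:k]))
    return dcg / idcg if idcg > 0 else 0.0

def hit_at_k(ranked_rels, k=3):
    """1 if at least one relevant document is among the top-k results."""
    return int(any(r > 0 for r in ranked_rels[:k]))
```

Averaging these values over all queries gives the dataset-level numbers reported above.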
| [
-0.638614296913147,
-0.8295241594314575,
0.3249386250972748,
0.23641127347946167,
-0.05931806191802025,
-0.05849802494049072,
-0.32141202688217163,
-0.4966791868209839,
0.5616434812545776,
0.23018182814121246,
-0.5476011037826538,
-0.9981316328048706,
-0.6966386437416077,
0.338775634765625... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Cohere/miracl-ru-corpus-22-12 | Cohere | 2023-02-06T11:56:20Z | 14 | 0 | null | [
"task_categories:text-retrieval",
"task_ids:document-retrieval",
"annotations_creators:expert-generated",
"multilinguality:multilingual",
"language:ru",
"license:apache-2.0",
"region:us"
] | 2023-02-06T11:56:20Z | 2023-01-31T11:24:36.000Z | 2023-01-31T11:24:36 | ---
annotations_creators:
- expert-generated
language:
- ru
multilinguality:
- multilingual
size_categories: []
source_datasets: []
tags: []
task_categories:
- text-retrieval
license:
- apache-2.0
task_ids:
- document-retrieval
---
# MIRACL (ru) embedded with cohere.ai `multilingual-22-12` encoder
We encoded the [MIRACL dataset](https://huggingface.co/miracl) using the [cohere.ai](https://txt.cohere.ai/multilingual/) `multilingual-22-12` embedding model.
The query embeddings can be found in [Cohere/miracl-ru-queries-22-12](https://huggingface.co/datasets/Cohere/miracl-ru-queries-22-12) and the corpus embeddings can be found in [Cohere/miracl-ru-corpus-22-12](https://huggingface.co/datasets/Cohere/miracl-ru-corpus-22-12).
For the original datasets, see [miracl/miracl](https://huggingface.co/datasets/miracl/miracl) and [miracl/miracl-corpus](https://huggingface.co/datasets/miracl/miracl-corpus).
Dataset info:
> MIRACL 🌍🙌🌏 (Multilingual Information Retrieval Across a Continuum of Languages) is a multilingual retrieval dataset that focuses on search across 18 different languages, which collectively encompass over three billion native speakers around the world.
>
> The corpus for each language is prepared from a Wikipedia dump, where we keep only the plain text and discard images, tables, etc. Each article is segmented into multiple passages using WikiExtractor based on natural discourse units (e.g., `\n\n` in the wiki markup). Each of these passages comprises a "document" or unit of retrieval. We preserve the Wikipedia article title of each passage.
## Embeddings
We compute for `title+" "+text` the embeddings using our `multilingual-22-12` embedding model, a state-of-the-art model that works for semantic search in 100 languages. If you want to learn more about this model, have a look at [cohere.ai multilingual embedding model](https://txt.cohere.ai/multilingual/).
## Loading the dataset
In [miracl-ru-corpus-22-12](https://huggingface.co/datasets/Cohere/miracl-ru-corpus-22-12) we provide the corpus embeddings. Note, depending on the selected split, the respective files can be quite large.
You can either load the dataset like this:
```python
from datasets import load_dataset
docs = load_dataset(f"Cohere/miracl-ru-corpus-22-12", split="train")
```
Or you can stream it without downloading it first:
```python
from datasets import load_dataset
docs = load_dataset(f"Cohere/miracl-ru-corpus-22-12", split="train", streaming=True)
for doc in docs:
docid = doc['docid']
title = doc['title']
text = doc['text']
emb = doc['emb']
```
## Search
Have a look at [miracl-ru-queries-22-12](https://huggingface.co/datasets/Cohere/miracl-ru-queries-22-12) where we provide the query embeddings for the MIRACL dataset.
To search in the documents, you must use the **dot product** as the similarity function: compare the query embeddings with the document embeddings, either via a vector database (recommended) or by computing the dot product directly.
A full search example:
```python
# Attention! For large datasets, this requires a lot of memory to store
# all document embeddings and to compute the dot product scores.
# Only use this for smaller datasets. For large datasets, use a vector DB
from datasets import load_dataset
import torch
#Load documents + embeddings
docs = load_dataset(f"Cohere/miracl-ru-corpus-22-12", split="train")
doc_embeddings = torch.tensor(docs['emb'])
# Load queries
queries = load_dataset(f"Cohere/miracl-ru-queries-22-12", split="dev")
# Select the first query as example
qid = 0
query = queries[qid]
query_embedding = torch.tensor([query['emb']])  # shape (1, dim): only the selected query
# Compute dot score between query embedding and document embeddings
dot_scores = torch.mm(query_embedding, doc_embeddings.transpose(0, 1))
top_k = torch.topk(dot_scores, k=3)
# Print results
print("Query:", query['query'])
for doc_id in top_k.indices[0].tolist():
print(docs[doc_id]['title'])
print(docs[doc_id]['text'])
```
You can get embeddings for new queries using our API:
```python
#Run: pip install cohere
import cohere
co = cohere.Client(f"{api_key}") # You should add your cohere API Key here :))
texts = ['my search query']
response = co.embed(texts=texts, model='multilingual-22-12')
query_embedding = response.embeddings[0] # Get the embedding for the first text
```
## Performance
In the following table we compare the cohere multilingual-22-12 model with Elasticsearch version 8.6.0 lexical search (title and passage indexed as independent fields). Note that Elasticsearch doesn't support all languages that are part of the MIRACL dataset.
We compute nDCG@10 (a ranking-based metric) as well as hit@3: whether at least one relevant document appears in the top-3 results. We find hit@3 easier to interpret, as it gives the share of queries for which a relevant document is found among the top-3 results.
Note: MIRACL only annotated a small fraction of passages (10 per query) for relevancy. Especially for larger Wikipedias (like English), we often found many more relevant passages. This is known as annotation holes. Real nDCG@10 and hit@3 performance is likely higher than reported here.
| Model | cohere multilingual-22-12 nDCG@10 | cohere multilingual-22-12 hit@3 | ES 8.6.0 nDCG@10 | ES 8.6.0 hit@3 |
|---|---|---|---|---|
| miracl-ar | 64.2 | 75.2 | 46.8 | 56.2 |
| miracl-bn | 61.5 | 75.7 | 49.2 | 60.1 |
| miracl-de | 44.4 | 60.7 | 19.6 | 29.8 |
| miracl-en | 44.6 | 62.2 | 30.2 | 43.2 |
| miracl-es | 47.0 | 74.1 | 27.0 | 47.2 |
| miracl-fi | 63.7 | 76.2 | 51.4 | 61.6 |
| miracl-fr | 46.8 | 57.1 | 17.0 | 21.6 |
| miracl-hi | 50.7 | 62.9 | 41.0 | 48.9 |
| miracl-id | 44.8 | 63.8 | 39.2 | 54.7 |
| miracl-ru | 49.2 | 66.9 | 25.4 | 36.7 |
| **Avg** | 51.7 | 67.5 | 34.7 | 46.0 |
Further languages (not supported by Elasticsearch):
| Model | cohere multilingual-22-12 nDCG@10 | cohere multilingual-22-12 hit@3 |
|---|---|---|
| miracl-fa | 44.8 | 53.6 |
| miracl-ja | 49.0 | 61.0 |
| miracl-ko | 50.9 | 64.8 |
| miracl-sw | 61.4 | 74.5 |
| miracl-te | 67.8 | 72.3 |
| miracl-th | 60.2 | 71.9 |
| miracl-yo | 56.4 | 62.2 |
| miracl-zh | 43.8 | 56.5 |
| **Avg** | 54.3 | 64.6 |
| [
-0.60527503490448,
-0.8005921244621277,
0.32767659425735474,
0.23987498879432678,
-0.05017693713307381,
-0.07492946833372116,
-0.29386064410209656,
-0.4929465353488922,
0.546332836151123,
0.18042269349098206,
-0.5685477256774902,
-0.9987897872924805,
-0.6929963231086731,
0.349084734916687,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Basvoju/SemEval2018Task7 | Basvoju | 2023-02-03T12:59:36Z | 14 | 0 | acronym-identification | [
"task_categories:text-classification",
"task_ids:entity-linking-classification",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"language:en",
"license:other",
"Relation Classification",
"Relation extraction",
"Scien... | 2023-02-03T12:59:36Z | 2023-01-31T22:13:20.000Z | 2023-01-31T22:13:20 | ---
annotations_creators:
- expert-generated
language:
- en
language_creators:
- found
license:
- other
multilinguality:
- monolingual
paperswithcode_id: acronym-identification
pretty_name: >-
Semeval2018Task7 is a dataset that describes the Semantic Relation Extraction
and Classification in Scientific Papers
size_categories:
- 1K<n<10K
source_datasets: []
tags:
- Relation Classification
- Relation extraction
- Scientific papers
- Research papers
task_categories:
- text-classification
task_ids:
- entity-linking-classification
train-eval-index:
- col_mapping:
labels: tags
tokens: tokens
config: default
splits:
eval_split: test
task: text-classification
task_id: entity_extraction
---
# Dataset Card for SemEval2018Task7
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://lipn.univ-paris13.fr/~gabor/semeval2018task7/](https://lipn.univ-paris13.fr/~gabor/semeval2018task7/)
- **Repository:** [https://github.com/gkata/SemEval2018Task7/tree/testing](https://github.com/gkata/SemEval2018Task7/tree/testing)
- **Paper:** [SemEval-2018 Task 7: Semantic Relation Extraction and Classification in Scientific Papers](https://aclanthology.org/S18-1111/)
- **Leaderboard:** [https://competitions.codalab.org/competitions/17422#learn_the_details-overview](https://competitions.codalab.org/competitions/17422#learn_the_details-overview)
- **Size of downloaded dataset files:** 1.93 MB
### Dataset Summary
SemEval2018Task7 is a dataset for semantic relation extraction and classification in scientific papers (SemEval-2018 Task 7).
The challenge focuses on domain-specific semantic relations and includes three different subtasks. The subtasks were designed so as to compare and quantify the effect of different pre-processing steps on the relation classification results. We expect the task to be relevant for a broad range of researchers working on extracting specialized knowledge from domain corpora, for example but not limited to scientific or bio-medical information extraction. The task attracted a total of 32 participants, with 158 submissions across different scenarios.
The three subtasks are:
- Subtask 1.1: Relation classification on clean data
- In the training data, semantic relations are manually annotated between entities.
- In the test data, only entity annotations and unlabeled relation instances are given.
- Given a scientific publication, the task is to predict the semantic relation between the entities.
- Subtask 1.2: Relation classification on noisy data
- Entity occurrences are automatically annotated in both the training and the test data.
- The task is to predict the semantic relation between the entities.
- Subtask 2: Metrics for the extraction and classification scenario
- Evaluation of relation extraction
- Evaluation of relation classification
The relation types are USAGE, RESULT, MODEL-FEATURE, PART_WHOLE, TOPIC, and COMPARE.
The following example shows a text snippet with the information provided in the test data:
Korean, a \<entity id="H01-1041.10">verb final language\</entity> with \<entity id="H01-1041.11">overt case markers\</entity> (...)
- A relation instance is identified by the unique identifiers of the entities in the pair, e.g. (H01-1041.10, H01-1041.11).
- The information to be predicted is the relation class label: MODEL-FEATURE(H01-1041.10, H01-1041.11).
For details, see the paper https://aclanthology.org/S18-1111/.
### Supported Tasks and Leaderboards
- **Tasks:** Relation extraction and classification in scientific papers
- **Leaderboards:** [https://competitions.codalab.org/competitions/17422#learn_the_details-overview](https://competitions.codalab.org/competitions/17422#learn_the_details-overview)
### Languages
The language in the dataset is English.
## Dataset Structure
### Data Instances
#### subtask_1.1
- **Size of downloaded dataset files:** 714 KB
An example of 'train' looks as follows:
```python
{
"id": "H01-1041",
"title": "'Interlingua-Based Broad-Coverage Korean-to-English Translation in CCLING'",
"abstract": 'At MIT Lincoln Laboratory, we have been developing a Korean-to-English machine translation system CCLINC (Common Coalition Language System at Lincoln Laboratory) . The CCLINC Korean-to-English translation system consists of two core modules , language understanding and generation modules mediated by a language neutral meaning representation called a semantic frame . The key features of the system include: (i) Robust efficient parsing of Korean (a verb final language with overt case markers , relatively free word order , and frequent omissions of arguments ). (ii) High quality translation via word sense disambiguation and accurate word order generation of the target language . (iii) Rapid system development and porting to new domains via knowledge-based automated acquisition of grammars . Having been trained on Korean newspaper articles on missiles and chemical biological warfare, the system produces the translation output sufficient for content understanding of the original document.',
"entities": [{'id': 'H01-1041.1', 'char_start': 54, 'char_end': 97},
{'id': 'H01-1041.2', 'char_start': 99, 'char_end': 161},
{'id': 'H01-1041.3', 'char_start': 169, 'char_end': 211},
{'id': 'H01-1041.4', 'char_start': 229, 'char_end': 240},
{'id': 'H01-1041.5', 'char_start': 244, 'char_end': 288},
{'id': 'H01-1041.6', 'char_start': 304, 'char_end': 342},
{'id': 'H01-1041.7', 'char_start': 353, 'char_end': 366},
{'id': 'H01-1041.8', 'char_start': 431, 'char_end': 437},
{'id': 'H01-1041.9', 'char_start': 442, 'char_end': 447},
{'id': 'H01-1041.10', 'char_start': 452, 'char_end': 470},
{'id': 'H01-1041.11', 'char_start': 477, 'char_end': 494},
{'id': 'H01-1041.12', 'char_start': 509, 'char_end': 523},
{'id': 'H01-1041.13', 'char_start': 553, 'char_end': 561},
{'id': 'H01-1041.14', 'char_start': 584, 'char_end': 594},
{'id': 'H01-1041.15', 'char_start': 600, 'char_end': 624},
{'id': 'H01-1041.16', 'char_start': 639, 'char_end': 659},
{'id': 'H01-1041.17', 'char_start': 668, 'char_end': 682},
{'id': 'H01-1041.18', 'char_start': 692, 'char_end': 715},
{'id': 'H01-1041.19', 'char_start': 736, 'char_end': 742},
{'id': 'H01-1041.20', 'char_start': 748, 'char_end': 796},
{'id': 'H01-1041.21', 'char_start': 823, 'char_end': 847},
{'id': 'H01-1041.22', 'char_start': 918, 'char_end': 935},
{'id': 'H01-1041.23', 'char_start': 981, 'char_end': 997}],
 "relation": [{'label': 3, 'arg1': 'H01-1041.3', 'arg2': 'H01-1041.4', 'reverse': True},
 {'label': 0, 'arg1': 'H01-1041.8', 'arg2': 'H01-1041.9', 'reverse': False},
 {'label': 2, 'arg1': 'H01-1041.10', 'arg2': 'H01-1041.11', 'reverse': True},
 {'label': 0, 'arg1': 'H01-1041.14', 'arg2': 'H01-1041.15', 'reverse': True}]
}
```
#### subtask_1.2
- **Size of downloaded dataset files:** 1.00 MB
An example of 'train' looks as follows:
```python
{'id': 'L08-1450',
'title': '\nA LAF/GrAF based Encoding Scheme for underspecified Representations of syntactic Annotations.\n',
'abstract': 'Data models and encoding formats for syntactically annotated text corpora need to deal with syntactic ambiguity; underspecified representations are particularly well suited for the representation of ambiguousdata because they allow for high informational efficiency. We discuss the issue of being informationally efficient, and the trade-off between efficient encoding of linguistic annotations and complete documentation of linguistic analyses. The main topic of this article is adata model and an encoding scheme based on LAF/GrAF ( Ide and Romary, 2006 ; Ide and Suderman, 2007 ) which provides a flexible framework for encoding underspecified representations. We show how a set of dependency structures and a set of TiGer graphs ( Brants et al., 2002 ) representing the readings of an ambiguous sentence can be encoded, and we discuss basic issues in querying corpora which are encoded using the framework presented here.\n',
'entities': [{'id': 'L08-1450.4', 'char_start': 0, 'char_end': 3},
{'id': 'L08-1450.5', 'char_start': 5, 'char_end': 10},
{'id': 'L08-1450.6', 'char_start': 25, 'char_end': 31},
{'id': 'L08-1450.7', 'char_start': 61, 'char_end': 64},
{'id': 'L08-1450.8', 'char_start': 66, 'char_end': 72},
{'id': 'L08-1450.9', 'char_start': 82, 'char_end': 85},
{'id': 'L08-1450.10', 'char_start': 92, 'char_end': 100},
{'id': 'L08-1450.11', 'char_start': 102, 'char_end': 110},
{'id': 'L08-1450.12', 'char_start': 128, 'char_end': 142},
{'id': 'L08-1450.13', 'char_start': 181, 'char_end': 194},
{'id': 'L08-1450.14', 'char_start': 208, 'char_end': 211},
{'id': 'L08-1450.15', 'char_start': 255, 'char_end': 264},
{'id': 'L08-1450.16', 'char_start': 282, 'char_end': 286},
{'id': 'L08-1450.17', 'char_start': 408, 'char_end': 420},
{'id': 'L08-1450.18', 'char_start': 425, 'char_end': 443},
{'id': 'L08-1450.19', 'char_start': 450, 'char_end': 453},
{'id': 'L08-1450.20', 'char_start': 455, 'char_end': 459},
{'id': 'L08-1450.21', 'char_start': 481, 'char_end': 484},
{'id': 'L08-1450.22', 'char_start': 486, 'char_end': 490},
{'id': 'L08-1450.23', 'char_start': 508, 'char_end': 513},
{'id': 'L08-1450.24', 'char_start': 515, 'char_end': 519},
{'id': 'L08-1450.25', 'char_start': 535, 'char_end': 537},
{'id': 'L08-1450.26', 'char_start': 559, 'char_end': 561},
{'id': 'L08-1450.27', 'char_start': 591, 'char_end': 598},
{'id': 'L08-1450.28', 'char_start': 611, 'char_end': 619},
{'id': 'L08-1450.29', 'char_start': 649, 'char_end': 663},
{'id': 'L08-1450.30', 'char_start': 687, 'char_end': 707},
{'id': 'L08-1450.31', 'char_start': 722, 'char_end': 726},
{'id': 'L08-1450.32', 'char_start': 801, 'char_end': 808},
{'id': 'L08-1450.33', 'char_start': 841, 'char_end': 845},
{'id': 'L08-1450.34', 'char_start': 847, 'char_end': 852},
{'id': 'L08-1450.35', 'char_start': 857, 'char_end': 864},
{'id': 'L08-1450.36', 'char_start': 866, 'char_end': 872},
{'id': 'L08-1450.37', 'char_start': 902, 'char_end': 910},
{'id': 'L08-1450.1', 'char_start': 12, 'char_end': 16},
{'id': 'L08-1450.2', 'char_start': 27, 'char_end': 32},
{'id': 'L08-1450.3', 'char_start': 72, 'char_end': 80}],
'relation': [{'label': 1,
'arg1': 'L08-1450.12',
'arg2': 'L08-1450.13',
'reverse': False},
{'label': 5, 'arg1': 'L08-1450.17', 'arg2': 'L08-1450.18', 'reverse': False},
{'label': 1, 'arg1': 'L08-1450.28', 'arg2': 'L08-1450.29', 'reverse': False},
{'label': 3, 'arg1': 'L08-1450.30', 'arg2': 'L08-1450.32', 'reverse': False},
{'label': 3, 'arg1': 'L08-1450.34', 'arg2': 'L08-1450.35', 'reverse': False},
{'label': 3, 'arg1': 'L08-1450.36', 'arg2': 'L08-1450.37', 'reverse': True}]}
```
### Data Fields
#### subtask_1_1
- `id`: the instance id of this abstract, a `string` feature.
- `title`: the title of this abstract, a `string` feature
- `abstract`: the abstract from the scientific papers, a `string` feature
- `entities`: the entity id's for the key phrases, a `list` of entity id's.
- `id`: the instance id of this sentence, a `string` feature.
- `char_start`: the 0-based character index where the entity starts, an `int` feature.
- `char_end`: the 0-based character index where the entity ends, an `int` feature.
- `relation`: the list of relations between key phrases in this abstract, a `list` of:
- `label`: the relation class label, a classification label feature.
- `arg1`: the entity id of this key phrase, a `string` feature.
- `arg2`: the entity id of the related key phrase, a `string` feature.
- `reverse`: `True` if the relation holds in the reverse direction (from `arg2` to `arg1`), otherwise `False`, a `bool` feature.
```python
RELATIONS = {"": 0, "USAGE": 1, "RESULT": 2, "MODEL-FEATURE": 3, "PART_WHOLE": 4, "TOPIC": 5, "COMPARE": 6}
```
#### subtask_1_2
- `id`: the instance id of this abstract, a `string` feature.
- `title`: the title of this abstract, a `string` feature.
- `abstract`: the abstract of the scientific paper, a `string` feature.
- `entities`: the entity annotations for the key phrases, a `list` of:
  - `id`: the id of this key phrase, a `string` feature.
  - `char_start`: the 0-based character offset where the entity starts, an `int` feature.
  - `char_end`: the 0-based character offset where the entity ends, an `int` feature.
- `relation`: the relations between key phrases in this abstract, a `list` of:
  - `label`: the relation type, a classification label.
  - `arg1`: the entity id of the first key phrase, a `string` feature.
  - `arg2`: the entity id of the related key phrase, a `string` feature.
  - `reverse`: `True` if the relation holds in the reverse direction (from `arg2` to `arg1`), otherwise `False`, a `bool` feature.
```python
RELATIONS = {"": 0, "USAGE": 1, "RESULT": 2, "MODEL-FEATURE": 3, "PART_WHOLE": 4, "TOPIC": 5, "COMPARE": 6}
```
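As an illustration of how these fields fit together, the snippet below recovers the key phrases of a relation from a made-up abstract (the ids, offsets, and the assumption that `char_end` is exclusive are illustrative only; check them against the real data):

```python
# Hypothetical mini-example shaped like the fields above; the ids and
# offsets are made up for illustration.
abstract = "Oral communication is ubiquitous and carries important information."
entities = [
    {"id": "X00-0000.1", "char_start": 0, "char_end": 18},
    {"id": "X00-0000.2", "char_start": 45, "char_end": 66},
]
# Inverse of the RELATIONS mapping shown above (index -> relation name).
ID2RELATION = ["", "USAGE", "RESULT", "MODEL-FEATURE", "PART_WHOLE", "TOPIC", "COMPARE"]

def phrase(entity):
    # Slice the abstract with the entity's character offsets
    # (assumption: char_end is exclusive; verify against the real data).
    return abstract[entity["char_start"]:entity["char_end"]]

relation = {"label": 1, "arg1": "X00-0000.1", "arg2": "X00-0000.2", "reverse": False}
by_id = {e["id"]: e for e in entities}
print(phrase(by_id[relation["arg1"]]), "--" + ID2RELATION[relation["label"]] + "->",
      phrase(by_id[relation["arg2"]]))
# Oral communication --USAGE-> important information
```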
### Data Splits
| Subtask     | Unit      | Train | Test |
|-------------|-----------|-------|------|
| subtask_1_1 | text      | 2807  | 3326 |
| subtask_1_1 | relations | 1228  | 1248 |
| subtask_1_2 | text      | 1196  | 1193 |
| subtask_1_2 | relations | 335   | 355  |
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@inproceedings{gabor-etal-2018-semeval,
title = "{S}em{E}val-2018 Task 7: Semantic Relation Extraction and Classification in Scientific Papers",
author = {G{\'a}bor, Kata and
Buscaldi, Davide and
Schumann, Anne-Kathrin and
QasemiZadeh, Behrang and
Zargayouna, Ha{\"\i}fa and
Charnois, Thierry},
booktitle = "Proceedings of the 12th International Workshop on Semantic Evaluation",
month = jun,
year = "2018",
address = "New Orleans, Louisiana",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/S18-1111",
doi = "10.18653/v1/S18-1111",
pages = "679--688",
abstract = "This paper describes the first task on semantic relation extraction and classification in scientific paper abstracts at SemEval 2018. The challenge focuses on domain-specific semantic relations and includes three different subtasks. The subtasks were designed so as to compare and quantify the effect of different pre-processing steps on the relation classification results. We expect the task to be relevant for a broad range of researchers working on extracting specialized knowledge from domain corpora, for example but not limited to scientific or bio-medical information extraction. The task attracted a total of 32 participants, with 158 submissions across different scenarios.",
}
```
### Contributions
Thanks to [@basvoju](https://github.com/basvoju) for adding this dataset. | [
-0.427877277135849,
-0.526709794998169,
0.4560549855232239,
0.1434955596923828,
-0.35437846183776855,
-0.13207924365997314,
-0.1507975459098816,
-0.4986710548400879,
0.4696914553642273,
0.39740216732025146,
-0.70353102684021,
-0.9712769389152527,
-0.5422398447990417,
0.38093215227127075,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
active-learning/labeled_samples | active-learning | 2023-03-09T13:01:17Z | 14 | 0 | null | [
"region:us"
] | 2023-03-09T13:01:17Z | 2023-02-03T10:34:06.000Z | 2023-02-03T10:34:06 | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': '0'
'1': '1'
'2': '2'
'3': '3'
'4': '4'
'5': '5'
'6': '6'
'7': '7'
'8': '8'
'9': '9'
splits:
- name: train
num_bytes: 56962.0
num_examples: 155
download_size: 42096
dataset_size: 56962.0
---
# Dataset Card for "labeled_samples"
This is a labeled dataset of images to train an image classification system. | [
-0.44179195165634155,
-0.15854227542877197,
-0.46774348616600037,
0.006108086556196213,
-0.6867730617523193,
0.22586305439472198,
0.31062009930610657,
0.017588017508387566,
0.12676382064819336,
0.6960676908493042,
-0.6334861516952515,
-0.7713401317596436,
-0.5770586729049683,
-0.1364329606... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Piro17/fer2013test | Piro17 | 2023-02-15T15:02:30Z | 14 | 0 | null | [
"region:us"
] | 2023-02-15T15:02:30Z | 2023-02-15T15:02:15.000Z | 2023-02-15T15:02:15 | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': angry
'1': disgust
'2': fear
'3': happy
'4': neutral
'5': sad
'6': surprise
splits:
- name: train
num_bytes: 11521798.802
num_examples: 7178
download_size: 10231842
dataset_size: 11521798.802
---
# Dataset Card for "fer2013test"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.8555836081504822,
-0.43030887842178345,
0.15011002123355865,
0.4958614706993103,
0.07066726684570312,
-0.17525120079517365,
0.44903773069381714,
-0.22873079776763916,
0.6339302659034729,
0.3589031994342804,
-1.0182033777236938,
-0.423040509223938,
-0.3475821614265442,
0.0309041198343038... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
martinms20/eurosat50-land-cover | martinms20 | 2023-02-24T16:30:39Z | 14 | 0 | null | [
"task_categories:image-classification",
"region:us"
] | 2023-02-24T16:30:39Z | 2023-02-24T16:26:41.000Z | 2023-02-24T16:26:41 | ---
task_categories:
- image-classification
---
# AutoTrain Dataset for project: klasifikasi-tutupan-lahan
## Dataset Description
This dataset has been automatically processed by AutoTrain for project klasifikasi-tutupan-lahan.
### Languages
The BCP-47 code for the dataset's language is unk.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"image": "<64x64 RGB PIL image>",
"target": 8
},
{
"image": "<64x64 RGB PIL image>",
"target": 0
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"image": "Image(decode=True, id=None)",
"target": "ClassLabel(names=['AnnualCrop', 'Forest', 'HerbaceousVegetation', 'Highway', 'Industrial', 'Pasture', 'PermanentCrop', 'Residential', 'River', 'SeaLake'], id=None)"
}
```
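Given the `ClassLabel` mapping above, a sample's integer `target` can be decoded back to its class name with plain Python (a minimal sketch; with the `datasets` library, the same mapping is available via `features["target"].int2str`):

```python
# Class names in index order, copied from the ClassLabel definition above.
EUROSAT_CLASSES = [
    "AnnualCrop", "Forest", "HerbaceousVegetation", "Highway", "Industrial",
    "Pasture", "PermanentCrop", "Residential", "River", "SeaLake",
]

def decode_target(target: int) -> str:
    """Map an integer class id to its land-cover name."""
    return EUROSAT_CLASSES[target]

# The first sample instance above has "target": 8:
print(decode_target(8))  # River
```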
### Dataset Splits
This dataset is split into a train and validation split. The split sizes are as follow:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 400 |
| valid | 100 |
| [
-0.49215078353881836,
0.08950648456811905,
-0.024706577882170677,
0.32935813069343567,
-0.46176356077194214,
0.32536616921424866,
-0.16552495956420898,
-0.4305630326271057,
-0.07253549247980118,
0.3944629728794098,
-0.6328462362289429,
-0.5659212470054626,
-0.5350514650344849,
0.1641012877... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
gokuls/wiki_book_corpus_raw_dataset_medium | gokuls | 2023-02-25T20:10:20Z | 14 | 0 | null | [
"region:us"
] | 2023-02-25T20:10:20Z | 2023-02-25T19:38:11.000Z | 2023-02-25T19:38:11 | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 12250082590.5
num_examples: 40231449
download_size: 7774316723
dataset_size: 12250082590.5
---
# Dataset Card for "wiki_book_corpus_raw_dataset_medium"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.533679723739624,
-0.30267512798309326,
-0.01785081997513771,
0.08927731215953827,
-0.32503142952919006,
-0.07149175554513931,
-0.2989442050457001,
-0.069184809923172,
0.7700781226158142,
0.5377635359764099,
-0.5474231243133545,
-0.8841959834098816,
-0.5909751057624817,
-0.05774546414613... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
jonathan-roberts1/UC_Merced_LandUse_MultiLabel | jonathan-roberts1 | 2023-04-03T16:33:24Z | 14 | 0 | null | [
"license:other",
"region:us"
] | 2023-04-03T16:33:24Z | 2023-02-27T15:54:34.000Z | 2023-02-27T15:54:34 | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
sequence:
class_label:
names:
'0': airplane
'1': bare soil
'2': buildings
'3': cars
'4': chaparral
'5': court
'6': dock
'7': field
'8': grass
'9': mobile home
'10': pavement
'11': sand
'12': sea
'13': ship
'14': tanks
'15': trees
'16': water
splits:
- name: train
num_bytes: 438859816.5
num_examples: 2100
download_size: 416309630
dataset_size: 438859816.5
license: other
---
# Dataset Card for "UC_Merced_LandUse_MultiLabel"
## Dataset Description
- **Paper:** [Bag-of-visual-words and spatial extensions for land-use classification](https://dl.acm.org/doi/pdf/10.1145/1869790.1869829)
- **Paper:** [Multilabel Remote Sensing Image Retrieval Using a Semisupervised Graph-Theoretic Method](https://ieeexplore.ieee.org/iel7/36/4358825/08089668.pdf)
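The `label` field of each sample (described in the YAML header above) is a sequence of class indices rather than a single class. A minimal sketch of decoding such a sequence, using a made-up example sample:

```python
# Label names in index order, copied from the ClassLabel definition above.
UCM_LABELS = [
    "airplane", "bare soil", "buildings", "cars", "chaparral", "court",
    "dock", "field", "grass", "mobile home", "pavement", "sand", "sea",
    "ship", "tanks", "trees", "water",
]

def decode_labels(label_ids):
    """Map a sample's sequence of class ids to label names."""
    return [UCM_LABELS[i] for i in label_ids]

# Hypothetical sample: a scene containing buildings, cars, grass and pavement.
print(decode_labels([2, 3, 8, 10]))  # ['buildings', 'cars', 'grass', 'pavement']
```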
### Licensing Information
Public Domain; “Map services and data available from U.S. Geological Survey, National Geospatial Program.”
## Citation Information
Imagery:
[Bag-of-visual-words and spatial extensions for land-use classification](https://dl.acm.org/doi/pdf/10.1145/1869790.1869829)
Multilabels:
[Multilabel Remote Sensing Image Retrieval Using a Semisupervised Graph-Theoretic Method](https://ieeexplore.ieee.org/iel7/36/4358825/08089668.pdf)
```
@inproceedings{yang2010bag,
title = {Bag-of-visual-words and spatial extensions for land-use classification},
author = {Yang, Yi and Newsam, Shawn},
year = 2010,
booktitle = {Proceedings of the 18th SIGSPATIAL international conference on advances in geographic information systems},
pages = {270--279}
}
@article{8089668,
title = {Multilabel Remote Sensing Image Retrieval Using a Semisupervised Graph-Theoretic Method},
author = {Chaudhuri, Bindita and Demir, Begüm and Chaudhuri, Subhasis and Bruzzone, Lorenzo},
year = 2018,
journal = {IEEE Transactions on Geoscience and Remote Sensing},
volume = 56,
number = 2,
pages = {1144--1158},
doi = {10.1109/TGRS.2017.2760909}
}
``` | [
-0.6853434443473816,
-0.5369502305984497,
0.3430284857749939,
-0.0058755455538630486,
-0.3351908326148987,
0.32018542289733887,
-0.31437814235687256,
-0.4822590947151184,
0.03375839442014694,
0.37236279249191284,
-0.04203615337610245,
-0.9282703399658203,
-0.7268902659416199,
-0.1710338294... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Duskfallcrew/DuskfallCrewArtStyle_Lora | Duskfallcrew | 2023-04-25T04:30:25Z | 14 | 0 | null | [
"task_categories:text-to-image",
"size_categories:1K<n<10K",
"language:en",
"license:creativeml-openrail-m",
"Art Style",
"duskfallcrew",
"region:us"
] | 2023-04-25T04:30:25Z | 2023-03-01T05:31:16.000Z | 2023-03-01T05:31:16 | ---
license: creativeml-openrail-m
task_categories:
- text-to-image
language:
- en
tags:
- Art Style
- duskfallcrew
pretty_name: Duskfallcrew Art Style Dataset & Lora
size_categories:
- 1K<n<10K
---
# Dataset Card for DuskfallCrewArtStyle_Lora
## Dataset Description
- **Homepage:** https://duskfallcrew.carrd.co/
- **Point of Contact:** See the Carrd website for contact info, or DM us on HF.
### Dataset Summary
This dataset is the basis for the LoRA in this repository.
### Supported Tasks and Leaderboards
Text to Image / Stable Diffusion/ LoRA
### Languages
English
### Source Data
### Personal and Sensitive Information
This is based on our own art, and while we're A-OK with you using it, you don't own the art within the dataset (though you may not care about that anyway).
## Considerations for Using the Data
### Social Impact of Dataset
Shitty Art!
### Discussion of Biases
It largely has non-binary features; we're not sure if it has any one specific gender. We have dissociative identity disorder, so largely the faces in here are either alters in our system or other systems we've done art for.
### Other Known Limitations
SHITTYART!
## Additional Information
### Licensing Information
While it's under the license listed, we do ask that you don't resell the dataset. You're responsible for your use of the dataset, and the faces within it. Your outputs are up to you.
### Citation Information
If you use the dataset, citation is nice, but it'd be even nicer if you gave us coffee! https://ko-fi.com/DUSKFALLcrew
| [
-0.13687460124492645,
-0.8258330821990967,
0.21394024789333344,
0.5624613165855408,
-0.33918529748916626,
0.19510100781917572,
0.30692628026008606,
-0.7232668399810791,
0.5632069706916809,
0.7341177463531494,
-0.8841023445129395,
-0.8841356039047241,
-0.4594377875328064,
-0.183152556419372... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
StemGene/eurosat-demo | StemGene | 2023-03-15T20:20:47Z | 14 | 0 | null | [
"region:us"
] | 2023-03-15T20:20:47Z | 2023-03-15T20:04:36.000Z | 2023-03-15T20:04:36 | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': AnnualCrop
'1': Forest
'2': HerbaceousVegetation
'3': Highway
'4': Industrial
'5': Pasture
'6': PermanentCrop
'7': Residential
'8': River
'9': SeaLake
splits:
- name: train
num_bytes: 92168360.0
num_examples: 27000
download_size: 0
dataset_size: 92168360.0
---
# Dataset Card for "eurosat-demo"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.8414227962493896,
-0.19318227469921112,
0.35342949628829956,
0.26124924421310425,
-0.21493539214134216,
-0.034770362079143524,
0.1158275306224823,
-0.11762873083353043,
0.8098520040512085,
0.3558275103569031,
-1.0067123174667358,
-0.8378922343254089,
-0.44547751545906067,
-0.20130451023... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
MiteshRege/indian_food_images | MiteshRege | 2023-03-18T09:55:42Z | 14 | 0 | null | [
"region:us"
] | 2023-03-18T09:55:42Z | 2023-03-18T09:27:58.000Z | 2023-03-18T09:27:58 | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': burger
'1': butter_naan
'2': chai
'3': chapati
'4': chole_bhature
'5': dal_makhani
'6': dhokla
'7': fried_rice
'8': idli
'9': jalebi
'10': kaathi_rolls
'11': kadai_paneer
'12': kulfi
'13': masala_dosa
'14': momos
'15': paani_puri
'16': pakode
'17': pav_bhaji
'18': pizza
'19': samosa
splits:
- name: train
num_bytes: 1478423639.5674334
num_examples: 5328
- name: test
num_bytes: 224186839.3925666
num_examples: 941
download_size: 1592823695
dataset_size: 1702610478.96
---
# Dataset Card for "indian_food_images"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.4587499797344208,
-0.2844526171684265,
0.04367287829518318,
0.20662842690944672,
-0.15003235638141632,
0.015425116755068302,
0.2855421006679535,
-0.2996344268321991,
1.0105538368225098,
0.39493802189826965,
-0.6513556241989136,
-0.75360506772995,
-0.7309369444847107,
-0.0750306323170661... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
arubenruben/portuguese_wikineural | arubenruben | 2023-04-10T13:45:47Z | 14 | 0 | null | [
"region:us"
] | 2023-04-10T13:45:47Z | 2023-03-21T16:07:03.000Z | 2023-03-21T16:07:03 | ---
dataset_info:
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
'7': B-MISC
'8': I-MISC
splits:
- name: train
num_bytes: 33140600
num_examples: 80560
- name: test
num_bytes: 4400517
num_examples: 10160
- name: validation
num_bytes: 4384834
num_examples: 10070
download_size: 10275737
dataset_size: 41925951
---
# Dataset Card for "portuguese_wikineural"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.5264198780059814,
-0.4025104343891144,
-0.14136309921741486,
0.42740264534950256,
-0.3756020963191986,
-0.12761324644088745,
0.05140892043709755,
-0.4616546034812927,
1.0855072736740112,
0.5523971319198608,
-0.737739086151123,
-0.8420768976211548,
-0.8199377059936523,
-0.173888102173805... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
theblackcat102/codex-math-qa | theblackcat102 | 2023-03-26T01:04:18Z | 14 | 13 | null | [
"task_categories:text2text-generation",
"task_categories:text-generation",
"language:en",
"license:other",
"codex-generated",
"code",
"mathematic",
"region:us"
] | 2023-03-26T01:04:18Z | 2023-03-22T00:56:14.000Z | 2023-03-22T00:56:14 | ---
license: other
task_categories:
- text2text-generation
- text-generation
language:
- en
tags:
- codex-generated
- code
- mathematic
---
# Codex Math QA
Solving math_qa with code-davinci-002 via Python programming.
[Since OpenAI decided to shut off code-davinci-002 behind Azure](https://mobile.twitter.com/Veqtor/status/1638081493099597824), this dataset aims to share the generation results from code-davinci-002, OpenAI's 176B code generation model.
### Data Splits
| name |train|validation|test|
|-------|----:|---------:|---:|
|main|25065| 4133|2985|
|rational| - | 4151|2985|
1. Basic prompts: question only
```
dataset = load_dataset("theblackcat102/codex-math-qa", "main")
```
**Example :**
```
the average weight of 8 person ' s increases by 1.5 kg when a new person comes in place of one of them weighing 75 kg. what might be the weight of the new person ?
Write a short snippet of python function which solve this problem. No need to explain the answer, let the code output the answer.
```
output
```python
def average_weight(weights):
total_weight = sum(weights)
return total_weight / len(weights)
def new_person_weight(weights):
average_weight = average_weight(weights)
new_person_weight = average_weight + 1.5
return new_person_weight
weights = [75, 80, 85, 90, 95]
print(new_person_weight(weights))
```
2. Prompts with rationale: include the `Rationale` text, which should help with planning the program
```
dataset = load_dataset("theblackcat102/codex-math-qa", "rational")
```
**Example :**
```
one copy machine can make 20 copies a minute, and a second copy machine makes 10 copies a minute. if the two copiers work together, how long would it take them to make 900 copies ?
Rationale: "total work done by both machines in a minute = 20 + 10 = 30 copies total number of copies required = 900 time = 900 / 30 = 30 mins answer d"
Write a short snippet of python function which solve this problem within 30 lines of code. Add comments in the function block for explaination.
```
output
```python
def copy_machine(copies, copy_speed):
"""
:param copies: number of copies required
:param copy_speed: speed of the copy machine
:return: time in minutes
"""
total_work = sum(copy_speed)
time = copies / total_work
return time
if __name__ == "__main__":
print(copy_machine(900, [20, 10]))
```
### Notes:
The generated results are unvalidated, raw code-davinci-002 outputs, so many of the answers are incorrect and some of the code has syntax errors. Validating them is left for future study; the aim of this dataset is to provide a source and reference for code-based math answering by code-davinci-002.
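Since the generations are unvalidated, one way to screen them is to execute each snippet and capture its stdout, then compare against the known answer. A minimal, illustrative sketch (real use should sandbox untrusted generated code):

```python
import contextlib
import io

def run_snippet(code: str):
    """Execute a generated snippet, returning (ok, captured_stdout)."""
    buf = io.StringIO()
    try:
        with contextlib.redirect_stdout(buf):
            # NOTE: never run untrusted generated code outside a sandbox.
            exec(code, {})
    except Exception:
        return False, buf.getvalue()
    return True, buf.getvalue()

# The copy-machine example above boils down to 900 copies at 30 copies/minute:
ok, out = run_snippet("print(900 / (20 + 10))")
print(ok, out.strip())  # True 30.0
```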
## Dataset Creation
The dataset was sourced from [math_qa](https://huggingface.co/datasets/math_qa), with prompts appended after each question asking for a Python solution to the answer. The aim is to provide a dataset for the kind of code-based work offloading seen in Galactica.
The generation config for code-davinci-002 is as follows:
| name | value|
|-------|----:|
|max_tokens| 2048 |
|temperature| 0.5 |
|top_p| 0.7 |
### Citation Information
```
@inproceedings{amini-etal-2019-mathqa,
title = "{M}ath{QA}: Towards Interpretable Math Word Problem Solving with Operation-Based Formalisms",
author = "Amini, Aida and
Gabriel, Saadia and
Lin, Shanchuan and
Koncel-Kedziorski, Rik and
Choi, Yejin and
Hajishirzi, Hannaneh",
booktitle = "Proceedings of the 2019 Conference of the North {A}merican Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)",
month = jun,
year = "2019",
address = "Minneapolis, Minnesota",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/N19-1245",
doi = "10.18653/v1/N19-1245",
pages = "2357--2367",
}
``` | [
-0.19329993426799774,
-0.560373067855835,
0.3362767994403839,
0.3111255168914795,
-0.26212456822395325,
-0.17984530329704285,
-0.003971071448177099,
-0.1847623735666275,
0.23122531175613403,
0.3481139838695526,
-0.767295777797699,
-0.042256977409124374,
-0.3571902811527252,
0.3547503352165... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Multimodal-Fatima/CUB_train | Multimodal-Fatima | 2023-03-22T02:08:06Z | 14 | 0 | null | [
"region:us"
] | 2023-03-22T02:08:06Z | 2023-03-22T02:07:02.000Z | 2023-03-22T02:07:02 | ---
dataset_info:
features:
- name: image
dtype: image
- name: description
dtype: string
- name: label
dtype:
class_label:
names:
'0': Black footed Albatross
'1': Laysan Albatross
'2': Sooty Albatross
'3': Groove billed Ani
'4': Crested Auklet
'5': Least Auklet
'6': Parakeet Auklet
'7': Rhinoceros Auklet
'8': Brewer Blackbird
'9': Red winged Blackbird
'10': Rusty Blackbird
'11': Yellow headed Blackbird
'12': Bobolink
'13': Indigo Bunting
'14': Lazuli Bunting
'15': Painted Bunting
'16': Cardinal
'17': Spotted Catbird
'18': Gray Catbird
'19': Yellow breasted Chat
'20': Eastern Towhee
'21': Chuck will Widow
'22': Brandt Cormorant
'23': Red faced Cormorant
'24': Pelagic Cormorant
'25': Bronzed Cowbird
'26': Shiny Cowbird
'27': Brown Creeper
'28': American Crow
'29': Fish Crow
'30': Black billed Cuckoo
'31': Mangrove Cuckoo
'32': Yellow billed Cuckoo
'33': Gray crowned Rosy Finch
'34': Purple Finch
'35': Northern Flicker
'36': Acadian Flycatcher
'37': Great Crested Flycatcher
'38': Least Flycatcher
'39': Olive sided Flycatcher
'40': Scissor tailed Flycatcher
'41': Vermilion Flycatcher
'42': Yellow bellied Flycatcher
'43': Frigatebird
'44': Northern Fulmar
'45': Gadwall
'46': American Goldfinch
'47': European Goldfinch
'48': Boat tailed Grackle
'49': Eared Grebe
'50': Horned Grebe
'51': Pied billed Grebe
'52': Western Grebe
'53': Blue Grosbeak
'54': Evening Grosbeak
'55': Pine Grosbeak
'56': Rose breasted Grosbeak
'57': Pigeon Guillemot
'58': California Gull
'59': Glaucous winged Gull
'60': Heermann Gull
'61': Herring Gull
'62': Ivory Gull
'63': Ring billed Gull
'64': Slaty backed Gull
'65': Western Gull
'66': Anna Hummingbird
'67': Ruby throated Hummingbird
'68': Rufous Hummingbird
'69': Green Violetear
'70': Long tailed Jaeger
'71': Pomarine Jaeger
'72': Blue Jay
'73': Florida Jay
'74': Green Jay
'75': Dark eyed Junco
'76': Tropical Kingbird
'77': Gray Kingbird
'78': Belted Kingfisher
'79': Green Kingfisher
'80': Pied Kingfisher
'81': Ringed Kingfisher
'82': White breasted Kingfisher
'83': Red legged Kittiwake
'84': Horned Lark
'85': Pacific Loon
'86': Mallard
'87': Western Meadowlark
'88': Hooded Merganser
'89': Red breasted Merganser
'90': Mockingbird
'91': Nighthawk
'92': Clark Nutcracker
'93': White breasted Nuthatch
'94': Baltimore Oriole
'95': Hooded Oriole
'96': Orchard Oriole
'97': Scott Oriole
'98': Ovenbird
'99': Brown Pelican
'100': White Pelican
'101': Western Wood Pewee
'102': Sayornis
'103': American Pipit
'104': Whip poor Will
'105': Horned Puffin
'106': Common Raven
'107': White necked Raven
'108': American Redstart
'109': Geococcyx
'110': Loggerhead Shrike
'111': Great Grey Shrike
'112': Baird Sparrow
'113': Black throated Sparrow
'114': Brewer Sparrow
'115': Chipping Sparrow
'116': Clay colored Sparrow
'117': House Sparrow
'118': Field Sparrow
'119': Fox Sparrow
'120': Grasshopper Sparrow
'121': Harris Sparrow
'122': Henslow Sparrow
'123': Le Conte Sparrow
'124': Lincoln Sparrow
'125': Nelson Sharp tailed Sparrow
'126': Savannah Sparrow
'127': Seaside Sparrow
'128': Song Sparrow
'129': Tree Sparrow
'130': Vesper Sparrow
'131': White crowned Sparrow
'132': White throated Sparrow
'133': Cape Glossy Starling
'134': Bank Swallow
'135': Barn Swallow
'136': Cliff Swallow
'137': Tree Swallow
'138': Scarlet Tanager
'139': Summer Tanager
'140': Artic Tern
'141': Black Tern
'142': Caspian Tern
'143': Common Tern
'144': Elegant Tern
'145': Forsters Tern
'146': Least Tern
'147': Green tailed Towhee
'148': Brown Thrasher
'149': Sage Thrasher
'150': Black capped Vireo
'151': Blue headed Vireo
'152': Philadelphia Vireo
'153': Red eyed Vireo
'154': Warbling Vireo
'155': White eyed Vireo
'156': Yellow throated Vireo
'157': Bay breasted Warbler
'158': Black and white Warbler
'159': Black throated Blue Warbler
'160': Blue winged Warbler
'161': Canada Warbler
'162': Cape May Warbler
'163': Cerulean Warbler
'164': Chestnut sided Warbler
'165': Golden winged Warbler
'166': Hooded Warbler
'167': Kentucky Warbler
'168': Magnolia Warbler
'169': Mourning Warbler
'170': Myrtle Warbler
'171': Nashville Warbler
'172': Orange crowned Warbler
'173': Palm Warbler
'174': Pine Warbler
'175': Prairie Warbler
'176': Prothonotary Warbler
'177': Swainson Warbler
'178': Tennessee Warbler
'179': Wilson Warbler
'180': Worm eating Warbler
'181': Yellow Warbler
'182': Northern Waterthrush
'183': Louisiana Waterthrush
'184': Bohemian Waxwing
'185': Cedar Waxwing
'186': American Three toed Woodpecker
'187': Pileated Woodpecker
'188': Red bellied Woodpecker
'189': Red cockaded Woodpecker
'190': Red headed Woodpecker
'191': Downy Woodpecker
'192': Bewick Wren
'193': Cactus Wren
'194': Carolina Wren
'195': House Wren
'196': Marsh Wren
'197': Rock Wren
'198': Winter Wren
'199': Common Yellowthroat
- name: file_name
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 583337273.046
num_examples: 5994
download_size: 583734869
dataset_size: 583337273.046
---
# Dataset Card for "CUB_train"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.5708028078079224,
-0.10765746235847473,
0.06882014125585556,
0.5568844079971313,
-0.22886206209659576,
0.18270452320575714,
0.3440098464488983,
-0.06374897062778473,
0.7012922763824463,
0.36369091272354126,
-0.9124994874000549,
-0.6094102263450623,
-0.5226218104362488,
-0.34035140275955... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
pszemraj/scientific_lay_summarisation-elife-norm | pszemraj | 2023-04-06T23:34:11Z | 14 | 3 | null | [
"task_categories:summarization",
"task_categories:text2text-generation",
"size_categories:10K<n<100K",
"source_datasets:tomasg25/scientific_lay_summarisation",
"language:en",
"license:mit",
"region:us"
] | 2023-04-06T23:34:11Z | 2023-03-29T16:26:37.000Z | 2023-03-29T16:26:37 | ---
license: mit
task_categories:
- summarization
- text2text-generation
language:
- en
size_categories:
- 10K<n<100K
source_datasets: tomasg25/scientific_lay_summarisation
---
# scientific_lay_summarisation - elife - normalized
This is the "_elife_" split. For more details, refer to the [PLOS split README](https://huggingface.co/datasets/pszemraj/scientific_lay_summarisation-plos-norm)
## Contents
Load with `datasets`:
```python
from datasets import load_dataset
# If the dataset is gated/private, make sure you have run huggingface-cli login
dataset = load_dataset("pszemraj/scientific_lay_summarisation-elife-norm")
dataset
```
Output:
```python
DatasetDict({
train: Dataset({
features: ['article', 'summary', 'section_headings', 'keywords', 'year', 'title', 'article_length', 'summary_length'],
num_rows: 4346
})
test: Dataset({
features: ['article', 'summary', 'section_headings', 'keywords', 'year', 'title', 'article_length', 'summary_length'],
num_rows: 241
})
validation: Dataset({
features: ['article', 'summary', 'section_headings', 'keywords', 'year', 'title', 'article_length', 'summary_length'],
num_rows: 241
})
})
```
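The `article_length` and `summary_length` columns make it easy to drop over-long examples before training. A minimal sketch using plain dicts with hypothetical values in place of a `datasets.Dataset` (with `datasets` itself, the equivalent would be `dataset.filter`):

```python
def filter_by_length(rows, max_article_tokens=8192):
    """Keep only rows whose article fits within a model's context budget."""
    return [r for r in rows if r["article_length"] <= max_article_tokens]

rows = [  # hypothetical rows mirroring the columns listed above
    {"title": "short paper", "article_length": 4200, "summary_length": 310},
    {"title": "long paper", "article_length": 17000, "summary_length": 280},
]
print([r["title"] for r in filter_by_length(rows)])  # ['short paper']
```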
## Lengths
Train set:

| [
-0.4464036822319031,
-0.4053535759449005,
-0.07124586403369904,
0.39316505193710327,
-0.4554915428161621,
-0.07984382659196854,
-0.2906795144081116,
-0.03546493873000145,
0.7585017085075378,
0.350328266620636,
-0.49108028411865234,
-0.6761195063591003,
-0.7984153032302856,
0.64001435041427... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
magicgh/alpaca-cleaned-random-25 | magicgh | 2023-04-01T07:23:31Z | 14 | 0 | null | [
"task_categories:text-generation",
"language:en",
"license:cc-by-4.0",
"instruction-finetuning",
"region:us"
] | 2023-04-01T07:23:31Z | 2023-04-01T06:54:02.000Z | 2023-04-01T06:54:02 | ---
license: cc-by-4.0
language:
- en
tags:
- instruction-finetuning
pretty_name: Alpaca-Cleaned-Random-25
task_categories:
- text-generation
---
| [
-0.1285335123538971,
-0.1861683875322342,
0.6529128551483154,
0.49436232447624207,
-0.19319400191307068,
0.23607441782951355,
0.36072009801864624,
0.05056373029947281,
0.5793656706809998,
0.7400146722793579,
-0.650810182094574,
-0.23784008622169495,
-0.7102247476577759,
-0.0478255338966846... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
axiong/pmc_oa | axiong | 2023-08-22T17:42:06Z | 14 | 18 | null | [
"region:us"
] | 2023-08-22T17:42:06Z | 2023-04-02T02:30:31.000Z | 2023-04-02T02:30:31 | # PMC-OA Dataset
**News: We have released the PMC-OA dataset. You can choose the subset specifically.**
**P.S.** The Hugging Face dataset viewer struggles once the dataset gets large, so we sampled a subset that can be viewed directly on the web.
Click [PMC-OA-Demo](https://huggingface.co/datasets/axiong/pmc_oa_demo) to view it.
[中文文档](./README.zh.md)
- [PMC-OA Dataset](#pmc-oa-dataset)
- [Model Zoo](#model-zoo)
- [Dataset Structure](#dataset-structure)
- [Sample](#sample)
## Model Zoo
Check it out if you want to load a model pretrained on PMC-OA directly.
We plan to release more models pretrained on PMC-OA. Feel free to reach out to us if the model you want is not yet included in the model zoo.
Also, we express our thanks to the help from the community.
| Model | Link | Provider |
| --- | --- | --- |
| ViT-L-14 | https://huggingface.co/ryanyip7777/pmc_vit_l_14 | @ryanyip7777 |
## Dataset Structure
**PMC-OA** (separated images, separated captions).
- `images.zip`: images folder
- `pmc_oa.jsonl`: dataset file of pmc-oa
- `pmc_oa_beta.jsonl`: dataset file of pmc-oa-beta
~~- `train.jsonl`: metafile of train set~~
~~- `valid.jsonl`: metafile of valid set~~
~~- `test.jsonl`: metafile of test set~~
The difference between PMC-OA & PMC-OA-Beta lies in the methods of processing captions.
In PMC-OA, we utilize ChatGPT to help us divide compound captions into separate ones,
while PMC-OA-Beta keeps all the compound ones without division.
## Sample
A row in `pmc_oa.jsonl` is shown below:
```python
{
"image": "PMC212319_Fig3_4.jpg",
"caption": "A. Real time image of the translocation of ARF1-GFP to the plasma membrane ...",
}
```
Explanation of each key:
- image: path to the image
- caption: caption text corresponding to the image
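Each metafile can be read with the standard library alone; a minimal sketch (the helper name and file handling are assumptions, not part of the release):

```python
import json

def load_pmc_oa(path):
    """Read a PMC-OA style JSONL file: one {"image": ..., "caption": ...} record per line."""
    with open(path, encoding="utf-8") as f:
        return [json.loads(line) for line in f if line.strip()]
```

The same loop works for `pmc_oa_beta.jsonl`, since both files share the record layout shown above.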
| [
-0.7161734104156494,
-0.4382700026035309,
0.1298418939113617,
0.44899335503578186,
-0.5159133076667786,
-0.09455911070108414,
0.18593744933605194,
-0.24738852679729462,
0.2783305048942566,
0.6234467029571533,
-0.8673391342163086,
-0.6233710050582886,
-0.34993648529052734,
0.092119298875331... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
albertvillanova/tmp-imagefolder | albertvillanova | 2023-04-03T09:03:23Z | 14 | 0 | null | [
"region:us"
] | 2023-04-03T09:03:23Z | 2023-04-03T08:56:10.000Z | 2023-04-03T08:56:10 | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': bharatanatyam
'1': kathak
splits:
- name: train
num_bytes: 18458.0
num_examples: 2
- name: validation
num_bytes: 8463.0
num_examples: 1
download_size: 29860
dataset_size: 26921.0
---
# Dataset Card for "tmp-imagefolder"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.447882741689682,
-0.07943598181009293,
0.21184439957141876,
0.23123641312122345,
-0.5343468189239502,
0.19554123282432556,
0.48112937808036804,
0.09820420295000076,
0.6402719020843506,
0.35842978954315186,
-0.7802523970603943,
-0.8725751638412476,
-0.8307746052742004,
-0.339672207832336... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
spongus/milly-images | spongus | 2023-04-15T17:41:37Z | 14 | 2 | null | [
"task_categories:text-to-image",
"task_categories:image-classification",
"task_categories:image-segmentation",
"size_categories:n<1K",
"language:en",
"license:unlicense",
"image",
"cat",
"silly",
"calico",
"region:us"
] | 2023-04-15T17:41:37Z | 2023-04-06T03:45:01.000Z | 2023-04-06T03:45:01 | ---
license: unlicense
tags:
- image
- cat
- silly
- calico
pretty_name: Milly Images
task_categories:
- text-to-image
- image-classification
- image-segmentation
language:
- en
size_categories:
- n<1K
---
A collection of images of a very silly cat; these are all from @fatfatmillycat on Twitter. Intended to be used with stable-diffusion-v1-4 | [
-0.5743026733398438,
-0.49303245544433594,
0.5348048806190491,
0.3802662789821625,
-0.47833845019340515,
-0.00026284551131539047,
0.41417500376701355,
-0.03145241364836693,
1.5285907983779907,
0.9484140276908875,
-0.2380966693162918,
-0.004159851465374231,
-0.3289458155632019,
0.2134738713... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
asgaardlab/GameplayCaptions | asgaardlab | 2023-04-07T14:38:12Z | 14 | 4 | null | [
"task_categories:image-to-text",
"task_categories:text-to-image",
"size_categories:10K<n<100K",
"language:en",
"license:apache-2.0",
"Gameplay",
"region:us"
] | 2023-04-07T14:38:12Z | 2023-04-07T04:01:01.000Z | 2023-04-07T04:01:01 | ---
dataset_info:
features:
- name: img_id
dtype: string
- name: game
dtype: string
- name: image
dtype: image
- name: blip2-opt-6.7b_captions.csv
dtype: string
- name: coca_captions.csv
dtype: string
- name: git-large-coco_captions.csv
dtype: string
- name: git-large-r-textcaps_captions.csv
dtype: string
- name: vit-gpt2_captions.csv
dtype: string
splits:
- name: validation
num_bytes: 69110393094.684
num_examples: 75979
download_size: 66660916127
dataset_size: 69110393094.684
license: apache-2.0
task_categories:
- image-to-text
- text-to-image
language:
- en
tags:
- Gameplay
pretty_name: Gameplay Captions
size_categories:
- 10K<n<100K
---
# Dataset Card for "Gameplay Captions"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.5677613615989685,
-0.21642492711544037,
0.36468032002449036,
0.4652237594127655,
-0.20861905813217163,
0.2950376272201538,
0.195510596036911,
-0.022976703941822052,
0.7294928431510925,
0.5580108761787415,
-1.0736945867538452,
-0.7704365849494934,
-0.4739856719970703,
-0.2254703789949417... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
hackathon-somos-nlp-2023/suicide-comments-es | hackathon-somos-nlp-2023 | 2023-04-10T09:26:54Z | 14 | 5 | null | [
"task_categories:text-classification",
"size_categories:1K<n<10K",
"language:es",
"license:apache-2.0",
"region:us"
] | 2023-04-10T09:26:54Z | 2023-04-08T16:43:52.000Z | 2023-04-08T16:43:52 | ---
task_categories:
- text-classification
language:
- es
size_categories:
- 1K<n<10K
license: apache-2.0
---
# Dataset Description
* Example model using the dataset: https://huggingface.co/hackathon-somos-nlp-2023/roberta-base-bne-finetuned-suicide-es
* Example space using the dataset: https://huggingface.co/spaces/hackathon-somos-nlp-2023/suicide-comments-es
* Language: Spanish
## Dataset Summary
The dataset consists of comments on Reddit, Twitter, and inputs/outputs of the Alpaca dataset translated into Spanish and classified as suicidal ideation/behavior or non-suicidal.
# Dataset Structure
The dataset has 10050 rows (777 considered as Suicidal Ideation/Behavior and 9273 considered Not Suicidal).
## Dataset fields
* `Text`: User comment.
* `Label`: 1 if suicidal ideation/behavior; 0 if not suicidal comment.
# Dataset Creation
## Suicidal Ideation/Behavior
* 90 rows from Columbia Suicide Severity Rating Scale (C-SSRS)
https://zenodo.org/record/2667859#.ZDGnX-xBxYi
C-SSRS is a gold-standard dataset for detecting suicidal comments on Reddit.
We use `Helsinki-NLP/opus-mt-en-es` to translate the dataset. We also split comments into paragraphs, filter messages shorter than 240 characters, and filter the positive ones by validating them against the [Moderation API of OpenAI](https://platform.openai.com/docs/guides/moderation).
* 519 rows from https://github.com/laxmimerit/twitter-suicidal-intention-dataset/tree/master
The dataset contains the tweet data of suicidal intention and no intention data.
We use `Helsinki-NLP/opus-mt-en-es` to translate the dataset. We filter the positive ones by validating them against the [Moderation API of OpenAI](https://platform.openai.com/docs/guides/moderation).
* 168 rows added manually from public forums and public blogs.
## Non Suicidal
* 5000 rows from instructions of https://huggingface.co/datasets/somosnlp/somos-clean-alpaca-es
* 2000 rows from output of https://huggingface.co/datasets/somosnlp/somos-clean-alpaca-es
* 2000 rows from Columbia Suicide Severity Rating Scale (C-SSRS)
* 100 rows from https://huggingface.co/datasets/ziq/depression_advice. We use `Helsinki-NLP/opus-mt-en-es` to translate the dataset.
* 100 rows added manually from public forums, blogs and podcasts.
# Considerations for Using the Data
## Social Impact of Dataset
The dataset could contain some patterns to detect suicidal ideation/behavior.
## Discussion of Biases
No measures have been taken to estimate the bias and toxicity embedded in the dataset. However, most of the data was collected from Reddit, Twitter, and ChatGPT, so there is probably an age bias because [the Internet is used more by younger people](https://www.statista.com/statistics/272365/age-distribution-of-internet-users-worldwide).
# Additional Information
## Team
* [dariolopez](https://huggingface.co/dariolopez)
* [diegogd](https://huggingface.co/diegogd)
## Licensing
This work is licensed under the [Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0).
| [
-0.38043859601020813,
-0.7661312818527222,
0.5956836342811584,
0.8236222267150879,
-0.23711079359054565,
-0.48791131377220154,
-0.2202375829219818,
-0.37072792649269104,
0.5047534108161926,
0.19849279522895813,
-0.8874461054801941,
-0.8060506582260132,
-0.5657882690429688,
0.42723843455314... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
ehartford/oa_leet10k | ehartford | 2023-04-15T20:08:10Z | 14 | 14 | null | [
"license:apache-2.0",
"region:us"
] | 2023-04-15T20:08:10Z | 2023-04-08T17:06:59.000Z | 2023-04-08T17:06:59 | ---
license: apache-2.0
---
| [
-0.12853363156318665,
-0.1861676573753357,
0.6529127359390259,
0.4943627119064331,
-0.19319328665733337,
0.23607459664344788,
0.3607197105884552,
0.050563324242830276,
0.5793654322624207,
0.7400139570236206,
-0.6508101224899292,
-0.2378395050764084,
-0.7102251648902893,
-0.0478259027004241... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
OllieStanley/oa_dolly_15k | OllieStanley | 2023-05-02T14:27:18Z | 14 | 2 | null | [
"region:us"
] | 2023-05-02T14:27:18Z | 2023-04-12T15:14:10.000Z | 2023-04-12T15:14:10 | ---
dataset_info:
features:
- name: INSTRUCTION
dtype: string
- name: RESPONSE
dtype: string
- name: SOURCE
dtype: string
- name: METADATA
struct:
- name: CATEGORY
dtype: string
- name: CONTEXT
dtype: string
splits:
- name: train
num_bytes: 12686692
num_examples: 15015
download_size: 7872978
dataset_size: 12686692
---
# oa_dolly_15k
Dolly 15k dataset converted to OpenAssistant QA format. | [
0.0384342260658741,
-0.40292564034461975,
0.1462121456861496,
0.41122519969940186,
-0.49922600388526917,
-0.34647250175476074,
0.6217358112335205,
0.04363596811890602,
0.2863684594631195,
1.0831618309020996,
-0.55694979429245,
-0.7444108724594116,
-0.2898732125759125,
-0.0687757134437561,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
vietgpt/databricks_dolly15k_vi | vietgpt | 2023-11-03T21:16:16Z | 14 | 0 | null | [
"region:us"
] | 2023-11-03T21:16:16Z | 2023-04-15T01:40:44.000Z | 2023-04-15T01:40:44 | ---
dataset_info:
features:
- name: messages
list:
- name: content
dtype: string
- name: role
dtype: string
splits:
- name: train
num_bytes: 14450287
num_examples: 15004
download_size: 7217068
dataset_size: 14450287
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
--- | [
-0.12853363156318665,
-0.1861676573753357,
0.6529127359390259,
0.4943627119064331,
-0.19319328665733337,
0.23607459664344788,
0.3607197105884552,
0.050563324242830276,
0.5793654322624207,
0.7400139570236206,
-0.6508101224899292,
-0.2378395050764084,
-0.7102251648902893,
-0.0478259027004241... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
vietgpt/databricks_dolly15k_en | vietgpt | 2023-11-03T21:15:46Z | 14 | 0 | null | [
"language:en",
"region:us"
] | 2023-11-03T21:15:46Z | 2023-04-15T01:58:01.000Z | 2023-04-15T01:58:01 | ---
language: en
dataset_info:
features:
- name: messages
list:
- name: content
dtype: string
- name: role
dtype: string
splits:
- name: train
num_bytes: 12413076
num_examples: 15014
download_size: 7321407
dataset_size: 12413076
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
--- | [
-0.12853363156318665,
-0.1861676573753357,
0.6529127359390259,
0.4943627119064331,
-0.19319328665733337,
0.23607459664344788,
0.3607197105884552,
0.050563324242830276,
0.5793654322624207,
0.7400139570236206,
-0.6508101224899292,
-0.2378395050764084,
-0.7102251648902893,
-0.0478259027004241... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
BioDEX/raw_dataset | BioDEX | 2023-04-18T14:12:11Z | 14 | 1 | null | [
"region:us"
] | 2023-04-18T14:12:11Z | 2023-04-18T13:21:06.000Z | 2023-04-18T13:21:06 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622263669967651,
0.43461522459983826,
-0.52829909324646,
0.7012971639633179,
0.7915719747543335,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104475975036621,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
mattymchen/cr | mattymchen | 2023-04-19T15:18:09Z | 14 | 0 | null | [
"task_categories:text-classification",
"task_ids:sentiment-classification",
"language:en",
"region:us"
] | 2023-04-19T15:18:09Z | 2023-04-19T14:57:36.000Z | 2023-04-19T14:57:36 | ---
language:
- en
task_categories:
- text-classification
task_ids:
- sentiment-classification
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype: int64
splits:
- name: test
num_bytes: 408668
num_examples: 3775
download_size: 244814
dataset_size: 408668
---
# Dataset Card for "cr"
## Dataset Description
Product review dataset from SentEval.
## Data Fields
- `text`: complete sentence expressing an opinion about a product.
- `label`: sentiment of the opinion, either "negative" (0) or "positive" (1).
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.473418265581131,
-0.6141592264175415,
-0.17650973796844482,
0.49864551424980164,
-0.420478880405426,
0.1480957269668579,
-0.14308412373065948,
-0.18058344721794128,
0.5769026279449463,
0.5501534938812256,
-0.9484817981719971,
-1.0282158851623535,
-0.5214774012565613,
-0.0891009569168090... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
zhengyun21/PMC-Patients-ReCDS | zhengyun21 | 2023-11-07T16:21:59Z | 14 | 4 | null | [
"size_categories:100K<n<1M",
"language:en",
"license:cc-by-nc-sa-4.0",
"information retrieval",
"patient similarity",
"clinical decision support",
"arxiv:2202.13876",
"region:us"
] | 2023-11-07T16:21:59Z | 2023-04-23T14:54:45.000Z | 2023-04-23T14:54:45 | ---
license: cc-by-nc-sa-4.0
language:
- en
tags:
- information retrieval
- patient similarity
- clinical decision support
size_categories:
- 100K<n<1M
---
# Dataset Card for PMC-Patients-ReCDS
## Dataset Description
- **Homepage:** https://github.com/pmc-patients/pmc-patients
- **Repository:** https://github.com/pmc-patients/pmc-patients
- **Paper:** https://arxiv.org/pdf/2202.13876.pdf
- **Leaderboard:** https://pmc-patients.github.io/
- **Point of Contact:** zhengyun21@mails.tsinghua.edu.cn
### Dataset Summary
**PMC-Patients** is a first-of-its-kind dataset consisting of 167k patient summaries extracted from case reports in PubMed Central (PMC), 3.1M patient-article relevance and 293k patient-patient similarity annotations defined by PubMed citation graph.
### Supported Tasks and Leaderboards
Based on PMC-Patients, we define two tasks to benchmark Retrieval-based Clinical Decision Support (ReCDS) systems: Patient-to-Article Retrieval (PAR) and Patient-to-Patient Retrieval (PPR).
For details, please refer to [our paper](https://arxiv.org/pdf/2202.13876.pdf) and [leaderboard](https://pmc-patients.github.io/).
### Languages
English (en).
## Dataset Structure
The PMC-Patients ReCDS benchmark is presented as retrieval tasks, and the data format is the same as the [BEIR](https://github.com/beir-cellar/beir) benchmark.
Specifically, there are queries, a corpus, and qrels (annotations).
### Queries
ReCDS-PAR and ReCDS-PPR tasks share the same query patient set and dataset split.
For each split (train, dev, and test), queries are stored in a `jsonl` file that contains a list of dictionaries, each with two fields:
- `_id`: unique query identifier represented by patient_uid.
- `text`: query text represented by patient summary text.
### Corpus
The corpus is shared across splits. For ReCDS-PAR, the corpus contains 11.7M PubMed articles, and for ReCDS-PPR, the corpus contains 155.2k reference patients from PMC-Patients. The corpus is also presented as a `jsonl` file that contains a list of dictionaries with three fields:
- `_id`: unique document identifier represented by PMID of the PubMed article in ReCDS-PAR, and patient_uid of the candidate patient in ReCDS-PPR.
- `title`: title of the article in ReCDS-PAR, and an empty string in ReCDS-PPR.
- `text`: abstract of the article in ReCDS-PAR, and patient summary text in ReCDS-PPR.
**PAR corpus note**
Due to its large size, we could not upload the full PAR corpus to Hugging Face. Instead, we provide PMIDs of the articles included in the PAR corpus, but we recommend downloading the dataset from [Figshare](https://figshare.com/collections/PMC-Patients/6723465), which contains the full PAR corpus file.
### Qrels
Qrels are TREC-style retrieval annotation files in `tsv` format.
A qrels file contains three tab-separated columns, i.e. the query identifier, corpus identifier, and score in this order. The scores (2 or 1) indicate the relevance level in ReCDS-PAR or similarity level in ReCDS-PPR.
Note that the qrels may not be the same as `relevant_articles` and `similar_patients` in `PMC-Patients.json` due to dataset split (see our manuscript for details).
### Data Instances
**A sample of query**
{"_id": "8699387-1", "text": "A 60-year-old female patient with a medical history of hypertension came to our attention because of several neurological deficits that had developed over the last few years, significantly impairing her daily life. Four years earlier, she developed sudden weakness and hypoesthesia of the right hand. The symptoms resolved in a few days and no specific diagnostic tests were performed. Two months later, she developed hypoesthesia and weakness of the right lower limb. On neurological examination at the time, she had spastic gait, ataxia, slight pronation of the right upper limb and bilateral Babinski sign. Brain MRI showed extensive white matter hyperintensities (WMHs), so leukodystrophy was suspected. However, these WMHs were located bilaterally in the corona radiata, basal ganglia, the anterior part of the temporal lobes and the medium cerebellar peduncle (A–D), and were highly suggestive of CADASIL. Genetic testing was performed, showing heterozygous mutation of the NOTCH3 gene (c.994 C<T; exon 6). The diagnosis of CADASIL was confirmed and antiplatelet prevention therapy was started. Since then, her clinical conditions remained stable, and the lesion load was unchanged at follow-up brain MRIs for 4 years until November 2020, when the patient was diagnosed with COVID-19 after a PCR nasal swab. The patient developed only mild respiratory symptoms, not requiring hospitalization or any specific treatment. Fifteen days after the COVID-19 diagnosis, she suddenly developed aphasia, agraphia and worsened right upper limb motor deficit, but she did not seek medical attention. Some days later, she reported these symptoms to her family medical doctor, and a new brain MRI was performed, showing a subacute ischemic area in the left corona radiata (E,F). Therapy with acetylsalicylic acid was switched to clopidogrel as secondary prevention, while her symptoms improved in the next few weeks. 
The patient underwent a carotid doppler ultrasound and an echocardiogram, which did not reveal any pathological changes. The review of the blood pressure log, both in-hospital and the personal one the patient had kept, excluded uncontrolled hypertension."}
**A sample of qrels**
```
query-id corpus-id score
8647806-1 6437752-1 1
8647806-1 6946242-1 1
```
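A TREC-style qrels file like the sample above can be parsed into a nested mapping for evaluation; a minimal sketch (the function name is an assumption, and the header row is skipped based on the sample shown):

```python
import csv

def read_qrels(path):
    """Parse a TREC-style qrels TSV into {query_id: {corpus_id: score}}."""
    qrels = {}
    with open(path, encoding="utf-8") as f:
        for row in csv.reader(f, delimiter="\t"):
            if not row or row[0] == "query-id":  # skip blanks and the header line
                continue
            query_id, corpus_id, score = row[0], row[1], int(row[2])
            qrels.setdefault(query_id, {})[corpus_id] = score
    return qrels
```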
### Data Splits
Refer to our paper.
## Dataset Creation
If you are interested in the collection of PMC-Patients and reproducing our baselines, please refer to [this repository](https://github.com/zhao-zy15/PMC-Patients).
### Citation Information
If you find PMC-Patients helpful in your research, please cite our work by:
```
@misc{zhao2023pmcpatients,
title={PMC-Patients: A Large-scale Dataset of Patient Summaries and Relations for Benchmarking Retrieval-based Clinical Decision Support Systems},
author={Zhengyun Zhao and Qiao Jin and Fangyuan Chen and Tuorui Peng and Sheng Yu},
year={2023},
eprint={2202.13876},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
| [
-0.3196576237678528,
-0.41148245334625244,
0.6232690811157227,
0.18747079372406006,
-0.27742937207221985,
0.03448789194226265,
0.06156044080853462,
-0.24781741201877594,
0.3323657810688019,
0.3797154426574707,
-0.3624376356601715,
-0.7083974480628967,
-0.522295355796814,
0.1468391120433807... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
iamketan25/roleplay-instructions-dataset | iamketan25 | 2023-04-24T22:32:40Z | 14 | 12 | null | [
"region:us"
] | 2023-04-24T22:32:40Z | 2023-04-24T22:32:18.000Z | 2023-04-24T22:32:18 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622263669967651,
0.43461522459983826,
-0.52829909324646,
0.7012971639633179,
0.7915719747543335,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104475975036621,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
jkhedri/psychology-dataset-split | jkhedri | 2023-05-04T11:26:59Z | 14 | 4 | null | [
"region:us"
] | 2023-05-04T11:26:59Z | 2023-05-04T10:24:25.000Z | 2023-05-04T10:24:25 | Entry not found | [
-0.3227647542953491,
-0.22568407654762268,
0.8622258901596069,
0.4346148371696472,
-0.5282984972000122,
0.7012965083122253,
0.7915717959403992,
0.07618629932403564,
0.7746022343635559,
0.2563222348690033,
-0.785281777381897,
-0.22573848068714142,
-0.9104482531547546,
0.5715669393539429,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
paul-ww/ei-abstract-significance | paul-ww | 2023-10-09T13:37:05Z | 14 | 0 | null | [
"region:us"
] | 2023-10-09T13:37:05Z | 2023-05-05T11:04:23.000Z | 2023-05-05T11:04:23 | ---
dataset_info:
features:
- name: pmcid
dtype: int32
- name: pmid
dtype: int32
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': no significant effect
'1': significant effect
splits:
- name: train
num_bytes: 1930106
num_examples: 1028
- name: validation
num_bytes: 229838
num_examples: 118
- name: test
num_bytes: 230635
num_examples: 123
download_size: 0
dataset_size: 2390579
---
# Dataset Card for "ei-abstract-significance"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.5572952032089233,
-0.22263216972351074,
0.43761321902275085,
0.3124321699142456,
-0.15366196632385254,
-0.2758408188819885,
0.4180523455142975,
-0.6355077028274536,
1.2031753063201904,
-0.06486911326646805,
-0.6172229051589966,
-0.7180872559547424,
-0.7771996259689331,
0.028245808556675... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
andrewkatumba/cassava_leaf_diseases_dsa_2023 | andrewkatumba | 2023-05-07T21:48:08Z | 14 | 0 | null | [
"license:cc-by-sa-4.0",
"region:us"
] | 2023-05-07T21:48:08Z | 2023-05-07T21:18:23.000Z | 2023-05-07T21:18:23 | ---
license: cc-by-sa-4.0
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': cbsd
'1': cmd
'2': healthy
splits:
- name: train
num_bytes: 2065460109.0
num_examples: 900
- name: test
num_bytes: 334351258.0
num_examples: 150
download_size: 2392507756
dataset_size: 2399811367.0
---
| [
-0.12853392958641052,
-0.18616779148578644,
0.6529127955436707,
0.49436280131340027,
-0.19319361448287964,
0.23607419431209564,
0.36072003841400146,
0.050563063472509384,
0.579365611076355,
0.7400140762329102,
-0.6508104205131531,
-0.23783954977989197,
-0.7102249264717102,
-0.0478260256350... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
baira/indian_food_images | baira | 2023-05-20T13:20:48Z | 14 | 0 | null | [
"license:openrail",
"region:us"
] | 2023-05-20T13:20:48Z | 2023-05-13T14:04:22.000Z | 2023-05-13T14:04:22 | ---
license: openrail
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': burger
'1': butter_naan
'2': chai
'3': chapati
'4': chole_bhature
'5': dal_makhani
'6': dhokla
'7': fried_rice
'8': idli
'9': jalebi
'10': kaathi_rolls
'11': kadai_paneer
'12': kulfi
'13': masala_dosa
'14': momos
'15': paani_puri
'16': pakode
'17': pav_bhaji
'18': pizza
'19': samosa
splits:
- name: train
num_bytes: 1377006438.2874336
num_examples: 5328
- name: test
num_bytes: 235132199.3925666
num_examples: 941
download_size: 1600810218
dataset_size: 1612138637.6800003
---
| [
-0.12853392958641052,
-0.18616779148578644,
0.6529127955436707,
0.49436280131340027,
-0.19319361448287964,
0.23607419431209564,
0.36072003841400146,
0.050563063472509384,
0.579365611076355,
0.7400140762329102,
-0.6508104205131531,
-0.23783954977989197,
-0.7102249264717102,
-0.0478260256350... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Saiteja/celeb-identities | Saiteja | 2023-05-13T18:53:03Z | 14 | 0 | null | [
"region:us"
] | 2023-05-13T18:53:03Z | 2023-05-13T18:37:16.000Z | 2023-05-13T18:37:16 | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': allu_arjun
'1': chiranjeevi
'2': kamal_haasan
'3': mahesh_babu
'4': prabhas
'5': rajnikanth
splits:
- name: train
num_bytes: 1952307.0
num_examples: 18
download_size: 1943795
dataset_size: 1952307.0
---
# Dataset Card for "celeb-identities"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.4635334610939026,
-0.25037848949432373,
0.003946435172110796,
0.09495986253023148,
-0.0663595125079155,
0.3351336121559143,
0.2677994966506958,
-0.30673032999038696,
0.9174424409866333,
0.39497289061546326,
-0.8485507369041443,
-0.641608476638794,
-0.6570073366165161,
-0.266240954399108... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Ruqoyya/celeb-identities | Ruqoyya | 2023-05-14T08:14:55Z | 14 | 0 | null | [
"region:us"
] | 2023-05-14T08:14:55Z | 2023-05-14T08:14:52.000Z | 2023-05-14T08:14:52 | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': Albert_Einstein
'1': Ashley_Olsen
'2': Chris_Rock
'3': Cristiano_Ronaldo
'4': Didier_Drogba
'5': Idris_Elba
'6': Lionel_Messi
'7': Mary-Kate_Olsen
'8': Paul_Pogba
'9': Tamera_Mowry
'10': Tia_Mowry
splits:
- name: train
num_bytes: 1992683.0
num_examples: 34
download_size: 1995278
dataset_size: 1992683.0
---
# Dataset Card for "celeb-identities"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.4635334610939026,
-0.25037848949432373,
0.003946435172110796,
0.09495986253023148,
-0.0663595125079155,
0.3351336121559143,
0.2677994966506958,
-0.30673032999038696,
0.9174424409866333,
0.39497289061546326,
-0.8485507369041443,
-0.641608476638794,
-0.6570073366165161,
-0.266240954399108... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
deepghs/nsfw_detect | deepghs | 2023-05-15T12:08:47Z | 14 | 5 | null | [
"size_categories:10K<n<100K",
"license:mit",
"art",
"region:us"
] | 2023-05-15T12:08:47Z | 2023-05-15T11:57:46.000Z | 2023-05-15T11:57:46 | ---
license: mit
tags:
- art
size_categories:
- 10K<n<100K
---
The dataset used for training the NSFW Detect classification model is divided into five categories: `drawing`, `hentai`, `neutral`, `porn`, and `sexy`, following the format mentioned in [GantMan/nsfw_model](https://github.com/GantMan/nsfw_model) and [yangbisheng2009/nsfw-resnet](https://github.com/yangbisheng2009/nsfw-resnet). | [
-0.6600344181060791,
-0.4077166020870209,
0.14049114286899567,
0.003656642744317651,
-0.40561944246292114,
-0.2259579598903656,
0.4955810606479645,
-0.37304461002349854,
-0.05725657567381859,
0.6936184167861938,
-0.7053744196891785,
-0.766187846660614,
-0.5468124747276306,
0.56054127216339... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Pranavkpba2000/skin_cancer_small_dataset | Pranavkpba2000 | 2023-05-16T11:12:18Z | 14 | 0 | null | [
"region:us"
] | 2023-05-16T11:12:18Z | 2023-05-16T11:12:00.000Z | 2023-05-16T11:12:00 | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': AK
'1': BCC
'2': BKL
'3': DF
'4': MEL
'5': NV
'6': SCC
'7': VASC
splits:
- name: train
num_bytes: 66578294.72
num_examples: 11360
- name: test
num_bytes: 17394813.72
num_examples: 2840
download_size: 83755065
dataset_size: 83973108.44
---
# Dataset Card for "skin_cancer_small_dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.21798191964626312,
-0.26655349135398865,
0.3284514546394348,
-0.04627654701471329,
-0.29701876640319824,
-0.07127431780099869,
0.27809658646583557,
-0.1886894404888153,
0.9559279680252075,
0.6593202352523804,
-0.7171578407287598,
-0.9664702415466309,
-0.577163577079773,
-0.3860535621643... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Multimodal-Fatima/Imagenette_train | Multimodal-Fatima | 2023-05-21T21:23:49Z | 14 | 0 | null | [
"region:us"
] | 2023-05-21T21:23:49Z | 2023-05-21T21:17:42.000Z | 2023-05-21T21:17:42 | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': tench
'1': English springer
'2': cassette player
'3': chain saw
'4': church
'5': French horn
'6': garbage truck
'7': gas pump
'8': golf ball
'9': parachute
- name: id
dtype: int64
splits:
- name: train
num_bytes: 1104913038.331
num_examples: 9469
download_size: 0
dataset_size: 1104913038.331
---
# Dataset Card for "Imagenette_train"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.6596733927726746,
0.01757710985839367,
0.160661980509758,
0.24996131658554077,
-0.17399901151657104,
-0.2720490097999573,
0.26825499534606934,
-0.18115821480751038,
0.8859354853630066,
0.38422369956970215,
-0.7364386916160583,
-0.6357364058494568,
-0.7845317721366882,
-0.502355575561523... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Multimodal-Fatima/TinyImagenet_train | Multimodal-Fatima | 2023-05-22T01:44:39Z | 14 | 0 | null | [
"region:us"
] | 2023-05-22T01:44:39Z | 2023-05-21T21:24:23.000Z | 2023-05-21T21:24:23 | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': goldfish
'1': fire salamander
'2': American bullfrog
'3': tailed frog
'4': American alligator
'5': boa constrictor
'6': trilobite
'7': scorpion
'8': southern black widow
'9': tarantula
'10': centipede
'11': koala
'12': jellyfish
'13': brain coral
'14': snail
'15': sea slug
'16': American lobster
'17': spiny lobster
'18': black stork
'19': king penguin
'20': albatross
'21': dugong
'22': Yorkshire Terrier
'23': Golden Retriever
'24': Labrador Retriever
'25': German Shepherd Dog
'26': Standard Poodle
'27': tabby cat
'28': Persian cat
'29': Egyptian Mau
'30': cougar
'31': lion
'32': brown bear
'33': ladybug
'34': grasshopper
'35': stick insect
'36': cockroach
'37': praying mantis
'38': dragonfly
'39': monarch butterfly
'40': sulphur butterfly
'41': sea cucumber
'42': guinea pig
'43': pig
'44': ox
'45': bison
'46': bighorn sheep
'47': gazelle
'48': arabian camel
'49': orangutan
'50': chimpanzee
'51': baboon
'52': African bush elephant
'53': red panda
'54': abacus
'55': academic gown
'56': altar
'57': backpack
'58': baluster / handrail
'59': barbershop
'60': barn
'61': barrel
'62': basketball
'63': bathtub
'64': station wagon
'65': lighthouse
'66': beaker
'67': beer bottle
'68': bikini
'69': binoculars
'70': birdhouse
'71': bow tie
'72': brass memorial plaque
'73': bucket
'74': high-speed train
'75': butcher shop
'76': candle
'77': cannon
'78': cardigan
'79': automated teller machine
'80': CD player
'81': storage chest
'82': Christmas stocking
'83': cliff dwelling
'84': computer keyboard
'85': candy store
'86': convertible
'87': crane bird
'88': dam
'89': desk
'90': dining table
'91': dumbbell
'92': flagpole
'93': fly
'94': fountain
'95': freight car
'96': frying pan
'97': fur coat
'98': gas mask or respirator
'99': go-kart
'100': gondola
'101': hourglass
'102': iPod
'103': rickshaw
'104': kimono
'105': lampshade
'106': lawn mower
'107': lifeboat
'108': limousine
'109': magnetic compass
'110': maypole
'111': military uniform
'112': miniskirt
'113': moving van
'114': neck brace
'115': obelisk
'116': oboe
'117': pipe organ
'118': parking meter
'119': payphone
'120': picket fence
'121': pill bottle
'122': plunger
'123': police van
'124': poncho
'125': soda bottle
'126': potter's wheel
'127': missile
'128': punching bag
'129': refrigerator
'130': remote control
'131': rocking chair
'132': rugby ball
'133': sandal
'134': school bus
'135': scoreboard
'136': sewing machine
'137': snorkel
'138': sock
'139': sombrero
'140': space heater
'141': spider web
'142': sports car
'143': through arch bridge
'144': stopwatch
'145': sunglasses
'146': suspension bridge
'147': swim trunks / shorts
'148': syringe
'149': teapot
'150': teddy bear
'151': thatched roof
'152': torch
'153': tractor
'154': triumphal arch
'155': trolleybus
'156': turnstile
'157': umbrella
'158': vestment
'159': viaduct
'160': volleyball
'161': water jug
'162': water tower
'163': wok
'164': wooden spoon
'165': comic book
'166': fishing casting reel
'167': guacamole
'168': ice cream
'169': popsicle
'170': goose
'171': drumstick
'172': plate
'173': pretzel
'174': mashed potatoes
'175': cauliflower
'176': bell pepper
'177': lemon
'178': banana
'179': pomegranate
'180': meatloaf
'181': pizza
'182': pot pie
'183': espresso
'184': bee
'185': apron
'186': pole
'187': Chihuahua
'188': mountain
'189': cliff
'190': coral reef
'191': lakeshore
'192': beach
'193': acorn
'194': broom
'195': mushroom
'196': metal nail
'197': chain
'198': slug
'199': orange
- name: id
dtype: int64
splits:
- name: train
num_bytes: 196454984.0
num_examples: 100000
download_size: 147804439
dataset_size: 196454984.0
---
# Dataset Card for "TinyImagenet_train"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.6402302980422974,
0.040814198553562164,
0.2859818637371063,
0.16169865429401398,
-0.3174223303794861,
-0.19661609828472137,
0.14949513971805573,
-0.02214314602315426,
0.8414563536643982,
0.16714692115783691,
-0.9340176582336426,
-0.3801514804363251,
-0.5786285400390625,
-0.3987608551979... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
mcimpoi/dtd_split_1 | mcimpoi | 2023-05-22T12:42:00Z | 14 | 0 | null | [
"task_categories:image-classification",
"size_categories:1K<n<10K",
"language:en",
"license:cc-by-4.0",
"texture",
"computer-vision",
"region:us"
] | 2023-05-22T12:42:00Z | 2023-05-22T10:17:50.000Z | 2023-05-22T10:17:50 | ---
license: cc-by-4.0
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': banded
'1': blotchy
'2': braided
'3': bubbly
'4': bumpy
'5': chequered
'6': cobwebbed
'7': cracked
'8': crosshatched
'9': crystalline
'10': dotted
'11': fibrous
'12': flecked
'13': freckled
'14': frilly
'15': gauzy
'16': grid
'17': grooved
'18': honeycombed
'19': interlaced
'20': knitted
'21': lacelike
'22': lined
'23': marbled
'24': matted
'25': meshed
'26': paisley
'27': perforated
'28': pitted
'29': pleated
'30': polka-dotted
'31': porous
'32': potholed
'33': scaly
'34': smeared
'35': spiralled
'36': sprinkled
'37': stained
'38': stratified
'39': striped
'40': studded
'41': swirly
'42': veined
'43': waffled
'44': woven
'45': wrinkled
'46': zigzagged
splits:
- name: train
num_bytes: 226313270.04
num_examples: 1880
- name: test
num_bytes: 172035822
num_examples: 1880
- name: validation
num_bytes: 222278767.48
num_examples: 1880
download_size: 629315160
dataset_size: 620627859.52
task_categories:
- image-classification
language:
- en
tags:
- texture
- computer-vision
pretty_name: Describable Textures Dataset
size_categories:
- 1K<n<10K
---
# Dataset Card for Describable Textures Dataset (DTD)
## Dataset Description
- Homepage: https://www.robots.ox.ac.uk/~vgg/data/dtd/
- Repository: https://github.com/mcimpoi/deep-fbanks
- Paper: https://openaccess.thecvf.com/content_cvpr_2014/html/Cimpoi_Describing_Textures_in_2014_CVPR_paper.html
- Leaderboard: https://paperswithcode.com/sota/image-classification-on-dtd
### Dataset Summary
A texture classification dataset consisting of 47 categories with 120 images per class.
### Data Splits
Equally split into train, val, and test. The original paper proposed 10 splits; recent works (e.g. BYOL, arXiv:2006.07733) use only the first split.
### Licensing Information
Not defined at https://www.robots.ox.ac.uk/~vgg/data/dtd/
### Citation Information
@InProceedings{cimpoi14describing,
  Author = {M. Cimpoi and S. Maji and I. Kokkinos and S. Mohamed and A. Vedaldi},
Title = {Describing Textures in the Wild},
Booktitle = {Proceedings of the {IEEE} Conf. on Computer Vision and Pattern Recognition ({CVPR})},
Year = {2014}}
| [
-0.5359573364257812,
-0.6326864957809448,
0.21034996211528778,
0.7498244643211365,
-0.7582719922065735,
0.17394572496414185,
-0.11490298807621002,
-0.4299744963645935,
0.18329477310180664,
0.4822629988193512,
-0.3432377576828003,
-0.8726906180381775,
-0.5596075057983398,
-0.091147124767303... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
jyshbgde/cinescopeDataset | jyshbgde | 2023-06-24T06:39:57Z | 14 | 0 | null | [
"task_categories:feature-extraction",
"language:en",
"license:openrail",
"region:us"
] | 2023-06-24T06:39:57Z | 2023-05-22T14:11:53.000Z | 2023-05-22T14:11:53 | ---
license: openrail
task_categories:
- feature-extraction
language:
- en
pretty_name: cinescope
---
| [
-0.12853392958641052,
-0.18616779148578644,
0.6529127955436707,
0.49436280131340027,
-0.19319361448287964,
0.23607419431209564,
0.36072003841400146,
0.050563063472509384,
0.579365611076355,
0.7400140762329102,
-0.6508104205131531,
-0.23783954977989197,
-0.7102249264717102,
-0.0478260256350... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
tasksource/ruletaker | tasksource | 2023-07-28T20:30:37Z | 14 | 1 | null | [
"language:en",
"license:apache-2.0",
"region:us"
] | 2023-07-28T20:30:37Z | 2023-05-23T09:33:10.000Z | 2023-05-23T09:33:10 | ---
dataset_info:
features:
- name: context
dtype: string
- name: question
dtype: string
- name: label
dtype: string
- name: config
dtype: string
splits:
- name: train
num_bytes: 252209259
num_examples: 480152
- name: dev
num_bytes: 39591713
num_examples: 75872
- name: test
num_bytes: 80649163
num_examples: 151911
download_size: 34172740
dataset_size: 372450135
license: apache-2.0
language:
- en
---
# Dataset Card for "ruletaker"
https://github.com/allenai/ruletaker
```
@inproceedings{ruletaker2020,
title = {Transformers as Soft Reasoners over Language},
author = {Clark, Peter and Tafjord, Oyvind and Richardson, Kyle},
booktitle = {Proceedings of the Twenty-Ninth International Joint Conference on
Artificial Intelligence, {IJCAI-20}},
publisher = {International Joint Conferences on Artificial Intelligence Organization},
editor = {Christian Bessiere},
pages = {3882--3890},
year = {2020},
month = {7},
note = {Main track},
doi = {10.24963/ijcai.2020/537},
url = {https://doi.org/10.24963/ijcai.2020/537},
}
``` | [
-0.37211039662361145,
-0.5059909820556641,
0.3115074634552002,
0.1492321640253067,
-0.23520348966121674,
-0.21837769448757172,
-0.48829329013824463,
-0.08927256613969803,
0.0031104942318052053,
0.6446226835250854,
-0.7982654571533203,
-0.5681377649307251,
-0.5488452315330505,
0.21258212625... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
alexjercan/bugnet | alexjercan | 2023-07-26T05:35:52Z | 14 | 3 | null | [
"region:us"
] | 2023-07-26T05:35:52Z | 2023-05-24T14:11:29.000Z | 2023-05-24T14:11:29 | ---
dataset_info:
- config_name: Python
features:
- name: problem_id
dtype: string
- name: language
dtype: string
- name: original_status
dtype: string
- name: fail
dtype: string
- name: pass
dtype: string
- name: change
dtype: string
- name: i1
dtype: uint32
- name: i2
dtype: uint32
- name: j1
dtype: uint32
- name: j2
dtype: uint32
- name: error
dtype: string
- name: stderr
dtype: string
- name: stdout
dtype: string
- name: description
dtype: string
- name: input
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 8237153
num_examples: 2557
- name: validation
num_bytes: 3497872
num_examples: 1105
- name: test
num_bytes: 205241
num_examples: 100
download_size: 19290233
dataset_size: 11940266
- config_name: C++
features:
- name: problem_id
dtype: string
- name: language
dtype: string
- name: original_status
dtype: string
- name: fail
dtype: string
- name: pass
dtype: string
- name: change
dtype: string
- name: i1
dtype: uint32
- name: i2
dtype: uint32
- name: j1
dtype: uint32
- name: j2
dtype: uint32
- name: error
dtype: string
- name: stderr
dtype: string
- name: stdout
dtype: string
- name: description
dtype: string
- name: input
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 482930200
num_examples: 68621
- name: validation
num_bytes: 1129323
num_examples: 125
- name: test
num_bytes: 40048505
num_examples: 4769
download_size: 378900920
dataset_size: 524108028
---
# About the Dataset
The source code used to generate the dataset can be found on
[GitHub](https://github.com/alexjercan/bug-detection/tree/master/bugnet)
The dataset is based on the [CodeNet project](https://github.com/IBM/Project_CodeNet)
and contains Python and C++ code submissions for online coding competitions. The data
is obtained by selecting consecutive attempts of a single user that resulted in fixing a
buggy submission. Thus the data is represented by code pairs and annotated by the diff
and error of each changed instruction. We have already tokenized all the source code
files and kept the same format as in the original dataset.
The upgrade compared to CodeNetPy is that we only keep single-line errors.
This makes the task of bug detection and repair easier to manage.
We also removed all files that fail on linters, so that we focus only
on bugs that cannot be identified easily.

The resulting dataset file will be a CSV with the following columns:
- `problem_id`: The id of the problem, matches with the id from Project_CodeNet
- `language`: The programming language of the submission (`Python` or `C++`)
- `original_status`: The status of the initial submission (`TLE`, `MLE`, anything that is not `Accepted`)
- `fail`: The initial (buggy) source code, formatted (`black` or `clang-format`)
- `pass`: The modified (accepted) source code, formatted (`black` or `clang-format`)
- `change`: The change that was made (`replace`, `insert`, `delete`)
- `i1`: Start of the change in the buggy source (the line; starting with 1)
- `i2`: End of the change in the buggy source (not inclusive; for `insert` we have `i1 == i2`)
- `j1`: Start of the change in the accepted source (the line; starting with 1)
- `j2`: End of the change in the accepted source (not inclusive; for `delete` we have `j1 == j2`)
- `error`: The error that was obtained running the buggy source code on the input/output examples
- `stderr`: The full output of stderr of running the buggy source code on the input/output examples
- `stdout`: The full output of stdout of running the buggy source code on the input/output examples
- `description`: The problem statement in html format
- `input`: The input for the test case
- `output`: The output for the test case
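As an illustration of how the `change` / `i1` / `i2` / `j1` / `j2` annotations line up with a buggy/fixed code pair, here is a minimal sketch using Python's standard `difflib`. The code pair is hypothetical (not a real dataset row), and the conversion from difflib's 0-based opcodes to the dataset's 1-based, end-exclusive line numbers is an assumption for illustration, not the dataset's own tooling:

```python
import difflib

# Hypothetical buggy/fixed pair in the style of a bugnet row (not real dataset rows).
fail = ["x = input()", "print(x + 1)"]
fixed = ["x = int(input())", "print(x + 1)"]

# difflib opcodes are (tag, i1, i2, j1, j2) with 0-based, end-exclusive indices;
# tags are 'replace', 'insert', 'delete', or 'equal'.
matcher = difflib.SequenceMatcher(a=fail, b=fixed)
changes = [op for op in matcher.get_opcodes() if op[0] != "equal"]

# Convert to 1-based line numbers as used by the i1/i2/j1/j2 columns
# (assuming they follow the same end-exclusive convention).
rows = [(tag, i1 + 1, i2 + 1, j1 + 1, j2 + 1) for tag, i1, i2, j1, j2 in changes]
print(rows)  # [('replace', 1, 2, 1, 2)] — line 1 was replaced in both versions
```

Note how an `insert` would yield `i1 == i2` and a `delete` would yield `j1 == j2`, matching the column descriptions above.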
| [
-0.3229331076145172,
-0.35847920179367065,
0.08397600054740906,
0.14069241285324097,
-0.02529205195605755,
0.032501157373189926,
-0.2311055213212967,
-0.2608894109725952,
0.5029507875442505,
0.4691474437713623,
-0.5968070030212402,
-0.6961391568183899,
-0.41255372762680054,
0.2500128149986... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Pranavkpba2000/skin_cancer_complete_dataset_resized_123 | Pranavkpba2000 | 2023-05-25T04:09:47Z | 14 | 0 | null | [
"region:us"
] | 2023-05-25T04:09:47Z | 2023-05-25T04:09:23.000Z | 2023-05-25T04:09:23 | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': AK
'1': BCC
'2': BKL
'3': DF
'4': MEL
'5': NV
'6': SCC
'7': VASC
splits:
- name: train
num_bytes: 170043159.063
num_examples: 28449
- name: test
num_bytes: 46642856.68
num_examples: 7112
download_size: 204564103
dataset_size: 216686015.743
---
# Dataset Card for "skin_cancer_complete_dataset_resized_123"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.1098494678735733,
-0.20697620511054993,
0.30389007925987244,
0.09970997273921967,
-0.42677322030067444,
0.14166070520877838,
0.20951998233795166,
-0.052387721836566925,
1.0221126079559326,
0.7429458498954773,
-0.7810341715812683,
-1.058660626411438,
-0.6049727201461792,
-0.1858404278755... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
jlh/home-credit-example-raw | jlh | 2023-05-26T02:29:12Z | 14 | 0 | null | [
"region:us"
] | 2023-05-26T02:29:12Z | 2023-05-26T02:29:10.000Z | 2023-05-26T02:29:10 | ---
dataset_info:
features:
- name: SK_ID_CURR
dtype: int64
- name: TARGET
dtype: int64
- name: NAME_CONTRACT_TYPE
dtype: string
- name: CODE_GENDER
dtype: string
- name: FLAG_OWN_CAR
dtype: string
- name: FLAG_OWN_REALTY
dtype: string
- name: CNT_CHILDREN
dtype: int64
- name: AMT_INCOME_TOTAL
dtype: float64
- name: AMT_CREDIT
dtype: float64
- name: AMT_ANNUITY
dtype: float64
- name: AMT_GOODS_PRICE
dtype: float64
- name: NAME_TYPE_SUITE
dtype: string
- name: NAME_INCOME_TYPE
dtype: string
- name: NAME_EDUCATION_TYPE
dtype: string
- name: NAME_FAMILY_STATUS
dtype: string
- name: NAME_HOUSING_TYPE
dtype: string
- name: REGION_POPULATION_RELATIVE
dtype: float64
- name: DAYS_BIRTH
dtype: int64
- name: DAYS_EMPLOYED
dtype: int64
- name: DAYS_REGISTRATION
dtype: float64
- name: DAYS_ID_PUBLISH
dtype: int64
- name: OWN_CAR_AGE
dtype: float64
- name: FLAG_MOBIL
dtype: int64
- name: FLAG_EMP_PHONE
dtype: int64
- name: FLAG_WORK_PHONE
dtype: int64
- name: FLAG_CONT_MOBILE
dtype: int64
- name: FLAG_PHONE
dtype: int64
- name: FLAG_EMAIL
dtype: int64
- name: OCCUPATION_TYPE
dtype: string
- name: CNT_FAM_MEMBERS
dtype: float64
- name: REGION_RATING_CLIENT
dtype: int64
- name: REGION_RATING_CLIENT_W_CITY
dtype: int64
- name: WEEKDAY_APPR_PROCESS_START
dtype: string
- name: HOUR_APPR_PROCESS_START
dtype: int64
- name: REG_REGION_NOT_LIVE_REGION
dtype: int64
- name: REG_REGION_NOT_WORK_REGION
dtype: int64
- name: LIVE_REGION_NOT_WORK_REGION
dtype: int64
- name: REG_CITY_NOT_LIVE_CITY
dtype: int64
- name: REG_CITY_NOT_WORK_CITY
dtype: int64
- name: LIVE_CITY_NOT_WORK_CITY
dtype: int64
- name: ORGANIZATION_TYPE
dtype: string
- name: EXT_SOURCE_1
dtype: float64
- name: EXT_SOURCE_2
dtype: float64
- name: EXT_SOURCE_3
dtype: float64
- name: APARTMENTS_AVG
dtype: float64
- name: BASEMENTAREA_AVG
dtype: float64
- name: YEARS_BEGINEXPLUATATION_AVG
dtype: float64
- name: YEARS_BUILD_AVG
dtype: float64
- name: COMMONAREA_AVG
dtype: float64
- name: ELEVATORS_AVG
dtype: float64
- name: ENTRANCES_AVG
dtype: float64
- name: FLOORSMAX_AVG
dtype: float64
- name: FLOORSMIN_AVG
dtype: float64
- name: LANDAREA_AVG
dtype: float64
- name: LIVINGAPARTMENTS_AVG
dtype: float64
- name: LIVINGAREA_AVG
dtype: float64
- name: NONLIVINGAPARTMENTS_AVG
dtype: float64
- name: NONLIVINGAREA_AVG
dtype: float64
- name: APARTMENTS_MODE
dtype: float64
- name: BASEMENTAREA_MODE
dtype: float64
- name: YEARS_BEGINEXPLUATATION_MODE
dtype: float64
- name: YEARS_BUILD_MODE
dtype: float64
- name: COMMONAREA_MODE
dtype: float64
- name: ELEVATORS_MODE
dtype: float64
- name: ENTRANCES_MODE
dtype: float64
- name: FLOORSMAX_MODE
dtype: float64
- name: FLOORSMIN_MODE
dtype: float64
- name: LANDAREA_MODE
dtype: float64
- name: LIVINGAPARTMENTS_MODE
dtype: float64
- name: LIVINGAREA_MODE
dtype: float64
- name: NONLIVINGAPARTMENTS_MODE
dtype: float64
- name: NONLIVINGAREA_MODE
dtype: float64
- name: APARTMENTS_MEDI
dtype: float64
- name: BASEMENTAREA_MEDI
dtype: float64
- name: YEARS_BEGINEXPLUATATION_MEDI
dtype: float64
- name: YEARS_BUILD_MEDI
dtype: float64
- name: COMMONAREA_MEDI
dtype: float64
- name: ELEVATORS_MEDI
dtype: float64
- name: ENTRANCES_MEDI
dtype: float64
- name: FLOORSMAX_MEDI
dtype: float64
- name: FLOORSMIN_MEDI
dtype: float64
- name: LANDAREA_MEDI
dtype: float64
- name: LIVINGAPARTMENTS_MEDI
dtype: float64
- name: LIVINGAREA_MEDI
dtype: float64
- name: NONLIVINGAPARTMENTS_MEDI
dtype: float64
- name: NONLIVINGAREA_MEDI
dtype: float64
- name: FONDKAPREMONT_MODE
dtype: string
- name: HOUSETYPE_MODE
dtype: string
- name: TOTALAREA_MODE
dtype: float64
- name: WALLSMATERIAL_MODE
dtype: string
- name: EMERGENCYSTATE_MODE
dtype: string
- name: OBS_30_CNT_SOCIAL_CIRCLE
dtype: float64
- name: DEF_30_CNT_SOCIAL_CIRCLE
dtype: float64
- name: OBS_60_CNT_SOCIAL_CIRCLE
dtype: float64
- name: DEF_60_CNT_SOCIAL_CIRCLE
dtype: float64
- name: DAYS_LAST_PHONE_CHANGE
dtype: float64
- name: FLAG_DOCUMENT_2
dtype: int64
- name: FLAG_DOCUMENT_3
dtype: int64
- name: FLAG_DOCUMENT_4
dtype: int64
- name: FLAG_DOCUMENT_5
dtype: int64
- name: FLAG_DOCUMENT_6
dtype: int64
- name: FLAG_DOCUMENT_7
dtype: int64
- name: FLAG_DOCUMENT_8
dtype: int64
- name: FLAG_DOCUMENT_9
dtype: int64
- name: FLAG_DOCUMENT_10
dtype: int64
- name: FLAG_DOCUMENT_11
dtype: int64
- name: FLAG_DOCUMENT_12
dtype: int64
- name: FLAG_DOCUMENT_13
dtype: int64
- name: FLAG_DOCUMENT_14
dtype: int64
- name: FLAG_DOCUMENT_15
dtype: int64
- name: FLAG_DOCUMENT_16
dtype: int64
- name: FLAG_DOCUMENT_17
dtype: int64
- name: FLAG_DOCUMENT_18
dtype: int64
- name: FLAG_DOCUMENT_19
dtype: int64
- name: FLAG_DOCUMENT_20
dtype: int64
- name: FLAG_DOCUMENT_21
dtype: int64
- name: AMT_REQ_CREDIT_BUREAU_HOUR
dtype: float64
- name: AMT_REQ_CREDIT_BUREAU_DAY
dtype: float64
- name: AMT_REQ_CREDIT_BUREAU_WEEK
dtype: float64
- name: AMT_REQ_CREDIT_BUREAU_MON
dtype: float64
- name: AMT_REQ_CREDIT_BUREAU_QRT
dtype: float64
- name: AMT_REQ_CREDIT_BUREAU_YEAR
dtype: float64
splits:
- name: raw
num_bytes: 10681044
num_examples: 10000
download_size: 1985577
dataset_size: 10681044
---
# Dataset Card for "home-credit-example-raw"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.3094692528247833,
-0.4843045473098755,
0.13144183158874512,
0.1358606070280075,
-0.18400722742080688,
0.16780945658683777,
0.11804326623678207,
0.055554457008838654,
0.43646547198295593,
0.4631512761116028,
-0.7111963033676147,
-0.9316917657852173,
-0.20741575956344604,
-0.2430832087993... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Vikas-nnresearch/Knob-classification | Vikas-nnresearch | 2023-05-26T05:55:28Z | 14 | 0 | null | [
"license:apache-2.0",
"region:us"
] | 2023-05-26T05:55:28Z | 2023-05-26T05:54:22.000Z | 2023-05-26T05:54:22 | ---
license: apache-2.0
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': Knob
'1': No knob
splits:
- name: train
num_bytes: 24695896.0
num_examples: 149
download_size: 24698150
dataset_size: 24695896.0
---
| [
-0.12853392958641052,
-0.18616779148578644,
0.6529127955436707,
0.49436280131340027,
-0.19319361448287964,
0.23607419431209564,
0.36072003841400146,
0.050563063472509384,
0.579365611076355,
0.7400140762329102,
-0.6508104205131531,
-0.23783954977989197,
-0.7102249264717102,
-0.0478260256350... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Multimodal-Fatima/CIFAR100_train | Multimodal-Fatima | 2023-05-30T15:43:31Z | 14 | 0 | null | [
"region:us"
] | 2023-05-30T15:43:31Z | 2023-05-29T18:30:41.000Z | 2023-05-29T18:30:41 | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': apple
'1': aquarium_fish
'2': baby
'3': bear
'4': beaver
'5': bed
'6': bee
'7': beetle
'8': bicycle
'9': bottle
'10': bowl
'11': boy
'12': bridge
'13': bus
'14': butterfly
'15': camel
'16': can
'17': castle
'18': caterpillar
'19': cattle
'20': chair
'21': chimpanzee
'22': clock
'23': cloud
'24': cockroach
'25': couch
          '26': crab
'27': crocodile
'28': cup
'29': dinosaur
'30': dolphin
'31': elephant
'32': flatfish
'33': forest
'34': fox
'35': girl
'36': hamster
'37': house
'38': kangaroo
'39': keyboard
'40': lamp
'41': lawn_mower
'42': leopard
'43': lion
'44': lizard
'45': lobster
'46': man
'47': maple_tree
'48': motorcycle
'49': mountain
'50': mouse
'51': mushroom
'52': oak_tree
'53': orange
'54': orchid
'55': otter
'56': palm_tree
'57': pear
'58': pickup_truck
'59': pine_tree
'60': plain
'61': plate
'62': poppy
'63': porcupine
'64': possum
'65': rabbit
'66': raccoon
'67': ray
'68': road
'69': rocket
'70': rose
'71': sea
'72': seal
'73': shark
'74': shrew
'75': skunk
'76': skyscraper
'77': snail
'78': snake
'79': spider
'80': squirrel
'81': streetcar
'82': sunflower
'83': sweet_pepper
'84': table
'85': tank
'86': telephone
'87': television
'88': tiger
'89': tractor
'90': train
'91': trout
'92': tulip
'93': turtle
'94': wardrobe
'95': whale
'96': willow_tree
'97': wolf
'98': woman
'99': worm
- name: id
dtype: int64
- name: clip_tags_LAION_ViT_H_14_2B_simple_specific
dtype: string
- name: clip_tags_LAION_ViT_H_14_2B_ensemble_specific
dtype: string
splits:
- name: train
num_bytes: 113602267.0
num_examples: 50000
download_size: 112951195
dataset_size: 113602267.0
---
# Dataset Card for "CIFAR100_train"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.7399245500564575,
0.007627011276781559,
0.13077926635742188,
0.3580496311187744,
-0.010700586251914501,
-0.02182561159133911,
0.2213069498538971,
-0.07319440692663193,
0.768987238407135,
0.28988927602767944,
-0.8095945715904236,
-0.48962607979774475,
-0.5204684138298035,
-0.388284116983... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Multimodal-Fatima/SST2_train | Multimodal-Fatima | 2023-05-30T03:18:08Z | 14 | 0 | null | [
"region:us"
] | 2023-05-30T03:18:08Z | 2023-05-30T03:17:50.000Z | 2023-05-30T03:17:50 | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': negative
'1': positive
- name: id
dtype: int64
splits:
- name: train
num_bytes: 117277546.0
num_examples: 6920
download_size: 114148970
dataset_size: 117277546.0
---
# Dataset Card for "SST2_train"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.2309194654226303,
-0.03479299321770668,
0.2386152446269989,
0.24812953174114227,
-0.4105331003665924,
0.08801988512277603,
0.24231630563735962,
-0.005027890671044588,
0.6219525337219238,
0.2793409526348114,
-0.8066979646682739,
-0.35475942492485046,
-0.6212666630744934,
-0.4965279102325... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
andersonbcdefg/dolly_reward_modeling_pairwise | andersonbcdefg | 2023-05-31T05:40:03Z | 14 | 0 | null | [
"region:us"
] | 2023-05-31T05:40:03Z | 2023-05-31T05:39:50.000Z | 2023-05-31T05:39:50 | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: response_a
dtype: string
- name: response_b
dtype: string
- name: explanation
dtype: string
- name: preferred
dtype: string
splits:
- name: train
num_bytes: 16503157
num_examples: 19343
download_size: 9011974
dataset_size: 16503157
---
# Dataset Card for "dolly_reward_modeling_pairwise"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.2233758270740509,
-0.2719705402851105,
-0.03182930871844292,
0.28480464220046997,
-0.12415023148059845,
-0.1084923967719078,
0.5132853388786316,
0.033172618597745895,
0.8793523907661438,
0.6153242588043213,
-0.6637430191040039,
-0.5900284051895142,
-0.6843528151512146,
-0.31306147575378... | null | null | null | null | null | null | null | null | null | null | null | null | null |