author stringlengths 2 29 ⌀ | cardData null | citation stringlengths 0 9.58k ⌀ | description stringlengths 0 5.93k ⌀ | disabled bool 1 class | downloads float64 1 1M ⌀ | gated bool 2 classes | id stringlengths 2 108 | lastModified stringlengths 24 24 | paperswithcode_id stringlengths 2 45 ⌀ | private bool 2 classes | sha stringlengths 40 40 | siblings list | tags list | readme_url stringlengths 57 163 | readme stringlengths 0 977k |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
niurl | null | null | null | false | 42 | false | niurl/eraser_esnli | 2022-10-24T15:26:38.000Z | null | false | b2373d69c590ab02b4164d9a912b4eacb1f80bf5 | [] | [
"arxiv:1911.03429",
"license:apache-2.0"
] | https://huggingface.co/datasets/niurl/eraser_esnli/resolve/main/README.md | ---
license: apache-2.0
---
## Eraser Dataset Description
- **Homepage:** http://www.eraserbenchmark.com
- **Repository:** https://github.com/jayded/eraserbenchmark
- **Paper:** https://arxiv.org/abs/1911.03429
- **Leaderboard:** http://www.eraserbenchmark.com/#leaderboard
## e-SNLI Dataset Description
- **Repository:** https://github.com/OanaMariaCamburu/e-SNLI
- **Paper:** http://papers.nips.cc/paper/8163-e-snli-natural-language-inference-with-natural-language-explanations.pdf
|
Biborg | null | null | null | false | null | false | Biborg/renaud | 2022-10-20T12:01:22.000Z | null | false | 874846f577f88bfaa9303a9d463fadd9899213f1 | [] | [
"license:other"
] | https://huggingface.co/datasets/Biborg/renaud/resolve/main/README.md | ---
license: other
---
|
KGraph | null | null | null | false | 24 | false | KGraph/FB15k-237 | 2022-10-21T09:03:28.000Z | null | false | c7368ccc03358758270dbf9e475222444d19926b | [] | [
"annotations_creators:found",
"annotations_creators:crowdsourced",
"language:en",
"license:cc-by-4.0",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"tags:knowledge graph",
"tags:knowledge",
"tags:link prediction",
"tags:link",
"task_categories:other"
] | https://huggingface.co/datasets/KGraph/FB15k-237/resolve/main/README.md | ---
annotations_creators:
- found
- crowdsourced
language:
- en
language_creators: []
license:
- cc-by-4.0
multilinguality:
- monolingual
pretty_name: FB15k-237
size_categories:
- 100K<n<1M
source_datasets:
- original
tags:
- knowledge graph
- knowledge
- link prediction
- link
task_categories:
- other
task_ids: []
---
# Dataset Card for FB15k-237
## Table of Contents
- [Dataset Card for FB15k-237](#dataset-card-for-fb15k-237)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
- [Who are the source language producers?](#who-are-the-source-language-producers)
- [Annotations](#annotations)
- [Annotation process](#annotation-process)
- [Who are the annotators?](#who-are-the-annotators)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://deepai.org/dataset/fb15k-237](https://deepai.org/dataset/fb15k-237)
- **Repository:**
- **Paper:** [More Information Needed](https://paperswithcode.com/dataset/fb15k-237)
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
FB15k-237 is a link prediction dataset derived from FB15k. While FB15k consists of 1,345 relations, 14,951 entities, and 592,213 triples, many triples are inverses, which causes leakage from the training split into the test and validation splits. FB15k-237 was created by Toutanova and Chen (2015) to ensure that the test and validation sets are free of this inverse-relation leakage. In summary, the FB15k-237 dataset contains 310,079 triples with 14,505 entities and 237 relation types.
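To make the inverse-leakage problem concrete, here is an illustrative sketch (in the spirit of the FB15k-to-FB15k-237 filtering, not the original procedure) that detects relation pairs which are exact inverses of each other in a toy triple set; in practice one of each detected pair would be dropped:

```python
# Hypothetical sketch: find relation pairs (r1, r2) where every (h, r1, t)
# triple has a matching inverse (t, r2, h). Such pairs leak answers between
# splits in FB15k; FB15k-237 removes this redundancy.

def find_inverse_pairs(triples):
    """Return the set of relation pairs whose triple sets are exact inverses."""
    by_relation = {}
    for h, r, t in triples:
        by_relation.setdefault(r, set()).add((h, t))
    pairs = set()
    rels = sorted(by_relation)
    for i, r1 in enumerate(rels):
        for r2 in rels[i + 1:]:
            inverted = {(t, h) for h, t in by_relation[r2]}
            if by_relation[r1] == inverted:
                pairs.add((r1, r2))
    return pairs

toy = [
    ("film_a", "directed_by", "person_x"),
    ("person_x", "director_of", "film_a"),
    ("film_b", "directed_by", "person_y"),
    ("person_y", "director_of", "film_b"),
    ("film_a", "language", "english"),
]
print(find_inverse_pairs(toy))  # {('directed_by', 'director_of')}
```

The entity and relation names here are invented for illustration; the real dataset uses Freebase MIDs and relation paths.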
### Supported Tasks and Leaderboards
Supported tasks: link prediction on knowledge graphs.
Leaderboards:
[More Information Needed](https://paperswithcode.com/sota/link-prediction-on-fb15k-237)
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
```
@inproceedings{schlichtkrull2018modeling,
title={Modeling relational data with graph convolutional networks},
author={Schlichtkrull, Michael and Kipf, Thomas N and Bloem, Peter and Berg, Rianne van den and Titov, Ivan and Welling, Max},
booktitle={European semantic web conference},
pages={593--607},
year={2018},
organization={Springer}
}
```
### Contributions
Thanks to [@pp413](https://github.com/pp413) for adding this dataset. |
cjvt | null | @inproceedings{fiser2012slownet,
title={sloWNet 3.0: development, extension and cleaning},
author={Fi{\v{s}}er, Darja and Novak, Jernej and Erjavec, Toma{\v{z}}},
booktitle={Proceedings of 6th International Global Wordnet Conference (GWC 2012)},
pages={113--117},
year={2012}
} | sloWNet is the Slovene WordNet developed in the expand approach: it contains the complete Princeton WordNet 3.0 and
over 70 000 Slovene literals. These literals have been added automatically using different types of existing resources,
such as bilingual dictionaries, parallel corpora and Wikipedia. 33 000 literals have been subsequently hand-validated. | false | 25 | false | cjvt/slownet | 2022-10-21T12:44:13.000Z | null | false | 64562bea2ded1dc071782fe699625f2d27357b41 | [] | [
"annotations_creators:machine-generated",
"annotations_creators:expert-generated",
"language:sl",
"language_creators:machine-generated",
"language_creators:found",
"license:cc-by-sa-4.0",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"tags:slownet",
"tags:wordnet",
"tags:pwn",
"task_categories:other"
] | https://huggingface.co/datasets/cjvt/slownet/resolve/main/README.md | ---
annotations_creators:
- machine-generated
- expert-generated
language:
- sl
language_creators:
- machine-generated
- found
license:
- cc-by-sa-4.0
multilinguality:
- monolingual
pretty_name: Semantic lexicon of Slovene sloWNet
size_categories:
- 100K<n<1M
source_datasets: []
tags:
- slownet
- wordnet
- pwn
task_categories:
- other
task_ids: []
---
# Dataset Card for SloWNet
### Dataset Summary
sloWNet is the Slovene WordNet developed in the expand approach: it contains the complete Princeton WordNet 3.0 and over 70 000 Slovene literals. These literals have been added automatically using different types of existing resources, such as bilingual dictionaries, parallel corpora and Wikipedia. 33 000 literals have been subsequently hand-validated.
For a detailed description of the data, please see Fišer et al. (2012).
### Supported Tasks and Leaderboards
Other (the data is a knowledge base).
### Languages
Slovenian.
## Dataset Structure
### Data Instances
Each synset is stored in its own instance. The following instance represents a synset containing the English synonyms `{'able'}` and Slovene synonyms `{'sposoben', 'zmožen'}`:
```
{
'id': 'eng-30-00001740-a',
'pos': 'a',
'bcs': 3,
'en_synonyms': {
'words': ['able'],
'senses': [1],
'pwnids': ['able%3:00:00::']
},
'sl_synonyms': {
'words': ['sposoben', 'zmožen'],
'is_validated': [False, False]
},
'en_def': "(usually followed by `to') having the necessary means or skill or know-how or authority to do something",
'sl_def': 'N/A',
'en_usages': [
'able to swim',
'she was able to program her computer',
'we were at last able to buy a car',
'able to get a grant for the project'
],
'sl_usages': [],
'ilrs': {
'types': ['near_antonym', 'be_in_state', 'be_in_state', 'eng_derivative', 'eng_derivative'],
'id_synsets': ['eng-30-00002098-a', 'eng-30-05200169-n', 'eng-30-05616246-n', 'eng-30-05200169-n', 'eng-30-05616246-n']
},
'semeval07_cluster': 'able',
'domains': ['quality']
}
```
### Data Fields
- `id`: a string ID of the synset;
- `pos`: part of speech tag of the synset;
- `bcs`: Base Concept Set index (`-1` if not present);
- `en_synonyms`: the English synonyms in the synset - synonym `i` is described with its form (`words[i]`), sense (`senses[i]`), and Princeton WordNet ID (`pwnids[i]`);
- `sl_synonyms`: the Slovene synonyms in the synset - synonym `i` is described with its form (`words[i]`) and a flag marking if its correctness has been manually validated (`is_validated[i]`);
- `en_def`: the English definition (`"N/A"` if not present);
- `sl_def`: the Slovene definition (`"N/A"` if not present);
- `en_usages`: the English examples of usage;
- `sl_usages`: the Slovene examples of usage;
- `ilrs`: internal language relations - relation `i` is described by its type (`types[i]`) and the target synset (`id_synsets[i]`);
- `semeval07_cluster`: string cluster (`"N/A"` if not present);
- `domains`: domains of the synset.
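Since several fields above are parallel lists indexed by the same synonym position `i`, it can help to zip them into per-synonym records. The following is an illustrative sketch (not an official API); the field names follow the instance shown above:

```python
# Pair up the parallel lists of a sloWNet synset instance.

def english_synonyms(synset):
    """One (word, sense, PWN id) tuple per English synonym."""
    en = synset["en_synonyms"]
    return list(zip(en["words"], en["senses"], en["pwnids"]))

def slovene_synonyms(synset):
    """One (word, manually-validated flag) tuple per Slovene synonym."""
    sl = synset["sl_synonyms"]
    return list(zip(sl["words"], sl["is_validated"]))

# Trimmed copy of the example instance from the card.
example = {
    "en_synonyms": {"words": ["able"], "senses": [1], "pwnids": ["able%3:00:00::"]},
    "sl_synonyms": {"words": ["sposoben", "zmožen"], "is_validated": [False, False]},
}
print(english_synonyms(example))  # [('able', 1, 'able%3:00:00::')]
print(slovene_synonyms(example))  # [('sposoben', False), ('zmožen', False)]
```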
## Additional Information
### Dataset Curators
Darja Fišer.
### Licensing Information
CC BY-SA 4.0
### Citation Information
```
@inproceedings{fiser2012slownet,
title={sloWNet 3.0: development, extension and cleaning},
author={Fi{\v{s}}er, Darja and Novak, Jernej and Erjavec, Toma{\v{z}}},
booktitle={Proceedings of 6th International Global Wordnet Conference (GWC 2012)},
pages={113--117},
year={2012}
}
```
### Contributions
Thanks to [@matejklemen](https://github.com/matejklemen) for adding this dataset.
|
amanneo | null | null | null | false | 12 | false | amanneo/collected-mail-corpus-mini | 2022-10-20T13:08:59.000Z | null | false | cfb34519c9fedf86d0548262071deabaa2443c0b | [] | [] | https://huggingface.co/datasets/amanneo/collected-mail-corpus-mini/resolve/main/README.md | ---
dataset_info:
features:
- name: id
dtype: float64
- name: email_type
dtype: string
- name: text
dtype: string
- name: mail_length
dtype: int64
splits:
- name: test
num_bytes: 4260.131707317073
num_examples: 21
- name: train
num_bytes: 37326.86829268293
num_examples: 184
download_size: 26719
dataset_size: 41587.0
---
# Dataset Card for "collected-mail-corpus-mini"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
peteromallet | null | null | null | false | null | false | peteromallet/snarf | 2022-10-20T13:41:37.000Z | null | false | fe9cca5cd9ffe5d6bdeaa402c239964befc94d1c | [] | [
"license:openrail"
] | https://huggingface.co/datasets/peteromallet/snarf/resolve/main/README.md | ---
license: openrail
---
|
copenlu | null | null | null | false | 1 | false | copenlu/spiced | 2022-10-24T12:31:04.000Z | null | false | aa1f981bd3a7bb02a46b9c472ac89a93c7024ed6 | [] | [
"annotations_creators:crowdsourced",
"annotations_creators:machine-generated",
"language:en",
"language_creators:found",
"license:mit",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:extended|s2orc",
"tags:scientific text",
"tags:scholarly text",
"tags:semantic text similarity",
"tags:fact checking",
"tags:misinformation",
"task_categories:text-classification",
"task_ids:text-scoring",
"task_ids:semantic-similarity-scoring"
] | https://huggingface.co/datasets/copenlu/spiced/resolve/main/README.md | ---
annotations_creators:
- crowdsourced
- machine-generated
language:
- en
language_creators:
- found
license:
- mit
multilinguality:
- monolingual
paperswithcode_id: null
pretty_name: SPICED
size_categories:
- 1K<n<10K
source_datasets:
- extended|s2orc
tags:
- scientific text
- scholarly text
- semantic text similarity
- fact checking
- misinformation
task_categories:
- text-classification
task_ids:
- text-scoring
- semantic-similarity-scoring
---
# Dataset Card for SPICED
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** http://www.copenlu.com/publication/2022_emnlp_wright/
- **Repository:** https://github.com/copenlu/scientific-information-change
- **Paper:**
### Dataset Summary
The Scientific Paraphrase and Information ChangE Dataset (SPICED) is a dataset of paired scientific findings from scientific papers, news media, and Twitter. Pairs come in two types: <paper, news> and <paper, tweet>. Each pair is labeled for the degree of information similarity in the _findings_ described by each sentence, on a scale from 1 to 5, called the _Information Matching Score (IMS)_. The data was curated from S2ORC, with news articles and tweets matched via Altmetric. Instances are annotated by experts using the Prolific platform and Potato. Please use the following citation when using this dataset:
```
@inproceedings{modeling-information-change,
    title={{Modeling Information Change in Science Communication with Semantically Matched Paraphrases}},
    author={Wright, Dustin and Pei, Jiaxin and Jurgens, David and Augenstein, Isabelle},
    booktitle = {Proceedings of EMNLP},
    publisher = {Association for Computational Linguistics},
    year = {2022}
}
```
### Supported Tasks and Leaderboards
The task is to predict the IMS between two scientific sentences, which is a scalar between 1 and 5. Preferred metrics are mean-squared error and Pearson correlation.
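The two preferred metrics can be computed with any standard library (e.g. `scipy.stats.pearsonr`); as a self-contained sketch, minimal pure-Python versions look like this (IMS predictions and gold scores are floats in [1, 5]):

```python
import math

def mse(preds, golds):
    """Mean-squared error between predicted and gold IMS scores."""
    return sum((p - g) ** 2 for p, g in zip(preds, golds)) / len(preds)

def pearson(preds, golds):
    """Pearson correlation coefficient (undefined for constant inputs)."""
    n = len(preds)
    mp, mg = sum(preds) / n, sum(golds) / n
    cov = sum((p - mp) * (g - mg) for p, g in zip(preds, golds))
    sp = math.sqrt(sum((p - mp) ** 2 for p in preds))
    sg = math.sqrt(sum((g - mg) ** 2 for g in golds))
    return cov / (sp * sg)

preds = [1.0, 2.5, 4.0, 5.0]  # invented example scores
golds = [1.5, 2.0, 4.5, 4.5]
print(mse(preds, golds))      # 0.25
print(pearson(preds, golds))
```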
### Languages
English
## Dataset Structure
### Data Fields
- DOI: The DOI of the original scientific article
- instance\_id: Unique instance ID for the sample. The ID contains the field, whether or not it is a tweet, and whether or not the sample was manually labeled or automatically using SBERT (marked as "easy")
- News Finding: Text of the news or tweet finding
- Paper Finding: Text of the paper finding
- News Context: For news instances, the surrounding two sentences for the news finding. For tweets, a copy of the tweet
- Paper Context: The surrounding two sentences for the paper finding
- scores: Annotator scores after removing low competence annotators
- field: The academic field of the paper ('Computer\_Science', 'Medicine', 'Biology', or 'Psychology')
- split: The dataset split ('train', 'val', or 'test')
- final\_score: The IMS of the instance
- source: Either "news" or "tweet"
- News Url: A URL to the source article if a news instance or the tweet ID of a tweet
### Data Splits
- train: 4721 instances
- validation: 664 instances
- test: 640 instances
## Dataset Creation
For the full details of how the dataset was created, please refer to our [EMNLP 2022 paper]().
### Curation Rationale
Science communication is a complex process of translation from highly technical scientific language to common language that lay people can understand. At the same time, the general public relies on good science communication in order to inform critical decisions about their health and behavior. SPICED was curated in order to provide a training dataset and benchmark for machine learning models to measure changes in scientific information at different stages of the science communication pipeline.
### Source Data
#### Initial Data Collection and Normalization
Scientific text: S2ORC
News articles and Tweets are collected through Altmetric.
#### Who are the source language producers?
Scientists, journalists, and Twitter users.
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
Models trained on SPICED can be used to perform large scale analyses of science communication. They can be used to match the same finding discussed in different media, and reveal trends in differences in reporting at different stages of the science communication pipeline. It is hoped that this can help to build tools which will improve science communication.
### Discussion of Biases
The dataset is restricted to computer science, medicine, biology, and psychology, which may introduce some bias in the topics which models will perform well on.
### Other Known Limitations
While some context is available, we do not release the full text of the news articles and scientific papers, which may contain further context that could help with learning the task. We do, however, provide the paper DOIs and links to the original news articles in case the full text is desired.
## Additional Information
### Dataset Curators
Dustin Wright, Jiaxin Pei, David Jurgens, and Isabelle Augenstein
### Licensing Information
MIT
### Contributions
Thanks to [@dwright37](https://github.com/dwright37) for adding this dataset. |
relbert | null | @inproceedings{jurgens-etal-2012-semeval,
title = "{S}em{E}val-2012 Task 2: Measuring Degrees of Relational Similarity",
author = "Jurgens, David and
Mohammad, Saif and
Turney, Peter and
Holyoak, Keith",
booktitle = "*{SEM} 2012: The First Joint Conference on Lexical and Computational Semantics {--} Volume 1: Proceedings of the main conference and the shared task, and Volume 2: Proceedings of the Sixth International Workshop on Semantic Evaluation ({S}em{E}val 2012)",
month = "7-8 " # jun,
year = "2012",
address = "Montr{\'e}al, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/S12-1047",
pages = "356--364",
} | [SemEVAL 2012 task 2: Relational Similarity](https://aclanthology.org/S12-1047/) | false | 2,779 | false | relbert/semeval2012_relational_similarity_v4 | 2022-10-21T10:13:46.000Z | null | false | 1d1b487f8fa455d2c09468bbfb58d971bf7f1720 | [] | [
"language:en",
"license:other",
"multilinguality:monolingual",
"size_categories:1K<n<10K"
] | https://huggingface.co/datasets/relbert/semeval2012_relational_similarity_v4/resolve/main/README.md | ---
language:
- en
license:
- other
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
pretty_name: SemEval2012 task 2 Relational Similarity
---
# Dataset Card for "relbert/semeval2012_relational_similarity_v4"
## Dataset Description
- **Repository:** [RelBERT](https://github.com/asahi417/relbert)
- **Paper:** [https://aclanthology.org/S12-1047/](https://aclanthology.org/S12-1047/)
- **Dataset:** SemEval2012: Relational Similarity
### Dataset Summary
***IMPORTANT***: This is the same dataset as [relbert/semeval2012_relational_similarity](https://huggingface.co/datasets/relbert/semeval2012_relational_similarity),
but with a different dataset construction.
Relational similarity dataset from [SemEval2012 task 2](https://aclanthology.org/S12-1047/), compiled to fine-tune [RelBERT](https://github.com/asahi417/relbert) model.
The dataset contains lists of positive and negative word pairs for 89 pre-defined relation types.
The relation types are constructed on top of the following 10 parent relation types.
```shell
{
1: "Class Inclusion", # Hypernym
2: "Part-Whole", # Meronym, Substance Meronym
3: "Similar", # Synonym, Co-hypornym
4: "Contrast", # Antonym
5: "Attribute", # Attribute, Event
6: "Non Attribute",
7: "Case Relation",
8: "Cause-Purpose",
9: "Space-Time",
10: "Representation"
}
```
Each of the parent relation is further grouped into child relation types where the definition can be found [here](https://drive.google.com/file/d/0BzcZKTSeYL8VenY0QkVpZVpxYnc/view?resourcekey=0-ZP-UARfJj39PcLroibHPHw).
## Dataset Structure
### Data Instances
An example of `train` looks as follows.
```
{
    'relation_type': '8d',
    'positives': [ [ "breathe", "live" ], [ "study", "learn" ], [ "speak", "communicate" ], ... ],
    'negatives': [ [ "starving", "hungry" ], [ "clean", "bathe" ], [ "hungry", "starving" ], ... ]
}
```
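For fine-tuning, an instance in this format is typically flattened into labelled word-pair examples. The following is a hedged sketch mirroring the instance structure above, not the actual RelBERT training code:

```python
# Flatten one instance into (relation_type, word_pair, label) rows,
# with label 1 for positives and 0 for negatives.

def flatten_instance(instance):
    rel = instance["relation_type"]
    rows = [(rel, tuple(pair), 1) for pair in instance["positives"]]
    rows += [(rel, tuple(pair), 0) for pair in instance["negatives"]]
    return rows

# Trimmed copy of the example instance from the card.
example = {
    "relation_type": "8d",
    "positives": [["breathe", "live"], ["study", "learn"]],
    "negatives": [["starving", "hungry"], ["clean", "bathe"]],
}
rows = flatten_instance(example)
print(len(rows))  # 4
print(rows[0])    # ('8d', ('breathe', 'live'), 1)
```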
### Data Splits
| name |train|validation|
|---------|----:|---------:|
|semeval2012_relational_similarity| 89 | 89|
### Number of Positive/Negative Word-pairs in each Split
| | positives | negatives |
|:--------------------------------------------|------------:|------------:|
| ('1', 'parent', 'train') | 88 | 544 |
| ('1', 'parent', 'validation') | 22 | 136 |
| ('10', 'parent', 'train') | 48 | 584 |
| ('10', 'parent', 'validation') | 12 | 146 |
| ('10a', 'child', 'train') | 8 | 1324 |
| ('10a', 'child', 'validation') | 2 | 331 |
| ('10a', 'child_prototypical', 'train') | 97 | 1917 |
| ('10a', 'child_prototypical', 'validation') | 26 | 521 |
| ('10b', 'child', 'train') | 8 | 1325 |
| ('10b', 'child', 'validation') | 2 | 331 |
| ('10b', 'child_prototypical', 'train') | 90 | 1558 |
| ('10b', 'child_prototypical', 'validation') | 27 | 469 |
| ('10c', 'child', 'train') | 8 | 1327 |
| ('10c', 'child', 'validation') | 2 | 331 |
| ('10c', 'child_prototypical', 'train') | 85 | 1640 |
| ('10c', 'child_prototypical', 'validation') | 20 | 390 |
| ('10d', 'child', 'train') | 8 | 1328 |
| ('10d', 'child', 'validation') | 2 | 331 |
| ('10d', 'child_prototypical', 'train') | 77 | 1390 |
| ('10d', 'child_prototypical', 'validation') | 22 | 376 |
| ('10e', 'child', 'train') | 8 | 1329 |
| ('10e', 'child', 'validation') | 2 | 332 |
| ('10e', 'child_prototypical', 'train') | 67 | 884 |
| ('10e', 'child_prototypical', 'validation') | 20 | 234 |
| ('10f', 'child', 'train') | 8 | 1328 |
| ('10f', 'child', 'validation') | 2 | 331 |
| ('10f', 'child_prototypical', 'train') | 80 | 1460 |
| ('10f', 'child_prototypical', 'validation') | 19 | 306 |
| ('1a', 'child', 'train') | 8 | 1324 |
| ('1a', 'child', 'validation') | 2 | 331 |
| ('1a', 'child_prototypical', 'train') | 106 | 1854 |
| ('1a', 'child_prototypical', 'validation') | 17 | 338 |
| ('1b', 'child', 'train') | 8 | 1324 |
| ('1b', 'child', 'validation') | 2 | 331 |
| ('1b', 'child_prototypical', 'train') | 95 | 1712 |
| ('1b', 'child_prototypical', 'validation') | 28 | 480 |
| ('1c', 'child', 'train') | 8 | 1327 |
| ('1c', 'child', 'validation') | 2 | 331 |
| ('1c', 'child_prototypical', 'train') | 80 | 1528 |
| ('1c', 'child_prototypical', 'validation') | 25 | 502 |
| ('1d', 'child', 'train') | 8 | 1323 |
| ('1d', 'child', 'validation') | 2 | 330 |
| ('1d', 'child_prototypical', 'train') | 112 | 2082 |
| ('1d', 'child_prototypical', 'validation') | 23 | 458 |
| ('1e', 'child', 'train') | 8 | 1329 |
| ('1e', 'child', 'validation') | 2 | 332 |
| ('1e', 'child_prototypical', 'train') | 63 | 775 |
| ('1e', 'child_prototypical', 'validation') | 24 | 256 |
| ('2', 'parent', 'train') | 80 | 552 |
| ('2', 'parent', 'validation') | 20 | 138 |
| ('2a', 'child', 'train') | 8 | 1324 |
| ('2a', 'child', 'validation') | 2 | 330 |
| ('2a', 'child_prototypical', 'train') | 93 | 1885 |
| ('2a', 'child_prototypical', 'validation') | 36 | 736 |
| ('2b', 'child', 'train') | 8 | 1327 |
| ('2b', 'child', 'validation') | 2 | 331 |
| ('2b', 'child_prototypical', 'train') | 86 | 1326 |
| ('2b', 'child_prototypical', 'validation') | 19 | 284 |
| ('2c', 'child', 'train') | 8 | 1325 |
| ('2c', 'child', 'validation') | 2 | 331 |
| ('2c', 'child_prototypical', 'train') | 96 | 1773 |
| ('2c', 'child_prototypical', 'validation') | 21 | 371 |
| ('2d', 'child', 'train') | 8 | 1328 |
| ('2d', 'child', 'validation') | 2 | 331 |
| ('2d', 'child_prototypical', 'train') | 79 | 1329 |
| ('2d', 'child_prototypical', 'validation') | 20 | 338 |
| ('2e', 'child', 'train') | 8 | 1327 |
| ('2e', 'child', 'validation') | 2 | 331 |
| ('2e', 'child_prototypical', 'train') | 82 | 1462 |
| ('2e', 'child_prototypical', 'validation') | 23 | 463 |
| ('2f', 'child', 'train') | 8 | 1327 |
| ('2f', 'child', 'validation') | 2 | 331 |
| ('2f', 'child_prototypical', 'train') | 88 | 1869 |
| ('2f', 'child_prototypical', 'validation') | 17 | 371 |
| ('2g', 'child', 'train') | 8 | 1323 |
| ('2g', 'child', 'validation') | 2 | 330 |
| ('2g', 'child_prototypical', 'train') | 108 | 1925 |
| ('2g', 'child_prototypical', 'validation') | 27 | 480 |
| ('2h', 'child', 'train') | 8 | 1327 |
| ('2h', 'child', 'validation') | 2 | 331 |
| ('2h', 'child_prototypical', 'train') | 84 | 1540 |
| ('2h', 'child_prototypical', 'validation') | 21 | 385 |
| ('2i', 'child', 'train') | 8 | 1328 |
| ('2i', 'child', 'validation') | 2 | 332 |
| ('2i', 'child_prototypical', 'train') | 72 | 1335 |
| ('2i', 'child_prototypical', 'validation') | 21 | 371 |
| ('2j', 'child', 'train') | 8 | 1328 |
| ('2j', 'child', 'validation') | 2 | 331 |
| ('2j', 'child_prototypical', 'train') | 80 | 1595 |
| ('2j', 'child_prototypical', 'validation') | 19 | 369 |
| ('3', 'parent', 'train') | 64 | 568 |
| ('3', 'parent', 'validation') | 16 | 142 |
| ('3a', 'child', 'train') | 8 | 1327 |
| ('3a', 'child', 'validation') | 2 | 331 |
| ('3a', 'child_prototypical', 'train') | 87 | 1597 |
| ('3a', 'child_prototypical', 'validation') | 18 | 328 |
| ('3b', 'child', 'train') | 8 | 1327 |
| ('3b', 'child', 'validation') | 2 | 331 |
| ('3b', 'child_prototypical', 'train') | 87 | 1833 |
| ('3b', 'child_prototypical', 'validation') | 18 | 407 |
| ('3c', 'child', 'train') | 8 | 1326 |
| ('3c', 'child', 'validation') | 2 | 331 |
| ('3c', 'child_prototypical', 'train') | 93 | 1664 |
| ('3c', 'child_prototypical', 'validation') | 18 | 315 |
| ('3d', 'child', 'train') | 8 | 1324 |
| ('3d', 'child', 'validation') | 2 | 331 |
| ('3d', 'child_prototypical', 'train') | 101 | 1943 |
| ('3d', 'child_prototypical', 'validation') | 22 | 372 |
| ('3e', 'child', 'train') | 8 | 1332 |
| ('3e', 'child', 'validation') | 2 | 332 |
| ('3e', 'child_prototypical', 'train') | 49 | 900 |
| ('3e', 'child_prototypical', 'validation') | 20 | 368 |
| ('3f', 'child', 'train') | 8 | 1327 |
| ('3f', 'child', 'validation') | 2 | 331 |
| ('3f', 'child_prototypical', 'train') | 90 | 1983 |
| ('3f', 'child_prototypical', 'validation') | 15 | 362 |
| ('3g', 'child', 'train') | 8 | 1331 |
| ('3g', 'child', 'validation') | 2 | 332 |
| ('3g', 'child_prototypical', 'train') | 61 | 1089 |
| ('3g', 'child_prototypical', 'validation') | 14 | 251 |
| ('3h', 'child', 'train') | 8 | 1328 |
| ('3h', 'child', 'validation') | 2 | 331 |
| ('3h', 'child_prototypical', 'train') | 71 | 1399 |
| ('3h', 'child_prototypical', 'validation') | 28 | 565 |
| ('4', 'parent', 'train') | 64 | 568 |
| ('4', 'parent', 'validation') | 16 | 142 |
| ('4a', 'child', 'train') | 8 | 1327 |
| ('4a', 'child', 'validation') | 2 | 331 |
| ('4a', 'child_prototypical', 'train') | 85 | 1766 |
| ('4a', 'child_prototypical', 'validation') | 20 | 474 |
| ('4b', 'child', 'train') | 8 | 1330 |
| ('4b', 'child', 'validation') | 2 | 332 |
| ('4b', 'child_prototypical', 'train') | 66 | 949 |
| ('4b', 'child_prototypical', 'validation') | 15 | 214 |
| ('4c', 'child', 'train') | 8 | 1326 |
| ('4c', 'child', 'validation') | 2 | 331 |
| ('4c', 'child_prototypical', 'train') | 86 | 1755 |
| ('4c', 'child_prototypical', 'validation') | 25 | 446 |
| ('4d', 'child', 'train') | 8 | 1332 |
| ('4d', 'child', 'validation') | 2 | 333 |
| ('4d', 'child_prototypical', 'train') | 46 | 531 |
| ('4d', 'child_prototypical', 'validation') | 17 | 218 |
| ('4e', 'child', 'train') | 8 | 1326 |
| ('4e', 'child', 'validation') | 2 | 331 |
| ('4e', 'child_prototypical', 'train') | 92 | 2021 |
| ('4e', 'child_prototypical', 'validation') | 19 | 402 |
| ('4f', 'child', 'train') | 8 | 1328 |
| ('4f', 'child', 'validation') | 2 | 332 |
| ('4f', 'child_prototypical', 'train') | 72 | 1464 |
| ('4f', 'child_prototypical', 'validation') | 21 | 428 |
| ('4g', 'child', 'train') | 8 | 1324 |
| ('4g', 'child', 'validation') | 2 | 330 |
| ('4g', 'child_prototypical', 'train') | 106 | 2057 |
| ('4g', 'child_prototypical', 'validation') | 23 | 435 |
| ('4h', 'child', 'train') | 8 | 1326 |
| ('4h', 'child', 'validation') | 2 | 331 |
| ('4h', 'child_prototypical', 'train') | 85 | 1787 |
| ('4h', 'child_prototypical', 'validation') | 26 | 525 |
| ('5', 'parent', 'train') | 72 | 560 |
| ('5', 'parent', 'validation') | 18 | 140 |
| ('5a', 'child', 'train') | 8 | 1324 |
| ('5a', 'child', 'validation') | 2 | 331 |
| ('5a', 'child_prototypical', 'train') | 101 | 1876 |
| ('5a', 'child_prototypical', 'validation') | 22 | 439 |
| ('5b', 'child', 'train') | 8 | 1329 |
| ('5b', 'child', 'validation') | 2 | 332 |
| ('5b', 'child_prototypical', 'train') | 70 | 1310 |
| ('5b', 'child_prototypical', 'validation') | 17 | 330 |
| ('5c', 'child', 'train') | 8 | 1327 |
| ('5c', 'child', 'validation') | 2 | 331 |
| ('5c', 'child_prototypical', 'train') | 85 | 1552 |
| ('5c', 'child_prototypical', 'validation') | 20 | 373 |
| ('5d', 'child', 'train') | 8 | 1324 |
| ('5d', 'child', 'validation') | 2 | 330 |
| ('5d', 'child_prototypical', 'train') | 102 | 1783 |
| ('5d', 'child_prototypical', 'validation') | 27 | 580 |
| ('5e', 'child', 'train') | 8 | 1329 |
| ('5e', 'child', 'validation') | 2 | 332 |
| ('5e', 'child_prototypical', 'train') | 68 | 1283 |
| ('5e', 'child_prototypical', 'validation') | 19 | 357 |
| ('5f', 'child', 'train') | 8 | 1327 |
| ('5f', 'child', 'validation') | 2 | 331 |
| ('5f', 'child_prototypical', 'train') | 77 | 1568 |
| ('5f', 'child_prototypical', 'validation') | 28 | 567 |
| ('5g', 'child', 'train') | 8 | 1328 |
| ('5g', 'child', 'validation') | 2 | 332 |
| ('5g', 'child_prototypical', 'train') | 79 | 1626 |
| ('5g', 'child_prototypical', 'validation') | 14 | 266 |
| ('5h', 'child', 'train') | 8 | 1324 |
| ('5h', 'child', 'validation') | 2 | 330 |
| ('5h', 'child_prototypical', 'train') | 109 | 2348 |
| ('5h', 'child_prototypical', 'validation') | 20 | 402 |
| ('5i', 'child', 'train') | 8 | 1324 |
| ('5i', 'child', 'validation') | 2 | 331 |
| ('5i', 'child_prototypical', 'train') | 96 | 2010 |
| ('5i', 'child_prototypical', 'validation') | 27 | 551 |
| ('6', 'parent', 'train') | 64 | 568 |
| ('6', 'parent', 'validation') | 16 | 142 |
| ('6a', 'child', 'train') | 8 | 1324 |
| ('6a', 'child', 'validation') | 2 | 330 |
| ('6a', 'child_prototypical', 'train') | 102 | 1962 |
| ('6a', 'child_prototypical', 'validation') | 27 | 530 |
| ('6b', 'child', 'train') | 8 | 1327 |
| ('6b', 'child', 'validation') | 2 | 331 |
| ('6b', 'child_prototypical', 'train') | 90 | 1840 |
| ('6b', 'child_prototypical', 'validation') | 15 | 295 |
| ('6c', 'child', 'train') | 8 | 1325 |
| ('6c', 'child', 'validation') | 2 | 331 |
| ('6c', 'child_prototypical', 'train') | 90 | 1968 |
| ('6c', 'child_prototypical', 'validation') | 27 | 527 |
| ('6d', 'child', 'train') | 8 | 1328 |
| ('6d', 'child', 'validation') | 2 | 331 |
| ('6d', 'child_prototypical', 'train') | 82 | 1903 |
| ('6d', 'child_prototypical', 'validation') | 17 | 358 |
| ('6e', 'child', 'train') | 8 | 1327 |
| ('6e', 'child', 'validation') | 2 | 331 |
| ('6e', 'child_prototypical', 'train') | 85 | 1737 |
| ('6e', 'child_prototypical', 'validation') | 20 | 398 |
| ('6f', 'child', 'train') | 8 | 1326 |
| ('6f', 'child', 'validation') | 2 | 331 |
| ('6f', 'child_prototypical', 'train') | 87 | 1652 |
| ('6f', 'child_prototypical', 'validation') | 24 | 438 |
| ('6g', 'child', 'train') | 8 | 1326 |
| ('6g', 'child', 'validation') | 2 | 331 |
| ('6g', 'child_prototypical', 'train') | 94 | 1740 |
| ('6g', 'child_prototypical', 'validation') | 17 | 239 |
| ('6h', 'child', 'train') | 8 | 1324 |
| ('6h', 'child', 'validation') | 2 | 330 |
| ('6h', 'child_prototypical', 'train') | 115 | 2337 |
| ('6h', 'child_prototypical', 'validation') | 14 | 284 |
| ('7', 'parent', 'train') | 64 | 568 |
| ('7', 'parent', 'validation') | 16 | 142 |
| ('7a', 'child', 'train') | 8 | 1324 |
| ('7a', 'child', 'validation') | 2 | 331 |
| ('7a', 'child_prototypical', 'train') | 99 | 2045 |
| ('7a', 'child_prototypical', 'validation') | 24 | 516 |
| ('7b', 'child', 'train') | 8 | 1330 |
| ('7b', 'child', 'validation') | 2 | 332 |
| ('7b', 'child_prototypical', 'train') | 69 | 905 |
| ('7b', 'child_prototypical', 'validation') | 12 | 177 |
| ('7c', 'child', 'train') | 8 | 1327 |
| ('7c', 'child', 'validation') | 2 | 331 |
| ('7c', 'child_prototypical', 'train') | 85 | 1402 |
| ('7c', 'child_prototypical', 'validation') | 20 | 313 |
| ('7d', 'child', 'train') | 8 | 1324 |
| ('7d', 'child', 'validation') | 2 | 331 |
| ('7d', 'child_prototypical', 'train') | 98 | 2064 |
| ('7d', 'child_prototypical', 'validation') | 25 | 497 |
| ('7e', 'child', 'train') | 8 | 1328 |
| ('7e', 'child', 'validation') | 2 | 331 |
| ('7e', 'child_prototypical', 'train') | 78 | 1270 |
| ('7e', 'child_prototypical', 'validation') | 21 | 298 |
| ('7f', 'child', 'train') | 8 | 1326 |
| ('7f', 'child', 'validation') | 2 | 331 |
| ('7f', 'child_prototypical', 'train') | 89 | 1377 |
| ('7f', 'child_prototypical', 'validation') | 22 | 380 |
| ('7g', 'child', 'train') | 8 | 1328 |
| ('7g', 'child', 'validation') | 2 | 332 |
| ('7g', 'child_prototypical', 'train') | 72 | 885 |
| ('7g', 'child_prototypical', 'validation') | 21 | 263 |
| ('7h', 'child', 'train') | 8 | 1324 |
| ('7h', 'child', 'validation') | 2 | 331 |
| ('7h', 'child_prototypical', 'train') | 94 | 1479 |
| ('7h', 'child_prototypical', 'validation') | 29 | 467 |
| ('8', 'parent', 'train') | 64 | 568 |
| ('8', 'parent', 'validation') | 16 | 142 |
| ('8a', 'child', 'train') | 8 | 1324 |
| ('8a', 'child', 'validation') | 2 | 331 |
| ('8a', 'child_prototypical', 'train') | 93 | 1640 |
| ('8a', 'child_prototypical', 'validation') | 30 | 552 |
| ('8b', 'child', 'train') | 8 | 1330 |
| ('8b', 'child', 'validation') | 2 | 332 |
| ('8b', 'child_prototypical', 'train') | 61 | 1126 |
| ('8b', 'child_prototypical', 'validation') | 20 | 361 |
| ('8c', 'child', 'train') | 8 | 1326 |
| ('8c', 'child', 'validation') | 2 | 331 |
| ('8c', 'child_prototypical', 'train') | 96 | 1547 |
| ('8c', 'child_prototypical', 'validation') | 15 | 210 |
| ('8d', 'child', 'train') | 8 | 1325 |
| ('8d', 'child', 'validation') | 2 | 331 |
| ('8d', 'child_prototypical', 'train') | 92 | 1472 |
| ('8d', 'child_prototypical', 'validation') | 25 | 438 |
| ('8e', 'child', 'train') | 8 | 1327 |
| ('8e', 'child', 'validation') | 2 | 331 |
| ('8e', 'child_prototypical', 'train') | 87 | 1340 |
| ('8e', 'child_prototypical', 'validation') | 18 | 270 |
| ('8f', 'child', 'train') | 8 | 1326 |
| ('8f', 'child', 'validation') | 2 | 331 |
| ('8f', 'child_prototypical', 'train') | 83 | 1416 |
| ('8f', 'child_prototypical', 'validation') | 28 | 452 |
| ('8g', 'child', 'train') | 8 | 1330 |
| ('8g', 'child', 'validation') | 2 | 332 |
| ('8g', 'child_prototypical', 'train') | 62 | 640 |
| ('8g', 'child_prototypical', 'validation') | 19 | 199 |
| ('8h', 'child', 'train') | 8 | 1324 |
| ('8h', 'child', 'validation') | 2 | 331 |
| ('8h', 'child_prototypical', 'train') | 100 | 1816 |
| ('8h', 'child_prototypical', 'validation') | 23 | 499 |
| ('9', 'parent', 'train') | 72 | 560 |
| ('9', 'parent', 'validation') | 18 | 140 |
| ('9a', 'child', 'train') | 8 | 1324 |
| ('9a', 'child', 'validation') | 2 | 331 |
| ('9a', 'child_prototypical', 'train') | 96 | 1520 |
| ('9a', 'child_prototypical', 'validation') | 27 | 426 |
| ('9b', 'child', 'train') | 8 | 1326 |
| ('9b', 'child', 'validation') | 2 | 331 |
| ('9b', 'child_prototypical', 'train') | 93 | 1783 |
| ('9b', 'child_prototypical', 'validation') | 18 | 307 |
| ('9c', 'child', 'train') | 8 | 1330 |
| ('9c', 'child', 'validation') | 2 | 332 |
| ('9c', 'child_prototypical', 'train') | 59 | 433 |
| ('9c', 'child_prototypical', 'validation') | 22 | 163 |
| ('9d', 'child', 'train') | 8 | 1328 |
| ('9d', 'child', 'validation') | 2 | 332 |
| ('9d', 'child_prototypical', 'train') | 78 | 1683 |
| ('9d', 'child_prototypical', 'validation') | 15 | 302 |
| ('9e', 'child', 'train') | 8 | 1329 |
| ('9e', 'child', 'validation') | 2 | 332 |
| ('9e', 'child_prototypical', 'train') | 66 | 1426 |
| ('9e', 'child_prototypical', 'validation') | 21 | 475 |
| ('9f', 'child', 'train') | 8 | 1328 |
| ('9f', 'child', 'validation') | 2 | 331 |
| ('9f', 'child_prototypical', 'train') | 79 | 1436 |
| ('9f', 'child_prototypical', 'validation') | 20 | 330 |
| ('9g', 'child', 'train') | 8 | 1324 |
| ('9g', 'child', 'validation') | 2 | 331 |
| ('9g', 'child_prototypical', 'train') | 100 | 1685 |
| ('9g', 'child_prototypical', 'validation') | 23 | 384 |
| ('9h', 'child', 'train') | 8 | 1325 |
| ('9h', 'child', 'validation') | 2 | 331 |
| ('9h', 'child_prototypical', 'train') | 95 | 1799 |
| ('9h', 'child_prototypical', 'validation') | 22 | 462 |
| ('9i', 'child', 'train') | 8 | 1328 |
| ('9i', 'child', 'validation') | 2 | 332 |
| ('9i', 'child_prototypical', 'train') | 79 | 1361 |
| ('9i', 'child_prototypical', 'validation') | 14 | 252 |
### Citation Information
```
@inproceedings{jurgens-etal-2012-semeval,
title = "{S}em{E}val-2012 Task 2: Measuring Degrees of Relational Similarity",
author = "Jurgens, David and
Mohammad, Saif and
Turney, Peter and
Holyoak, Keith",
booktitle = "*{SEM} 2012: The First Joint Conference on Lexical and Computational Semantics {--} Volume 1: Proceedings of the main conference and the shared task, and Volume 2: Proceedings of the Sixth International Workshop on Semantic Evaluation ({S}em{E}val 2012)",
month = "7-8 " # jun,
year = "2012",
address = "Montr{\'e}al, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/S12-1047",
pages = "356--364",
}
``` |
devozs | null | null | null | false | 226 | false | devozs/israeli_soccer_news | 2022-10-22T06:20:33.000Z | null | false | f249eac7e732f016ae2db3ea0a1b1f90d76cf722 | [] | [] | https://huggingface.co/datasets/devozs/israeli_soccer_news/resolve/main/README.md | ---
dataset_info:
features:
- name: article_title
dtype: string
- name: article_body
dtype: string
- name: article_body_length
dtype: int64
splits:
- name: train
num_bytes: 8956722.687408645
num_examples: 4310
- name: validation
num_bytes: 995422.3125913552
num_examples: 479
download_size: 4052466
dataset_size: 9952145.0
---
# Dataset Card for "israeli_soccer_news"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
drt | null | @inproceedings{KQAPro,
title={{KQA P}ro: A Large Diagnostic Dataset for Complex Question Answering over Knowledge Base},
author={Cao, Shulin and Shi, Jiaxin and Pan, Liangming and Nie, Lunyiu and Xiang, Yutong and Hou, Lei and Li, Juanzi and He, Bin and Zhang, Hanwang},
booktitle={ACL'22},
year={2022}
} | A large-scale, diverse, challenging dataset of complex question answering over knowledge base. | false | 136 | false | drt/kqa_pro | 2022-10-20T19:35:20.000Z | null | false | 0b26da66cec9a4d1e42bde3560aeae9f89f6433b | [] | [
"arxiv:2007.03875",
"annotations_creators:machine-generated",
"annotations_creators:expert-generated",
"language:en",
"language_creators:found",
"license:mit",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"tags:knowledge graph",
"tags:freebase",
"task_categories:question-answering",
"task_ids:open-domain-qa"
] | https://huggingface.co/datasets/drt/kqa_pro/resolve/main/README.md | ---
annotations_creators:
- machine-generated
- expert-generated
language:
- en
language_creators:
- found
license:
- mit
multilinguality:
- monolingual
pretty_name: KQA-Pro
size_categories:
- 10K<n<100K
source_datasets:
- original
tags:
- knowledge graph
- freebase
task_categories:
- question-answering
task_ids:
- open-domain-qa
---
# Dataset Card for KQA Pro
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Configs](#data-configs)
- [Data Splits](#data-splits)
- [Additional Information](#additional-information)
- [How to run SPARQLs and programs](#how-to-run-sparqls-and-programs)
- [Knowledge Graph File](#knowledge-graph-file)
- [How to Submit to Leaderboard](#how-to-submit-results-of-test-set)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** http://thukeg.gitee.io/kqa-pro/
- **Repository:** https://github.com/shijx12/KQAPro_Baselines
- **Paper:** [KQA Pro: A Dataset with Explicit Compositional Programs for Complex Question Answering over Knowledge Base](https://aclanthology.org/2022.acl-long.422/)
- **Leaderboard:** http://thukeg.gitee.io/kqa-pro/leaderboard.html
- **Point of Contact:** shijx12 at gmail dot com
### Dataset Summary
KQA Pro is a large-scale dataset for complex question answering over a knowledge base. The questions are diverse and challenging, requiring multiple reasoning capabilities including compositional reasoning, multi-hop reasoning, quantitative comparison, and set operations. Strong supervision in the form of a SPARQL query and a program is provided for each question.
### Supported Tasks and Leaderboards
It supports knowledge-graph-based question answering. Specifically, it provides a SPARQL query and a *program* for each question.
### Languages
English
## Dataset Structure
**train.json/val.json**
```
[
{
'question': str,
'sparql': str, # executable in our virtuoso engine
'program':
[
{
'function': str, # function name
'dependencies': [int], # functional inputs, representing indices of the preceding functions
'inputs': [str], # textual inputs
}
],
'choices': [str], # 10 answer choices
'answer': str, # golden answer
}
]
```
**test.json**
```
[
{
'question': str,
'choices': [str], # 10 answer choices
}
]
```
### Data Configs
This dataset has two configs, `train_val` and `test`, because the splits expose different fields. Specify the config when loading, e.g. `load_dataset('drt/kqa_pro', 'train_val')`.
### Data Splits
train, val, test
## Additional Information
### Knowledge Graph File
You can find the knowledge graph file `kb.json` in the original GitHub repository. It has the following format:
```json
{
'concepts':
{
'<id>':
{
'name': str,
'instanceOf': ['<id>', '<id>'], # ids of parent concept
}
},
'entities': # excluding concepts
{
'<id>':
{
'name': str,
'instanceOf': ['<id>', '<id>'], # ids of parent concept
'attributes':
[
{
'key': str, # attribute key
'value': # attribute value
{
'type': 'string'/'quantity'/'date'/'year',
'value': float/int/str, # float or int for quantity, int for year, 'yyyy/mm/dd' for date
'unit': str, # for quantity
},
'qualifiers':
{
'<qk>': # qualifier key, one key may have multiple corresponding qualifier values
[
{
'type': 'string'/'quantity'/'date'/'year',
'value': float/int/str,
'unit': str,
}, # the format of qualifier value is similar to attribute value
]
}
},
]
'relations':
[
{
'predicate': str,
'object': '<id>', # NOTE: it may be a concept id
'direction': 'forward'/'backward',
'qualifiers':
{
'<qk>': # qualifier key, one key may have multiple corresponding qualifier values
[
{
'type': 'string'/'quantity'/'date'/'year',
'value': float/int/str,
'unit': str,
}, # the format of qualifier value is similar to attribute value
]
}
},
]
}
}
}
```
### How to run SPARQLs and programs
We implement multiple baselines in our [codebase](https://github.com/shijx12/KQAPro_Baselines), which includes a supervised SPARQL parser and program parser.
In the SPARQL parser, we implement a query engine based on [Virtuoso](https://github.com/openlink/virtuoso-opensource.git).
You can install the engine based on our [instructions](https://github.com/shijx12/KQAPro_Baselines/blob/master/SPARQL/README.md), and then feed your predicted SPARQL to get the answer.
In the program parser, we implement a rule-based program executor, which receives a predicted program and returns the answer.
Detailed introductions of our functions can be found in our [paper](https://arxiv.org/abs/2007.03875).
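For intuition, the linearized program format above can be executed by resolving each step's `dependencies` against the outputs of earlier steps. The sketch below uses a toy function inventory — the `Find`/`Relate`/`Count` semantics here are illustrative assumptions, not the actual KQA Pro functions:

```python
# Minimal sketch of executing a KQA-Pro-style program: a list of steps,
# each naming a function, the indices of earlier steps it depends on,
# and its textual inputs. The last step's output is the answer.

def run_program(program, functions):
    outputs = []
    for step in program:
        deps = [outputs[i] for i in step["dependencies"]]
        outputs.append(functions[step["function"]](deps, step["inputs"]))
    return outputs[-1]

# Toy function inventory (assumed for illustration only).
functions = {
    "Find": lambda deps, inputs: {"name": inputs[0]},
    "Relate": lambda deps, inputs: [f'{deps[0]["name"]}:{inputs[0]}'],
    "Count": lambda deps, inputs: len(deps[0]),
}

program = [
    {"function": "Find", "dependencies": [], "inputs": ["LeBron James"]},
    {"function": "Relate", "dependencies": [0], "inputs": ["teammate"]},
    {"function": "Count", "dependencies": [1], "inputs": []},
]

print(run_program(program, functions))
```

In the real executor, each function would instead query the knowledge graph described by `kb.json`.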
### How to submit results of test set
You need to predict answers for all questions of the test set and write them to a text file **in order**, one per line.
Here is an example:
```
Tron: Legacy
Palm Beach County
1937-03-01
The Queen
...
```
Then you need to send the prediction file to us by email <caosl19@mails.tsinghua.edu.cn>, we will reply to you with the performance as soon as possible.
To appear on the leaderboard, you also need to provide the following information:
- model name
- affiliation
- open-ended or multiple-choice
- whether use the supervision of SPARQL in your model or not
- whether use the supervision of program in your model or not
- single model or ensemble model
- (optional) paper link
- (optional) code link
### Licensing Information
MIT License
### Citation Information
If you find our dataset is helpful in your work, please cite us by
```
@inproceedings{KQAPro,
title={{KQA P}ro: A Large Diagnostic Dataset for Complex Question Answering over Knowledge Base},
author={Cao, Shulin and Shi, Jiaxin and Pan, Liangming and Nie, Lunyiu and Xiang, Yutong and Hou, Lei and Li, Juanzi and He, Bin and Zhang, Hanwang},
booktitle={ACL'22},
year={2022}
}
```
### Contributions
Thanks to [@happen2me](https://github.com/happen2me) for adding this dataset.
|
SickBoy | null | @article{Jaume2019FUNSDAD,
title={FUNSD: A Dataset for Form Understanding in Noisy Scanned Documents},
author={Guillaume Jaume and H. K. Ekenel and J. Thiran},
journal={2019 International Conference on Document Analysis and Recognition Workshops (ICDARW)},
year={2019},
volume={2},
pages={1-6}
} | https://guillaumejaume.github.io/FUNSD/ | false | 2 | false | SickBoy/prueba_dataset_layoutlm | 2022-10-20T22:42:12.000Z | null | false | a4bfdad07e72023d4faa228dee434671560fa723 | [] | [
"license:openrail"
] | https://huggingface.co/datasets/SickBoy/prueba_dataset_layoutlm/resolve/main/README.md | ---
license: openrail
---
|
jdimos8 | null | null | null | false | null | false | jdimos8/french_admin | 2022-10-20T22:36:27.000Z | null | false | b7b579483fa3e773c1a14fd4c56452d1f7e0216f | [] | [] | https://huggingface.co/datasets/jdimos8/french_admin/resolve/main/README.md | |
iejMac | null | null | null | false | null | false | iejMac/CLIP-DiDeMo | 2022-10-21T00:14:25.000Z | null | false | 7a73e990a66bbccce114fadf1b20cb911c85079e | [] | [
"license:mit"
] | https://huggingface.co/datasets/iejMac/CLIP-DiDeMo/resolve/main/README.md | ---
license: mit
---
|
huashen218 | null | null | null | false | null | false | huashen218/convxai-cia-dataset | 2022-10-21T00:29:10.000Z | null | false | 51984dd3e03d28441c5a87213f6606489d2c8878 | [] | [
"license:afl-3.0"
] | https://huggingface.co/datasets/huashen218/convxai-cia-dataset/resolve/main/README.md | ---
license: afl-3.0
---
|
trunster | null | null | null | false | null | false | trunster/zilvy | 2022-10-21T01:01:02.000Z | null | false | b19e1800bc69d477a6cd517a03017d01c0030e00 | [] | [] | https://huggingface.co/datasets/trunster/zilvy/resolve/main/README.md | |
anhdungitvn | null | null | null | false | 1 | false | anhdungitvn/sccr | 2022-10-21T03:39:41.000Z | null | false | dc6044224ca6348df633d07d3079ae8795333de1 | [] | [
"license:apache-2.0"
] | https://huggingface.co/datasets/anhdungitvn/sccr/resolve/main/README.md | ---
license: apache-2.0
---
```python
from datasets import load_dataset
data_name = "anhdungitvn/sccr"
data_files = {"train": "train.tsv", "eval": "eval.tsv"}
sccr = load_dataset(data_name, data_files=data_files)
sccr
```
```python
DatasetDict({
train: Dataset({
features: ['text', 'labels'],
num_rows: 14478
})
eval: Dataset({
features: ['text', 'labels'],
num_rows: 1609
})
})
```
### References
- <a href="https://www.aivivn.com/contests/6">SC: Sentiment Classification (Phân loại sắc thái bình luận)</a>
|
autoevaluate | null | null | null | false | 7 | false | autoevaluate/autoeval-eval-SpaceDoge__dataset_test_1-SpaceDoge__dataset_test_1-a8c4b7-1826662823 | 2022-10-21T03:39:36.000Z | null | false | 962f9d70b3fcfd790d3f512d857ec8fa0547fd16 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:SpaceDoge/dataset_test_1"
] | https://huggingface.co/datasets/autoevaluate/autoeval-eval-SpaceDoge__dataset_test_1-SpaceDoge__dataset_test_1-a8c4b7-1826662823/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- SpaceDoge/dataset_test_1
eval_info:
task: text_zero_shot_classification
model: inverse-scaling/opt-1.3b_eval
metrics: []
dataset_name: SpaceDoge/dataset_test_1
dataset_config: SpaceDoge--dataset_test_1
dataset_split: test
col_mapping:
text: prompt
classes: classes
target: answer_index
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: inverse-scaling/opt-1.3b_eval
* Dataset: SpaceDoge/dataset_test_1
* Config: SpaceDoge--dataset_test_1
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@SpaceDoge](https://huggingface.co/SpaceDoge) for evaluating this model. |
autoevaluate | null | null | null | false | null | false | autoevaluate/autoeval-eval-SpaceDoge__dataset_test_1-SpaceDoge__dataset_test_1-a8c4b7-1826662822 | 2022-10-21T03:37:58.000Z | null | false | 0a3222fdc8e5964048ffe5c1476f791863b42169 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:SpaceDoge/dataset_test_1"
] | https://huggingface.co/datasets/autoevaluate/autoeval-eval-SpaceDoge__dataset_test_1-SpaceDoge__dataset_test_1-a8c4b7-1826662822/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- SpaceDoge/dataset_test_1
eval_info:
task: text_zero_shot_classification
model: inverse-scaling/opt-350m_eval
metrics: []
dataset_name: SpaceDoge/dataset_test_1
dataset_config: SpaceDoge--dataset_test_1
dataset_split: test
col_mapping:
text: prompt
classes: classes
target: answer_index
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: inverse-scaling/opt-350m_eval
* Dataset: SpaceDoge/dataset_test_1
* Config: SpaceDoge--dataset_test_1
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@SpaceDoge](https://huggingface.co/SpaceDoge) for evaluating this model. |
autoevaluate | null | null | null | false | null | false | autoevaluate/autoeval-eval-SpaceDoge__dataset_test_1-SpaceDoge__dataset_test_1-a8c4b7-1826662824 | 2022-10-21T03:41:41.000Z | null | false | 6645a4a439b651250e7aec5e5678fa0bf04e693a | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:SpaceDoge/dataset_test_1"
] | https://huggingface.co/datasets/autoevaluate/autoeval-eval-SpaceDoge__dataset_test_1-SpaceDoge__dataset_test_1-a8c4b7-1826662824/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- SpaceDoge/dataset_test_1
eval_info:
task: text_zero_shot_classification
model: inverse-scaling/opt-2.7b_eval
metrics: []
dataset_name: SpaceDoge/dataset_test_1
dataset_config: SpaceDoge--dataset_test_1
dataset_split: test
col_mapping:
text: prompt
classes: classes
target: answer_index
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: inverse-scaling/opt-2.7b_eval
* Dataset: SpaceDoge/dataset_test_1
* Config: SpaceDoge--dataset_test_1
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@SpaceDoge](https://huggingface.co/SpaceDoge) for evaluating this model. |
bizjay | null | null | null | false | null | false | bizjay/DataTest | 2022-10-28T10:43:44.000Z | null | false | cf94a914f6428bf55eb50afe92de3460dcdecfb1 | [] | [] | https://huggingface.co/datasets/bizjay/DataTest/resolve/main/README.md | ---
license: unknown
multilinguality:
- monolingual
---
This is dummy data |
lcw99 | null | null | null | false | 536 | false | lcw99/oscar-ko-only | 2022-10-21T05:52:05.000Z | null | false | 18112c5f65fe4c2593104cbc0850e2a7737cc41f | [] | [
"language:ko"
] | https://huggingface.co/datasets/lcw99/oscar-ko-only/resolve/main/README.md | ---
language:
- ko
---
# oscar dataset only korean |
lcw99 | null | null | null | false | 26 | false | lcw99/cc100-ko-only | 2022-10-21T07:23:11.000Z | null | false | 56ede88fa531e775aa97d6f958c501207ceace7b | [] | [
"language:ko"
] | https://huggingface.co/datasets/lcw99/cc100-ko-only/resolve/main/README.md | ---
language:
- ko
---
# cc100 dataset Korean only |
Poupou | null | null | null | false | null | false | Poupou/Gitcoin-ODS-Hackhaton-GR15 | 2022-10-30T14:56:15.000Z | null | false | 9ee08c272b9686659e1faa515e73f2c3e0233f04 | [] | [
"annotations_creators:no-annotation",
"language:en",
"language_creators:expert-generated",
"license:mit",
"multilinguality:monolingual",
"size_categories:1M<n<10M",
"source_datasets:original",
"tags:Gitcoin",
"tags:Gitcoin Grants",
"tags:Sybil",
"tags:Sybil Slayers",
"tags:FDD",
"tags:Web3",
"tags:Public Goods",
"tags:Fraud Detection",
"tags:DAO",
"tags:Ethereum",
"tags:Polygon",
"task_categories:feature-extraction"
] | https://huggingface.co/datasets/Poupou/Gitcoin-ODS-Hackhaton-GR15/resolve/main/README.md | ---
annotations_creators:
- no-annotation
language:
- en
language_creators:
- expert-generated
license:
- mit
multilinguality:
- monolingual
pretty_name: Gitcoin FDD Open Data Science Hackathon GR15
size_categories:
- 1M<n<10M
source_datasets:
- original
tags:
- Gitcoin
- Gitcoin Grants
- Sybil
- Sybil Slayers
- FDD
- Web3
- Public Goods
- Fraud Detection
- DAO
- Ethereum
- Polygon
task_categories:
- feature-extraction
task_ids: []
---
# Dataset Card for Gitcoin ODS Hackathon GR15
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Dataset Creation](#dataset-creation)
- [Source Data](#source-data)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://gitcoin.co/issue/29389
- **Repository:** https://github.com/poupou-web3/GC-ODS-Sybil
- **Point of Contact:** https://discord.com/channels/562828676480237578/1024788324826763284
### Dataset Summary
This data set was created in the context of the first [Gitcoin Open Data Science Hackathon](https://go.gitcoin.co/blog/open-data-science-hackathon).
It contains all transactions on the Ethereum and Polygon chains made by the wallets that contributed to Grant 15 (GR15) of the Gitcoin grants program.
It was created in order to find patterns in the transactions of potential Sybil attackers by exploring their on-chain activity.
## Dataset Creation
### Source Data
The wallet address from grant 15 was extracted from the data put together by the Gitcoin DAO. [GR_15_DATA](https://drive.google.com/drive/folders/17OdrV7SA0I56aDMwqxB6jMwoY3tjSf5w)
The data was produced using the [Etherscan API](https://etherscan.io/) and the [PolygonScan API](https://polygonscan.com/), with the scripts available in the [repo](https://github.com/poupou-web3/GC-ODS-Sybil).
An address that contributed to [GR_15_DATA](https://drive.google.com/drive/folders/17OdrV7SA0I56aDMwqxB6jMwoY3tjSf5w) but has no transactions found on a chain will not appear in the gathered data.
**Careful: the transaction data only contains "normal" transactions, as defined by the API providers.**
## Dataset Structure
### Data Instances
There are 4 CSV files.
- 2 for transactions: one for the Ethereum transactions and one for the Polygon transactions.
- 2 for features: one for the Ethereum transactions and one for the Polygon transactions.
### Data Fields
As provided by the [Etherscan API](https://etherscan.io/) and [PolygonScan API](https://polygonscan.com/).
A column address was added for easier manipulation and to have all the transactions of all addresses in the same file.
This is an unsupervised machine-learning task, so there is no target column.
Most of the features were extracted using [tsfresh](https://tsfresh.readthedocs.io/en/latest/); the code in the GitHub [repo](https://github.com/poupou-web3/GC-ODS-Sybil) reproduces the extraction from the two transaction CSVs. Columns are named by tsfresh, and each feature's detailed definition can be found in its documentation. The following features are not covered by tsfresh:
- countUniqueInteracted : Count the number of unique addresses with which the wallet address has interacted.
- countTx: The total number of transactions
- ratioUniqueInteracted : countUniqueInteracted / countTx
- outgoing: Number of outgoing transactions
- outgoingRatio : outgoing / countTx
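As a rough sketch, the hand-crafted features above can be computed directly from the transaction rows. The record layout and the `address`/`from`/`to` field names below are illustrative assumptions, not necessarily the exact API column names:

```python
# Toy "normal" transactions shaped like an Etherscan/PolygonScan export:
# each record keeps the sender, the receiver, and the wallet (`address`)
# whose history it belongs to.
transactions = [
    {"address": "0xA", "from": "0xA", "to": "0xC"},
    {"address": "0xA", "from": "0xC", "to": "0xA"},
    {"address": "0xA", "from": "0xA", "to": "0xD"},
    {"address": "0xB", "from": "0xB", "to": "0xE"},
]

def wallet_features(txs, wallet):
    own = [t for t in txs if t["address"] == wallet]
    count_tx = len(own)
    outgoing = sum(1 for t in own if t["from"] == wallet)
    # Unique counterparties: every sender/receiver except the wallet itself.
    interacted = {t["from"] for t in own} | {t["to"] for t in own}
    interacted.discard(wallet)
    return {
        "countTx": count_tx,
        "countUniqueInteracted": len(interacted),
        "ratioUniqueInteracted": len(interacted) / count_tx,
        "outgoing": outgoing,
        "outgoingRatio": outgoing / count_tx,
    }

print(wallet_features(transactions, "0xA"))
```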
## Considerations for Using the Data
### Social Impact of Dataset
The data set may help with fraud detection and the defence of public goods funding.
## Additional Information
### Licensing Information
MIT
### Citation Information
Please cite this data set if you use it, especially in the hackathon context.
### Contributions
Thanks to [@poupou-web3](https://github.com/poupou-web3) for adding this dataset. |
relbert | null | @inproceedings{jurgens-etal-2012-semeval,
title = "{S}em{E}val-2012 Task 2: Measuring Degrees of Relational Similarity",
author = "Jurgens, David and
Mohammad, Saif and
Turney, Peter and
Holyoak, Keith",
booktitle = "*{SEM} 2012: The First Joint Conference on Lexical and Computational Semantics {--} Volume 1: Proceedings of the main conference and the shared task, and Volume 2: Proceedings of the Sixth International Workshop on Semantic Evaluation ({S}em{E}val 2012)",
month = "7-8 " # jun,
year = "2012",
address = "Montr{\'e}al, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/S12-1047",
pages = "356--364",
} | [SemEVAL 2012 task 2: Relational Similarity](https://aclanthology.org/S12-1047/) | false | 42 | false | relbert/semeval2012_relational_similarity_v5 | 2022-10-21T10:29:48.000Z | null | false | 3c84296545ff027b36f6d99d921aeb4b48e9ceb1 | [] | [
"language:en",
"license:other",
"multilinguality:monolingual",
"size_categories:1K<n<10K"
] | https://huggingface.co/datasets/relbert/semeval2012_relational_similarity_v5/resolve/main/README.md | ---
language:
- en
license:
- other
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
pretty_name: SemEval2012 task 2 Relational Similarity
---
# Dataset Card for "relbert/semeval2012_relational_similarity_v5"
## Dataset Description
- **Repository:** [RelBERT](https://github.com/asahi417/relbert)
- **Paper:** [https://aclanthology.org/S12-1047/](https://aclanthology.org/S12-1047/)
- **Dataset:** SemEval2012: Relational Similarity
### Dataset Summary
***IMPORTANT***: This is the same dataset as [relbert/semeval2012_relational_similarity](https://huggingface.co/datasets/relbert/semeval2012_relational_similarity),
but with a different dataset construction.
Relational similarity dataset from [SemEval2012 task 2](https://aclanthology.org/S12-1047/), compiled to fine-tune [RelBERT](https://github.com/asahi417/relbert) model.
The dataset contains lists of positive and negative word pairs from 89 pre-defined relations.
The relation types are constructed on top of the following 10 parent relation types.
```shell
{
1: "Class Inclusion", # Hypernym
2: "Part-Whole", # Meronym, Substance Meronym
3: "Similar", # Synonym, Co-hypornym
4: "Contrast", # Antonym
5: "Attribute", # Attribute, Event
6: "Non Attribute",
7: "Case Relation",
8: "Cause-Purpose",
9: "Space-Time",
10: "Representation"
}
```
Each of the parent relations is further divided into child relation types, whose definitions can be found [here](https://drive.google.com/file/d/0BzcZKTSeYL8VenY0QkVpZVpxYnc/view?resourcekey=0-ZP-UARfJj39PcLroibHPHw).
## Dataset Structure
### Data Instances
An example of `train` looks as follows.
```
{
'relation_type': '8d',
'positives': [ [ "breathe", "live" ], [ "study", "learn" ], [ "speak", "communicate" ], ... ]
'negatives': [ [ "starving", "hungry" ], [ "clean", "bathe" ], [ "hungry", "starving" ], ... ]
}
```
### Data Splits
| name |train|validation|
|---------|----:|---------:|
|semeval2012_relational_similarity| 89 | 89|
### Number of Positive/Negative Word-pairs in each Split
| | positives | negatives |
|:------------------------------------------|------------:|------------:|
| ('1', 'parent', 'train') | 110 | 680 |
| ('10', 'parent', 'train') | 60 | 730 |
| ('10a', 'child', 'train') | 10 | 1655 |
| ('10a', 'child_prototypical', 'train') | 123 | 2438 |
| ('10b', 'child', 'train') | 10 | 1656 |
| ('10b', 'child_prototypical', 'train') | 117 | 2027 |
| ('10c', 'child', 'train') | 10 | 1658 |
| ('10c', 'child_prototypical', 'train') | 105 | 2030 |
| ('10d', 'child', 'train') | 10 | 1659 |
| ('10d', 'child_prototypical', 'train') | 99 | 1766 |
| ('10e', 'child', 'train') | 10 | 1661 |
| ('10e', 'child_prototypical', 'train') | 87 | 1118 |
| ('10f', 'child', 'train') | 10 | 1659 |
| ('10f', 'child_prototypical', 'train') | 99 | 1766 |
| ('1a', 'child', 'train') | 10 | 1655 |
| ('1a', 'child_prototypical', 'train') | 123 | 2192 |
| ('1b', 'child', 'train') | 10 | 1655 |
| ('1b', 'child_prototypical', 'train') | 123 | 2192 |
| ('1c', 'child', 'train') | 10 | 1658 |
| ('1c', 'child_prototypical', 'train') | 105 | 2030 |
| ('1d', 'child', 'train') | 10 | 1653 |
| ('1d', 'child_prototypical', 'train') | 135 | 2540 |
| ('1e', 'child', 'train') | 10 | 1661 |
| ('1e', 'child_prototypical', 'train') | 87 | 1031 |
| ('2', 'parent', 'train') | 100 | 690 |
| ('2a', 'child', 'train') | 10 | 1654 |
| ('2a', 'child_prototypical', 'train') | 129 | 2621 |
| ('2b', 'child', 'train') | 10 | 1658 |
| ('2b', 'child_prototypical', 'train') | 105 | 1610 |
| ('2c', 'child', 'train') | 10 | 1656 |
| ('2c', 'child_prototypical', 'train') | 117 | 2144 |
| ('2d', 'child', 'train') | 10 | 1659 |
| ('2d', 'child_prototypical', 'train') | 99 | 1667 |
| ('2e', 'child', 'train') | 10 | 1658 |
| ('2e', 'child_prototypical', 'train') | 105 | 1925 |
| ('2f', 'child', 'train') | 10 | 1658 |
| ('2f', 'child_prototypical', 'train') | 105 | 2240 |
| ('2g', 'child', 'train') | 10 | 1653 |
| ('2g', 'child_prototypical', 'train') | 135 | 2405 |
| ('2h', 'child', 'train') | 10 | 1658 |
| ('2h', 'child_prototypical', 'train') | 105 | 1925 |
| ('2i', 'child', 'train') | 10 | 1660 |
| ('2i', 'child_prototypical', 'train') | 93 | 1706 |
| ('2j', 'child', 'train') | 10 | 1659 |
| ('2j', 'child_prototypical', 'train') | 99 | 1964 |
| ('3', 'parent', 'train') | 80 | 710 |
| ('3a', 'child', 'train') | 10 | 1658 |
| ('3a', 'child_prototypical', 'train') | 105 | 1925 |
| ('3b', 'child', 'train') | 10 | 1658 |
| ('3b', 'child_prototypical', 'train') | 105 | 2240 |
| ('3c', 'child', 'train') | 10 | 1657 |
| ('3c', 'child_prototypical', 'train') | 111 | 1979 |
| ('3d', 'child', 'train') | 10 | 1655 |
| ('3d', 'child_prototypical', 'train') | 123 | 2315 |
| ('3e', 'child', 'train') | 10 | 1664 |
| ('3e', 'child_prototypical', 'train') | 69 | 1268 |
| ('3f', 'child', 'train') | 10 | 1658 |
| ('3f', 'child_prototypical', 'train') | 105 | 2345 |
| ('3g', 'child', 'train') | 10 | 1663 |
| ('3g', 'child_prototypical', 'train') | 75 | 1340 |
| ('3h', 'child', 'train') | 10 | 1659 |
| ('3h', 'child_prototypical', 'train') | 99 | 1964 |
| ('4', 'parent', 'train') | 80 | 710 |
| ('4a', 'child', 'train') | 10 | 1658 |
| ('4a', 'child_prototypical', 'train') | 105 | 2240 |
| ('4b', 'child', 'train') | 10 | 1662 |
| ('4b', 'child_prototypical', 'train') | 81 | 1163 |
| ('4c', 'child', 'train') | 10 | 1657 |
| ('4c', 'child_prototypical', 'train') | 111 | 2201 |
| ('4d', 'child', 'train') | 10 | 1665 |
| ('4d', 'child_prototypical', 'train') | 63 | 749 |
| ('4e', 'child', 'train') | 10 | 1657 |
| ('4e', 'child_prototypical', 'train') | 111 | 2423 |
| ('4f', 'child', 'train') | 10 | 1660 |
| ('4f', 'child_prototypical', 'train') | 93 | 1892 |
| ('4g', 'child', 'train') | 10 | 1654 |
| ('4g', 'child_prototypical', 'train') | 129 | 2492 |
| ('4h', 'child', 'train') | 10 | 1657 |
| ('4h', 'child_prototypical', 'train') | 111 | 2312 |
| ('5', 'parent', 'train') | 90 | 700 |
| ('5a', 'child', 'train') | 10 | 1655 |
| ('5a', 'child_prototypical', 'train') | 123 | 2315 |
| ('5b', 'child', 'train') | 10 | 1661 |
| ('5b', 'child_prototypical', 'train') | 87 | 1640 |
| ('5c', 'child', 'train') | 10 | 1658 |
| ('5c', 'child_prototypical', 'train') | 105 | 1925 |
| ('5d', 'child', 'train') | 10 | 1654 |
| ('5d', 'child_prototypical', 'train') | 129 | 2363 |
| ('5e', 'child', 'train') | 10 | 1661 |
| ('5e', 'child_prototypical', 'train') | 87 | 1640 |
| ('5f', 'child', 'train') | 10 | 1658 |
| ('5f', 'child_prototypical', 'train') | 105 | 2135 |
| ('5g', 'child', 'train') | 10 | 1660 |
| ('5g', 'child_prototypical', 'train') | 93 | 1892 |
| ('5h', 'child', 'train') | 10 | 1654 |
| ('5h', 'child_prototypical', 'train') | 129 | 2750 |
| ('5i', 'child', 'train') | 10 | 1655 |
| ('5i', 'child_prototypical', 'train') | 123 | 2561 |
| ('6', 'parent', 'train') | 80 | 710 |
| ('6a', 'child', 'train') | 10 | 1654 |
| ('6a', 'child_prototypical', 'train') | 129 | 2492 |
| ('6b', 'child', 'train') | 10 | 1658 |
| ('6b', 'child_prototypical', 'train') | 105 | 2135 |
| ('6c', 'child', 'train') | 10 | 1656 |
| ('6c', 'child_prototypical', 'train') | 117 | 2495 |
| ('6d', 'child', 'train') | 10 | 1659 |
| ('6d', 'child_prototypical', 'train') | 99 | 2261 |
| ('6e', 'child', 'train') | 10 | 1658 |
| ('6e', 'child_prototypical', 'train') | 105 | 2135 |
| ('6f', 'child', 'train') | 10 | 1657 |
| ('6f', 'child_prototypical', 'train') | 111 | 2090 |
| ('6g', 'child', 'train') | 10 | 1657 |
| ('6g', 'child_prototypical', 'train') | 111 | 1979 |
| ('6h', 'child', 'train') | 10 | 1654 |
| ('6h', 'child_prototypical', 'train') | 129 | 2621 |
| ('7', 'parent', 'train') | 80 | 710 |
| ('7a', 'child', 'train') | 10 | 1655 |
| ('7a', 'child_prototypical', 'train') | 123 | 2561 |
| ('7b', 'child', 'train') | 10 | 1662 |
| ('7b', 'child_prototypical', 'train') | 81 | 1082 |
| ('7c', 'child', 'train') | 10 | 1658 |
| ('7c', 'child_prototypical', 'train') | 105 | 1715 |
| ('7d', 'child', 'train') | 10 | 1655 |
| ('7d', 'child_prototypical', 'train') | 123 | 2561 |
| ('7e', 'child', 'train') | 10 | 1659 |
| ('7e', 'child_prototypical', 'train') | 99 | 1568 |
| ('7f', 'child', 'train') | 10 | 1657 |
| ('7f', 'child_prototypical', 'train') | 111 | 1757 |
| ('7g', 'child', 'train') | 10 | 1660 |
| ('7g', 'child_prototypical', 'train') | 93 | 1148 |
| ('7h', 'child', 'train') | 10 | 1655 |
| ('7h', 'child_prototypical', 'train') | 123 | 1946 |
| ('8', 'parent', 'train') | 80 | 710 |
| ('8a', 'child', 'train') | 10 | 1655 |
| ('8a', 'child_prototypical', 'train') | 123 | 2192 |
| ('8b', 'child', 'train') | 10 | 1662 |
| ('8b', 'child_prototypical', 'train') | 81 | 1487 |
| ('8c', 'child', 'train') | 10 | 1657 |
| ('8c', 'child_prototypical', 'train') | 111 | 1757 |
| ('8d', 'child', 'train') | 10 | 1656 |
| ('8d', 'child_prototypical', 'train') | 117 | 1910 |
| ('8e', 'child', 'train') | 10 | 1658 |
| ('8e', 'child_prototypical', 'train') | 105 | 1610 |
| ('8f', 'child', 'train') | 10 | 1657 |
| ('8f', 'child_prototypical', 'train') | 111 | 1868 |
| ('8g', 'child', 'train') | 10 | 1662 |
| ('8g', 'child_prototypical', 'train') | 81 | 839 |
| ('8h', 'child', 'train') | 10 | 1655 |
| ('8h', 'child_prototypical', 'train') | 123 | 2315 |
| ('9', 'parent', 'train') | 90 | 700 |
| ('9a', 'child', 'train') | 10 | 1655 |
| ('9a', 'child_prototypical', 'train') | 123 | 1946 |
| ('9b', 'child', 'train') | 10 | 1657 |
| ('9b', 'child_prototypical', 'train') | 111 | 2090 |
| ('9c', 'child', 'train') | 10 | 1662 |
| ('9c', 'child_prototypical', 'train') | 81 | 596 |
| ('9d', 'child', 'train') | 10 | 1660 |
| ('9d', 'child_prototypical', 'train') | 93 | 1985 |
| ('9e', 'child', 'train') | 10 | 1661 |
| ('9e', 'child_prototypical', 'train') | 87 | 1901 |
| ('9f', 'child', 'train') | 10 | 1659 |
| ('9f', 'child_prototypical', 'train') | 99 | 1766 |
| ('9g', 'child', 'train') | 10 | 1655 |
| ('9g', 'child_prototypical', 'train') | 123 | 2069 |
| ('9h', 'child', 'train') | 10 | 1656 |
| ('9h', 'child_prototypical', 'train') | 117 | 2261 |
| ('9i', 'child', 'train') | 10 | 1660 |
| ('9i', 'child_prototypical', 'train') | 93 | 1613 |
| ('AtLocation', 'N/A', 'validation') | 960 | 4646 |
| ('CapableOf', 'N/A', 'validation') | 536 | 4734 |
| ('Causes', 'N/A', 'validation') | 194 | 4738 |
| ('CausesDesire', 'N/A', 'validation') | 40 | 4730 |
| ('CreatedBy', 'N/A', 'validation') | 4 | 3554 |
| ('DefinedAs', 'N/A', 'validation') | 4 | 1182 |
| ('Desires', 'N/A', 'validation') | 56 | 4732 |
| ('HasA', 'N/A', 'validation') | 168 | 4772 |
| ('HasFirstSubevent', 'N/A', 'validation') | 4 | 3554 |
| ('HasLastSubevent', 'N/A', 'validation') | 10 | 4732 |
| ('HasPrerequisite', 'N/A', 'validation') | 450 | 4744 |
| ('HasProperty', 'N/A', 'validation') | 266 | 4766 |
| ('HasSubevent', 'N/A', 'validation') | 330 | 4768 |
| ('IsA', 'N/A', 'validation') | 816 | 4688 |
| ('MadeOf', 'N/A', 'validation') | 48 | 4726 |
| ('MotivatedByGoal', 'N/A', 'validation') | 50 | 4736 |
| ('PartOf', 'N/A', 'validation') | 82 | 4742 |
| ('ReceivesAction', 'N/A', 'validation') | 52 | 4726 |
| ('SymbolOf', 'N/A', 'validation') | 4 | 1184 |
| ('UsedFor', 'N/A', 'validation') | 660 | 4760 |
### Citation Information
```
@inproceedings{jurgens-etal-2012-semeval,
title = "{S}em{E}val-2012 Task 2: Measuring Degrees of Relational Similarity",
author = "Jurgens, David and
Mohammad, Saif and
Turney, Peter and
Holyoak, Keith",
booktitle = "*{SEM} 2012: The First Joint Conference on Lexical and Computational Semantics {--} Volume 1: Proceedings of the main conference and the shared task, and Volume 2: Proceedings of the Sixth International Workshop on Semantic Evaluation ({S}em{E}val 2012)",
month = "7-8 " # jun,
year = "2012",
address = "Montr{\'e}al, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/S12-1047",
pages = "356--364",
}
``` |
polinaeterna | null | null | null | false | 3 | false | polinaeterna/smol | 2022-10-21T09:27:16.000Z | null | false | f24c2fdd646ac249a494d600e1d0c3f4dbfa3d46 | [] | [] | https://huggingface.co/datasets/polinaeterna/smol/resolve/main/README.md | ---
dataset_info:
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: test
num_bytes: 28
num_examples: 2
- name: train
num_bytes: 44
num_examples: 2
download_size: 1776
dataset_size: 72
---
# Dataset Card for "smol"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
che111 | null | null | null | false | 10 | false | che111/laion256 | 2022-10-21T13:52:40.000Z | null | false | f9c8169c018078936cab936ad7180570161b3e73 | [] | [
"license:openrail"
] | https://huggingface.co/datasets/che111/laion256/resolve/main/README.md | ---
license: openrail
---
|
ellabettison | null | null | null | false | 29 | false | ellabettison/processed_roberta_dataset_padded | 2022-10-21T19:04:37.000Z | null | false | ca0bd5a57affbcfe3d126b88792cc7f1d3da3f5b | [] | [] | https://huggingface.co/datasets/ellabettison/processed_roberta_dataset_padded/resolve/main/README.md | ---
dataset_info:
features:
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
- name: special_tokens_mask
sequence: int8
splits:
- name: test
num_bytes: 67291910.40480152
num_examples: 623004
- name: train
num_bytes: 269167425.5951985
num_examples: 2492014
download_size: 54543864
dataset_size: 336459336.0
---
# Dataset Card for "processed_roberta_dataset_padded"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
api19750904 | null | null | null | false | 1 | false | api19750904/News_bcn_sentiment | 2022-10-21T15:25:49.000Z | null | false | 9141dc1ea6a4efac822323572cb35247ae66c050 | [] | [] | https://huggingface.co/datasets/api19750904/News_bcn_sentiment/resolve/main/README.md | News on Barcelona en spanish media outlets |
api19750904 | null | null | null | false | 3 | false | api19750904/train_test_bcn | 2022-10-21T17:04:57.000Z | null | false | 24981646147a0f7eb53b576cefdce881f3227853 | [] | [] | https://huggingface.co/datasets/api19750904/train_test_bcn/resolve/main/README.md | |
autoevaluate | null | null | null | false | null | false | autoevaluate/autoeval-eval-phpthinh__data_2-default-112182-1832662968 | 2022-10-21T18:22:50.000Z | null | false | 0b2726b8a85a6eab027db75a1b71b7db8bd7faf2 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:phpthinh/data_2"
] | https://huggingface.co/datasets/autoevaluate/autoeval-eval-phpthinh__data_2-default-112182-1832662968/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- phpthinh/data_2
eval_info:
task: text_zero_shot_classification
model: bigscience/bloom-560m
metrics: []
dataset_name: phpthinh/data_2
dataset_config: default
dataset_split: test
col_mapping:
text: text
classes: classes
target: target
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: bigscience/bloom-560m
* Dataset: phpthinh/data_2
* Config: default
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@phpthinh](https://huggingface.co/phpthinh) for evaluating this model. |
autoevaluate | null | null | null | false | null | false | autoevaluate/autoeval-eval-phpthinh__data_1-default-4c0514-1832562967 | 2022-10-21T18:20:18.000Z | null | false | 44f14afa7b7eff6ed57c00c45d004d5ff2658a33 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:phpthinh/data_1"
] | https://huggingface.co/datasets/autoevaluate/autoeval-eval-phpthinh__data_1-default-4c0514-1832562967/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- phpthinh/data_1
eval_info:
task: text_zero_shot_classification
model: bigscience/bloom-560m
metrics: []
dataset_name: phpthinh/data_1
dataset_config: default
dataset_split: test
col_mapping:
text: text
classes: classes
target: target
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: bigscience/bloom-560m
* Dataset: phpthinh/data_1
* Config: default
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@phpthinh](https://huggingface.co/phpthinh) for evaluating this model. |
Nyanko138 | null | null | null | false | null | false | Nyanko138/img-trainset | 2022-11-11T06:09:07.000Z | null | false | 4c9be1e799edfc1600b1db549886a7b055fe4e0e | [] | [
"license:openrail"
] | https://huggingface.co/datasets/Nyanko138/img-trainset/resolve/main/README.md | ---
license: openrail
---
|
api19750904 | null | null | null | false | 3 | false | api19750904/noticias | 2022-10-21T17:51:40.000Z | null | false | 9254e5ea8b4d1d876458c1cb8290938a3fb77cd7 | [] | [] | https://huggingface.co/datasets/api19750904/noticias/resolve/main/README.md | |
freddyaboulton | null | null | null | false | null | false | freddyaboulton/space-metrics | 2022-10-21T19:26:34.000Z | null | false | 8a11750258a806047f77a59abbf4df2d74ca8d4c | [] | [
"license:mit"
] | https://huggingface.co/datasets/freddyaboulton/space-metrics/resolve/main/README.md | ---
license: mit
---
|
VKAgbesi | null | null | null | false | null | false | VKAgbesi/Ewe_News_Dataset | 2022-10-21T18:47:15.000Z | null | false | 5572b9894fbc50d2976bd894c872d0ac1f31a7a6 | [] | [] | https://huggingface.co/datasets/VKAgbesi/Ewe_News_Dataset/resolve/main/README.md | The Ewe news dataset contains 1,705,600 words, making 4264 different news articles. The articles are collected from different media portals in West Africa. After the collection process, the words are translated and further cross-checked by eight Ewe tutors in Ghana for efficient semantic representation and to prevent any duplication.
The dataset consists of six (6) different classes: coronavirus, local, business, sports, entertainment, and politics.
NOTE:
For more details on access to the Ewe news dataset, please contact via the following email:
Email : victoragbesivik@gmail.com or Email: vkagbesi@std.uestc.edu.cn |
arbml | null | null | null | false | null | false | arbml/CAYLOU | 2022-10-21T20:00:21.000Z | null | false | a324158f6379b6265690bd09b46147c3338be53a | [] | [] | https://huggingface.co/datasets/arbml/CAYLOU/resolve/main/README.md | ---
dataset_info:
features:
- name: Source
dtype: string
- name: Target
dtype: string
splits:
- name: train
num_bytes: 597877
num_examples: 5191
download_size: 170284
dataset_size: 597877
---
# Dataset Card for "CAYLOU"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
arbml | null | null | null | false | null | false | arbml/Arabic_Hate_Speech | 2022-10-21T20:22:02.000Z | null | false | 24a2ceacb185767e845fb1126a794f3de5e4ba7a | [] | [] | https://huggingface.co/datasets/arbml/Arabic_Hate_Speech/resolve/main/README.md | ---
dataset_info:
features:
- name: id
dtype: string
- name: tweet
dtype: string
- name: is_off
dtype: string
- name: is_hate
dtype: string
- name: is_vlg
dtype: string
- name: is_vio
dtype: string
splits:
- name: train
num_bytes: 1656540
num_examples: 8557
- name: validation
num_bytes: 234165
num_examples: 1266
download_size: 881261
dataset_size: 1890705
---
# Dataset Card for "Arabic_Hate_Speech"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
arbml | null | null | null | false | null | false | arbml/Author_Attribution_Tweets | 2022-10-21T20:26:29.000Z | null | false | e5324fb79212d18f0c10429254a54c639f25a03a | [] | [] | https://huggingface.co/datasets/arbml/Author_Attribution_Tweets/resolve/main/README.md | ---
dataset_info:
features:
- name: tweet
dtype: string
- name: author
dtype: string
splits:
- name: test
num_bytes: 2629687
num_examples: 13341
- name: train
num_bytes: 10441650
num_examples: 53198
download_size: 6482998
dataset_size: 13071337
---
# Dataset Card for "Author_Attribution_Tweets"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
arbml | null | null | null | false | null | false | arbml/DAWQAS | 2022-10-21T20:29:07.000Z | null | false | 768272dae01e6bfb26841a5389a7e0b5ec5b0aa0 | [] | [] | https://huggingface.co/datasets/arbml/DAWQAS/resolve/main/README.md | ---
dataset_info:
features:
- name: QID
dtype: string
- name: Site_id
dtype: string
- name: Question
dtype: string
- name: Answer
dtype: string
- name: Answer1
dtype: string
- name: Answer2
dtype: string
- name: Answer3
dtype: string
- name: Answer4
dtype: string
- name: Answer5
dtype: string
- name: Answer6
dtype: string
- name: Answer7
dtype: string
- name: Answer8
dtype: string
- name: Answer9
dtype: string
- name: Answer10
dtype: string
- name: Answer11
dtype: string
- name: Original_Category
dtype: string
- name: Author
dtype: string
- name: Date
dtype: string
- name: Site
dtype: string
- name: Year
dtype: string
splits:
- name: train
num_bytes: 22437661
num_examples: 3209
download_size: 10844359
dataset_size: 22437661
---
# Dataset Card for "DAWQAS"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
jbluew | null | null | null | false | null | false | jbluew/diffuser_px | 2022-10-21T21:19:06.000Z | null | false | 2f5f8febaa4b80438e0bf1666800bc4d13c58343 | [] | [
"license:openrail"
] | https://huggingface.co/datasets/jbluew/diffuser_px/resolve/main/README.md | ---
license: openrail
---
|
argosopentech | null | null | null | false | 1 | false | argosopentech/libretranslate-communityds | 2022-10-21T22:40:45.000Z | null | false | 45d52638e7c1582648dc4522dcf6f16bff05e749 | [] | [] | https://huggingface.co/datasets/argosopentech/libretranslate-communityds/resolve/main/README.md | # Community Dataset
Community suggestions to improve machine translations
https://github.com/LibreTranslate/CommunityDS
https://libretranslate.com/
1653250371.jsonl
```
{"q": "انا احبك يا امي ", "s": "Is breá liom tú, Mam.ggc", "source": "ar", "target": "ga"}
{"q": "plump", "s": "montok", "source": "en", "target": "id"}
{"q": "iron out", "s": "loswerden", "source": "en", "target": "de"}
```
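For illustration (a sketch, not part of the LibreTranslate tooling), each line of the file is a standalone JSON object, so it can be parsed line by line; the field meanings assumed here (`q` = source text, `s` = suggested translation, `source`/`target` = language codes) are inferred from the samples above:

```python
import json

# Two of the sample lines shown above, standing in for the .jsonl file.
SAMPLE = """\
{"q": "plump", "s": "montok", "source": "en", "target": "id"}
{"q": "iron out", "s": "loswerden", "source": "en", "target": "de"}"""

def load_suggestions(text):
    """Parse JSON-Lines suggestions and group them by (source, target) pair."""
    by_pair = {}
    for line in text.splitlines():
        rec = json.loads(line)
        by_pair.setdefault((rec["source"], rec["target"]), []).append(rec)
    return by_pair

suggestions = load_suggestions(SAMPLE)
```

Reading a real dump would only replace `SAMPLE` with the file contents.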
|
toloka | null | null | null | false | null | false | toloka/WSDMCup2023 | 2022-10-21T22:50:08.000Z | null | false | 3edd56030cc6472918277a53c0c108eeb6fec5ec | [] | [
"license:cc-by-4.0"
] | https://huggingface.co/datasets/toloka/WSDMCup2023/resolve/main/README.md | ---
license: cc-by-4.0
---
|
arbml | null | null | null | false | null | false | arbml/L_HSAB | 2022-10-21T23:20:09.000Z | null | false | 8ece234cf7f61947b738f708fbeedd29b3e7bc78 | [] | [] | https://huggingface.co/datasets/arbml/L_HSAB/resolve/main/README.md | ---
dataset_info:
features:
- name: Tweet
dtype: string
- name: label
dtype:
class_label:
names:
0: null
1: abusive
2: hate
3: normal
splits:
- name: train
num_bytes: 1352345
num_examples: 5846
download_size: 566158
dataset_size: 1352345
---
# Dataset Card for "L_HSAB"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
arbml | null | null | null | false | null | false | arbml/AraSenti_Lexicon | 2022-10-21T23:26:24.000Z | null | false | 6468c6249f8cf2dc9fd1047a7a33cfcdf164f056 | [] | [] | https://huggingface.co/datasets/arbml/AraSenti_Lexicon/resolve/main/README.md | ---
dataset_info:
features:
- name: Term
dtype: string
- name: Sentiment
dtype: string
splits:
- name: train
num_bytes: 6556665
num_examples: 225329
download_size: 2464254
dataset_size: 6556665
---
# Dataset Card for "AraSenti_Lexicon"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
arbml | null | null | null | false | 6 | false | arbml/AraFacts | 2022-10-21T23:35:54.000Z | null | false | a33b6be24c79b6df9b472ee1bbc57bf9a40b4917 | [] | [] | https://huggingface.co/datasets/arbml/AraFacts/resolve/main/README.md | ---
dataset_info:
features:
- name: ClaimID
dtype: string
- name: claim
dtype: string
- name: description
dtype: string
- name: source
dtype: string
- name: date
dtype: string
- name: source_label
dtype: string
- name: normalized_label
dtype: string
- name: source_category
dtype: string
- name: normalized_category
dtype: string
- name: source_url
dtype: string
- name: claim_urls
dtype: string
- name: evidence_urls
dtype: string
- name: claim_type
dtype: string
splits:
- name: train
num_bytes: 13201528
num_examples: 6222
download_size: 5719822
dataset_size: 13201528
---
# Dataset Card for "AraFacts"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
arbml | null | null | null | false | null | false | arbml/BBN_Blog_Posts | 2022-10-21T23:43:31.000Z | null | false | c7851e3c61d936d8f892fca61b428f2b0f2b01ce | [] | [] | https://huggingface.co/datasets/arbml/BBN_Blog_Posts/resolve/main/README.md | ---
dataset_info:
features:
- name: Arabic_text
dtype: string
- name: ar:manual_sentiment
dtype: string
- name: ar:manual_confidence
dtype: string
splits:
- name: train
num_bytes: 145550
num_examples: 1200
download_size: 76441
dataset_size: 145550
---
# Dataset Card for "BBN_Blog_Posts"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
karbazhyev | null | null | null | false | null | false | karbazhyev/test | 2022-10-21T23:44:34.000Z | null | false | 8d767414b5ff632186ae2f6098e095fe29fb5856 | [] | [
"license:apache-2.0"
] | https://huggingface.co/datasets/karbazhyev/test/resolve/main/README.md | ---
license: apache-2.0
---
|
kargaranamir | null | null | null | false | 3 | false | kargaranamir/HengamCorpus | 2022-10-22T00:39:45.000Z | null | false | 3a9e155509345d4c6a68a373aa514be6f906002c | [] | [
"license:mit"
] | https://huggingface.co/datasets/kargaranamir/HengamCorpus/resolve/main/README.md | ---
license: mit
---
|
api19750904 | null | null | null | false | 1 | false | api19750904/clean_news | 2022-10-22T05:14:01.000Z | null | false | ad92360b864060cf7fde58fb0861d686e02d3fd9 | [] | [] | https://huggingface.co/datasets/api19750904/clean_news/resolve/main/README.md | Clean News Spain |
SadNoodle | null | null | null | false | 1 | false | SadNoodle/ZUN_Faces | 2022-10-22T06:20:51.000Z | null | false | 25e6f97852cea4b6ededb5a1dd7c59c2eda4dbc8 | [] | [
"annotations_creators:found",
"language_creators:found",
"license:cc",
"multilinguality:monolingual",
"size_categories:n<1K",
"source_datasets:original",
"tags:ZUN",
"tags:anime",
"tags:touhou",
"task_categories:image-to-image"
] | https://huggingface.co/datasets/SadNoodle/ZUN_Faces/resolve/main/README.md | ---
annotations_creators:
- found
language: []
language_creators:
- found
license:
- cc
multilinguality:
- monolingual
pretty_name: Faces with ZUN styles.
size_categories:
- n<1K
source_datasets:
- original
tags:
- ZUN
- anime
- touhou
task_categories:
- image-to-image
task_ids: []
--- |
faruk | null | null | null | false | null | false | faruk/bengali-names-vs-gender | 2022-10-22T07:48:50.000Z | null | false | ca5782a21d5111b3c8a8d0046c70c5e490bb3b02 | [] | [
"doi:10.57967/hf/0053",
"license:afl-3.0"
] | https://huggingface.co/datasets/faruk/bengali-names-vs-gender/resolve/main/README.md | ---
license: afl-3.0
---
# Bengali Female VS Male Names Dataset
An NLP dataset containing 2,030 samples of Bengali names and their corresponding gender, covering both female and male names. This small, simple toy dataset can be used by NLP beginners to practice sequence classification and related problems such as gender recognition from names.
# Background
In Bengali, a person's name depends largely on their gender. Female names typically end with suffixes such as "A", "I", "EE" ["আ", "ই", "ঈ"], while male names differ significantly from female names in phoneme patterns and ending suffixes. In my observation, there is a significant possibility that this difference in patterns can be used for gender classification based on names.
Find the full documentation here:
[Documentation and dataset specifications](https://github.com/faruk-ahmad/bengali-female-vs-male-names)
## Dataset Format
The dataset is in CSV format. There are two columns, namely:
1. Name
2. Gender
Each row has two attributes: the first is the name, the second is the gender. The name attribute is in ```utf-8``` encoding, and the gender attribute is encoded as 0 or 1:
| Gender | Label |
|---|---|
| male | 0 |
| female | 1 |
## Dataset Statistics
The number of samples per class is as below:
| Gender | Count |
|---|---|
| male | 1029 |
| female | 1001 |
## Possible Use Cases
1. Sequence Classification using RNN, LSTM etc
2. Sequence modeling using other type of machine learning algorithms
3. Gender recognition based on names
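As a minimal baseline sketch for use case 3 (not part of the dataset itself), the female-typical ending vowels described above can drive a rule-based predictor. The independent vowels (আ, ই, ঈ) come from the dataset description; the dependent vowel-sign forms (া, ি, ী) are added here as an assumption, since written names usually end with the sign forms:

```python
# Rule-based baseline: predict 1 (female) if the name ends with a
# female-typical vowel, else 0 (male), matching the label table above.
# Independent vowels are from the dataset description; the dependent
# sign forms are an added assumption.
FEMALE_ENDINGS = ("আ", "ই", "ঈ", "\u09be", "\u09bf", "\u09c0")  # া ি ী

def predict_gender(name: str) -> int:
    return 1 if name.strip().endswith(FEMALE_ENDINGS) else 0
```

Scoring this heuristic on the 2,030 labeled rows would give a simple baseline against which sequence models (RNN, LSTM, etc.) can be compared.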
## Disclaimer
The names were collected from the internet using sources such as Wikipedia, baby-name suggestion websites, and blogs. If someone's name appears in the dataset, that is entirely unintentional.
api19750904 | null | null | null | false | 1 | false | api19750904/news_stemm_es | 2022-10-22T08:03:18.000Z | null | false | 54d741401d7c2105f5e1a39b9c6669f22c49202e | [] | [] | https://huggingface.co/datasets/api19750904/news_stemm_es/resolve/main/README.md | News spanish media outlets |
ZongqianLi | null | null | null | false | 34 | false | ZongqianLi/Perovskite_Solar_Cells_Papers | 2022-10-22T10:35:04.000Z | null | false | 9b1bb527f8354eb490b82e11d30f3153c5c7dc49 | [] | [] | https://huggingface.co/datasets/ZongqianLi/Perovskite_Solar_Cells_Papers/resolve/main/README.md | ---
dataset_info:
features:
- name: abstract
dtype: string
- name: classification
dtype: string
- name: classification_value
dtype: int64
- name: doi
dtype: string
- name: journal
dtype: string
- name: paragraphs
dtype: string
- name: press
dtype: string
- name: title
dtype: string
splits:
- name: train
num_bytes: 45084513
num_examples: 1712
download_size: 22303808
dataset_size: 45084513
---
# Dataset Card for "Perovskite_Solar_Cells_Papers"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
ZongqianLi | null | null | null | false | 13 | false | ZongqianLi/Dye_Sensitized_Solar_Cells_Papers | 2022-10-22T10:36:13.000Z | null | false | 99f873f2acde05b88f0a911b1b2c30a50144f828 | [] | [] | https://huggingface.co/datasets/ZongqianLi/Dye_Sensitized_Solar_Cells_Papers/resolve/main/README.md | ---
dataset_info:
features:
- name: abstract
dtype: string
- name: classification
dtype: string
- name: classification_value
dtype: int64
- name: doi
dtype: string
- name: journal
dtype: string
- name: paragraphs
dtype: string
- name: press
dtype: string
- name: title
dtype: string
splits:
- name: train
num_bytes: 124717777
num_examples: 5334
download_size: 61814259
dataset_size: 124717777
---
# Dataset Card for "Dye_Sensitized_Solar_Cells_Papers"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
ZongqianLi | null | null | null | false | 8 | false | ZongqianLi/Perovskite_Solar_Cells_Papers_List | 2022-10-22T10:38:13.000Z | null | false | c45b4e47268a34dfe2a8e3829dcf0e8d8832431f | [] | [] | https://huggingface.co/datasets/ZongqianLi/Perovskite_Solar_Cells_Papers_List/resolve/main/README.md | ---
dataset_info:
features:
- name: abstract
dtype: string
- name: classification
dtype: string
- name: classification_value
dtype: int64
- name: doi
dtype: string
- name: journal
dtype: string
- name: paragraphs
sequence: string
- name: press
dtype: string
- name: title
dtype: string
splits:
- name: train
num_bytes: 45172780
num_examples: 1712
download_size: 22540228
dataset_size: 45172780
---
# Dataset Card for "Perovskite_Solar_Cells_Papers_List"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
ZongqianLi | null | null | null | false | null | false | ZongqianLi/Dye_Sensitized_Solar_Cells_Papers_List | 2022-10-22T10:39:15.000Z | null | false | c23d9ab8f24b34946fc288b8274ac91fb94dacd8 | [] | [] | https://huggingface.co/datasets/ZongqianLi/Dye_Sensitized_Solar_Cells_Papers_List/resolve/main/README.md | ---
dataset_info:
features:
- name: abstract
dtype: string
- name: classification
dtype: string
- name: classification_value
dtype: int64
- name: doi
dtype: string
- name: journal
dtype: string
- name: paragraphs
sequence: string
- name: press
dtype: string
- name: title
dtype: string
splits:
- name: train
num_bytes: 125015329
num_examples: 5334
download_size: 62491875
dataset_size: 125015329
---
# Dataset Card for "Dye_Sensitized_Solar_Cells_Papers_List"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Odiseo | null | null | null | false | null | false | Odiseo/odiseoface | 2022-10-22T12:36:45.000Z | null | false | 7570c229c711ba7df50ea606787c7646d6f5fd01 | [] | [
"license:artistic-2.0"
] | https://huggingface.co/datasets/Odiseo/odiseoface/resolve/main/README.md | ---
license: artistic-2.0
---
|
jafdxc | null | null | null | false | null | false | jafdxc/celeb-identities | 2022-10-22T14:44:10.000Z | null | false | a5546b26a14869e8be1edca41bf1636f178984c0 | [] | [] | https://huggingface.co/datasets/jafdxc/celeb-identities/resolve/main/README.md | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
0: clarkson
1: freeman
2: jackie_chan
3: jennifer
4: serena
splits:
- name: train
num_bytes: 1305982.0
num_examples: 13
download_size: 1306199
dataset_size: 1305982.0
---
# Dataset Card for "celeb-identities"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Nestor95 | null | null | null | false | null | false | Nestor95/ME | 2022-10-22T15:48:35.000Z | null | false | 700ecc573caf794dcc653c22ffb17432cb701b34 | [] | [
"license:openrail"
] | https://huggingface.co/datasets/Nestor95/ME/resolve/main/README.md | ---
license: openrail
---
|
orgbug | null | null | null | false | null | false | orgbug/test | 2022-10-22T16:19:48.000Z | null | false | 57e33c65203ff2d5f5eb159d13d62a4bb0990b76 | [] | [
"license:apache-2.0"
] | https://huggingface.co/datasets/orgbug/test/resolve/main/README.md | ---
license: apache-2.0
---
|
aimagic | null | null | null | false | null | false | aimagic/big5essay | 2022-10-22T19:59:47.000Z | null | false | 1db722b3b2cdac83d6d8af9439a366e745964015 | [] | [
"license:mit"
] | https://huggingface.co/datasets/aimagic/big5essay/resolve/main/README.md | ---
license: mit
---
|
nick-carroll1 | null | null | null | false | 87 | false | nick-carroll1/lyrics_dataset | 2022-10-23T17:56:11.000Z | null | false | 328ac75de85373f41365238b2c9cdf1163c4945c | [] | [] | https://huggingface.co/datasets/nick-carroll1/lyrics_dataset/resolve/main/README.md | ---
dataset_info:
features:
- name: Artist
dtype: string
- name: Song
dtype: string
- name: Lyrics
dtype: string
splits:
- name: train
num_bytes: 371464
num_examples: 237
download_size: 166829
dataset_size: 371464
---
# Dataset Card for "lyrics_dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
michellejieli | null | null | null | false | 13 | false | michellejieli/friends_dataset | 2022-10-23T13:21:12.000Z | null | false | c019b34c131cb6c4b5694f910961f72f6f147ba9 | [] | [
"language:en",
"tags:distilroberta",
"tags:sentiment",
"tags:emotion",
"tags:twitter",
"tags:reddit"
] | https://huggingface.co/datasets/michellejieli/friends_dataset/resolve/main/README.md | ---
language: "en"
tags:
- distilroberta
- sentiment
- emotion
- twitter
- reddit
---
# Dataset Card for friends_data
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
The Friends dataset consists of speech-based dialogue from the Friends TV sitcom. It is extracted from the [SocialNLP EmotionX 2019 challenge](https://sites.google.com/view/emotionx2019/datasets).
### Supported Tasks and Leaderboards
text-classification, sentiment-classification: The dataset is mainly used to predict a sentiment label given text input.
### Languages
The utterances are in English.
## Dataset Structure
### Data Instances
A data point containing text and the corresponding label.
An example from the friends_dataset looks like this:
{
'text': 'Well! Well! Well! Joey Tribbiani! So you came back huh?',
'label': 'surprise'
}
### Data Fields
The field includes a text column and a corresponding emotion label.
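Since the labels are plain strings, fine-tuning a classifier on this data typically starts by mapping them to integer ids. The following self-contained sketch shows one way to do that; the rows mirror the `{text, label}` format above, and the helper name `build_label_map` is illustrative, not part of the dataset:

```python
# Mini-batch in this dataset's {text, label} format (texts from the card's example).
examples = [
    {"text": "Well! Well! Well! Joey Tribbiani! So you came back huh?",
     "label": "surprise"},
    {"text": "My duties? All right.", "label": "surprise"},
]

def build_label_map(rows):
    # Assign a stable integer id to each distinct string label,
    # as typically needed when fine-tuning a sequence classifier.
    labels = sorted({row["label"] for row in rows})
    return {label: i for i, label in enumerate(labels)}

label2id = build_label_map(examples)
encoded = [(row["text"], label2id[row["label"]]) for row in examples]
```

With the full dataset loaded (e.g. via `datasets.load_dataset`), the same mapping would cover all emotion labels present in the `label` column.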
## Dataset Creation
### Curation Rationale
The dataset contains 1000 English-language dialogues originally in JSON files. The JSON file contains an array of dialogue objects. Each dialogue object is an array of line objects, and each line object contains speaker, utterance, emotion, and annotation strings.
{
"speaker": "Chandler",
"utterance": "My duties? All right.",
"emotion": "surprise",
"annotation": "2000030"
}
Utterance and emotion were extracted from the original files into a CSV file. The dataset was cleaned to remove non-neutral labels. This dataset was created to be used in fine-tuning an emotion sentiment classifier that can be useful to teach individuals with autism how to read facial expressions. |
TomTBT | null | null | The PMC Open Access Subset includes more than 3.4 million journal articles and preprints that are made available under
license terms that allow reuse.
Not all articles in PMC are available for text mining and other reuse, many have copyright protection, however articles
in the PMC Open Access Subset are made available under Creative Commons or similar licenses that generally allow more
liberal redistribution and reuse than a traditional copyrighted work.
The PMC Open Access Subset is one part of the PMC Article Datasets
This version focus on associating the graphics of figures with their captions | false | 17 | false | TomTBT/pmc_open_access_figure | 2022-11-01T13:19:19.000Z | null | false | 47569c1759a1babffbc55784252c8d5d31875993 | [] | [
"license:apache-2.0"
] | https://huggingface.co/datasets/TomTBT/pmc_open_access_figure/resolve/main/README.md | ---
license: apache-2.0
---
|
Roderich | null | null | null | false | null | false | Roderich/Elsa_prueba | 2022-10-22T22:25:31.000Z | null | false | 1ede12140e260ae57927006045ec50e7fdf4da4b | [] | [
"license:other"
] | https://huggingface.co/datasets/Roderich/Elsa_prueba/resolve/main/README.md | ---
license: other
---
|
Escalibur | null | null | null | false | null | false | Escalibur/realSergio | 2022-10-22T22:37:26.000Z | null | false | a1ca710081a0cf551d68e8fa2e58cb24016bce11 | [] | [
"license:unknown"
] | https://huggingface.co/datasets/Escalibur/realSergio/resolve/main/README.md | ---
license: unknown
---
|
mfigurski80 | null | null | null | false | 85 | false | mfigurski80/processed_narrative_relationship_dataset | 2022-11-01T01:00:16.000Z | null | false | fa56884038f5566930d101134cb74fc8912a92ee | [] | [] | https://huggingface.co/datasets/mfigurski80/processed_narrative_relationship_dataset/resolve/main/README.md | ---
dataset_info:
features:
- name: subject
dtype: string
- name: object
dtype: string
- name: dialogue
dtype: string
- name: pair_examples
dtype: int64
splits:
- name: test
num_bytes: 3410751.179531327
num_examples: 15798
- name: train
num_bytes: 13642788.820468673
num_examples: 63191
download_size: 9671733
dataset_size: 17053540.0
---
# Dataset Card for "processed_narrative_relationship_dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
nateraw | null | null | null | false | null | false | nateraw/misc | 2022-10-27T00:46:51.000Z | null | false | a4bcc1f51937cbae5ef5c13296bdec964afff653 | [] | [
"license:mit"
] | https://huggingface.co/datasets/nateraw/misc/resolve/main/README.md | ---
license: mit
---
|
autoevaluate | null | null | null | false | null | false | autoevaluate/autoeval-eval-squad-plain_text-be943f-1842563161 | 2022-10-23T02:42:33.000Z | null | false | a6895a95b21e1c435a01b40c6be3d7280a727f07 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:squad"
] | https://huggingface.co/datasets/autoevaluate/autoeval-eval-squad-plain_text-be943f-1842563161/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- squad
eval_info:
task: extractive_question_answering
model: Aiyshwariya/bert-finetuned-squad
metrics: ['squad', 'bertscore']
dataset_name: squad
dataset_config: plain_text
dataset_split: validation
col_mapping:
context: context
question: question
answers-text: answers.text
answers-answer_start: answers.answer_start
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Question Answering
* Model: Aiyshwariya/bert-finetuned-squad
* Dataset: squad
* Config: plain_text
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@jsfs11](https://huggingface.co/jsfs11) for evaluating this model. |
autoevaluate | null | null | null | false | null | false | autoevaluate/autoeval-eval-squad-plain_text-be943f-1842563162 | 2022-10-23T02:42:25.000Z | null | false | e8e49851544cde36cf86caec6e1e653e4cb56d42 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:squad"
] | https://huggingface.co/datasets/autoevaluate/autoeval-eval-squad-plain_text-be943f-1842563162/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- squad
eval_info:
task: extractive_question_answering
model: Neulvo/bert-finetuned-squad
metrics: ['squad', 'bertscore']
dataset_name: squad
dataset_config: plain_text
dataset_split: validation
col_mapping:
context: context
question: question
answers-text: answers.text
answers-answer_start: answers.answer_start
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Question Answering
* Model: Neulvo/bert-finetuned-squad
* Dataset: squad
* Config: plain_text
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@jsfs11](https://huggingface.co/jsfs11) for evaluating this model. |
autoevaluate | null | null | null | false | null | false | autoevaluate/autoeval-eval-squad-plain_text-be943f-1842563163 | 2022-10-23T02:42:15.000Z | null | false | 5da30b83882e79083ee59bd450c0ada0300a59d6 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:squad"
] | https://huggingface.co/datasets/autoevaluate/autoeval-eval-squad-plain_text-be943f-1842563163/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- squad
eval_info:
task: extractive_question_answering
model: 21iridescent/RoBERTa-base-finetuned-squad2-lwt
metrics: ['squad', 'bertscore']
dataset_name: squad
dataset_config: plain_text
dataset_split: validation
col_mapping:
context: context
question: question
answers-text: answers.text
answers-answer_start: answers.answer_start
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Question Answering
* Model: 21iridescent/RoBERTa-base-finetuned-squad2-lwt
* Dataset: squad
* Config: plain_text
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@jsfs11](https://huggingface.co/jsfs11) for evaluating this model. |
kayt | null | null | null | false | null | false | kayt/finetuning | 2022-10-23T05:26:18.000Z | null | false | 2a3bb2e3c5547c512306534192af06db7dc5d43b | [] | [] | https://huggingface.co/datasets/kayt/finetuning/resolve/main/README.md | |
huabin | null | null | null | false | null | false | huabin/momo | 2022-10-23T06:01:57.000Z | null | false | 946b87cd3fe02ce0c8827b865d0f3a0340f8066a | [] | [
"license:c-uda"
] | https://huggingface.co/datasets/huabin/momo/resolve/main/README.md | ---
license: c-uda
---
|
fourteenBDr | null | null | null | false | null | false | fourteenBDr/shiji | 2022-10-23T10:33:10.000Z | null | false | 6d4e61b584aec1e2d29f95d05baa037b84f23825 | [] | [
"license:apache-2.0"
] | https://huggingface.co/datasets/fourteenBDr/shiji/resolve/main/README.md | ---
license: apache-2.0
---
|
BridgeQZH | null | null | null | false | null | false | BridgeQZH/amagazine | 2022-10-29T20:56:57.000Z | null | false | b825fdf740a1d6820c02e06a1d8741005f858612 | [] | [
"license:openrail"
] | https://huggingface.co/datasets/BridgeQZH/amagazine/resolve/main/README.md | ---
license: openrail
---
|
P22 | null | null | null | false | null | false | P22/beta-flower | 2022-10-23T11:58:59.000Z | null | false | 567094ef0bf698519f811edb7bef6b629ec1beed | [] | [
"license:afl-3.0"
] | https://huggingface.co/datasets/P22/beta-flower/resolve/main/README.md | ---
license: afl-3.0
---
|
Rosenberg | null | null | null | false | 55 | false | Rosenberg/genia | 2022-10-23T12:08:03.000Z | null | false | d71cefadbbee8cdb4a2b09e9783de79ba3da242b | [] | [
"license:mit"
] | https://huggingface.co/datasets/Rosenberg/genia/resolve/main/README.md | ---
license: mit
---
|
matejklemen | null | @book{steen2010method,
title={A method for linguistic metaphor identification: From MIP to MIPVU},
author={Steen, Gerard and Dorst, Lettie and Herrmann, J. and Kaal, Anna and Krennmayr, Tina and Pasma, Trijntje},
volume={14},
year={2010},
publisher={John Benjamins Publishing}
} | The resource contains a selection of excerpts from BNC-Baby files that have been annotated for metaphor.
There are four registers, each comprising about 50,000 words: academic texts, news texts, fiction, and conversations.
Words have been separately labelled as participating in multi-word expressions (about 1.5%) or as discarded for
metaphor analysis (0.02%). Main categories include words that are related to metaphor (MRW), words that signal
metaphor (MFlag), and words that are not related to metaphor. For metaphor-related words, subdivisions have been made
between clear cases of metaphor versus borderline cases (WIDLII, When In Doubt, Leave It In). Another parameter of
metaphor-related words makes a distinction between direct metaphor, indirect metaphor, and implicit metaphor. | false | 22 | false | matejklemen/vuamc | 2022-10-26T08:50:42.000Z | null | false | 884b7444f79ed8f90b22ab80ee2469eb65b697cf | [] | [
"annotations_creators:expert-generated",
"language:en",
"language_creators:found",
"license:other",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"size_categories:100K<n<1M",
"tags:metaphor-classification",
"tags:multiword-expression-detection",
"tags:vua20",
"tags:vua18",
"tags:mipvu",
"task_categories:text-classification",
"task_categories:token-classification",
"task_ids:multi-class-classification"
] | https://huggingface.co/datasets/matejklemen/vuamc/resolve/main/README.md | ---
annotations_creators:
- expert-generated
language:
- en
language_creators:
- found
license:
- other
multilinguality:
- monolingual
pretty_name: VUA Metaphor Corpus
size_categories:
- 10K<n<100K
- 100K<n<1M
source_datasets: []
tags:
- metaphor-classification
- multiword-expression-detection
- vua20
- vua18
- mipvu
task_categories:
- text-classification
- token-classification
task_ids:
- multi-class-classification
---
# Dataset Card for VUA Metaphor Corpus
**Important note #1**: This is a slightly simplified but mostly complete parse of the corpus. What is missing are the lemmas and some metadata that were not important at the time of writing the parser. See the section `Simplifications` for more information on this.
**Important note #2**: The dataset contains metadata - to ignore it and correctly remap the annotations, see the section `Discarding metadata`.
### Dataset Summary
VUA Metaphor Corpus (VUAMC) contains a selection of excerpts from BNC-Baby files that have been annotated for metaphor. There are four registers, each comprising about 50 000 words: academic texts, news texts, fiction, and conversations.
Words have been separately labelled as participating in multi-word expressions (about 1.5%) or as discarded for metaphor analysis (0.02%). Main categories include words that are related to metaphor (MRW), words that signal metaphor (MFlag), and words that are not related to metaphor. For metaphor-related words, subdivisions have been made between clear cases of metaphor versus borderline cases (WIDLII, When In Doubt, Leave It In). Another parameter of metaphor-related words makes a distinction between direct metaphor, indirect metaphor, and implicit metaphor.
### Supported Tasks and Leaderboards
Metaphor detection, metaphor type classification.
### Languages
English.
## Dataset Structure
### Data Instances
A sample instance from the dataset:
```
{
'document_name': 'kcv-fragment42',
'words': ['', 'I', 'think', 'we', 'should', 'have', 'different', 'holidays', '.'],
'pos_tags': ['N/A', 'PNP', 'VVB', 'PNP', 'VM0', 'VHI', 'AJ0', 'NN2', 'PUN'],
'met_type': [
{'type': 'mrw/met', 'word_indices': [5]}
],
'meta': ['vocal/laugh', 'N/A', 'N/A', 'N/A', 'N/A', 'N/A', 'N/A', 'N/A', 'N/A']
}
```
### Data Fields
The instances are ordered as they appear in the corpus.
- `document_name`: a string containing the name of the document in which the sentence appears;
- `words`: words in the sentence (`""` when the word represents metadata);
- `pos_tags`: POS tags of the words, encoded using the BNC basic tagset (`"N/A"` when the word does not have an associated POS tag);
- `met_type`: metaphors in the sentence, marked by their type and word indices;
- `meta`: selected metadata tags providing additional context to the sentence. Metadata may not correspond to a specific word. In this case, the metadata is represented with an empty string (`""`) in `words` and a `"N/A"` tag in `pos_tags`.
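As a concrete illustration of how `met_type` indexes into `words`, the following standalone sketch resolves each annotation to its surface tokens, using the sample instance shown above (the helper name `metaphor_spans` is illustrative, not part of the dataset):

```python
# The sample instance from above; only the fields needed here are kept.
sample = {
    "words": ["", "I", "think", "we", "should", "have",
              "different", "holidays", "."],
    "met_type": [{"type": "mrw/met", "word_indices": [5]}],
}

def metaphor_spans(example):
    # Resolve each annotation's word indices to the surface tokens,
    # so every metaphor annotation becomes a (type, tokens) pair.
    return [
        (ann["type"], [example["words"][i] for i in ann["word_indices"]])
        for ann in example["met_type"]
    ]

spans = metaphor_spans(sample)
```

Here the single annotation resolves to `("mrw/met", ["have"])`, i.e. the metaphor-related word is "have" at index 5 of `words`.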
## Dataset Creation
For detailed information on the corpus, please check out the references in the `Citation Information` section or contact the dataset authors.
## Simplifications
The raw corpus is equipped with rich metadata and encoded in the TEI XML format. The textual part is fully parsed except for the lemmas, i.e. all the sentences in the raw corpus are present in the dataset.
However, parsing the metadata fully is unnecessarily tedious, so certain simplifications were made:
- paragraph information is not preserved as the dataset is parsed at sentence level;
- manual corrections (`<corr>`) of incorrectly written words are ignored, and the original, incorrect form of the words is used instead;
- `<ptr>` and `<anchor>` tags are ignored as I cannot figure out what they represent;
- the attributes `rendition` (in `<hi>` tags) and `new` (in `<shift>` tags) are not exposed.
## Discarding metadata
The dataset contains rich metadata, which is stored in the `meta` attribute. To keep data aligned, empty words or `"N/A"`s are inserted into the other attributes. If you want to ignore the metadata and correct the metaphor type annotations, you can use code similar to the following snippet:
```python3
import datasets

data = datasets.load_dataset("matejklemen/vuamc")["train"]
data = data.to_pandas()

for idx_ex in range(data.shape[0]):
    curr_ex = data.iloc[idx_ex]

    idx_remap = {}
    for idx_word, word in enumerate(curr_ex["words"]):
        if len(word) != 0:
            idx_remap[idx_word] = len(idx_remap)

    # Note that lists are stored as np arrays by datasets, while we are storing new data in a list!
    # (unhandled for simplicity)
    words, pos_tags, met_type = curr_ex[["words", "pos_tags", "met_type"]].tolist()
    if len(idx_remap) != len(curr_ex["words"]):
        words = list(filter(lambda _word: len(_word) > 0, curr_ex["words"]))
        pos_tags = list(filter(lambda _pos: _pos != "N/A", curr_ex["pos_tags"]))

        met_type = []
        for met_info in curr_ex["met_type"]:
            met_type.append({
                "type": met_info["type"],
                "word_indices": list(map(lambda _i: idx_remap[_i], met_info["word_indices"]))
            })
```
## Additional Information
### Dataset Curators
Gerard Steen; et al. (please see http://hdl.handle.net/20.500.12024/2541 for the full list).
### Licensing Information
Available for non-commercial use on condition that the terms of the [BNC Licence](http://www.natcorp.ox.ac.uk/docs/licence.html) are observed and that this header is included in its entirety with any copy distributed.
### Citation Information
```
@book{steen2010method,
title={A method for linguistic metaphor identification: From MIP to MIPVU},
author={Steen, Gerard and Dorst, Lettie and Herrmann, J. and Kaal, Anna and Krennmayr, Tina and Pasma, Trijntje},
volume={14},
year={2010},
publisher={John Benjamins Publishing}
}
```
```
@inproceedings{leong-etal-2020-report,
title = "A Report on the 2020 {VUA} and {TOEFL} Metaphor Detection Shared Task",
author = "Leong, Chee Wee (Ben) and
Beigman Klebanov, Beata and
Hamill, Chris and
Stemle, Egon and
Ubale, Rutuja and
Chen, Xianyang",
booktitle = "Proceedings of the Second Workshop on Figurative Language Processing",
year = "2020",
url = "https://aclanthology.org/2020.figlang-1.3",
doi = "10.18653/v1/2020.figlang-1.3",
pages = "18--29"
}
```
### Contributions
Thanks to [@matejklemen](https://github.com/matejklemen) for adding this dataset.
|
Rosenberg | null | null | null | false | null | false | Rosenberg/conll2003 | 2022-10-23T12:41:04.000Z | null | false | 681708c46bb571d716afbc9501c1fbd96c530ab6 | [] | [
"license:mit"
] | https://huggingface.co/datasets/Rosenberg/conll2003/resolve/main/README.md | ---
license: mit
---
|
Rosenberg | null | null | null | false | null | false | Rosenberg/weibo_ner | 2022-10-25T12:29:55.000Z | null | false | 0159c148e6fbd59f3a162659dc69edf3758990a1 | [] | [
"license:mit"
] | https://huggingface.co/datasets/Rosenberg/weibo_ner/resolve/main/README.md | ---
license: mit
---
|
gisbornetv | null | null | null | false | null | false | gisbornetv/teseting | 2022-10-23T16:06:04.000Z | null | false | f41e7a9ef4b6efb3b0593771ffa80b8fb7851a2c | [] | [
"license:afl-3.0"
] | https://huggingface.co/datasets/gisbornetv/teseting/resolve/main/README.md | ---
license: afl-3.0
---
|
ArteChile | null | null | null | false | null | false | ArteChile/footos | 2022-10-23T17:38:01.000Z | null | false | 653ad516164c7f80662f71fded1c3c6c5d37c13a | [] | [
"license:artistic-2.0"
] | https://huggingface.co/datasets/ArteChile/footos/resolve/main/README.md | ---
license: artistic-2.0
---
|
Nerfgun3 | null | null | null | false | null | false | Nerfgun3/space_style | 2022-10-24T19:39:57.000Z | null | false | f521d71ad8871bfe07d1b7f809c38ed578d79f93 | [] | [
"language:en",
"tags:stable-diffusion",
"tags:text-to-image",
"license:creativeml-openrail-m"
] | https://huggingface.co/datasets/Nerfgun3/space_style/resolve/main/README.md | ---
language:
- en
tags:
- stable-diffusion
- text-to-image
license: creativeml-openrail-m
inference: false
---
# Space Style Embedding / Textual Inversion
## Usage
To use this embedding you have to download the file as well as drop it into the "\stable-diffusion-webui\embeddings" folder
To use it in a prompt: ```"art by space_style"```
If it is too strong, just add [] around it.
Trained until 15000 steps
I added a 7.5k-step trained version in the files as well. If you want to use that version, remove the ```"-7500"``` from the file name and replace the 15k-step version in your folder
Have fun :)
## Example Pictures
<table>
<tr>
<td><img src=https://i.imgur.com/flz5Oxz.png width=100% height=100%/></td>
<td><img src=https://i.imgur.com/5btpoXs.png width=100% height=100%/></td>
<td><img src=https://i.imgur.com/PtySCd4.png width=100% height=100%/></td>
<td><img src=https://i.imgur.com/NbSue9H.png width=100% height=100%/></td>
<td><img src=https://i.imgur.com/QhjRezm.png width=100% height=100%/></td>
</tr>
</table>
## License
This embedding is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.
The CreativeML OpenRAIL License specifies:
1. You can't use the embedding to deliberately produce nor share illegal or harmful outputs or content
2. The authors claim no rights on the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license
3. You may re-distribute the weights and use the embedding commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully)
[Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license)
|
mozay22 | null | null | null | false | 8 | false | mozay22/heart_disease | 2022-11-15T13:10:27.000Z | null | false | 131a50f9074579632723246dc3a15b42323852b1 | [] | [
"license:other"
] | https://huggingface.co/datasets/mozay22/heart_disease/resolve/main/README.md | ---
license: other
---
|
rufimelo | null | null | null | false | null | false | rufimelo/PortugueseLegalSentences-v1 | 2022-10-24T13:16:43.000Z | null | false | e75c0ba2a7b8754214c22b71ed4ab002e518d665 | [] | [
"annotations_creators:no-annotation",
"language_creators:found",
"language:pt",
"license:apache-2.0",
"multilinguality:monolingual",
"source_datasets:original"
] | https://huggingface.co/datasets/rufimelo/PortugueseLegalSentences-v1/resolve/main/README.md | ---
annotations_creators:
- no-annotation
language_creators:
- found
language:
- pt
license:
- apache-2.0
multilinguality:
- monolingual
source_datasets:
- original
---
# Portuguese Legal Sentences
Collection of Legal Sentences from the Portuguese Supreme Court of Justice
This dataset was created to be used for MLM and TSDAE training
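TSDAE trains a denoising autoencoder over corrupted sentences, and a common corruption is random token deletion. The sketch below is a minimal, self-contained illustration of that noising step (the function name `tsdae_noise` and the 0.6 deletion ratio are illustrative assumptions, not part of this dataset):

```python
import random

def tsdae_noise(sentence, deletion_ratio=0.6, seed=0):
    # TSDAE-style input corruption: randomly delete a fraction of tokens.
    # At least one token is always kept so the encoder never sees empty input.
    rng = random.Random(seed)
    tokens = sentence.split()
    kept = [tok for tok in tokens if rng.random() > deletion_ratio]
    return " ".join(kept) if kept else tokens[0]
```

The denoising objective then asks the model to reconstruct each original legal sentence from its noised version.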
### Contributions
[@rufimelo99](https://github.com/rufimelo99)
|
jeffdshen | null | null | null | false | 9 | false | jeffdshen/redefine_math2_8shot | 2022-10-23T20:15:28.000Z | null | false | 7a108fbda32cda49a9a25ae914817723b0934e36 | [] | [
"license:cc-by-2.0"
] | https://huggingface.co/datasets/jeffdshen/redefine_math2_8shot/resolve/main/README.md | ---
license: cc-by-2.0
---
|
jeffdshen | null | null | null | false | 8 | false | jeffdshen/redefine_math0_8shot | 2022-10-23T20:17:15.000Z | null | false | fa3b315810609649398e22125a46364aae950dce | [] | [
"license:cc-by-2.0"
] | https://huggingface.co/datasets/jeffdshen/redefine_math0_8shot/resolve/main/README.md | ---
license: cc-by-2.0
---
|
jeffdshen | null | null | null | false | 16 | false | jeffdshen/neqa0_8shot | 2022-10-23T20:18:00.000Z | null | false | d479875e3aa40d524f67059a1d8ed5d56b6141a6 | [] | [
"license:cc-by-2.0"
] | https://huggingface.co/datasets/jeffdshen/neqa0_8shot/resolve/main/README.md | ---
license: cc-by-2.0
---
|
jeffdshen | null | null | null | false | 9 | false | jeffdshen/neqa2_8shot | 2022-10-23T20:19:39.000Z | null | false | 15de2e240c01577b58f949d06d419f18bfcd1563 | [] | [
"license:cc-by-2.0"
] | https://huggingface.co/datasets/jeffdshen/neqa2_8shot/resolve/main/README.md | ---
license: cc-by-2.0
---
|
Nerfgun3 | null | null | null | false | null | false | Nerfgun3/flower_style | 2022-11-14T23:33:41.000Z | null | false | 44f567ff2d0412890477ee26b25eba67bb356f77 | [] | [
"language:en",
"license:creativeml-openrail-m",
"tags:stable-diffusion",
"tags:text-to-image",
"tags:image-to-image"
] | https://huggingface.co/datasets/Nerfgun3/flower_style/resolve/main/README.md | ---
language:
- en
license: creativeml-openrail-m
tags:
- stable-diffusion
- text-to-image
- image-to-image
inference: false
---
# Flower Style Embedding / Textual Inversion
<img alt="Showcase" src="https://huggingface.co/datasets/Nerfgun3/flower_style/resolve/main/flower_style_showcase.jpg"/>
## Usage
To use this embedding you have to download the file as well as drop it into the "\stable-diffusion-webui\embeddings" folder
To use it in a prompt: ```"art by flower_style"```
If it is too strong, just add [] around it.
Trained until 15000 steps
I added a 7.5k-step trained version in the files as well. If you want to use that version, remove the ```"-7500"``` from the file name and replace the 15k-step version in your folder
Have fun :)
## License
This embedding is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.
The CreativeML OpenRAIL License specifies:
1. You can't use the embedding to deliberately produce nor share illegal or harmful outputs or content
2. The authors claim no rights on the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license
3. You may re-distribute the weights and use the embedding commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully)
[Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license) |
autoevaluate | null | null | null | false | null | false | autoevaluate/autoeval-eval-jeffdshen__neqa0_8shot-jeffdshen__neqa0_8shot-5a61bc-1852963397 | 2022-10-24T02:24:00.000Z | null | false | bbbeda405dd254bbc39be64fd07ca56e9c42722a | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:jeffdshen/neqa0_8shot"
] | https://huggingface.co/datasets/autoevaluate/autoeval-eval-jeffdshen__neqa0_8shot-jeffdshen__neqa0_8shot-5a61bc-1852963397/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- jeffdshen/neqa0_8shot
eval_info:
task: text_zero_shot_classification
model: inverse-scaling/opt-30b_eval
metrics: []
dataset_name: jeffdshen/neqa0_8shot
dataset_config: jeffdshen--neqa0_8shot
dataset_split: train
col_mapping:
text: prompt
classes: classes
target: answer_index
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: inverse-scaling/opt-30b_eval
* Dataset: jeffdshen/neqa0_8shot
* Config: jeffdshen--neqa0_8shot
* Split: train
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@jeffdshen](https://huggingface.co/jeffdshen) for evaluating this model. |
autoevaluate | null | null | null | false | null | false | autoevaluate/autoeval-eval-jeffdshen__neqa0_8shot-jeffdshen__neqa0_8shot-5a61bc-1852963393 | 2022-10-23T21:17:44.000Z | null | false | 628102b7e82b9a387a255a6e51170e64a7674645 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:jeffdshen/neqa0_8shot"
] | https://huggingface.co/datasets/autoevaluate/autoeval-eval-jeffdshen__neqa0_8shot-jeffdshen__neqa0_8shot-5a61bc-1852963393/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- jeffdshen/neqa0_8shot
eval_info:
task: text_zero_shot_classification
model: inverse-scaling/opt-1.3b_eval
metrics: []
dataset_name: jeffdshen/neqa0_8shot
dataset_config: jeffdshen--neqa0_8shot
dataset_split: train
col_mapping:
text: prompt
classes: classes
target: answer_index
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: inverse-scaling/opt-1.3b_eval
* Dataset: jeffdshen/neqa0_8shot
* Config: jeffdshen--neqa0_8shot
* Split: train
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@jeffdshen](https://huggingface.co/jeffdshen) for evaluating this model. |
autoevaluate | null | null | null | false | null | false | autoevaluate/autoeval-eval-jeffdshen__neqa2_8shot-jeffdshen__neqa2_8shot-959823-1853063400 | 2022-10-23T21:05:23.000Z | null | false | 165ecd1b7528c0a28047f431599ec63ccc225ba5 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:jeffdshen/neqa2_8shot"
] | https://huggingface.co/datasets/autoevaluate/autoeval-eval-jeffdshen__neqa2_8shot-jeffdshen__neqa2_8shot-959823-1853063400/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- jeffdshen/neqa2_8shot
eval_info:
task: text_zero_shot_classification
model: inverse-scaling/opt-350m_eval
metrics: []
dataset_name: jeffdshen/neqa2_8shot
dataset_config: jeffdshen--neqa2_8shot
dataset_split: train
col_mapping:
text: prompt
classes: classes
target: answer_index
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: inverse-scaling/opt-350m_eval
* Dataset: jeffdshen/neqa2_8shot
* Config: jeffdshen--neqa2_8shot
* Split: train
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@jeffdshen](https://huggingface.co/jeffdshen) for evaluating this model. |
autoevaluate | null | null | null | false | null | false | autoevaluate/autoeval-eval-jeffdshen__neqa0_8shot-jeffdshen__neqa0_8shot-5a61bc-1852963391 | 2022-10-23T21:04:17.000Z | null | false | e2501deb7ee46551f0d545d7cc9d08c205bddd94 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:jeffdshen/neqa0_8shot"
] | https://huggingface.co/datasets/autoevaluate/autoeval-eval-jeffdshen__neqa0_8shot-jeffdshen__neqa0_8shot-5a61bc-1852963391/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- jeffdshen/neqa0_8shot
eval_info:
task: text_zero_shot_classification
model: inverse-scaling/opt-125m_eval
metrics: []
dataset_name: jeffdshen/neqa0_8shot
dataset_config: jeffdshen--neqa0_8shot
dataset_split: train
col_mapping:
text: prompt
classes: classes
target: answer_index
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: inverse-scaling/opt-125m_eval
* Dataset: jeffdshen/neqa0_8shot
* Config: jeffdshen--neqa0_8shot
* Split: train
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@jeffdshen](https://huggingface.co/jeffdshen) for evaluating this model. |
autoevaluate | null | null | null | false | null | false | autoevaluate/autoeval-eval-jeffdshen__neqa0_8shot-jeffdshen__neqa0_8shot-5a61bc-1852963392 | 2022-10-23T21:07:19.000Z | null | false | 386f0520a81bc2e006e403d88b0e58a25b7edceb | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:jeffdshen/neqa0_8shot"
] | https://huggingface.co/datasets/autoevaluate/autoeval-eval-jeffdshen__neqa0_8shot-jeffdshen__neqa0_8shot-5a61bc-1852963392/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- jeffdshen/neqa0_8shot
eval_info:
task: text_zero_shot_classification
model: inverse-scaling/opt-350m_eval
metrics: []
dataset_name: jeffdshen/neqa0_8shot
dataset_config: jeffdshen--neqa0_8shot
dataset_split: train
col_mapping:
text: prompt
classes: classes
target: answer_index
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: inverse-scaling/opt-350m_eval
* Dataset: jeffdshen/neqa0_8shot
* Config: jeffdshen--neqa0_8shot
* Split: train
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@jeffdshen](https://huggingface.co/jeffdshen) for evaluating this model. |
autoevaluate | null | null | null | false | null | false | autoevaluate/autoeval-eval-jeffdshen__neqa0_8shot-jeffdshen__neqa0_8shot-5a61bc-1852963394 | 2022-10-23T21:31:31.000Z | null | false | 5493c393d6b927541a9bb351bfe46ce48a363ad2 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:jeffdshen/neqa0_8shot"
] | https://huggingface.co/datasets/autoevaluate/autoeval-eval-jeffdshen__neqa0_8shot-jeffdshen__neqa0_8shot-5a61bc-1852963394/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- jeffdshen/neqa0_8shot
eval_info:
task: text_zero_shot_classification
model: inverse-scaling/opt-2.7b_eval
metrics: []
dataset_name: jeffdshen/neqa0_8shot
dataset_config: jeffdshen--neqa0_8shot
dataset_split: train
col_mapping:
text: prompt
classes: classes
target: answer_index
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: inverse-scaling/opt-2.7b_eval
* Dataset: jeffdshen/neqa0_8shot
* Config: jeffdshen--neqa0_8shot
* Split: train
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@jeffdshen](https://huggingface.co/jeffdshen) for evaluating this model. |
autoevaluate | null | null | null | false | null | false | autoevaluate/autoeval-eval-jeffdshen__neqa0_8shot-jeffdshen__neqa0_8shot-5a61bc-1852963395 | 2022-10-23T22:16:35.000Z | null | false | 378354c50946fbf08d8a6563e5da4f69b05f57e1 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:jeffdshen/neqa0_8shot"
] | https://huggingface.co/datasets/autoevaluate/autoeval-eval-jeffdshen__neqa0_8shot-jeffdshen__neqa0_8shot-5a61bc-1852963395/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- jeffdshen/neqa0_8shot
eval_info:
task: text_zero_shot_classification
model: inverse-scaling/opt-6.7b_eval
metrics: []
dataset_name: jeffdshen/neqa0_8shot
dataset_config: jeffdshen--neqa0_8shot
dataset_split: train
col_mapping:
text: prompt
classes: classes
target: answer_index
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: inverse-scaling/opt-6.7b_eval
* Dataset: jeffdshen/neqa0_8shot
* Config: jeffdshen--neqa0_8shot
* Split: train
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@jeffdshen](https://huggingface.co/jeffdshen) for evaluating this model. |
autoevaluate | null | null | null | false | null | false | autoevaluate/autoeval-eval-jeffdshen__neqa0_8shot-jeffdshen__neqa0_8shot-5a61bc-1852963396 | 2022-10-23T22:56:59.000Z | null | false | 3beafd757977584c5a7b0426b2025d14a12b872d | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:jeffdshen/neqa0_8shot"
] | https://huggingface.co/datasets/autoevaluate/autoeval-eval-jeffdshen__neqa0_8shot-jeffdshen__neqa0_8shot-5a61bc-1852963396/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- jeffdshen/neqa0_8shot
eval_info:
task: text_zero_shot_classification
model: inverse-scaling/opt-13b_eval
metrics: []
dataset_name: jeffdshen/neqa0_8shot
dataset_config: jeffdshen--neqa0_8shot
dataset_split: train
col_mapping:
text: prompt
classes: classes
target: answer_index
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: inverse-scaling/opt-13b_eval
* Dataset: jeffdshen/neqa0_8shot
* Config: jeffdshen--neqa0_8shot
* Split: train
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@jeffdshen](https://huggingface.co/jeffdshen) for evaluating this model. |
autoevaluate | null | null | null | false | null | false | autoevaluate/autoeval-eval-jeffdshen__neqa2_8shot-jeffdshen__neqa2_8shot-959823-1853063399 | 2022-10-23T21:02:53.000Z | null | false | 37caa5b64dbc5c3649fb79afa9d8ac337cacf4df | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:jeffdshen/neqa2_8shot"
] | https://huggingface.co/datasets/autoevaluate/autoeval-eval-jeffdshen__neqa2_8shot-jeffdshen__neqa2_8shot-959823-1853063399/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- jeffdshen/neqa2_8shot
eval_info:
task: text_zero_shot_classification
model: inverse-scaling/opt-125m_eval
metrics: []
dataset_name: jeffdshen/neqa2_8shot
dataset_config: jeffdshen--neqa2_8shot
dataset_split: train
col_mapping:
text: prompt
classes: classes
target: answer_index
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: inverse-scaling/opt-125m_eval
* Dataset: jeffdshen/neqa2_8shot
* Config: jeffdshen--neqa2_8shot
* Split: train
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@jeffdshen](https://huggingface.co/jeffdshen) for evaluating this model. |
autoevaluate | null | null | null | false | null | false | autoevaluate/autoeval-eval-jeffdshen__neqa2_8shot-jeffdshen__neqa2_8shot-959823-1853063402 | 2022-10-23T21:19:17.000Z | null | false | 0d4bc186a5d5a1dc46d0e0206ed53c204f882a88 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:jeffdshen/neqa2_8shot"
] | https://huggingface.co/datasets/autoevaluate/autoeval-eval-jeffdshen__neqa2_8shot-jeffdshen__neqa2_8shot-959823-1853063402/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- jeffdshen/neqa2_8shot
eval_info:
task: text_zero_shot_classification
model: inverse-scaling/opt-2.7b_eval
metrics: []
dataset_name: jeffdshen/neqa2_8shot
dataset_config: jeffdshen--neqa2_8shot
dataset_split: train
col_mapping:
text: prompt
classes: classes
target: answer_index
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: inverse-scaling/opt-2.7b_eval
* Dataset: jeffdshen/neqa2_8shot
* Config: jeffdshen--neqa2_8shot
* Split: train
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@jeffdshen](https://huggingface.co/jeffdshen) for evaluating this model. |
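A minimal sketch of how an `eval_info` `col_mapping` like the ones above (text → `prompt`, classes → `classes`, target → `answer_index`) wires a dataset row into a zero-shot classification accuracy check. The log-probability values and the `is_correct` helper are illustrative assumptions, not the evaluator's actual implementation; a real run would obtain per-class scores from the model.

```python
# Field names taken from the eval_info col_mapping in the cards above.
col_mapping = {"text": "prompt", "classes": "classes", "target": "answer_index"}

# A made-up example row in the shape the mapping expects.
row = {
    "prompt": "Q: Is the sky green? A:",
    "classes": [" Yes", " No"],
    "answer_index": 1,
}

def is_correct(row, class_log_probs):
    """Pick the highest-scoring class continuation and compare it
    against the labeled answer index."""
    predicted = max(
        range(len(row[col_mapping["classes"]])),
        key=lambda i: class_log_probs[i],
    )
    return predicted == row[col_mapping["target"]]

print(is_correct(row, [-3.2, -0.4]))  # " No" scores higher -> True
```

The mapping layer is what lets the same zero-shot evaluation code run against differently named columns across datasets such as `jeffdshen/neqa0_8shot` and `jeffdshen/neqa2_8shot`.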