id stringlengths 2 115 | lastModified stringlengths 24 24 | tags list | author stringlengths 2 42 ⌀ | description stringlengths 0 6.67k ⌀ | citation stringlengths 0 10.7k ⌀ | likes int64 0 3.66k | downloads int64 0 8.89M | created timestamp[us] | card stringlengths 11 977k | card_len int64 11 977k | embeddings list |
|---|---|---|---|---|---|---|---|---|---|---|---|
wmt/wmt21 | 2022-07-31T18:12:55.000Z | [
"region:us"
] | wmt | null | null | 0 | 4 | 2022-07-31T08:01:03 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
ziwenyd/transcoder-geeksforgeeks | 2022-08-03T14:59:08.000Z | [
"license:mit",
"region:us"
] | ziwenyd | null | null | 3 | 4 | 2022-08-01T09:28:39 | ---
license: mit
---
# statistics
cpp-java: 627 pairs
python-java: 616 pairs
cpp-python: 545 pairs
| 105 | [
[
-0.016937255859375,
0.0107421875,
0.0247650146484375,
0.056610107421875,
-0.0163116455078125,
-0.016876220703125,
0.006381988525390625,
-0.023468017578125,
0.02447509765625,
0.013763427734375,
0.005840301513671875,
-0.0292510986328125,
-0.042724609375,
0.021... |
spacemanidol/ESCI-product-search | 2022-08-04T16:59:47.000Z | [
"region:us"
] | spacemanidol | null | @misc{reddy2022shopping,
title={Shopping Queries Dataset: A Large-Scale {ESCI} Benchmark for Improving Product Search},
author={Chandan K. Reddy and Lluís Màrquez and Fran Valero and Nikhil Rao and Hugo Zaragoza and Sambaran Bandyopadhyay and
Arnab Biswas and Anlu Xing and Karthik Subbian},
year={2022},
eprint={2206.06588},
archivePrefix={arXiv}
} | 0 | 4 | 2022-08-01T22:20:48 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
sepidmnorozy/Arabic_sentiment | 2022-08-02T16:12:59.000Z | [
"region:us"
] | sepidmnorozy | null | null | 0 | 4 | 2022-08-02T15:43:48 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
dangne/processed-wikipedia-20220301.simple | 2022-08-02T16:41:20.000Z | [
"region:us"
] | dangne | null | null | 0 | 4 | 2022-08-02T16:28:00 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
rungalileo/emotion | 2022-08-04T04:58:18.000Z | [
"region:us"
] | rungalileo | null | null | 0 | 4 | 2022-08-04T04:58:08 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
kfuangsung/AmazonFineFoodReviews | 2022-08-04T09:33:45.000Z | [
"region:us"
] | kfuangsung | null | null | 1 | 4 | 2022-08-04T06:40:20 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
sepidmnorozy/Hindi_sentiment | 2022-08-06T13:05:14.000Z | [
"region:us"
] | sepidmnorozy | null | null | 1 | 4 | 2022-08-04T12:57:46 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
VanessaSchenkel/opus_books_en_pt | 2022-08-06T22:46:10.000Z | [
"task_categories:translation",
"annotations_creators:found",
"language_creators:found",
"multilinguality:translation",
"size_categories:1K<n<10K",
"source_datasets:extended|opus_books",
"language:en",
"language:pt",
"license:afl-3.0",
"region:us"
] | VanessaSchenkel | null | null | 1 | 4 | 2022-08-06T22:34:58 | ---
annotations_creators:
- found
language:
- en
- pt
language_creators:
- found
license:
- afl-3.0
multilinguality:
- translation
pretty_name: VanessaSchenkel/opus_books_en_pt
size_categories:
- 1K<n<10K
source_datasets:
- extended|opus_books
tags: []
task_categories:
- translation
task_ids: []
---
How to use it:
```python
from datasets import load_dataset
remote_dataset = load_dataset("VanessaSchenkel/opus_books_en_pt", field="data")
remote_dataset
```
Output:
```
DatasetDict({
train: Dataset({
features: ['id', 'translation'],
num_rows: 1404
})
})
```
Example:
```
remote_dataset["train"][5]
```
Output:
```
{'id': '5',
'translation': {'en': "There was nothing so very remarkable in that; nor did Alice think it so very much out of the way to hear the Rabbit say to itself, 'Oh dear!",
'pt': 'Não havia nada de tão extraordinário nisso; nem Alice achou assim tão fora do normal ouvir o Coelho dizer para si mesmo: —"Oh, céus!'}}
``` | 964 | [
[
-0.027069091796875,
-0.0266265869140625,
-0.006473541259765625,
0.00482940673828125,
-0.035125732421875,
-0.0245208740234375,
-0.0257110595703125,
-0.010711669921875,
0.02984619140625,
0.03515625,
-0.040374755859375,
-0.063720703125,
-0.022857666015625,
0.04... |
skorkmaz88/iris | 2022-08-10T21:52:52.000Z | [
"region:us"
] | skorkmaz88 | null | null | 0 | 4 | 2022-08-10T21:52:38 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
huggingface/transformers-stats-space-data | 2023-10-29T23:04:33.000Z | [
"region:us"
] | huggingface | null | null | 0 | 4 | 2022-08-13T15:08:56 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
philschmid/processed_bert_dataset | 2022-08-14T08:43:17.000Z | [
"region:us"
] | philschmid | null | null | 1 | 4 | 2022-08-14T08:29:53 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
sepidmnorozy/Chinese_sentiment | 2022-08-15T23:09:45.000Z | [
"region:us"
] | sepidmnorozy | null | null | 3 | 4 | 2022-08-15T23:08:48 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
Norod78/EmojiFFHQAlignedFaces | 2022-08-16T13:40:19.000Z | [
"region:us"
] | Norod78 | null | null | 1 | 4 | 2022-08-16T13:39:41 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
winvoker/lvis | 2023-07-19T13:16:53.000Z | [
"task_categories:image-segmentation",
"task_ids:instance-segmentation",
"size_categories:1M<n<10M",
"license:cc-by-4.0",
"segmentation",
"coco",
"region:us"
] | winvoker | Progress on object detection is enabled by datasets that focus the research community's attention on open challenges. This process led us from simple images to complex scenes and from bounding boxes to segmentation masks. In this work, we introduce LVIS (pronounced `el-vis'): a new dataset for Large Vocabulary Instance Segmentation. We plan to collect ~2 million high-quality instance segmentation masks for over 1000 entry-level object categories in 164k images. Due to the Zipfian distribution of categories in natural images, LVIS naturally has a long tail of categories with few training samples. Given that state-of-the-art deep learning methods for object detection perform poorly in the low-sample regime, we believe that our dataset poses an important and exciting new scientific challenge. | @inproceedings{gupta2019lvis,
title={ LVIS: A Dataset for Large Vocabulary Instance Segmentation},
author={Gupta, Agrim and Dollar, Piotr and Girshick, Ross},
booktitle={Proceedings of the {IEEE} Conference on Computer Vision and Pattern Recognition},
year={2019}
} | 0 | 4 | 2022-08-18T15:17:30 | ---
viewer: true
annotations_creators: []
language: []
language_creators: []
license:
- cc-by-4.0
pretty_name: lvis
size_categories:
- 1M<n<10M
source_datasets: []
tags:
- segmentation
- coco
task_categories:
- image-segmentation
task_ids:
- instance-segmentation
---
# LVIS
### Dataset Summary
This dataset is an implementation of the LVIS dataset for Hugging Face `datasets`. Please visit the original website for more information.
- https://www.lvisdataset.org/
### Loading
This code loads the dataset and returns its train, validation and test splits.
```python
from datasets import load_dataset
dataset = load_dataset("winvoker/lvis")
```
`objects` is a dictionary containing annotation information such as bounding boxes and classes.
```
DatasetDict({
train: Dataset({
features: ['id', 'image', 'height', 'width', 'objects'],
num_rows: 100170
})
validation: Dataset({
features: ['id', 'image', 'height', 'width', 'objects'],
num_rows: 4809
})
test: Dataset({
features: ['id', 'image', 'height', 'width', 'objects'],
num_rows: 19822
})
})
```
### Access Generators
```python
train = dataset["train"]
validation = dataset["validation"]
test = dataset["test"]
```
An example row is as follows.
```json
{ 'id': 0,
'image': '000000437561.jpg',
'height': 480,
'width': 640,
'objects': {
'bboxes': [[392, 271, 14, 3]],
'classes': [117],
'segmentation': [[376, 272, 375, 270, 372, 269, 371, 269, 373, 269, 373]]
}
}
``` | 1,539 | [
[
-0.032501220703125,
-0.02978515625,
0.018096923828125,
0.004932403564453125,
-0.016754150390625,
-0.003940582275390625,
-0.00046634674072265625,
-0.01409912109375,
0.033721923828125,
0.020904541015625,
-0.060089111328125,
-0.044464111328125,
-0.0307769775390625,... |
allenai/multixscience_sparse_oracle | 2022-11-24T16:50:08.000Z | [
"task_categories:summarization",
"annotations_creators:found",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"license:unknown",
"region:us"
] | allenai | null | null | 0 | 4 | 2022-08-18T23:32:04 | ---
annotations_creators:
- found
language_creators:
- found
language:
- en
license:
- unknown
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- summarization
paperswithcode_id: multi-xscience
pretty_name: Multi-XScience
---
This is a copy of the [Multi-XScience](https://huggingface.co/datasets/multi_x_science_sum) dataset, except the input source documents of its `test` split have been replaced by documents retrieved with a __sparse__ retriever. The retrieval pipeline used:
- __query__: The `related_work` field of each example
- __corpus__: The union of all documents in the `train`, `validation` and `test` splits
- __retriever__: BM25 via [PyTerrier](https://pyterrier.readthedocs.io/en/latest/) with default settings
- __top-k strategy__: `"oracle"`, i.e. the number of documents retrieved, `k`, is set as the original number of input documents for each example
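The card's pipeline used PyTerrier's BM25; as a rough, self-contained sketch of the same idea — BM25 scoring with the `"oracle"` top-k strategy, where `k` is supplied per example — something like the following could be used (the corpus, query, and `k` below are hypothetical, not the dataset's actual contents):

```python
import math
from collections import Counter

def bm25_rank(query, corpus, k, k1=1.2, b=0.75):
    """Score `corpus` (a list of token lists) against `query` tokens with
    Okapi BM25 and return the ids of the top-k documents. Under the
    "oracle" strategy, k is set per example to the original number of
    input documents."""
    N = len(corpus)
    avgdl = sum(len(d) for d in corpus) / N
    # Document frequency of each term across the corpus.
    df = Counter(t for d in corpus for t in set(d))
    scores = []
    for i, doc in enumerate(corpus):
        tf = Counter(doc)
        s = 0.0
        for t in query:
            if t not in tf:
                continue
            idf = math.log(1 + (N - df[t] + 0.5) / (df[t] + 0.5))
            s += idf * tf[t] * (k1 + 1) / (
                tf[t] + k1 * (1 - b + b * len(doc) / avgdl))
        scores.append((s, i))
    scores.sort(reverse=True)
    return [i for _, i in scores[:k]]

# Hypothetical example: retrieve k=2 documents for a related-work "query".
corpus = [
    "sparse retrieval with bm25".split(),
    "dense passage retrieval".split(),
    "bm25 baseline for summarization".split(),
]
print(bm25_rank("bm25 retrieval".split(), corpus, k=2))  # → [0, 1]
```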
Retrieval results on the `train` set:
| Recall@100 | Rprec | Precision@k | Recall@k |
| ----------- | ----------- | ----------- | ----------- |
| 0.5482 | 0.2243 | 0.2243 | 0.2243 |
Retrieval results on the `validation` set:
| Recall@100 | Rprec | Precision@k | Recall@k |
| ----------- | ----------- | ----------- | ----------- |
| 0.5476 | 0.2209 | 0.2209 | 0.2209 |
Retrieval results on the `test` set:
| Recall@100 | Rprec | Precision@k | Recall@k |
| ----------- | ----------- | ----------- | ----------- |
| 0.5480 | 0.2272 | 0.2272 | 0.2272 | | 1,462 | [
[
-0.0220794677734375,
0.00008511543273925781,
0.0181121826171875,
0.00499725341796875,
-0.0145263671875,
0.008026123046875,
-0.0030727386474609375,
0.007358551025390625,
0.048126220703125,
0.0338134765625,
-0.051239013671875,
-0.031097412109375,
-0.04391479492187... |
jamescalam/movielens-25m-ratings | 2022-08-22T15:55:40.000Z | [
"region:us"
] | jamescalam | This dataset streams user ratings for the MovieLens 25M dataset directly from the MovieLens servers. | @InProceedings{huggingface:dataset,
title = {MovieLens Ratings},
author={Ismail Ashraq, James Briggs},
year={2022}
} | 0 | 4 | 2022-08-22T15:55:15 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
allenai/multixscience_sparse_mean | 2022-11-24T16:48:30.000Z | [
"task_categories:summarization",
"annotations_creators:found",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"license:unknown",
"region:us"
] | allenai | null | null | 1 | 4 | 2022-08-25T22:58:26 | ---
annotations_creators:
- found
language_creators:
- found
language:
- en
license:
- unknown
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- summarization
paperswithcode_id: multi-xscience
pretty_name: Multi-XScience
---
This is a copy of the [Multi-XScience](https://huggingface.co/datasets/multi_x_science_sum) dataset, except the input source documents of its `test` split have been replaced by documents retrieved with a __sparse__ retriever. The retrieval pipeline used:
- __query__: The `related_work` field of each example
- __corpus__: The union of all documents in the `train`, `validation` and `test` splits
- __retriever__: BM25 via [PyTerrier](https://pyterrier.readthedocs.io/en/latest/) with default settings
- __top-k strategy__: `"mean"`, i.e. the number of documents retrieved, `k`, is set as the mean number of documents seen across examples in this dataset, in this case `k==4`
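The `"mean"` strategy can be sketched in a couple of lines; the per-example document counts here are invented for illustration (on the real dataset the mean works out to `k == 4`):

```python
# Hypothetical numbers of input source documents per example.
doc_counts = [3, 5, 4, 4, 6, 2]

# "mean" top-k strategy: a single global k, the rounded mean count
# across all examples in the dataset.
k = round(sum(doc_counts) / len(doc_counts))
print(k)  # → 4
```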
Retrieval results on the `train` set:
| Recall@100 | Rprec | Precision@k | Recall@k |
| ----------- | ----------- | ----------- | ----------- |
| 0.5482 | 0.2243 | 0.1578 | 0.2689 |
Retrieval results on the `validation` set:
| Recall@100 | Rprec | Precision@k | Recall@k |
| ----------- | ----------- | ----------- | ----------- |
| 0.5476 | 0.2209 | 0.1592 | 0.2650 |
Retrieval results on the `test` set:
| Recall@100 | Rprec | Precision@k | Recall@k |
| ----------- | ----------- | ----------- | ----------- |
| 0.548 | 0.2272 | 0.1611 | 0.2704 | | 1,490 | [
[
-0.02398681640625,
0.0013284683227539062,
0.0157318115234375,
0.005718231201171875,
-0.017333984375,
0.0055999755859375,
-0.005413055419921875,
0.00928497314453125,
0.04949951171875,
0.026458740234375,
-0.04852294921875,
-0.032989501953125,
-0.04766845703125,
... |
ShapeNet/ShapeNetSem-archive | 2023-09-20T14:59:59.000Z | [
"language:en",
"license:other",
"3D shapes",
"region:us"
] | ShapeNet | null | null | 1 | 4 | 2022-08-26T09:34:36 | ---
language:
- en
pretty_name: ShapeNetSem
tags:
- 3D shapes
license: other
extra_gated_heading: Acknowledge license to accept the repository
extra_gated_prompt: >-
To request access to this ShapeNet repo, you will need to provide your **full name** (please provide both your first and last name), the name of your **advisor or the principal investigator (PI)** of your lab (in the **PI/Advisor** field), and the name of the **school or company** that you are affiliated with (the **Affiliation** field).
After requesting access to this ShapeNet repo, you will be considered for access approval.
After access approval, you (the "Researcher") receive permission to use the ShapeNet database (the "Database") at Princeton University and Stanford University. In exchange for being able to join the ShapeNet community and receive such permission, Researcher hereby agrees to the following terms and conditions:
Researcher shall use the Database only for non-commercial research and educational purposes.
Princeton University and Stanford University make no representations or warranties regarding the Database, including but not limited to warranties of non-infringement or fitness for a particular purpose.
Researcher accepts full responsibility for his or her use of the Database and shall defend and indemnify Princeton University and Stanford University, including their employees, Trustees, officers and agents, against any and all claims arising from Researcher's use of the Database, including but not limited to Researcher's use of any copies of copyrighted 3D models that he or she may create from the Database.
Researcher may provide research associates and colleagues with access to the Database provided that they first agree to be bound by these terms and conditions.
Princeton University and Stanford University reserve the right to terminate Researcher's access to the Database at any time.
If Researcher is employed by a for-profit, commercial entity, Researcher's employer shall also be bound by these terms and conditions, and Researcher hereby represents that he or she is fully authorized to enter into this agreement on behalf of such employer.
The law of the State of New Jersey shall apply to all disputes under this agreement.
For access to the data, please fill in your **full name** (both first and last name), the name of your **advisor or principal investigator (PI)**, and the name of the **school or company** you are affiliated with.
Please actually fill out the fields (DO NOT put the word "Advisor" for PI/Advisor and the word "School" for "Affiliation", please specify the name of your advisor and the name of your school).
extra_gated_fields:
Name: text
PI/Advisor: text
Affiliation: text
Purpose: text
Country: text
I agree to use this dataset for non-commercial use ONLY: checkbox
---
This repository contains archives (zip files) for ShapeNetSem, a subset of [ShapeNet](https://shapenet.org) richly annotated with physical attributes.
Please see [DATA.md](DATA.md) for details about the data.
If you use ShapeNet data, you agree to abide by the [ShapeNet terms of use](https://shapenet.org/terms). You are only allowed to redistribute the data to your research associates and colleagues provided that they first agree to be bound by these terms and conditions.
If you use this data, please cite the main ShapeNet technical report and the "Semantically-enriched 3D Models for Common-sense Knowledge" workshop paper.
```
@techreport{shapenet2015,
title = {{ShapeNet: An Information-Rich 3D Model Repository}},
author = {Chang, Angel X. and Funkhouser, Thomas and Guibas, Leonidas and Hanrahan, Pat and Huang, Qixing and Li, Zimo and Savarese, Silvio and Savva, Manolis and Song, Shuran and Su, Hao and Xiao, Jianxiong and Yi, Li and Yu, Fisher},
number = {arXiv:1512.03012 [cs.GR]},
institution = {Stanford University --- Princeton University --- Toyota Technological Institute at Chicago},
year = {2015}
}
@article{savva2015semgeo,
title={{Semantically-Enriched 3D Models for Common-sense Knowledge}},
author={Manolis Savva and Angel X. Chang and Pat Hanrahan},
journal = {CVPR 2015 Workshop on Functionality, Physics, Intentionality and Causality},
year = {2015}
}
```
For more information, please contact us at shapenetwebmaster@gmail.com and indicate ShapeNetSem in the title of your email.
| 4,411 | [
[
-0.0179290771484375,
-0.0244293212890625,
0.041900634765625,
0.00731658935546875,
-0.01495361328125,
-0.05279541015625,
0.00138092041015625,
-0.049591064453125,
0.037750244140625,
0.045654296875,
-0.0301055908203125,
-0.04339599609375,
-0.04071044921875,
0.0... |
demo-org/diabetes | 2022-08-30T21:08:59.000Z | [
"task_categories:text-classification",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:en",
"region:us"
] | demo-org | null | null | 0 | 4 | 2022-08-30T21:06:15 | ---
language:
- en
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- text-classification
paperswithcode_id: null
pretty_name: Diabetes
---
# Dataset Card for Auditor_Review
This file is a copy, the original version is hosted at [data.world](https://data.world/rshah/diabetes) | 332 | [
[
0.005046844482421875,
-0.004405975341796875,
0.004596710205078125,
0.0015697479248046875,
-0.052337646484375,
0.0162353515625,
0.03228759765625,
-0.02764892578125,
0.049224853515625,
0.07025146484375,
-0.019317626953125,
-0.03448486328125,
-0.025421142578125,
... |
openclimatefix/era5-reanalysis | 2022-12-01T15:18:54.000Z | [
"license:mit",
"region:us"
] | openclimatefix | null | null | 0 | 4 | 2022-09-02T12:37:58 | ---
license: mit
---
This repo contains converted ECMWF ERA5 reanalysis files for both hourly atmospheric and land variables from Jan 2014 to October 2022. The data has been converted from the downloaded NetCDF files into Zarr using Xarray. Each file is 1 day of reanalysis, and so has 24 timesteps at a 0.25 degree grid resolution. All variables in the reanalysis are included here. | 385 | [
[
-0.0496826171875,
-0.0184173583984375,
0.041168212890625,
0.00982666015625,
-0.0242156982421875,
-0.0254974365234375,
0.00986480712890625,
-0.0419921875,
0.01238250732421875,
0.07757568359375,
-0.061798095703125,
-0.049072265625,
-0.01458740234375,
0.0268554... |
affahrizain/jigsaw-toxic-comment | 2023-02-19T11:51:27.000Z | [
"region:us"
] | affahrizain | null | null | 0 | 4 | 2022-09-06T19:36:24 | ---
dataset_info:
features:
- name: labels
dtype: int64
- name: comment_clean
dtype: string
splits:
- name: train
num_bytes: 57080609
num_examples: 159100
- name: dev
num_bytes: 7809213
num_examples: 22393
- name: test
num_bytes: 22245686
num_examples: 63978
download_size: 13050863
dataset_size: 87135508
---
# Dataset Card for "jigsaw-toxic-comment"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 534 | [
[
-0.0243072509765625,
-0.023345947265625,
0.018341064453125,
0.01496124267578125,
-0.033660888671875,
0.0014820098876953125,
0.028228759765625,
-0.01425933837890625,
0.055023193359375,
0.030975341796875,
-0.05328369140625,
-0.045684814453125,
-0.04669189453125,
... |
ai-forever/school_notebooks_EN | 2023-02-09T18:26:07.000Z | [
"task_categories:image-segmentation",
"task_categories:object-detection",
"source_datasets:original",
"language:en",
"license:mit",
"optical-character-recognition",
"text-detection",
"ocr",
"region:us"
] | ai-forever | null | null | 1 | 4 | 2022-09-08T09:31:05 | ---
language:
- en
license:
- mit
source_datasets:
- original
task_categories:
- image-segmentation
- object-detection
task_ids: []
tags:
- optical-character-recognition
- text-detection
- ocr
---
# School Notebooks Dataset
The images of school notebooks with handwritten notes in English.
The dataset annotation contains end-to-end markup for training detection and OCR models, as well as an end-to-end model for reading text from pages.
## Annotation format
The annotation is in COCO format. The `annotation.json` should have the following dictionaries:
- `annotation["categories"]` - a list of dicts with category info (category names and indexes).
- `annotation["images"]` - a list of dictionaries describing the images; each dictionary must contain the fields:
- `file_name` - name of the image file.
- `id` for image id.
- `annotation["annotations"]` - a list of dictionaries with markup information. Each dictionary describes one polygon from the dataset and must contain the following fields:
- `image_id` - the index of the image on which the polygon is located.
- `category_id` - the polygon’s category index.
- `attributes` - dict with some additional annotation information. In the `translation` subdict you can find text translation for the line.
- `segmentation` - the coordinates of the polygon, a list of numbers - which are coordinate pairs x and y. | 1,416 | [
[
-0.0278472900390625,
-0.0428466796875,
0.0189056396484375,
0.007411956787109375,
-0.035064697265625,
0.018524169921875,
-0.011688232421875,
-0.0213775634765625,
0.0274658203125,
0.042724609375,
-0.0265045166015625,
-0.054473876953125,
-0.053558349609375,
0.0... |
MayaGalvez/multilingual_xnli | 2022-09-08T17:10:15.000Z | [
"region:us"
] | MayaGalvez | null | null | 0 | 4 | 2022-09-08T17:09:55 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
bigscience/xP3megds | 2023-05-30T15:52:11.000Z | [
"task_categories:other",
"annotations_creators:expert-generated",
"annotations_creators:crowdsourced",
"multilinguality:multilingual",
"size_categories:100M<n<1B",
"language:ak",
"language:ar",
"language:as",
"language:bm",
"language:bn",
"language:ca",
"language:code",
"language:en",
"lan... | bigscience | null | null | 0 | 4 | 2022-09-09T08:15:42 | ---
annotations_creators:
- expert-generated
- crowdsourced
language:
- ak
- ar
- as
- bm
- bn
- ca
- code
- en
- es
- eu
- fon
- fr
- gu
- hi
- id
- ig
- ki
- kn
- lg
- ln
- ml
- mr
- ne
- nso
- ny
- or
- pa
- pt
- rn
- rw
- sn
- st
- sw
- ta
- te
- tn
- ts
- tum
- tw
- ur
- vi
- wo
- xh
- yo
- zh
- zu
programming_language:
- C
- C++
- C#
- Go
- Java
- JavaScript
- Lua
- PHP
- Python
- Ruby
- Rust
- Scala
- TypeScript
license:
- apache-2.0
multilinguality:
- multilingual
pretty_name: xP3
size_categories:
- 100M<n<1B
task_categories:
- other
---
# Dataset Card for xP3
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Repository:** https://github.com/bigscience-workshop/xmtf
- **Paper:** [Crosslingual Generalization through Multitask Finetuning](https://arxiv.org/abs/2211.01786)
- **Point of Contact:** [Niklas Muennighoff](mailto:niklas@hf.co)
### Dataset Summary
> xP3 (Crosslingual Public Pool of Prompts) is a collection of prompts & datasets across 46 languages & 16 NLP tasks. It is used for the training of BLOOMZ and mT0, multilingual language models capable of following human instructions in dozens of languages zero-shot.
- **Creation:** The dataset can be recreated using instructions available [here](https://github.com/bigscience-workshop/xmtf#create-xp3). We provide this version to save processing time and ease reproducibility.
- **Languages:** 46 (Can be extended by [recreating with more splits](https://github.com/bigscience-workshop/xmtf#create-xp3))
- **xP3 Dataset Family:**
<table>
<tr>
<th>Name</th>
<th>Explanation</th>
<th>Example models</th>
</tr>
<tr>
<td><a href=https://huggingface.co/datasets/Muennighoff/xP3x>xP3x</a></td>
<td>Mixture of 17 tasks in 277 languages with English prompts</td>
<td>WIP - Join us at Project Aya @<a href=https://cohere.for.ai/>C4AI</a> to help!</td>
</tr>
<tr>
<td><a href=https://huggingface.co/datasets/bigscience/xP3>xP3</a></td>
<td>Mixture of 13 training tasks in 46 languages with English prompts</td>
<td><a href=https://huggingface.co/bigscience/bloomz>bloomz</a> & <a href=https://huggingface.co/bigscience/mt0-xxl>mt0-xxl</a></td>
</tr>
<tr>
<td><a href=https://huggingface.co/datasets/bigscience/xP3mt>xP3mt</a></td>
<td>Mixture of 13 training tasks in 46 languages with prompts in 20 languages (machine-translated from English)</td>
<td><a href=https://huggingface.co/bigscience/bloomz-mt>bloomz-mt</a> & <a href=https://huggingface.co/bigscience/mt0-xxl-mt>mt0-xxl-mt</a></td>
</tr>
<tr>
<td><a href=https://huggingface.co/datasets/bigscience/xP3all>xP3all</a></td>
<td>xP3 + evaluation datasets adding an additional 3 tasks for a total of 16 tasks in 46 languages with English prompts</td>
<td></td>
</tr>
<tr>
<td><a href=https://huggingface.co/datasets/bigscience/xP3megds>xP3megds</a></td>
<td><a href=https://github.com/bigscience-workshop/Megatron-DeepSpeed>Megatron-DeepSpeed</a> processed version of xP3</td>
<td><a href=https://huggingface.co/bigscience/bloomz>bloomz</a></td>
</tr>
<tr>
<td><a href=https://huggingface.co/datasets/Muennighoff/P3>P3</a></td>
<td>Repreprocessed version of the English-only <a href=https://huggingface.co/datasets/bigscience/P3>P3</a> with 8 training tasks</td>
<td><a href=https://huggingface.co/bigscience/bloomz-p3>bloomz-p3</a> & <a href=https://huggingface.co/bigscience/mt0-xxl-p3>mt0-xxl-p3</a></td>
</tr>
</table>
## Dataset Structure
### Data Instances
An example of "train" looks as follows:
```json
{
"inputs": "Sentence 1: Fue académico en literatura metafísica, teología y ciencias clásicas.\nSentence 2: Fue académico en literatura metafísica, teología y ciencia clásica.\nQuestion: Can we rewrite Sentence 1 to Sentence 2? Yes or No?",
"targets": "Yes"
}
```
### Data Fields
The data fields are the same among all splits:
- `inputs`: the natural language input fed to the model
- `targets`: the natural language target that the model has to generate
### Data Splits
The table below summarizes sizes per language (computed from the `merged_{lang}.jsonl` files). Because languages like `tw` consist only of single-sentence translation samples from Flores, their byte percentage is significantly lower than their sample percentage.
|Language|Kilobytes|%|Samples|%|
|--------|------:|-:|---:|-:|
|tw|106288|0.11|265071|0.34|
|bm|107056|0.11|265180|0.34|
|ak|108096|0.11|265071|0.34|
|eu|108112|0.11|269973|0.34|
|ca|110608|0.12|271191|0.34|
|fon|113072|0.12|265063|0.34|
|st|114080|0.12|265063|0.34|
|ki|115040|0.12|265180|0.34|
|tum|116032|0.12|265063|0.34|
|wo|122560|0.13|365063|0.46|
|ln|126304|0.13|365060|0.46|
|as|156256|0.16|265063|0.34|
|or|161472|0.17|265063|0.34|
|kn|165456|0.17|265063|0.34|
|ml|175040|0.18|265864|0.34|
|rn|192992|0.2|318189|0.4|
|nso|229712|0.24|915051|1.16|
|tn|235536|0.25|915054|1.16|
|lg|235936|0.25|915021|1.16|
|rw|249360|0.26|915043|1.16|
|ts|250256|0.26|915044|1.16|
|sn|252496|0.27|865056|1.1|
|xh|254672|0.27|915058|1.16|
|zu|263712|0.28|915061|1.16|
|ny|272128|0.29|915063|1.16|
|ig|325232|0.34|950097|1.2|
|yo|352784|0.37|918416|1.16|
|ne|393680|0.41|315754|0.4|
|pa|523248|0.55|339210|0.43|
|gu|560688|0.59|347499|0.44|
|sw|560896|0.59|1114455|1.41|
|mr|666240|0.7|417269|0.53|
|bn|832720|0.88|428843|0.54|
|ta|924496|0.97|410633|0.52|
|te|1332912|1.4|573364|0.73|
|ur|1918272|2.02|855756|1.08|
|vi|3101408|3.27|1667306|2.11|
|code|4330752|4.56|2707724|3.43|
|hi|4393696|4.63|1543441|1.96|
|zh|4589904|4.83|3560556|4.51|
|id|4606288|4.85|2627392|3.33|
|ar|4677264|4.93|2148955|2.72|
|fr|5546688|5.84|5055942|6.41|
|pt|6129584|6.46|3562772|4.52|
|es|7571808|7.98|5151349|6.53|
|en|37261104|39.25|31495184|39.93|
|total|94941936|100.0|78883588|100.0|
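A per-language summary like the table above can be recomputed from the per-language files; a minimal sketch, assuming one JSON object per line in each `merged_{lang}.jsonl` (the file layout comes from the card, the helper name is illustrative):

```python
import os

def summarize_jsonl_sizes(data_dir, langs):
    """Return (lang, kilobytes, samples) for each merged_{lang}.jsonl file."""
    rows = []
    for lang in langs:
        path = os.path.join(data_dir, f"merged_{lang}.jsonl")
        kilobytes = os.path.getsize(path) // 1024
        with open(path, encoding="utf-8") as f:
            samples = sum(1 for _ in f)  # one JSON object per line
        rows.append((lang, kilobytes, samples))
    # closing "total" row, as in the table above
    rows.append(("total", sum(r[1] for r in rows), sum(r[2] for r in rows)))
    return rows
```

The percentage columns then follow by dividing each row by the `total` row.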
## Dataset Creation
### Source Data
#### Training datasets
- Code Miscellaneous
- [CodeComplex](https://huggingface.co/datasets/codeparrot/codecomplex)
- [Docstring Corpus](https://huggingface.co/datasets/teven/code_docstring_corpus)
- [GreatCode](https://huggingface.co/datasets/great_code)
- [State Changes](https://huggingface.co/datasets/Fraser/python-state-changes)
- Closed-book QA
- [Hotpot QA](https://huggingface.co/datasets/hotpot_qa)
- [Trivia QA](https://huggingface.co/datasets/trivia_qa)
- [Web Questions](https://huggingface.co/datasets/web_questions)
- [Wiki QA](https://huggingface.co/datasets/wiki_qa)
- Extractive QA
- [Adversarial QA](https://huggingface.co/datasets/adversarial_qa)
- [CMRC2018](https://huggingface.co/datasets/cmrc2018)
- [DRCD](https://huggingface.co/datasets/clue)
- [DuoRC](https://huggingface.co/datasets/duorc)
- [MLQA](https://huggingface.co/datasets/mlqa)
- [Quoref](https://huggingface.co/datasets/quoref)
- [ReCoRD](https://huggingface.co/datasets/super_glue)
- [ROPES](https://huggingface.co/datasets/ropes)
- [SQuAD v2](https://huggingface.co/datasets/squad_v2)
- [xQuAD](https://huggingface.co/datasets/xquad)
- TyDI QA
- [Primary](https://huggingface.co/datasets/khalidalt/tydiqa-primary)
- [Goldp](https://huggingface.co/datasets/khalidalt/tydiqa-goldp)
- Multiple-Choice QA
- [ARC](https://huggingface.co/datasets/ai2_arc)
- [C3](https://huggingface.co/datasets/c3)
- [CoS-E](https://huggingface.co/datasets/cos_e)
- [Cosmos](https://huggingface.co/datasets/cosmos)
- [DREAM](https://huggingface.co/datasets/dream)
- [MultiRC](https://huggingface.co/datasets/super_glue)
- [OpenBookQA](https://huggingface.co/datasets/openbookqa)
- [PiQA](https://huggingface.co/datasets/piqa)
- [QUAIL](https://huggingface.co/datasets/quail)
- [QuaRel](https://huggingface.co/datasets/quarel)
- [QuaRTz](https://huggingface.co/datasets/quartz)
- [QASC](https://huggingface.co/datasets/qasc)
- [RACE](https://huggingface.co/datasets/race)
- [SciQ](https://huggingface.co/datasets/sciq)
- [Social IQA](https://huggingface.co/datasets/social_i_qa)
- [Wiki Hop](https://huggingface.co/datasets/wiki_hop)
- [WiQA](https://huggingface.co/datasets/wiqa)
- Paraphrase Identification
- [MRPC](https://huggingface.co/datasets/super_glue)
- [PAWS](https://huggingface.co/datasets/paws)
- [PAWS-X](https://huggingface.co/datasets/paws-x)
- [QQP](https://huggingface.co/datasets/qqp)
- Program Synthesis
- [APPS](https://huggingface.co/datasets/codeparrot/apps)
- [CodeContests](https://huggingface.co/datasets/teven/code_contests)
- [JupyterCodePairs](https://huggingface.co/datasets/codeparrot/github-jupyter-text-code-pairs)
- [MBPP](https://huggingface.co/datasets/Muennighoff/mbpp)
- [NeuralCodeSearch](https://huggingface.co/datasets/neural_code_search)
- [XLCoST](https://huggingface.co/datasets/codeparrot/xlcost-text-to-code)
- Structure-to-text
- [Common Gen](https://huggingface.co/datasets/common_gen)
- [Wiki Bio](https://huggingface.co/datasets/wiki_bio)
- Sentiment
- [Amazon](https://huggingface.co/datasets/amazon_polarity)
- [App Reviews](https://huggingface.co/datasets/app_reviews)
- [IMDB](https://huggingface.co/datasets/imdb)
- [Rotten Tomatoes](https://huggingface.co/datasets/rotten_tomatoes)
- [Yelp](https://huggingface.co/datasets/yelp_review_full)
- Simplification
- [BiSECT](https://huggingface.co/datasets/GEM/BiSECT)
- Summarization
- [CNN Daily Mail](https://huggingface.co/datasets/cnn_dailymail)
- [Gigaword](https://huggingface.co/datasets/gigaword)
- [MultiNews](https://huggingface.co/datasets/multi_news)
- [SamSum](https://huggingface.co/datasets/samsum)
- [Wiki-Lingua](https://huggingface.co/datasets/GEM/wiki_lingua)
- [XLSum](https://huggingface.co/datasets/GEM/xlsum)
- [XSum](https://huggingface.co/datasets/xsum)
- Topic Classification
- [AG News](https://huggingface.co/datasets/ag_news)
- [DBPedia](https://huggingface.co/datasets/dbpedia_14)
- [TNEWS](https://huggingface.co/datasets/clue)
- [TREC](https://huggingface.co/datasets/trec)
- [CSL](https://huggingface.co/datasets/clue)
- Translation
- [Flores-200](https://huggingface.co/datasets/Muennighoff/flores200)
- [Tatoeba](https://huggingface.co/datasets/Helsinki-NLP/tatoeba_mt)
- Word Sense disambiguation
- [WiC](https://huggingface.co/datasets/super_glue)
- [XL-WiC](https://huggingface.co/datasets/pasinit/xlwic)
#### Evaluation datasets (included in [xP3all](https://huggingface.co/datasets/bigscience/xP3all) except for NLI & HumanEval)
- Natural Language Inference (NLI)
- [ANLI](https://huggingface.co/datasets/anli)
- [CB](https://huggingface.co/datasets/super_glue)
- [RTE](https://huggingface.co/datasets/super_glue)
- [XNLI](https://huggingface.co/datasets/xnli)
- Coreference Resolution
- [Winogrande](https://huggingface.co/datasets/winogrande)
- [XWinograd](https://huggingface.co/datasets/Muennighoff/xwinograd)
- Program Synthesis
- [HumanEval](https://huggingface.co/datasets/openai_humaneval)
- Sentence Completion
- [COPA](https://huggingface.co/datasets/super_glue)
- [Story Cloze](https://huggingface.co/datasets/story_cloze)
- [XCOPA](https://huggingface.co/datasets/xcopa)
- [XStoryCloze](https://huggingface.co/datasets/Muennighoff/xstory_cloze)
## Additional Information
### Licensing Information
The dataset is released under Apache 2.0.
### Citation Information
```bibtex
@misc{muennighoff2022crosslingual,
title={Crosslingual Generalization through Multitask Finetuning},
author={Niklas Muennighoff and Thomas Wang and Lintang Sutawika and Adam Roberts and Stella Biderman and Teven Le Scao and M Saiful Bari and Sheng Shen and Zheng-Xin Yong and Hailey Schoelkopf and Xiangru Tang and Dragomir Radev and Alham Fikri Aji and Khalid Almubarak and Samuel Albanie and Zaid Alyafeai and Albert Webson and Edward Raff and Colin Raffel},
year={2022},
eprint={2211.01786},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
### Contributions
Thanks to the contributors of [promptsource](https://github.com/bigscience-workshop/promptsource/graphs/contributors) for adding many prompts used in this dataset. | 12,670 | [embedding vector omitted] |
allenai/wcep_sparse_max | 2022-11-24T15:03:54.000Z | [
"task_categories:summarization",
"task_ids:news-articles-summarization",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:en",
"license:other",
"region:us"
] | allenai | null | null | 0 | 4 | 2022-09-14T20:36:21 | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- en
license:
- other
multilinguality:
- monolingual
pretty_name: WCEP-10
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- summarization
task_ids:
- news-articles-summarization
paperswithcode_id: wcep
train-eval-index:
- config: default
task: summarization
task_id: summarization
splits:
train_split: train
eval_split: test
col_mapping:
document: text
summary: target
metrics:
- type: rouge
name: Rouge
---
This is a copy of the [WCEP-10](https://huggingface.co/datasets/ccdv/WCEP-10) dataset, except that the input source documents of its `test` split have been replaced with documents retrieved by a __sparse__ retriever. The retrieval pipeline used:
- __query__: The `summary` field of each example
- __corpus__: The union of all documents in the `train`, `validation` and `test` splits
- __retriever__: BM25 via [PyTerrier](https://pyterrier.readthedocs.io/en/latest/) with default settings
- __top-k strategy__: `"max"`, i.e. the number of documents retrieved, `k`, is set as the maximum number of documents seen across examples in this dataset, in this case `k==10`
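The card states the retrieval was done with BM25 via PyTerrier; the shape of the pipeline (query = summary, corpus = union of all documents, keep the top `k`) can be sketched with a small self-contained scorer — this is an illustrative stand-in, not the PyTerrier code that produced the results below:

```python
import math
from collections import Counter

def bm25_top_k(query, corpus, k, k1=1.2, b=0.75):
    """Rank `corpus` (list of strings) against `query`; return top-k doc indices."""
    docs = [doc.lower().split() for doc in corpus]
    avgdl = sum(len(d) for d in docs) / len(docs)
    n = len(docs)
    df = Counter()                      # document frequency per term
    for d in docs:
        df.update(set(d))
    scores = []
    for i, d in enumerate(docs):
        tf = Counter(d)
        score = 0.0
        for term in query.lower().split():
            if term not in tf:
                continue
            # a common non-negative BM25 idf variant
            idf = math.log(1 + (n - df[term] + 0.5) / (df[term] + 0.5))
            score += idf * tf[term] * (k1 + 1) / (
                tf[term] + k1 * (1 - b + b * len(d) / avgdl))
        scores.append((score, i))
    return [i for _, i in sorted(scores, reverse=True)[:k]]
```

Under the `"max"` strategy above, `k` is simply the largest number of source documents observed across examples (here `k == 10`).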
Retrieval results on the `train` set:
| Recall@100 | Rprec | Precision@k | Recall@k |
| ----------- | ----------- | ----------- | ----------- |
| 0.8753 | 0.6443 | 0.5919 | 0.6588 |
Retrieval results on the `validation` set:
| Recall@100 | Rprec | Precision@k | Recall@k |
| ----------- | ----------- | ----------- | ----------- |
| 0.8706 | 0.6280 | 0.5988 | 0.6346 |
Retrieval results on the `test` set:
| Recall@100 | Rprec | Precision@k | Recall@k |
| ----------- | ----------- | ----------- | ----------- |
| 0.8836 | 0.6658 | 0.6296 | 0.6746 | | 1,754 | [embedding vector omitted] |
allenai/wcep_sparse_mean | 2022-11-24T15:10:48.000Z | [
"task_categories:summarization",
"task_ids:news-articles-summarization",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:en",
"license:other",
"region:us"
] | allenai | null | null | 0 | 4 | 2022-09-14T20:36:44 | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- en
license:
- other
multilinguality:
- monolingual
pretty_name: WCEP-10
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- summarization
task_ids:
- news-articles-summarization
paperswithcode_id: wcep
train-eval-index:
- config: default
task: summarization
task_id: summarization
splits:
train_split: train
eval_split: test
col_mapping:
document: text
summary: target
metrics:
- type: rouge
name: Rouge
---
This is a copy of the [WCEP-10](https://huggingface.co/datasets/ccdv/WCEP-10) dataset, except that the input source documents of its `test` split have been replaced with documents retrieved by a __sparse__ retriever. The retrieval pipeline used:
- __query__: The `summary` field of each example
- __corpus__: The union of all documents in the `train`, `validation` and `test` splits
- __retriever__: BM25 via [PyTerrier](https://pyterrier.readthedocs.io/en/latest/) with default settings
- __top-k strategy__: `"mean"`, i.e. the number of documents retrieved, `k`, is set as the mean number of documents seen across examples in this dataset, in this case `k==9`
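Under the `"mean"` strategy, `k` is derived from the per-example source-document counts; a sketch of that computation (the rounding choice here is an assumption — the card only states that the mean works out to `k == 9` for this dataset):

```python
def mean_top_k(doc_counts):
    """k for the "mean" strategy: average number of source documents per example."""
    return round(sum(doc_counts) / len(doc_counts))
```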
Retrieval results on the `train` set:
| Recall@100 | Rprec | Precision@k | Recall@k |
| ----------- | ----------- | ----------- | ----------- |
| 0.8753 | 0.6443 | 0.6196 | 0.6237 |
Retrieval results on the `validation` set:
| Recall@100 | Rprec | Precision@k | Recall@k |
| ----------- | ----------- | ----------- | ----------- |
| 0.8706 | 0.6280 | 0.6260 | 0.5989 |
Retrieval results on the `test` set:
| Recall@100 | Rprec | Precision@k | Recall@k |
| ----------- | ----------- | ----------- | ----------- |
| 0.8836 | 0.6658 | 0.6601 | 0.6388 | | 1,751 | [embedding vector omitted] |
Coalth/Centaurs | 2022-09-29T02:13:34.000Z | [
"region:us"
] | Coalth | null | null | 1 | 4 | 2022-09-16T02:10:10 | Entry not found | 15 | [embedding vector omitted] |
psyche/korean_idioms | 2022-10-23T04:02:44.000Z | [
"task_categories:text-classification",
"annotations_creators:machine-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:ko",
"region:us"
] | psyche | null | null | 0 | 4 | 2022-09-16T11:31:37 | ---
annotations_creators:
- machine-generated
language:
- ko
language_creators:
- found
multilinguality:
- monolingual
pretty_name: psyche/korean_idioms
size_categories:
- 1K<n<10K
source_datasets:
- original
tags: []
task_categories:
- text-classification
---
This is a Korean proverb (idiom) dataset for NLI.
The 'question' field contains the meaning of a proverb together with five answer options (multiple choice),
and the 'label' field contains the index (0-4) of the correct answer.
license: cc-by-sa-2.0-kr (original source: the Standard Korean Language Dictionary of the National Institute of Korean Language)
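A record in this schema can be validated as follows; the field values are hypothetical — only the `question`/`label` structure (five options, labels 0-4) comes from the card:

```python
example = {
    # hypothetical record: a proverb's meaning followed by five answer options
    "question": "Meaning: ... 0) option A 1) option B 2) option C 3) option D 4) option E",
    "label": 2,
}

def is_valid(record):
    """The label must index one of the five multiple-choice options (0-4)."""
    return isinstance(record["label"], int) and 0 <= record["label"] <= 4
```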
|Model| psyche/korean_idioms |
|:------:|:---:|
|klue/bert-base|0.7646| | 482 | [embedding vector omitted] |
OxAISH-AL-LLM/pubmed_20k_rct | 2022-09-21T19:40:11.000Z | [
"region:us"
] | OxAISH-AL-LLM | null | null | 0 | 4 | 2022-09-16T15:23:52 | Entry not found | 15 | [embedding vector omitted] |
open-source-metrics/text-to-image-checkpoint-downloads | 2022-10-06T19:28:20.000Z | [
"region:us"
] | open-source-metrics | null | null | 0 | 4 | 2022-09-18T01:30:05 | Entry not found | 15 | [embedding vector omitted] |
teven/webnlg_2020_human_eval | 2022-09-18T21:02:19.000Z | [
"region:us"
] | teven | null | null | 0 | 4 | 2022-09-18T21:01:26 | Entry not found | 15 | [embedding vector omitted] |
kejian/codesearchnet-python-raw | 2022-09-19T06:13:46.000Z | [
"region:us"
] | kejian | null | null | 0 | 4 | 2022-09-19T06:13:32 | Entry not found | 15 | [embedding vector omitted] |
jmercat/risk_biased_dataset | 2023-08-01T19:08:31.000Z | [
"license:cc-by-nc-4.0",
"region:us"
] | jmercat | Dataset of pre-processed samples from a small portion of the Waymo Open Motion Data for our risk-biased prediction task. | @InProceedings{NiMe:2022,
author = {Haruki Nishimura, Jean Mercat, Blake Wulfe, Rowan McAllister},
title = {RAP: Risk-Aware Prediction for Robust Planning},
booktitle = {Proceedings of the 2022 IEEE International Conference on Robot Learning (CoRL)},
month = {December},
year = {2022},
address = {Grafton Road, Auckland CBD, Auckland 1010},
url = {},
} | 0 | 4 | 2022-09-27T22:35:21 | ---
license: cc-by-nc-4.0
---
The code is provided under an Attribution-NonCommercial 4.0 International (CC BY-NC 4.0) license. Under the license, the code is provided royalty-free for non-commercial purposes only. The code may be covered by patents; if you want to use the code for commercial purposes, please contact us for a different license.
This dataset is a pre-processed small sample of the Waymo Open Motion Dataset intended for illustration purposes only.
| 471 | [embedding vector omitted] |
autoevaluator/benchmark-dummy-data | 2022-11-18T13:19:56.000Z | [
"region:us"
] | autoevaluator | null | null | 0 | 4 | 2022-09-28T07:57:08 | # Dummy Dataset for AutoTrain Benchmark
This dataset contains dummy data that's needed to create AutoTrain projects for benchmarks like [RAFT](https://huggingface.co/spaces/ought/raft-leaderboard). See [here](https://github.com/huggingface/hf_benchmarks) for more details. | 273 | [embedding vector omitted] |
emoneil/reflections-in-peer-counseling | 2022-10-14T03:59:04.000Z | [
"task_categories:summarization",
"task_categories:text-generation",
"task_categories:conversational",
"task_ids:dialogue-generation",
"annotations_creators:expert-generated",
"size_categories:1K<n<10K",
"gpt3",
"natural language processing",
"natural language generation",
"peer counseling",
"reg... | emoneil | null | null | 0 | 4 | 2022-09-30T04:21:28 | ---
annotations_creators:
- expert-generated
language: []
language_creators: []
license: []
pretty_name: Reflections in Peer Counseling
size_categories:
- 1K<n<10K
source_datasets: []
tags:
- gpt3
- natural language processing
- natural language generation
- peer counseling
task_categories:
- summarization
- text-generation
- conversational
task_ids:
- dialogue-generation
---
# Dataset Card for Reflections in Peer Counseling
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper: Automatic Reflection Generation for Peer-to-Peer Counseling**
- **Point of Contact: emoneil@sas.upenn.edu**
### Dataset Summary
The dataset derives from conversations between clients and counselors on a large peer-to-peer online counseling service. There are a total of 1061 observations across training and testing datasets, with 50 additional randomly sampled examples used in defining the few-shot learning prompt or for validation purposes in tuning hyperparameters, thus totaling 1111 observations across these sets. These observations were sourced from a larger dataset consisting of annotations of several different clinical counseling skills. We thus focus on the annotations of counselor reflections. The counselor reflections were annotated at utterance level with counselor verbal behaviors using the Motivational Interviewing Treatment Integrity 4.2 (MITI) and the Motivational Interviewing Skill Code 2.5 (MISC) manuals. Thus, the entire dataset consists of conversational context-counselor reflection pairs.
### Supported Tasks and Leaderboards
The dataset was used for conditioning and tuning generative models for generating reflection statements in the domain of peer-to-peer counseling.
### Languages
The language in the dataset is English.
## Dataset Structure
### Data Instances
Each instance consists of the chat room id of the conversation in which the dialogue occurred, the prompt which is the conversational context that immediately precedes the counselor reflection (including previous utterances from either the client or counselor up until and including the most recent prior client message that immediately followed a counselor’s message), and the completion which is the counselor reflection.
```
{
'chat_id': "1234567",
'prompt': "Client: I'm 19, he's 25. He's not very considerate of how I feel but says he cares about me and loves me.\nCounselor:",
'completion': " The words are easy, actions are needed. Guys who are 25 just desire to have different experiences.\n\n",
}
```
### Data Fields
* `chat_id`: an integer defining the chat id of the conversation
* `prompt`: a string corresponding to the conversational context preceding the counselor reflection with the messages separated by new line characters and each utterance prepended by 'Client:' or 'Counselor:'. The string ends with 'Counselor:' to indicate that it is followed by the counselor completion described below.
* `completion`: a string corresponding to the counselor reflection
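The `prompt` format above (speaker-prefixed turns joined by newlines, ending with the `Counselor:` cue) can be reconstructed from a list of turns; a minimal sketch, with the helper name being illustrative:

```python
def build_prompt(turns):
    """turns: list of (speaker, text) pairs, speaker in {"Client", "Counselor"}.

    Returns the conversational-context string ending with the "Counselor:" cue
    that the `completion` field is expected to follow.
    """
    lines = [f"{speaker}: {text}" for speaker, text in turns]
    return "\n".join(lines) + "\nCounselor:"
```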
### Data Splits
The dataset is split into training, testing, and a small set of 50 examples used either for designing the few-shot learning prompt or tuning hyperparameters. 911 examples were used for training. 350 of these examples also constitute a reduced training set used in comparative experiments. 150 examples were used for testing. 50 of these testing examples (randomly selected) were used in the human evaluation. We ensured that the chat identifiers for messages in the test set uniquely differed from those included in the training set.
## Dataset Creation
### Curation Rationale
Reflective listening is a critical skill in peer-to-peer counseling that is only effective when tailored to the context. Thus, we wanted to home in on this particular skill and explore the potential of state-of-the-art language models for text generation in this domain.
### Source Data
#### Initial Data Collection and Normalization
The dataset was created by filtering the larger dataset of utterances annotated for many different counseling skills to only those counselor messages annotated as reflections. Then, the prompt instances were created by identifying the preceding messages for each of these counselor reflection instances. After the prompts were initially created, prompts with less than or equal to five words were removed.
The author created reference reflections for each of the 350 training example prompts in the reduced training set and each of the 150 testing example prompts. In creating a reference reflection given each conversational context, the author intended to simulate responding to the client in roughly the same time a counselor would as if this turn was embedded in a conversation the client was having with the author. This gauging of time is based on the author’s experience in volunteering as a counselor at crisis hotlines. It is possible that the reference reflections were created in roughly even less time than an average counselor response given that there were hundreds of conversational contexts for which reflections needed to be created.
#### Who are the source language producers?
The 'client' messages are utterances of those seeking mental health support on a large online counseling service platform. The 'counselor' messages are utterances of minimally-trained peer counselors of this large online counseling service.
For each of the 350 training example prompts in the reduced training set and each of the 150 testing example prompts, a reference reflection was also created by the author.
### Annotations
#### Annotation process
The human evaluation examined text of generative models fine-tuned on the full training set, a reduced training set, and reference reflections; a few-shot learning model; the actual counselor; and the reference reflection.
We administered a survey through Amazon Mechanical Turk Developer Sandbox. 50 of the testing prompts were provided along with the corresponding six response sources. Provided with the conversational context, the annotators evaluated responses based on three criteria: fluency, resemblance of reflection, and overall preference. Thus, for each context, evaluators measured the fluency, reflection resemblance, and overall preference for all six candidate responses.
We used a variation of Efficient Annotation of Scalar Labels (EASL), a hybrid approach between direct assessment and online pairwise ranking aggregation and rank-based magnitude estimation. Evaluators saw all six responses at once (without knowledge of each response's origin) and used a sliding scale from 1 to 5 to rate the responses based on each of the three dimensions. The order of the model responses for each conversational context was randomized. We provided examples of response ratings for ratings of 1 and 5 on the overall fluency and reflection resemblance dimensions. However, we did not include an example for overall preference, noting its subjectivity.
Fluency refers to the response's overall fluency and human-likeness. In the instructions, we noted non-capitalized words and colloquial language are acceptable and not to be considered fluency errors. Reflection resemblance refers to whether the response captures and returns to the client something the client has said. Overall preference refers to the extent to which the evaluator likes the response.
Using Krippendorff’s alpha, we measured inter-annotator agreement, obtaining alpha values of -0.0369, 0.557, and 0.358 for overall fluency, reflection resemblance, and overall preference, respectively. Although these agreement values are low, the 0.557 inter-annotator agreement we obtained for reflection resemblance is notably higher than the inter-annotator agreement obtained for reflection likeness in the most relevant prior work.
#### Who are the annotators?
The three annotators recruited for the human evaluation were familiar with counseling reflections. All three annotators have worked with this large online counseling service dataset with IRB approval. They are quite familiar with motivational interviewing codes, annotating messages and using large language models for mass labeling.
### Personal and Sensitive Information
Due to the sensitive nature of this dataset and privacy concerns, we are unable to publicly share the data.
## Considerations for Using the Data
### Social Impact of Dataset
This dataset of reflections in peer-to-peer counseling can be used as a reference point in understanding and evaluating counselor clinical skills and furthering the potential of language technology to be applied in this space. Given the sensitive nature of the mental health care context and the minimal training of these counselors, the use of such data requires care in understanding the limitations of technology defined based on this language.
### Discussion of Biases
Much of the language of conversations on this online counseling service platform is very informal and some client and counselor utterances may also contain pejorative language.
As for the generated text assessed in the human evaluation of this work, it is important to note that GPT-3 was trained on over 45 terabytes of data from the internet and books, and large volumes of data collected from online sources will inevitably contain biases that may be captured. There may thus be inadvertent discrimination against subclasses of particular protected groups. Using generated responses as a source of guidance rather than using generative systems as the counselors themselves may be able to balance the benefits and risks of using artificial intelligence in delicate mental health settings. It is imperative that such systems are not misused by companies seeking to maximize efficiency and minimize cost.
The reference reflections in this work were created by the author, whose experience with counseling and motivational interviewing derives from over one hundred hours of training at a teen-to-teen crisis hotline and textline service and experience through a research fellowship developing and user testing a platform for nurses to practice and grow their motivational interviewing skills. Therefore, the reference reflections may not be as clinically precise as are possible from a medical professional, and the diversity of reflections is inherently limited.
### Other Known Limitations
## Additional Information
### Dataset Curators
Developed by Emma O'Neil, João Sedoc, Diyi Yang, Haiyi Zhu, Lyle Ungar.
### Licensing Information
### Citation Information
### Contributions
Thanks to [@emoneil](https://github.com/emoneil) for adding this dataset. | 12,004 | [embedding vector omitted] |
juliensimon/autotrain-data-chest-xray-demo | 2022-10-06T09:15:55.000Z | [
"task_categories:image-classification",
"region:us"
] | juliensimon | null | null | 0 | 4 | 2022-10-06T08:25:44 | ---
task_categories:
- image-classification
---
# AutoTrain Dataset for project: chest-xray-demo
## Dataset Description
This dataset has been automatically processed by AutoTrain for project chest-xray-demo.
The original dataset is located at https://www.kaggle.com/datasets/paultimothymooney/chest-xray-pneumonia
## Dataset Structure
```
├── train
│ ├── NORMAL
│ └── PNEUMONIA
└── valid
├── NORMAL
└── PNEUMONIA
```
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"image": "<2090x1858 L PIL image>",
"target": 0
},
{
"image": "<1422x1152 L PIL image>",
"target": 0
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"image": "Image(decode=True, id=None)",
"target": "ClassLabel(num_classes=2, names=['NORMAL', 'PNEUMONIA'], id=None)"
}
```
### Dataset Splits
This dataset is split into a train and validation split. The split sizes are as follows:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 5216 |
| valid | 624 |
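The directory layout above is the standard image-folder convention, where class labels come from subdirectory names; a sketch of how the folders map to the `ClassLabel` indices (`NORMAL` → 0, `PNEUMONIA` → 1) — the scanning helper is illustrative, not part of AutoTrain:

```python
from pathlib import Path

def folder_class_labels(root, split="train"):
    """Map sorted class-directory names (e.g. NORMAL, PNEUMONIA) to integer labels."""
    classes = sorted(p.name for p in (Path(root) / split).iterdir() if p.is_dir())
    return {name: idx for idx, name in enumerate(classes)}
```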
| 1,106 | [embedding vector omitted] |
meliascosta/wiki_academic_subjects | 2022-12-05T20:16:02.000Z | [
"task_categories:text-classification",
"task_ids:multi-label-classification",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"license:cc-by-3.0",
"hierarchical",
"acade... | meliascosta | null | null | 4 | 4 | 2022-10-06T16:08:56 | ---
license: cc-by-3.0
annotations_creators:
- crowdsourced
language:
- en
language_creators:
- crowdsourced
multilinguality:
- monolingual
paperswithcode_id: wikitext-2
pretty_name: Wikipedia Outline of Academic Disciplines
size_categories:
- 10K<n<100K
source_datasets:
- original
tags:
- hierarchical
- academic
- tree
- dag
- topics
- subjects
task_categories:
- text-classification
task_ids:
- multi-label-classification
---
# Dataset Card for Wiki Academic Disciplines`
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset was created from the [English wikipedia](https://meta.wikimedia.org/wiki/Data_dump_torrents#English_Wikipedia) dump of January 2022.
The main goal was to train a hierarchical classifier of academic subjects using [HiAGM](https://github.com/Alibaba-NLP/HiAGM).
### Supported Tasks and Leaderboards
Text classification - No leaderboard at the moment.
### Languages
English
## Dataset Structure
The dataset consists of groups of labeled text chunks (tokenized by spaces and with stopwords removed).
Labels are organized in a hierarchy (a DAG with a special Root node) of academic subjects.
Nodes correspond to entries in the [outline of academic disciplines](https://en.wikipedia.org/wiki/Outline_of_academic_disciplines) article from Wikipedia.
### Data Instances
Data is split in train/test/val, each on a separate `.jsonl` file. The label hierarchy is listed as a TAB-separated adjacency list on a `.taxonomy` file.
### Data Fields
JSONL files contain only two fields: a "token" field which holds the text tokens and a "label" field which holds a list of labels for that text.
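A minimal sketch of reading one such line; the field values here are made up for illustration and are not taken from the actual data:

```python
import json

# Hypothetical line in the format described above: a "token" field with the
# text tokens and a "label" field with the list of labels for that text.
line = '{"token": ["graph", "vertex", "edge"], "label": ["Root", "Formal sciences", "Mathematics"]}'
example = json.loads(line)
tokens, labels = example["token"], example["label"]
print(len(tokens), labels[-1])  # → 3 Mathematics
```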
### Data Splits
80/10/10 TRAIN/TEST/VAL schema
## Dataset Creation
All texts were extracted following the linked articles on [outline of academic disciplines](https://en.wikipedia.org/wiki/Outline_of_academic_disciplines)
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
Wiki Dump
#### Who are the source language producers?
Wikipedia community.
### Annotations
#### Annotation process
Texts were automatically assigned to their linked academic discipline.
#### Who are the annotators?
Wikipedia Community.
### Personal and Sensitive Information
All information is public.
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
Creative Commons 3.0 (see [Wikipedia:Copyrights](https://en.wikipedia.org/wiki/Wikipedia:Copyrights))
### Citation Information
1. Zhou, Jie, et al. "Hierarchy-aware global model for hierarchical text classification." Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. 2020.
### Contributions
Thanks to [@meliascosta](https://github.com/meliascosta) for adding this dataset.
| 4,159 | [
[
-0.037994384765625,
-0.049591064453125,
0.0033168792724609375,
-0.0006718635559082031,
0.00206756591796875,
0.0128631591796875,
-0.0197601318359375,
-0.0325927734375,
0.032867431640625,
0.0259552001953125,
-0.046783447265625,
-0.07763671875,
-0.04180908203125,
... |
argilla/go_emotions_multi-label | 2022-10-07T13:22:38.000Z | [
"region:us"
] | argilla | null | null | 0 | 4 | 2022-10-07T13:22:29 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
eliolio/docvqa | 2022-10-11T21:10:16.000Z | [
"task_ids:document-question-answering",
"language:en",
"arxiv:2007.00398",
"region:us"
] | eliolio | null | null | 0 | 4 | 2022-10-11T18:29:55 | ---
language:
- en
paperswithcode_id: docvqa
pretty_name: DocVQA - A Dataset for VQA on Document Images
task_ids:
- document-question-answering
---
# DocVQA: A Dataset for VQA on Document Images
The DocVQA dataset can be downloaded from the [challenge page](https://rrc.cvc.uab.es/?ch=17) in RRC portal ("Downloads" tab).
## Dataset Structure
DocVQA comprises 50,000 questions framed on 12,767 images. The data is split randomly in an 80-10-10 ratio into train, validation and test splits.
- Train split: 39,463 questions and 10,194 images
- Validation split: 5,349 questions and 1,286 images
- Test split: 5,188 questions and 1,287 images
## Resources and Additional Information
- More information can be found on the [challenge page](https://rrc.cvc.uab.es/?ch=17) and in the [DocVQA paper](https://arxiv.org/abs/2007.00398).
- Document images are taken from the [UCSF Industry Documents Library](https://www.industrydocuments.ucsf.edu/). It consists of a mix of printed, typewritten and handwritten content. A wide variety of document types appears in this dataset including letters, memos, notes, reports etc.
## Citation Information
```
@InProceedings{mathew2021docvqa,
author = {Mathew, Minesh and Karatzas, Dimosthenis and Jawahar, CV},
title = {Docvqa: A dataset for vqa on document images},
booktitle = {Proceedings of the IEEE/CVF winter conference on applications of computer vision},
year = {2021},
pages = {2200--2209},
}
``` | 1,477 | [
[
-0.02557373046875,
-0.026885986328125,
0.03765869140625,
-0.01385498046875,
-0.0189666748046875,
-0.01253509521484375,
0.0276947021484375,
-0.0160675048828125,
-0.01119232177734375,
0.05926513671875,
-0.038482666015625,
-0.048065185546875,
-0.03509521484375,
... |
allenai/multixscience_dense_max | 2022-11-18T19:56:15.000Z | [
"task_categories:summarization",
"annotations_creators:found",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"license:unknown",
"region:us"
] | allenai | null | null | 1 | 4 | 2022-10-12T13:29:58 | ---
annotations_creators:
- found
language_creators:
- found
language:
- en
license:
- unknown
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- summarization
paperswithcode_id: multi-xscience
pretty_name: Multi-XScience
---
This is a copy of the [Multi-XScience](https://huggingface.co/datasets/multi_x_science_sum) dataset, except the input source documents of its `test` split have been replaced by a __dense__ retriever. The retrieval pipeline used:
- __query__: The `related_work` field of each example
- __corpus__: The union of all documents in the `train`, `validation` and `test` splits
- __retriever__: [`facebook/contriever-msmarco`](https://huggingface.co/facebook/contriever-msmarco) via [PyTerrier](https://pyterrier.readthedocs.io/en/latest/) with default settings
- __top-k strategy__: `"max"`, i.e. the number of documents retrieved, `k`, is set as the maximum number of documents seen across examples in this dataset, in this case `k==20`
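As a rough sketch of how the Precision@k and Recall@k figures in the tables are defined (this is not the actual evaluation code behind these numbers, which used PyTerrier):

```python
# Precision@k: fraction of the top-k retrieved documents that are relevant.
# Recall@k: fraction of all relevant documents found within the top k.
def precision_recall_at_k(ranked_ids, relevant_ids, k):
    top_k = ranked_ids[:k]
    hits = sum(1 for doc_id in top_k if doc_id in relevant_ids)
    return hits / k, hits / len(relevant_ids)

# Toy example: 2 of the top 4 results are relevant, out of 3 relevant docs.
p_at_k, r_at_k = precision_recall_at_k(["d1", "d2", "d3", "d4"], {"d1", "d3", "d9"}, k=4)
print(p_at_k, round(r_at_k, 4))  # → 0.5 0.6667
```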
Retrieval results on the `train` set:
| Recall@100 | Rprec | Precision@k | Recall@k |
| ----------- | ----------- | ----------- | ----------- |
| 0.5270 | 0.2005 | 0.0573 | 0.3785 |
Retrieval results on the `validation` set:
| Recall@100 | Rprec | Precision@k | Recall@k |
| ----------- | ----------- | ----------- | ----------- |
| 0.5310 | 0.2026 | 0.059 | 0.3831 |
Retrieval results on the `test` set:
| Recall@100 | Rprec | Precision@k | Recall@k |
| ----------- | ----------- | ----------- | ----------- |
| 0.5229 | 0.2081 | 0.058 | 0.3794 | | 1,570 | [
[
-0.023773193359375,
-0.00608062744140625,
0.0162506103515625,
0.00861358642578125,
-0.01065826416015625,
0.004383087158203125,
-0.01316070556640625,
0.006744384765625,
0.040771484375,
0.03326416015625,
-0.041473388671875,
-0.03533935546875,
-0.049224853515625,
... |
allenai/wcep_dense_max | 2022-11-18T20:00:07.000Z | [
"task_categories:summarization",
"task_ids:news-articles-summarization",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:en",
"license:other",
"region:us"
] | allenai | null | null | 0 | 4 | 2022-10-12T14:08:37 | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- en
license:
- other
multilinguality:
- monolingual
pretty_name: WCEP-10
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- summarization
task_ids:
- news-articles-summarization
paperswithcode_id: wcep
train-eval-index:
- config: default
task: summarization
task_id: summarization
splits:
train_split: train
eval_split: test
col_mapping:
document: text
summary: target
metrics:
- type: rouge
name: Rouge
---
This is a copy of the [WCEP-10](https://huggingface.co/datasets/ccdv/WCEP-10) dataset, except the input source documents of its `test` split have been replaced by a __dense__ retriever. The retrieval pipeline used:
- __query__: The `summary` field of each example
- __corpus__: The union of all documents in the `train`, `validation` and `test` splits
- __retriever__: [`facebook/contriever-msmarco`](https://huggingface.co/facebook/contriever-msmarco) via [PyTerrier](https://pyterrier.readthedocs.io/en/latest/) with default settings
- __top-k strategy__: `"max"`, i.e. the number of documents retrieved, `k`, is set as the maximum number of documents seen across examples in this dataset, in this case `k==10`
Retrieval results on the `train` set:
| Recall@100 | Rprec | Precision@k | Recall@k |
| ----------- | ----------- | ----------- | ----------- |
| 0.8590 | 0.6490 | 0.5967 | 0.6631 |
Retrieval results on the `validation` set:
| Recall@100 | Rprec | Precision@k | Recall@k |
| ----------- | ----------- | ----------- | ----------- |
| 0.8578 | 0.6326 | 0.6040 | 0.6401 |
Retrieval results on the `test` set:
| Recall@100 | Rprec | Precision@k | Recall@k |
| ----------- | ----------- | ----------- | ----------- |
| 0.8678 | 0.6631 | 0.6301 | 0.6740 | | 1,832 | [
[
-0.035614013671875,
-0.00960540771484375,
0.019378662109375,
0.0143890380859375,
-0.0164031982421875,
-0.0054473876953125,
-0.01873779296875,
-0.0022449493408203125,
0.0240478515625,
0.0347900390625,
-0.034423828125,
-0.045440673828125,
-0.054443359375,
0.00... |
allenai/multinews_dense_max | 2022-11-11T01:29:44.000Z | [
"task_categories:summarization",
"task_ids:news-articles-summarization",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"license:other",
"region:us"
] | allenai | null | null | 0 | 4 | 2022-10-12T19:15:14 | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- en
license:
- other
multilinguality:
- monolingual
pretty_name: Multi-News
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- summarization
task_ids:
- news-articles-summarization
paperswithcode_id: multi-news
train-eval-index:
- config: default
task: summarization
task_id: summarization
splits:
train_split: train
eval_split: test
col_mapping:
document: text
summary: target
metrics:
- type: rouge
name: Rouge
---
This is a copy of the [Multi-News](https://huggingface.co/datasets/multi_news) dataset, except the input source documents of its `test` split have been replaced by a __dense__ retriever. The retrieval pipeline used:
- __query__: The `summary` field of each example
- __corpus__: The union of all documents in the `train`, `validation` and `test` splits
- __retriever__: [`facebook/contriever-msmarco`](https://huggingface.co/facebook/contriever-msmarco) via [PyTerrier](https://pyterrier.readthedocs.io/en/latest/) with default settings
- __top-k strategy__: `"max"`, i.e. the number of documents retrieved, `k`, is set as the maximum number of documents seen across examples in this dataset, in this case `k==10`
Retrieval results on the `train` set:
| Recall@100 | Rprec | Precision@k | Recall@k |
| ----------- | ----------- | ----------- | ----------- |
| 0.8661 | 0.6867 | 0.2118 | 0.7966 |
Retrieval results on the `validation` set:
| Recall@100 | Rprec | Precision@k | Recall@k |
| ----------- | ----------- | ----------- | ----------- |
| 0.8626 | 0.6859 | 0.2083 | 0.7949 |
Retrieval results on the `test` set:
| Recall@100 | Rprec | Precision@k | Recall@k |
| ----------- | ----------- | ----------- | ----------- |
| 0.8625 | 0.6927 | 0.2096 | 0.7971 | | 1,841 | [
[
-0.0289306640625,
-0.0244903564453125,
0.015960693359375,
0.01360321044921875,
-0.0206451416015625,
-0.004119873046875,
-0.020843505859375,
0.008941650390625,
0.032440185546875,
0.032379150390625,
-0.03948974609375,
-0.043487548828125,
-0.0570068359375,
0.00... |
allenai/multinews_dense_mean | 2022-11-19T04:38:47.000Z | [
"task_categories:summarization",
"task_ids:news-articles-summarization",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"license:other",
"region:us"
] | allenai | null | null | 0 | 4 | 2022-10-12T19:17:57 | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- en
license:
- other
multilinguality:
- monolingual
pretty_name: Multi-News
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- summarization
task_ids:
- news-articles-summarization
paperswithcode_id: multi-news
train-eval-index:
- config: default
task: summarization
task_id: summarization
splits:
train_split: train
eval_split: test
col_mapping:
document: text
summary: target
metrics:
- type: rouge
name: Rouge
---
This is a copy of the [Multi-News](https://huggingface.co/datasets/multi_news) dataset, except the input source documents of its `train`, `validation` and `test` splits have been replaced by a __dense__ retriever. The retrieval pipeline used:
- __query__: The `summary` field of each example
- __corpus__: The union of all documents in the `train`, `validation` and `test` splits
- __retriever__: [`facebook/contriever-msmarco`](https://huggingface.co/facebook/contriever-msmarco) via [PyTerrier](https://pyterrier.readthedocs.io/en/latest/) with default settings
- __top-k strategy__: `"mean"`, i.e. the number of documents retrieved, `k`, is set as the mean number of documents seen across examples in this dataset, in this case `k==3`
Retrieval results on the `train` set:
| Recall@100 | Rprec | Precision@k | Recall@k |
| ----------- | ----------- | ----------- | ----------- |
| 0.8661 | 0.6867 | 0.5936 | 0.6917 |
Retrieval results on the `validation` set:
| Recall@100 | Rprec | Precision@k | Recall@k |
| ----------- | ----------- | ----------- | ----------- |
| 0.8626 | 0.6859 | 0.5874 | 0.6925 |
Retrieval results on the `test` set:
| Recall@100 | Rprec | Precision@k | Recall@k |
| ----------- | ----------- | ----------- | ----------- |
| 0.8625 | 0.6927 | 0.5938 | 0.6993 | | 1,870 | [
[
-0.029266357421875,
-0.0243072509765625,
0.0159912109375,
0.01507568359375,
-0.0213470458984375,
-0.005573272705078125,
-0.02081298828125,
0.007266998291015625,
0.03350830078125,
0.034393310546875,
-0.039642333984375,
-0.045440673828125,
-0.056915283203125,
... |
arize-ai/beer_reviews_label_drift_neutral | 2022-10-19T13:19:17.000Z | [
"task_categories:text-classification",
"task_ids:sentiment-classification",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"language:en",
"license:mit",
"region:us"
] | arize-ai | This dataset was crafted to be used in our tutorial [Link to the tutorial when
ready]. It consists of product reviews from an e-commerce store. The reviews
are labeled on a scale from 1 to 5 (stars). The training and validation sets are
fully composed of reviews written in English. However, the production set has
some reviews written in Spanish. At Arize, we work to surface this issue and
help you solve it. | # @InProceedings{huggingface:dataset,
# title = {A great new dataset},
# author={huggingface, Inc.
# },
# year={2020}
# }
# | 0 | 4 | 2022-10-19T13:16:00 | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- en
license:
- mit
multilinguality:
- monolingual
pretty_name: sentiment-classification-reviews-with-drift
size_categories:
- 10K<n<100K
task_categories:
- text-classification
task_ids:
- sentiment-classification
---
# Dataset Card for `reviews_with_drift`
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [language](#language)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
### Dataset Summary
This dataset was crafted to be used in our tutorial [Link to the tutorial when ready]. It consists of a large Movie Review Dataset mixed with some reviews from a Hotel Review Dataset. The training/validation sets are obtained purely from the Movie Review Dataset, while the production set is mixed. Some other features have been added (`age`, `gender`, `context`), as well as a made-up timestamp `prediction_ts` of when the inference took place.
### Supported Tasks and Leaderboards
`text-classification`, `sentiment-classification`: The dataset is mainly used for text classification: given the text, predict the sentiment (positive or negative).
### language
Text is mainly written in English.
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
[More Information Needed]
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
[More Information Needed]
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@fjcasti1](https://github.com/fjcasti1) for adding this dataset. | 3,309 | [
[
-0.04541015625,
-0.0330810546875,
0.0184478759765625,
0.00952911376953125,
-0.0277557373046875,
0.011993408203125,
-0.0245208740234375,
-0.0144500732421875,
0.045440673828125,
0.045623779296875,
-0.0743408203125,
-0.07159423828125,
-0.039642333984375,
0.0027... |
SALT-NLP/FLUE-FiQA | 2022-10-21T17:29:14.000Z | [
"license:cc-by-3.0",
"region:us"
] | SALT-NLP | null | null | 2 | 4 | 2022-10-19T23:39:48 | ---
license: cc-by-3.0
---
## Dataset Summary
- **Homepage:** https://sites.google.com/view/salt-nlp-flang
- **Models:** https://huggingface.co/SALT-NLP/FLANG-BERT
- **Repository:** https://github.com/SALT-NLP/FLANG
## FLUE
FLUE (Financial Language Understanding Evaluation) is a comprehensive and heterogeneous benchmark built from 5 diverse financial domain-specific datasets.
Sentiment Classification: [Financial PhraseBank](https://huggingface.co/datasets/financial_phrasebank)\
Sentiment Analysis, Question Answering: [FiQA 2018](https://huggingface.co/datasets/SALT-NLP/FLUE-FiQA)\
News Headlines Classification: [Headlines](https://www.kaggle.com/datasets/daittan/gold-commodity-news-and-dimensions)\
Named Entity Recognition: [NER](https://huggingface.co/datasets/SALT-NLP/FLUE-NER)\
Structure Boundary Detection: [FinSBD3](https://sites.google.com/nlg.csie.ntu.edu.tw/finweb2021/shared-task-finsbd-3)
## Dataset Structure
The FiQA dataset has a corpus, queries and qrels (relevance judgments file). They are in the following format:
- `corpus` file: a `.jsonl` file (jsonlines) that contains a list of dictionaries, each with three fields `_id` with unique document identifier, `title` with document title (optional) and `text` with document paragraph or passage. For example: `{"_id": "doc1", "title": "Albert Einstein", "text": "Albert Einstein was a German-born...."}`
- `queries` file: a `.jsonl` file (jsonlines) that contains a list of dictionaries, each with two fields `_id` with unique query identifier and `text` with query text. For example: `{"_id": "q1", "text": "Who developed the mass-energy equivalence formula?"}`
- `qrels` file: a `.tsv` file (tab-separated) that contains three columns, i.e. the `query-id`, `corpus-id` and `score`, in this order. The first row is a header. For example: `q1 doc1 1`
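A small sketch of parsing a qrels file in the layout described above; the file content here is a made-up stand-in, not actual FiQA data:

```python
import csv
import io

# Hypothetical qrels content: a header row, then tab-separated
# query-id, corpus-id, score triples.
qrels_tsv = "query-id\tcorpus-id\tscore\nq1\tdoc1\t1\nq1\tdoc2\t0\n"
reader = csv.reader(io.StringIO(qrels_tsv), delimiter="\t")
next(reader)  # skip the header row
qrels = {(query_id, corpus_id): int(score) for query_id, corpus_id, score in reader}
print(qrels[("q1", "doc1")])  # → 1
```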
| 1,841 | [
[
-0.034271240234375,
-0.0423583984375,
0.0140533447265625,
0.02362060546875,
0.0007982254028320312,
0.0039215087890625,
-0.0210723876953125,
-0.0132293701171875,
0.01215362548828125,
0.02386474609375,
-0.0189971923828125,
-0.0538330078125,
-0.03533935546875,
... |
cjvt/slo_thesaurus | 2022-10-20T12:23:03.000Z | [
"task_categories:other",
"annotations_creators:machine-generated",
"language_creators:machine-generated",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"language:sl",
"license:cc-by-sa-4.0",
"sopomenke",
"synonyms",
"region:us"
] | cjvt | This is an automatically created Slovene thesaurus from Slovene data available in a comprehensive
English–Slovenian dictionary, a monolingual dictionary, and a corpus. A network analysis on the bilingual dictionary
word co-occurrence graph was used, together with additional information from the distributional thesaurus data
available as part of the Sketch Engine tool and extracted from the 1.2 billion word Gigafida corpus and the
monolingual dictionary. | @article{krek2017translation,
title={From translation equivalents to synonyms: creation of a Slovene thesaurus using word co-occurrence network analysis},
author={Krek, Simon and Laskowski, Cyprian and Robnik-{\v{S}}ikonja, Marko},
journal={Proceedings of eLex},
pages={93--109},
year={2017}
} | 0 | 4 | 2022-10-20T05:56:11 | ---
annotations_creators:
- machine-generated
language:
- sl
language_creators:
- machine-generated
license:
- cc-by-sa-4.0
multilinguality:
- monolingual
pretty_name: Thesaurus of Modern Slovene 1.0
size_categories:
- 100K<n<1M
source_datasets: []
tags:
- sopomenke
- synonyms
task_categories:
- other
task_ids: []
---
# Dataset Card for Thesaurus of Modern Slovene 1.0
Also known as "Sopomenke 1.0". Available in application form online: https://viri.cjvt.si/sopomenke/slv/.
### Dataset Summary
This is an automatically created Slovene thesaurus from Slovene data available in a comprehensive English–Slovenian dictionary, a monolingual dictionary, and a corpus. A network analysis on the bilingual dictionary word co-occurrence graph was used, together with additional information from the distributional thesaurus data available as part of the Sketch Engine tool and extracted from the 1.2 billion word Gigafida corpus and the monolingual dictionary.
For a detailed description of the data, please see the paper Krek et al. (2017).
### Supported Tasks and Leaderboards
Other (the data is a knowledge base).
### Languages
Slovenian.
## Dataset Structure
### Data Instances
Each entry is stored in its own instance. The following instance contains the metadata for the `headword` "abeceda" (EN: "alphabet").
```
{
'id_headword': 'th.12',
'headword': 'abeceda',
'groups_core': [],
'groups_near': [
{
'id_words': ['th.12.1', 'th.12.2'],
'words': ['pisava', 'črkopis'],
'scores': [0.3311710059642792, 0.3311710059642792],
'domains': [['jezikoslovje'], ['jezikoslovje']]
}
]
}
```
### Data Fields
- `id_headword`: a string ID of the word;
- `headword`: the word whose synonyms are grouped in the instance;
- `groups_core`: groups of likely synonyms - each group contains the IDs of the words (`id_words`), the synonyms (`words`), and how strong the synonym relation (`scores`) is. Some groups also have domains annotated (`domains`, >= 1 per word, i.e. `domains` is a list of lists);
- `groups_near`: same as `groups_core`, but the synonyms here are typically less likely to be exact synonyms and more likely to be otherwise similar.
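The fields above can be traversed directly; this sketch copies the "abeceda" example instance and pairs each near-synonym with its score:

```python
# The entry dict below reproduces the example instance shown earlier.
entry = {
    "id_headword": "th.12",
    "headword": "abeceda",
    "groups_core": [],
    "groups_near": [
        {
            "id_words": ["th.12.1", "th.12.2"],
            "words": ["pisava", "črkopis"],
            "scores": [0.3311710059642792, 0.3311710059642792],
            "domains": [["jezikoslovje"], ["jezikoslovje"]],
        }
    ],
}
# Flatten the near-synonym groups into (word, score) pairs.
near_synonyms = [
    (word, score)
    for group in entry["groups_near"]
    for word, score in zip(group["words"], group["scores"])
]
print(near_synonyms[0][0])  # → pisava
```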
## Additional Information
### Dataset Curators
Simon Krek; et al. (please see http://hdl.handle.net/11356/1166 for the full list).
### Licensing Information
CC BY-SA 4.0
### Citation Information
```
@article{krek2017translation,
title={From translation equivalents to synonyms: creation of a Slovene thesaurus using word co-occurrence network analysis},
author={Krek, Simon and Laskowski, Cyprian and Robnik-{\v{S}}ikonja, Marko},
journal={Proceedings of eLex},
pages={93--109},
year={2017}
}
```
### Contributions
Thanks to [@matejklemen](https://github.com/matejklemen) for adding this dataset.
| 2,807 | [
[
-0.040771484375,
-0.04949951171875,
-0.0028171539306640625,
0.0051727294921875,
-0.034393310546875,
-0.032012939453125,
-0.0174102783203125,
-0.01227569580078125,
0.062255859375,
0.0440673828125,
-0.07440185546875,
-0.0677490234375,
-0.04205322265625,
0.0254... |
tiagoseca/raw_dre_corpus | 2022-11-02T12:37:09.000Z | [
"region:us"
] | tiagoseca | null | null | 0 | 4 | 2022-10-22T18:53:42 | # not demo
alright
## Subheader
This is so cool!
| 53 | [
[
-0.037750244140625,
-0.0176849365234375,
0.026031494140625,
0.04974365234375,
-0.036041259765625,
0.0208282470703125,
0.01032257080078125,
0.04315185546875,
0.0455322265625,
0.0255279541015625,
-0.0682373046875,
-0.03515625,
-0.034088134765625,
-0.0047607421... |
VietAI/vi_pubmed | 2022-11-07T01:12:52.000Z | [
"task_categories:text-generation",
"task_categories:fill-mask",
"task_ids:language-modeling",
"task_ids:masked-language-modeling",
"language:vi",
"language:en",
"license:cc",
"arxiv:2210.05610",
"arxiv:2210.05598",
"region:us"
] | VietAI | null | null | 6 | 4 | 2022-11-06T01:36:50 | ---
license: cc
language:
- vi
- en
task_categories:
- text-generation
- fill-mask
task_ids:
- language-modeling
- masked-language-modeling
paperswithcode_id: pubmed
dataset_info:
features:
- name: en
dtype: string
- name: vi
dtype: string
splits:
- name: pubmed22
num_bytes: 44360028980
num_examples: 20087006
download_size: 23041004247
dataset_size: 44360028980
---
# Dataset Summary
20M Vietnamese PubMed biomedical abstracts translated by the [state-of-the-art English-Vietnamese Translation project](https://arxiv.org/abs/2210.05610). The data has been used as an unlabeled dataset for [pretraining a Vietnamese Biomedical-domain Transformer model](https://arxiv.org/abs/2210.05598).

image source: [Enriching Biomedical Knowledge for Vietnamese Low-resource Language Through Large-Scale Translation](https://arxiv.org/abs/2210.05598)
# Language
- English: Original biomedical abstracts from [Pubmed](https://www.nlm.nih.gov/databases/download/pubmed_medline_faq.html)
- Vietnamese: Synthetic abstract translated by a [state-of-the-art English-Vietnamese Translation project](https://arxiv.org/abs/2210.05610)
# Dataset Structure
- The English sequences are the original PubMed abstracts
- The Vietnamese sequences are the corresponding machine-translated abstracts
# Source Data - Initial Data Collection and Normalization
https://www.nlm.nih.gov/databases/download/pubmed_medline_faq.html
# Licensing Information
[Courtesy of the U.S. National Library of Medicine.](https://www.nlm.nih.gov/databases/download/terms_and_conditions.html)
# Citation
```
@misc{mtet,
doi = {10.48550/ARXIV.2210.05610},
url = {https://arxiv.org/abs/2210.05610},
author = {Ngo, Chinh and Trinh, Trieu H. and Phan, Long and Tran, Hieu and Dang, Tai and Nguyen, Hieu and Nguyen, Minh and Luong, Minh-Thang},
keywords = {Computation and Language (cs.CL), Artificial Intelligence (cs.AI), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {MTet: Multi-domain Translation for English and Vietnamese},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
```
@misc{vipubmed,
doi = {10.48550/ARXIV.2210.05598},
url = {https://arxiv.org/abs/2210.05598},
author = {Phan, Long and Dang, Tai and Tran, Hieu and Phan, Vy and Chau, Lam D. and Trinh, Trieu H.},
keywords = {Computation and Language (cs.CL), Artificial Intelligence (cs.AI), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Enriching Biomedical Knowledge for Vietnamese Low-resource Language Through Large-Scale Translation},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
``` | 2,800 | [
[
-0.01092529296875,
-0.051177978515625,
0.035003662109375,
0.0154266357421875,
-0.030029296875,
-0.00040793418884277344,
-0.022369384765625,
-0.0259552001953125,
0.003391265869140625,
0.048126220703125,
-0.019287109375,
-0.0462646484375,
-0.059906005859375,
0... |
alkzar90/cell_benchmark | 2023-01-23T21:36:52.000Z | [
"region:us"
] | alkzar90 | A segmentation dataset for [TODO: complete...] | null | 0 | 4 | 2022-11-06T20:09:50 | Entry not found | 15 | [
[
-0.0213775634765625,
-0.01497650146484375,
0.05718994140625,
0.02880859375,
-0.0350341796875,
0.046478271484375,
0.052490234375,
0.00507354736328125,
0.051361083984375,
0.0170135498046875,
-0.052093505859375,
-0.01497650146484375,
-0.0604248046875,
0.0379028... |
lmqg/qag_tweetqa | 2022-12-02T19:16:46.000Z | [
"task_categories:text-generation",
"task_ids:language-modeling",
"multilinguality:monolingual",
"size_categories:1k<n<10K",
"source_datasets:tweet_qa",
"language:en",
"license:cc-by-sa-4.0",
"question-generation",
"arxiv:2210.03992",
"region:us"
] | lmqg | Question & answer generation dataset based on [TweetQA](https://huggingface.co/datasets/tweet_qa). | @inproceedings{ushio-etal-2022-generative,
title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration",
author = "Ushio, Asahi and
Alva-Manchego, Fernando and
Camacho-Collados, Jose",
booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2022",
address = "Abu Dhabi, U.A.E.",
publisher = "Association for Computational Linguistics",
} | 0 | 4 | 2022-11-11T11:11:25 | ---
license: cc-by-sa-4.0
pretty_name: TweetQA for question generation
language: en
multilinguality: monolingual
size_categories: 1k<n<10K
source_datasets: tweet_qa
task_categories:
- text-generation
task_ids:
- language-modeling
tags:
- question-generation
---
# Dataset Card for "lmqg/qag_tweetqa"
## Dataset Description
- **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation)
- **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992)
- **Point of Contact:** [Asahi Ushio](http://asahiushio.com/)
### Dataset Summary
This is the question & answer generation dataset based on the [tweet_qa](https://huggingface.co/datasets/tweet_qa). The test set of the original data is not publicly released, so we randomly sampled test questions from the training set.
### Supported Tasks and Leaderboards
* `question-answer-generation`: The dataset is intended for training models for question & answer generation.
Success on this task is typically measured by achieving a high BLEU4/METEOR/ROUGE-L/BERTScore/MoverScore (see our paper for more in detail).
### Languages
English (en)
## Dataset Structure
An example of 'train' looks as follows.
```
{
"paragraph": "I would hope that Phylicia Rashad would apologize now that @missjillscott has! You cannot discount 30 victims who come with similar stories.— JDWhitner (@JDWhitner) July 7, 2015",
"questions": [ "what should phylicia rashad do now?", "how many victims have come forward?" ],
"answers": [ "apologize", "30" ],
"questions_answers": "Q: what should phylicia rashad do now?, A: apologize Q: how many victims have come forward?, A: 30"
}
```
The data fields are the same among all splits.
- `questions`: a `list` of `string` features.
- `answers`: a `list` of `string` features.
- `paragraph`: a `string` feature.
- `questions_answers`: a `string` feature.
## Data Splits
|train|validation|test |
|----:|---------:|----:|
|4536 | 583| 583|
## Citation Information
```
@inproceedings{ushio-etal-2022-generative,
title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration",
author = "Ushio, Asahi and
Alva-Manchego, Fernando and
Camacho-Collados, Jose",
booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2022",
address = "Abu Dhabi, U.A.E.",
publisher = "Association for Computational Linguistics",
}
``` | 2,527 | [
[
-0.02825927734375,
-0.07342529296875,
0.022216796875,
0.0022411346435546875,
-0.023895263671875,
-0.0006413459777832031,
-0.0104522705078125,
-0.0152740478515625,
0.006710052490234375,
0.029449462890625,
-0.06536865234375,
-0.045928955078125,
-0.01947021484375,
... |
bigbio/cas | 2022-12-22T15:44:18.000Z | [
"multilinguality:monolingual",
"language:fr",
"license:other",
"region:us"
] | bigbio | We manually annotated two corpora from the biomedical field. The ESSAI corpus contains clinical trial protocols in French. They were mainly obtained from the National Cancer Institute. The typical protocol consists of two parts: the summary of the trial, which indicates the purpose of the trial and the methods applied; and a detailed description of the trial with the inclusion and exclusion criteria. The CAS corpus contains clinical cases published in scientific literature and training material. They are published in different journals from French-speaking countries (France, Belgium, Switzerland, Canada, African countries, tropical countries) and are related to various medical specialties (cardiology, urology, oncology, obstetrics, pulmonology, gastro-enterology). The purpose of clinical cases is to describe clinical situations of patients. Hence, their content is close to the content of clinical narratives (description of diagnoses, treatments or procedures, evolution, family history, expected audience, etc.). In clinical cases, negation is frequently used for describing the patient's signs, symptoms, and diagnosis. Speculation is present as well, but less frequently.
This version only contains the annotated CAS corpus | @inproceedings{grabar-etal-2018-cas,
title = {{CAS}: {F}rench Corpus with Clinical Cases},
author = {Grabar, Natalia and Claveau, Vincent and Dalloux, Cl{\'e}ment},
year = 2018,
month = oct,
booktitle = {
Proceedings of the Ninth International Workshop on Health Text Mining and
Information Analysis
},
publisher = {Association for Computational Linguistics},
address = {Brussels, Belgium},
pages = {122--128},
doi = {10.18653/v1/W18-5614},
url = {https://aclanthology.org/W18-5614},
abstract = {
Textual corpora are extremely important for various NLP applications as
they provide information necessary for creating, setting and testing these
applications and the corresponding tools. They are also crucial for
designing reliable methods and reproducible results. Yet, in some areas,
such as the medical area, due to confidentiality or to ethical reasons, it
is complicated and even impossible to access textual data representative of
those produced in these areas. We propose the CAS corpus built with
clinical cases, such as they are reported in the published scientific
literature in French. We describe this corpus, currently containing over
397,000 word occurrences, and the existing linguistic and semantic
annotations.
}
} | 0 | 4 | 2022-11-13T22:07:35 |
---
language:
- fr
bigbio_language:
- French
license: other
multilinguality: monolingual
bigbio_license_shortname: DUA
pretty_name: CAS
homepage: https://clementdalloux.fr/?page_id=28
bigbio_pubmed: False
bigbio_public: False
bigbio_tasks:
- TEXT_CLASSIFICATION
---
# Dataset Card for CAS
## Dataset Description
- **Homepage:** https://clementdalloux.fr/?page_id=28
- **Pubmed:** False
- **Public:** False
- **Tasks:** TXTCLASS
We manually annotated two corpora from the biomedical field. The ESSAI corpus contains clinical trial protocols in French. They were mainly obtained from the National Cancer Institute. The typical protocol consists of two parts: the summary of the trial, which indicates the purpose of the trial and the methods applied; and a detailed description of the trial with the inclusion and exclusion criteria. The CAS corpus contains clinical cases published in scientific literature and training material. They are published in different journals from French-speaking countries (France, Belgium, Switzerland, Canada, African countries, tropical countries) and are related to various medical specialties (cardiology, urology, oncology, obstetrics, pulmonology, gastro-enterology). The purpose of clinical cases is to describe clinical situations of patients. Hence, their content is close to the content of clinical narratives (description of diagnoses, treatments or procedures, evolution, family history, expected audience, etc.). In clinical cases, negation is frequently used for describing the patient's signs, symptoms, and diagnosis. Speculation is present as well, but less frequently.
This version only contains the annotated CAS corpus
## Citation Information
```
@inproceedings{grabar-etal-2018-cas,
title = {{CAS}: {F}rench Corpus with Clinical Cases},
  author = {Grabar, Natalia and Claveau, Vincent and Dalloux, Cl{\'e}ment},
year = 2018,
month = oct,
booktitle = {
Proceedings of the Ninth International Workshop on Health Text Mining and
Information Analysis
},
publisher = {Association for Computational Linguistics},
address = {Brussels, Belgium},
pages = {122--128},
doi = {10.18653/v1/W18-5614},
url = {https://aclanthology.org/W18-5614},
abstract = {
Textual corpora are extremely important for various NLP applications as
they provide information necessary for creating, setting and testing these
applications and the corresponding tools. They are also crucial for
designing reliable methods and reproducible results. Yet, in some areas,
such as the medical area, due to confidentiality or to ethical reasons, it
is complicated and even impossible to access textual data representative of
those produced in these areas. We propose the CAS corpus built with
clinical cases, such as they are reported in the published scientific
literature in French. We describe this corpus, currently containing over
397,000 word occurrences, and the existing linguistic and semantic
annotations.
}
}
```
| 3,092 | [
[
-0.007335662841796875,
-0.049713134765625,
0.045989990234375,
0.030670166015625,
-0.024078369140625,
-0.008514404296875,
-0.01043701171875,
-0.042816162109375,
0.058197021484375,
0.056610107421875,
-0.022796630859375,
-0.07916259765625,
-0.062744140625,
0.03... |
bigbio/geokhoj_v1 | 2022-12-22T15:44:42.000Z | [
"multilinguality:monolingual",
"language:en",
"license:cc-by-nc-4.0",
"region:us"
] | bigbio | GEOKhoj v1 is an annotated corpus of control/perturbation labels for 30,000 samples
from Microarray, Transcriptomics and Single cell experiments which are available on
the GEO (Gene Expression Omnibus) database | @misc{geokhoj_v1,
author = {Elucidata, Inc.},
title = {GEOKhoj v1},
howpublished = {\\url{https://github.com/ElucidataInc/GEOKhoj-datasets/tree/main/geokhoj_v1}},
} | 0 | 4 | 2022-11-13T22:08:46 |
---
language:
- en
bigbio_language:
- English
license: cc-by-nc-4.0
multilinguality: monolingual
bigbio_license_shortname: CC_BY_NC_4p0
pretty_name: GEOKhoj v1
homepage: https://github.com/ElucidataInc/GEOKhoj-datasets/tree/main/geokhoj_v1
bigbio_pubmed: False
bigbio_public: True
bigbio_tasks:
- TEXT_CLASSIFICATION
---
# Dataset Card for GEOKhoj v1
## Dataset Description
- **Homepage:** https://github.com/ElucidataInc/GEOKhoj-datasets/tree/main/geokhoj_v1
- **Pubmed:** False
- **Public:** True
- **Tasks:** TXTCLASS
GEOKhoj v1 is an annotated corpus of control/perturbation labels for 30,000 samples
from Microarray, Transcriptomics and Single cell experiments which are available on
the GEO (Gene Expression Omnibus) database
## Citation Information
```
@misc{geokhoj_v1,
author = {Elucidata, Inc.},
title = {GEOKhoj v1},
howpublished = {\url{https://github.com/ElucidataInc/GEOKhoj-datasets/tree/main/geokhoj_v1}},
}
```
| 948 | [
[
-0.03466796875,
-0.046783447265625,
0.0132904052734375,
0.01227569580078125,
-0.045257568359375,
0.00949859619140625,
-0.0103302001953125,
-0.0019464492797851562,
0.032257080078125,
0.03607177734375,
-0.04571533203125,
-0.09112548828125,
-0.05194091796875,
0... |
bigbio/mayosrs | 2022-12-22T15:44:58.000Z | [
"multilinguality:monolingual",
"language:en",
"license:cc0-1.0",
"region:us"
] | bigbio | MayoSRS consists of 101 clinical term pairs whose relatedness was determined by nine medical coders and three physicians from the Mayo Clinic. | @article{pedersen2007measures,
title={Measures of semantic similarity and relatedness in the biomedical domain},
author={Pedersen, Ted and Pakhomov, Serguei VS and Patwardhan, Siddharth and Chute, Christopher G},
journal={Journal of biomedical informatics},
volume={40},
number={3},
pages={288--299},
year={2007},
publisher={Elsevier}
} | 0 | 4 | 2022-11-13T22:09:14 |
---
language:
- en
bigbio_language:
- English
license: cc0-1.0
multilinguality: monolingual
bigbio_license_shortname: CC0_1p0
pretty_name: MayoSRS
homepage: https://conservancy.umn.edu/handle/11299/196265
bigbio_pubmed: False
bigbio_public: True
bigbio_tasks:
- SEMANTIC_SIMILARITY
---
# Dataset Card for MayoSRS
## Dataset Description
- **Homepage:** https://conservancy.umn.edu/handle/11299/196265
- **Pubmed:** False
- **Public:** True
- **Tasks:** STS
MayoSRS consists of 101 clinical term pairs whose relatedness was determined by nine medical coders and three physicians from the Mayo Clinic.
## Citation Information
```
@article{pedersen2007measures,
title={Measures of semantic similarity and relatedness in the biomedical domain},
author={Pedersen, Ted and Pakhomov, Serguei VS and Patwardhan, Siddharth and Chute, Christopher G},
journal={Journal of biomedical informatics},
volume={40},
number={3},
pages={288--299},
year={2007},
publisher={Elsevier}
}
```
| 999 | [
[
-0.021270751953125,
-0.0189971923828125,
0.04022216796875,
0.00955963134765625,
-0.032745361328125,
-0.013214111328125,
-0.014984130859375,
-0.0216827392578125,
0.059478759765625,
0.02685546875,
-0.041961669921875,
-0.06256103515625,
-0.04840087890625,
0.042... |
bigbio/minimayosrs | 2022-12-22T15:45:36.000Z | [
"multilinguality:monolingual",
"language:en",
"license:cc0-1.0",
"region:us"
] | bigbio | MiniMayoSRS is a subset of the MayoSRS and consists of 30 term pairs on which a higher inter-annotator agreement was
achieved. The average correlation between physicians is 0.68. The average correlation between medical coders is 0.78. | @article{pedersen2007measures,
title={Measures of semantic similarity and relatedness in the biomedical domain},
author={Pedersen, Ted and Pakhomov, Serguei VS and Patwardhan, Siddharth and Chute, Christopher G},
journal={Journal of biomedical informatics},
volume={40},
number={3},
pages={288--299},
year={2007},
publisher={Elsevier}
} | 1 | 4 | 2022-11-13T22:09:56 |
---
language:
- en
bigbio_language:
- English
license: cc0-1.0
multilinguality: monolingual
bigbio_license_shortname: CC0_1p0
pretty_name: MiniMayoSRS
homepage: https://conservancy.umn.edu/handle/11299/196265
bigbio_pubmed: False
bigbio_public: True
bigbio_tasks:
- SEMANTIC_SIMILARITY
---
# Dataset Card for MiniMayoSRS
## Dataset Description
- **Homepage:** https://conservancy.umn.edu/handle/11299/196265
- **Pubmed:** False
- **Public:** True
- **Tasks:** STS
MiniMayoSRS is a subset of the MayoSRS and consists of 30 term pairs on which a higher inter-annotator agreement was
achieved. The average correlation between physicians is 0.68. The average correlation between medical coders is 0.78.
## Citation Information
```
@article{pedersen2007measures,
title={Measures of semantic similarity and relatedness in the biomedical domain},
author={Pedersen, Ted and Pakhomov, Serguei VS and Patwardhan, Siddharth and Chute, Christopher G},
journal={Journal of biomedical informatics},
volume={40},
number={3},
pages={288--299},
year={2007},
publisher={Elsevier}
}
```
| 1,099 | [
[
-0.0248565673828125,
-0.02313232421875,
0.040863037109375,
-0.012786865234375,
-0.038848876953125,
-0.022064208984375,
-0.00881195068359375,
-0.0221099853515625,
0.05584716796875,
0.01070404052734375,
-0.0338134765625,
-0.0418701171875,
-0.058319091796875,
0... |
alexandrainst/da-wit | 2022-11-18T15:48:44.000Z | [
"task_categories:image-to-text",
"task_categories:zero-shot-image-classification",
"task_categories:feature-extraction",
"task_ids:image-captioning",
"size_categories:100K<n<1M",
"source_datasets:wikimedia/wit_base",
"language:da",
"license:cc-by-sa-4.0",
"region:us"
] | alexandrainst | null | null | 2 | 4 | 2022-11-13T22:14:51 | ---
pretty_name: Danish WIT
language:
- da
license:
- cc-by-sa-4.0
size_categories:
- 100K<n<1M
source_datasets:
- wikimedia/wit_base
task_categories:
- image-to-text
- zero-shot-image-classification
- feature-extraction
task_ids:
- image-captioning
---
# Dataset Card for Danish WIT
## Dataset Description
- **Repository:** <https://gist.github.com/saattrupdan/bb6c9c52d9f4b35258db2b2456d31224>
- **Point of Contact:** [Dan Saattrup Nielsen](mailto:dan.nielsen@alexandra.dk)
- **Size of downloaded dataset files:** 7.5 GB
- **Size of the generated dataset:** 7.8 GB
- **Total amount of disk used:** 15.3 GB
### Dataset Summary
Google presented the Wikipedia Image Text (WIT) dataset in [July
2021](https://dl.acm.org/doi/abs/10.1145/3404835.3463257), a dataset which contains
scraped images from Wikipedia along with their descriptions. WikiMedia released
WIT-Base in [September
2021](https://techblog.wikimedia.org/2021/09/09/the-wikipedia-image-caption-matching-challenge-and-a-huge-release-of-image-data-for-research/),
being a modified version of WIT where they have removed the images with empty
"reference descriptions", as well as removing images where a person's face covers more
than 10% of the image surface, along with inappropriate images that are candidates for
deletion. This dataset is the Danish portion of the WIT-Base dataset, consisting of
roughly 160,000 images with associated Danish descriptions. We release the dataset
under the [CC BY-SA 4.0 license](https://creativecommons.org/licenses/by-sa/4.0/), in
accordance with WIT-Base's [identical
license](https://huggingface.co/datasets/wikimedia/wit_base#licensing-information).
### Supported Tasks and Leaderboards
Training machine learning models for caption generation, zero-shot image classification
and text-image search are the intended tasks for this dataset. No leaderboard is active
at this point.
### Languages
The dataset is available in Danish (`da`).
## Dataset Structure
### Data Instances
- **Size of downloaded dataset files:** 7.5 GB
- **Size of the generated dataset:** 7.8 GB
- **Total amount of disk used:** 15.3 GB
An example from the `train` split looks as follows.
```
{
"image": [PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=300x409 at 0x7FE4384E2190],
"image_url": "https://upload.wikimedia.org/wikipedia/commons/4/45/Bispen_-_inside.jpg",
"embedding": [2.8568285, 2.9562542, 0.33794892, 8.753725, ...],
"metadata_url": "http://commons.wikimedia.org/wiki/File:Bispen_-_inside.jpg",
"original_height": 3161,
"original_width": 2316,
"mime_type": "image/jpeg",
"caption_attribution_description": "Kulturhuset Bispen set indefra. Biblioteket er til venstre",
"page_url": "https://da.wikipedia.org/wiki/Bispen",
"attribution_passes_lang_id": True,
"caption_alt_text_description": None,
"caption_reference_description": "Bispen set indefra fra 1. sal, hvor ....",
"caption_title_and_reference_description": "Bispen [SEP] Bispen set indefra ...",
"context_page_description": "Bispen er navnet på det offentlige kulturhus i ...",
"context_section_description": "Bispen er navnet på det offentlige kulturhus i ...",
"hierarchical_section_title": "Bispen",
"is_main_image": True,
"page_changed_recently": True,
"page_title": "Bispen",
"section_title": None
}
```
### Data Fields
The data fields are the same among all splits.
- `image`: an `Image` feature.
- `image_url`: a `str` feature.
- `embedding`: a `list` feature.
- `metadata_url`: a `str` feature.
- `original_height`: an `int` or `NaN` feature.
- `original_width`: an `int` or `NaN` feature.
- `mime_type`: a `str` or `None` feature.
- `caption_attribution_description`: a `str` or `None` feature.
- `page_url`: a `str` feature.
- `attribution_passes_lang_id`: a `bool` or `None` feature.
- `caption_alt_text_description`: a `str` or `None` feature.
- `caption_reference_description`: a `str` or `None` feature.
- `caption_title_and_reference_description`: a `str` or `None` feature.
- `context_page_description`: a `str` or `None` feature.
- `context_section_description`: a `str` or `None` feature.
- `hierarchical_section_title`: a `str` feature.
- `is_main_image`: a `bool` or `None` feature.
- `page_changed_recently`: a `bool` or `None` feature.
- `page_title`: a `str` feature.
- `section_title`: a `str` or `None` feature.
### Data Splits
Roughly 2.60% of the WIT-Base dataset comes from the Danish Wikipedia. We have split
the resulting 168,740 samples into a training set, validation set and testing set of
the following sizes:
| split | samples |
|---------|--------:|
| train | 167,460 |
| val | 256 |
| test | 1,024 |
## Dataset Creation
### Curation Rationale
It is quite cumbersome to extract the Danish portion of the WIT-Base dataset,
especially as the dataset takes up 333 GB of disk space, so the curation of Danish-WIT
is purely to make it easier to work with the Danish portion of it.
### Source Data
The original data was collected from WikiMedia's
[WIT-Base](https://huggingface.co/datasets/wikimedia/wit_base) dataset, which in turn
comes from Google's [WIT](https://huggingface.co/datasets/google/wit) dataset.
## Additional Information
### Dataset Curators
[Dan Saattrup Nielsen](https://saattrupdan.github.io/) from the [The Alexandra
Institute](https://alexandra.dk/) curated this dataset.
### Licensing Information
The dataset is licensed under the [CC BY-SA 4.0
license](https://creativecommons.org/licenses/by-sa/4.0/).
| 5,537 | [
[
-0.053070068359375,
-0.0257568359375,
0.0171661376953125,
0.01456451416015625,
-0.028594970703125,
-0.024627685546875,
-0.02099609375,
-0.043731689453125,
0.0248565673828125,
0.025177001953125,
-0.0556640625,
-0.04815673828125,
-0.0275421142578125,
0.0329284... |
olm/olm-october-2022-tokenized-512 | 2022-11-16T01:47:11.000Z | [
"region:us"
] | olm | null | null | 0 | 4 | 2022-11-16T01:24:02 | ---
dataset_info:
features:
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
- name: special_tokens_mask
sequence: int8
splits:
- name: train
num_bytes: 79589759460
num_examples: 25807315
download_size: 21375344353
dataset_size: 79589759460
---
# Dataset Card for "olm-october-2022-tokenized-512"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 488 | [
[
-0.040008544921875,
-0.0185699462890625,
0.015716552734375,
0.00930023193359375,
-0.02490234375,
-0.0156402587890625,
0.025238037109375,
-0.0186767578125,
0.0640869140625,
0.05419921875,
-0.05279541015625,
-0.058868408203125,
-0.03692626953125,
-0.0098190307... |
declare-lab/HyperRED | 2022-11-23T10:55:14.000Z | [
"license:cc-by-sa-3.0",
"arxiv:2211.10018",
"region:us"
] | declare-lab | null | null | 2 | 4 | 2022-11-22T07:46:53 | ---
license: cc-by-sa-3.0
---
# Dataset Card for HyperRED
## Description
- **Repository:** https://github.com/declare-lab/HyperRED
- **Paper (EMNLP 2022):** https://arxiv.org/abs/2211.10018
### Summary
HyperRED is a dataset for the new task of hyper-relational extraction, which extracts relation triplets together with qualifier information such as time, quantity or location. For example, the relation triplet (Leonard Parker, Educated At, Harvard University) can be factually enriched by including the qualifier (End Time, 1967). HyperRED contains 44k sentences with 62 relation types and 44 qualifier types.
### Languages
English.
## Dataset Structure
### Data Fields
- **tokens:** Sentence text tokens.
- **entities:** List of each entity span. The span indices correspond to each token in the space-separated text (inclusive-start and exclusive-end index)
- **relations:** List of relationship labels between the head and tail entity spans. Each relation contains a list of qualifiers, where each qualifier has a value entity span and a qualifier label.
### Data Instances
An example instance of the dataset is shown below:
```
{
"tokens": ['Acadia', 'University', 'is', 'a', 'predominantly', 'undergraduate', 'university', 'located', 'in', 'Wolfville', ',', 'Nova', 'Scotia', ',', 'Canada', 'with', 'some', 'graduate', 'programs', 'at', 'the', 'master', "'", 's', 'level', 'and', 'one', 'at', 'the', 'doctoral', 'level', '.'],
"entities": [
{'span': (0, 2), 'label': 'Entity'},
{'span': (9, 13), 'label': 'Entity'},
{'span': (14, 15), 'label': 'Entity'},
],
"relations": [
{
"head": [0, 2],
"tail": [9, 13],
"label": "headquarters location",
"qualifiers": [
{"span": [14, 15], "label": "country"}
]
}
],
}
```
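The inclusive-start, exclusive-end span convention above can be decoded against the space-separated tokens; a minimal sketch using the example instance (the helper name `span_text` is illustrative, not part of the dataset):

```python
# Recover entity surface forms from a HyperRED-style instance.
# Spans use inclusive-start, exclusive-end token indices, as described above.
tokens = ['Acadia', 'University', 'is', 'a', 'predominantly', 'undergraduate',
          'university', 'located', 'in', 'Wolfville', ',', 'Nova', 'Scotia',
          ',', 'Canada']
entities = [{'span': (0, 2), 'label': 'Entity'},
            {'span': (9, 13), 'label': 'Entity'},
            {'span': (14, 15), 'label': 'Entity'}]

def span_text(tokens, span):
    start, end = span
    return " ".join(tokens[start:end])

surface_forms = [span_text(tokens, e['span']) for e in entities]
# → ['Acadia University', 'Wolfville , Nova Scotia', 'Canada']
```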
### Data Splits
The dataset contains 39,840 instances for training, 1,000 instances for validation and 4,000 instances for testing.
### Dataset Creation
The dataset is constructed from distant supervision between Wikipedia and Wikidata, and the human annotation process is detailed in the paper.
## Citation Information
```
@inproceedings{chia2022hyperred,
title={A Dataset for Hyper-Relational Extraction and a Cube-Filling Approach},
author={Yew Ken Chia, Lidong Bing, Sharifah Mahani Aljunied, Luo Si and Soujanya Poria},
booktitle={Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing},
year={2022}
}
``` | 2,473 | [
[
-0.043365478515625,
-0.050445556640625,
0.0271759033203125,
0.00630950927734375,
-0.0028533935546875,
-0.0169830322265625,
-0.0194091796875,
-0.03338623046875,
0.0182647705078125,
0.0256805419921875,
-0.0421142578125,
-0.0704345703125,
-0.02215576171875,
0.0... |
surrey-nlp/SAD | 2022-11-28T18:41:51.000Z | [
"task_categories:text-classification",
"annotations_creators:Jordan Painter, Diptesh Kanojia",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:en",
"license:cc-by-sa-4.0",
"region:us"
] | surrey-nlp | null | null | 0 | 4 | 2022-11-28T15:26:38 | ---
annotations_creators:
- Jordan Painter, Diptesh Kanojia
language:
- en
license:
- cc-by-sa-4.0
multilinguality:
- monolingual
pretty_name: 'Utilising Weak Supervision to create S3D: A Sarcasm Annotated Dataset'
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- text-classification
---
# Utilising Weak Supervision to Create S3D: A Sarcasm Annotated Dataset
This is the repository for the S3D dataset published at EMNLP 2022. The dataset can help build sarcasm detection models.
# SAD
The SAD dataset is our gold standard dataset of tweets labelled for sarcasm. These tweets were scraped by observing a '#sarcasm' hashtag and then manually annotated by three annotators.
There are a total of 1170 pairs of sarcastic and non-sarcastic tweets, where both tweets in a pair were posted by the same user, resulting in a total of 2340 tweets annotated for sarcasm.
These tweets can be accessed by using the Twitter API so that they can be used for other experiments.
# Data Fields
- Tweet ID: The ID of the labelled tweet
- Label: A label to denote if a given tweet is sarcastic
# Data Splits
- Train: 1638
- Valid: 351
- Test: 351 | 1,140 | [
[
-0.019622802734375,
-0.039306640625,
0.03076171875,
0.042510986328125,
-0.01094818115234375,
-0.01319122314453125,
0.01373291015625,
-0.0166168212890625,
0.0248870849609375,
0.03131103515625,
-0.043487548828125,
-0.040191650390625,
-0.052581787109375,
0.0360... |
neuralcatcher/hateful_memes | 2022-12-01T07:08:59.000Z | [
"arxiv:2005.04790",
"region:us"
] | neuralcatcher | null | null | 2 | 4 | 2022-12-01T03:49:06 | # The Hateful Memes Challenge README
The Hateful Memes Challenge is a dataset and benchmark created by Facebook AI to drive and measure progress on multimodal reasoning and understanding. The task focuses on detecting hate speech in multimodal memes.
Please see the paper for further details:
[The Hateful Memes Challenge: Detecting Hate Speech in Multimodal Memes
D. Kiela, H. Firooz, A. Mohan, V. Goswami, A. Singh, P. Ringshia, D. Testuggine](
https://arxiv.org/abs/2005.04790)
For more details, see also the website:
https://hatefulmemeschallenge.com
# Dataset details
The files for this folder are arranged as follows:
img/ - the PNG images
train.jsonl - the training set
dev_seen.jsonl - the "seen" development set
test_seen.jsonl - the "seen" test set
dev_unseen.jsonl - the "unseen" development set
test_unseen.jsonl - the "unseen" test set
The "seen" dataset was presented in the NeurIPS paper; the “unseen” dev and test set were released as a part of the NeurIPS 2020 competition.
The .jsonl format contains one JSON-encoded example per line, each of which has the following fields:
‘text’ - the text occurring in the meme
‘img’ - the path to the image in the img/ directory
‘label’ - the label for the meme (0=not-hateful, 1=hateful), provided for train and dev
The metric to use is AUROC. You may also report accuracy in addition, since this is arguably more interpretable. To compute these metrics, we recommend the roc_auc_score and accuracy_score methods in sklearn.metrics, with default settings.
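For illustration, a dependency-free sketch of the two recommended scores on made-up toy labels; for binary labels this AUROC definition matches sklearn's `roc_auc_score` with default settings:

```python
# Toy example: 0 = not-hateful, 1 = hateful; y_score holds model probabilities.
y_true = [0, 0, 1, 1]
y_score = [0.1, 0.4, 0.35, 0.8]

def auroc(y_true, y_score):
    # Probability that a random positive outranks a random negative,
    # counting ties as 0.5 -- equivalent to roc_auc_score for binary labels.
    pos = [s for y, s in zip(y_true, y_score) if y == 1]
    neg = [s for y, s in zip(y_true, y_score) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def accuracy(y_true, y_pred):
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

y_pred = [int(s >= 0.5) for s in y_score]  # threshold probabilities at 0.5
print(auroc(y_true, y_score))    # 0.75
print(accuracy(y_true, y_pred))  # 0.75
```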
# Getting started
To get started working on this dataset, there's an easy-to-use "starter kit" available in MMF: https://github.com/facebookresearch/mmf/tree/master/projects/hateful_memes.
# Note on Annotator Accuracy
As is to be expected with a dataset of this size and nature, some of the examples in the training set have been misclassified. We are not claiming that our dataset labels are completely accurate, or even that all annotators would agree on a particular label. Misclassifications, although possible, should be very rare in the dev and seen test set, however, and we will take extra care with the unseen test set.
As a reminder, the annotations collected for this dataset were not collected using Facebook annotators and we did not employ Facebook’s hate speech policy. As such, the dataset labels do not in any way reflect Facebook’s official stance on this matter.
# License
The dataset is licensed under the terms in the `LICENSE.txt` file.
# Image Attribution
If you wish to display example memes in your paper, please provide the following attribution:
*Image is a compilation of assets, including ©Getty Image.*
# Citations
If you wish to cite this work, please use the following BiBTeX:
```
@inproceedings{Kiela:2020hatefulmemes,
author = {Kiela, Douwe and Firooz, Hamed and Mohan, Aravind and Goswami, Vedanuj and Singh, Amanpreet and Ringshia, Pratik and Testuggine, Davide},
booktitle = {Advances in Neural Information Processing Systems},
editor = {H. Larochelle and M. Ranzato and R. Hadsell and M. F. Balcan and H. Lin},
pages = {2611--2624},
publisher = {Curran Associates, Inc.},
title = {The Hateful Memes Challenge: Detecting Hate Speech in Multimodal Memes},
url = {https://proceedings.neurips.cc/paper/2020/file/1b84c4cee2b8b3d823b30e2d604b1878-Paper.pdf},
volume = {33},
year = {2020}
}
```
# Contact
If you have any questions or comments on the dataset, please contact hatefulmemeschallenge@fb.com or one of the authors.
| 3,624 | [
[
-0.03570556640625,
-0.0615234375,
-0.007129669189453125,
0.01549530029296875,
-0.009979248046875,
0.01751708984375,
-0.012725830078125,
-0.0440673828125,
0.0229949951171875,
0.00960540771484375,
-0.05926513671875,
-0.031036376953125,
-0.057769775390625,
0.01... |
arbml/Dvoice | 2022-12-03T15:39:09.000Z | [
"region:us"
] | arbml | null | null | 0 | 4 | 2022-12-03T15:34:53 | ---
dataset_info:
features:
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: text
dtype: string
splits:
- name: test
num_bytes: 52843034.0
num_examples: 457
- name: train
num_bytes: 153498349.056
num_examples: 1368
- name: validation
num_bytes: 54017328.0
num_examples: 456
download_size: 194658648
dataset_size: 260358711.056
---
# Dataset Card for "Dvoice"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 596 | [
[
-0.02587890625,
-0.0047760009765625,
0.0176544189453125,
0.018096923828125,
-0.02105712890625,
0.004795074462890625,
0.031890869140625,
0.0056610107421875,
0.0472412109375,
0.040496826171875,
-0.05517578125,
-0.057891845703125,
-0.03790283203125,
-0.03033447... |
lucadiliello/bookcorpusopen | 2022-12-04T19:09:30.000Z | [
"region:us"
] | lucadiliello | null | null | 1 | 4 | 2022-12-04T19:05:51 | ---
dataset_info:
features:
- name: text
dtype: string
- name: title
dtype: string
splits:
- name: train
num_bytes: 6643459928
num_examples: 17868
download_size: 3940589290
dataset_size: 6643459928
---
# Dataset Card for "bookcorpusopen"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 400 | [
[
-0.0302734375,
-0.0015611648559570312,
-0.01053619384765625,
0.01580810546875,
-0.01241302490234375,
0.007061004638671875,
0.0170440673828125,
-0.01580810546875,
0.045928955078125,
0.048919677734375,
-0.06982421875,
-0.06121826171875,
-0.0333251953125,
-0.01... |
Prarabdha/Rick_and_Morty_Transcript | 2022-12-05T16:09:45.000Z | [
"license:mit",
"region:us"
] | Prarabdha | null | null | 3 | 4 | 2022-12-05T16:02:12 | ---
license: mit
---
## Context
I got the inspiration for this dataset from [Rick&Morty Scripts](https://www.kaggle.com/datasets/andradaolteanu/rickmorty-scripts) by [Andrada Olteanu](https://www.kaggle.com/andradaolteanu), but felt that the dataset was a little small and outdated.
This dataset includes almost all episodes up to Season 5. More data will be added.
## Content
Rick and Morty Transcripts:
- index: index of the row
- speaker: the character's name
- dialogue: the dialogue of the character
## Acknowledgements
Thanks to the transcripts made available by
- [RickandMorty.fandom.com](https://rickandmorty.fandom.com/)
- [RickandMorty.newtfire.org](http://rickandmorty.newtfire.org/transcripts.html) | 718 | [
[
-0.0211639404296875,
-0.026641845703125,
0.04443359375,
-0.00649261474609375,
-0.01212310791015625,
-0.01396942138671875,
-0.00140380859375,
-0.0030307769775390625,
0.0305633544921875,
0.022674560546875,
-0.07318115234375,
-0.0252532958984375,
-0.033477783203125... |
argilla/medical-keywords | 2022-12-07T12:00:34.000Z | [
"task_categories:token-classification",
"task_ids:named-entity-recognition",
"size_categories:100K<n<1M",
"source_datasets:original",
"language:en",
"region:us"
] | argilla | null | null | 2 | 4 | 2022-12-07T11:49:17 | ---
language:
- en
size_categories:
- 100K<n<1M
source_datasets:
- original
task_categories:
- token-classification
task_ids:
- keyphrase-extraction
- named-entity-recognition
dataset_info:
features:
- name: text
dtype: string
- name: tokens
sequence: string
- name: prediction
list:
- name: end
dtype: int64
- name: label
dtype: string
- name: score
dtype: float64
- name: start
dtype: int64
- name: prediction_agent
dtype: string
- name: annotation
dtype: 'null'
- name: annotation_agent
dtype: 'null'
- name: id
dtype: 'null'
- name: metadata
struct:
- name: medical_specialty
dtype: string
- name: status
dtype: string
- name: event_timestamp
dtype: timestamp[us]
- name: metrics
dtype: 'null'
splits:
- name: train
num_bytes: 58986555
num_examples: 148699
download_size: 17498377
dataset_size: 58986555
---
# Dataset Card for "medical-keywords"
## Dataset Description
- **Homepage:** Kaggle Challenge
- **Repository:** https://www.kaggle.com/datasets/tboyle10/medicaltranscriptions
- **Paper:** N.A.
- **Leaderboard:** N.A.
- **Point of Contact:** N.A.
### Dataset Summary
Medical transcription data scraped from mtsamples.com
Medical data is extremely hard to find due to HIPAA privacy regulations. This dataset offers a solution by providing medical transcription samples.
This dataset contains sample medical transcriptions for various medical specialties.
### Languages
english
### Citation Information
Acknowledgements
Medical transcription data scraped from mtsamples.com
### Contributions
Thanks to [@davidberenstein1957](https://github.com/davidberenstein1957) for adding this dataset. | 1,744 | [
[
0.01342010498046875,
-0.0296478271484375,
0.03448486328125,
0.0009679794311523438,
-0.040740966796875,
0.015472412109375,
-0.00328826904296875,
-0.01141357421875,
0.050872802734375,
0.05499267578125,
-0.050628662109375,
-0.07342529296875,
-0.05487060546875,
... |
facebook/panda | 2022-12-10T14:01:45.000Z | [
"task_categories:token-classification",
"annotations_creators:expert-generated",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"language:en",
"lice... | facebook | null | null | 5 | 4 | 2022-12-10T13:54:23 | ---
annotations_creators:
- expert-generated
- crowdsourced
language:
- en
language_creators:
- crowdsourced
- expert-generated
license:
- mit
multilinguality:
- monolingual
paperswithcode_id: winobias
pretty_name: panda
size_categories:
- 100K<n<1M
source_datasets:
- original
tags:
- fairness
- nlp
- demographic
- diverse
- gender
- non-binary
- race
- age
task_categories:
- token-classification
task_ids: []
---
# Dataset Card for PANDA
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Repository:** https://github.com/facebookresearch/ResponsibleNLP/
- **Paper:** https://arxiv.org/abs/2205.12586
- **Point of Contact:** rebeccaqian@meta.com, ccross@meta.com, douwe@huggingface.co, adinawilliams@meta.com
### Dataset Summary
PANDA (Perturbation Augmentation NLP DAtaset) consists of approximately 100K pairs of crowdsourced human-perturbed text snippets (original, perturbed). Annotators were given selected terms and target demographic attributes, and instructed to rewrite text snippets along three demographic axes: gender, race and age, while preserving semantic meaning. Text snippets were sourced from a range of text corpora (BookCorpus, Wikipedia, ANLI, MNLI, SST, SQuAD). PANDA can be used for training a learned perturber that can rewrite text with control. PANDA can also be used to evaluate the demographic robustness of language models.
### Languages
English
## Dataset Structure
### Data Instances
- Size of training data: 198.6 MB
- Size of validation data: 22.2 MB
Examples of data instances:
```
{
"original": "the moment the girl mentions the subject she will be yours .",
"selected_word": "girl",
"target_attribute": "man",
"perturbed": "the moment the boy mentions the subject he will be yours.\n\n"
}
{
"original": "are like magic tricks, says the New York Times ' Michael Kimmelman. <SEP> Michael Kimmelman has never likened anything to a magic trick.",
"selected_word": "Michael",
"target_attribute": "woman",
"perturbed": "are like magic tricks, says the New York Times' Michelle Kimmelman. <SEP> Michelle Kimmelman has never likened anything to a magic trick."
}
{
"original": "lilly ann looked at him asking herself how he cold not know .",
"selected_word": "he",
"target_attribute": "non-binary",
"perturbed": "Lilly Ann looked at them, asking herself how they could not know."
}
```
Examples with <SEP> tokens result from concatenating text fields of the source datasets, such as the premise and hypothesis of NLI datasets.
### Data Fields
- `original`: Source (unperturbed) text snippet, sampled from a variety of English text corpora.
- `selected_word`: Demographic term that needs to be perturbed.
- `target_attribute`: Target demographic category.
- `perturbed`: Perturbed text snippet, which is the source text rewritten to alter the selected word along the specified target demographic attribute. For example, if the selected word is "Lily" and target is "man", all references to "Lily" (eg. pronouns) in the source text are altered to refer to a man. Note that some examples may be unchanged, either due to the lack of demographic information, or ambiguity of the task; given the subjective nature of identifying demographic terms and attributes, we allow some room for interpretation provided the rewrite does not perpetuate harmful social biases.
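To make the field roles concrete, here is a sketch of how a (source, target) pair for a seq2seq perturber model could be assembled from one record. The `<SEP>`-delimited control prefix below is an illustrative assumption, not the exact input format used in the paper:

```python
def make_perturber_example(record):
    """Build a (source, target) pair for a seq2seq rewriting model.

    The 'selected_word <SEP> target_attribute <SEP> original' control
    prefix is an illustrative assumption, not the paper's exact format.
    """
    source = (
        f"{record['selected_word']} <SEP> "
        f"{record['target_attribute']} <SEP> "
        f"{record['original']}"
    )
    return source, record["perturbed"]


record = {
    "original": "the moment the girl mentions the subject she will be yours .",
    "selected_word": "girl",
    "target_attribute": "man",
    "perturbed": "the moment the boy mentions the subject he will be yours.",
}
src, tgt = make_perturber_example(record)
print(src)
print(tgt)
```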
### Data Splits
- `train`: 94966
- `valid`: 10551
## Dataset Creation
### Curation Rationale
We constructed PANDA to create and release the first large scale dataset of demographic text perturbations. This enables the training of the first neural perturber model, which outperforms heuristic approaches.
### Source Data
#### Initial Data Collection and Normalization
We employed 524 crowdworkers to create PANDA examples over the span of several months. Annotators were tasked with rewriting text snippets sourced from popular English text corpora. For more information on the task UI and methodology, see our paper *Perturbation Augmentation for Fairer NLP*.
### Annotations
#### Annotation process
PANDA was collected in a 3 stage annotation process:
1. Span identification: Annotators select demographic terms in source text samples.
2. Attribute identification: Identified demographic terms are annotated for gender/race/age attributes, such as "man", "Asian", "old" etc.
3. Rewrite text: Annotators rewrite text by modifying the selected entity to reflect the target demographic attribute. Annotators are encouraged to create minimal edits, eg. "George" -> "Georgina".
The annotation process is explained in more detail in our paper.
#### Who are the annotators?
PANDA was annotated by English speaking Amazon Mechanical Turk workers. We included a voluntary demographic survey along with annotation tasks that did not contribute to pay. For a breakdown of annotators' demographic identities, see our paper.
### Personal and Sensitive Information
PANDA does not contain identifying information about annotators.
## Considerations for Using the Data
### Social Impact of Dataset
By releasing the first large scale dataset of demographic text rewrites, we hope to enable exciting future work in fairness in NLP toward more scalable, automated approaches to reducing biases in datasets and language models.
Furthermore, PANDA aims to be diverse in text domain and demographic representation. PANDA includes a large proportion of non-binary gender annotations, which are underrepresented in existing text corpora and prior fairness datasets. Text examples vary in length, with examples spanning single sentences and long Wikipedia passages, and are sourced from a variety of text corpora that can be used to train a domain agnostic perturber.
### Discussion of Biases
For this work, we sourced our annotated data from a range of sources to ensure: (i) permissive data licensing, (ii) that our perturber works well on downstream applications such as NLU classification tasks, and (iii) that our perturber can handle data from multiple domains to be maximally useful. However, we acknowledge that there may be other existing biases in PANDA as a result of our data sourcing choices. For example, it is possible that data sources like BookWiki primarily contain topics of interest to people with a certain amount of influence and educational access, people from the so-called “Western world”, etc. Other topics that might be interesting and relevant to others may be missing or only present in limited quantities. The present approach can only weaken associations inherited from the data sources we use, but in future work, we would love to explore the efficacy of our approach on text from other sources that contain a wider range of topics and text domain differences.
### Other Known Limitations
Our augmentation process can sometimes create nonexistent versions of real people, such as discussing an English King Victor (not a historical figure), as opposed to a Queen Victoria (a historical figure). We embrace the counterfactuality of many of our perturbations, but the lack of guaranteed factuality means that our approach may not be well-suited to all NLP tasks. For example, it might not be suitable for augmenting misinformation detection datasets, because peoples’ names, genders, and other demographic information should not be changed.
## Additional Information
### Dataset Curators
Rebecca Qian, Candace Ross, Jude Fernandes, Douwe Kiela and Adina Williams.
### Licensing Information
PANDA is released under the MIT license.
### Citation Information
https://arxiv.org/abs/2205.12586
### Contributions
Thanks to [@Rebecca-Qian](https://github.com/Rebecca-Qian) for adding this dataset. | 8,662 | [
[
-0.022552490234375,
-0.06671142578125,
-0.01178741455078125,
0.039581298828125,
-0.01503753662109375,
-0.0123443603515625,
-0.0162811279296875,
-0.0162811279296875,
0.0306549072265625,
0.051361083984375,
-0.0531005859375,
-0.045440673828125,
-0.038787841796875,
... |
sasha/butterflies_10k_names_multiple | 2022-12-15T23:37:48.000Z | [
"region:us"
] | sasha | null | null | 0 | 4 | 2022-12-15T23:37:13 | ---
dataset_info:
features:
- name: image
dtype: image
- name: description
dtype: string
- name: url
dtype: string
- name: sim_score
dtype: float64
- name: name
dtype: string
splits:
- name: train
num_bytes: 260929983.907
num_examples: 7061
download_size: 268647797
dataset_size: 260929983.907
---
# Dataset Card for "butterflies_10k_names_multiple"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 530 | [
[
-0.0394287109375,
-0.0014171600341796875,
0.00630950927734375,
0.047210693359375,
-0.009857177734375,
0.00618743896484375,
0.0197296142578125,
-0.0207977294921875,
0.059967041015625,
0.03912353515625,
-0.06787109375,
-0.039093017578125,
-0.054473876953125,
0... |
lmqg/qag_koquad | 2022-12-18T08:03:53.000Z | [
"task_categories:text-generation",
"task_ids:language-modeling",
"multilinguality:monolingual",
"size_categories:1k<n<10K",
"source_datasets:lmqg/qg_koquad",
"language:ko",
"license:cc-by-sa-4.0",
"question-generation",
"arxiv:2210.03992",
"region:us"
] | lmqg | Question & answer generation dataset based on SQuAD. | @inproceedings{ushio-etal-2022-generative,
title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration",
author = "Ushio, Asahi and
Alva-Manchego, Fernando and
Camacho-Collados, Jose",
booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2022",
address = "Abu Dhabi, U.A.E.",
publisher = "Association for Computational Linguistics",
} | 2 | 4 | 2022-12-18T07:05:17 | ---
license: cc-by-sa-4.0
pretty_name: SQuAD for question generation
language: ko
multilinguality: monolingual
size_categories: 1k<n<10K
source_datasets: lmqg/qg_koquad
task_categories:
- text-generation
task_ids:
- language-modeling
tags:
- question-generation
---
# Dataset Card for "lmqg/qag_koquad"
## Dataset Description
- **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation)
- **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992)
- **Point of Contact:** [Asahi Ushio](http://asahiushio.com/)
### Dataset Summary
This is a question & answer generation dataset based on KOQuAD.
### Supported Tasks and Leaderboards
* `question-answer-generation`: The dataset is intended to be used to train a model for question & answer generation.
Success on this task is typically measured by achieving a high BLEU4/METEOR/ROUGE-L/BERTScore/MoverScore (see our paper for more detail).
### Languages
Korean (ko)
## Dataset Structure
An example of 'train' looks as follows.
```
{
"paragraph": ""3.13 만세운동" 은 1919년 3.13일 전주에서 일어난 만세운동이다. 지역 인사들과 함께 신흥학교 학생들이 주도적인 역할을 하며, 만세운동을 이끌었다. 박태련, 김신극 등 전주 지도자들은 군산에서 4일과 5일 독립만세 시위가 감행됐다는 소식에 듣고 준비하고 있었다. 천도교와 박태련 신간회 총무집에서 필요한 태극기를 인쇄하기로 했었다. 서울을 비롯한 다른 지방에서 시위가 계속되자 일본경찰은 신흥학교와 기전학교를 비롯한 전주시내 학교에 강제 방학조치를 취했다. 이에 최종삼 등 신흥학교 학생 5명은 밤을 이용해 신흥학교 지하실에서 태극기 등 인쇄물을 만들었다. 준비를 마친 이들은 13일 장터로 모이기 시작했고, 채소가마니로 위장한 태극기를 장터로 실어 나르고 거사 직전 시장 입구인 완산동과 전주교 건너편에서 군중들에게 은밀히 배부했다. 낮 12시20분께 신흥학교와 기전학교 학생 및 천도교도 등은 태극기를 들고 만세를 불렀다. 남문 밖 시장, 제2보통학교(현 완산초등학교)에서 모여 인쇄물을 뿌리며 시가지로 구보로 행진했다. 시위는 오후 11시까지 서너차례 계속됐다. 또 다음날 오후 3시에도 군중이 모여 만세를 불렀다. 이후 고형진, 남궁현, 김병학, 김점쇠, 이기곤, 김경신 등 신흥학교 학생들은 시위를 주도했다는 혐의로 모두 실형 1년을 언도 받았다. 이외 신흥학교 학생 3명은 일제의 고문에 옥사한 것으로 알려졌다. 또 시위를 지도한 김인전 목사는 이후 중국 상해로 거처를 옮겨 임시정부에서 활동했다. 현재 신흥학교 교문 옆에 만세운동 기념비가 세워져 있다.",
"questions": [ "만세운동 기념비가 세워져 있는 곳은?", "일본경찰의 강제 방학조치에도 불구하고 학생들은 신흥학교 지하실에 모여서 어떤 인쇄물을 만들었는가?", "여러 지방에서 시위가 일어나자 일본경찰이 전주시내 학교에 감행한 조치는 무엇인가?", "지역인사들과 신흥고등학교 학생들이 주도적인 역할을 한 3.13 만세운동이 일어난 해는?", "신흥학교 학생들은 시위를 주도했다는 혐의로 모두 실형 몇년을 언도 받았는가?", "만세운동에서 주도적인 역할을 한 이들은?", "1919년 3.1 운동이 일어난 지역은 어디인가?", "3.13 만세운동이 일어난 곳은?" ],
"answers": [ "신흥학교 교문 옆", "태극기", "강제 방학조치", "1919년", "1년", "신흥학교 학생들", "전주", "전주" ],
"questions_answers": "question: 만세운동 기념비가 세워져 있는 곳은?, answer: 신흥학교 교문 옆 | question: 일본경찰의 강제 방학조치에도 불구하고 학생들은 신흥학교 지하실에 모여서 어떤 인쇄물을 만들었는가?, answer: 태극기 | question: 여러 지방에서 시위가 일어나자 일본경찰이 전주시내 학교에 감행한 조치는 무엇인가?, answer: 강제 방학조치 | question: 지역인사들과 신흥고등학교 학생들이 주도적인 역할을 한 3.13 만세운동이 일어난 해는?, answer: 1919년 | question: 신흥학교 학생들은 시위를 주도했다는 혐의로 모두 실형 몇년을 언도 받았는가?, answer: 1년 | question: 만세운동에서 주도적인 역할을 한 이들은?, answer: 신흥학교 학생들 | question: 1919년 3.1 운동이 일어난 지역은 어디인가?, answer: 전주 | question: 3.13 만세운동이 일어난 곳은?, answer: 전주"
}
```
The data fields are the same among all splits.
- `questions`: a `list` of `string` features.
- `answers`: a `list` of `string` features.
- `paragraph`: a `string` feature.
- `questions_answers`: a `string` feature.
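The flattened `questions_answers` string can be split back into pairs. A minimal sketch, assuming the `' | '` pair separator and the `question: …, answer: …` layout shown in the example above:

```python
def parse_questions_answers(text):
    """Recover (question, answer) pairs from the flattened
    `questions_answers` field.

    Pairs are joined with ' | ' and each pair reads
    'question: <q>, answer: <a>'. rsplit is used so that commas
    inside the question text are preserved.
    """
    pairs = []
    for chunk in text.split(" | "):
        question, answer = chunk.rsplit(", answer: ", 1)
        pairs.append((question.removeprefix("question: "), answer))
    return pairs


example = (
    "question: 3.13 만세운동이 일어난 곳은?, answer: 전주"
    " | question: 만세운동 기념비가 세워져 있는 곳은?, answer: 신흥학교 교문 옆"
)
print(parse_questions_answers(example))
```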
## Data Splits
|train|validation|test |
|----:|---------:|----:|
|9600 | 960 | 4442|
## Citation Information
```
@inproceedings{ushio-etal-2022-generative,
title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration",
author = "Ushio, Asahi and
Alva-Manchego, Fernando and
Camacho-Collados, Jose",
booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2022",
address = "Abu Dhabi, U.A.E.",
publisher = "Association for Computational Linguistics",
}
``` | 3,635 | [
[
-0.055877685546875,
-0.060638427734375,
0.033538818359375,
0.0112457275390625,
-0.0282135009765625,
0.0025806427001953125,
0.01812744140625,
-0.0113525390625,
0.03228759765625,
0.023712158203125,
-0.04046630859375,
-0.0303955078125,
-0.033355712890625,
0.012... |
lmqg/qag_jaquad | 2022-12-18T07:54:08.000Z | [
"task_categories:text-generation",
"task_ids:language-modeling",
"multilinguality:monolingual",
"size_categories:1k<n<10K",
"source_datasets:lmqg/qg_jaquad",
"language:ja",
"license:cc-by-sa-4.0",
"question-generation",
"arxiv:2210.03992",
"region:us"
] | lmqg | Question & answer generation dataset based on SQuAD. | @inproceedings{ushio-etal-2022-generative,
title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration",
author = "Ushio, Asahi and
Alva-Manchego, Fernando and
Camacho-Collados, Jose",
booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2022",
address = "Abu Dhabi, U.A.E.",
publisher = "Association for Computational Linguistics",
} | 0 | 4 | 2022-12-18T07:05:33 | ---
license: cc-by-sa-4.0
pretty_name: SQuAD for question generation
language: ja
multilinguality: monolingual
size_categories: 1k<n<10K
source_datasets: lmqg/qg_jaquad
task_categories:
- text-generation
task_ids:
- language-modeling
tags:
- question-generation
---
# Dataset Card for "lmqg/qag_jaquad"
## Dataset Description
- **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation)
- **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992)
- **Point of Contact:** [Asahi Ushio](http://asahiushio.com/)
### Dataset Summary
This is a question & answer generation dataset based on JAQuAD.
### Supported Tasks and Leaderboards
* `question-answer-generation`: The dataset is intended to be used to train a model for question & answer generation.
Success on this task is typically measured by achieving a high BLEU4/METEOR/ROUGE-L/BERTScore/MoverScore (see our paper for more detail).
### Languages
Japanese (ja)
## Dataset Structure
An example of 'train' looks as follows.
```
{
"paragraph": ""Nerdilinga"は898年にカロリング朝の王領として初めて文献に記録されている。レーゲンスブルク司教の統治下でネルトリンゲンは市場町に成長していった。1215年にネルトリンゲンは皇帝フリードリヒ2世から都市権を与えられ、帝国自由都市となった。この年に最初の市壁が築かれた。その縄張りは現在も街の地図に見て取れる。1219年、ネルトリンゲンの聖霊降臨祭についての最も古い文献上の記録が遺されている。重要な交易路が交差するこの都市は穀物、家畜、織物、毛皮、金属製品の主要な集散地に発展していった。ネルトリンゲンはフランクフルトと並ぶドイツで最も重要な遠距離交易都市の一つとなったのである。",
"questions": [ "1215年にネルトリンゲンは誰から都市権を与えられ、帝国自由都市となったか。", "\"Nerdilinga\"の最初の記録は何年のものですか。" ],
"answers": [ "皇帝フリードリヒ2世", "898年" ],
"questions_answers": "question: 1215年にネルトリンゲンは誰から都市権を与えられ、帝国自由都市となったか。, answer: 皇帝フリードリヒ2世 | question: "Nerdilinga"の最初の記録は何年のものですか。, answer: 898年"
}
```
The data fields are the same among all splits.
- `questions`: a `list` of `string` features.
- `answers`: a `list` of `string` features.
- `paragraph`: a `string` feature.
- `questions_answers`: a `string` feature.
## Data Splits
|train|validation|test |
|----:|---------:|----:|
|9508| 1431 | 3050|
## Citation Information
```
@inproceedings{ushio-etal-2022-generative,
title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration",
author = "Ushio, Asahi and
Alva-Manchego, Fernando and
Camacho-Collados, Jose",
booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2022",
address = "Abu Dhabi, U.A.E.",
publisher = "Association for Computational Linguistics",
}
``` | 2,510 | [
[
-0.048736572265625,
-0.077880859375,
0.02508544921875,
0.0032291412353515625,
-0.0290374755859375,
-0.01345062255859375,
-0.0150909423828125,
-0.01439666748046875,
0.02862548828125,
0.034149169921875,
-0.05535888671875,
-0.039764404296875,
-0.0198974609375,
... |
fewshot-goes-multilingual/cs_csfd-movie-reviews | 2022-12-18T21:30:56.000Z | [
"task_categories:text-classification",
"task_ids:sentiment-classification",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:cs",
"license:cc-by-sa-4.0",
"movie reviews",
"rat... | fewshot-goes-multilingual | null | null | 0 | 4 | 2022-12-18T20:05:15 | ---
annotations_creators:
- crowdsourced
language:
- cs
language_creators:
- crowdsourced
license:
- cc-by-sa-4.0
multilinguality:
- monolingual
pretty_name: CSFD movie reviews (Czech)
size_categories:
- 10K<n<100K
source_datasets:
- original
tags:
- movie reviews
- rating prediction
task_categories:
- text-classification
task_ids:
- sentiment-classification
---
# Dataset Card for CSFD movie reviews (Czech)
## Dataset Description
The dataset contains user reviews from Czech/Slovak movie databse website <https://csfd.cz>.
Each review contains text, rating, date, and basic information about the movie (or TV series).
The dataset has in total (train+validation+test) 30,000 reviews. The data is balanced - each rating has approximately the same frequency.
## Dataset Features
Each sample contains:
- `review_id`: unique string identifier of the review.
- `rating_str`: string representation of the rating (from "0/5" to "5/5")
- `rating_int`: integer representation of the rating (from 0 to 5)
- `date`: date of publishing the review (just date, no time nor timezone)
- `comment_language`: language of the review (always "cs")
- `comment`: the string of the review
- `item_title`: title of the reviewed item
- `item_year`: publishing year of the item (string, can also be a range)
- `item_kind`: kind of the item - either "film" or "seriál"
- `item_genres`: list of genres of the item
- `item_directors`: list of director names of the item
- `item_screenwriters`: list of screenwriter names of the item
- `item_cast`: list of actors and actresses in the item
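The two rating fields are redundant by construction. A minimal sketch converting `rating_str` to the integer form, assuming the `"0/5"`…`"5/5"` format described above:

```python
def rating_str_to_int(rating_str):
    """Convert a rating string like '4/5' to its integer value (0-5)."""
    value, scale = rating_str.split("/")
    if scale != "5":
        raise ValueError(f"unexpected rating scale: {rating_str!r}")
    return int(value)


print(rating_str_to_int("4/5"))  # 4
print(rating_str_to_int("0/5"))  # 0
```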
## Dataset Source
The data was mined and sampled from the <https://csfd.cz> website.
Make sure to comply with the website operator's terms and conditions when using the data.
| 1,775 | [
[
-0.037811279296875,
-0.0229949951171875,
0.00553131103515625,
0.047576904296875,
-0.06005859375,
0.004810333251953125,
-0.0181121826171875,
-0.005458831787109375,
0.01141357421875,
0.05340576171875,
-0.0709228515625,
-0.07574462890625,
-0.0252685546875,
0.02... |
IIT-K/CISLR | 2022-12-20T08:39:26.000Z | [
"multilinguality:translation",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:sgn",
"license:afl-3.0",
"Indian Sign Language",
"Sign Language Recognition",
"region:us"
] | IIT-K | null | null | 1 | 4 | 2022-12-20T07:42:08 | ---
language:
- sgn
license:
- afl-3.0
multilinguality:
- translation
pretty_name: CISLR
size_categories:
- 1K<n<10K
source_datasets:
- original
tags:
- Indian Sign Language
- Sign Language Recognition
---
# CISLR: Corpus for Indian Sign Language Recognition
This repository contains the Indian Sign Language Dataset proposed in the following paper
> **Paper:** CISLR: Corpus for Indian Sign Language Recognition https://preview.aclanthology.org/emnlp-22-ingestion/2022.emnlp-main.707/
> **Authors:** Abhinav Joshi, Ashwani Bhat, Pradeep S, Priya Gole, Shashwat Gupta, Shreyansh Agarwal, Ashutosh Modi <br>
>
> **Abstract:** *Indian Sign Language, though used by a diverse community, still lacks well-annotated resources for developing systems that would enable sign language processing. In recent years researchers have actively worked for sign languages like American Sign Languages, however, Indian Sign language is still far from data-driven tasks like machine translation. To address this gap, in this paper, we introduce a new dataset CISLR (Corpus for Indian Sign Language Recognition) for word-level recognition in Indian Sign Language using videos. The corpus has a large vocabulary of around 4700 words covering different topics and domains. Further, we propose a baseline model for word recognition from sign language videos. To handle the low resource problem in the Indian Sign Language, the proposed model consists of a prototype-based one-shot learner that leverages resource rich American Sign Language to learn generalized features for improving predictions in Indian Sign Language. Our experiments show that gesture features learned in another sign language can help perform one-shot predictions in CISLR.*
## Directory Structure
```
.
├── dataset.csv            # list of all videos with categorical annotations
├── prototype.csv          # files used as prototypes
├── test.csv               # files used as testset
├── CISLR_v1.5-a_videos    # dataset videos
│   ├── __Rz2PaTB1c.mp4
│   ├── _2TlWc7fctg.mp4
│   ├── ...
│   └── zZVuyuVTFW0.mp4
└── I3D_features.pkl       # extracted Inception3D features
```
## Citation
> Abhinav Joshi, Ashwani Bhat, Pradeep S, Priya Gole, Shashwat Gupta, Shreyansh Agarwal, and Ashutosh Modi. 2022. CISLR: Corpus for Indian Sign Language Recognition. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 10357–10366, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.
## Acknowledgments
This project was a part of IIT Kanpur's [SURGE](https://surge.iitk.ac.in/) Initiative. | 2,670 | [
[
0.0009784698486328125,
-0.008148193359375,
-0.00594329833984375,
-0.005580902099609375,
-0.03857421875,
0.0144805908203125,
0.002841949462890625,
-0.03338623046875,
0.01111602783203125,
0.0195465087890625,
-0.03515625,
-0.043609619140625,
-0.07659912109375,
... |
RuyuanWan/Dynasent_Disagreement | 2022-12-26T22:14:00.000Z | [
"task_categories:text-classification",
"annotations_creators:crowdsourced",
"language_creators:find",
"multilinguality:monolingual",
"source_datasets:extended",
"language:en",
"region:us"
] | RuyuanWan | null | null | 0 | 4 | 2022-12-26T21:32:44 | ---
annotations_creators:
- crowdsourced
language:
- en
language_creators:
- find
license: []
multilinguality:
- monolingual
pretty_name: RuyuanWan/Dynasent_Disagreement
size_categories: []
source_datasets:
- extended
tags: []
task_categories:
- text-classification
task_ids: []
---
This dataset is a processed version of the Dynamic Sentiment Analysis (DynaSent) dataset, including the text and annotation disagreement labels. <br>
Paper: Everyone's Voice Matters: Quantifying Annotation Disagreement Using Demographic Information <br>
Authors: Ruyuan Wan, Jaehyung Kim, Dongyeop Kang <br>
Github repo: https://github.com/minnesotanlp/Quantifying-Annotation-Disagreement <br>
Source Data: [Dynamic Sentiment Analysis Dataset(Potts et al. 2021)](https://github.com/cgpotts/dynasent) <br> | 784 | [
[
-0.031585693359375,
-0.04046630859375,
0.0302276611328125,
0.038909912109375,
-0.01904296875,
0.035003662109375,
0.00021409988403320312,
-0.01678466796875,
0.039031982421875,
0.01751708984375,
-0.06744384765625,
-0.05078125,
-0.050872802734375,
0.01750183105... |
RuyuanWan/Politeness_Disagreement | 2022-12-26T22:21:56.000Z | [
"task_categories:text-classification",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"source_datasets:extended",
"language:en",
"region:us"
] | RuyuanWan | null | null | 0 | 4 | 2022-12-26T21:44:39 | ---
annotations_creators:
- crowdsourced
language:
- en
language_creators:
- crowdsourced
license: []
multilinguality:
- monolingual
pretty_name: RuyuanWan/Politeness_Disagreement
size_categories: []
source_datasets:
- extended
tags: []
task_categories:
- text-classification
task_ids: []
---
This dataset is a processed version of the Stanford Politeness Corpus (Wikipedia), including the text and annotation disagreement labels. <br>
Paper: Everyone's Voice Matters: Quantifying Annotation Disagreement Using Demographic Information <br>
Authors: Ruyuan Wan, Jaehyung Kim, Dongyeop Kang <br>
Github repo: https://github.com/minnesotanlp/Quantifying-Annotation-Disagreement <br>
Source Data: [Wikipedia Politeness Corpus(Danescu-Niculescu-Mizil et al. 2013)](https://convokit.cornell.edu/documentation/wiki_politeness.html) <br>
| 827 | [
[
-0.04644775390625,
-0.04510498046875,
0.02069091796875,
0.033447265625,
0.0031833648681640625,
-0.031585693359375,
-0.0256805419921875,
-0.0265655517578125,
0.0328369140625,
0.025848388671875,
-0.047271728515625,
-0.039337158203125,
-0.033782958984375,
0.034... |
and-effect/mdk_gov_data_titles_clf | 2023-05-25T12:43:42.000Z | [
"task_categories:text-classification",
"annotations_creators:crowdsourced",
"language_creators:other",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:extended",
"language:de",
"license:cc-by-4.0",
"region:us"
] | and-effect | null | null | 1 | 4 | 2023-01-04T16:20:31 | ---
annotations_creators: crowdsourced
language_creators: other
language: de
multilinguality: monolingual
size_categories:
- 1K<n<10K
source_datasets: extended
task_categories:
- text-classification
pretty_name: GOVDATA dataset titles labelled
license: cc-by-4.0
---
# Dataset Card for MDK
This dataset was created as part of the [Bertelsmann Foundation's](https://www.bertelsmann-stiftung.de/de/startseite)
[Musterdatenkatalog (MDK)](https://www.bertelsmann-stiftung.de/de/unsere-projekte/smart-country/musterdatenkatalog) project. The MDK provides an overview of Open Data in German municipalities. It is intended to help municipalities, as well as data analysts and journalists, get an overview of the topics and the extent to which cities have already published datasets.
## Dataset Description
### Dataset Summary
The dataset is an annotated corpus of 1258 records based on the metadata of the datasets from [GOVDATA](https://www.govdata.de/). GovData is a data portal that aims to make cities' data available in a standardized way.
The annotation maps the titles of the datasets to a taxonomy containing categories such as 'Verkehr - KFZ - Messung' or 'Abfallwirtschaft - Abfallkalender'. Through this assignment, the names of the datasets can be normalized and grouped. In total, the taxonomy consists of 250 categories. Each category is divided into two levels:
- Level 1: "Thema" (topic)

- Level 2: "Bezeichnung" (label).
The first dash divides the levels. For example:

You can find an interactive view of the taxonomy with all labels [here](https://huggingface.co/spaces/and-effect/Musterdatenkatalog).
The repository contains a small and a large version of the data. The small version is for testing purposes only. The large dataset contains all 1258 entries. Both the large and the small datasets are split into a training and a testing dataset. In addition, the large dataset folder contains a validation dataset that was annotated separately. The validation dataset is an additional dataset that we created for evaluating the algorithm. It also consists of data from GOVDATA and has the same structure as the test and training datasets.
### Languages
The language data is German.
## Dataset Structure
### Data Fields
| dataset | size |
|-----|-----|
| small/train | 18.96 KB |
| small/test | 6.13 KB |
| large/train | 517.77 KB |
| large/test | 118.66 KB |
An example looks as follows:
```json
{
"doc_id": "a063d3b7-4c09-421e-9849-073dc8939e76",
"title": "Dienstleistungen Alphabetisch sortiert April 2019",
"description": "CSV-Datei mit allen Dienstleistungen der Kreisverwaltung Kleve. Sortiert nach AlphabetStand 01.04.2019",
"labels_name": "Sonstiges - Sonstiges",
"labels": 166
}
```
The data fields are the same among all splits:
- doc_id (uuid): identifier for each document
- title (str): dataset title from GOVDATA
- description (str): description of the dataset
- labels_name (str): annotation with labels from taxonomy
- labels (int): labels indexed from 0 to 250
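Since the card states that the first ' - ' in `labels_name` separates the two taxonomy levels, the 'Thema' and 'Bezeichnung' parts can be recovered with a single split. A minimal sketch:

```python
def split_label(labels_name):
    """Split a taxonomy label into level 1 ('Thema') and
    level 2 ('Bezeichnung'); the first ' - ' divides the levels."""
    thema, bezeichnung = labels_name.split(" - ", 1)
    return thema, bezeichnung


print(split_label("Verkehr - KFZ - Messung"))  # ('Verkehr', 'KFZ - Messung')
print(split_label("Sonstiges - Sonstiges"))    # ('Sonstiges', 'Sonstiges')
```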
### Data Splits
| dataset_name | dataset_splits | train_size | test_size | validation_size |
|-----|-----|-----|-----|-----|
| dataset_large | train, test, validation | 1009 | 249 | 101 |
| dataset_small | train, test | 37 | 13 | None |
## Dataset Creation
The dataset was created through multiple manual annotation rounds.
### Source Data
The data comes from [GOVDATA](https://www.govdata.de/), an open data portal of Germany. It aims to provide central access to administrative data from the federal, state and local governments, making the data available in one place and thus easier to use. The data available is structured in 13 categories ranging from finance to international topics, health, education, and science and technology. [GOVDATA](https://www.govdata.de/) offers a [CKAN API](https://ckan.govdata.de/) to make requests and provides metadata for each data entry.
#### Initial Data Collection and Normalization
Several sources were used for the annotation process. A sample was collected from [GOVDATA](https://www.govdata.de/) with actual datasets. For the sample, 50 records were drawn for each group. Additional samples are from the previous version of the [MDK](https://github.com/bertelsmannstift/Musterdatenkatalog) that contain older data from [GOVDATA](https://www.govdata.de/). Some of the datasets from the old [MDK](https://github.com/bertelsmannstift/Musterdatenkatalog) already contained an annotation, but since the taxonomy is not the same, the data were re-annotated. A sample was drawn from each source (randomly and by manual selection), resulting in a total of 1258 titles.
### Annotations
#### Annotation process
The data was annotated in four rounds and one additional test round. In each round, a percentage of the data was allocated to all annotators to calculate the inter-annotator agreement using Cohen's kappa.
The following table shows the results of the annotation rounds:
| | **Cohens Kappa** | **Number of Annotators** | **Number of Documents** |
| ------------------ | :--------------: | ------------------------ | ----------------------- |
| **Test Round** | .77 | 6 | 50 |
| **Round 1** | .41 | 2 | 120 |
| **Round 2** | .76 | 4 | 480 |
| **Round 3** | .71 | 3 | 420 |
| **Round 4** | .87 | 2 | 416 |
| **Validation set** | - | 1 | 177 |
In addition, a validation set was generated by the dataset curators.
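The agreement figures above are Cohen's kappa values. For two annotators, the statistic can be sketched in plain Python as follows; the two label lists are illustrative only and not taken from the actual annotation rounds:

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa for two annotators labelling the same documents."""
    n = len(labels_a)
    # Observed agreement: share of documents with identical labels.
    p_o = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Chance agreement from each annotator's marginal label distribution.
    freq_a, freq_b = Counter(labels_a), Counter(labels_b)
    p_e = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Illustrative annotations for six documents (not real data from the table above).
a = ["Sonstiges", "Bebauungsplan", "Abfallkalender", "Sonstiges", "Messung", "Messung"]
b = ["Sonstiges", "Bebauungsplan", "Sonstiges", "Sonstiges", "Messung", "Bebauungsplan"]
print(round(cohens_kappa(a, b), 2))  # → 0.54
```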
#### Who are the annotators?
Annotators are all employees of [&effect data solutions GmbH](https://www.and-effect.com/). The taxonomy, as well as rules and problems in the assignment of datasets, was discussed and debated in advance of the development of the taxonomy and the annotation in two workshops with experts and representatives of the open data community and local governments, as well as with the project members of the [Musterdatenkatalog](https://www.bertelsmann-stiftung.de/de/unsere-projekte/smart-country/musterdatenkatalog) from the Bertelsmann Foundation. On this basis, the [&effect](https://www.and-effect.com/) employees were instructed in the annotation by the curators of the datasets.
## Considerations for Using the Data
The dataset for the annotation process was generated by sampling from [GOVDATA](https://www.govdata.de/) and data previously collected from GOVDATA. The data on GOVDATA is continuously updated and data can get deleted. Thus, there is no guarantee that data entries included here will still be available.
### Social Impact of Dataset
Since 2017, the German government has been promoting systematic and free access to public administration data, starting with the first open data laws for municipalities. In this way, it aims to contribute to the development of a [knowledge society](https://www.verwaltung-innovativ.de/DE/Startseite/startseite_node.html). The categorization of the open data of cities in a standardized and detailed taxonomy supports this process of making municipal data accessible in a free, open and structured way.
### Discussion of Biases (non-ethical)
The data was mainly sampled at random from the categories available on GOVDATA. Although all categories were sampled, there is still some imbalance in the data. For example, entries for the concept 'Raumordnung, Raumplanung und Raumentwicklung - Bebauungsplan' make up the majority class. Although manual selection of data was also used, data entries could not be found for all previously defined concepts. However, for 95% of the concepts at least one data entry is available.
## Additional Information
### Dataset Curators
Friederike Bauer
Rahkakavee Baskaran
### Licensing Information
CC BY 4.0 | 8,082 | [
[
-0.059539794921875,
-0.0423583984375,
0.0269622802734375,
-0.0011577606201171875,
-0.0308685302734375,
-0.0300140380859375,
-0.01580810546875,
-0.031829833984375,
0.03961181640625,
0.0458984375,
-0.032928466796875,
-0.07183837890625,
-0.046539306640625,
0.01... |
irds/clueweb12 | 2023-01-05T02:56:45.000Z | [
"task_categories:text-retrieval",
"region:us"
] | irds | null | null | 0 | 4 | 2023-01-05T02:56:39 | ---
pretty_name: '`clueweb12`'
viewer: false
source_datasets: []
task_categories:
- text-retrieval
---
# Dataset Card for `clueweb12`
The `clueweb12` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/clueweb12#clueweb12).
# Data
This dataset provides:
- `docs` (documents, i.e., the corpus); count=733,019,372
This dataset is used by: [`clueweb12_touche-2020-task-2`](https://huggingface.co/datasets/irds/clueweb12_touche-2020-task-2), [`clueweb12_touche-2021-task-2`](https://huggingface.co/datasets/irds/clueweb12_touche-2021-task-2)
## Usage
```python
from datasets import load_dataset
docs = load_dataset('irds/clueweb12', 'docs')
for record in docs:
record # {'doc_id': ..., 'url': ..., 'date': ..., 'http_headers': ..., 'body': ..., 'body_content_type': ...}
```
Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the
data in 🤗 Dataset format.
| 1,060 | [
[
-0.0227203369140625,
0.0037746429443359375,
0.0042877197265625,
0.021728515625,
-0.01448822021484375,
-0.0123748779296875,
-0.00015807151794433594,
-0.039459228515625,
0.0309906005859375,
0.00998687744140625,
-0.06500244140625,
-0.05706787109375,
-0.032775878906... |
irds/disks45_nocr_trec-robust-2004 | 2023-01-05T03:01:45.000Z | [
"task_categories:text-retrieval",
"source_datasets:irds/disks45_nocr",
"region:us"
] | irds | null | null | 0 | 4 | 2023-01-05T03:01:40 | ---
pretty_name: '`disks45/nocr/trec-robust-2004`'
viewer: false
source_datasets: ['irds/disks45_nocr']
task_categories:
- text-retrieval
---
# Dataset Card for `disks45/nocr/trec-robust-2004`
The `disks45/nocr/trec-robust-2004` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/disks45#disks45/nocr/trec-robust-2004).
# Data
This dataset provides:
- `queries` (i.e., topics); count=250
- `qrels`: (relevance assessments); count=311,410
- For `docs`, use [`irds/disks45_nocr`](https://huggingface.co/datasets/irds/disks45_nocr)
## Usage
```python
from datasets import load_dataset
queries = load_dataset('irds/disks45_nocr_trec-robust-2004', 'queries')
for record in queries:
record # {'query_id': ..., 'title': ..., 'description': ..., 'narrative': ...}
qrels = load_dataset('irds/disks45_nocr_trec-robust-2004', 'qrels')
for record in qrels:
record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...}
```
Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the
data in 🤗 Dataset format.
## Citation Information
```
@misc{Voorhees1996Disks45,
title = {NIST TREC Disks 4 and 5: Retrieval Test Collections Document Set},
author = {Ellen M. Voorhees},
doi = {10.18434/t47g6m},
year = {1996},
publisher = {National Institute of Standards and Technology}
}
@inproceedings{Voorhees2004Robust,
title={Overview of the TREC 2004 Robust Retrieval Track},
author={Ellen Voorhees},
booktitle={TREC},
year={2004}
}
@inproceedings{Huston2014ACO,
title={A Comparison of Retrieval Models using Term Dependencies},
author={Samuel Huston and W. Bruce Croft},
booktitle={CIKM},
year={2014}
}
```
| 1,839 | [
[
-0.0252227783203125,
-0.0306854248046875,
0.017608642578125,
-0.01392364501953125,
-0.00644683837890625,
0.0009102821350097656,
0.007472991943359375,
-0.0005664825439453125,
0.028076171875,
0.031585693359375,
-0.036468505859375,
-0.06414794921875,
-0.01879882812... |
irds/gov_trec-web-2002_named-page | 2023-01-05T03:04:44.000Z | [
"task_categories:text-retrieval",
"source_datasets:irds/gov",
"region:us"
] | irds | null | null | 0 | 4 | 2023-01-05T03:04:38 | ---
pretty_name: '`gov/trec-web-2002/named-page`'
viewer: false
source_datasets: ['irds/gov']
task_categories:
- text-retrieval
---
# Dataset Card for `gov/trec-web-2002/named-page`
The `gov/trec-web-2002/named-page` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/gov#gov/trec-web-2002/named-page).
# Data
This dataset provides:
- `queries` (i.e., topics); count=150
- `qrels`: (relevance assessments); count=170
- For `docs`, use [`irds/gov`](https://huggingface.co/datasets/irds/gov)
## Usage
```python
from datasets import load_dataset
queries = load_dataset('irds/gov_trec-web-2002_named-page', 'queries')
for record in queries:
record # {'query_id': ..., 'text': ...}
qrels = load_dataset('irds/gov_trec-web-2002_named-page', 'qrels')
for record in qrels:
record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...}
```
Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the
data in 🤗 Dataset format.
## Citation Information
```
@inproceedings{Craswell2002TrecWeb,
title={Overview of the TREC-2002 Web Track},
author={Nick Craswell and David Hawking},
booktitle={TREC},
year={2002}
}
```
| 1,339 | [
[
-0.0218658447265625,
-0.01812744140625,
0.01324462890625,
-0.003326416015625,
-0.01541900634765625,
-0.004180908203125,
-0.0036563873291015625,
0.012298583984375,
0.0254669189453125,
0.032989501953125,
-0.040283203125,
-0.0648193359375,
-0.0128021240234375,
... |
irds/msmarco-document-v2_trec-dl-2019 | 2023-01-05T03:41:24.000Z | [
"task_categories:text-retrieval",
"source_datasets:irds/msmarco-document-v2",
"region:us"
] | irds | null | null | 0 | 4 | 2023-01-05T03:41:18 | ---
pretty_name: '`msmarco-document-v2/trec-dl-2019`'
viewer: false
source_datasets: ['irds/msmarco-document-v2']
task_categories:
- text-retrieval
---
# Dataset Card for `msmarco-document-v2/trec-dl-2019`
The `msmarco-document-v2/trec-dl-2019` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/msmarco-document-v2#msmarco-document-v2/trec-dl-2019).
# Data
This dataset provides:
- `queries` (i.e., topics); count=200
- `qrels`: (relevance assessments); count=13,940
- For `docs`, use [`irds/msmarco-document-v2`](https://huggingface.co/datasets/irds/msmarco-document-v2)
This dataset is used by: [`msmarco-document-v2_trec-dl-2019_judged`](https://huggingface.co/datasets/irds/msmarco-document-v2_trec-dl-2019_judged)
## Usage
```python
from datasets import load_dataset
queries = load_dataset('irds/msmarco-document-v2_trec-dl-2019', 'queries')
for record in queries:
record # {'query_id': ..., 'text': ...}
qrels = load_dataset('irds/msmarco-document-v2_trec-dl-2019', 'qrels')
for record in qrels:
record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...}
```
Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the
data in 🤗 Dataset format.
## Citation Information
```
@inproceedings{Craswell2019TrecDl,
title={Overview of the TREC 2019 deep learning track},
author={Nick Craswell and Bhaskar Mitra and Emine Yilmaz and Daniel Campos and Ellen Voorhees},
booktitle={TREC 2019},
year={2019}
}
@inproceedings{Bajaj2016Msmarco,
title={MS MARCO: A Human Generated MAchine Reading COmprehension Dataset},
author={Payal Bajaj, Daniel Campos, Nick Craswell, Li Deng, Jianfeng Gao, Xiaodong Liu, Rangan Majumder, Andrew McNamara, Bhaskar Mitra, Tri Nguyen, Mir Rosenberg, Xia Song, Alina Stoica, Saurabh Tiwary, Tong Wang},
booktitle={InCoCo@NIPS},
year={2016}
}
```
| 2,018 | [
[
-0.0216827392578125,
-0.02008056640625,
0.0179443359375,
-0.0086669921875,
-0.0164947509765625,
-0.0038604736328125,
-0.00909423828125,
-0.01174163818359375,
0.0146636962890625,
0.033447265625,
-0.043121337890625,
-0.0634765625,
-0.0300140380859375,
0.021179... |
irds/wikiclir_de | 2023-01-05T03:57:33.000Z | [
"task_categories:text-retrieval",
"region:us"
] | irds | null | null | 0 | 4 | 2023-01-05T03:57:28 | ---
pretty_name: '`wikiclir/de`'
viewer: false
source_datasets: []
task_categories:
- text-retrieval
---
# Dataset Card for `wikiclir/de`
The `wikiclir/de` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/wikiclir#wikiclir/de).
# Data
This dataset provides:
- `docs` (documents, i.e., the corpus); count=2,091,278
- `queries` (i.e., topics); count=938,217
- `qrels`: (relevance assessments); count=5,550,454
## Usage
```python
from datasets import load_dataset
docs = load_dataset('irds/wikiclir_de', 'docs')
for record in docs:
record # {'doc_id': ..., 'title': ..., 'text': ...}
queries = load_dataset('irds/wikiclir_de', 'queries')
for record in queries:
record # {'query_id': ..., 'text': ...}
qrels = load_dataset('irds/wikiclir_de', 'qrels')
for record in qrels:
record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...}
```
Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the
data in 🤗 Dataset format.
## Citation Information
```
@inproceedings{sasaki-etal-2018-cross,
title = "Cross-Lingual Learning-to-Rank with Shared Representations",
author = "Sasaki, Shota and
Sun, Shuo and
Schamoni, Shigehiko and
Duh, Kevin and
Inui, Kentaro",
booktitle = "Proceedings of the 2018 Conference of the North {A}merican Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers)",
month = jun,
year = "2018",
address = "New Orleans, Louisiana",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/N18-2073",
doi = "10.18653/v1/N18-2073",
pages = "458--463"
}
```
| 1,861 | [
[
-0.022369384765625,
-0.0289459228515625,
0.0074920654296875,
0.0014257431030273438,
-0.0074462890625,
-0.0028934478759765625,
-0.0288543701171875,
-0.01441192626953125,
0.0350341796875,
0.0173492431640625,
-0.026397705078125,
-0.058868408203125,
-0.041015625,
... |
irds/trec-cast_v1_2020 | 2023-01-05T04:03:31.000Z | [
"task_categories:text-retrieval",
"source_datasets:irds/trec-cast_v1",
"region:us"
] | irds | null | null | 0 | 4 | 2023-01-05T04:03:25 | ---
pretty_name: '`trec-cast/v1/2020`'
viewer: false
source_datasets: ['irds/trec-cast_v1']
task_categories:
- text-retrieval
---
# Dataset Card for `trec-cast/v1/2020`
The `trec-cast/v1/2020` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/trec-cast#trec-cast/v1/2020).
# Data
This dataset provides:
- `queries` (i.e., topics); count=216
- `qrels`: (relevance assessments); count=40,451
- For `docs`, use [`irds/trec-cast_v1`](https://huggingface.co/datasets/irds/trec-cast_v1)
This dataset is used by: [`trec-cast_v1_2020_judged`](https://huggingface.co/datasets/irds/trec-cast_v1_2020_judged)
## Usage
```python
from datasets import load_dataset
queries = load_dataset('irds/trec-cast_v1_2020', 'queries')
for record in queries:
record # {'query_id': ..., 'raw_utterance': ..., 'automatic_rewritten_utterance': ..., 'manual_rewritten_utterance': ..., 'manual_canonical_result_id': ..., 'topic_number': ..., 'turn_number': ...}
qrels = load_dataset('irds/trec-cast_v1_2020', 'qrels')
for record in qrels:
record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...}
```
Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the
data in 🤗 Dataset format.
## Citation Information
```
@inproceedings{Dalton2020Cast,
title={CAsT 2020: The Conversational Assistance Track Overview},
author={Jeffrey Dalton and Chenyan Xiong and Jamie Callan},
booktitle={TREC},
year={2020}
}
```
| 1,619 | [
[
-0.0241851806640625,
-0.0242462158203125,
0.00801849365234375,
0.0003571510314941406,
-0.02252197265625,
0.00012129545211791992,
-0.00293731689453125,
0.0067901611328125,
0.035858154296875,
0.03887939453125,
-0.048126220703125,
-0.0697021484375,
-0.0290069580078... |
aashsach/legaleval_rr | 2023-01-26T08:06:40.000Z | [
"region:us"
] | aashsach | SemEval 2023 Task LegalEval | @InProceedings{huggingface:dataset,
title = {A great new dataset},
author={huggingface, Inc.
},
year={2020}
} | 0 | 4 | 2023-01-05T10:20:02 | Entry not found | 15 | [
[
-0.021392822265625,
-0.01494598388671875,
0.05718994140625,
0.028839111328125,
-0.0350341796875,
0.046539306640625,
0.052490234375,
0.00507354736328125,
0.051361083984375,
0.01702880859375,
-0.052093505859375,
-0.01494598388671875,
-0.06036376953125,
0.03790... |
gabrielaltay/pubtator-central-bigbio-kb-2022-12-18 | 2023-01-07T05:51:13.000Z | [
"region:us"
] | gabrielaltay | null | null | 0 | 4 | 2023-01-07T05:19:49 | ---
dataset_info:
features:
- name: id
dtype: string
- name: document_id
dtype: string
- name: passages
list:
- name: id
dtype: string
- name: type
dtype: string
- name: text
sequence: string
- name: offsets
sequence:
list: int32
- name: entities
list:
- name: id
dtype: string
- name: type
dtype: string
- name: text
sequence: string
- name: offsets
sequence:
list: int32
- name: normalized
list:
- name: db_name
dtype: string
- name: db_id
dtype: string
- name: events
list:
- name: id
dtype: string
- name: type
dtype: string
- name: trigger
struct:
- name: text
sequence: string
- name: offsets
sequence:
list: int32
- name: arguments
list:
- name: role
dtype: string
- name: ref_id
dtype: string
- name: coreferences
list:
- name: id
dtype: string
- name: entity_ids
sequence: string
- name: relations
list:
- name: id
dtype: string
- name: type
dtype: string
- name: arg1_id
dtype: string
- name: arg2_id
dtype: string
- name: normalized
list:
- name: db_name
dtype: string
- name: db_id
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 101493304127
num_examples: 33653973
- name: validation
num_bytes: 2115702473
num_examples: 701124
- name: test
num_bytes: 2117460487
num_examples: 701125
download_size: 49786905438
dataset_size: 105726467087
---
# Dataset Card for "pubtator-central-bigbio-kb-2022-12-18"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 1,895 | [
[
-0.03619384765625,
-0.0211944580078125,
0.036468505859375,
0.0255584716796875,
-0.034423828125,
0.002666473388671875,
0.006633758544921875,
-0.0125732421875,
0.054443359375,
0.0411376953125,
-0.0496826171875,
-0.05194091796875,
-0.038543701171875,
0.01029205... |
Lemswasabi/luxembourgish-asr-rtl-lu | 2023-01-08T15:44:54.000Z | [
"language:lb",
"license:cc-by-nc-nd-4.0",
"region:us"
] | Lemswasabi | luxembourgish-asr-rtl-lu dataset is a speech corpus for the under-resourced Luxembourgish language. | null | 1 | 4 | 2023-01-08T15:29:50 | ---
license: cc-by-nc-nd-4.0
language:
- lb
---
# About the Speech Corpus
The `luxembourgish-asr-rtl-lu` dataset is a speech corpus for the under-resourced Luxembourgish language. The audio-transcription pairs were collected from [RTL.lu](http://www.rtl.lu/).
We used forced alignment to segment the audio files. The transcriptions were validated with the help of language experts at the [Center for the Luxembourgish Language](https://portal.education.lu/zls).
# Citation
```
@misc{lb-wav2vec2,
author = {Nguyen, Le Minh and Nayak, Shekhar and Coler, Matt.},
keywords = {Luxembourgish, multilingual speech recognition, language modelling, wav2vec 2.0 XLSR-53, under-resourced language},
title = {IMPROVING LUXEMBOURGISH SPEECH RECOGNITION WITH CROSS-LINGUAL SPEECH REPRESENTATIONS},
year = {2022},
copyright = {2023 IEEE}
}
```
# Copyright notice
Copyright © 2022 RTL.lu. All rights reserved. | 906 | [
[
-0.01910400390625,
-0.047454833984375,
-0.00411224365234375,
0.006084442138671875,
-0.0251922607421875,
0.020172119140625,
-0.03192138671875,
-0.038970947265625,
0.01206207275390625,
0.03216552734375,
-0.045989990234375,
-0.035247802734375,
-0.036407470703125,
... |
sustcsenlp/bn_emotion_speech_corpus | 2023-01-11T09:00:32.000Z | [
"task_categories:audio-classification",
"size_categories:1K<n<10K",
"language:bn",
"license:cc-by-4.0",
"region:us"
] | sustcsenlp | SUST Bangla Emotional Speech Coropus Dataset | @dataset{sadia_sultana_2021_4526477,
author = {Sadia Sultana},
title = {SUST Bangla Emotional Speech Corpus (SUBESCO)},
month = feb,
year = 2021,
note = {{This database was created as a part of PhD thesis
project of the author Sadia Sultana. It was
designed and developed by the author in the
Department of Computer Science and Engineering of
Shahjalal University of Science and Technology.
Financial grant was supported by the university.
If you use the dataset please cite SUBESCO and the
corresponding academic journal publication in Plos
One.}},
publisher = {Zenodo},
version = {version - 1.1},
doi = {10.5281/zenodo.4526477},
url = {https://doi.org/10.5281/zenodo.4526477}
} | 4 | 4 | 2023-01-10T15:49:12 | ---
license: cc-by-4.0
task_categories:
- audio-classification
language:
- bn
pretty_name: SUST BANGLA EMOTIONAL SPEECH CORPUS
size_categories:
- 1K<n<10K
---
# SUST BANGLA EMOTIONAL SPEECH CORPUS
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:** [SUBESCO PAPER](https://doi.org/10.1371/journal.pone.0250173)
- **Leaderboard:**
- **Point of Contact:** [Sadia Sultana](sadia-cse@sust.edu)
### Dataset Summary
SUBESCO is an audio-only emotional speech corpus of 7000 sentence-level utterances of the Bangla language. 20 professional actors (10 males and 10 females) participated in the recordings of 10 sentences for 7 target emotions. The emotions are Anger, Disgust, Fear, Happiness, Neutral, Sadness and Surprise. Total duration of the corpus is 7 hours 40 min 40 sec. Total size of the dataset is 2.03 GB. The dataset was evaluated by 50 raters (25 males, 25 females). Human perception test achieved a raw accuracy of 71%. All the details relating to creation, evaluation and analysis of SUBESCO have been described in the corresponding journal paper which has been published in Plos One.
https://doi.org/10.1371/journal.pone.0250173
### Downloading the data
```
from datasets import load_dataset
train = load_dataset("sustcsenlp/bn_emotion_speech_corpus",split="train")
```
### Naming Convention
Each audio file in the dataset has a unique name consisting of eight parts connected by underscores, in the following order: gender, speaker's serial number, speaker's name, unit of recording, unit number, emotion name, repeat number, and the file format.
For example, the filename F_02_MONIKA_S_1_NEUTRAL_5.wav refers to:
| Symbol | Meaning |
| ----------- | ----------- |
| F | Speaker Gender |
| 02 | Speaker Number |
| MONIKA | Speaker Name |
| S_1 | Sentence Number |
| NEUTRAL | Emotion |
| 5 | Take Number |
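Given this convention, the parts can be recovered from a filename with a few lines of Python; `parse_subesco_filename` is our helper name, not part of the corpus:

```python
def parse_subesco_filename(filename):
    """Split a SUBESCO filename into its parts (underscore convention above)."""
    stem, _, ext = filename.rpartition(".")
    gender, number, name, unit, unit_no, emotion, take = stem.split("_")
    return {
        "gender": gender,                 # F or M
        "speaker_number": number,
        "speaker_name": name,
        "sentence": f"{unit}_{unit_no}",  # e.g. S_1
        "emotion": emotion,
        "take": take,
        "format": ext,
    }

parts = parse_subesco_filename("F_02_MONIKA_S_1_NEUTRAL_5.wav")
```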
### Languages
This dataset contains Bangla Audio Data.
## Dataset Creation
This database was created as a part of the PhD thesis project of the author, Sadia Sultana. It was designed and developed by the author in the Department of Computer Science and Engineering of Shahjalal University of Science and Technology. A financial grant was provided by the university. If you use the dataset, please cite SUBESCO and the corresponding academic journal publication in Plos One.
### Citation Information
```
@dataset{sadia_sultana_2021_4526477,
author = {Sadia Sultana},
title = {SUST Bangla Emotional Speech Corpus (SUBESCO)},
month = feb,
year = 2021,
note = {{This database was created as a part of PhD thesis
project of the author Sadia Sultana. It was
designed and developed by the author in the
Department of Computer Science and Engineering of
Shahjalal University of Science and Technology.
Financial grant was supported by the university.
If you use the dataset please cite SUBESCO and the
corresponding academic journal publication in Plos
One.}},
publisher = {Zenodo},
version = {version - 1.1},
doi = {10.5281/zenodo.4526477},
url = {https://doi.org/10.5281/zenodo.4526477}
}
```
### Contributors
| Name | University |
| ----------- | ----------- |
| Sadia Sultana | Shahjalal University of Science and Technology |
| Dr. M. Zafar Iqbal | Shahjalal University of Science and Technology |
| Dr. M. Shahidur Rahman | Shahjalal University of Science and Technology |
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed] | 4,699 | [
[
-0.03363037109375,
-0.042083740234375,
-0.0079345703125,
0.0274505615234375,
-0.0400390625,
-0.01244354248046875,
-0.01386260986328125,
-0.02752685546875,
0.04388427734375,
0.0255279541015625,
-0.056915283203125,
-0.06396484375,
-0.046630859375,
0.0162658691... |
metaeval/lingnli | 2023-05-31T08:40:53.000Z | [
"task_categories:text-classification",
"language:en",
"license:unknown",
"region:us"
] | metaeval | null | null | 0 | 4 | 2023-01-11T20:59:56 | ---
language:
- en
task_categories:
- text-classification
license: unknown
---
https://github.com/Alicia-Parrish/ling_in_loop/
```bib
@inproceedings{parrish-etal-2021-putting-linguist,
title = "Does Putting a Linguist in the Loop Improve {NLU} Data Collection?",
author = "Parrish, Alicia and
Huang, William and
Agha, Omar and
Lee, Soo-Hwan and
Nangia, Nikita and
Warstadt, Alexia and
Aggarwal, Karmanya and
Allaway, Emily and
Linzen, Tal and
Bowman, Samuel R.",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2021",
month = nov,
year = "2021",
address = "Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.findings-emnlp.421",
doi = "10.18653/v1/2021.findings-emnlp.421",
pages = "4886--4901",
}
``` | 910 | [
[
-0.0170440673828125,
-0.04888916015625,
0.0213165283203125,
0.0123138427734375,
-0.0226593017578125,
0.017852783203125,
-0.0287322998046875,
-0.031585693359375,
0.05303955078125,
0.035369873046875,
0.0010957717895507812,
-0.043304443359375,
-0.02154541015625,
... |
poolrf2001/FaceMask | 2023-01-17T22:58:52.000Z | [
"task_categories:image-classification",
"task_ids:multi-class-image-classification",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:en",
"license:mit",
"region:us"
] | poolrf2001 | MaskFace is a dataset of images of people with and without face masks. It consists of 3 classes: one class where the person is wearing the mask,
another class where the person is not wearing the mask, and one class where the person is wearing the mask incorrectly. | @ONLINE {masksdata,
author="Pool_rf",
title="Mask face dataset",
month="January",
year="2023",
url="https://github.com/poolrf2001/maskFace"
} | 0 | 4 | 2023-01-17T16:37:30 | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- en
license:
- mit
multilinguality:
- monolingual
pretty_name: FaceMask
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- image-classification
task_ids:
- multi-class-image-classification
dataset_info:
features:
- name: image
dtype: image
- name: labels
dtype:
class_label:
names:
0: mask_weared_incorrect
1: with_mask
2: without_mask
splits:
- name: train
num_bytes: 38806014
num_examples: 1500
- name: validation
num_bytes: 4758962
num_examples: 180
- name: test
num_bytes: 4693735
num_examples: 180
download_size: 48258711
dataset_size: 49140913
---
# Dataset Card for FaceMask
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** (https://huggingface.co/datasets/poolrf2001/FaceMask)
- **Repository:** (https://huggingface.co/datasets/poolrf2001/FaceMask)
- **Paper:** N/A
- **Leaderboard:** N/A
- **Point of Contact:** N/A
### Dataset Summary
FaceMask is an image dataset of people wearing a face mask correctly, wearing it incorrectly, or not wearing one at all.
### Supported Tasks and Leaderboards
- `image-classification`: Based on an image of a person, the goal of this task is to predict whether a mask is worn correctly, worn incorrectly, or not worn at all.
### Languages
English
## Dataset Structure
### Data Instances
A sample from the training set is provided below:
```
{
'image': <PIL.PngImagePlugin.PngImageFile image mode=RGB size=128x128 at 0x16BAA72A4A8>,
'labels': 1
}
```
### Data Fields
The data instances have the following fields:
- `image`: A `PIL.Image.Image` object containing the image. Note that when accessing the image column: `dataset[0]["image"]` the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. Thus it is important to first query the sample index before the `"image"` column, *i.e.* `dataset[0]["image"]` should **always** be preferred over `dataset["image"][0]`.
- `labels`: an `int` classification label.
Class Label Mappings:
```json
{
"mask_weared_incorrect": 0,
"with_mask": 1,
"without_mask": 2,
}
```
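A small sketch of translating between the integer labels and their names, assuming the mapping above (`label2id`/`id2label` are our variable names, not part of the card):

```python
# Class label mapping as stated above; the reverse lookup is derived from it.
label2id = {"mask_weared_incorrect": 0, "with_mask": 1, "without_mask": 2}
id2label = {i: name for name, i in label2id.items()}

# The sample instance above has labels == 1, i.e. "with_mask".
print(id2label[1])  # → with_mask
```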
### Data Splits
| |train|validation|test|
|-------------|----:|---------:|---:|
|# of examples|1500 |180 |180 |
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
```
@ONLINE {facemaskdata,
author="Pool",
title="FaceMask dataset",
month="January",
year="2023",
url="https://github.com/poolrf2001/maskFace"
}
```
### Contributions
| 4,375 | [
[
-0.035675048828125,
-0.0438232421875,
0.0008463859558105469,
0.022705078125,
-0.01256561279296875,
0.00601959228515625,
-0.005405426025390625,
-0.035736083984375,
0.04180908203125,
0.038177490234375,
-0.050262451171875,
-0.06976318359375,
-0.06011962890625,
... |
DFKI-SLT/knowledge_net | 2023-01-19T09:16:32.000Z | [
"task_categories:text-classification",
"task_ids:multi-class-classification",
"task_ids:entity-linking-classification",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"language:en",
"knowledgenet",
"region:us"
] | DFKI-SLT | KnowledgeNet is a benchmark dataset for the task of automatically populating a knowledge base (Wikidata) with facts
expressed in natural language text on the web. KnowledgeNet provides text exhaustively annotated with facts, thus
enabling the holistic end-to-end evaluation of knowledge base population systems as a whole, unlike previous benchmarks
that are more suitable for the evaluation of individual subcomponents (e.g., entity linking, relation extraction).
For instance, the dataset contains text expressing the fact (Gennaro Basile; RESIDENCE; Moravia), in the passage:
"Gennaro Basile was an Italian painter, born in Naples but active in the German-speaking countries. He settled at Brünn,
in Moravia, and lived about 1756..."
For a description of the dataset and baseline systems, please refer to their
[EMNLP paper](https://github.com/diffbot/knowledge-net/blob/master/knowledgenet-emnlp-cameraready.pdf).
Note: This dataset reader currently only supports the `train` split and does not contain negative examples | @inproceedings{mesquita-etal-2019-knowledgenet,
title = "{K}nowledge{N}et: A Benchmark Dataset for Knowledge Base Population",
author = "Mesquita, Filipe and
Cannaviccio, Matteo and
Schmidek, Jordan and
Mirza, Paramita and
Barbosa, Denilson",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
month = nov,
year = "2019",
address = "Hong Kong, China",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/D19-1069",
doi = "10.18653/v1/D19-1069",
pages = "749--758",} | 2 | 4 | 2023-01-19T09:15:44 | ---
annotations_creators:
- expert-generated
language:
- en
language_creators:
- found
license: []
multilinguality:
- monolingual
pretty_name: KnowledgeNet is a dataset for automatically populating a knowledge base
size_categories:
- 10K<n<100K
source_datasets: []
tags:
- knowledgenet
task_categories:
- text-classification
task_ids:
- multi-class-classification
- entity-linking-classification
dataset_info:
- config_name: knet
features:
- name: fold
dtype: int32
- name: documentId
dtype: string
- name: source
dtype: string
- name: documentText
dtype: string
- name: passages
sequence:
- name: passageId
dtype: string
- name: passageStart
dtype: int32
- name: passageEnd
dtype: int32
- name: passageText
dtype: string
- name: exhaustivelyAnnotatedProperties
sequence:
- name: propertyId
dtype: string
- name: propertyName
dtype: string
- name: propertyDescription
dtype: string
- name: facts
sequence:
- name: factId
dtype: string
- name: propertyId
dtype: string
- name: humanReadable
dtype: string
- name: annotatedPassage
dtype: string
- name: subjectStart
dtype: int32
- name: subjectEnd
dtype: int32
- name: subjectText
dtype: string
- name: subjectUri
dtype: string
- name: objectStart
dtype: int32
- name: objectEnd
dtype: int32
- name: objectText
dtype: string
- name: objectUri
dtype: string
splits:
- name: train
num_bytes: 10161415
num_examples: 3977
download_size: 14119313
dataset_size: 10161415
- config_name: knet_tokenized
features:
- name: doc_id
dtype: string
- name: passage_id
dtype: string
- name: fact_id
dtype: string
- name: tokens
sequence: string
- name: subj_start
dtype: int32
- name: subj_end
dtype: int32
- name: subj_type
dtype:
class_label:
names:
'0': O
'1': PER
'2': ORG
'3': LOC
'4': DATE
- name: subj_uri
dtype: string
- name: obj_start
dtype: int32
- name: obj_end
dtype: int32
- name: obj_type
dtype:
class_label:
names:
'0': O
'1': PER
'2': ORG
'3': LOC
'4': DATE
- name: obj_uri
dtype: string
- name: relation
dtype:
class_label:
names:
'0': NO_RELATION
'1': DATE_OF_BIRTH
'2': DATE_OF_DEATH
'3': PLACE_OF_RESIDENCE
'4': PLACE_OF_BIRTH
'5': NATIONALITY
'6': EMPLOYEE_OR_MEMBER_OF
'7': EDUCATED_AT
'8': POLITICAL_AFFILIATION
'9': CHILD_OF
'10': SPOUSE
'11': DATE_FOUNDED
'12': HEADQUARTERS
'13': SUBSIDIARY_OF
'14': FOUNDED_BY
'15': CEO
splits:
- name: train
num_bytes: 4511963
num_examples: 10895
download_size: 14119313
dataset_size: 4511963
- config_name: knet_re
features:
- name: documentId
dtype: string
- name: passageId
dtype: string
- name: factId
dtype: string
- name: passageText
dtype: string
- name: humanReadable
dtype: string
- name: annotatedPassage
dtype: string
- name: subjectStart
dtype: int32
- name: subjectEnd
dtype: int32
- name: subjectText
dtype: string
- name: subjectType
dtype:
class_label:
names:
'0': O
'1': PER
'2': ORG
'3': LOC
'4': DATE
- name: subjectUri
dtype: string
- name: objectStart
dtype: int32
- name: objectEnd
dtype: int32
- name: objectText
dtype: string
- name: objectType
dtype:
class_label:
names:
'0': O
'1': PER
'2': ORG
'3': LOC
'4': DATE
- name: objectUri
dtype: string
- name: relation
dtype:
class_label:
names:
'0': NO_RELATION
'1': DATE_OF_BIRTH
'2': DATE_OF_DEATH
'3': PLACE_OF_RESIDENCE
'4': PLACE_OF_BIRTH
'5': NATIONALITY
'6': EMPLOYEE_OR_MEMBER_OF
'7': EDUCATED_AT
'8': POLITICAL_AFFILIATION
'9': CHILD_OF
'10': SPOUSE
'11': DATE_FOUNDED
'12': HEADQUARTERS
'13': SUBSIDIARY_OF
'14': FOUNDED_BY
'15': CEO
splits:
- name: train
num_bytes: 6098219
num_examples: 10895
download_size: 14119313
dataset_size: 6098219
---
# Dataset Card for "KnowledgeNet"
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Repository:** [knowledge-net](https://github.com/diffbot/knowledge-net)
- **Paper:** [KnowledgeNet: A Benchmark Dataset for Knowledge Base Population](https://aclanthology.org/D19-1069/)
- **Size of downloaded dataset files:** 12.59 MB
- **Size of the generated dataset:** 6.1 MB
### Dataset Summary
KnowledgeNet is a benchmark dataset for the task of automatically populating a knowledge base (Wikidata) with facts
expressed in natural language text on the web. KnowledgeNet provides text exhaustively annotated with facts, thus
enabling the holistic end-to-end evaluation of knowledge base population systems as a whole, unlike previous benchmarks
that are more suitable for the evaluation of individual subcomponents (e.g., entity linking, relation extraction).
For instance, the dataset contains text expressing the fact (Gennaro Basile; RESIDENCE; Moravia), in the passage:
"Gennaro Basile was an Italian painter, born in Naples but active in the German-speaking countries. He settled at Brünn,
in Moravia, and lived about 1756..."
For a description of the dataset and baseline systems, please refer to their
[EMNLP paper](https://github.com/diffbot/knowledge-net/blob/master/knowledgenet-emnlp-cameraready.pdf).
Note: This dataset reader currently only supports the `train` split and does not contain negative examples.
In addition to the original format, this repository also provides two versions (`knet_re`, `knet_tokenized`) that are
easier to use for simple relation extraction. You can load them with
`datasets.load_dataset("DFKI-SLT/knowledge_net", name="<config>")`.
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
The language in the dataset is English.
## Dataset Structure
### Data Instances
#### knet
- **Size of downloaded dataset files:** 12.59 MB
- **Size of the generated dataset:** 10.16 MB
An example of 'train' looks as follows:
```json
{
"fold": 2,
"documentId": "8313",
"source": "DBpedia Abstract",
"documentText": "Gennaro Basile\n\nGennaro Basile was an Italian painter, born in Naples but active in the German-speaking countries. He settled at Brünn, in Moravia, and lived about 1756. His best picture is the altar-piece in the chapel of the chateau at Seeberg, in Salzburg. Most of his works remained in Moravia.",
"passages": [
{
"passageId": "8313:16:114",
"passageStart": 16,
"passageEnd": 114,
"passageText": "Gennaro Basile was an Italian painter, born in Naples but active in the German-speaking countries.",
"exhaustivelyAnnotatedProperties": [
{
"propertyId": "12",
"propertyName": "PLACE_OF_BIRTH",
"propertyDescription": "Describes the relationship between a person and the location where she/he was born."
}
],
"facts": [
{
"factId": "8313:16:30:63:69:12",
"propertyId": "12",
"humanReadable": "<Gennaro Basile> <PLACE_OF_BIRTH> <Naples>",
"annotatedPassage": "<Gennaro Basile> was an Italian painter, born in <Naples> but active in the German-speaking countries.",
"subjectStart": 16,
"subjectEnd": 30,
"subjectText": "Gennaro Basile",
"subjectUri": "http://www.wikidata.org/entity/Q19517888",
"objectStart": 63,
"objectEnd": 69,
"objectText": "Naples",
"objectUri": "http://www.wikidata.org/entity/Q2634"
}
]
},
{
"passageId": "8313:115:169",
"passageStart": 115,
"passageEnd": 169,
"passageText": "He settled at Brünn, in Moravia, and lived about 1756.",
"exhaustivelyAnnotatedProperties": [
{
"propertyId": "11",
"propertyName": "PLACE_OF_RESIDENCE",
"propertyDescription": "Describes the relationship between a person and the location where she/he lives/lived."
},
{
"propertyId": "12",
"propertyName": "PLACE_OF_BIRTH",
"propertyDescription": "Describes the relationship between a person and the location where she/he was born."
}
],
"facts": [
{
"factId": "8313:115:117:129:134:11",
"propertyId": "11",
"humanReadable": "<He> <PLACE_OF_RESIDENCE> <Brünn>",
"annotatedPassage": "<He> settled at <Brünn>, in Moravia, and lived about 1756.",
"subjectStart": 115,
"subjectEnd": 117,
"subjectText": "He",
"subjectUri": "http://www.wikidata.org/entity/Q19517888",
"objectStart": 129,
"objectEnd": 134,
"objectText": "Brünn",
"objectUri": "http://www.wikidata.org/entity/Q14960"
},
{
"factId": "8313:115:117:139:146:11",
"propertyId": "11",
"humanReadable": "<He> <PLACE_OF_RESIDENCE> <Moravia>",
"annotatedPassage": "<He> settled at Brünn, in <Moravia>, and lived about 1756.",
"subjectStart": 115,
"subjectEnd": 117,
"subjectText": "He",
"subjectUri": "http://www.wikidata.org/entity/Q19517888",
"objectStart": 139,
"objectEnd": 146,
"objectText": "Moravia",
"objectUri": "http://www.wikidata.org/entity/Q43266"
}
]
}
]
}
```
#### knet_re
- **Size of downloaded dataset files:** 12.59 MB
- **Size of the generated dataset:** 6.1 MB
An example of 'train' looks as follows:
```json
{
"documentId": "7",
"passageId": "7:23:206",
"factId": "7:23:44:138:160:1",
"passageText": "Tata Chemicals Europe (formerly Brunner Mond (UK) Limited) is a UK-based chemicals company that is a subsidiary of Tata Chemicals Limited, itself a part of the India-based Tata Group.",
"humanReadable": "<Tata Chemicals Europe> <SUBSIDIARY_OF> <Tata Chemicals Limited>",
"annotatedPassage": "<Tata Chemicals Europe> (formerly Brunner Mond (UK) Limited) is a UK-based chemicals company that is a subsidiary of <Tata Chemicals Limited>, itself a part of the India-based Tata Group.",
"subjectStart": 0,
"subjectEnd": 21,
"subjectText": "Tata Chemicals Europe",
"subjectType": 2,
"subjectUri": "",
"objectStart": 115,
"objectEnd": 137,
"objectText": "Tata Chemicals Limited",
"objectType": 2,
"objectUri": "http://www.wikidata.org/entity/Q2331365",
"relation": 13
}
```
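To illustrate how the character offsets in `knet_re` line up with `passageText` (using the sample instance above; the slicing itself is just a sketch):

```python
# Recover the subject and object mention strings from the character
# offsets of the knet_re sample instance shown above.
sample = {
    "passageText": ("Tata Chemicals Europe (formerly Brunner Mond (UK) Limited) is a "
                    "UK-based chemicals company that is a subsidiary of Tata Chemicals "
                    "Limited, itself a part of the India-based Tata Group."),
    "subjectStart": 0, "subjectEnd": 21,
    "objectStart": 115, "objectEnd": 137,
}

subject = sample["passageText"][sample["subjectStart"]:sample["subjectEnd"]]
obj = sample["passageText"][sample["objectStart"]:sample["objectEnd"]]
print(subject)  # Tata Chemicals Europe
print(obj)      # Tata Chemicals Limited
```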
#### knet_tokenized
- **Size of downloaded dataset files:** 12.59 MB
- **Size of the generated dataset:** 4.5 MB
An example of 'train' looks as follows:
```json
{
"doc_id": "7",
"passage_id": "7:23:206",
"fact_id": "7:162:168:183:205:1",
"tokens": ["Tata", "Chemicals", "Europe", "(", "formerly", "Brunner", "Mond", "(", "UK", ")", "Limited", ")", "is", "a", "UK", "-", "based", "chemicals", "company", "that", "is", "a", "subsidiary", "of", "Tata", "Chemicals", "Limited", ",", "itself", "a", "part", "of", "the", "India", "-", "based", "Tata", "Group", "."],
"subj_start": 28,
"subj_end": 29,
"subj_type": 2,
"subj_uri": "http://www.wikidata.org/entity/Q2331365",
"obj_start": 33,
"obj_end": 38,
"obj_type": 2,
"obj_uri": "http://www.wikidata.org/entity/Q331715",
"relation": 13
}
```
### Data Fields
#### knet
- `fold`: the fold, an `int` feature.
- `documentId`: the document id, a `string` feature.
- `source`: the source, a `string` feature.
- `documentText`: the document text, a `string` feature.
- `passages`: the list of passages, a `list` of `dict`.
- `passageId`: the passage id, a `string` feature.
  - `passageStart`: the passage start, an `int` feature.
  - `passageEnd`: the passage end, an `int` feature.
- `passageText`: the passage text, a `string` feature.
- `exhaustivelyAnnotatedProperties`: the list of exhaustively annotated properties, a `list` of `dict`.
- `propertyId`: the property id, a `string` feature.
- `propertyName`: the property name, a `string` feature.
- `propertyDescription`: the property description, a `string` feature.
- `facts`: the list of facts, a `list` of `dict`.
- `factId`: the fact id, a `string` feature.
- `propertyId`: the property id, a `string` feature.
- `humanReadable`: the human readable annotation, a `string` feature.
- `annotatedPassage`: the annotated passage, a `string` feature.
    - `subjectStart`: the subject start, an `int` feature.
    - `subjectEnd`: the subject end, an `int` feature.
- `subjectText`: the subject text, a `string` feature.
- `subjectUri`: the subject uri, a `string` feature.
    - `objectStart`: the object start, an `int` feature.
    - `objectEnd`: the object end, an `int` feature.
- `objectText`: the object text, a `string` feature.
- `objectUri`: the object uri, a `string` feature.
#### knet_re
- `documentId`: the document id, a `string` feature.
- `passageId`: the passage id, a `string` feature.
- `passageText`: the passage text, a `string` feature.
- `factId`: the fact id, a `string` feature.
- `humanReadable`: the human-readable annotation, a `string` feature.
- `annotatedPassage`: the annotated passage, a `string` feature.
- `subjectStart`: the index of the start character of the relation subject mention, an `int` feature.
- `subjectEnd`: the index of the end character of the relation subject mention, exclusive, an `int` feature.
- `subjectText`: the text of the subject mention, a `string` feature.
- `subjectType`: the NER type of the subject mention, a `string` classification label.
```json
{"O": 0, "PER": 1, "ORG": 2, "LOC": 3, "DATE": 4}
```
- `subjectUri`: the Wikidata URI of the subject mention, a `string` feature.
- `objectStart`: the index of the start character of the relation object mention, an `int` feature.
- `objectEnd`: the index of the end character of the relation object mention, exclusive, an `int` feature.
- `objectText`: the text of the object mention, a `string` feature.
- `objectType`: the NER type of the object mention, a `string` classification label.
```json
{"O": 0, "PER": 1, "ORG": 2, "LOC": 3, "DATE": 4}
```
- `objectUri`: the Wikidata URI of the object mention, a `string` feature.
- `relation`: the relation label of this instance, a `string` classification label.
```json
{"NO_RELATION": 0, "DATE_OF_BIRTH": 1, "DATE_OF_DEATH": 2, "PLACE_OF_RESIDENCE": 3, "PLACE_OF_BIRTH": 4, "NATIONALITY": 5, "EMPLOYEE_OR_MEMBER_OF": 6, "EDUCATED_AT": 7, "POLITICAL_AFFILIATION": 8, "CHILD_OF": 9, "SPOUSE": 10, "DATE_FOUNDED": 11, "HEADQUARTERS": 12, "SUBSIDIARY_OF": 13, "FOUNDED_BY": 14, "CEO": 15}
```
#### knet_tokenized
- `doc_id`: the document id, a `string` feature.
- `passage_id`: the passage id, a `string` feature.
- `fact_id`: the fact id, a `string` feature.
- `tokens`: the list of tokens of this passage, obtained with spaCy, a `list` of `string` features.
- `subj_start`: the index of the start token of the relation subject mention, an `int` feature.
- `subj_end`: the index of the end token of the relation subject mention, exclusive, an `int` feature.
- `subj_type`: the NER type of the subject mention, a `string` classification label.
```json
{"O": 0, "PER": 1, "ORG": 2, "LOC": 3, "DATE": 4}
```
- `subj_uri`: the Wikidata URI of the subject mention, a `string` feature.
- `obj_start`: the index of the start token of the relation object mention, an `int` feature.
- `obj_end`: the index of the end token of the relation object mention, exclusive, an `int` feature.
- `obj_type`: the NER type of the object mention, a `string` classification label.
```json
{"O": 0, "PER": 1, "ORG": 2, "LOC": 3, "DATE": 4}
```
- `obj_uri`: the Wikidata URI of the object mention, a `string` feature.
- `relation`: the relation label of this instance, a `string` classification label.
```json
{"NO_RELATION": 0, "DATE_OF_BIRTH": 1, "DATE_OF_DEATH": 2, "PLACE_OF_RESIDENCE": 3, "PLACE_OF_BIRTH": 4, "NATIONALITY": 5, "EMPLOYEE_OR_MEMBER_OF": 6, "EDUCATED_AT": 7, "POLITICAL_AFFILIATION": 8, "CHILD_OF": 9, "SPOUSE": 10, "DATE_FOUNDED": 11, "HEADQUARTERS": 12, "SUBSIDIARY_OF": 13, "FOUNDED_BY": 14, "CEO": 15}
```
### Data Splits
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@inproceedings{mesquita-etal-2019-knowledgenet,
title = "{K}nowledge{N}et: A Benchmark Dataset for Knowledge Base Population",
author = "Mesquita, Filipe and
Cannaviccio, Matteo and
Schmidek, Jordan and
Mirza, Paramita and
Barbosa, Denilson",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
month = nov,
year = "2019",
address = "Hong Kong, China",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/D19-1069",
doi = "10.18653/v1/D19-1069",
pages = "749--758",}
```
### Contributions
Thanks to [@phucdev](https://github.com/phucdev) for adding this dataset. | 20,835 | [
[
-0.04376220703125,
-0.0240325927734375,
0.017547607421875,
0.0020236968994140625,
-0.01378631591796875,
-0.01739501953125,
-0.0189666748046875,
-0.0297088623046875,
0.048248291015625,
0.0306396484375,
-0.057281494140625,
-0.06500244140625,
-0.0253448486328125,
... |
clip-benchmark/wds_imagenet1k | 2023-01-20T00:57:51.000Z | [
"region:us"
] | clip-benchmark | null | null | 0 | 4 | 2023-01-20T00:34:31 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
dangrebenkin/voxforge-ru-dataset | 2023-02-06T19:23:29.000Z | [
"license:apache-2.0",
"region:us"
] | dangrebenkin | null | null | 1 | 4 | 2023-01-21T14:34:23 | ---
dataset_info:
features:
- name: transcription
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
splits:
- name: train
num_bytes: 1947609729.4653895
num_examples: 6169
- name: test
num_bytes: 864278563.4406104
num_examples: 2645
download_size: 2705520657
dataset_size: 2811888292.906
license: apache-2.0
---
## Dataset audio info
- 16000 Hz 16 bit
- wav
- mono
- Russian speech
## Dataset instance structure
```
{'audio': {'path': '/path/to/wav.wav',
           'array': array([wav numpy array], dtype=float32),
           'sampling_rate': 16000},
 'transcription': 'транскрипция'}
```
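A minimal sketch of working with this structure — computing a clip's duration from the decoded array and the 16 kHz sampling rate (the array below is a synthetic stand-in, not real VoxForge audio):

```python
# Duration in seconds = number of samples / sampling rate.
instance = {
    "audio": {
        "array": [0.0] * 32000,   # stand-in for the decoded waveform
        "sampling_rate": 16000,
    },
    "transcription": "транскрипция",
}

duration_s = len(instance["audio"]["array"]) / instance["audio"]["sampling_rate"]
print(duration_s)  # 2.0
```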
## Citation
```
@Misc{Voxforge.org,
  author = {Voxforge.org},
  title = {Free Speech... Recognition (Linux, Windows and Mac) - voxforge.org},
  howpublished = {\url{http://www.voxforge.org/}},
  note = {accessed 01/21/2023}
}
```
## Source
http://www.voxforge.org/ru/downloads | 904 | [
[
-0.021331787109375,
-0.044036865234375,
0.004306793212890625,
0.007175445556640625,
-0.044189453125,
0.00824737548828125,
-0.0214996337890625,
-0.009368896484375,
0.0196685791015625,
0.00359344482421875,
-0.044952392578125,
-0.07147216796875,
-0.0225677490234375... |
relbert/scientific_and_creative_analogy | 2023-01-22T16:49:01.000Z | [
"multilinguality:monolingual",
"size_categories:1<n<1K",
"language:en",
"license:other",
"arxiv:2211.15268",
"region:us"
] | relbert | Dataset for relation mapping task (see [paper](https://arxiv.org/abs/2211.15268)). | @article{czinczoll2022scientific,
title={Scientific and Creative Analogies in Pretrained Language Models},
author={Czinczoll, Tamara and Yannakoudakis, Helen and Mishra, Pushkar and Shutova, Ekaterina},
journal={arXiv preprint arXiv:2211.15268},
year={2022}
} | 1 | 4 | 2023-01-22T16:29:04 | ---
language:
- en
license:
- other
multilinguality:
- monolingual
size_categories:
- 1<n<1K
pretty_name: Relation Mapping
---
# Dataset Card for "relbert/scientific_and_creative_analogy"
## Dataset Description
- **Repository:** [https://github.com/taczin/SCAN_analogies](https://github.com/taczin/SCAN_analogies)
- **Paper:** [https://arxiv.org/abs/2211.15268](https://arxiv.org/abs/2211.15268)
- **Dataset:** Relation Mapping
### Dataset Summary
A dataset for the relation mapping task, which is the task of choosing the optimal combination of word pairs (see more detail in the [paper](https://www.jair.org/index.php/jair/article/view/10583)).
A relation mapping `M` is a bijective map between two sets of terms (`A` and `B`):
```
[set `A`]: ("solar system", "sun", "planet", "mass", "attracts", "revolves", "gravity")
[set `B`]: ("atom", "nucleus", "electron", "charge", "attracts", "revolves", "electromagnetism")
[Relation Mapping `M`]
* "solar system" -> "atom"
* "sun" -> "nucleus"
* "planet" -> "electron"
* "mass" -> "charge"
* "attracts" -> "attracts"
* "revolves" -> "revolves"
* "gravity" -> "electromagnetism"
```
***[Relation Mapping Problem](https://www.jair.org/index.php/jair/article/view/10583)*** is the task to identify the mapping `M` given the sets of terms `A` and `B`.
## Dataset Structure
### Data Instances
An example looks as follows.
```
{
"id": "0",
"reference": ["buying an item", "accepting a belief"],
"source": ["buying an item", "buyer", "merchandise", "buying", "selling", "returning", "valuable", "worthless"],
"target": ["accepting a belief", "believer", "belief", "accepting", "advocating", "rejecting", "true", "false"],
"target_random": ["rejecting", "true", "false", "accepting a belief", "believer", "advocating", "belief", "accepting"],
"type": "metaphor"
}
```
- `source`: A list of terms, which is the source of the relation mapping from.
- `target_random`: A list of terms, where we want to find a mapping from `source` to.
- `target`: A correctly ordered `target_random` that aligns with the `source`.
Given `source` and `target_random`, the task is to predict the correct order of `target_random` so that it matches `target`.
Each set contains 7 terms on average, so the total number of possible orderings is 7! = 5040.
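The search space can be sketched with the solar-system example from the summary above (brute-force enumeration only; actual systems score candidate mappings with a model):

```python
from itertools import permutations

# With 7 target terms there are 7! = 5040 candidate orderings, each of
# which induces one bijective mapping from source terms to target terms.
source = ["solar system", "sun", "planet", "mass",
          "attracts", "revolves", "gravity"]
target = ["atom", "nucleus", "electron", "charge",
          "attracts", "revolves", "electromagnetism"]

candidates = list(permutations(target))
print(len(candidates))  # 5040

# The correctly ordered candidate yields the gold mapping:
mapping = dict(zip(source, target))
print(mapping["sun"])  # nucleus
```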
### Data Splits
| name |test|
|---------|----:|
|relation_mapping| 45 |
### Citation Information
```
@article{czinczoll2022scientific,
title={Scientific and Creative Analogies in Pretrained Language Models},
author={Czinczoll, Tamara and Yannakoudakis, Helen and Mishra, Pushkar and Shutova, Ekaterina},
journal={arXiv preprint arXiv:2211.15268},
year={2022}
}
``` | 2,699 | [
[
-0.033050537109375,
-0.0430908203125,
0.041229248046875,
0.0209808349609375,
-0.028594970703125,
-0.00794219970703125,
-0.014739990234375,
-0.006744384765625,
0.0242919921875,
0.031097412109375,
-0.0511474609375,
-0.054779052734375,
-0.051605224609375,
0.014... |
hfaus/CelebA_bbox_and_facepoints | 2023-01-28T09:34:39.000Z | [
"size_categories:n<1K",
"region:us"
] | hfaus | CelebFaces Attributes Dataset (CelebA) is a large-scale face attributes dataset with more than 200K celebrity images,
each with 40 attribute annotations. The images in this dataset cover large pose variations and background clutter.
CelebA has large diversities, large quantities, and rich annotations, including 10,177 identities, 202,599 face images,
and 5 landmark locations and 40 binary attribute annotations per image. | @inproceedings{liu2015faceattributes,
title = {Deep Learning Face Attributes in the Wild},
author = {Liu, Ziwei and Luo, Ping and Wang, Xiaogang and Tang, Xiaoou},
booktitle = {Proceedings of International Conference on Computer Vision (ICCV)},
month = {December},
year = {2015}
} | 0 | 4 | 2023-01-23T11:04:45 | ---
size_categories:
- n<1K
---
# CelebA Dataset
CelebA Dataset is a large-scale face attributes dataset with more than 200K celebrity images, each with 40 attribute annotations. The images in this dataset cover large pose variations and background clutter. CelebA has large diversities, large quantities, and rich annotations, including 10,177 identities, 202,599 face images, 5 landmark locations per image, and 40 binary attribute annotations per image.
## Usage
It is composed of 3 sets of images:
* Training
* Validation
* Test
## Example
The dataset returns each item as a dictionary with the following fields:
```
{
"image": image,
"bbox": [x1, y1, w, h],
"facial_landmarks": {
"lefteye": [x1, y1],
"righteye": [x2, y2],
"nose": [x3, y3],
"leftmouth": [x4, y4],
"rightmouth": [x5, y5]
}
}
```
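As a sketch of how the `[x1, y1, w, h]` bounding box converts to corner coordinates (the sample numbers below are illustrative, not taken from the dataset):

```python
# Convert [x1, y1, w, h] (top-left corner plus width/height) into
# (left, top, right, bottom) corners, e.g. for cropping or drawing.
def bbox_to_corners(bbox):
    x1, y1, w, h = bbox
    return (x1, y1, x1 + w, y1 + h)

print(bbox_to_corners([95, 71, 226, 313]))  # (95, 71, 321, 384)
```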
## License
CelebA Dataset is licensed under Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0). | 1,008 | [
[
-0.037078857421875,
-0.03125,
0.00022041797637939453,
0.0142059326171875,
0.01245880126953125,
0.0016651153564453125,
-0.0068511962890625,
-0.0384521484375,
0.0276031494140625,
0.05023193359375,
-0.04376220703125,
-0.042327880859375,
-0.0535888671875,
0.0098... |
awalesushil/DBLP-QuAD | 2023-02-15T17:32:06.000Z | [
"task_categories:question-answering",
"annotations_creators:expert-generated",
"language_creators:machine-generated",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:en",
"license:cc-by-4.0",
"knowledge-base-qa",
"region:us"
] | awalesushil | DBLP-QuAD is a scholarly knowledge graph question answering dataset with 10,000 question - SPARQL query pairs targeting the DBLP knowledge graph. The dataset is split into 7,000 training, 1,000 validation and 2,000 test questions. | @article{DBLP-QuAD,
title={DBLP-QuAD: A Question Answering Dataset over the DBLP Scholarly Knowledge Graph},
author={Banerjee, Debayan and Awale, Sushil and Usbeck, Ricardo and Biemann, Chris},
year={2023} | 2 | 4 | 2023-01-24T15:04:12 | ---
annotations_creators:
- expert-generated
language:
- en
language_creators:
- machine-generated
license:
- cc-by-4.0
multilinguality:
- monolingual
pretty_name: 'DBLP-QuAD: A Question Answering Dataset over the DBLP Scholarly Knowledge Graph'
size_categories:
- 1K<n<10K
source_datasets:
- original
tags:
- knowledge-base-qa
task_categories:
- question-answering
task_ids: []
---
# Dataset Card for DBLP-QuAD
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [DBLP-QuAD Homepage]()
- **Repository:** [DBLP-QuAD Repository](https://github.com/awalesushil/DBLP-QuAD)
- **Paper:** DBLP-QuAD: A Question Answering Dataset over the DBLP Scholarly Knowledge Graph
- **Point of Contact:** [Sushil Awale](mailto:sushil.awale@web.de)
### Dataset Summary
DBLP-QuAD is a scholarly knowledge graph question answering dataset with 10,000 question - SPARQL query pairs targeting the DBLP knowledge graph. The dataset is split into 7,000 training, 1,000 validation and 2,000 test questions.
## Dataset Structure
### Data Instances
An example of a question is given below:
```
{
"id": "Q0577",
"query_type": "MULTI_FACT",
"question": {
"string": "What are the primary affiliations of the authors of the paper 'Graphical Partitions and Graphical Relations'?"
},
"paraphrased_question": {
"string": "List the primary affiliations of the authors of 'Graphical Partitions and Graphical Relations'."
},
"query": {
"sparql": "SELECT DISTINCT ?answer WHERE { <https://dblp.org/rec/journals/fuin/ShaheenS19> <https://dblp.org/rdf/schema#authoredBy> ?x . ?x <https://dblp.org/rdf/schema#primaryAffiliation> ?answer }"
},
"template_id": "TP11",
"entities": [
"<https://dblp.org/rec/journals/fuin/ShaheenS19>"
],
"relations": [
"<https://dblp.org/rdf/schema#authoredBy>",
"<https://dblp.org/rdf/schema#primaryAffiliation>"
],
"temporal": false,
"held_out": true
}
```
### Data Fields
- `id`: the id of the question
- `question`: a string containing the question
- `paraphrased_question`: a paraphrased version of the question
- `query`: a SPARQL query that answers the question
- `query_type`: the type of the query
- `template_id`: the id of the template used to generate the query (shown as `"template_id"` in the example above)
- `entities`: a list of entities in the question
- `relations`: a list of relations in the question
- `temporal`: a boolean indicating whether the question contains a temporal expression
- `held_out`: a boolean indicating whether the question is held out from the training set
### Data Splits
The dataset is split into 7,000 training, 1,000 validation and 2,000 test questions.
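The fields above can be consumed directly once a record is loaded. A minimal sketch, using a hand-written record that mirrors the instance shown earlier (this record is abridged for illustration and not pulled from the dataset files; real data would normally come through `datasets.load_dataset`):

```python
import json

# Hand-written record mirroring the instance shown above (abridged;
# an illustration, not an actual row from the dataset files).
record = json.loads("""
{
  "id": "Q0577",
  "query_type": "MULTI_FACT",
  "question": {"string": "What are the primary affiliations of the authors?"},
  "query": {"sparql": "SELECT DISTINCT ?answer WHERE { ... }"},
  "template_id": "TP11",
  "entities": ["<https://dblp.org/rec/journals/fuin/ShaheenS19>"],
  "relations": ["<https://dblp.org/rdf/schema#authoredBy>",
                "<https://dblp.org/rdf/schema#primaryAffiliation>"],
  "temporal": false,
  "held_out": true
}
""")

def split_buckets(records):
    """Group question ids by the temporal / held_out boolean flags."""
    buckets = {"temporal": [], "held_out": [], "regular": []}
    for rec in records:
        if rec["temporal"]:
            buckets["temporal"].append(rec["id"])
        elif rec["held_out"]:
            buckets["held_out"].append(rec["id"])
        else:
            buckets["regular"].append(rec["id"])
    return buckets

buckets = split_buckets([record])
```

This makes it easy to, for example, evaluate temporal and held-out questions separately.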
## Additional Information
### Licensing Information
DBLP-QuAD is licensed under the [Creative Commons Attribution 4.0 International License (CC BY 4.0)](https://creativecommons.org/licenses/by/4.0/).
### Citation Information
In review.
### Contributions
Thanks to [@awalesushil](https://github.com/awalesushil) for adding this dataset.
| 4,021 | [embedding vector truncated] |

nglaura/pubmedlay-summarization | 2023-04-11T10:10:19.000Z | ["task_categories:summarization", "language:en", "license:apache-2.0", "region:us"] | nglaura | null | null | 0 | 4 | 2023-01-24T15:46:55 | ---
license: apache-2.0
task_categories:
- summarization
language:
- en
pretty_name: PubMed-Lay
---
# LoRaLay: A Multilingual and Multimodal Dataset for Long Range and Layout-Aware Summarization
A collaboration between [reciTAL](https://recital.ai/en/), [MLIA](https://mlia.lip6.fr/) (ISIR, Sorbonne Université), [Meta AI](https://ai.facebook.com/), and [Università di Trento](https://www.unitn.it/)
## PubMed-Lay dataset for summarization
PubMed-Lay is an enhanced version of the PubMed summarization dataset, for which layout information is provided.
### Data Fields
- `article_id`: article id
- `article_words`: sequence of words constituting the body of the article
- `article_bboxes`: sequence of corresponding word bounding boxes
- `norm_article_bboxes`: sequence of corresponding normalized word bounding boxes
- `abstract`: a string containing the abstract of the article
- `article_pdf_url`: URL of the article's PDF
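The relationship between `article_bboxes` and `norm_article_bboxes` can be sketched as follows. The 0-1000 scale and the function name are assumptions for illustration (LayoutLM-style normalization); the dataset's actual normalization may differ:

```python
def normalize_bbox(bbox, page_width, page_height):
    """Scale an (x0, y0, x1, y1) word box to a resolution-independent
    0-1000 range, as LayoutLM-style layout models commonly expect.
    NOTE: assumed convention, not necessarily LoRaLay's exact one."""
    x0, y0, x1, y1 = bbox
    return [
        int(1000 * x0 / page_width),
        int(1000 * y0 / page_height),
        int(1000 * x1 / page_width),
        int(1000 * y1 / page_height),
    ]

# A word box on a US-Letter PDF page (612 x 792 points).
norm = normalize_bbox([61.2, 79.2, 612.0, 792.0], 612, 792)
```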
### Data Splits
This dataset has 3 splits: _train_, _validation_, and _test_.
| Dataset Split | Number of Instances |
| ------------- | --------------------|
| Train | 78,234 |
| Validation | 4,084 |
| Test | 4,350 |
## Citation
``` latex
@article{nguyen2023loralay,
title={LoRaLay: A Multilingual and Multimodal Dataset for Long Range and Layout-Aware Summarization},
author={Nguyen, Laura and Scialom, Thomas and Piwowarski, Benjamin and Staiano, Jacopo},
journal={arXiv preprint arXiv:2301.11312},
year={2023}
}
```
| 1,528 | [embedding vector truncated] |

tomekkorbak/pile-pii-scrubadub | 2023-02-07T15:26:41.000Z | ["task_categories:text-classification", "task_categories:other", "task_ids:acceptability-classification", "task_ids:text-scoring", "annotations_creators:machine-generated", "language_creators:found", "multilinguality:monolingual", "size_categories:1M<n<10M", "source_datasets:extended|the_pile", "la... ] | tomekkorbak | null | null | 2 | 4 | 2023-01-25T18:00:01 | ---
annotations_creators:
- machine-generated
language:
- en
language_creators:
- found
license:
- mit
multilinguality:
- monolingual
pretty_name: pile-pii-scrubadub
size_categories:
- 1M<n<10M
source_datasets:
- extended|the_pile
tags:
- pii
- personal
- identifiable
- information
- pretraining-with-human-feedback
task_categories:
- text-classification
- other
task_ids:
- acceptability-classification
- text-scoring
---
# Dataset Card for pile-pii-scrubadub
## Dataset Description
- **Repository: https://github.com/tomekkorbak/aligned-pretraining-objectives**
- **Paper: Arxiv link to be added**
### Dataset Summary
This dataset contains text from [The Pile](https://huggingface.co/datasets/the_pile), annotated based on the personally identifiable information (PII) in each sentence.
Each document (row in the dataset) is segmented into sentences, and each sentence is given a score: the percentage of words in it that are classified as PII by [Scrubadub](https://scrubadub.readthedocs.io/en/stable/).
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
This dataset is taken from [The Pile](https://huggingface.co/datasets/the_pile), which is English text.
## Dataset Structure
### Data Instances
The dataset contains 1,949,977 instances.
### Data Fields
- texts (sequence): a list of the sentences in the document (segmented using [SpaCy](https://spacy.io/))
- meta (dict): the section of [The Pile](https://huggingface.co/datasets/the_pile) from which it originated
- scores (sequence): a score for each sentence in the `texts` column indicating the percent of words that are detected as PII by [Scrubadub](https://scrubadub.readthedocs.io/en/stable/)
- avg_score (float64): the average of the scores listed in the `scores` column
- num_sents (int64): the number of sentences (and scores) in that document
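The per-sentence score is simply the share of words flagged as PII. A self-contained sketch of that scoring, with a toy regex email detector standing in for Scrubadub's full detector suite (the regex and function names are illustrative assumptions, not the annotation pipeline's actual code):

```python
import re

# Toy stand-in for Scrubadub's detectors: the real pipeline detects many
# PII types (emails, phone numbers, SSNs, ...); here a regex email
# detector is used purely for illustration.
EMAIL_RE = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b")

def pii_score(sentence: str) -> float:
    """Percent of whitespace-separated words flagged as PII."""
    words = sentence.split()
    if not words:
        return 0.0
    flagged = sum(1 for w in words if EMAIL_RE.search(w))
    return 100.0 * flagged / len(words)

sents = ["Contact me at jane.doe@example.com please.", "No PII here."]
scores = [pii_score(s) for s in sents]        # -> the `scores` column
avg_score = sum(scores) / len(scores)          # -> the `avg_score` column
```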
### Data Splits
Training set only
## Dataset Creation
### Curation Rationale
This is labeled text from [The Pile](https://huggingface.co/datasets/the_pile), a large dataset of text in English. The PII is labeled so that generative language models can be trained to avoid generating PII.
### Source Data
#### Initial Data Collection and Normalization
This is labeled text from [The Pile](https://huggingface.co/datasets/the_pile).
#### Who are the source language producers?
Please see [The Pile](https://huggingface.co/datasets/the_pile) for the source of the dataset.
### Annotations
#### Annotation process
For each sentence, [Scrubadub](https://scrubadub.readthedocs.io/en/stable/) was used to detect:
- email addresses
- addresses and postal codes
- phone numbers
- credit card numbers
- US social security numbers
- vehicle plate numbers
- dates of birth
- URLs
- login credentials
#### Who are the annotators?
[Scrubadub](https://scrubadub.readthedocs.io/en/stable/)
### Personal and Sensitive Information
This dataset contains all PII that was originally contained in [The Pile](https://huggingface.co/datasets/the_pile), with all detected PII annotated.
## Considerations for Using the Data
### Social Impact of Dataset
This dataset contains examples of real PII (conveniently annotated in the text!). Please take care to avoid misusing it or putting anybody in danger by publicizing their information.
This dataset is intended for research purposes only. We cannot guarantee that all PII has been detected, and we cannot guarantee that models trained using it will avoid generating PII.
We do not recommend deploying models trained on this data.
### Discussion of Biases
This dataset contains all biases from The Pile discussed in their paper: https://arxiv.org/abs/2101.00027
### Other Known Limitations
The PII in this dataset was detected using imperfect automated detection methods. We cannot guarantee that the labels are 100% accurate.
## Additional Information
### Dataset Curators
[The Pile](https://huggingface.co/datasets/the_pile)
### Licensing Information
From [The Pile](https://huggingface.co/datasets/the_pile): PubMed Central: [MIT License](https://github.com/EleutherAI/pile-pubmedcentral/blob/master/LICENSE)
### Citation Information
Paper information to be added
### Contributions
[The Pile](https://huggingface.co/datasets/the_pile) | 4,215 | [embedding vector truncated] |

metaeval/arct | 2023-05-15T08:19:50.000Z | ["license:apache-2.0", "region:us"] | metaeval | null | null | 0 | 4 | 2023-01-26T08:41:15 | ---
license: apache-2.0
---
# The Argument Reasoning Comprehension Task: Identification and Reconstruction of Implicit Warrants
https://github.com/UKPLab/argument-reasoning-comprehension-task
```bib
@InProceedings{Habernal.et.al.2018.NAACL.ARCT,
title = {The Argument Reasoning Comprehension Task: Identification
and Reconstruction of Implicit Warrants},
author = {Habernal, Ivan and Wachsmuth, Henning and
Gurevych, Iryna and Stein, Benno},
publisher = {Association for Computational Linguistics},
booktitle = {Proceedings of the 2018 Conference of the North American Chapter
of the Association for Computational Linguistics:
Human Language Technologies, Volume 1 (Long Papers)},
pages = {1930--1940},
month = jun,
year = {2018},
address = {New Orleans, Louisiana},
url = {http://aclweb.org/anthology/N18-1175}
}
``` | 923 | [embedding vector truncated] |

Kaludi/data-food-classification | 2023-01-29T19:34:43.000Z | ["task_categories:image-classification", "region:us"] | Kaludi | null | null | 1 | 4 | 2023-01-29T06:49:56 | ---
task_categories:
- image-classification
---
# Dataset for project: food-classification
## Dataset Description
This dataset has been processed for project food-classification.
### Languages
The BCP-47 code for the dataset's language is unk.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"image": "<308x512 RGB PIL image>",
"target": 0
},
{
"image": "<512x512 RGB PIL image>",
"target": 0
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"image": "Image(decode=True, id=None)",
"target": "ClassLabel(names=['apple_pie', 'falafel', 'french_toast', 'ice_cream', 'ramen', 'sushi', 'tiramisu'], id=None)"
}
```
### Dataset Splits
This dataset is split into a train and validation split. The split sizes are as follow:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 1050 |
| valid | 350 |
| 986 | [embedding vector truncated] |

gokuls/glue_augmented_wnli | 2023-01-30T14:31:41.000Z | ["license:apache-2.0", "region:us"] | gokuls | null | null | 0 | 4 | 2023-01-29T23:57:12 | ---
license: apache-2.0
---
# Dataset Card for glue_augmented_wnli
## Dataset Description
Augmented WNLI dataset
**Reference:** https://huggingface.co/datasets/glue | 167 | [embedding vector truncated] |

Cohere/miracl-en-queries-22-12 | 2023-02-06T11:54:43.000Z | ["task_categories:text-retrieval", "task_ids:document-retrieval", "annotations_creators:expert-generated", "multilinguality:multilingual", "language:en", "license:apache-2.0", "region:us"] | Cohere | null | null | 0 | 4 | 2023-02-03T02:21:53 | ---
annotations_creators:
- expert-generated
language:
- en
multilinguality:
- multilingual
size_categories: []
source_datasets: []
tags: []
task_categories:
- text-retrieval
license:
- apache-2.0
task_ids:
- document-retrieval
---
# MIRACL (en) embedded with cohere.ai `multilingual-22-12` encoder
We encoded the [MIRACL dataset](https://huggingface.co/miracl) using the [cohere.ai](https://txt.cohere.ai/multilingual/) `multilingual-22-12` embedding model.
The query embeddings can be found in [Cohere/miracl-en-queries-22-12](https://huggingface.co/datasets/Cohere/miracl-en-queries-22-12) and the corpus embeddings can be found in [Cohere/miracl-en-corpus-22-12](https://huggingface.co/datasets/Cohere/miracl-en-corpus-22-12).
For the original datasets, see [miracl/miracl](https://huggingface.co/datasets/miracl/miracl) and [miracl/miracl-corpus](https://huggingface.co/datasets/miracl/miracl-corpus).
Dataset info:
> MIRACL 🌍🙌🌏 (Multilingual Information Retrieval Across a Continuum of Languages) is a multilingual retrieval dataset that focuses on search across 18 different languages, which collectively encompass over three billion native speakers around the world.
>
> The corpus for each language is prepared from a Wikipedia dump, where we keep only the plain text and discard images, tables, etc. Each article is segmented into multiple passages using WikiExtractor based on natural discourse units (e.g., `\n\n` in the wiki markup). Each of these passages comprises a "document" or unit of retrieval. We preserve the Wikipedia article title of each passage.
## Embeddings
We compute for `title+" "+text` the embeddings using our `multilingual-22-12` embedding model, a state-of-the-art model that works for semantic search in 100 languages. If you want to learn more about this model, have a look at [cohere.ai multilingual embedding model](https://txt.cohere.ai/multilingual/).
## Loading the dataset
In [miracl-en-corpus-22-12](https://huggingface.co/datasets/Cohere/miracl-en-corpus-22-12) we provide the corpus embeddings. Note, depending on the selected split, the respective files can be quite large.
You can either load the dataset like this:
```python
from datasets import load_dataset
docs = load_dataset(f"Cohere/miracl-en-corpus-22-12", split="train")
```
Or you can also stream it without downloading it before:
```python
from datasets import load_dataset
docs = load_dataset(f"Cohere/miracl-en-corpus-22-12", split="train", streaming=True)
for doc in docs:
docid = doc['docid']
title = doc['title']
text = doc['text']
emb = doc['emb']
```
## Search
Have a look at [miracl-en-queries-22-12](https://huggingface.co/datasets/Cohere/miracl-en-queries-22-12) where we provide the query embeddings for the MIRACL dataset.
To search in the documents, you must use **dot-product**.
And then compare this query embeddings either with a vector database (recommended) or directly computing the dot product.
A full search example:
```python
# Attention! For large datasets, this requires a lot of memory to store
# all document embeddings and to compute the dot product scores.
# Only use this for smaller datasets. For large datasets, use a vector DB
from datasets import load_dataset
import torch
#Load documents + embeddings
docs = load_dataset(f"Cohere/miracl-en-corpus-22-12", split="train")
doc_embeddings = torch.tensor(docs['emb'])
# Load queries
queries = load_dataset(f"Cohere/miracl-en-queries-22-12", split="dev")
# Select the first query as example
qid = 0
query = queries[qid]
query_embedding = torch.tensor([query['emb']])  # shape (1, dim)
# Compute dot score between query embedding and document embeddings
dot_scores = torch.mm(query_embedding, doc_embeddings.transpose(0, 1))
top_k = torch.topk(dot_scores, k=3)
# Print results
print("Query:", query['query'])
for doc_id in top_k.indices[0].tolist():
print(docs[doc_id]['title'])
print(docs[doc_id]['text'])
```
You can get embeddings for new queries using our API:
```python
#Run: pip install cohere
import cohere
co = cohere.Client(f"{api_key}") # You should add your cohere API Key here :))
texts = ['my search query']
response = co.embed(texts=texts, model='multilingual-22-12')
query_embedding = response.embeddings[0] # Get the embedding for the first text
```
## Performance
In the following table we compare the cohere multilingual-22-12 model with Elasticsearch version 8.6.0 lexical search (title and passage indexed as independent fields). Note that Elasticsearch doesn't support all languages that are part of the MIRACL dataset.
We compute nDCG@10 (a ranking-based metric) as well as hit@3: whether at least one relevant document appears in the top-3 results. We find hit@3 easier to interpret, as it gives the fraction of queries for which a relevant document is found among the top-3 results.
Note: MIRACL only annotated a small fraction of passages (10 per query) for relevancy. Especially for larger Wikipedias (like English), we often found many more relevant passages. This is known as annotation holes. Real nDCG@10 and hit@3 performance is likely higher than depicted.
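For reference, the two metrics can be sketched as follows (standard textbook definitions; binary relevance labels are assumed here, and the evaluation in the table may use a different implementation):

```python
import math

def dcg_at_k(rels, k):
    """Discounted cumulative gain: rels[i] is the relevance of the
    document ranked at position i (0-indexed, best first)."""
    return sum(r / math.log2(i + 2) for i, r in enumerate(rels[:k]))

def ndcg_at_k(rels, k=10):
    """DCG of the actual ranking divided by DCG of the ideal ranking."""
    ideal = dcg_at_k(sorted(rels, reverse=True), k)
    return dcg_at_k(rels, k) / ideal if ideal > 0 else 0.0

def hit_at_k(rels, k=3):
    """True if at least one relevant document is in the top-k results."""
    return any(r > 0 for r in rels[:k])

# Binary relevance of the top-ranked documents for one query.
rels = [0, 1, 0, 1]
```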
| Model | cohere multilingual-22-12 nDCG@10 | cohere multilingual-22-12 hit@3 | ES 8.6.0 nDCG@10 | ES 8.6.0 hit@3 |
|---|---|---|---|---|
| miracl-ar | 64.2 | 75.2 | 46.8 | 56.2 |
| miracl-bn | 61.5 | 75.7 | 49.2 | 60.1 |
| miracl-de | 44.4 | 60.7 | 19.6 | 29.8 |
| miracl-en | 44.6 | 62.2 | 30.2 | 43.2 |
| miracl-es | 47.0 | 74.1 | 27.0 | 47.2 |
| miracl-fi | 63.7 | 76.2 | 51.4 | 61.6 |
| miracl-fr | 46.8 | 57.1 | 17.0 | 21.6 |
| miracl-hi | 50.7 | 62.9 | 41.0 | 48.9 |
| miracl-id | 44.8 | 63.8 | 39.2 | 54.7 |
| miracl-ru | 49.2 | 66.9 | 25.4 | 36.7 |
| **Avg** | 51.7 | 67.5 | 34.7 | 46.0 |
Further languages (not supported by Elasticsearch):
| Model | cohere multilingual-22-12 nDCG@10 | cohere multilingual-22-12 hit@3 |
|---|---|---|
| miracl-fa | 44.8 | 53.6 |
| miracl-ja | 49.0 | 61.0 |
| miracl-ko | 50.9 | 64.8 |
| miracl-sw | 61.4 | 74.5 |
| miracl-te | 67.8 | 72.3 |
| miracl-th | 60.2 | 71.9 |
| miracl-yo | 56.4 | 62.2 |
| miracl-zh | 43.8 | 56.5 |
| **Avg** | 54.3 | 64.6 |
| 6,103 | [embedding vector truncated] |

sileod/attempto-nli | 2023-05-31T08:29:58.000Z | ["task_categories:text-classification", "task_ids:natural-language-inference", "language:en", "license:apache-2.0", "region:us"] | sileod | null | null | 0 | 4 | 2023-02-03T20:18:39 | ---
license: apache-2.0
task_ids:
- natural-language-inference
task_categories:
- text-classification
language:
- en
---
Natural language inference using Attempto Controlled English.
Paper to come
```
@inproceedings{fuchs2012first,
title={First-order reasoning for attempto controlled english},
author={Fuchs, Norbert E},
booktitle={Controlled Natural Language: Second International Workshop, CNL 2010, Marettimo Island, Italy, September 13-15, 2010. Revised Papers 2},
pages={73--94},
year={2012},
organization={Springer}
}
``` | 541 | [embedding vector truncated] |

Kamtera/Persian-conversational-dataset | 2023-04-04T08:19:27.000Z | ["task_categories:conversational", "task_categories:text-generation", "language:fa", "license:apache-2.0", "region:us"] | Kamtera | persian-conversational-dataset | null | 0 | 4 | 2023-02-05T10:12:23 | ---
license: apache-2.0
task_categories:
- conversational
- text-generation
language:
- fa
pretty_name: persianConversation
---
persianConversation | 147 | [embedding vector truncated] |

metaeval/autotnli | 2023-05-31T08:55:41.000Z | ["task_categories:text-classification", "task_ids:natural-language-inference", "language:en", "license:apache-2.0", "region:us"] | metaeval | null | null | 0 | 4 | 2023-02-07T21:36:51 | ---
license: apache-2.0
language:
- en
task_ids:
- natural-language-inference
task_categories:
- text-classification
---
https://github.com/Dibyakanti/AutoTNLI-code
```
@inproceedings{kumar-etal-2022-autotnli,
title = "Realistic Data Augmentation Framework for Enhancing Tabular Reasoning",
author = "Kumar, Dibyakanti and
Gupta, Vivek and
Sharma, Soumya and
Zhang, Shuo",
booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2022",
address = "Online and Abu Dhabi",
publisher = "Association for Computational Linguistics",
url = "https://vgupta123.github.io/docs/autotnli.pdf",
pages = "",
abstract = "Existing approaches to constructing training data for Natural Language Inference (NLI) tasks, such as for semi-structured table reasoning, are either via crowdsourcing or fully automatic methods. However, the former is expensive and time-consuming and thus limits scale, and the latter often produces naive examples that may lack complex reasoning. This paper develops a realistic semi-automated framework for data augmentation for tabular inference. Instead of manually generating a hypothesis for each table, our methodology generates hypothesis templates transferable to similar tables. In addition, our framework entails the creation of rational counterfactual tables based on human written logical constraints and premise paraphrasing. For our case study, we use the InfoTabS (Gupta et al., 2020), which is an entity-centric tabular inference dataset. We observed that our framework could generate human-like tabular inference examples, which could benefit training data augmentation, especially in the scenario with limited supervision.",
}
``` | 1,773 | [embedding vector truncated] |