id stringlengths 2 115 | author stringlengths 2 42 ⌀ | last_modified timestamp[us, tz=UTC] | downloads int64 0 8.87M | likes int64 0 3.84k | paperswithcode_id stringlengths 2 45 ⌀ | tags list | lastModified timestamp[us, tz=UTC] | createdAt stringlengths 24 24 | key stringclasses 1 value | created timestamp[us] | card stringlengths 1 1.01M | embedding list | library_name stringclasses 21 values | pipeline_tag stringclasses 27 values | mask_token null | card_data null | widget_data null | model_index null | config null | transformers_info null | spaces null | safetensors null | transformersInfo null | modelId stringlengths 5 111 ⌀ | embeddings list |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
kayteekay/bookimg_dataset | kayteekay | 2023-08-04T06:15:58Z | 28 | 0 | null | [
"region:us"
] | 2023-08-04T06:15:58Z | 2023-08-04T04:43:10.000Z | 2023-08-04T04:43:10 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 289585512.68
num_examples: 32581
download_size: 0
dataset_size: 289585512.68
---
# Dataset Card for "bookimg_dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.5736058354377747,
-0.19546806812286377,
-0.10539605468511581,
0.09317158162593842,
-0.3151734471321106,
-0.15435495972633362,
0.20078013837337494,
0.023664195090532303,
0.5554397702217102,
0.5794923305511475,
-0.8142786026000977,
-0.9668184518814087,
-0.6144688725471497,
-0.285723090171... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
TrainingDataPro/bald-people-segmentation-dataset | TrainingDataPro | 2023-09-14T16:35:35Z | 28 | 1 | null | [
"task_categories:image-segmentation",
"language:en",
"license:cc-by-nc-nd-4.0",
"code",
"medical",
"region:us"
] | 2023-09-14T16:35:35Z | 2023-08-04T13:34:54.000Z | 2023-08-04T13:34:54 | ---
license: cc-by-nc-nd-4.0
task_categories:
- image-segmentation
language:
- en
tags:
- code
- medical
---
# Bald People Segmentation Dataset
The dataset consists of images of bald people and corresponding segmentation masks.
Segmentation masks highlight the regions of the images that delineate the bald scalp. By using these segmentation masks, researchers and practitioners can focus only on the areas of interest.
The dataset is designed to be accessible and easy to use, providing high-resolution images and corresponding segmentation masks in PNG format.
# Get the dataset
### This is just an example of the data
Leave a request on [**https://trainingdata.pro/data-market**](https://trainingdata.pro/data-market?utm_source=huggingface&utm_medium=cpc&utm_campaign=bald-people-segmentation-dataset) to discuss your requirements, learn about the price and buy the dataset.
# Content
### The dataset includes 2 folders:
- **Female** - the folder includes subfolders corresponding to each woman in the sample. Each subfolder contains top-view images of the woman's head and segmentation masks for the original photos.
- **Male** - the folder includes subfolders corresponding to each man in the sample. Each subfolder contains front and top-view images of the man's head and segmentation masks for the original photos.

### File with the extension .csv
- **link**: link to access the media file,
- **type**: type of the image,
- **gender**: gender of the person in the photo
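A minimal sketch of reading the annotation CSV with pandas; the actual file name is not given in the card, so `annotations.csv` below is a hypothetical placeholder:
```python
import pandas as pd

# Hypothetical file name -- the card only says "file with the extension .csv".
df = pd.read_csv("annotations.csv")  # columns: link, type, gender
print(df["gender"].value_counts())   # e.g., count of male vs. female entries
```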
# A Bald People Segmentation dataset can be made in accordance with your requirements.
## [**TrainingData**](https://trainingdata.pro/data-market?utm_source=huggingface&utm_medium=cpc&utm_campaign=bald-people-segmentation-dataset) provides high-quality data annotation tailored to your needs
More datasets in TrainingData's Kaggle account: **https://www.kaggle.com/trainingdatapro/datasets**
TrainingData's GitHub: **https://github.com/Trainingdata-datamarket/TrainingData_All_datasets** | [
-0.5352700352668762,
-0.5913286209106445,
0.10059143602848053,
0.4316685199737549,
-0.23821613192558289,
0.27131882309913635,
0.04109055921435356,
-0.45892879366874695,
0.3771331012248993,
0.7632184028625488,
-1.1463639736175537,
-0.9600988626480103,
-0.5075410008430481,
0.0153200058266520... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
tasksource/esci | tasksource | 2023-08-09T11:23:31Z | 28 | 0 | null | [
"task_categories:text-classification",
"task_categories:text-retrieval",
"language:en",
"language:ja",
"language:es",
"license:apache-2.0",
"arxiv:2206.06588",
"region:us"
] | 2023-08-09T11:23:31Z | 2023-08-09T10:12:27.000Z | 2023-08-09T10:12:27 | ---
dataset_info:
features:
- name: example_id
dtype: int64
- name: query
dtype: string
- name: query_id
dtype: int64
- name: product_id
dtype: string
- name: product_locale
dtype: string
- name: esci_label
dtype: string
- name: small_version
dtype: int64
- name: large_version
dtype: int64
- name: product_title
dtype: string
- name: product_description
dtype: string
- name: product_bullet_point
dtype: string
- name: product_brand
dtype: string
- name: product_color
dtype: string
- name: product_text
dtype: string
splits:
- name: train
num_bytes: 5047037946
num_examples: 2027874
- name: test
num_bytes: 1631847321
num_examples: 652490
download_size: 2517788457
dataset_size: 6678885267
license: apache-2.0
task_categories:
- text-classification
- text-retrieval
language:
- en
- ja
- es
---
# Dataset Card for "esci"
ESCI product search dataset
https://github.com/amazon-science/esci-data/
Preprocessing steps:
- joined the two relevant source files
- added `product_text`, aggregating all product text fields
- mapped `esci_label` to its full name
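A minimal sketch of loading the preprocessed dataset with the Hugging Face `datasets` library (split and column names taken from the YAML header above):
```python
from datasets import load_dataset

# Splits and columns per the dataset_info block above.
esci = load_dataset("tasksource/esci", split="test")
example = esci[0]
print(example["query"], "->", example["esci_label"])  # a query and its relevance label
```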
```bib
@article{reddy2022shopping,
title={Shopping Queries Dataset: A Large-Scale {ESCI} Benchmark for Improving Product Search},
author={Chandan K. Reddy and Lluís Màrquez and Fran Valero and Nikhil Rao and Hugo Zaragoza and Sambaran Bandyopadhyay and Arnab Biswas and Anlu Xing and Karthik Subbian},
year={2022},
eprint={2206.06588},
archivePrefix={arXiv}
}
``` | [
-0.39113759994506836,
-0.5828738808631897,
0.3703247606754303,
0.1320260465145111,
-0.22399796545505524,
0.1574389636516571,
-0.11919381469488144,
-0.5627140402793884,
0.513786792755127,
0.4345897436141968,
-0.5436315536499023,
-0.7536517381668091,
-0.45950689911842346,
0.31706976890563965... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
deep-plants/AGM_HS | deep-plants | 2023-10-04T11:07:25Z | 28 | 3 | null | [
"license:cc",
"region:us"
] | 2023-10-04T11:07:25Z | 2023-08-16T10:04:19.000Z | 2023-08-16T10:04:19 | ---
license: cc
dataset_info:
features:
- name: image
dtype: image
- name: mask
dtype: image
- name: crop_type
dtype: string
- name: label
dtype: string
splits:
- name: train
num_bytes: 22900031.321
num_examples: 6127
download_size: 22010079
dataset_size: 22900031.321
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for AGM_HS Dataset
## Dataset Summary
The AGM<sub>HS</sub> (AGricolaModerna Healthy-Stress) Dataset is an extension of the AGM Dataset, specifically curated to address the challenge of detecting and localizing plant stress in top-view images of harvested crops. This dataset comprises 6,127 high-resolution RGB images, each with a resolution of 120x120 pixels, selected from the original AGM Dataset. The images in AGM<sub>HS</sub> are divided into two categories: healthy samples (3,798 images) and stressed samples (2,329 images) representing 14 of the 18 classes present in AGM. Alongside the healthy/stressed classification labels, the dataset also provides segmentation masks for the stressed areas.
## Supported Tasks
- **Image classification**: healthy vs. stressed classification.
- **Image segmentation**: detection and localization of plant stress in top-view images.
## Languages
The dataset primarily consists of image data and does not involve language content. Therefore, the primary language is English, but it is not relevant to the dataset's core content.
## Dataset Structure
### Data Instances
A typical data instance from the AGM<sub>HS</sub> Dataset consists of the following:
```
{
  'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=120x120 at 0x29CEAD71780>,
  'label': 'stressed',
  'crop_type': 'by',
  'mask': <PIL.PngImagePlugin.PngImageFile image mode=L size=120x120 at 0x29CEAD71780>
}
```
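A minimal sketch of loading the dataset and inspecting one instance; the `train` split and field names follow the YAML header above:
```python
from datasets import load_dataset

ds = load_dataset("deep-plants/AGM_HS", split="train")
sample = ds[0]
print(sample["label"], sample["crop_type"])       # e.g. "stressed", "by"
print(sample["image"].size, sample["mask"].size)  # both (120, 120)
```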
### Data Fields
The dataset's data instances have the following fields:
- `image`: A PIL.Image.Image object representing the image.
- `label`: A string indicating whether the image is "healthy" or "stressed."
- `crop_type`: A string representing the crop type in the image.
- `mask`: A PIL.Image.Image object representing the segmentation mask of stressed areas in the image, stored as a PNG image.
### Data Splits
- **Training Set**:
- Number of Examples: 6,127
- Healthy Samples: 3,798
- Stressed Samples: 2,329
## Dataset Creation
### Curation Rationale
The AGM<sub>HS</sub> Dataset was created as an extension of the AGM Dataset to specifically address the challenge of detecting and localizing plant stress in top-view images of harvested crops. This dataset is essential for the development and evaluation of advanced segmentation models tailored for this task.
### Source Data
#### Initial Data Collection and Normalization
The images in AGM<sub>HS</sub> were extracted from the original AGM Dataset. During the extraction process, labelers selected images showing clear signs of either good health or high stress. These sub-images were resized to 120x120 pixels to create AGM<sub>HS</sub>.
### Annotations
#### Annotation Process
The AGM<sub>HS</sub> Dataset underwent a secondary stage of annotation. Labelers manually collected images by clicking on points corresponding to stressed areas on the leaves. These clicked points served as prompts for the semi-automatic generation of segmentation masks using the "Segment Anything" technique (Kirillov et al., 2023). Each image is annotated with a classification label ("healthy" or "stressed") and a corresponding segmentation mask.
### Who Are the Annotators?
The annotators for AGM<sub>HS</sub> are domain experts with knowledge of plant health and stress detection.
## Personal and Sensitive Information
The dataset does not contain personal or sensitive information about individuals. It exclusively consists of images of plants.
## Considerations for Using the Data
### Social Impact of Dataset
The AGM<sub>HS</sub> Dataset plays a crucial role in advancing research and technologies for plant stress detection and localization in the context of modern agriculture. By providing a diverse set of top-view crop images with associated segmentation masks, this dataset can facilitate the development of innovative solutions for sustainable agriculture, contributing to increased crop health, yield prediction, and overall food security.
### Discussion of Biases and Known Limitations
While AGM<sub>HS</sub> is a valuable dataset, it inherits some limitations from the original AGM Dataset. It primarily involves images from a single vertical farm setting, potentially limiting the representativeness of broader agricultural scenarios. Additionally, the dataset's composition may reflect regional agricultural practices and business-driven crop preferences specific to vertical farming. Researchers should be aware of these potential biases when utilizing AGM<sub>HS</sub> for their work.
## Additional Information
### Dataset Curators
The AGM<sub>HS</sub> Dataset is curated by DeepPlants and AgricolaModerna. For further information, please contact us at:
- nico@deepplants.com
- etienne.david@agricolamoderna.com
### Licensing Information
### Citation Information
If you use the AGM<sub>HS</sub> dataset in your work, please consider citing the following publication:
```bibtex
@InProceedings{Sama_2023_ICCV,
author = {Sama, Nico and David, Etienne and Rossetti, Simone and Antona, Alessandro and Franchetti, Benjamin and Pirri, Fiora},
title = {A new Large Dataset and a Transfer Learning Methodology for Plant Phenotyping in Vertical Farms},
booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV) Workshops},
month = {October},
year = {2023},
pages = {540-551}
}
```
| [
-0.3953959345817566,
-0.6316037178039551,
0.2325448989868164,
0.13603462278842926,
-0.2693292796611786,
-0.10008850693702698,
0.01612965390086174,
-0.6986492872238159,
0.21871298551559448,
0.2710365056991577,
-0.5905143618583679,
-0.8974624276161194,
-0.7671762704849243,
0.2468283176422119... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
vitaliy-sharandin/synthetic-fraud-detection | vitaliy-sharandin | 2023-08-24T17:17:37Z | 28 | 3 | null | [
"region:us"
] | 2023-08-24T17:17:37Z | 2023-08-24T17:13:00.000Z | 2023-08-24T17:13:00 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622264862060547,
0.43461528420448303,
-0.52829909324646,
0.7012971639633179,
0.7915720343589783,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104477167129517,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
seara/ru_go_emotions | seara | 2023-08-25T19:13:08Z | 28 | 1 | null | [
"task_categories:text-classification",
"task_categories:translation",
"task_ids:multi-class-classification",
"task_ids:multi-label-classification",
"task_ids:sentiment-analysis",
"task_ids:sentiment-classification",
"size_categories:10K<n<100K",
"size_categories:100K<n<1M",
"source_datasets:go_emoti... | 2023-08-25T19:13:08Z | 2023-08-25T10:12:05.000Z | 2023-08-25T10:12:05 | ---
dataset_info:
- config_name: raw
features:
- name: ru_text
dtype: string
- name: text
dtype: string
- name: id
dtype: string
- name: author
dtype: string
- name: subreddit
dtype: string
- name: link_id
dtype: string
- name: parent_id
dtype: string
- name: created_utc
dtype: float32
- name: rater_id
dtype: int32
- name: example_very_unclear
dtype: bool
- name: admiration
dtype: int32
- name: amusement
dtype: int32
- name: anger
dtype: int32
- name: annoyance
dtype: int32
- name: approval
dtype: int32
- name: caring
dtype: int32
- name: confusion
dtype: int32
- name: curiosity
dtype: int32
- name: desire
dtype: int32
- name: disappointment
dtype: int32
- name: disapproval
dtype: int32
- name: disgust
dtype: int32
- name: embarrassment
dtype: int32
- name: excitement
dtype: int32
- name: fear
dtype: int32
- name: gratitude
dtype: int32
- name: grief
dtype: int32
- name: joy
dtype: int32
- name: love
dtype: int32
- name: nervousness
dtype: int32
- name: optimism
dtype: int32
- name: pride
dtype: int32
- name: realization
dtype: int32
- name: relief
dtype: int32
- name: remorse
dtype: int32
- name: sadness
dtype: int32
- name: surprise
dtype: int32
- name: neutral
dtype: int32
splits:
- name: train
num_bytes: 84388976
num_examples: 211225
download_size: 41128059
dataset_size: 84388976
- config_name: simplified
features:
- name: ru_text
dtype: string
- name: text
dtype: string
- name: labels
sequence:
class_label:
names:
'0': admiration
'1': amusement
'2': anger
'3': annoyance
'4': approval
'5': caring
'6': confusion
'7': curiosity
'8': desire
'9': disappointment
'10': disapproval
'11': disgust
'12': embarrassment
'13': excitement
'14': fear
'15': gratitude
'16': grief
'17': joy
'18': love
'19': nervousness
'20': optimism
'21': pride
'22': realization
'23': relief
'24': remorse
'25': sadness
'26': surprise
'27': neutral
- name: id
dtype: string
splits:
- name: train
num_bytes: 10118125
num_examples: 43410
- name: validation
num_bytes: 1261921
num_examples: 5426
- name: test
num_bytes: 1254989
num_examples: 5427
download_size: 7628917
dataset_size: 12635035
configs:
- config_name: raw
data_files:
- split: train
path: raw/train-*
- config_name: simplified
data_files:
- split: train
path: simplified/train-*
- split: validation
path: simplified/validation-*
- split: test
path: simplified/test-*
license: mit
task_categories:
- text-classification
- translation
task_ids:
- multi-class-classification
- multi-label-classification
- sentiment-analysis
- sentiment-classification
language:
- ru
- en
pretty_name: Ru-GoEmotions
size_categories:
- 10K<n<100K
- 100K<n<1M
source_datasets:
- go_emotions
tags:
- emotion-classification
- emotion
- reddit
---
## Description
This dataset is a translation of the Google [GoEmotions](https://github.com/google-research/google-research/tree/master/goemotions) emotion classification dataset.
All features remain unchanged, except for the addition of a new `ru_text` column containing the translated text in Russian.
For the translation process, I used the [Deep translator](https://github.com/nidhaloff/deep-translator) with the Google engine.
You can find all the details about translation, raw `.csv` files and other stuff in this [Github repository](https://github.com/searayeah/ru-goemotions).
For more information also check the official original dataset [card](https://huggingface.co/datasets/go_emotions).
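A minimal sketch of loading the `simplified` config and decoding the integer label ids back to names (config and feature names from the YAML header above):
```python
from datasets import load_dataset

ds = load_dataset("seara/ru_go_emotions", "simplified", split="train")
label_names = ds.features["labels"].feature.names  # ClassLabel id -> name
example = ds[0]
print(example["ru_text"])
print([label_names[i] for i in example["labels"]])  # e.g. ['curiosity']
```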
## Id to label
```yaml
0: admiration
1: amusement
2: anger
3: annoyance
4: approval
5: caring
6: confusion
7: curiosity
8: desire
9: disappointment
10: disapproval
11: disgust
12: embarrassment
13: excitement
14: fear
15: gratitude
16: grief
17: joy
18: love
19: nervousness
20: optimism
21: pride
22: realization
23: relief
24: remorse
25: sadness
26: surprise
27: neutral
```
## Label to Russian label
```yaml
admiration: восхищение
amusement: веселье
anger: злость
annoyance: раздражение
approval: одобрение
caring: забота
confusion: непонимание
curiosity: любопытство
desire: желание
disappointment: разочарование
disapproval: неодобрение
disgust: отвращение
embarrassment: смущение
excitement: возбуждение
fear: страх
gratitude: признательность
grief: горе
joy: радость
love: любовь
nervousness: нервозность
optimism: оптимизм
pride: гордость
realization: осознание
relief: облегчение
remorse: раскаяние
sadness: грусть
surprise: удивление
neutral: нейтральность
```
| [
-0.15566018223762512,
-0.3507988452911377,
0.2888343334197998,
0.32976648211479187,
-0.6945504546165466,
-0.1428690105676651,
-0.4410827159881592,
-0.2923429012298584,
0.35201945900917053,
0.01370419654995203,
-0.730845034122467,
-0.9502149224281311,
-0.7993206977844238,
0.0363789275288581... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
harouzie/vi_question_generation | harouzie | 2023-09-04T05:02:36Z | 28 | 1 | null | [
"task_categories:question-answering",
"task_categories:text2text-generation",
"size_categories:100K<n<1M",
"language:vi",
"license:mit",
"region:us"
] | 2023-09-04T05:02:36Z | 2023-09-04T04:53:55.000Z | 2023-09-04T04:53:55 | ---
license: mit
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
- split: valid
path: data/valid-*
dataset_info:
features:
- name: context
dtype: string
- name: question
dtype: string
- name: answers
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 211814961.2307449
num_examples: 174499
- name: test
num_bytes: 26477628.80776531
num_examples: 21813
- name: valid
num_bytes: 26476414.961489797
num_examples: 21812
download_size: 142790671
dataset_size: 264769005
task_categories:
- question-answering
- text2text-generation
language:
- vi
pretty_name: Vietnamese Dataset for Extractive Question Answering and Question Generation
size_categories:
- 100K<n<1M
--- | [
-0.1285335123538971,
-0.1861683875322342,
0.6529128551483154,
0.49436232447624207,
-0.19319400191307068,
0.23607441782951355,
0.36072009801864624,
0.05056373029947281,
0.5793656706809998,
0.7400146722793579,
-0.650810182094574,
-0.23784008622169495,
-0.7102247476577759,
-0.0478255338966846... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
erfanzar/GPT4-8K | erfanzar | 2023-09-07T11:04:23Z | 28 | 4 | null | [
"task_categories:text-classification",
"task_categories:translation",
"task_categories:conversational",
"task_categories:text-generation",
"task_categories:summarization",
"size_categories:1K<n<10K",
"language:en",
"region:us"
] | 2023-09-07T11:04:23Z | 2023-09-06T10:17:32.000Z | 2023-09-06T10:17:32 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: dialogs
sequence: string
- name: user
sequence: string
- name: assistant
sequence: string
- name: llama2_prompt
dtype: string
splits:
- name: train
num_bytes: 193605433
num_examples: 6144
download_size: 90877640
dataset_size: 193605433
task_categories:
- text-classification
- translation
- conversational
- text-generation
- summarization
language:
- en
pretty_name: GPT4
size_categories:
- 1K<n<10K
---
# Dataset Card for "GPT4-8K"
# Dataset Description
This dataset was generated using GPT-4, a powerful language model developed by OpenAI. It contains a collection of dialogs between a user and an assistant, along with additional information, sourced from OpenChat.
## Dataset Configurations
The dataset includes the following configurations:
- **Config Name:** default
- **Data Files:**
- **Split:** train
- **Path:** data/train-*
## Dataset Information
The dataset consists of the following features:
- **Dialogs:** A sequence of strings representing the dialog between the user and the assistant.
- **User:** A sequence of strings representing the user's input during the dialog.
- **Assistant:** A sequence of strings representing the assistant's responses during the dialog.
- **Llama2 Prompt:** A string representing additional prompt information related to the Llama2 model.
The dataset is divided into the following splits:
- **Train:**
- **Number of Bytes:** 193,605,433
- **Number of Examples:** 6,144
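A minimal sketch of loading the split described above and walking through one dialog (field names from the YAML header):
```python
from datasets import load_dataset

ds = load_dataset("erfanzar/GPT4-8K", split="train")
row = ds[0]
for user_turn, assistant_turn in zip(row["user"], row["assistant"]):
    print("USER:", user_turn)
    print("ASSISTANT:", assistant_turn)
print(row["llama2_prompt"][:200])  # pre-formatted Llama-2-style prompt string
```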
## Dataset Size and Download
- **Download Size:** 90,877,640 bytes
- **Dataset Size:** 193,605,433 bytes
Please note that this dataset was generated by GPT-4 and may contain synthetic or simulated data. It is intended for research and experimentation purposes.
For more information or inquiries, please contact the dataset owner.
Thank you for using this dataset! | [
-0.34132683277130127,
-0.48787492513656616,
0.3321187496185303,
0.013142908923327923,
-0.44782108068466187,
-0.16708935797214508,
-0.1142522320151329,
-0.40439897775650024,
0.17818832397460938,
0.5828922986984253,
-0.7157143950462341,
-0.46860164403915405,
-0.3982178866863251,
0.1812483817... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
C-MTEB/T2Reranking_en2zh | C-MTEB | 2023-09-09T16:11:54Z | 28 | 1 | null | [
"region:us"
] | 2023-09-09T16:11:54Z | 2023-09-09T16:11:24.000Z | 2023-09-09T16:11:24 | ---
configs:
- config_name: default
data_files:
- split: dev
path: data/dev-*
dataset_info:
features:
- name: query
dtype: string
- name: positive
sequence: string
- name: negative
sequence: string
splits:
- name: dev
num_bytes: 206929387
num_examples: 6129
download_size: 120405829
dataset_size: 206929387
---
# Dataset Card for "T2Reranking_en2zh"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.17268094420433044,
-0.19892586767673492,
0.1328456550836563,
0.418454647064209,
-0.3359624147415161,
0.00009207292168866843,
0.2627177834510803,
-0.2354247123003006,
0.6522868871688843,
0.4393623173236847,
-0.8152212500572205,
-0.704465925693512,
-0.4981399178504944,
-0.2757878303527832... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
harvard-lil/cold-cases | harvard-lil | 2023-10-19T20:17:38Z | 28 | 7 | null | [
"size_categories:1M<n<10M",
"language:en",
"license:cc0-1.0",
"united states",
"law",
"legal",
"court",
"opinions",
"region:us"
] | 2023-10-19T20:17:38Z | 2023-09-12T17:29:50.000Z | 2023-09-12T17:29:50 | ---
license: cc0-1.0
language:
- en
tags:
- united states
- law
- legal
- court
- opinions
size_categories:
- 1M<n<10M
viewer: true
---
<a href="https://huggingface.co/datasets/harvard-lil/cold-cases/resolve/main/coldcases.png"><img src="https://huggingface.co/datasets/harvard-lil/cold-cases/resolve/main/coldcases-banner.webp"/></a>
# Collaborative Open Legal Data (COLD) - Cases
COLD Cases is a dataset of 8.3 million United States legal decisions with text and metadata, formatted as compressed parquet files. If you'd like to view a sample of the dataset formatted as JSON Lines, one is available [here](https://raw.githubusercontent.com/harvard-lil/cold-cases-export/main/sample.jsonl).
This dataset exists to support the open legal movement exemplified by projects like
[Pile of Law](https://huggingface.co/datasets/pile-of-law/pile-of-law) and
[LegalBench](https://hazyresearch.stanford.edu/legalbench/).
A key input to legal understanding projects is caselaw -- the published, precedential decisions of judges deciding legal disputes and explaining their reasoning.
United States caselaw is collected and published as open data by [CourtListener](https://www.courtlistener.com/), which maintains scrapers to aggregate data from
a wide range of public sources.
COLD Cases reformats CourtListener's [bulk data](https://www.courtlistener.com/help/api/bulk-data) so that all of the semantic information about each legal decision
(the authors and text of majority and dissenting opinions; head matter; and substantive metadata) is encoded in a single record per decision,
with extraneous data removed. Serving in the traditional role of libraries as a standardization steward, the Harvard Library Innovation Lab is maintaining
this [open source](https://github.com/harvard-lil/cold-cases-export) pipeline to consolidate the data engineering for preprocessing caselaw so downstream machine
learning and natural language processing projects can use consistent, high quality representations of cases for legal understanding tasks.
Prepared by the [Harvard Library Innovation Lab](https://lil.law.harvard.edu) in collaboration with the [Free Law Project](https://free.law/).
---
## Links
- [Data nutrition label](https://datanutrition.org/labels/v3/?id=c29976b2-858c-4f4e-b7d0-c8ef12ce7dbe) (DRAFT). ([Archive](https://perma.cc/YV5P-B8JL)).
- [Pipeline source code](https://github.com/harvard-lil/cold-cases-export)
---
## Summary
- [Format](#format)
- [Data dictionary](#data-dictionary)
- [Notes on appropriate use](#notes-on-appropriate-use)
---
## Format
[Apache Parquet](https://parquet.apache.org/) is a binary, columnar format that makes filtering and retrieving the data quicker: columns that are unnecessary for a given query or workflow don't need to be read. Hugging Face's [Datasets](https://huggingface.co/docs/datasets/index) library is an easy way to get started working with the entire dataset, and has features for loading and streaming the data, so you don't need to store it all locally or pay attention to how it's formatted on disk.
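For example, a minimal sketch of streaming the dataset without downloading it all up front; the split name `train` is an assumption, so check the repository's file layout:
```python
from datasets import load_dataset

cases = load_dataset("harvard-lil/cold-cases", split="train", streaming=True)
first = next(iter(cases))
print(first["case_name"], first["date_filed"])  # field names per the data dictionary below
```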
[☝️ Go back to Summary](#summary)
---
## Data dictionary
Partial glossary of the fields in the data.
| Field name | Description |
| --- | --- |
| `judges` | Names of judges presiding over the case, extracted from the text. |
| `date_filed` | Date the case was filed. Formatted in ISO Date format. |
| `date_filed_is_approximate` | Boolean representing whether the `date_filed` value is precise to the day. |
| `slug` | Short, human-readable unique string nickname for the case. |
| `case_name_short` | Short name for the case. |
| `case_name` | Fuller name for the case. |
| `case_name_full` | Full, formal name for the case. |
| `attorneys` | Names of attorneys arguing the case, extracted from the text. |
| `nature_of_suit` | Free text representing the type of suit, such as Civil, Tort, etc. |
| `syllabus` | Summary of the questions addressed in the decision, if provided by the reporter of decisions. |
| `headnotes` | Textual headnotes of the case |
| `summary` | Textual summary of the case |
| `disposition` | How the court disposed of the case in their final ruling. |
| `history` | Textual information about what happened to this case in later decisions. |
| `other_dates` | Other dates related to the case in free text. |
| `cross_reference` | Citations to related cases. |
| `citation_count` | Number of cases that cite this one. |
| `precedential_status` | Constrained to the values "Published", "Unknown", "Errata", "Unpublished", "Relating-to", "Separate", "In-chambers" |
| `citations` | Cases that cite this case. |
| `court_short_name` | Short name of court presiding over case. |
| `court_full_name` | Full name of court presiding over case. |
| `court_jurisdiction` | Code for type of court that presided over the case. See: [court_jurisdiction field values](#court_jurisdiction-field-values) |
| `opinions` | An array of subrecords. |
| `opinions.author_str` | Name of the author of an individual opinion. |
| `opinions.per_curiam` | Boolean representing whether the opinion was delivered by an entire court or a single judge. |
| `opinions.type` | One of `"010combined"`, `"015unamimous"`, `"020lead"`, `"025plurality"`, `"030concurrence"`, `"035concurrenceinpart"`, `"040dissent"`, `"050addendum"`, `"060remittitur"`, `"070rehearing"`, `"080onthemerits"`, `"090onmotiontostrike"`. |
| `opinions.opinion_text` | Actual full text of the opinion. |
| `opinions.ocr` | Whether the opinion was captured via optical character recognition or born-digital text. |
### court_jurisdiction field values
| Value | Description |
| --- | --- |
| F | Federal Appellate |
| FD | Federal District |
| FB | Federal Bankruptcy |
| FBP | Federal Bankruptcy Panel |
| FS | Federal Special |
| S | State Supreme |
| SA | State Appellate |
| ST | State Trial |
| SS | State Special |
| TRS | Tribal Supreme |
| TRA | Tribal Appellate |
| TRT | Tribal Trial |
| TRX | Tribal Special |
| TS | Territory Supreme |
| TA | Territory Appellate |
| TT | Territory Trial |
| TSP | Territory Special |
| SAG | State Attorney General |
| MA | Military Appellate |
| MT | Military Trial |
| C | Committee |
| I | International |
| T | Testing |
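Continuing the streaming sketch above, the jurisdiction codes can be used to subset the data; a hedged example keeping only Federal Appellate decisions:
```python
# "F" denotes Federal Appellate per the table above.
federal_appellate = cases.filter(lambda c: c["court_jurisdiction"] == "F")
```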
[☝️ Go back to Summary](#summary)
## Notes on appropriate use
When using this data, please keep in mind:
* All documents in this dataset are public information, published by courts within the United States to inform the public about the law. **You have a right to access them.**
* Nevertheless, **public court decisions frequently contain statements about individuals that are not true**. Court decisions often contain claims that are disputed,
or false claims taken as true based on a legal technicality, or claims taken as true but later found to be false. Legal decisions are designed to inform you about the law -- they are not
designed to inform you about individuals, and should not be used in place of credit databases, criminal records databases, news articles, or other sources intended
to provide factual personal information. Applications should carefully consider whether use of this data will inform about the law, or mislead about individuals.
* **Court decisions are not up-to-date statements of law**. Each decision provides a given judge's best understanding of the law as applied to the stated facts
at the time of the decision. Use of this data to generate statements about the law requires integration of a large amount of context --
the skill typically provided by lawyers -- rather than simple data retrieval.
To mitigate privacy risks, we have filtered out cases [blocked or deindexed by CourtListener](https://www.courtlistener.com/terms/#removal). Researchers who
require access to the full dataset without that filter may rerun our pipeline on CourtListener's raw data.
[☝️ Go back to Summary](#summary) | [
-0.3028346598148346,
-0.6334357857704163,
0.7034509778022766,
0.18077705800533295,
-0.48515748977661133,
-0.16999058425426483,
-0.1328805685043335,
-0.1546553671360016,
0.4448177218437195,
0.7579267621040344,
-0.2897569537162781,
-0.9776525497436523,
-0.47390106320381165,
-0.14894737303256... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Nicolas-BZRD/Original_Songs_Lyrics_with_French_Translation | Nicolas-BZRD | 2023-10-16T14:02:02Z | 28 | 6 | null | [
"task_categories:translation",
"task_categories:text-generation",
"size_categories:10K<n<100K",
"language:fr",
"language:en",
"language:es",
"language:it",
"language:de",
"language:ko",
"language:id",
"language:pt",
"language:no",
"language:fi",
"language:sv",
"language:sw",
"language:... | 2023-10-16T14:02:02Z | 2023-09-12T21:21:44.000Z | 2023-09-12T21:21:44 | ---
license: unknown
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: artist_name
dtype: string
- name: album_name
dtype: string
- name: year
dtype: int64
- name: title
dtype: string
- name: number
dtype: int64
- name: original_version
dtype: string
- name: french_version
dtype: string
- name: language
dtype: string
splits:
- name: train
num_bytes: 250317845
num_examples: 99289
download_size: 122323323
dataset_size: 250317845
task_categories:
- translation
- text-generation
language:
- fr
- en
- es
- it
- de
- ko
- id
- pt
- 'no'
- fi
- sv
- sw
- hr
- so
- ca
- tl
- ja
- nl
- ru
- et
- tr
- ro
- cy
- vi
- af
- hu
- sk
- sl
- cs
- da
- pl
- sq
- el
- he
- zh
- th
- bg
- ar
tags:
- music
- parallel
- parallel data
pretty_name: SYFT
size_categories:
- 10K<n<100K
---
# Original Songs Lyrics with French Translation
### Dataset Summary
Dataset of 99289 songs containing their metadata (author, album, release date, song number), original lyrics and lyrics translated into French.
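A minimal sketch of loading the dataset and selecting songs by original language (column names from the YAML header above):
```python
from datasets import load_dataset

songs = load_dataset(
    "Nicolas-BZRD/Original_Songs_Lyrics_with_French_Translation", split="train"
)
korean = songs.filter(lambda s: s["language"] == "ko")  # 193 songs per the table below
print(korean[0]["original_version"][:80])
print(korean[0]["french_version"][:80])
```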
Details of the number of songs by language of origin can be found in the table below:
| Original language | Number of songs |
|---|:---|
| en | 75786 |
| fr | 18486 |
| es | 1743 |
| it | 803 |
| de | 691 |
| sw | 529 |
| ko | 193 |
| id | 169 |
| pt | 142 |
| no | 122 |
| fi | 113 |
| sv | 70 |
| hr | 53 |
| so | 43 |
| ca | 41 |
| tl | 36 |
| ja | 35 |
| nl | 32 |
| ru | 29 |
| et | 27 |
| tr | 22 |
| ro | 19 |
| cy | 14 |
| vi | 14 |
| af | 13 |
| hu | 10 |
| sk | 10 |
| sl | 10 |
| cs | 7 |
| da | 6 |
| pl | 5 |
| sq | 4 |
| el | 4 |
| he | 3 |
| zh-cn | 2 |
| th | 1 |
| bg | 1 |
| ar | 1 | | [
-0.5436922311782837,
-0.2298395186662674,
0.26920127868652344,
0.9313919544219971,
-0.1531848907470703,
0.26017892360687256,
-0.2757697105407715,
-0.3256393373012543,
0.511759877204895,
1.358842372894287,
-1.2133055925369263,
-0.960709273815155,
-0.8669556379318237,
0.5326979756355286,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
M-A-D/Mixed-Arabic-Dataset-Main | M-A-D | 2023-10-06T17:56:33Z | 28 | 3 | null | [
"task_categories:conversational",
"task_categories:text-generation",
"task_categories:text2text-generation",
"task_categories:translation",
"task_categories:summarization",
"language:ar",
"region:us"
] | 2023-10-06T17:56:33Z | 2023-09-25T10:52:11.000Z | 2023-09-25T10:52:11 | ---
language:
- ar
task_categories:
- conversational
- text-generation
- text2text-generation
- translation
- summarization
pretty_name: MAD
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: GenId
dtype: int64
- name: SubId
dtype: int64
- name: DatasetName
dtype: string
- name: DatasetLink
dtype: string
- name: Text
dtype: string
- name: MetaData
struct:
- name: AboutAuthor
dtype: string
- name: AboutBook
dtype: string
- name: Author
dtype: string
- name: AuthorName
dtype: string
- name: BookLink
dtype: string
- name: BookName
dtype: string
- name: ChapterLink
dtype: string
- name: ChapterName
dtype: string
- name: Tags
dtype: float64
- name: __index_level_0__
dtype: float64
- name: created_date
dtype: string
- name: deleted
dtype: bool
- name: detoxify
dtype: 'null'
- name: emojis
struct:
- name: count
sequence: int32
- name: name
sequence: string
- name: id
dtype: string
- name: labels
struct:
- name: count
sequence: int32
- name: name
sequence: string
- name: value
sequence: float64
- name: lang
dtype: string
- name: message_id
dtype: string
- name: message_tree_id
dtype: string
- name: model_name
dtype: 'null'
- name: parent_id
dtype: string
- name: query_id
dtype: string
- name: rank
dtype: float64
- name: review_count
dtype: float64
- name: review_result
dtype: bool
- name: role
dtype: string
- name: synthetic
dtype: bool
- name: title
dtype: string
- name: tree_state
dtype: string
- name: url
dtype: string
- name: user_id
dtype: string
- name: ConcatenatedText
dtype: int64
- name: __index_level_0__
dtype: float64
splits:
- name: train
num_bytes: 1990497610
num_examples: 131393
download_size: 790648134
dataset_size: 1990497610
---
# Dataset Card for "Mixed-Arabic-Dataset"
## Mixed Arabic Datasets (MAD)
The Mixed Arabic Datasets (MAD) project provides a comprehensive collection of diverse Arabic-language datasets, sourced from various repositories, platforms, and domains. These datasets cover a wide range of text types, including books, articles, Wikipedia content, stories, and more.
### MAD Repo vs. MAD Main
#### MAD Repo
- **Versatility**: In the MAD Repository (MAD Repo), datasets are made available in their original, native form. Researchers and practitioners can selectively download specific datasets that align with their specific interests or requirements.
- **Independent Access**: Each dataset is self-contained, enabling users to work with individual datasets independently, allowing for focused analyses and experiments.
#### MAD Main or simply MAD
- **Unified Dataframe**: MAD Main represents a harmonized and unified dataframe, incorporating all datasets from the MAD Repository. It provides a seamless and consolidated view of the entire MAD collection, making it convenient for comprehensive analyses and applications.
- **Holistic Perspective**: Researchers can access a broad spectrum of Arabic-language content within a single dataframe, promoting holistic exploration and insights across diverse text sources.
### Why MAD Main?
- **Efficiency**: Working with MAD Main streamlines the data acquisition process by consolidating multiple datasets into one structured dataframe. This is particularly beneficial for large-scale projects or studies requiring diverse data sources.
- **Interoperability**: With MAD Main, the datasets are integrated into a standardized format, enhancing interoperability and compatibility with a wide range of data processing and analysis tools.
- **Meta-Analysis**: Researchers can conduct comprehensive analyses, such as cross-domain studies, trend analyses, or comparative studies, by leveraging the combined richness of all MAD datasets.
### Getting Started
- To access individual datasets in their original form, refer to the MAD Repository ([Link to MAD Repo](https://huggingface.co/datasets/M-A-D/Mixed-Arabic-Datasets-Repo)).
- For a unified view of all datasets, conveniently organized in a single dataframe, you are in the right place.
```python
from datasets import load_dataset
dataset = load_dataset("M-A-D/Mixed-Arabic-Dataset-Main")
```
### Join Us on Discord
For discussions, contributions, and community interactions, join us on Discord! [](https://discord.gg/2NpJ9JGm)
### How to Contribute
Want to contribute to the Mixed Arabic Datasets project? Follow our comprehensive guide on Google Colab for step-by-step instructions: [Contribution Guide](https://colab.research.google.com/drive/1w7_7lL6w7nM9DcDmTZe1Vfiwkio6SA-w?usp=sharing).
**Note**: If you'd like to test a contribution before submitting it, feel free to do so on the [MAD Test Dataset](https://huggingface.co/datasets/M-A-D/Mixed-Arabic-Dataset-test).
## Citation
```
@dataset{
title = {Mixed Arabic Datasets (MAD)},
author = {MAD Community},
howpublished = {Dataset},
url = {https://huggingface.co/datasets/M-A-D/Mixed-Arabic-Datasets-Repo},
year = {2023},
}
``` | [
-0.6333746314048767,
-0.5809771418571472,
-0.13783769309520721,
0.32546454668045044,
-0.23122510313987732,
0.3348468542098999,
-0.05636657029390335,
-0.2750006914138794,
0.40660518407821655,
0.22212892770767212,
-0.46584370732307434,
-0.9477024674415588,
-0.6604589223861694,
0.329419225454... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
SEACrowd/multilexnorm | SEACrowd | 2023-09-26T12:29:08Z | 28 | 0 | null | [
"language:ind",
"multilexnorm",
"region:us"
] | 2023-09-26T12:29:08Z | 2023-09-26T11:13:05.000Z | 2023-09-26T11:13:05 | ---
tags:
- multilexnorm
language:
- ind
---
# multilexnorm
MultiLexNorm is a new benchmark dataset for multilingual lexical normalization
covering 12 language variants;
this subset specifically targets the Indonesian-English variant.
## Dataset Usage
Run `pip install nusacrowd` before loading the dataset through HuggingFace's `load_dataset`.
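A hedged loading sketch; `trust_remote_code=True` is an assumption based on the card's note that loading depends on the `nusacrowd` package:
```python
# pip install nusacrowd datasets
from datasets import load_dataset

# Assumption: the repo ships a custom loading script that imports nusacrowd.
ds = load_dataset("SEACrowd/multilexnorm", trust_remote_code=True)
```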
## Citation
```
@inproceedings{multilexnorm,
    title = {MultiLexNorm: A Shared Task on Multilingual Lexical Normalization},
author = "van der Goot, Rob and Ramponi et al.",
booktitle = "Proceedings of the 7th Workshop on Noisy User-generated Text (W-NUT 2021)",
year = "2021",
publisher = "Association for Computational Linguistics",
address = "Punta Cana, Dominican Republic"
}
```
## License
CC-BY-NC-SA 4.0
## Homepage
[https://bitbucket.org/robvanderg/multilexnorm/src/master/](https://bitbucket.org/robvanderg/multilexnorm/src/master/)
### NusaCatalogue
For easy indexing and metadata: [https://indonlp.github.io/nusa-catalogue](https://indonlp.github.io/nusa-catalogue) | [
-0.5377999544143677,
-0.14906476438045502,
0.059653256088495255,
0.560161292552948,
-0.15787512063980103,
-0.029067762196063995,
-0.627038836479187,
-0.15433518588542938,
0.3969719707965851,
0.555078387260437,
-0.18659743666648865,
-0.7094143629074097,
-0.7306297421455383,
0.55454123020172... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
JeremiahZ/humaneval_x_llvm_wasm | JeremiahZ | 2023-09-29T00:04:36Z | 28 | 0 | null | [
"region:us"
] | 2023-09-29T00:04:36Z | 2023-09-29T00:04:31.000Z | 2023-09-29T00:04:31 | ---
dataset_info:
features:
- name: task_id
dtype: string
- name: prompt
dtype: string
- name: declaration
dtype: string
- name: canonical_solution
dtype: string
- name: test
dtype: string
- name: example_test
dtype: string
- name: llvm_ir
dtype: string
- name: wat
dtype: string
splits:
- name: test
num_bytes: 4945639
num_examples: 161
download_size: 1096385
dataset_size: 4945639
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
---
# Dataset Card for "humaneval_x_llvm_wasm"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.44151681661605835,
-0.159193217754364,
0.11025811731815338,
0.19995763897895813,
-0.4627504050731659,
0.04280577227473259,
0.2518852949142456,
-0.02384907379746437,
0.9150804877281189,
0.6802401542663574,
-0.8012567162513733,
-1.0199357271194458,
-0.5845341682434082,
-0.1822658330202102... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
qgyd2021/rlhf_reward_dataset | qgyd2021 | 2023-10-10T11:11:01Z | 28 | 9 | null | [
"task_categories:question-answering",
"task_categories:text-generation",
"size_categories:100M<n<1B",
"language:zh",
"language:en",
"license:apache-2.0",
"reward model",
"rlhf",
"arxiv:2204.05862",
"region:us"
] | 2023-10-10T11:11:01Z | 2023-09-30T03:23:01.000Z | 2023-09-30T03:23:01 | ---
license: apache-2.0
task_categories:
- question-answering
- text-generation
language:
- zh
- en
tags:
- reward model
- rlhf
size_categories:
- 100M<n<1B
---
## RLHF Reward Model Dataset
A reward model dataset.
The component datasets were collected from the web and organized as follows:
| Dataset | Language | Original data / project | Samples | Description | Alternative download |
| :--- | :---: | :---: | :---: | :---: | :---: |
| beyond | Chinese | [beyond/rlhf-reward-single-round-trans_chinese](https://huggingface.co/datasets/beyond/rlhf-reward-single-round-trans_chinese) | 24858 | | |
| helpful_and_harmless | Chinese | [dikw/hh_rlhf_cn](https://huggingface.co/datasets/dikw/hh_rlhf_cn) | harmless train: 42,394; harmless test: 2,304; helpful train: 43,722; helpful test: 2,346 | Helpful and harmless data released with the Anthropic paper [Training a Helpful and Harmless Assistant with Reinforcement Learning from Human Feedback](https://arxiv.org/abs/2204.05862), machine-translated into Chinese. | [Anthropic/hh-rlhf](https://huggingface.co/datasets/Anthropic/hh-rlhf) |
| zhihu_3k | Chinese | [liyucheng/zhihu_rlhf_3k](https://huggingface.co/datasets/liyucheng/zhihu_rlhf_3k) | 3460 | Zhihu Q&A pairs with user upvote counts; answer preference appears to be derived from upvote counts. | |
| SHP | English | [stanfordnlp/SHP](https://huggingface.co/datasets/stanfordnlp/SHP) | 385K | Covers 18 subdomains; preferences indicate which response is more helpful. | |
<details>
<summary>Reference data sources (expand to view)</summary>
<pre><code>
https://huggingface.co/datasets/ticoAg/rlhf_zh
https://huggingface.co/datasets/beyond/rlhf-reward-single-round-trans_chinese
https://huggingface.co/datasets/dikw/hh_rlhf_cn
https://huggingface.co/datasets/liyucheng/zhihu_rlhf_3k
</code></pre>
</details>
| [
-0.3048926591873169,
-0.643074095249176,
-0.07511572539806366,
0.3350818455219269,
-0.38833966851234436,
-0.4477697014808655,
-0.09056294709444046,
-0.693253219127655,
0.5780413150787354,
0.30107784271240234,
-1.0250827074050903,
-0.6303532123565674,
-0.4910812973976135,
0.1426858305931091... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
neelblabla/enron_labeled_emails_with_subjects-llama2-7b_finetuning | neelblabla | 2023-10-01T18:34:26Z | 28 | 1 | null | [
"task_categories:text-classification",
"size_categories:1K<n<10K",
"language:en",
"region:us"
] | 2023-10-01T18:34:26Z | 2023-09-30T15:40:14.000Z | 2023-09-30T15:40:14 | ---
task_categories:
- text-classification
language:
- en
pretty_name: enron(unprocessed)_labeled_prompts
size_categories:
- 1K<n<10K
--- | [
-0.1285335123538971,
-0.1861683875322342,
0.6529128551483154,
0.49436232447624207,
-0.19319400191307068,
0.23607441782951355,
0.36072009801864624,
0.05056373029947281,
0.5793656706809998,
0.7400146722793579,
-0.650810182094574,
-0.23784008622169495,
-0.7102247476577759,
-0.0478255338966846... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
minhtu0408/gdsc-model-dataset | minhtu0408 | 2023-11-14T10:01:21Z | 28 | 0 | null | [
"region:us"
] | 2023-11-14T10:01:21Z | 2023-10-05T11:49:45.000Z | 2023-10-05T11:49:45 | Entry not found | [
-0.32276472449302673,
-0.22568407654762268,
0.8622258901596069,
0.4346148371696472,
-0.5282984972000122,
0.7012965679168701,
0.7915717363357544,
0.07618629932403564,
0.7746022939682007,
0.2563222646713257,
-0.785281777381897,
-0.22573848068714142,
-0.9104482531547546,
0.5715669393539429,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
rouabelgacem/autotrain-data-nlp-bert-ner-testing | rouabelgacem | 2023-10-12T14:53:16Z | 28 | 0 | null | [
"region:us"
] | 2023-10-12T14:53:16Z | 2023-10-12T14:44:39.000Z | 2023-10-12T14:44:39 | Entry not found | [
-0.32276472449302673,
-0.22568407654762268,
0.8622258901596069,
0.4346148371696472,
-0.5282984972000122,
0.7012965679168701,
0.7915717363357544,
0.07618629932403564,
0.7746022939682007,
0.2563222646713257,
-0.785281777381897,
-0.22573848068714142,
-0.9104482531547546,
0.5715669393539429,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
hippocrates/MedNLI_train | hippocrates | 2023-10-18T19:47:44Z | 28 | 0 | null | [
"region:us"
] | 2023-10-18T19:47:44Z | 2023-10-12T15:46:06.000Z | 2023-10-12T15:46:06 | ---
dataset_info:
features:
- name: id
dtype: string
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 8375998
num_examples: 11232
- name: valid
num_bytes: 1054726
num_examples: 1395
- name: test
num_bytes: 1050034
num_examples: 1422
download_size: 3057999
dataset_size: 10480758
---
# Dataset Card for "MedNLI_train"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.5085327625274658,
0.055445000529289246,
0.18204030394554138,
0.07680079340934753,
-0.08627691119909286,
-0.14896799623966217,
0.1990634649991989,
-0.15364424884319305,
0.8636783361434937,
0.4135456085205078,
-1.0009633302688599,
-0.5598673820495605,
-0.46094003319740295,
-0.277931183576... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
lighteval/natural_questions_clean | lighteval | 2023-10-17T20:29:08Z | 28 | 0 | null | [
"region:us"
] | 2023-10-17T20:29:08Z | 2023-10-17T16:39:42.000Z | 2023-10-17T16:39:42 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
dataset_info:
features:
- name: id
dtype: string
- name: title
dtype: string
- name: document
dtype: string
- name: question
dtype: string
- name: long_answers
sequence: string
- name: short_answers
sequence: string
splits:
- name: train
num_bytes: 4346873866.211105
num_examples: 106926
- name: validation
num_bytes: 175230324.62247765
num_examples: 4289
download_size: 1406784865
dataset_size: 4522104190.833583
---
# Dataset Card for "natural_questions_clean"
Created by @thomwolf on the basis of https://huggingface.co/datasets/lighteval/natural_questions, removing the questions that come without short answers.
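A sketch of the kind of filter described above; this is not the published preprocessing code, and it assumes the source dataset exposes the same `short_answers` column as this one:
```python
from datasets import load_dataset

nq = load_dataset("lighteval/natural_questions", split="train")
# Keep only questions that come with at least one short answer.
nq_clean = nq.filter(lambda ex: len(ex["short_answers"]) > 0)
```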
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.8745608925819397,
-0.9609296321868896,
0.20879194140434265,
-0.1624538004398346,
-0.45509496331214905,
-0.20629577338695526,
-0.2605534493923187,
-0.6620656251907349,
0.8658750653266907,
0.7722594738006592,
-0.8775538206100464,
-0.47062161564826965,
-0.1791592389345169,
0.23926126956939... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
ppxscal/aminer-citation-graphv14-jaccard | ppxscal | 2023-10-24T01:56:10Z | 28 | 0 | null | [
"region:us"
] | 2023-10-24T01:56:10Z | 2023-10-23T14:13:25.000Z | 2023-10-23T14:13:25 | ---
# For reference on dataset card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/datasetcard.md?plain=1
# Doc / guide: https://huggingface.co/docs/hub/datasets-cards
{}
---
# Dataset Card for Dataset Name
<!-- Provide a quick summary of the dataset. -->
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
Contains text pairs from https://www.aminer.org/citation (v14). Similarity scores are calculated with the Jaccard index.
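For reference, the Jaccard index measures set overlap; a minimal sketch over whitespace token sets (the card does not specify the tokenization, so that part is an assumption):
```python
def jaccard(a: str, b: str) -> float:
    """Jaccard similarity |A ∩ B| / |A ∪ B| over whitespace token sets."""
    ta, tb = set(a.split()), set(b.split())
    return len(ta & tb) / len(ta | tb) if (ta | tb) else 0.0

print(jaccard("graph neural networks", "neural networks for graphs"))  # 0.4
```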
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] | [
-0.49012744426727295,
-0.48572415113449097,
0.16169412434101105,
0.21002379059791565,
-0.37142109870910645,
-0.18770582973957062,
-0.04764176160097122,
-0.6432647705078125,
0.5875018835067749,
0.7652541995048523,
-0.7544398307800293,
-0.9097355604171753,
-0.5350700616836548,
0.155589565634... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
fashxp/cars-description | fashxp | 2023-10-25T14:17:36Z | 28 | 0 | null | [
"region:us"
] | 2023-10-25T14:17:36Z | 2023-10-23T19:10:59.000Z | 2023-10-23T19:10:59 | ---
dataset_info:
features:
- name: Bodystyle
dtype: string
- name: Class
dtype: string
- name: Wheelbase
dtype: string
- name: Availability Type
dtype: string
- name: Production Year
dtype: string
- name: Power
dtype: string
- name: ID
dtype: string
- name: Cylinders
dtype: string
- name: Color
dtype: string
- name: Manufacturer
dtype: string
- name: Number Of Doors
dtype: string
- name: Milage
dtype: string
- name: Description
dtype: string
- name: Length
dtype: string
- name: Country
dtype: string
- name: Capacity
dtype: string
- name: Categories
dtype: string
- name: Engine Location
dtype: string
- name: Width
dtype: string
- name: Number Of Seats
dtype: string
- name: Name
dtype: string
- name: Condition
dtype: string
- name: Price in EUR
dtype: string
- name: Weight
dtype: string
- name: Object Type
dtype: string
- name: Cargo Capacity
dtype: string
- name: Wheel Drive
dtype: string
- name: Availability Pieces
dtype: string
- name: Prompt
dtype: string
splits:
- name: train
num_bytes: 323678
num_examples: 248
download_size: 114519
dataset_size: 323678
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "cars-description"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.7568616271018982,
-0.1303810030221939,
0.4093114733695984,
0.22019600868225098,
-0.24891327321529388,
0.0972425788640976,
0.03203465789556503,
-0.32844164967536926,
0.5966350436210632,
0.17502912878990173,
-0.8719015717506409,
-0.5656322240829468,
-0.3688215911388397,
-0.392810195684433... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
khederwaaOne/my_dataset | khederwaaOne | 2023-10-24T18:33:31Z | 28 | 0 | null | [
"region:us"
] | 2023-10-24T18:33:31Z | 2023-10-24T17:59:00.000Z | 2023-10-24T17:59:00 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622263669967651,
0.43461522459983826,
-0.52829909324646,
0.7012971639633179,
0.7915719747543335,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104475975036621,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
w95/databricks-dolly-15k-az | w95 | 2023-10-29T07:51:38Z | 28 | 0 | null | [
"task_categories:question-answering",
"task_categories:summarization",
"size_categories:1K<n<10K",
"language:az",
"license:cc-by-sa-3.0",
"arxiv:2203.02155",
"region:us"
] | 2023-10-29T07:51:38Z | 2023-10-29T07:43:06.000Z | 2023-10-29T07:43:06 | ---
license: cc-by-sa-3.0
task_categories:
- question-answering
- summarization
language:
- az
size_categories:
- 1K<n<10K
---
This dataset is a machine-translated version of [databricks-dolly-15k.jsonl](https://huggingface.co/datasets/databricks/databricks-dolly-15k) into Azerbaijani. Dataset size is 8k.
-----
# Summary
`databricks-dolly-15k` is an open source dataset of instruction-following records generated by thousands of Databricks employees in several
of the behavioral categories outlined in the [InstructGPT](https://arxiv.org/abs/2203.02155) paper, including brainstorming, classification,
closed QA, generation, information extraction, open QA, and summarization.
This dataset can be used for any purpose, whether academic or commercial, under the terms of the
[Creative Commons Attribution-ShareAlike 3.0 Unported License](https://creativecommons.org/licenses/by-sa/3.0/legalcode).
Supported Tasks:
- Training LLMs
- Synthetic Data Generation
- Data Augmentation
Languages: English
Version: 1.0 | [
0.09338925778865814,
-0.7657922506332397,
-0.009938729926943779,
0.6653185486793518,
-0.33416685461997986,
0.06567493081092834,
-0.09083867818117142,
-0.06872350722551346,
0.006306231953203678,
0.7745890021324158,
-0.908684253692627,
-0.843329668045044,
-0.4904543459415436,
0.3672083616256... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Kabatubare/medical-guanaco-3000 | Kabatubare | 2023-10-30T09:59:47Z | 28 | 1 | null | [
"language:en",
"license:unknown",
"healthcare",
"Q&A",
"NLP",
"dialogues",
"region:us"
] | 2023-10-30T09:59:47Z | 2023-10-29T15:49:46.000Z | 2023-10-29T15:49:46 | ---
title: Reduced Medical Q&A Dataset
language: en
license: unknown
tags:
- healthcare
- Q&A
- NLP
- dialogues
pretty_name: Medical Q&A Dataset
---
# Dataset Card for Reduced Medical Q&A Dataset
This dataset card provides comprehensive details about the Reduced Medical Q&A Dataset, a curated and balanced subset intended for healthcare dialogue and medical NLP research.
## Dataset Details
### Dataset Description
The Reduced Medical Q&A Dataset is derived from a specialized subset of the larger MedDialog collection. It focuses on healthcare dialogues between doctors and patients from sources like WebMD, Icliniq, HealthcareMagic, and HealthTap. The dataset contains approximately 3,000 rows and is intended for a variety of applications such as NLP research, healthcare chatbot development, and medical information retrieval.
- **Curated by:** Unknown (originally from MedDialog)
- **Funded by [optional]:** N/A
- **Shared by [optional]:** N/A
- **Language(s) (NLP):** English
- **License:** Unknown (assumed for educational/research use)
### Dataset Sources [optional]
- **Repository:** N/A
- **Paper [optional]:** N/A
- **Demo [optional]:** N/A
## Uses
### Direct Use
- NLP research in healthcare dialogues
- Development of healthcare question-answering systems
- Medical information retrieval
### Out-of-Scope Use
- Not a substitute for certified medical advice
- Exercise caution in critical healthcare applications
## Dataset Structure
Each entry in the dataset follows the structure: "### Human:\n[Human's text]\n\n### Assistant: [Assistant's text]"
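A minimal parsing sketch for one such entry. It assumes each entry is a single string in the format above; the exact whitespace handling and the example text are illustrative, not taken from the dataset:

```python
# Minimal sketch: split one entry of the form
# "### Human:\n...\n\n### Assistant: ..." into its two turns.
def split_entry(entry: str) -> tuple[str, str]:
    human_part, _, assistant_part = entry.partition("### Assistant:")
    human_text = human_part.replace("### Human:", "").strip()
    return human_text, assistant_part.strip()

# Hypothetical example for illustration only.
example = "### Human:\nWhat could cause a persistent cough?\n\n### Assistant: Common causes include infections and allergies."
print(split_entry(example))
```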
## Dataset Creation
### Curation Rationale
The dataset was curated to create a balanced set of medical Q&A pairs using keyword-based sampling to cover a wide range of medical topics.
### Source Data
#### Data Collection and Processing
The data is text-based, primarily in English, and was curated from the larger "Medical" dataset featuring dialogues from Icliniq, HealthcareMagic, and HealthTap.
#### Who are the source data producers?
The original data was produced by healthcare professionals and patients engaging in medical dialogues on platforms like Icliniq, HealthcareMagic, and HealthTap.
### Annotations [optional]
No additional annotations; the dataset is text-based.
## Bias, Risks, and Limitations
- The dataset is not a substitute for professional medical advice.
- It is designed for research and educational purposes only.
### Recommendations
Users should exercise caution and understand the limitations when using the dataset for critical healthcare applications.
## Citation [optional]
N/A
## Glossary [optional]
N/A
## More Information [optional]
N/A
## Dataset Card Authors [optional]
N/A
## Dataset Card Contact
N/A | [
-0.27335768938064575,
-0.7437918186187744,
0.27780643105506897,
-0.1995694935321808,
-0.2629483640193939,
-0.024709511548280716,
0.07231761515140533,
-0.2498636096715927,
0.6121847629547119,
0.7621784210205078,
-1.1383658647537231,
-0.8334278464317322,
-0.40659868717193604,
0.1604409515857... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Ryan20/qa_hotel_dataset | Ryan20 | 2023-10-31T11:32:14Z | 28 | 0 | null | [
"task_categories:question-answering",
"size_categories:n<1K",
"language:en",
"language:pt",
"license:openrail",
"region:us"
] | 2023-10-31T11:32:14Z | 2023-10-30T10:29:25.000Z | 2023-10-30T10:29:25 | ---
license: openrail
task_categories:
- question-answering
language:
- en
- pt
size_categories:
- n<1K
--- | [
-0.1285335123538971,
-0.1861683875322342,
0.6529128551483154,
0.49436232447624207,
-0.19319400191307068,
0.23607441782951355,
0.36072009801864624,
0.05056373029947281,
0.5793656706809998,
0.7400146722793579,
-0.650810182094574,
-0.23784008622169495,
-0.7102247476577759,
-0.0478255338966846... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
jmelsbach/leichte-sprache-definitionen | jmelsbach | 2023-10-30T15:08:24Z | 28 | 0 | null | [
"region:us"
] | 2023-10-30T15:08:24Z | 2023-10-30T15:08:20.000Z | 2023-10-30T15:08:20 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: title
dtype: string
- name: parsed_content
dtype: string
- name: id
dtype: int64
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 530344.0658114891
num_examples: 2868
- name: test
num_bytes: 132770.93418851087
num_examples: 718
download_size: 417716
dataset_size: 663115.0
---
# Dataset Card for "leichte-sprache-definitionen"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.7181417346000671,
-0.29131847620010376,
0.04863014817237854,
0.3761870563030243,
-0.3467704951763153,
-0.18940331041812897,
-0.024043064564466476,
-0.28233611583709717,
1.1042569875717163,
0.5420048236846924,
-0.7673881649971008,
-0.7955626845359802,
-0.7315640449523926,
-0.322521090507... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
kunishou/hh-rlhf-49k-ja-single-turn | kunishou | 2023-11-02T14:30:34Z | 28 | 0 | null | [
"license:mit",
"region:us"
] | 2023-11-02T14:30:34Z | 2023-10-31T17:47:50.000Z | 2023-10-31T17:47:50 | ---
license: mit
---
This dataset was created by automatically translating part of "Anthropic/hh-rlhf" into Japanese and selecting only single-turn conversations.
You can use this dataset for RLHF and DPO.
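A minimal loading sketch; the card does not document the column layout, so this only loads the data and inspects the schema (the "train" split name is an assumption):

```python
from datasets import load_dataset

# Minimal sketch: load the dataset and inspect its features.
# Assumption: the data lives in a "train" split.
ds = load_dataset("kunishou/hh-rlhf-49k-ja-single-turn", split="train")
print(ds.features)  # column names and types
print(ds[0])        # one example record
```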
hh-rlhf repository
https://github.com/anthropics/hh-rlhf
Anthropic/hh-rlhf
https://huggingface.co/datasets/Anthropic/hh-rlhf | [
-0.5173062682151794,
-0.9289660453796387,
0.5782462954521179,
0.24024510383605957,
-0.5644632577896118,
0.06642206013202667,
0.012910553254187107,
-0.6020025014877319,
0.8405015468597412,
0.9449174404144287,
-1.2688751220703125,
-0.6189240217208862,
-0.29845157265663147,
0.3985547125339508... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Trelis/openassistant-falcon | Trelis | 2023-11-01T08:46:17Z | 28 | 0 | null | [
"size_categories:1K<n<10k",
"language:en",
"language:es",
"language:ru",
"language:de",
"language:pl",
"language:th",
"language:vi",
"language:sv",
"language:bn",
"language:da",
"language:he",
"language:it",
"language:fa",
"language:sk",
"language:id",
"language:nb",
"language:el",... | 2023-11-01T08:46:17Z | 2023-11-01T08:38:05.000Z | 2023-11-01T08:38:05 | ---
license: apache-2.0
language:
- en
- es
- ru
- de
- pl
- th
- vi
- sv
- bn
- da
- he
- it
- fa
- sk
- id
- nb
- el
- nl
- hu
- eu
- zh
- eo
- ja
- ca
- cs
- bg
- fi
- pt
- tr
- ro
- ar
- uk
- gl
- fr
- ko
tags:
- human-feedback
- llama-2
size_categories:
- 1K<n<10k
pretty_name: Filtered OpenAssistant Conversations
---
# Chat Fine-tuning Dataset - OpenAssistant Falcon
This dataset allows for fine-tuning chat models using '\nHuman:' and '\nAssistant:' to wrap user and assistant messages.
It still uses <|endoftext|> as the EOS and BOS token, as per Falcon.
Preparation:
1. The dataset is cloned from [TimDettmers](https://huggingface.co/datasets/timdettmers/openassistant-guanaco), which itself is a subset of the Open Assistant dataset, which you can find [here](https://huggingface.co/datasets/OpenAssistant/oasst1/tree/main). This subset of the data only contains the highest-rated paths in the conversation tree, with a total of 9,846 samples.
1. The dataset was then filtered to (a minimal sketch of this step follows the list):
- replace instances of '### Human:' with '\nHuman:'
- replace instances of '### Assistant:' with '\nAssistant:'
- end assistant responses with <|endoftext|> (to encourage the model to emit <|endoftext|> when it has finished a response).
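A minimal sketch of that filtering, assuming each sample is a dict with a `text` field holding the raw string (the field name is an assumption); for brevity it appends the EOS token once per sample, whereas the actual preparation ends every assistant response with it:

```python
# Minimal sketch of the filtering step described above.
# Assumptions: samples are dicts with a "text" field (field name not
# documented here); EOS is appended once per sample for brevity.
def reformat_sample(sample: dict) -> dict:
    text = sample["text"]
    text = text.replace("### Human:", "\nHuman:")
    text = text.replace("### Assistant:", "\nAssistant:")
    if not text.endswith("<|endoftext|>"):
        text += "<|endoftext|>"  # Falcon EOS token
    return {"text": text}
```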
Details of the root dataset follow, copied from that repo:
# OpenAssistant Conversations Dataset (OASST1)
## Dataset Description
- **Homepage:** https://www.open-assistant.io/
- **Repository:** https://github.com/LAION-AI/Open-Assistant
- **Paper:** https://arxiv.org/abs/2304.07327
### Dataset Summary
In an effort to democratize research on large-scale alignment, we release OpenAssistant
Conversations (OASST1), a human-generated, human-annotated assistant-style conversation
corpus consisting of 161,443 messages in 35 different languages, annotated with 461,292
quality ratings, resulting in over 10,000 fully annotated conversation trees. The corpus
is a product of a worldwide crowd-sourcing effort involving over 13,500 volunteers.
Please refer to our [paper](https://arxiv.org/abs/2304.07327) for further details.
### Dataset Structure
This dataset contains message trees. Each message tree has an initial prompt message as the root node,
which can have multiple child messages as replies, and these child messages can have multiple replies.
All messages have a role property: this can either be "assistant" or "prompter". The roles in
conversation threads from prompt to leaf node strictly alternate between "prompter" and "assistant".
This version of the dataset contains data collected on the [open-assistant.io](https://open-assistant.io/) website until April 12 2023.
### JSON Example: Message
For readability, the following JSON examples are shown formatted with indentation on multiple lines.
Objects are stored without indentation (on single lines) in the actual jsonl files.
```json
{
"message_id": "218440fd-5317-4355-91dc-d001416df62b",
"parent_id": "13592dfb-a6f9-4748-a92c-32b34e239bb4",
"user_id": "8e95461f-5e94-4d8b-a2fb-d4717ce973e4",
"text": "It was the winter of 2035, and artificial intelligence (..)",
"role": "assistant",
"lang": "en",
"review_count": 3,
"review_result": true,
"deleted": false,
"rank": 0,
"synthetic": true,
"model_name": "oasst-sft-0_3000,max_new_tokens=400 (..)",
"labels": {
"spam": { "value": 0.0, "count": 3 },
"lang_mismatch": { "value": 0.0, "count": 3 },
"pii": { "value": 0.0, "count": 3 },
"not_appropriate": { "value": 0.0, "count": 3 },
"hate_speech": { "value": 0.0, "count": 3 },
"sexual_content": { "value": 0.0, "count": 3 },
"quality": { "value": 0.416, "count": 3 },
"toxicity": { "value": 0.16, "count": 3 },
"humor": { "value": 0.0, "count": 3 },
"creativity": { "value": 0.33, "count": 3 },
"violence": { "value": 0.16, "count": 3 }
}
}
```
### JSON Example: Conversation Tree
For readability, only a subset of the message properties is shown here.
```json
{
"message_tree_id": "14fbb664-a620-45ce-bee4-7c519b16a793",
"tree_state": "ready_for_export",
"prompt": {
"message_id": "14fbb664-a620-45ce-bee4-7c519b16a793",
"text": "Why can't we divide by 0? (..)",
"role": "prompter",
"lang": "en",
"replies": [
{
"message_id": "894d30b6-56b4-4605-a504-89dd15d4d1c8",
"text": "The reason we cannot divide by zero is because (..)",
"role": "assistant",
"lang": "en",
"replies": [
// ...
]
},
{
"message_id": "84d0913b-0fd9-4508-8ef5-205626a7039d",
"text": "The reason that the result of a division by zero is (..)",
"role": "assistant",
"lang": "en",
"replies": [
{
"message_id": "3352725e-f424-4e3b-a627-b6db831bdbaa",
"text": "Math is confusing. Like those weird Irrational (..)",
"role": "prompter",
"lang": "en",
"replies": [
{
"message_id": "f46207ca-3149-46e9-a466-9163d4ce499c",
"text": "Irrational numbers are simply numbers (..)",
"role": "assistant",
"lang": "en",
"replies": []
},
// ...
]
}
]
}
]
}
}
```
Please refer to [oasst-data](https://github.com/LAION-AI/Open-Assistant/tree/main/oasst-data) for
details about the data structure and Python code to read and write jsonl files containing oasst data objects.
If you would like to explore the dataset yourself you can find a
[`getting-started`](https://github.com/LAION-AI/Open-Assistant/blob/main/notebooks/openassistant-oasst1/getting-started.ipynb)
notebook in the `notebooks/openassistant-oasst1` folder of the [LAION-AI/Open-Assistant](https://github.com/LAION-AI/Open-Assistant)
github repository.
## Main Dataset Files
Conversation data is provided either as nested messages in trees (extension `.trees.jsonl.gz`)
or as a flat list (table) of messages (extension `.messages.jsonl.gz`).
### Ready For Export Trees
```
2023-04-12_oasst_ready.trees.jsonl.gz 10,364 trees with 88,838 total messages
2023-04-12_oasst_ready.messages.jsonl.gz 88,838 messages
```
Trees in `ready_for_export` state without spam and deleted messages including message labels.
The oasst_ready-trees file is usually sufficient for supervised fine-tuning (SFT) & reward model (RM) training.
### All Trees
```
2023-04-12_oasst_all.trees.jsonl.gz 66,497 trees with 161,443 total messages
2023-04-12_oasst_all.messages.jsonl.gz 161,443 messages
```
All trees, including those in states `prompt_lottery_waiting` (trees that consist of only one message, namely the initial prompt),
`aborted_low_grade` (trees that stopped growing because the messages had low quality), and `halted_by_moderator`.
### Supplemental Exports: Spam & Prompts
```
2023-04-12_oasst_spam.messages.jsonl.gz
```
These are messages which were deleted or have a negative review result (`"review_result": false`).
Besides low quality, a frequent reason for message deletion is a wrong language tag.
```
2023-04-12_oasst_prompts.messages.jsonl.gz
```
These are all the kept initial prompt messages with positive review result (no spam) of trees in `ready_for_export` or `prompt_lottery_waiting` state.
### Using the Huggingface Datasets
While HF datasets is ideal for tabular datasets, it is not a natural fit for nested data structures like the OpenAssistant conversation trees.
Nevertheless, we make all messages which can also be found in the file `2023-04-12_oasst_ready.trees.jsonl.gz` available in parquet as train/validation splits.
These are directly loadable by [Huggingface Datasets](https://pypi.org/project/datasets/).
To load the oasst1 train & validation splits use:
```python
from datasets import load_dataset
ds = load_dataset("OpenAssistant/oasst1")
train = ds['train'] # len(train)=84437 (95%)
val = ds['validation'] # len(val)=4401 (5%)
```
The messages appear in depth-first order of the message trees.
Full conversation trees can be reconstructed from the flat messages table by using the `parent_id`
and `message_id` properties to identify the parent-child relationship of messages. The `message_tree_id`
and `tree_state` properties (only present in flat messages files) can be used to find all messages of a message tree or to select trees by their state.
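A minimal sketch of that reconstruction, grouping children under their parents via the documented `parent_id`/`message_id` fields; treating a missing `parent_id` as "this message is a tree root" is an assumption:

```python
from collections import defaultdict
from datasets import load_dataset

# Minimal sketch: rebuild conversation trees from the flat messages table.
ds = load_dataset("OpenAssistant/oasst1", split="train")

children = defaultdict(list)
roots = []
for msg in ds:
    if msg["parent_id"] is None:  # assumption: roots have no parent_id
        roots.append(msg)
    else:
        children[msg["parent_id"]].append(msg)

def print_tree(msg, depth=0):
    # Indent each reply one level deeper than its parent.
    print("  " * depth + f"[{msg['role']}] {msg['text'][:60]}")
    for child in children[msg["message_id"]]:
        print_tree(child, depth + 1)

print_tree(roots[0])
```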
### Languages
OpenAssistant Conversations incorporates 35 different languages with a distribution of messages as follows:
**Languages with over 1000 messages**
- English: 71956
- Spanish: 43061
- Russian: 9089
- German: 5279
- Chinese: 4962
- French: 4251
- Thai: 3042
- Portuguese (Brazil): 2969
- Catalan: 2260
- Korean: 1553
- Ukrainian: 1352
- Italian: 1320
- Japanese: 1018
<details>
<summary><b>Languages with under 1000 messages</b></summary>
<ul>
<li>Vietnamese: 952</li>
<li>Basque: 947</li>
<li>Polish: 886</li>
<li>Hungarian: 811</li>
<li>Arabic: 666</li>
<li>Dutch: 628</li>
<li>Swedish: 512</li>
<li>Turkish: 454</li>
<li>Finnish: 386</li>
<li>Czech: 372</li>
<li>Danish: 358</li>
<li>Galician: 339</li>
<li>Hebrew: 255</li>
<li>Romanian: 200</li>
<li>Norwegian Bokmål: 133</li>
<li>Indonesian: 115</li>
<li>Bulgarian: 95</li>
<li>Bengali: 82</li>
<li>Persian: 72</li>
<li>Greek: 66</li>
<li>Esperanto: 59</li>
<li>Slovak: 19</li>
</ul>
</details>
## Contact
- Discord [Open Assistant Discord Server](https://ykilcher.com/open-assistant-discord)
- GitHub: [LAION-AI/Open-Assistant](https://github.com/LAION-AI/Open-Assistant)
- E-Mail: [open-assistant@laion.ai](mailto:open-assistant@laion.ai) | [
-0.31715619564056396,
-0.9120029211044312,
0.13934168219566345,
0.09632568061351776,
-0.042539387941360474,
0.07372946292161942,
-0.10937097668647766,
-0.2963812053203583,
0.32887619733810425,
0.3922536075115204,
-0.656379222869873,
-0.7776197791099548,
-0.5329810976982117,
0.0754507556557... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
MathiasFoster/whisper-v5-recordings | MathiasFoster | 2023-11-14T20:03:26Z | 28 | 0 | null | [
"region:us"
] | 2023-11-14T20:03:26Z | 2023-11-02T00:25:07.000Z | 2023-11-02T00:25:07 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: audio
dtype: audio
splits:
- name: train
num_bytes: 2527835918.0
num_examples: 733
download_size: 0
dataset_size: 2527835918.0
---
# Dataset Card for "whisper-v5-recordings"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.46267008781433105,
-0.01518557220697403,
0.3097703158855438,
0.34344950318336487,
-0.1255696564912796,
-0.038909975439310074,
0.2426535040140152,
-0.3556895852088928,
0.7790959477424622,
0.445148766040802,
-1.0456483364105225,
-0.9859660267829895,
-0.6368834376335144,
-0.417596399784088... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Intuit-GenSRF/all_english_datasets | Intuit-GenSRF | 2023-11-03T22:19:49Z | 28 | 0 | null | [
"region:us"
] | 2023-11-03T22:19:49Z | 2023-11-03T22:19:15.000Z | 2023-11-03T22:19:15 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: text
dtype: string
- name: labels
sequence: string
- name: encoded_labels
sequence: int64
- name: lang
dtype: string
- name: has_toxic
dtype: int64
- name: has_profane
dtype: int64
- name: has_insult
dtype: int64
- name: has_hate
dtype: int64
- name: has_threat
dtype: int64
- name: has_sexual
dtype: int64
- name: has_offensive
dtype: int64
- name: has_selfharm
dtype: int64
- name: has_harassment
dtype: int64
splits:
- name: train
num_bytes: 1498751715
num_examples: 2921884
download_size: 616223055
dataset_size: 1498751715
---
# Dataset Card for "all_english_datasets"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.4342036545276642,
-0.22532202303409576,
0.2996973991394043,
0.365907222032547,
-0.22087910771369934,
0.03311696648597717,
0.07096630334854126,
-0.20213016867637634,
1.1580445766448975,
0.42976099252700806,
-0.7166595458984375,
-0.9913644790649414,
-0.7430455088615417,
0.0284338481724262... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
baohuynhbk14/test-comment | baohuynhbk14 | 2023-11-04T16:40:42Z | 28 | 0 | null | [
"region:us"
] | 2023-11-04T16:40:42Z | 2023-11-04T16:39:45.000Z | 2023-11-04T16:39:45 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622263669967651,
0.43461522459983826,
-0.52829909324646,
0.7012971639633179,
0.7915719747543335,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104475975036621,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
shaaz10/GST | shaaz10 | 2023-11-07T15:37:01Z | 28 | 0 | null | [
"license:unknown",
"region:us"
] | 2023-11-07T15:37:01Z | 2023-11-05T21:03:12.000Z | 2023-11-05T21:03:12 | ---
license: unknown
---
| [
-0.12853392958641052,
-0.18616779148578644,
0.6529127955436707,
0.49436280131340027,
-0.19319361448287964,
0.23607419431209564,
0.36072003841400146,
0.050563063472509384,
0.579365611076355,
0.7400140762329102,
-0.6508104205131531,
-0.23783954977989197,
-0.7102249264717102,
-0.0478260256350... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
minnnnn/test_11_07_5 | minnnnn | 2023-11-07T03:33:08Z | 28 | 0 | null | [
"region:us"
] | 2023-11-07T03:33:08Z | 2023-11-07T02:55:53.000Z | 2023-11-07T02:55:53 | Entry not found | [
-0.32276472449302673,
-0.22568407654762268,
0.8622258901596069,
0.4346148371696472,
-0.5282984972000122,
0.7012965679168701,
0.7915717363357544,
0.07618629932403564,
0.7746022939682007,
0.2563222646713257,
-0.785281777381897,
-0.22573848068714142,
-0.9104482531547546,
0.5715669393539429,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
shaggbagg/Material_prototype | shaggbagg | 2023-11-09T06:20:42Z | 28 | 0 | null | [
"license:unknown",
"region:us"
] | 2023-11-09T06:20:42Z | 2023-11-09T06:15:05.000Z | 2023-11-09T06:15:05 | ---
license: unknown
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': Wall
'1': Wood
'2': asphalt
'3': brick
'4': concrete
'5': fabric
'6': floor
'7': marble
'8': metal
'9': plaster
'10': roof
'11': stone
'12': tile
splits:
- name: train
num_bytes: 223726273.0
num_examples: 226
download_size: 223740037
dataset_size: 223726273.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
| [
-0.12853392958641052,
-0.18616779148578644,
0.6529127955436707,
0.49436280131340027,
-0.19319361448287964,
0.23607419431209564,
0.36072003841400146,
0.050563063472509384,
0.579365611076355,
0.7400140762329102,
-0.6508104205131531,
-0.23783954977989197,
-0.7102249264717102,
-0.0478260256350... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
pavement/tsla_stock_price_real | pavement | 2023-11-09T14:11:47Z | 28 | 0 | null | [
"region:us"
] | 2023-11-09T14:11:47Z | 2023-11-09T13:47:25.000Z | 2023-11-09T13:47:25 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
dataset_info:
features:
- name: start
dtype: string
- name: target
sequence: float64
- name: feat_static_cat
sequence: int64
- name: feat_dynamic_real
dtype: 'null'
- name: item_id
dtype: string
splits:
- name: train
num_bytes: 317713
num_examples: 3356
- name: validation
num_bytes: 344561
num_examples: 3356
- name: test
num_bytes: 371409
num_examples: 3356
download_size: 320770
dataset_size: 1033683
---
# Dataset Card for "tsla_stock_price_real"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.06881552934646606,
-0.40343353152275085,
0.04707706347107887,
0.060646653175354004,
-0.4081173539161682,
0.4524158239364624,
0.46766915917396545,
-0.16823138296604156,
0.972294807434082,
0.043615326285362244,
-0.5860229730606079,
-0.7042595744132996,
-0.36503657698631287,
-0.56236785650... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
huuuyeah/meetingbank | huuuyeah | 2023-11-10T04:52:54Z | 28 | 1 | null | [
"task_categories:summarization",
"task_categories:text-generation",
"size_categories:10M<n<100M",
"language:en",
"license:cc-by-nc-sa-4.0",
"municipal",
"meeting",
"transcripts",
"benchmark",
"long-context",
"arxiv:2305.17529",
"region:us"
] | 2023-11-10T04:52:54Z | 2023-11-10T04:02:31.000Z | 2023-11-10T04:02:31 | ---
license: cc-by-nc-sa-4.0
task_categories:
- summarization
- text-generation
language:
- en
tags:
- municipal
- meeting
- transcripts
- benchmark
- long-context
size_categories:
- 10M<n<100M
---
## Overview
MeetingBank is a benchmark dataset created from the city councils of 6 major U.S. cities to supplement existing datasets. It contains 1,366 meetings with over 3,579 hours of video, as well as transcripts, PDF documents of meeting minutes, agenda, and other metadata. On average, a council meeting is 2.6 hours long and its transcript contains over 28k tokens, making it a valuable testbed for meeting summarizers and for extracting structure from meeting videos. The dataset contains 6,892 segment-level summarization instances for training and evaluating summarizers.
## Data Structure
```json
{
"id": 0,
"uid": "SeattleCityCouncil_06132016_Res 31669",
"summary": "A RESOLUTION encouraging as a best practice ...",
"transcript": "The report of the Civil Rights, Utilities, Economic ..."
}
```
## Usage
```python
from datasets import load_dataset
meetingbank = load_dataset("huuuyeah/meetingbank")
train_data = meetingbank['train']
test_data = meetingbank['test']
val_data = meetingbank['validation']
def generator(data_split):
    # Yield (id, summary, transcript) tuples from a split.
    for instance in data_split:
        yield instance['id'], instance['summary'], instance['transcript']
```
## Acknowledgement
Please cite the following paper in work that makes use of this dataset:
[MeetingBank: A Benchmark Dataset for Meeting Summarization](https://arxiv.org/abs/2305.17529)\
Yebowen Hu, Tim Ganter, Hanieh Deilamsalehy, Franck Dernoncourt, Hassan Foroosh, Fei Liu\
In main conference of Association for Computational Linguistics (ACL'23), Toronto, Canada.
## Bibtex
```
@inproceedings{hu-etal-2023-meetingbank,
title = "MeetingBank: A Benchmark Dataset for Meeting Summarization",
author = "Yebowen Hu and Tim Ganter and Hanieh Deilamsalehy and Franck Dernoncourt and Hassan Foroosh and Fei Liu",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (ACL)",
    month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
}
```
## Multi-media Resources
The MeetingBank dataset will be hosted at Zenodo. The audio files of each meeting will be hosted individually on Huggingface. All resources include meeting audio, transcripts, the MeetingBank main JSON file, summaries from 6 systems, and human annotations.
**Text & Audio**: [zenodo](https://zenodo.org/record/7989108), Huggingface([splits](https://huggingface.co/datasets/huuuyeah/meetingbank), [audio&transcripts](https://huggingface.co/datasets/huuuyeah/MeetingBank_Audio))
**Videos**: All meeting videos can be found at https://archive.org/
- [Alameda](https://archive.org/details/meetingbank-alameda), [Boston](https://archive.org/details/meetingbank-boston), [Denver](https://archive.org/details/meetingbank-denver), [Long Beach](https://archive.org/details/meetingbank-long-beach) ,[King County](https://archive.org/details/meetingbank-king-county), [Seattle](https://archive.org/details/meetingbank-seattle)
**Python Scripts**
Useful scripts and guidance can be found in github repo [MeetingBank_Utils](https://github.com/YebowenHu/MeetingBank-utils) | [
-0.6324355006217957,
-0.4113630950450897,
0.42277657985687256,
0.1990247666835785,
-0.18291686475276947,
-0.12650074064731598,
-0.44143709540367126,
-0.4917672574520111,
0.2705126404762268,
0.18506179749965668,
-0.6729837656021118,
-0.6165811419487,
-0.3988693356513977,
0.23781006038188934... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
ChanceFocus/flare-es-instruction-tuning | ChanceFocus | 2023-11-10T11:24:33Z | 28 | 0 | null | [
"region:us"
] | 2023-11-10T11:24:33Z | 2023-11-10T10:18:14.000Z | 2023-11-10T10:18:14 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: valid
path: data/valid-*
dataset_info:
features:
- name: id
dtype: string
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
splits:
- name: train
num_bytes: 41354500
num_examples: 14851
- name: valid
num_bytes: 6718150
num_examples: 2226
download_size: 23259291
dataset_size: 48072650
---
# Dataset Card for "flare-es-instruction-tuning"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.6665549278259277,
-0.35788238048553467,
-0.0052664875984191895,
0.26492762565612793,
0.004927322268486023,
0.10583649575710297,
0.053738001734018326,
-0.04579611122608185,
0.8066967725753784,
0.5322872996330261,
-1.0734590291976929,
-0.6592153310775757,
-0.3002292811870575,
-0.286988258... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
arieg/bw_spec_cls_4_00_noise_200 | arieg | 2023-11-12T15:47:56Z | 28 | 0 | null | [
"region:us"
] | 2023-11-12T15:47:56Z | 2023-11-12T15:47:51.000Z | 2023-11-12T15:47:51 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': '10'
'1': '140'
'2': '2'
'3': '5'
splits:
- name: train
num_bytes: 44730986.0
num_examples: 800
- name: test
num_bytes: 1122375.0
num_examples: 20
download_size: 24737574
dataset_size: 45853361.0
---
# Dataset Card for "bw_spec_cls_4_00_noise_200"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.6424547433853149,
-0.17590364813804626,
0.2887088358402252,
0.5374786853790283,
-0.18906095623970032,
-0.2339436411857605,
-0.03422011435031891,
-0.3095460534095764,
0.4961509704589844,
0.4173268973827362,
-0.97544264793396,
-0.7740710973739624,
-0.2604246735572815,
-0.10548937320709229... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
iamroot/mnli-mock-contrastive-axes-ii | iamroot | 2023-11-12T20:49:35Z | 28 | 0 | null | [
"region:us"
] | 2023-11-12T20:49:35Z | 2023-11-12T20:48:04.000Z | 2023-11-12T20:48:04 | ---
dataset_info:
features:
- name: label
dtype:
class_label:
names:
'0': entailment
'1': neutral
'2': contradiction
- name: text_a
dtype: string
- name: text_b
dtype: string
- name: prompt
dtype: string
- name: text_a_embedding
sequence: float32
- name: text_b_embedding
sequence: float32
- name: prompt_embedding
sequence: float32
splits:
- name: train
num_bytes: 2892065589
num_examples: 304513
download_size: 3435433919
dataset_size: 2892065589
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "mnli-mock-contrastive-axes-ii"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.5966132879257202,
-0.2541002333164215,
0.14226685464382172,
0.2989264130592346,
-0.338459849357605,
-0.10689644515514374,
0.5658560395240784,
-0.4097987413406372,
0.9118357300758362,
0.2653046250343323,
-0.8082115054130554,
-0.4883398115634918,
-0.5251927375793457,
-0.07778916507959366,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Arham-Imran/cityscape_final | Arham-Imran | 2023-11-14T22:35:46Z | 28 | 0 | null | [
"region:us"
] | 2023-11-14T22:35:46Z | 2023-11-14T20:31:49.000Z | 2023-11-14T20:31:49 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: val
path: data/val-*
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype: image
splits:
- name: train
num_bytes: 6896961361.3
num_examples: 2975
- name: val
num_bytes: 1197986021.0
num_examples: 500
download_size: 8226983719
dataset_size: 8094947382.3
---
# Dataset Card for "cityscape_final"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.7387396097183228,
-0.148595929145813,
0.563156008720398,
0.20258155465126038,
-0.11283454298973083,
-0.012390436604619026,
0.21531742811203003,
-0.1829955130815506,
0.6395412087440491,
0.7600515484809875,
-0.835074782371521,
-0.9387475252151489,
-0.35036328434944153,
-0.2520691454410553... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Liberty-L/preprocessed_race_for_multiple_choice | Liberty-L | 2023-11-15T05:05:01Z | 28 | 0 | null | [
"region:us"
] | 2023-11-15T05:05:01Z | 2023-11-15T05:00:46.000Z | 2023-11-15T05:00:46 | ---
dataset_info:
features:
- name: data_index_by_user
dtype: int64
- name: article
dtype: string
- name: answer
dtype: string
- name: question
dtype: string
- name: options
sequence: string
- name: input_ids
sequence:
sequence: int32
- name: token_type_ids
sequence:
sequence: int8
- name: attention_mask
sequence:
sequence: int8
- name: label
dtype: int64
splits:
- name: train
num_bytes: 683451159
num_examples: 62866
download_size: 143191809
dataset_size: 683451159
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "preprocessed_race_for_multiple_choice"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.7331507802009583,
-0.22344939410686493,
0.30752286314964294,
0.4240495562553406,
-0.3314950168132782,
0.29318690299987793,
0.07472187280654907,
-0.07299523800611496,
0.7858775854110718,
0.42407843470573425,
-0.863325834274292,
-0.6506527662277222,
-0.4372616708278656,
-0.086225949227809... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
lhallee/dl_binary_reg | lhallee | 2023-11-15T18:33:01Z | 28 | 0 | null | [
"region:us"
] | 2023-11-15T18:33:01Z | 2023-11-15T18:32:54.000Z | 2023-11-15T18:32:54 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: valid
path: data/valid-*
- split: test
path: data/test-*
dataset_info:
features:
- name: seqs
dtype: string
- name: labels
dtype: int64
splits:
- name: train
num_bytes: 2692075
num_examples: 5473
- name: valid
num_bytes: 653234
num_examples: 1335
- name: test
num_bytes: 905979
num_examples: 1729
download_size: 4189564
dataset_size: 4251288
---
# Dataset Card for "dl_binary_reg"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.5214572548866272,
-0.27396970987319946,
0.22136899828910828,
0.2277018427848816,
-0.39587604999542236,
0.08398794382810593,
0.38313400745391846,
-0.3577937185764313,
0.8035367131233215,
0.3774576187133789,
-0.8766206502914429,
-0.9241856932640076,
-0.559272289276123,
-0.1019005924463272... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
danielz01/neon-trees | danielz01 | 2023-11-15T23:00:33Z | 28 | 0 | null | [
"region:us"
] | 2023-11-15T23:00:33Z | 2023-11-15T22:59:29.000Z | 2023-11-15T22:59:29 | ---
dataset_info:
features:
- name: image
dtype: image
- name: path
dtype: string
- name: objects
struct:
- name: bbox
sequence:
sequence: float64
- name: categories
sequence: string
- name: count
dtype: int64
- name: height
dtype: int64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 659642403.0
num_examples: 20
- name: evaluation
num_bytes: 108197378.0
num_examples: 194
download_size: 766366868
dataset_size: 767839781.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: evaluation
path: data/evaluation-*
---
# Dataset Card for "neon-trees"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.4446810781955719,
-0.3091249465942383,
0.20381975173950195,
0.1647486537694931,
-0.18199226260185242,
0.28749707341194153,
0.3412522077560425,
-0.37044912576675415,
0.8048695921897888,
0.2627897560596466,
-0.8393145203590393,
-0.6872407793998718,
-0.2951660454273224,
-0.1243707388639450... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Alexandre-Numind/benchmark_Ex_IE_v2 | Alexandre-Numind | 2023-11-17T16:21:53Z | 28 | 0 | null | [
"region:us"
] | 2023-11-17T16:21:53Z | 2023-11-17T14:56:24.000Z | 2023-11-17T14:56:24 | Entry not found | [
-0.3227647542953491,
-0.22568407654762268,
0.8622258901596069,
0.4346148371696472,
-0.5282984972000122,
0.7012965083122253,
0.7915717959403992,
0.07618629932403564,
0.7746022343635559,
0.2563222348690033,
-0.785281777381897,
-0.22573848068714142,
-0.9104482531547546,
0.5715669393539429,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
bdsaglam/web_nlg-erx-instruction-alpaca | bdsaglam | 2023-11-18T17:35:21Z | 28 | 0 | null | [
"region:us"
] | 2023-11-18T17:35:21Z | 2023-11-18T16:49:48.000Z | 2023-11-18T16:49:48 | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 23796525
num_examples: 35426
- name: dev
num_bytes: 2994342
num_examples: 4464
download_size: 2858181
dataset_size: 26790867
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: dev
path: data/dev-*
---
| [
-0.12853392958641052,
-0.18616779148578644,
0.6529127955436707,
0.49436280131340027,
-0.19319361448287964,
0.23607419431209564,
0.36072003841400146,
0.050563063472509384,
0.579365611076355,
0.7400140762329102,
-0.6508104205131531,
-0.23783954977989197,
-0.7102249264717102,
-0.0478260256350... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
HossainRabby/LAMINI | HossainRabby | 2023-11-18T17:25:32Z | 28 | 0 | null | [
"region:us"
] | 2023-11-18T17:25:32Z | 2023-11-18T17:24:34.000Z | 2023-11-18T17:24:34 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: question
dtype: string
- name: answer
dtype: string
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int32
- name: labels
sequence: int64
splits:
- name: train
num_bytes: 2150284.5
num_examples: 1260
- name: test
num_bytes: 238920.5
num_examples: 140
download_size: 698665
dataset_size: 2389205.0
---
# Dataset Card for "LAMINI"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.7095048427581787,
-0.18311171233654022,
0.20377792418003082,
0.2690112292766571,
-0.23931312561035156,
-0.20554688572883606,
0.3173168897628784,
-0.1930798888206482,
0.8371350765228271,
0.7399729490280151,
-0.9012020826339722,
-0.6848544478416443,
-0.5358015894889832,
-0.424616128206253... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Chakshu/test-470446d9-2c78-4af9-80f1-fd17bf2c6275 | Chakshu | 2023-11-20T06:28:21Z | 28 | 0 | null | [
"region:us"
] | 2023-11-20T06:28:21Z | 2023-11-20T06:28:19.000Z | 2023-11-20T06:28:19 | Entry not found | [
-0.3227647542953491,
-0.22568407654762268,
0.8622258901596069,
0.4346148371696472,
-0.5282984972000122,
0.7012965083122253,
0.7915717959403992,
0.07618629932403564,
0.7746022343635559,
0.2563222348690033,
-0.785281777381897,
-0.22573848068714142,
-0.9104482531547546,
0.5715669393539429,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Aoschu/donut_model_data_for_german_invoice | Aoschu | 2023-11-20T23:17:44Z | 28 | 0 | null | [
"region:us"
] | 2023-11-20T23:17:44Z | 2023-11-20T16:14:35.000Z | 2023-11-20T16:14:35 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
dataset_info:
features:
- name: image
dtype: image
- name: ground_truth
dtype: string
splits:
- name: train
num_bytes: 12829172.0
num_examples: 97
- name: validation
num_bytes: 2062396.0
num_examples: 14
- name: test
num_bytes: 2719786.0
num_examples: 18
download_size: 13266362
dataset_size: 17611354.0
---
# Dataset Card for "donut_model_data_for_german_invoice"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.14420799911022186,
-0.27121421694755554,
0.24499686062335968,
0.03562995791435242,
-0.040137648582458496,
-0.0010281868744641542,
0.18724994361400604,
-0.0246100053191185,
0.5859398245811462,
0.6303642988204956,
-0.6899667382240295,
-0.7852811813354492,
-0.5524195432662964,
-0.411636501... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Doub7e/SDv2-count-Iterative | Doub7e | 2023-11-24T07:47:41Z | 28 | 0 | null | [
"region:us"
] | 2023-11-24T07:47:41Z | 2023-11-21T00:06:50.000Z | 2023-11-21T00:06:50 | ---
dataset_info:
features:
- name: image
dtype: image
- name: prompt
dtype: string
splits:
- name: train
num_bytes: 954941659.625
num_examples: 1035
download_size: 954988189
dataset_size: 954941659.625
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "DATASET_NAME"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.5209986567497253,
-0.34286344051361084,
0.14935459196567535,
0.17356285452842712,
-0.2989833950996399,
0.10035275667905807,
0.28421321511268616,
-0.04681788757443428,
0.9540290832519531,
0.3960830271244049,
-0.8680935502052307,
-0.7653208374977112,
-0.7826533317565918,
-0.15364809334278... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
xwjzds/pretrain_sts_long | xwjzds | 2023-11-24T22:08:25Z | 28 | 0 | null | [
"arxiv:2310.15296",
"region:us"
] | 2023-11-24T22:08:25Z | 2023-11-21T23:12:08.000Z | 2023-11-21T23:12:08 | ---
dataset_info:
features:
- name: input
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 9557417
num_examples: 38151
download_size: 6115013
dataset_size: 9557417
---
# Dataset Card for Sentence Paraphrase Collections

## Dataset Description

- **Repository:**
- **Paper:** [DeTiME: Diffusion-Enhanced Topic Modeling using Encoder-decoder based LLM](https://arxiv.org/abs/2310.15296)
- **Leaderboard:**
- **Point of Contact:** Weijie Xu

### Dataset Summary

Sentence_Paraphrase is a combination of sentence paraphrase tasks from various sources, such as paraphrasing with ChatGPT, Paraphrase Adversaries from Word Scrambling (PAWS), and the STS benchmark. We filtered out pairs that were detected as non-English, too short, or without a high similarity score.

| Category | Count |
| ---------- | ------ |
| Paraphrase | 223241 |

## Dataset Structure

### Data Instances

An example of the data is as follows:

{'input': 'U.S. prosecutors have arrested more than 130 individuals and have seized more than $17 million in a continuing crackdown on Internet fraud and abuse.', 'output': 'More than 130 people have been arrested and $17 million worth of property seized in an Internet fraud sweep announced Friday by three U.S. government agencies.'}

### Data Fields

The data fields are as follows:

- `input` and `output` are paraphrases of a sentence or paragraph.
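A minimal loading sketch based on the `input`/`output` string schema declared in this entry's metadata:

```python
from datasets import load_dataset

# Minimal sketch: load the paraphrase pairs and print one example.
# The repo id and the string-typed input/output fields come from the
# dataset_info metadata above.
ds = load_dataset("xwjzds/pretrain_sts_long", split="train")
pair = ds[0]
print(pair["input"])
print(pair["output"])
```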
## Dataset Creation

### Curation Rationale

[More Information Needed]

### Source Data

#### Initial Data Collection and Normalization

[More Information Needed]

#### Who are the source language producers?

[More Information Needed]

### Annotations

#### Annotation process

[More Information Needed]

#### Who are the annotators?

[More Information Needed]

### Personal and Sensitive Information

[More Information Needed]

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

[More Information Needed]

## Additional Information

### Dataset Curators

[More Information Needed]

### Licensing Information

The dataset is available under the Creative Commons NonCommercial license (CC BY-NC 4.0).

### Citation Information

@misc{xu2023detime,
  title={DeTiME: Diffusion-Enhanced Topic Modeling using Encoder-decoder based LLM},
  author={Weijie Xu and Wenxiang Hu and Fanyou Wu and Srinivasan Sengamedu},
  year={2023},
  eprint={2310.15296},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
} | [
-0.12994293868541718,
-0.9521583318710327,
0.3138667345046997,
0.15240433812141418,
-0.44390445947647095,
-0.22088120877742767,
0.0069707585498690605,
-0.02070636861026287,
0.31268197298049927,
0.9509537816047668,
-0.3543524444103241,
-0.6873332858085632,
-0.60820072889328,
0.1942054331302... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
jxie/aloi | jxie | 2023-11-22T07:07:31Z | 28 | 0 | null | [
"region:us"
] | 2023-11-22T07:07:31Z | 2023-11-22T07:07:26.000Z | 2023-11-22T07:07:26 | ---
dataset_info:
features:
- name: inputs
sequence: float64
- name: label
dtype: float64
splits:
- name: train
num_bytes: 71608320
num_examples: 69120
- name: val
num_bytes: 17902080
num_examples: 17280
- name: test
num_bytes: 22377600
num_examples: 21600
download_size: 4459430
dataset_size: 111888000
---
# Dataset Card for "aloi"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.5757444500923157,
-0.18006695806980133,
0.23674412071704865,
0.16721338033676147,
-0.21774107217788696,
-0.1575099229812622,
0.47701019048690796,
-0.38970378041267395,
1.016400694847107,
0.6275317072868347,
-0.7846189141273499,
-0.8585474491119385,
-0.6649026274681091,
-0.29743474721908... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
openerotica/erotica-analysis | openerotica | 2023-11-26T04:24:54Z | 28 | 1 | null | [
"license:apache-2.0",
"region:us"
] | 2023-11-26T04:24:54Z | 2023-11-23T18:47:00.000Z | 2023-11-23T18:47:00 | ---
license: apache-2.0
---
This dataset is roughly 27k examples of erotica stories which I've fed through GPT-3.5-turbo-16k to obtain a summary, writing prompt, and tags as a response. I've filtered out all the refusals and deleted a fair amount of "GPT-isms". I'd still like to go through this again to prune any remaining low-quality responses I've missed, but I think this is a good start. Most of the context size comes from the stories themselves, not the responses.
Please consider supporting my Patreon (https://www.patreon.com/openerotica). I'm only asking for about tree fiddy and it all goes toward helping me create more models and datasets. | [
-0.4256232678890228,
-0.4682794511318207,
0.6744462251663208,
0.36434540152549744,
-0.63872230052948,
-0.5503153800964355,
0.12835052609443665,
-0.4663088619709015,
0.5572518706321716,
0.657512366771698,
-0.7701842784881592,
-0.38606682419776917,
-0.4168190360069275,
0.42750823497772217,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
youngwoo3283/df_sentiment_chat | youngwoo3283 | 2023-11-24T07:12:36Z | 28 | 0 | null | [
"language:ko",
"region:us"
] | 2023-11-24T07:12:36Z | 2023-11-24T07:05:40.000Z | 2023-11-24T07:05:40 | ---
language:
- ko
---
### Data source: https://www.aihub.or.kr/aihubdata/data/view.do?currMenu=115&topMenu=100&dataSetSn=86
This dataset was built from the source data using only the human response 1 (사람응답1) and system response 1 (시스템 응답1) fields.
-0.16964460909366608,
-0.8690686225891113,
0.1375734806060791,
0.7127975821495056,
-0.5048506855964661,
-0.11615905910730362,
0.6210131049156189,
0.1952693611383438,
0.6796112060546875,
0.4231337308883667,
-0.42357921600341797,
-0.9207763671875,
-0.8053802847862244,
-0.10163925588130951,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
jsonifize/riddles_v1_stringified-jsonifize | jsonifize | 2023-11-24T14:08:18Z | 28 | 0 | null | [
"region:us"
] | 2023-11-24T14:08:18Z | 2023-11-24T14:08:18.000Z | 2023-11-24T14:08:18 | Entry not found | [
-0.32276472449302673,
-0.22568407654762268,
0.8622258901596069,
0.4346148371696472,
-0.5282984972000122,
0.7012965679168701,
0.7915717363357544,
0.07618629932403564,
0.7746022939682007,
0.2563222646713257,
-0.785281777381897,
-0.22573848068714142,
-0.9104482531547546,
0.5715669393539429,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
jsonifize/rlhf-reward-datasets_stringified-jsonifize | jsonifize | 2023-11-24T14:08:24Z | 28 | 0 | null | [
"region:us"
] | 2023-11-24T14:08:24Z | 2023-11-24T14:08:19.000Z | 2023-11-24T14:08:19 | Entry not found | [
-0.32276472449302673,
-0.22568407654762268,
0.8622258901596069,
0.4346148371696472,
-0.5282984972000122,
0.7012965679168701,
0.7915717363357544,
0.07618629932403564,
0.7746022939682007,
0.2563222646713257,
-0.785281777381897,
-0.22573848068714142,
-0.9104482531547546,
0.5715669393539429,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
jsonifize/sharegptseries_stringified-jsonifize | jsonifize | 2023-11-24T14:08:27Z | 28 | 0 | null | [
"region:us"
] | 2023-11-24T14:08:27Z | 2023-11-24T14:08:24.000Z | 2023-11-24T14:08:24 | Entry not found | [
-0.3227649927139282,
-0.225684255361557,
0.862226128578186,
0.43461498618125916,
-0.5282987952232361,
0.7012963891029358,
0.7915717363357544,
0.07618629932403564,
0.7746025919914246,
0.2563219666481018,
-0.7852816581726074,
-0.2257382869720459,
-0.9104480743408203,
0.5715669393539429,
-0... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
jsonifize/WizardLM_evol_instruct_V2_196k_unfiltered_merged_split_stringified-jsonifize | jsonifize | 2023-11-24T14:09:45Z | 28 | 0 | null | [
"region:us"
] | 2023-11-24T14:09:45Z | 2023-11-24T14:09:20.000Z | 2023-11-24T14:09:20 | Entry not found | [
-0.3227649927139282,
-0.225684255361557,
0.862226128578186,
0.43461498618125916,
-0.5282987952232361,
0.7012963891029358,
0.7915717363357544,
0.07618629932403564,
0.7746025919914246,
0.2563219666481018,
-0.7852816581726074,
-0.2257382869720459,
-0.9104480743408203,
0.5715669393539429,
-0... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
zelalt/scientific-papers-3.5-withprompt | zelalt | 2023-11-25T21:37:06Z | 28 | 0 | null | [
"region:us"
] | 2023-11-25T21:37:06Z | 2023-11-25T21:37:02.000Z | 2023-11-25T21:37:02 | ---
dataset_info:
features:
- name: id
dtype: string
- name: authors
dtype: string
- name: title
dtype: string
- name: abstract
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 4543858
num_examples: 3499
download_size: 2831084
dataset_size: 4543858
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
| [
-0.12853367626667023,
-0.18616794049739838,
0.6529126763343811,
0.4943627417087555,
-0.19319313764572144,
0.23607443273067474,
0.36071979999542236,
0.05056338757276535,
0.5793654322624207,
0.7400138974189758,
-0.6508103013038635,
-0.23783987760543823,
-0.710224986076355,
-0.047825977206230... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
ByteSized/Parkside-Instruct | ByteSized | 2023-11-27T12:14:06Z | 28 | 1 | null | [
"license:mit",
"region:us"
] | 2023-11-27T12:14:06Z | 2023-11-27T12:12:14.000Z | 2023-11-27T12:12:14 | ---
license: mit
---
| [
-0.12853367626667023,
-0.18616794049739838,
0.6529126763343811,
0.4943627417087555,
-0.19319313764572144,
0.23607443273067474,
0.36071979999542236,
0.05056338757276535,
0.5793654322624207,
0.7400138974189758,
-0.6508103013038635,
-0.23783987760543823,
-0.710224986076355,
-0.047825977206230... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
piEsposito/squad_20_ptbr | piEsposito | 2021-02-05T16:05:59Z | 27 | 3 | null | [
"region:us"
] | 2021-02-05T16:05:59Z | 2022-03-02T23:29:22.000Z | 2022-03-02T23:29:22 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622263669967651,
0.43461522459983826,
-0.52829909324646,
0.7012971639633179,
0.7915719747543335,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104475975036621,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
projecte-aina/wnli-ca | projecte-aina | 2023-09-13T12:42:10Z | 27 | 1 | null | [
"task_categories:text-classification",
"task_ids:natural-language-inference",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:unknown",
"source_datasets:extended|glue",
"language:ca",
"license:cc-by-4.0",
"region:us"
] | 2023-09-13T12:42:10Z | 2022-03-02T23:29:22.000Z | 2022-03-02T23:29:22 | ---
YAML tags:
annotations_creators:
- expert-generated
language_creators:
- found
language:
- ca
license:
- cc-by-4.0
multilinguality:
- monolingual
pretty_name: wnli-ca
size_categories:
- unknown
source_datasets:
- extended|glue
task_categories:
- text-classification
task_ids:
- natural-language-inference
---
# WNLI-ca
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Website:** https://cs.nyu.edu/~davise/papers/WinogradSchemas/WS.html
- **Point of Contact:** [Carlos Rodríguez-Penagos](carlos.rodriguez1@bsc.es) and [Carme Armentano-Oller](carme.armentano@bsc.es)
### Dataset Summary
"A Winograd schema is a pair of sentences that differ in only one or two words and that contain an ambiguity that is resolved in opposite ways in the two sentences and requires the use of world knowledge and reasoning for its resolution. The schema takes its name from Terry Winograd." Source: [The Winograd Schema Challenge](https://cs.nyu.edu/~davise/papers/WinogradSchemas/WS.html).
The [Winograd NLI dataset](https://dl.fbaipublicfiles.com/glue/data/WNLI.zip) presents 855 sentence pairs, in which the first sentence contains an ambiguity and the second one a possible interpretation of it. The label indicates if the interpretation is correct (1) or not (0).
This dataset is a professional translation into Catalan of [Winograd NLI dataset](https://dl.fbaipublicfiles.com/glue/data/WNLI.zip) as published in [GLUE Benchmark](https://gluebenchmark.com/tasks).
Both the original dataset and this translation are licenced under a [Creative Commons Attribution 4.0 International License](https://creativecommons.org/licenses/by/4.0/).
### Supported Tasks and Leaderboards
Textual entailment, Text classification, Language Model.
### Languages
The dataset is in Catalan (`ca-ES`)
## Dataset Structure
### Data Instances
Three CSV files, one per split (a loading sketch is given after the Data Splits section).
### Data Fields
- index
- sentence 1: first sentence of the pair
- sentence 2: second sentence of the pair
- label: relation between the two sentences:
* 0: the second sentence does not entail a correct interpretation of the first one (neutral)
* 1: the second sentence entails a correct interpretation of the first one (entailment)
### Example
| index | sentence 1 | sentence 2 | label |
| ------- |----------- | --------- | ----- |
| 0 | Vaig clavar una agulla en una pastanaga. Quan la vaig treure, tenia un forat. | La pastanaga tenia un forat. | 1 |
| 1 | En Joan no podia veure l’escenari amb en Guillem davant seu perquè és molt baix. | En Joan és molt baix. | 1 |
| 2 | Els policies van arrestar tots els membres de la banda. Volien aturar el tràfic de drogues del barri. | Els policies volien aturar el tràfic de drogues del barri. | 1 |
| 3 | L’Esteve segueix els passos d’en Frederic en tot. L’influencia moltíssim. | L’Esteve l’influencia moltíssim. | 0 |
### Data Splits
- wnli-train-ca.csv: 636
- wnli-dev-ca.csv: 72
- wnli-test-shuffled-ca.csv: 147
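For illustration, here is a minimal pandas sketch for reading the three splits. The file names come from the list above; the exact column headers and separator are assumptions based on the Data Fields section.

```python
import pandas as pd

# Assumed file names (from the Data Splits list) and columns (from Data Fields).
splits = {
    "train": "wnli-train-ca.csv",
    "dev": "wnli-dev-ca.csv",
    "test": "wnli-test-shuffled-ca.csv",
}
data = {name: pd.read_csv(path) for name, path in splits.items()}

for name, df in data.items():
    # Expected sizes: train 636, dev 72, test 147.
    print(name, len(df), df.columns.tolist())
```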
## Dataset Creation
### Curation Rationale
We translated this dataset to contribute to the development of language models in Catalan, a low-resource language, and to allow inter-lingual comparisons.
### Source Data
- [GLUE Benchmark site](https://gluebenchmark.com)
#### Initial Data Collection and Normalization
This is a professional translation of [WNLI dataset](https://cs.nyu.edu/~davise/papers/WinogradSchemas/WS.html) into Catalan, commissioned by BSC TeMU within the [Projecte AINA](https://politiquesdigitals.gencat.cat/ca/economia/catalonia-ai/aina/).
For more information on how the Winograd NLI dataset was created, visit the webpage [The Winograd Schema Challenge](https://cs.nyu.edu/~davise/papers/WinogradSchemas/WS.html).
#### Who are the source language producers?
For more information on how the Winograd NLI dataset was created, visit the webpage [The Winograd Schema Challenge](https://cs.nyu.edu/~davise/papers/WinogradSchemas/WS.html).
### Annotations
#### Annotation process
We commissioned a professional translation of the [WNLI dataset](https://cs.nyu.edu/~davise/papers/WinogradSchemas/WS.html) into Catalan.
#### Who are the annotators?
The translation was commissioned from a professional translator.
### Personal and Sensitive Information
No personal or sensitive information included.
## Considerations for Using the Data
### Social Impact of Dataset
This dataset contributes to the development of language models in Catalan, a low-resource language.
### Discussion of Biases
[N/A]
### Other Known Limitations
[N/A]
## Additional Information
### Dataset Curators
Text Mining Unit (TeMU) at the Barcelona Supercomputing Center (bsc-temu@bsc.es).
This work was funded by the [Departament de la Vicepresidència i de Polítiques Digitals i Territori de la Generalitat de Catalunya](https://politiquesdigitals.gencat.cat/ca/inici/index.html#googtrans(ca|en) within the framework of [Projecte AINA](https://politiquesdigitals.gencat.cat/ca/economia/catalonia-ai/aina).
### Licensing Information
This work is licensed under a <a rel="license" href="https://creativecommons.org/licenses/by/4.0/">CC Attribution 4.0 International License</a>.
### Contributions
[N/A]
| [
…truncated embedding vector…
ranpox/xfund | ranpox | 2021-09-08T11:15:02Z | 27 | 3 | null | [
"region:us"
] | 2021-09-08T11:15:02Z | 2022-03-02T23:29:22.000Z | 2022-03-02T23:29:22 | Entry not found | [
…truncated embedding vector…
sentence-transformers/embedding-training-data | sentence-transformers | 2021-10-17T17:49:20Z | 27 | 56 | null | [
"region:us"
] | 2021-10-17T17:49:20Z | 2022-03-02T23:29:22.000Z | 2022-03-02T23:29:22 | # Training Data for Text Embedding Models
This repository contains training files to train text embedding models, e.g. using [sentence-transformers](https://www.SBERT.net).
## Data Format
All files are in `jsonl.gz` format: each line contains a JSON object that represents one training example.
The JSON objects can come in different formats (a parsing sketch follows the list):
- **Pairs:** `["text1", "text2"]` - This is a positive pair that should be close in vector space.
- **Triplets:** `["anchor", "positive", "negative"]` - This is a triplet: The `positive` text should be close to the `anchor`, while the `negative` text should be distant to the `anchor`.
- **Sets:** `{"set": ["text1", "text2", ...]}` A set of texts describing the same thing, e.g. different paraphrases of the same question, different captions for the same image. Any combination of the elements is considered as a positive pair.
- **Query-Pairs:** `{"query": "text", "pos": ["text1", "text2", ...]}` A query together with a set of positive texts. Can be formed to a pair `["query", "positive"]` by randomly selecting a text from `pos`.
- **Query-Triplets:** `{"query": "text", "pos": ["text1", "text2", ...], "neg": ["text1", "text2", ...]}` A query together with a set of positive texts and negative texts. Can be formed to a triplet `["query", "positive", "negative"]` by randomly selecting a text from `pos` and `neg`.
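For concreteness, here is a minimal sketch of how these line formats could be turned into `(anchor, positive)` training pairs. The field names follow the descriptions above; handling of the negatives in triplet and query-triplet lines is deliberately omitted.

```python
import gzip
import json
import random

def training_pairs(path):
    """Yield (anchor, positive) text pairs from one jsonl.gz training file."""
    with gzip.open(path, "rt", encoding="utf-8") as f:
        for line in f:
            obj = json.loads(line)
            if isinstance(obj, list):        # pairs or triplets
                yield obj[0], obj[1]
            elif "set" in obj:               # sets: any two texts form a positive pair
                yield tuple(random.sample(obj["set"], 2))
            elif "query" in obj:             # query-pairs / query-triplets
                yield obj["query"], random.choice(obj["pos"])
```

Pairs produced this way can, for example, be wrapped in `sentence_transformers.InputExample` objects and trained with `MultipleNegativesRankingLoss`, as in the benchmark described below.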
## Available Datasets
**Note: I'm currently in the process of uploading the files. Please check again next week to get the full list of datasets.**
We measure the performance for each training dataset by training the [nreimers/MiniLM-L6-H384-uncased](https://huggingface.co/nreimers/MiniLM-L6-H384-uncased) model on it with [MultipleNegativesRankingLoss](https://www.sbert.net/docs/package_reference/losses.html#multiplenegativesrankingloss), a batch size of 256, for 2000 training steps. The performance is then averaged across 14 sentence embedding benchmark datasets from diverse domains (Reddit, Twitter, News, Publications, E-Mails, ...).
| Dataset | Description | Size (#Lines) | Performance | Reference |
| --- | --- | :---: | :---: | --- |
| [gooaq_pairs.jsonl.gz](https://huggingface.co/datasets/sentence-transformers/embedding-training-data/resolve/main/gooaq_pairs.jsonl.gz) | (Question, Answer)-Pairs from Google auto suggest | 3,012,496 | 59.06 | [GooAQ](https://github.com/allenai/gooaq)
| [yahoo_answers_title_answer.jsonl.gz](https://huggingface.co/datasets/sentence-transformers/embedding-training-data/resolve/main/yahoo_answers_title_answer.jsonl.gz) | (Title, Answer) pairs from Yahoo Answers | 1,198,260 | 58.65 | [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset)
| [msmarco-triplets.jsonl.gz](https://huggingface.co/datasets/sentence-transformers/embedding-training-data/resolve/main/msmarco-triplets.jsonl.gz) | (Question, Answer, Negative)-Triplets from MS MARCO Passages dataset | 499,184 | 58.76 | [MS MARCO Passages](https://github.com/microsoft/MSMARCO-Passage-Ranking)
| [stackexchange_duplicate_questions_title_title.jsonl.gz](https://huggingface.co/datasets/sentence-transformers/embedding-training-data/resolve/main/stackexchange_duplicate_questions_title_title.jsonl.gz) | (Title, Title) pairs of duplicate questions from StackExchange | 304,525 | 58.47 | [Stack Exchange Data API](https://data.stackexchange.com/apple/query/fork/1456963)
| [eli5_question_answer.jsonl.gz](https://huggingface.co/datasets/sentence-transformers/embedding-training-data/resolve/main/eli5_question_answer.jsonl.gz) | (Question, Answer)-Pairs from ELI5 dataset | 325,475 | 58.24 | [ELI5](https://huggingface.co/datasets/eli5)
| [yahoo_answers_title_question.jsonl.gz](https://huggingface.co/datasets/sentence-transformers/embedding-training-data/resolve/main/yahoo_answers_title_question.jsonl.gz) | (Title, Question_Body) pairs from Yahoo Answers | 659,896 | 58.05 | [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset)
| [squad_pairs.jsonl.gz](https://huggingface.co/datasets/sentence-transformers/embedding-training-data/resolve/main/squad_pairs.jsonl.gz) | (Question, Answer_Passage) Pairs from SQuAD dataset | 87,599 | 58.02 | [SQuAD](https://huggingface.co/datasets/squad)
| [yahoo_answers_question_answer.jsonl.gz](https://huggingface.co/datasets/sentence-transformers/embedding-training-data/resolve/main/yahoo_answers_question_answer.jsonl.gz) | (Question_Body, Answer) pairs from Yahoo Answers | 681,164 | 57.74 | [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset)
| [wikihow.jsonl.gz](https://huggingface.co/datasets/sentence-transformers/embedding-training-data/resolve/main/wikihow.jsonl.gz) | (Summary, Text) from WikiHow | 128,542 | 57.67 | [WikiHow](https://github.com/pvl/wikihow_pairs_dataset)
| [amazon_review_2018.jsonl.gz](https://huggingface.co/datasets/sentence-transformers/embedding-training-data/resolve/main/amazon_review_2018.jsonl.gz) | (Title, review) pairs from Amazon | 87,877,725 | 57.65 | [Amazon review data (2018)](http://deepyeti.ucsd.edu/jianmo/amazon/index.html)
| [NQ-train_pairs.jsonl.gz](https://huggingface.co/datasets/sentence-transformers/embedding-training-data/resolve/main/NQ-train_pairs.jsonl.gz) | Training pairs (query, answer_passage) from the NQ dataset | 100,231 | 57.48 | [Natural Questions](https://ai.google.com/research/NaturalQuestions)
| [amazon-qa.jsonl.gz](https://huggingface.co/datasets/sentence-transformers/embedding-training-data/resolve/main/amazon-qa.jsonl.gz) | (Question, Answer) pairs from Amazon | 1,095,290 | 57.48 | [AmazonQA](https://github.com/amazonqa/amazonqa)
| [S2ORC_title_abstract.jsonl.gz](https://huggingface.co/datasets/sentence-transformers/embedding-training-data/resolve/main/S2ORC_title_abstract.jsonl.gz) | (Title, Abstract) pairs of scientific papers | 41,769,185 | 57.39 | [S2ORC](https://github.com/allenai/s2orc)
| [quora_duplicates.jsonl.gz](https://huggingface.co/datasets/sentence-transformers/embedding-training-data/resolve/main/quora_duplicates.jsonl.gz) | Duplicate question pairs from Quora | 103,663 | 57.36 | [QQP](https://quoradata.quora.com/First-Quora-Dataset-Release-Question-Pairs)
| [WikiAnswers.jsonl.gz](https://huggingface.co/datasets/sentence-transformers/embedding-training-data/resolve/main/WikiAnswers.jsonl.gz) | Sets of duplicates questions | 27,383,151 | 57.34 | [WikiAnswers Corpus](https://github.com/afader/oqa#wikianswers-corpus)
| [searchQA_top5_snippets.jsonl.gz](https://huggingface.co/datasets/sentence-transformers/embedding-training-data/resolve/main/searchQA_top5_snippets.jsonl.gz) | Question + Top5 text snippets from SearchQA dataset. Top5 | 117,220 | 57.34 | [search_qa](https://huggingface.co/datasets/search_qa)
| [stackexchange_duplicate_questions_title-body_title-body.jsonl.gz](https://huggingface.co/datasets/sentence-transformers/embedding-training-data/resolve/main/stackexchange_duplicate_questions_title-body_title-body.jsonl.gz) | (Title+Body, Title+Body) pairs of duplicate questions from StackExchange | 250,460 | 57.30 | [Stack Exchange Data API](https://data.stackexchange.com/apple/query/fork/1456963)
| [S2ORC_citations_titles.jsonl.gz](https://huggingface.co/datasets/sentence-transformers/embedding-training-data/resolve/main/S2ORC_citations_titles.jsonl.gz) | Citation network (paper titles) | 51,030,086 | 57.28 | [S2ORC](https://github.com/allenai/s2orc)
| [stackexchange_duplicate_questions_body_body.jsonl.gz](https://huggingface.co/datasets/sentence-transformers/embedding-training-data/resolve/main/stackexchange_duplicate_questions_body_body.jsonl.gz) | (Body, Body) pairs of duplicate questions from StackExchange | 250,519 | 57.26 | [Stack Exchange Data API](https://data.stackexchange.com/apple/query/fork/1456963)
| [agnews.jsonl.gz](https://huggingface.co/datasets/sentence-transformers/embedding-training-data/resolve/main/agnews.jsonl.gz) | (Title, Description) pairs of news articles from the AG News dataset | 1,157,745 | 57.25 | [AG news corpus](http://groups.di.unipi.it/~gulli/AG_corpus_of_news_articles.html)
| [quora_duplicates_triplets.jsonl.gz](https://huggingface.co/datasets/sentence-transformers/embedding-training-data/resolve/main/quora_duplicates_triplets.jsonl.gz) | Duplicate question pairs from Quora with additional hard negatives (mined & denoised by cross-encoder) | 101,762 | 56.97 | [QQP](https://quoradata.quora.com/First-Quora-Dataset-Release-Question-Pairs)
| [AllNLI.jsonl.gz](https://huggingface.co/datasets/sentence-transformers/embedding-training-data/resolve/main/AllNLI.jsonl.gz) | Combination of SNLI + MultiNLI Triplets: (Anchor, Entailment_Text, Contradiction_Text) | 277,230 | 56.57 | [SNLI](https://huggingface.co/datasets/snli) and [MNLI](https://huggingface.co/datasets/multi_nli)
| [npr.jsonl.gz](https://huggingface.co/datasets/sentence-transformers/embedding-training-data/resolve/main/npr.jsonl.gz) | (Title, Body) pairs from the npr.org website | 594,384 | 56.44 | [Pushshift](https://files.pushshift.io/news/)
| [specter_train_triples.jsonl.gz](https://huggingface.co/datasets/sentence-transformers/embedding-training-data/resolve/main/specter_train_triples.jsonl.gz) | Triplets (Title, related_title, hard_negative) for Scientific Publications from Specter | 684,100 | 56.32 | [SPECTER](https://github.com/allenai/specter)
| [SimpleWiki.jsonl.gz](https://huggingface.co/datasets/sentence-transformers/embedding-training-data/resolve/main/SimpleWiki.jsonl.gz) | Matched pairs (English_Wikipedia, Simple_English_Wikipedia) | 102,225 | 56.15 | [SimpleWiki](https://cs.pomona.edu/~dkauchak/simplification/)
| [PAQ_pairs.jsonl.gz](https://huggingface.co/datasets/sentence-transformers/embedding-training-data/resolve/main/PAQ_pairs.jsonl.gz) | Training pairs (query, answer_passage) from the PAQ dataset | 64,371,441 | 56.11 | [PAQ](https://github.com/facebookresearch/PAQ)
| [altlex.jsonl.gz](https://huggingface.co/datasets/sentence-transformers/embedding-training-data/resolve/main/altlex.jsonl.gz) | Matched pairs (English_Wikipedia, Simple_English_Wikipedia) | 112,696 | 55.95 | [altlex](https://github.com/chridey/altlex/)
| [ccnews_title_text.jsonl.gz](https://huggingface.co/datasets/sentence-transformers/embedding-training-data/resolve/main/ccnews_title_text.jsonl.gz) | (Title, article) pairs from the CC News dataset | 614,664 | 55.84 | [CC-News](https://huggingface.co/datasets/cc_news)
| [codesearchnet.jsonl.gz](https://huggingface.co/datasets/sentence-transformers/embedding-training-data/resolve/main/codesearchnet.jsonl.gz) | CodeSearchNet corpus is a dataset of (comment, code) pairs from opensource libraries hosted on GitHub. It contains code and documentation for several programming languages. | 1,151,414 | 55.80 | [CodeSearchNet](https://huggingface.co/datasets/code_search_net)
| [S2ORC_citations_abstracts.jsonl.gz](https://huggingface.co/datasets/sentence-transformers/embedding-training-data/resolve/main/S2ORC_citations_abstracts.jsonl.gz) | Citation network (paper abstracts) | 39,567,485 | 55.74 | [S2ORC](https://github.com/allenai/s2orc)
| [sentence-compression.jsonl.gz](https://huggingface.co/datasets/sentence-transformers/embedding-training-data/resolve/main/sentence-compression.jsonl.gz) | Pairs (long_text, short_text) about sentence-compression | 180,000 | 55.63 | [Sentence-Compression](https://github.com/google-research-datasets/sentence-compression)
| [TriviaQA_pairs.jsonl.gz](https://huggingface.co/datasets/sentence-transformers/embedding-training-data/resolve/main/TriviaQA_pairs.jsonl.gz) | Pairs (query, answer) from TriviaQA dataset | 73,346 | 55.56 | [TriviaQA](https://huggingface.co/datasets/trivia_qa)
| [cnn_dailymail_splitted.jsonl.gz](https://huggingface.co/datasets/sentence-transformers/embedding-training-data/resolve/main/cnn_dailymail_splitted.jsonl.gz) | (article, highlight sentence) with individual highlight sentences for each news article | 311,971 | 55.36 | [CNN Dailymail Dataset](https://huggingface.co/datasets/cnn_dailymail)
| [cnn_dailymail.jsonl.gz](https://huggingface.co/datasets/sentence-transformers/embedding-training-data/resolve/main/cnn_dailymail.jsonl.gz) | (highlight sentences, article) with all highlight sentences as one text for each news article | 311,971 | 55.27 | [CNN Dailymail Dataset](https://huggingface.co/datasets/cnn_dailymail)
| [flickr30k_captions.jsonl.gz](https://huggingface.co/datasets/sentence-transformers/embedding-training-data/resolve/main/flickr30k_captions.jsonl.gz) | Different captions for the same image from the Flickr30k dataset | 31,783 | 54.68 | [Flickr30k](https://shannon.cs.illinois.edu/DenotationGraph/)
| [xsum.jsonl.gz](https://huggingface.co/datasets/sentence-transformers/embedding-training-data/resolve/main/xsum.jsonl.gz) | (Summary, News Article) pairs from XSUM dataset | 226,711 | 53.86 | [xsum](https://huggingface.co/datasets/xsum)
| [coco_captions.jsonl.gz](https://huggingface.co/datasets/sentence-transformers/embedding-training-data/resolve/main/coco_captions.jsonl.gz) | Different captions for the same image | 82,783 | 53.77 | [COCO](https://cocodataset.org/)
**Disclaimer:** We only distribute these datasets in a specific format, but we do not vouch for their quality or fairness, or claim that you have a license to use the dataset. It remains your responsibility to determine whether you, as a user, have permission to use the dataset under its license, and to cite the correct owner of the dataset. Please check the individual dataset webpages for the license agreements.
If you're a dataset owner and wish to update any part of it, or do not want your dataset to be included in this dataset collection, feel free to contact me.
| [
…truncated embedding vector…
joangaes/depression | joangaes | 2022-03-10T13:04:18Z | 27 | 0 | null | [
"region:us"
] | 2022-03-10T13:04:18Z | 2022-03-10T09:46:18.000Z | 2022-03-10T09:46:18 | Entry not found | [
…truncated embedding vector…
malteos/aspect-paper-embeddings | malteos | 2022-03-18T10:37:41Z | 27 | 0 | null | [
"region:us"
] | 2022-03-18T10:37:41Z | 2022-03-18T10:31:28.000Z | 2022-03-18T10:31:28 | Entry not found | [
…truncated embedding vector…
pragnakalp/squad_v2_french_translated | pragnakalp | 2022-08-29T07:49:15Z | 27 | 1 | null | [
"multilinguality:monolingual",
"multilinguality:translation",
"language:fr",
"license:apache-2.0",
"region:us"
] | 2022-08-29T07:49:15Z | 2022-04-04T05:44:07.000Z | 2022-04-04T05:44:07 | ---
language: fr
license: apache-2.0
multilinguality:
- monolingual
- translation
---
Using Google Translate, we have translated the SQuAD 2.0 dataset into multiple languages.
Here is the SQuAD 2.0 dataset translated into French.
Shared by [Pragnakalp Techlabs](https://www.pragnakalp.com) | [
…truncated embedding vector…
BigScienceBiasEval/bias-shades | BigScienceBiasEval | 2022-10-03T13:49:04Z | 27 | 1 | null | [
"license:cc-by-sa-4.0",
"region:us"
] | 2022-10-03T13:49:04Z | 2022-04-28T16:46:11.000Z | 2022-04-28T16:46:11 | ---
license: cc-by-sa-4.0
---
Possibly a placeholder dataset for the original here: https://huggingface.co/datasets/bigscience-catalogue-data/bias-shades
# Data Statement for SHADES
> **How to use this document:**
> Fill in each section according to the instructions. Give as much detail as you can, but there's no need to extrapolate. The goal is to help people understand your data when they approach it. This could be someone looking at it in ten years, or it could be you yourself looking back at the data in two years.
> For full details, the best source is the original Data Statements paper, here: https://www.aclweb.org/anthology/Q18-1041/ .
> Instruction fields are given as blockquotes; delete the instructions when you're done, and provide the file with your data, for example as "DATASTATEMENT.md". The lists in some blocks are designed to be filled in, but it's good to also leave a written description of what's happening, as well as the list. It's fine to skip some fields if the information isn't known.
> Only blockquoted content should be deleted; the final about statement should be left intact.
Data set name: Bias-Shades
Citation (if available): TODO.
Data set developer(s): This dataset was compiled by dozens of research scientists through the BigScience open science collaboration. Collaborators, representing numerous cultures and languages, joined the project of their own volition.
Data statement author(s): Shayne Longpre, Aurélie Névéol, Shanya Sharma [Add name here if you add/edit the data statement :)].
Others who contributed to this document: N/A
License: Creative Commons Attribution-ShareAlike 4.0 (CC BY-SA 4.0).
## A. CURATION RATIONALE
> *Explanation.* Which texts were included and what were the goals in selecting texts, both in the original collection and in any further sub-selection? This can be especially important in datasets too large to thoroughly inspect by hand. An explicit statement of the curation rationale can help dataset users make inferences about what other kinds of texts systems trained with them could conceivably generalize to.
This dataset was curated by hand-crafting stereotype sentences, written by native speakers of the culture being targeted. An initial set of sentences was inferred from stereotypes expressed in the CrowS-Pairs dataset (Nangia et al.). Native speakers first crafted templates for sentences expressing a stereotype. These templates are marked for gender and plurality of the target nouns, so a template can be reused by substituting different targets. Next, the template-target noun combinations were annotated for the veracity/reliability of the expressed stereotype. The resulting sentences express common and less common stereotypes in a variety of cultures and languages.
## B. LANGUAGE VARIETY/VARIETIES
> *Explanation.* Languages differ from each other in structural ways that can interact with NLP algorithms. Within a language, regional or social dialects can also show great variation (Chambers and Trudgill, 1998). The language and language variety should be described with a language tag from BCP-47 identifying the language variety (e.g., en-US or yue-Hant-HK), and a prose description of the language variety, glossing the BCP-47 tag and also providing further information (e.g., "English as spoken in Palo Alto, California", or "Cantonese written with traditional characters by speakers in Hong Kong who are bilingual in Mandarin").
* BCP-47 language tags: en-US, fr-FR, hi-IN, es-DO, ar-LY, ru-RU, de-DE, nl-NL, ta-IN.
* Language variety description: English spoken by native speakers of the United States, native French people from metropolitan France, native Hindi and Tamil speakers from India, Spanish speakers from the Dominican Republic, Arabic speakers from Libya, Russian speakers from Russia, German speakers from Germany, and Dutch speakers from the Netherlands.
## C. CONTRIBUTOR DEMOGRAPHIC
> ## C. SPEAKER DEMOGRAPHIC
> *Explanation.* Sociolinguistics has found that variation (in pronunciation, prosody, word choice, and grammar) correlates with speaker demographic characteristics (Labov, 1966), as speakers use linguistic variation to construct and project identities (Eckert and Rickford, 2001). Transfer from native languages (L1) can affect the language produced by non-native (L2) speakers (Ellis, 1994, Ch. 8). A further important type of variation is disordered speech (e.g., dysarthria). Specifications include:
Participants to the collection project were recruited through the HuggingFace BigScience project, and specifically the Bias and Fairness Evaluation group. Listed below.
Speakers:
* [ADD YOURSELF!]
* Shayne Longpre: English-speaking, male, 28 years old, culturally Canadian.
* Aurélie Névéol: French (native), English and Spanish speaking, female, 44 years old, culturally French (also familiar with American culture)
* Shanya Sharma: Hindi(native), English speaking, female, 24 years old, culturally Indian
* Margaret Mitchell: English, female, mid-30s, U.S.A.
* Maraim Masoud: Arabic- and English-speaking, female.
## D. ANNOTATOR DEMOGRAPHIC
> *Explanation.* What are the demographic characteristics of the annotators and annotation guideline developers? Their own “social address” influences their experience with language and thus their perception of what they are annotating. Specifications include:
Participants to the collection project were recruited through the HuggingFace BigScience project, and specifically the Bias and Fairness Evaluation group. Speaker and annotator contributors listed in section C.
## E. SPEECH SITUATION
N/A
## F. TEXT CHARACTERISTICS
> *Explanation.* Both genre and topic influence the vocabulary and structural characteristics of texts (Biber, 1995), and should be specified.
Collected data is a collection of offensive stereotyped statements in numerous languages and cultures. They might be upsetting and/or offensive.
Along with these stereotyped statements are annotation judgements of how prevalent/real the expressed stereotypes are in the real world. Some statements were created from templates with substituted target nouns, and therefore may express an uncommon or unlikely stereotype.
## G. RECORDING QUALITY
N/A
## H. OTHER
> *Explanation.* There may be other information of relevance as well. Please use this space to develop any further categories that are relevant for your dataset.
## I. PROVENANCE APPENDIX
This initiative is part of the BigScience Workshop: https://bigscience.huggingface.co/.
## About this document
A data statement is a characterization of a dataset that provides context to allow developers and users to better understand how experimental results might generalize, how software might be appropriately deployed, and what biases might be reflected in systems built on the software.
Data Statements are from the University of Washington. Contact: [datastatements@uw.edu](mailto:datastatements@uw.edu). This document template is licensed as [CC0](https://creativecommons.org/share-your-work/public-domain/cc0/).
This version of the markdown Data Statement is from June 4th 2020. The Data Statement template is based on worksheets distributed at the [2020 LREC workshop on Data Statements](https://sites.google.com/uw.edu/data-statements-for-nlp/), by Emily M. Bender, Batya Friedman, and Angelina McMillan-Major. Adapted to community Markdown template by Leon Derczynski.
…truncated embedding vector…
strombergnlp/nlpcc-stance | strombergnlp | 2022-10-25T21:47:26Z | 27 | 4 | null | [
"task_categories:text-classification",
"task_ids:sentiment-analysis",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:zh",
"license:cc-by-4.0",
"stance-detection",
"region:us"
] | 2022-10-25T21:47:26Z | 2022-05-19T11:19:12.000Z | 2022-05-19T11:19:12 | ---
annotations_creators:
- expert-generated
language_creators:
- found
language:
- zh
license:
- cc-by-4.0
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- sentiment-analysis
pretty_name: NLPCC Stance
tags:
- stance-detection
---
# Dataset Card for "NLPCC 2016: Stance Detection in Chinese Microblogs"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [http://tcci.ccf.org.cn/conference/2016/pages/page05_evadata.html](http://tcci.ccf.org.cn/conference/2016/pages/page05_evadata.html)
- **Repository:**
- **Paper:** [https://link.springer.com/chapter/10.1007/978-3-319-50496-4_85](https://link.springer.com/chapter/10.1007/978-3-319-50496-4_85)
- **Point of Contact:** [Mads Kongsback](https://github.com/mkonxd)
- **Size of downloaded dataset files:**
- **Size of the generated dataset:**
- **Total amount of disk used:**
### Dataset Summary
This is a stance prediction dataset in Chinese.
The data comes from the shared task on stance detection in Chinese microblogs at NLPCC-ICCPOL 2016. It covers Task A, a mandatory supervised task that detects stance towards five targets of interest using the labeled data provided.
Some instances have been removed from the dataset because they had no label.
### Supported Tasks and Leaderboards
* Stance Detection in Chinese Microblogs
### Languages
Chinese, as spoken on the Weibo website (`bcp47:zh`)
## Dataset Structure
### Data Instances
Example instance:
```
{
'id': '0',
'target': 'IphoneSE',
'text': '3月31日,苹果iPhone SE正式开卖,然而这款小屏新机并未出现人们预想的疯抢局面。根据市场分析机构Localytics周一公布的数据,iPhone SE正式上市的这个周末,销量成绩并不算太好。',
'stance': 2
}
```
### Data Fields
* id: a `string` field with a unique id for the instance
* target: a `string` representing the target of the stance
* text: a `string` of the stance-bearing text
* stance: an `int` representing the class label -- `0`: AGAINST; `1`: FAVOR; `2`: NONE (see the loading sketch below).
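A short sketch of mapping the integer labels back to strings when loading the data. The `load_dataset` call assumes the dataset is loadable from the Hub under this repository id.

```python
from datasets import load_dataset

# Assumption: the dataset loads from the Hub under its repository id.
ds = load_dataset("strombergnlp/nlpcc-stance", split="train")

LABELS = {0: "AGAINST", 1: "FAVOR", 2: "NONE"}
example = ds[0]
print(example["target"], "->", LABELS[example["stance"]])
```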
### Data Splits
The training split has 2,986 instances.
## Dataset Creation
### Curation Rationale
The goal was to create a dataset of microblog text annotated for stance. Six stance targets were selected and data was collected from Sina Weibo for annotation.
### Source Data
#### Initial Data Collection and Normalization
Not specified
#### Who are the source language producers?
Sina Weibo users
### Annotations
#### Annotation process
The stance of each target-microblog pair is duplicated annotated by two students
individually. If these two students provide the same annotation, the stance of this
microblog-target pair is then labeled. If the different annotation is detected, the third
student will be assigned to annotate this pair. Their annotation results will be voted to
obtain the final label.
#### Who are the annotators?
Students in China
### Personal and Sensitive Information
No reflections
## Considerations for Using the Data
### Social Impact of Dataset
The data preserves social media utterances verbatim and so has obviated any right to be forgotten, though usernames and post IDs are not explicitly included in the data.
### Discussion of Biases
There'll be at least a temporal and regional bias to this data, as well as it only representing expressions of stance on six topics.
### Other Known Limitations
## Additional Information
### Dataset Curators
The dataset is curated by the paper's authors.
### Licensing Information
The authors distribute this data under Creative Commons attribution license, CC-BY 4.0.
### Citation Information
```
@incollection{xu2016overview,
title={Overview of nlpcc shared task 4: Stance detection in chinese microblogs},
author={Xu, Ruifeng and Zhou, Yu and Wu, Dongyin and Gui, Lin and Du, Jiachen and Xue, Yun},
booktitle={Natural language understanding and intelligent applications},
pages={907--916},
year={2016},
publisher={Springer}
}
```
### Contributions
Added by [@mkonxd](https://github.com/mkonxd), [@leondz](https://github.com/leondz)
| [
…truncated embedding vector…
pinecone/yt-transcriptions | pinecone | 2022-05-26T14:47:06Z | 27 | 1 | null | [
"region:us"
] | 2022-05-26T14:47:06Z | 2022-05-26T13:37:12.000Z | 2022-05-26T13:37:12 | Entry not found | [
…truncated embedding vector…
BeIR/cqadupstack-generated-queries | BeIR | 2022-10-23T06:15:48Z | 27 | 0 | beir | [
"task_categories:text-retrieval",
"task_ids:entity-linking-retrieval",
"task_ids:fact-checking-retrieval",
"multilinguality:monolingual",
"language:en",
"license:cc-by-sa-4.0",
"region:us"
] | 2022-10-23T06:15:48Z | 2022-06-17T13:20:44.000Z | 2022-06-17T13:20:44 | ---
annotations_creators: []
language_creators: []
language:
- en
license:
- cc-by-sa-4.0
multilinguality:
- monolingual
paperswithcode_id: beir
pretty_name: BEIR Benchmark
size_categories:
msmarco:
- 1M<n<10M
trec-covid:
- 100k<n<1M
nfcorpus:
- 1K<n<10K
nq:
- 1M<n<10M
hotpotqa:
- 1M<n<10M
fiqa:
- 10K<n<100K
arguana:
- 1K<n<10K
touche-2020:
- 100K<n<1M
cqadupstack:
- 100K<n<1M
quora:
- 100K<n<1M
dbpedia:
- 1M<n<10M
scidocs:
- 10K<n<100K
fever:
- 1M<n<10M
climate-fever:
- 1M<n<10M
scifact:
- 1K<n<10K
source_datasets: []
task_categories:
- text-retrieval
- zero-shot-retrieval
- information-retrieval
- zero-shot-information-retrieval
task_ids:
- passage-retrieval
- entity-linking-retrieval
- fact-checking-retrieval
- tweet-retrieval
- citation-prediction-retrieval
- duplication-question-retrieval
- argument-retrieval
- news-retrieval
- biomedical-information-retrieval
- question-answering-retrieval
---
# Dataset Card for BEIR Benchmark
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://github.com/UKPLab/beir
- **Repository:** https://github.com/UKPLab/beir
- **Paper:** https://openreview.net/forum?id=wCu6T5xFjeJ
- **Leaderboard:** https://docs.google.com/spreadsheets/d/1L8aACyPaXrL8iEelJLGqlMqXKPX2oSP_R10pZoy77Ns
- **Point of Contact:** nandan.thakur@uwaterloo.ca
### Dataset Summary
BEIR is a heterogeneous benchmark that has been built from 18 diverse datasets representing 9 information retrieval tasks:
- Fact-checking: [FEVER](http://fever.ai), [Climate-FEVER](http://climatefever.ai), [SciFact](https://github.com/allenai/scifact)
- Question-Answering: [NQ](https://ai.google.com/research/NaturalQuestions), [HotpotQA](https://hotpotqa.github.io), [FiQA-2018](https://sites.google.com/view/fiqa/)
- Bio-Medical IR: [TREC-COVID](https://ir.nist.gov/covidSubmit/index.html), [BioASQ](http://bioasq.org), [NFCorpus](https://www.cl.uni-heidelberg.de/statnlpgroup/nfcorpus/)
- News Retrieval: [TREC-NEWS](https://trec.nist.gov/data/news2019.html), [Robust04](https://trec.nist.gov/data/robust/04.guidelines.html)
- Argument Retrieval: [Touche-2020](https://webis.de/events/touche-20/shared-task-1.html), [ArguAna](http://argumentation.bplaced.net/arguana/data)
- Duplicate Question Retrieval: [Quora](https://www.quora.com/q/quoradata/First-Quora-Dataset-Release-Question-Pairs), [CqaDupstack](http://nlp.cis.unimelb.edu.au/resources/cqadupstack/)
- Citation-Prediction: [SCIDOCS](https://allenai.org/data/scidocs)
- Tweet Retrieval: [Signal-1M](https://research.signal-ai.com/datasets/signal1m-tweetir.html)
- Entity Retrieval: [DBPedia](https://github.com/iai-group/DBpedia-Entity/)
All these datasets have been preprocessed and can be used for your experiments; a loading sketch follows.
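For example, with the `beir` Python package (a sketch following the repository's documented usage; the SciFact URL is taken from the table below):

```python
from beir import util
from beir.datasets.data_loader import GenericDataLoader

# Download and unzip one of the preprocessed datasets (SciFact here).
url = "https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/scifact.zip"
data_path = util.download_and_unzip(url, "datasets")

# corpus:  {doc_id: {"title": ..., "text": ...}}
# queries: {query_id: text}
# qrels:   {query_id: {doc_id: relevance_score}}
corpus, queries, qrels = GenericDataLoader(data_folder=data_path).load(split="test")
```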
### Supported Tasks and Leaderboards
The benchmark maintains a public leaderboard comparing retrieval models across its datasets; see the leaderboard link in the Dataset Description above.
### Languages
All tasks are in English (`en`).
## Dataset Structure
All BEIR datasets must contain a corpus, queries and qrels (relevance judgments file). They must be in the following format (a reader sketch follows the list):
- `corpus` file: a `.jsonl` file (jsonlines) that contains a list of dictionaries, each with three fields `_id` with unique document identifier, `title` with document title (optional) and `text` with document paragraph or passage. For example: `{"_id": "doc1", "title": "Albert Einstein", "text": "Albert Einstein was a German-born...."}`
- `queries` file: a `.jsonl` file (jsonlines) that contains a list of dictionaries, each with two fields `_id` with unique query identifier and `text` with query text. For example: `{"_id": "q1", "text": "Who developed the mass-energy equivalence formula?"}`
- `qrels` file: a `.tsv` file (tab-separated) that contains three columns, i.e. the `query-id`, `corpus-id` and `score`, in this order. Keep the first row as a header. For example: `q1 doc1 1`
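A minimal sketch of readers for these three files, using only the field names defined above (the paths are placeholders):

```python
import csv
import json

def load_corpus(path):
    # corpus.jsonl: one {"_id", "title", "text"} object per line
    with open(path, encoding="utf-8") as f:
        return {d["_id"]: {"title": d.get("title", ""), "text": d["text"]}
                for d in map(json.loads, f)}

def load_queries(path):
    # queries.jsonl: one {"_id", "text"} object per line
    with open(path, encoding="utf-8") as f:
        return {d["_id"]: d["text"] for d in map(json.loads, f)}

def load_qrels(path):
    # qrels.tsv: header row, then query-id <TAB> corpus-id <TAB> score
    qrels = {}
    with open(path, encoding="utf-8") as f:
        reader = csv.reader(f, delimiter="\t")
        next(reader)  # skip the header row
        for query_id, corpus_id, score in reader:
            qrels.setdefault(query_id, {})[corpus_id] = int(score)
    return qrels
```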
### Data Instances
A high level example of any beir dataset:
```python
corpus = {
"doc1" : {
"title": "Albert Einstein",
"text": "Albert Einstein was a German-born theoretical physicist. who developed the theory of relativity, \
one of the two pillars of modern physics (alongside quantum mechanics). His work is also known for \
its influence on the philosophy of science. He is best known to the general public for his mass–energy \
equivalence formula E = mc2, which has been dubbed 'the world's most famous equation'. He received the 1921 \
Nobel Prize in Physics 'for his services to theoretical physics, and especially for his discovery of the law \
of the photoelectric effect', a pivotal step in the development of quantum theory."
},
"doc2" : {
"title": "", # Keep title an empty string if not present
"text": "Wheat beer is a top-fermented beer which is brewed with a large proportion of wheat relative to the amount of \
malted barley. The two main varieties are German Weißbier and Belgian witbier; other types include Lambic (made\
with wild yeast), Berliner Weisse (a cloudy, sour beer), and Gose (a sour, salty beer)."
},
}
queries = {
"q1" : "Who developed the mass-energy equivalence formula?",
"q2" : "Which beer is brewed with a large proportion of wheat?"
}
qrels = {
"q1" : {"doc1": 1},
"q2" : {"doc2": 1},
}
```
### Data Fields
Examples from all configurations have the following features:
### Corpus
- `corpus`: a `dict` feature representing the document title and passage text, made up of:
- `_id`: a `string` feature representing the unique document id
- `title`: a `string` feature, denoting the title of the document.
- `text`: a `string` feature, denoting the text of the document.
### Queries
- `queries`: a `dict` feature representing the query, made up of:
- `_id`: a `string` feature representing the unique query id
- `text`: a `string` feature, denoting the text of the query.
### Qrels
- `qrels`: a `dict` feature representing the query-document relevance judgements, made up of:
  - `query-id`: a `string` feature representing the query id
  - `corpus-id`: a `string` feature, denoting the document id.
  - `score`: a `int32` feature, denoting the relevance judgement between query and document.
### Data Splits
| Dataset | Website| BEIR-Name | Type | Queries | Corpus | Rel D/Q | Down-load | md5 |
| -------- | -----| ---------| --------- | ----------- | ---------| ---------| :----------: | :------:|
| MSMARCO | [Homepage](https://microsoft.github.io/msmarco/)| ``msmarco`` | ``train``<br>``dev``<br>``test``| 6,980 | 8.84M | 1.1 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/msmarco.zip) | ``444067daf65d982533ea17ebd59501e4`` |
| TREC-COVID | [Homepage](https://ir.nist.gov/covidSubmit/index.html)| ``trec-covid``| ``test``| 50| 171K| 493.5 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/trec-covid.zip) | ``ce62140cb23feb9becf6270d0d1fe6d1`` |
| NFCorpus | [Homepage](https://www.cl.uni-heidelberg.de/statnlpgroup/nfcorpus/) | ``nfcorpus`` | ``train``<br>``dev``<br>``test``| 323 | 3.6K | 38.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/nfcorpus.zip) | ``a89dba18a62ef92f7d323ec890a0d38d`` |
| BioASQ | [Homepage](http://bioasq.org) | ``bioasq``| ``train``<br>``test`` | 500 | 14.91M | 8.05 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#2-bioasq) |
| NQ | [Homepage](https://ai.google.com/research/NaturalQuestions) | ``nq``| ``train``<br>``test``| 3,452 | 2.68M | 1.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/nq.zip) | ``d4d3d2e48787a744b6f6e691ff534307`` |
| HotpotQA | [Homepage](https://hotpotqa.github.io) | ``hotpotqa``| ``train``<br>``dev``<br>``test``| 7,405 | 5.23M | 2.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/hotpotqa.zip) | ``f412724f78b0d91183a0e86805e16114`` |
| FiQA-2018 | [Homepage](https://sites.google.com/view/fiqa/) | ``fiqa`` | ``train``<br>``dev``<br>``test``| 648 | 57K | 2.6 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/fiqa.zip) | ``17918ed23cd04fb15047f73e6c3bd9d9`` |
| Signal-1M(RT) | [Homepage](https://research.signal-ai.com/datasets/signal1m-tweetir.html)| ``signal1m`` | ``test``| 97 | 2.86M | 19.6 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#4-signal-1m) |
| TREC-NEWS | [Homepage](https://trec.nist.gov/data/news2019.html) | ``trec-news`` | ``test``| 57 | 595K | 19.6 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#1-trec-news) |
| ArguAna | [Homepage](http://argumentation.bplaced.net/arguana/data) | ``arguana``| ``test`` | 1,406 | 8.67K | 1.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/arguana.zip) | ``8ad3e3c2a5867cdced806d6503f29b99`` |
| Touche-2020| [Homepage](https://webis.de/events/touche-20/shared-task-1.html) | ``webis-touche2020``| ``test``| 49 | 382K | 19.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/webis-touche2020.zip) | ``46f650ba5a527fc69e0a6521c5a23563`` |
| CQADupstack| [Homepage](http://nlp.cis.unimelb.edu.au/resources/cqadupstack/) | ``cqadupstack``| ``test``| 13,145 | 457K | 1.4 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/cqadupstack.zip) | ``4e41456d7df8ee7760a7f866133bda78`` |
| Quora| [Homepage](https://www.quora.com/q/quoradata/First-Quora-Dataset-Release-Question-Pairs) | ``quora``| ``dev``<br>``test``| 10,000 | 523K | 1.6 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/quora.zip) | ``18fb154900ba42a600f84b839c173167`` |
| DBPedia | [Homepage](https://github.com/iai-group/DBpedia-Entity/) | ``dbpedia-entity``| ``dev``<br>``test``| 400 | 4.63M | 38.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/dbpedia-entity.zip) | ``c2a39eb420a3164af735795df012ac2c`` |
| SCIDOCS| [Homepage](https://allenai.org/data/scidocs) | ``scidocs``| ``test``| 1,000 | 25K | 4.9 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/scidocs.zip) | ``38121350fc3a4d2f48850f6aff52e4a9`` |
| FEVER | [Homepage](http://fever.ai) | ``fever``| ``train``<br>``dev``<br>``test``| 6,666 | 5.42M | 1.2| [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/fever.zip) | ``5a818580227bfb4b35bb6fa46d9b6c03`` |
| Climate-FEVER| [Homepage](http://climatefever.ai) | ``climate-fever``|``test``| 1,535 | 5.42M | 3.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/climate-fever.zip) | ``8b66f0a9126c521bae2bde127b4dc99d`` |
| SciFact| [Homepage](https://github.com/allenai/scifact) | ``scifact``| ``train``<br>``test``| 300 | 5K | 1.1 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/scifact.zip) | ``5f7d1de60b170fc8027bb7898e2efca1`` |
| Robust04 | [Homepage](https://trec.nist.gov/data/robust/04.guidelines.html) | ``robust04``| ``test``| 249 | 528K | 69.9 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#3-robust04) |
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
[Needs More Information]
### Citation Information
Cite as:
```
@inproceedings{
thakur2021beir,
title={{BEIR}: A Heterogeneous Benchmark for Zero-shot Evaluation of Information Retrieval Models},
author={Nandan Thakur and Nils Reimers and Andreas R{\"u}ckl{\'e} and Abhishek Srivastava and Iryna Gurevych},
booktitle={Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 2)},
year={2021},
url={https://openreview.net/forum?id=wCu6T5xFjeJ}
}
```
### Contributions
Thanks to [@Nthakur20](https://github.com/Nthakur20) for adding this dataset. | [
…truncated embedding vector…
benschill/brain-tumor-collection | benschill | 2022-07-04T08:26:59Z | 27 | 1 | null | [
"license:pddl",
"region:us"
] | 2022-07-04T08:26:59Z | 2022-07-01T10:12:43.000Z | 2022-07-01T10:12:43 | ---
license: pddl
---
| [
…truncated embedding vector…
Paul/hatecheck-spanish | Paul | 2022-07-05T10:27:07Z | 27 | 5 | null | [
"task_categories:text-classification",
"task_ids:hate-speech-detection",
"annotations_creators:crowdsourced",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:es",
"license:cc-by-4.0",
"arxiv:2206.09917",
"regi... | 2022-07-05T10:27:07Z | 2022-07-05T10:06:37.000Z | 2022-07-05T10:06:37 | ---
annotations_creators:
- crowdsourced
language_creators:
- expert-generated
language:
- es
license:
- cc-by-4.0
multilinguality:
- monolingual
pretty_name: Spanish HateCheck
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- hate-speech-detection
---
# Dataset Card for Multilingual HateCheck
## Dataset Description
Multilingual HateCheck (MHC) is a suite of functional tests for hate speech detection models in 10 different languages: Arabic, Dutch, French, German, Hindi, Italian, Mandarin, Polish, Portuguese and Spanish.
For each language, there are 25+ functional tests that correspond to distinct types of hate and challenging non-hate.
This allows for targeted diagnostic insights into model performance.
For more details, please refer to our paper about MHC, published at the 2022 Workshop on Online Abuse and Harms (WOAH) at NAACL 2022. If you are using MHC, please cite our work!
- **Paper:** Röttger et al. (2022) - Multilingual HateCheck: Functional Tests for Multilingual Hate Speech Detection Models. https://arxiv.org/abs/2206.09917
- **Repository:** https://github.com/rewire-online/multilingual-hatecheck
- **Point of Contact:** paul@rewire.online
## Dataset Structure
The csv format mostly matches the original HateCheck data, with some adjustments for specific languages.
**mhc_case_id**
The test case ID that is unique to each test case across languages (e.g., "mandarin-1305")
**functionality**
The shorthand for the functionality tested by the test case (e.g, "target_obj_nh"). The same functionalities are tested in all languages, except for Mandarin and Arabic, where non-Latin script required adapting the tests for spelling variations.
**test_case**
The test case text.
**label_gold**
The gold standard label ("hateful" or "non-hateful") of the test case. All test cases within a given functionality have the same gold standard label.
**target_ident**
Where applicable, the protected group that is targeted or referenced in the test case. All HateChecks cover seven target groups, but their composition varies across languages.
**ref_case_id**
For hateful cases, where applicable, the ID of the hateful case which was perturbed to generate this test case. For non-hateful cases, where applicable, the ID of the hateful case which is contrasted by this test case.
**ref_templ_id**
The equivalent to ref_case_id, but for template IDs.
**templ_id**
The ID of the template from which the test case was generated.
**case_templ**
The template from which the test case was generated (where applicable).
**gender_male** and **gender_female**
For gender-inflected languages (French, Spanish, Portuguese, Hindi, Arabic, Italian, Polish, German), only for cases where gender inflection is relevant, separate entries for gender_male and gender_female replace case_templ.
**label_annotated**
A list of labels given by the three annotators who reviewed the test case (e.g., "['hateful', 'hateful', 'hateful']").
**label_annotated_maj**
The majority vote of the three annotators (e.g., "hateful"). In some cases this differs from the gold label given by our language experts.
**disagreement_in_case**
True if label_annotated_maj does not match label_gold for the entry.
**disagreement_in_template**
True if the test case is generated from an IDENT template and there is at least one case with disagreement_in_case generated from the same template. This can be used to exclude entire templates from MHC. A small evaluation sketch using these columns follows.
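As an illustration, here is a hedged sketch of how the gold labels and the functionality column could be used to score a classifier on each functional test. The file path and the `predict()` function are placeholders, not part of the dataset.

```python
import pandas as pd

df = pd.read_csv("hatecheck_spanish.csv")           # placeholder path
df["pred"] = [predict(t) for t in df["test_case"]]  # predict() is hypothetical

# Accuracy per functional test: all cases in a functionality share one gold
# label, so a low score pinpoints a specific model weakness.
per_functionality = (
    (df["pred"] == df["label_gold"])
    .groupby(df["functionality"])
    .mean()
    .sort_values()
)
print(per_functionality)
```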
| […truncated embedding vector…]
vasugoel/K-12Corpus | vasugoel | 2022-07-07T07:22:49Z | 27 | 2 | null | [
"region:us"
] | 2022-07-07T07:22:49Z | 2022-07-07T07:14:59.000Z | 2022-07-07T07:14:59 | # K-12Corpus | [
…truncated embedding vector…
nateraw/pizza_not_pizza | nateraw | 2022-07-07T19:58:03Z | 27 | 1 | null | [
"license:other",
"region:us"
] | 2022-07-07T19:58:03Z | 2022-07-07T19:57:37.000Z | 2022-07-07T19:57:37 | ---
license:
- other
kaggle_id: carlosrunner/pizza-not-pizza
---
# Dataset Card for Pizza or Not Pizza?
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://kaggle.com/datasets/carlosrunner/pizza-not-pizza
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
Who doesn't like pizza? This dataset contains about 1000 images of pizza and 1000 images of dishes other than pizza. It can be used for a simple binary image classification task.
All images were rescaled to have a maximum side length of 512 pixels.
This is a subset of the Food-101 dataset. Information about the original dataset can be found in the following paper:
Bossard, Lukas, Matthieu Guillaumin, and Luc Van Gool. "Food-101 – Mining Discriminative Components with Random Forests." In *European conference on computer vision*, pp. 446-461. Springer, Cham, 2014.
The original dataset can be found in the following locations:
https://data.vision.ee.ethz.ch/cvl/datasets_extra/food-101/
https://www.kaggle.com/datasets/dansbecker/food-101
https://paperswithcode.com/dataset/food-101
https://www.tensorflow.org/datasets/catalog/food101
Number of instances in each class:
Pizza: 983
Not Pizza: 983
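The card does not define official splits, so one way to get started is to load everything and split manually. A minimal sketch, assuming the dataset loads directly by the Hub id of this repository:

```python
from datasets import load_dataset

# Hub id taken from this repository; a single "train" split is assumed.
ds = load_dataset("nateraw/pizza_not_pizza", split="train")
splits = ds.train_test_split(test_size=0.2, seed=42)  # no official split given
print(splits["train"].features)  # inspect the image/label schema
```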
## Acknowledgements
The Food-101 data set consists of images from Foodspotting [1] which are not property of the Federal Institute of Technology Zurich (ETHZ). Any use beyond scientific fair use must be negotiated with the respective picture owners according to the Foodspotting terms of use [2].
[1] http://www.foodspotting.com/
[2] http://www.foodspotting.com/terms/
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
This dataset was shared by [@carlosrunner](https://kaggle.com/carlosrunner)
### Licensing Information
The license for this dataset is other
### Citation Information
```bibtex
[More Information Needed]
```
### Contributions
[More Information Needed] | [
-0.4111347198486328,
-0.6963390111923218,
0.0648532286286354,
-0.1583351343870163,
0.09841881692409515,
-0.12203572690486908,
-0.2965681552886963,
-0.3127285838127136,
0.5474171042442322,
0.5486641526222229,
-0.7924661636352539,
-1.0013595819473267,
-0.6636348962783813,
0.24966931343078613... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
biglam/brill_iconclass | biglam | 2023-07-25T13:38:02Z | 27 | 6 | null | [
"task_categories:image-classification",
"task_categories:image-to-text",
"task_categories:feature-extraction",
"task_ids:multi-class-image-classification",
"task_ids:multi-label-image-classification",
"task_ids:image-captioning",
"annotations_creators:expert-generated",
"language_creators:expert-gener... | 2023-07-25T13:38:02Z | 2022-07-11T13:16:25.000Z | 2022-07-11T13:16:25 | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
license:
- cc0-1.0
multilinguality:
- other-iconclass-metadata
pretty_name: 'Brill Iconclass AI Test Set'
size_categories:
- 10K<n<100K
source_datasets: []
task_categories:
- image-classification
- image-to-text
- feature-extraction
task_ids:
- multi-class-image-classification
- multi-label-image-classification
- image-captioning
tags:
- lam
- art
---
# Dataset Card for Brill Iconclass AI Test Set
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://iconclass.org/testset/](https://iconclass.org/testset/)
- **Repository:**[https://iconclass.org/testset/](https://iconclass.org/testset/)
- **Paper:**[https://iconclass.org/testset/ICONCLASS_and_AI.pdf](https://iconclass.org/testset/ICONCLASS_and_AI.pdf)
- **Leaderboard:**
- **Point of Contact:**[info@iconclass.org](mailto:info@iconclass.org)
### Dataset Summary
> A test dataset and challenge to apply machine learning to collections described with the Iconclass classification system.
This dataset contains `87749` images with [Iconclass](https://iconclass.org/) metadata assigned to the images. The [Iconclass](https://iconclass.org/) metadata classification system is intended to provide ['the comprehensive classification system for the content of images'](https://iconclass.org/).
> Iconclass was developed in the Netherlands as a standard classification for recording collections, with the idea of assembling huge databases that will allow the retrieval of images featuring particular details, subjects or other common factors. It was developed in the 1970s and was loosely based on the Dewey Decimal System because it was meant to be used in art library card catalogs. [source](https://en.wikipedia.org/wiki/Iconclass)
The [Iconclass](https://iconclass.org)
> view of the world is subdivided in 10 main categories...An Iconclass concept consists of an alphanumeric class number (“notation”) and a corresponding content definition (“textual correlate”). An object can be tagged with as many concepts as the user sees fit. [source](https://iconclass.org/)
These ten divisions are as follows:
- 0 Abstract, Non-representational Art
- 1 Religion and Magic
- 2 Nature
- 3 Human being, Man in general
- 4 Society, Civilization, Culture
- 5 Abstract Ideas and Concepts
- 6 History
- 7 Bible
- 8 Literature
- 9 Classical Mythology and Ancient History
Within each of these divisions further subdivisions are possible (9 or 10 subdivisions). For example, under `4 Society, Civilization, Culture`, one can find:
- 41 · material aspects of daily life
- 42 · family, descendance
- 43 · recreation, amusement
- 44 · state; law; political life
- ...
See [https://iconclass.org/4](https://iconclass.org/4) for the full list.
To illustrate we can look at some example Iconclass classifications.
`41A12` represents `castle`. This classification is built up from the 'base' division `4` through the following levels:
- 4 · Society, Civilization, Culture
- 41 · material aspects of daily life
- 41A · housing
- 41A1 · civic architecture; edifices; dwellings
[source](https://iconclass.org/41A12)
The compositional construction of Iconclass labels makes the dataset particularly interesting (and challenging) to tackle via machine learning. Whilst one could treat this dataset as a (multi-)label image classification problem, that is only one option. For example, for the label `castle` above, giving the model the 'freedom' to predict only a partial label could result in the prediction `41A`, i.e. housing. A castle is a very particular form of housing, so this prediction is not 'wrong' so much as less precise than what a human cataloguer would provide.
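To make the partial-label idea concrete, here is a minimal sketch (plain Python) that expands a notation into its chain of coarser ancestors via a simple prefix walk. Stripping parenthetical qualifiers such as `(+1)` and colon-combined notations is a simplification of full Iconclass syntax:

```python
def iconclass_prefixes(notation: str) -> list:
    """Expand an Iconclass notation into its chain of coarser ancestors."""
    # Drop qualifiers like '(+1)' and colon-combined parts (a simplification).
    base = notation.split("(")[0].split(":")[0]
    return [base[: i + 1] for i in range(len(base))]

print(iconclass_prefixes("41A12"))
# ['4', '41', '41A', '41A1', '41A12'] -- each prefix is a valid, coarser label
```

A predicted notation can then be scored as partially correct whenever it matches a prefix of the gold label.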
### Supported Tasks and Leaderboards
As discussed above this dataset could be tackled in various ways:
- as an image classification task
- as a multi-label classification task
- as an image to text task
- as a task whereby a model predicts partial sequences of the label.
This list is not exhaustive.
### Languages
This dataset doesn't have a natural language. The labels themselves can be treated as a form of language i.e. the label can be thought of as a sequence of tokens that construct a 'sentence'.
## Dataset Structure
The dataset contains a single configuration.
### Data Instances
An example instance of the dataset is as follows:
``` python
{'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=390x500 at 0x7FC7FFBBD2D0>,
'label': ['31A235', '31A24(+1)', '61B(+54)', '61B:31A2212(+1)', '61B:31D14']}
```
### Data Fields
The dataset is made up of
- an image
- a sequence of Iconclass labels
### Data Splits
The dataset doesn't provide any predefined train, validation or test splits.
## Dataset Creation
> To facilitate the creation of better models in the cultural heritage domain, and promote the research on tools and techniques using Iconclass, we are making this dataset freely available. All that we ask is that any use is acknowledged and results be shared so that we can all benefit. The content is sampled from the Arkyves database. [source](https://labs.brill.com/ictestset/)
[More Information Needed]
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
The images are samples from the [Arkyves database](https://brill.com/view/db/arko?language=en). This collection includes images from
> from libraries and museums in many countries, including the Rijksmuseum in Amsterdam, the Netherlands Institute for Art History (RKD), the Herzog August Bibliothek in Wolfenbüttel, and the university libraries of Milan, Utrecht and Glasgow. [source](https://brill.com/view/db/arko?language=en)
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
The annotations are derived from the source dataset see above. Most annotations were likely created by staff with experience with the Iconclass metadata schema.
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
Iconclass as a metadata standard absorbs biases from the time and place of its creation (1940s Netherlands). In particular, '32B human races, peoples; nationalities' has been subject to criticism. '32B36 'primitive', 'pre-modern' peoples' is one example of a category which we may not wish to adopt. In general, there are components of the subdivisions of `32B` which reflect a belief that race is a scientific category rather than socially constructed.
The Iconclass community is actively exploring these limitations; for example, see [Revising Iconclass section 32B human races, peoples; nationalities](https://web.archive.org/web/20210425131753/https://iconclass.org/Updating32B.pdf).
One should be aware of these limitations of Iconclass, particularly before deploying a model trained on this data in any production setting.
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
Etienne Posthumus
### Licensing Information
[CC0 1.0](https://creativecommons.org/publicdomain/zero/1.0/)
### Citation Information
```
@MISC{iconclass,
title = {Brill Iconclass AI Test Set},
author={Etienne Posthumus},
year={2020}
}
```
### Contributions
Thanks to [@davanstrien](https://github.com/davanstrien) for adding this dataset. | [
-0.7233853936195374,
-0.3834752142429352,
-0.21231095492839813,
-0.23801177740097046,
-0.14543208479881287,
0.06557828187942505,
-0.08114194869995117,
-0.7603946328163147,
0.02796058915555477,
0.4483450949192047,
-0.2687003016471863,
-0.7882375717163086,
-0.40247589349746704,
0.23622904717... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
pnr-svc/Turkish-Multiclass-Dataset | pnr-svc | 2022-07-20T21:40:17Z | 27 | 2 | null | [
"task_categories:text-classification",
"task_ids:multi-class-classification",
"task_ids:multi-label-classification",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:tr",
... | 2022-07-20T21:40:17Z | 2022-07-16T16:01:20.000Z | 2022-07-16T16:01:20 | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- tr
license:
- other
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- multi-class-classification
- multi-label-classification
pretty_name: 'Turkish-Multiclass-Dataset'
train-eval-index:
- config: TurkishMulticlassDataset
task: text-classification
task_id: multi_class_classification
splits:
eval_split: test
col_mapping:
text: text
label: target
---
# Dataset Card for "Turkish-Multiclass-Dataset"
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Dataset Preprocessing](#dataset-preprocessing)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://github.com/PnrSvc/Turkish-Multiclass-Dataset](https://github.com/PnrSvc/Turkish-Multiclass-Dataset)
- **Repository:** [https://github.com/PnrSvc/Turkish-Multiclass-Dataset](https://github.com/PnrSvc/Turkish-Multiclass-Dataset)
- **Size of downloaded dataset files:**
- **Size of the generated dataset:**
### Dataset Summary
The dataset was compiled from user comments on e-commerce sites. It consists of 53,000 validation examples, 53,000 test examples, and 160,600 training examples. Data were classified into 3 classes (positive (pos), negative (neg), and natural (nor)). The data is available on GitHub.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
#### turkish-dataset-v1
- **Size of downloaded dataset files:**
- **Size of the generated dataset:**
### Data Fields
The data fields are the same among all splits.
#### turkish-dataset-v1
- `text`: a `string` feature.
- `label`: a classification label, with possible values including `positive` (2), `natural` (1), `negative` (0).
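A minimal sketch mapping the integer labels back to their names, assuming the dataset loads directly by the Hub id of this repository:

```python
from datasets import load_dataset

ds = load_dataset("pnr-svc/Turkish-Multiclass-Dataset", split="train")
id2label = {0: "negative", 1: "natural", 2: "positive"}  # mapping from the card
example = ds[0]
print(example["text"], "->", id2label[example["label"]])
```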
### Data Splits
| |train |validation|test |
|----|--------:|---------:|---------:|
|Data| 15000 | 5000| 5000|
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to for adding this dataset. | [
-0.7198172211647034,
-0.6002307534217834,
-0.1396101862192154,
0.1673652082681656,
-0.36570730805397034,
-0.0430515818297863,
-0.4235740602016449,
-0.3430407643318176,
0.28421536087989807,
0.47262251377105713,
-0.6619873046875,
-1.0088465213775635,
-0.6690528988838196,
0.19859878718852997,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
KaranChand/atcosim_split | KaranChand | 2022-08-01T15:06:09Z | 27 | 0 | null | [
"region:us"
] | 2022-08-01T15:06:09Z | 2022-08-01T15:05:53.000Z | 2022-08-01T15:05:53 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622263669967651,
0.43461522459983826,
-0.52829909324646,
0.7012971639633179,
0.7915719747543335,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104475975036621,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
graphs-datasets/IMDB-BINARY | graphs-datasets | 2023-02-07T16:39:00Z | 27 | 1 | null | [
"task_categories:graph-ml",
"license:unknown",
"region:us"
] | 2023-02-07T16:39:00Z | 2022-08-01T16:17:25.000Z | 2022-08-01T16:17:25 | ---
license: unknown
task_categories:
- graph-ml
---
# Dataset Card for IMDB-BINARY (IMDb-B)
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [External Use](#external-use)
- [PyGeometric](#pygeometric)
- [Dataset Structure](#dataset-structure)
- [Data Properties](#data-properties)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **[Homepage](https://dl.acm.org/doi/10.1145/2783258.2783417)**
- **[Repository](https://www.chrsmrrs.com/graphkerneldatasets/IMDB-BINARY.zip):**:
- **Paper:**: Deep Graph Kernels (see citation)
- **Leaderboard:**: [Papers with code leaderboard](https://paperswithcode.com/sota/graph-classification-on-imdb-b)
### Dataset Summary
The `IMDb-B` dataset is "a movie collaboration dataset that consists of the ego-networks of 1,000 actors/actresses who played roles in movies in IMDB. In each graph, nodes represent actors/actress, and there is an edge between them if they appear in the same movie. These graphs are derived from the Action and Romance genres".
### Supported Tasks and Leaderboards
`IMDb-B` should be used for graph classification (aiming to predict whether a movie graph is an action or romance movie), a binary classification task. The score used is accuracy, using a 10-fold cross-validation.
## External Use
### PyGeometric
To load in PyGeometric, do the following:
```python
import torch
from datasets import load_dataset
from torch_geometric.data import Data
from torch_geometric.loader import DataLoader

dataset_hf = load_dataset("graphs-datasets/IMDB-BINARY")
# For the train set (replace by valid or test as needed);
# fields are converted to tensors so PyG can collate them.
dataset_pg_list = [
    Data(edge_index=torch.tensor(g["edge_index"], dtype=torch.long),
         y=torch.tensor(g["y"]), num_nodes=g["num_nodes"])
    for g in dataset_hf["train"]
]
dataset_pg = DataLoader(dataset_pg_list)
```
## Dataset Structure
### Data Properties
| property | value |
|---|---|
| scale | medium |
| #graphs | 1000 |
| average #nodes | 19.79 |
| average #edges | 193.25 |
### Data Fields
Each row of a given file is a graph, with:
- `edge_index` (list: 2 x #edges): pairs of nodes constituting edges
- `y` (list: 1 x #labels): the graph label(s) to predict (here a single label, equal to zero or one)
- `num_nodes` (int): number of nodes of the graph
### Data Splits
This data comes from the PyGeometric version of the dataset.
This information can be reproduced using
```python
from torch_geometric.datasets import TUDataset
cur_dataset = TUDataset(root="../dataset/loaded/",
name="IMDB-BINARY")
```
## Additional Information
### Licensing Information
The dataset has been released under an unknown license; please open an issue if you have this information.
### Citation Information
```
@inproceedings{10.1145/2783258.2783417,
author = {Yanardag, Pinar and Vishwanathan, S.V.N.},
title = {Deep Graph Kernels},
year = {2015},
isbn = {9781450336642},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
url = {https://doi.org/10.1145/2783258.2783417},
doi = {10.1145/2783258.2783417},
abstract = {In this paper, we present Deep Graph Kernels, a unified framework to learn latent representations of sub-structures for graphs, inspired by latest advancements in language modeling and deep learning. Our framework leverages the dependency information between sub-structures by learning their latent representations. We demonstrate instances of our framework on three popular graph kernels, namely Graphlet kernels, Weisfeiler-Lehman subtree kernels, and Shortest-Path graph kernels. Our experiments on several benchmark datasets show that Deep Graph Kernels achieve significant improvements in classification accuracy over state-of-the-art graph kernels.},
booktitle = {Proceedings of the 21th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining},
pages = {1365–1374},
numpages = {10},
keywords = {collaboration networks, bioinformatics, r-convolution kernels, graph kernels, structured data, deep learning, social networks, string kernels},
location = {Sydney, NSW, Australia},
series = {KDD '15}
}
```
### Contributions
Thanks to [@clefourrier](https://github.com/clefourrier) for adding this dataset. | [
-0.33961766958236694,
-0.5605392456054688,
0.22191381454467773,
-0.22450509667396545,
-0.20993389189243317,
0.25460973381996155,
-0.01685449667274952,
-0.2375050187110901,
0.47506949305534363,
0.3006477952003479,
-0.512554407119751,
-0.7486456036567688,
-0.8142355680465698,
-0.047700121998... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Rifky/indonesian-hoax-news | Rifky | 2022-08-05T15:49:33Z | 27 | 1 | null | [
"region:us"
] | 2022-08-05T15:49:33Z | 2022-08-03T13:50:33.000Z | 2022-08-03T13:50:33 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622263669967651,
0.43461522459983826,
-0.52829909324646,
0.7012971639633179,
0.7915719747543335,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104475975036621,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
NbAiLab/norwegian-paws-x | NbAiLab | 2023-08-18T11:26:40Z | 27 | 0 | null | [
"task_categories:text-classification",
"task_ids:semantic-similarity-classification",
"task_ids:semantic-similarity-scoring",
"task_ids:text-scoring",
"task_ids:multi-input-text-classification",
"annotations_creators:expert-generated",
"annotations_creators:machine-generated",
"language_creators:machi... | 2023-08-18T11:26:40Z | 2022-08-05T10:51:20.000Z | 2022-08-05T10:51:20 | ---
annotations_creators:
- expert-generated
- machine-generated
language_creators:
- machine-generated
language:
- nb
- nn
license:
- cc-by-4.0
multilinguality:
- multilingual
size_categories:
- 10K<n<100K
source_datasets:
- extended|other-paws
task_categories:
- text-classification
task_ids:
- semantic-similarity-classification
- semantic-similarity-scoring
- text-scoring
- multi-input-text-classification
pretty_name: 'NbAiLab/norwegian-paws-x'
---
# Dataset Card for Norwegian PAWS-X
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [NB AiLab](https://ai.nb.no/)
- **Repository:** [Norwegian PAWS-X Repository](#)
- **Point of Contact:** [ai-lab@nb.no](mailto:ai-lab@nb.no)
### Dataset Summary
Norwegian PAWS-X is an extension of the PAWS-X dataset. PAWS-X is a multilingual version of PAWS (Paraphrase Adversaries from Word Scrambling) covering six languages. Norwegian PAWS-X adds machine translations of the original PAWS-X data into Norwegian Bokmål and Nynorsk.
### Languages
- Norwegian Bokmål (`nb`)
- Norwegian Nynorsk (`nn`)
## Dataset Structure
### Data Instances
Each instance includes a pair of sentences in Norwegian along with a binary label indicating whether the sentences are paraphrases of each other.
### Data Fields
- `id`: An identifier for each example (int32)
- `sentence1`: The first sentence in Norwegian (string)
- `sentence2`: The second sentence in Norwegian (string)
- `label`: Binary label, where '1' indicates the sentences are paraphrases and '0' indicates they are not (class_label: '0', '1')
### Data Splits
The dataset is divided into training, validation, and test sets. The exact numbers of instances in each split will be as per the original PAWS-X dataset.
## Dataset Creation
### Curation Rationale
Norwegian PAWS-X was created to extend the PAWS paraphrase identification task to the Norwegian language, including both Bokmål and Nynorsk standards. This promotes multilingual and cross-lingual research in paraphrase identification.
### Source Data
The source data consists of human-translated PAWS pairs in six languages. For the Norwegian PAWS-X dataset, these pairs were translated into Norwegian Bokmål and Nynorsk using FAIR's No Language Left Behind 3.3B-parameter model.
### Annotations
The dataset retains the original PAWS labels, which were created through a combination of expert and machine-generated annotations.
### Personal and Sensitive Information
There is no known personal or sensitive information in this dataset.
## Considerations for Using the Data
### Social Impact of Dataset
This dataset helps in promoting the development of NLP technologies in Norwegian.
### Other Known Limitations
There may be potential issues related to the translation quality, as the translations were generated using a machine translation model.
## Additional Information
### Dataset Curators
The dataset was curated by researcher Javier de la Rosa.
### Licensing Information
Original PAWS-X License:
- The dataset may be freely used for any purpose, with acknowledgment of Google LLC as the data source being appreciated. The dataset is provided "AS IS" without any warranty, express or implied. Google disclaims all liability for any damages, direct or indirect, resulting from the use of the dataset.
Norwegian PAWS-X License:
- CC BY 4.0
| [
-0.3098074793815613,
-0.27682650089263916,
0.2904949486255646,
0.4487554430961609,
-0.7085545659065247,
0.11452465504407883,
-0.036439936608076096,
-0.4576827883720398,
0.7718729376792908,
0.765495240688324,
-0.4977707266807556,
-0.84619140625,
-0.4334602653980255,
0.4026273787021637,
-0... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
intfloat/simlm-msmarco | intfloat | 2022-08-11T09:25:24Z | 27 | 1 | null | [
"region:us"
] | 2022-08-11T09:25:24Z | 2022-08-10T09:33:34.000Z | 2022-08-10T09:33:34 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622264862060547,
0.43461528420448303,
-0.52829909324646,
0.7012971639633179,
0.7915720343589783,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104477167129517,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Yaxin/SemEval2016Task5NLTK | Yaxin | 2023-03-19T05:11:38Z | 27 | 0 | null | [
"region:us"
] | 2023-03-19T05:11:38Z | 2022-08-14T15:20:21.000Z | 2022-08-14T15:20:21 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622264862060547,
0.43461528420448303,
-0.52829909324646,
0.7012971639633179,
0.7915720343589783,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104477167129517,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
BDas/ArabicNLPDataset | BDas | 2022-09-26T18:52:01Z | 27 | 1 | null | [
"task_categories:text-classification",
"task_ids:multi-class-classification",
"task_ids:multi-label-classification",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"language:ar",
... | 2022-09-26T18:52:01Z | 2022-08-26T21:33:24.000Z | 2022-08-26T21:33:24 | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- ar
license:
- other
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- multi-class-classification
- multi-label-classification
pretty_name: 'ArabicNLPDataset'
---
# Dataset Card for "ArabicNLPDataset"
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Dataset Preprocessing](#dataset-preprocessing)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://github.com/BihterDass/ArabicTextClassificationDataset](https://github.com/BihterDass/ArabicTextClassificationDataset)
- **Repository:** [https://github.com/BihterDass/ArabicTextClassificationDataset](https://github.com/BihterDass/ArabicTextClassificationDataset)
- **Size of downloaded dataset files:** 23.5 MB
- **Size of the generated dataset:** 23.5 MB
### Dataset Summary
The dataset was compiled from user comments on e-commerce sites. It consists of 10,000 validation examples, 10,000 test examples, and 80,000 training examples. Data were classified into 3 classes (positive (pos), negative (neg), and natural (nor)). The data is available on GitHub.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
#### arabic-dataset-v1
- **Size of downloaded dataset files:** 23.5 MB
- **Size of the generated dataset:** 23.5 MB
### Data Fields
The data fields are the same among all splits.
#### arabic-dataset-v1
- `text`: a `string` feature.
- `label`: a classification label, with possible values including `positive` (2), `natural` (1), `negative` (0).
### Data Splits
| |train |validation|test |
|----|--------:|---------:|---------:|
|Data| 80000 | 10000 | 10000 |
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@PnrSvc](https://github.com/PnrSvc) for adding this dataset. | [
-0.6621214747428894,
-0.39407211542129517,
-0.13770197331905365,
0.2017604261636734,
-0.3135518729686737,
0.21597792208194733,
-0.2599085867404938,
-0.4779224395751953,
0.30729034543037415,
0.4316138029098511,
-0.6390486359596252,
-1.1047149896621704,
-0.7345892190933228,
0.199766337871551... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
stochastic/random_streetview_images_pano_v0.0.2 | stochastic | 2022-10-14T02:05:40Z | 27 | 4 | null | [
"task_categories:image-classification",
"task_ids:multi-label-image-classification",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:multilingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"license:mit",
"region:us"
] | 2022-10-14T02:05:40Z | 2022-10-05T19:39:59.000Z | 2022-10-05T19:39:59 | ---
annotations_creators:
- expert-generated
language: []
language_creators:
- expert-generated
license:
- mit
multilinguality:
- multilingual
pretty_name: panoramic, street view images of random places on Earth
size_categories:
- 10K<n<100K
source_datasets:
- original
tags: []
task_categories:
- image-classification
task_ids:
- multi-label-image-classification
---
# Dataset Card for panoramic street view images (v.0.0.2)
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
The random streetview images dataset consists of labeled, panoramic images scraped from randomstreetview.com. Each image shows a location accessible by Google Street View; several views have been roughly combined to provide a ~360-degree view of a single location. The dataset was designed with the intent of geolocating an image purely from its visual content.
### Supported Tasks and Leaderboards
None as of now!
### Languages
labels: Addresses are written in a combination of English and the official language of the country they belong to.
images: Some images contain signage in a local language, albeit less commonly.
## Dataset Structure
For now, images exist exclusively in the `train` split and it is at the user's discretion to split the dataset how they please.
### Data Instances
For each instance, there is:
- timestamped file name: `{YYYYMMDD}_{address}.jpg`
- the image
- the country iso-alpha2 code
- the latitude
- the longitude
- the address
For more examples see the [dataset viewer](https://huggingface.co/datasets/stochastic/random_streetview_images_pano_v0.0.2/viewer/stochastic--random_streetview_images_pano_v0.0.2/train)
```
{
filename: '20221001_Jarše Slovenia_46.1069942_14.9378597.jpg'
country_iso_alpha2 : 'SI'
latitude: '46.028223'
longitude: '14.345106'
address: 'Jarše Slovenia_46.1069942_14.9378597'
}
```
### Data Fields
- country_iso_alpha2: a unique 2 character code for each country in the world following the ISO 3166 standard
- latitude: the angular distance of a place north or south of the earth's equator
- longitude: the angular distance of a place east or west of the standard meridian of the Earth
- address: the physical address written from most micro -> macro order (Street, Neighborhood, City, State, Country)
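Since the intended task is geolocation, a natural evaluation metric is the great-circle distance between predicted and true coordinates. A minimal haversine sketch, here comparing the label coordinates of the example instance above with those embedded in its filename:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two (lat, lon) points."""
    r = 6371.0  # mean Earth radius in km
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi, dlmb = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

print(haversine_km(46.028223, 14.345106, 46.1069942, 14.9378597))  # ~47 km
```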
### Data Splits
'train': all images are currently contained in the 'train' split
## Dataset Creation
### Curation Rationale
Google Street View images [require payment per image scraped](https://developers.google.com/maps/documentation/streetview/usage-and-billing).
This dataset provides about 10,000 of those images for free.
### Source Data
#### Who are the source image producers?
Google Street View provides the raw images; this dataset combines various cuts of those images into panoramas.
[More Information Needed]
### Annotations
#### Annotation process
The address, latitude, and longitude are all scraped from the API response. While portions of the data have been manually validated, overall accuracy rests on the correctness of the API response.
### Personal and Sensitive Information
While Google Street View blurs faces and license plates to the best of its ability, this is not guaranteed, as can be seen in some photos. Please review [Google's documentation](https://www.google.com/streetview/policy/) for more information
## Considerations for Using the Data
### Social Impact of Dataset
This dataset was designed after inspiration from playing the popular online game [GeoGuessr](https://geoguessr.com). We ask that users of this dataset consider whether their geolocation-based application could harm or jeopardize any fair institution or system.
### Discussion of Biases
Out of the ~195 countries that exist, this dataset only contains images from about 55 countries. Each country has an average of 175 photos, with some countries having slightly fewer.
The 55 countries are:
["ZA","KR","AR","BW","GR","SK","HK","NL","PE","AU","KH","LT","NZ","RO","MY","SG","AE","FR","ES","IT","IE","LV","IL","JP","CH","AD","CA","RU","NO","SE","PL","TW","CO","BD","HU","CL","IS","BG","GB","US","SI","BT","FI","BE","EE","SZ","UA","CZ","BR","DK","ID","MX","DE","HR","PT","TH"]
In terms of continental representation:
| continent | Number of Countries Represented |
|:-----------------------| -------------------------------:|
| Europe | 30 |
| Asia | 13 |
| South America | 5 |
| Africa | 3 |
| North America | 3 |
| Oceania | 2 |
This is not a fair representation of the world and its various climates, neighborhoods, and places. But it's a start!
### Other Known Limitations
As per [Google's policy](https://www.google.com/streetview/policy/): __"Street View imagery shows only what our cameras were able to see on the day that they passed by the location. Afterwards, it takes months to process them. This means that content you see could be anywhere from a few months to a few years old."__
### Licensing Information
MIT License
### Citation Information
### Contributions
Thanks to [@WinsonTruong](https://github.com/WinsonTruong) and [David Hrachovy](https://github.com/dayweek) for helping develop this dataset.
This dataset was developed for a Geolocator project with the aforementioned developers, [@samhita-alla](https://github.com/samhita-alla) and [@yiyixuxu](https://github.com/yiyixuxu).
Thanks to [FSDL](https://fullstackdeeplearning.com) for a wonderful class and online cohort. | [
-0.758501410484314,
-0.479170024394989,
0.5949613451957703,
0.3076919615268707,
-0.624616265296936,
-0.1765804886817932,
0.009372960776090622,
-0.7537077069282532,
0.7008869051933289,
0.5194830298423767,
-0.5148850083351135,
-0.9698166847229004,
-0.6624627709388733,
-0.10746091604232788,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
allenai/cochrane_dense_mean | allenai | 2022-11-18T19:44:03Z | 27 | 0 | multi-document-summarization | [
"task_categories:summarization",
"task_categories:text2text-generation",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:extended|other-MS^2",
"source_datasets:extended|other-Cochrane",
"lang... | 2022-11-18T19:44:03Z | 2022-10-12T13:42:17.000Z | 2022-10-12T13:42:17 | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- en
license:
- apache-2.0
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- extended|other-MS^2
- extended|other-Cochrane
task_categories:
- summarization
- text2text-generation
paperswithcode_id: multi-document-summarization
pretty_name: MSLR Shared Task
---
This is a copy of the [Cochrane](https://huggingface.co/datasets/allenai/mslr2022) dataset, except that the input source documents of its `train`, `validation` and `test` splits have been replaced with the documents retrieved by a __dense__ retriever. The retrieval pipeline used (a minimal sketch follows the list below):
- __query__: The `target` field of each example
- __corpus__: The union of all documents in the `train`, `validation` and `test` splits. A document is the concatenation of the `title` and `abstract`.
- __retriever__: [`facebook/contriever-msmarco`](https://huggingface.co/facebook/contriever-msmarco) via [PyTerrier](https://pyterrier.readthedocs.io/en/latest/) with default settings
- __top-k strategy__: `"max"`, i.e. the number of documents retrieved, `k`, is set as the maximum number of documents seen across examples in this dataset, in this case `k==9`
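The following is a minimal sketch of the dense-retrieval step. The actual pipeline ran through PyTerrier; here plain `transformers` is used with Contriever's usual mean pooling, and the corpus/query strings are placeholders:

```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("facebook/contriever-msmarco")
model = AutoModel.from_pretrained("facebook/contriever-msmarco")

def embed(texts):
    # Mean-pool token states over non-padding positions (Contriever pooling).
    enc = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**enc).last_hidden_state
    mask = enc["attention_mask"].unsqueeze(-1).float()
    return (hidden * mask).sum(dim=1) / mask.sum(dim=1)

corpus = ["Title A. Abstract A.", "Title B. Abstract B."]  # title + abstract
queries = ["Target summary of one review."]                # the `target` field

scores = embed(queries) @ embed(corpus).T        # dot-product relevance
k = min(9, len(corpus))                          # k == 9 for this dataset
top_docs = scores.topk(k, dim=1).indices         # indices of retrieved docs
```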
Retrieval results on the `train` set:
| Recall@100 | Rprec | Precision@k | Recall@k |
| ----------- | ----------- | ----------- | ----------- |
| 0.7790 | 0.4487 | 0.3438 | 0.4800 |
Retrieval results on the `validation` set:
| Recall@100 | Rprec | Precision@k | Recall@k |
| ----------- | ----------- | ----------- | ----------- |
| 0.7856 | 0.4424 | 0.3534 | 0.4913 |
Retrieval results on the `test` set:
N/A. Test set is blind so we do not have any queries. | [
-0.11586358398199081,
-0.1805572807788849,
0.2325989305973053,
0.27066922187805176,
-0.2111787647008896,
-0.2340986728668213,
-0.16720139980316162,
-0.008864100091159344,
0.43911927938461304,
0.5355131030082703,
-0.4912106394767761,
-0.6674582958221436,
-0.8427175283432007,
0.2188964337110... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
sara-nabhani/lfd-proj | sara-nabhani | 2022-10-24T23:48:21Z | 27 | 0 | null | [
"region:us"
] | 2022-10-24T23:48:21Z | 2022-10-24T23:45:28.000Z | 2022-10-24T23:45:28 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622263669967651,
0.43461522459983826,
-0.52829909324646,
0.7012971639633179,
0.7915719747543335,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104475975036621,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
PlanTL-GOB-ES/CoNLL-NERC-es | PlanTL-GOB-ES | 2022-11-18T11:55:41Z | 27 | 2 | null | [
"task_categories:token-classification",
"task_ids:part-of-speech",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"language:es",
"region:us"
] | 2022-11-18T11:55:41Z | 2022-10-28T10:42:01.000Z | 2022-10-28T10:42:01 | ---
annotations_creators:
- expert-generated
language:
- es
language_creators:
- found
multilinguality:
- monolingual
pretty_name: CoNLL-NERC-es
size_categories: []
source_datasets: []
tags: []
task_categories:
- token-classification
task_ids:
- part-of-speech
---
# CoNLL-NERC-es
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Website:** https://www.cs.upc.edu/~nlp/tools/nerc/nerc.html
- **Point of Contact:** [Xavier Carreras](carreras@lsi.upc.es)
### Dataset Summary
CoNLL-NERC is the Spanish dataset of the CoNLL-2002 Shared Task [(Tjong Kim Sang, 2002)](https://aclanthology.org/W02-2024.pdf). The dataset is annotated with four types of named entities --persons, locations, organizations, and other miscellaneous entities-- formatted in the standard Beginning-Inside-Outside (BIO) format. The corpus consists of 8,324 train sentences with 19,400 named entities, 1,916 development sentences with 4,568 named entities, and 1,518 test sentences with 3,644 named entities.
We use this corpus as part of the EvalEs Spanish language benchmark.
### Supported Tasks and Leaderboards
Named Entity Recognition and Classification
### Languages
The dataset is in Spanish (`es-ES`)
## Dataset Structure
### Data Instances
<pre>
El DA O
Abogado NC B-PER
General AQ I-PER
del SP I-PER
Estado NC I-PER
, Fc O
Daryl VMI B-PER
Williams NC I-PER
, Fc O
subrayó VMI O
hoy RG O
la DA O
necesidad NC O
de SP O
tomar VMN O
medidas NC O
para SP O
proteger VMN O
al SP O
sistema NC O
judicial AQ O
australiano AQ O
frente RG O
a SP O
una DI O
página NC O
de SP O
internet NC O
que PR O
imposibilita VMI O
el DA O
cumplimiento NC O
de SP O
los DA O
principios NC O
básicos AQ O
de SP O
la DA O
Ley NC B-MISC
. Fp O
</pre>
### Data Fields
Every file has one token per line, with the word form or punctuation symbol in the first column and the corresponding IOB tag in the last one (the example above also shows a POS column in between). Sentences are separated by an empty line.
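A minimal reader for this layout follows; the IOB tag is taken from the last column, so the three-column rendering shown above also parses. The Latin-1 encoding is an assumption:

```python
def read_conll(path):
    """Return a list of (tokens, iob_tags) pairs, one per sentence."""
    sentences, tokens, tags = [], [], []
    with open(path, encoding="latin-1") as f:  # encoding is an assumption
        for line in f:
            parts = line.split()
            if not parts:  # an empty line closes the current sentence
                if tokens:
                    sentences.append((tokens, tags))
                    tokens, tags = [], []
            else:
                tokens.append(parts[0])  # word form
                tags.append(parts[-1])   # IOB tag is the last column
    if tokens:  # flush a final sentence without a trailing blank line
        sentences.append((tokens, tags))
    return sentences
```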
### Data Splits
- esp.train: 273037 lines
- esp.testa: 54837 lines (used as dev)
- esp.testb: 53049 lines (used as test)
## Dataset Creation
### Curation Rationale
[N/A]
### Source Data
The data is a collection of news wire articles made available by the Spanish EFE News Agency. The articles are from May 2000.
#### Initial Data Collection and Normalization
For more information visit the paper from the CoNLL-2002 Shared Task [(Tjong Kim Sang, 2002)](https://aclanthology.org/W02-2024.pdf).
#### Who are the source language producers?
For more information visit the paper from the CoNLL-2002 Shared Task [(Tjong Kim Sang, 2002)](https://aclanthology.org/W02-2024.pdf).
### Annotations
#### Annotation process
For more information visit the paper from the CoNLL-2002 Shared Task [(Tjong Kim Sang, 2002)](https://aclanthology.org/W02-2024.pdf).
#### Who are the annotators?
The annotation was carried out by the TALP Research Center of the Technical University of Catalonia (UPC) and the Center of Language and Computation (CLiC) of the University of Barcelona (UB), and funded by the European Commission through the NAMIC project (IST-1999-12392).
For more information visit the paper from the CoNLL-2002 Shared Task [(Tjong Kim Sang, 2002)](https://aclanthology.org/W02-2024.pdf).
### Personal and Sensitive Information
[N/A]
## Considerations for Using the Data
### Social Impact of Dataset
This dataset contributes to the development of language models in Spanish.
### Discussion of Biases
[N/A]
### Other Known Limitations
[N/A]
## Additional Information
### Dataset Curators
### Licensing Information
### Citation Information
The following paper must be cited when using this corpus:
Erik F. Tjong Kim Sang. 2002. Introduction to the CoNLL-2002 Shared Task: Language-Independent Named Entity Recognition. In COLING-02: The 6th Conference on Natural Language Learning 2002 (CoNLL-2002).
### Contributions
[N/A]
| [
-0.5485884547233582,
-0.5844907164573669,
0.13661015033721924,
0.45457491278648376,
-0.12394633144140244,
0.0560530386865139,
-0.5059865117073059,
-0.6120336651802063,
0.561895489692688,
0.6444094181060791,
-0.537354588508606,
-0.7512392401695251,
-0.6638321280479431,
0.4943670630455017,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
ghomasHudson/muld_HotpotQA | ghomasHudson | 2022-11-02T11:19:58Z | 27 | 0 | null | [
"region:us"
] | 2022-11-02T11:19:58Z | 2022-11-02T11:15:30.000Z | 2022-11-02T11:15:30 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622263669967651,
0.43461522459983826,
-0.52829909324646,
0.7012971639633179,
0.7915719747543335,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104475975036621,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
jpwahle/machine-paraphrase-dataset | jpwahle | 2022-11-18T16:54:17Z | 27 | 1 | identifying-machine-paraphrased-plagiarism | [
"task_categories:text-classification",
"task_categories:text-generation",
"annotations_creators:machine-generated",
"language_creators:machine-generated",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"language:en",
"license:cc-by-4.0",
"spinbot",
"spinn... | 2022-11-18T16:54:17Z | 2022-11-06T08:21:07.000Z | 2022-11-06T08:21:07 | ---
annotations_creators:
- machine-generated
language:
- en
language_creators:
- machine-generated
license:
- cc-by-4.0
multilinguality:
- monolingual
pretty_name: Machine Paraphrase Dataset (SpinnerChief/SpinBot)
size_categories:
- 100K<n<1M
source_datasets:
- original
tags:
- spinbot
- spinnerchief
- plagiarism
- paraphrase
- academic integrity
- arxiv
- wikipedia
- theses
task_categories:
- text-classification
- text-generation
task_ids: []
paperswithcode_id: identifying-machine-paraphrased-plagiarism
dataset_info:
- split: train
download_size: 393224
dataset_size: 393224
- split: test
download_size: 655376
dataset_size: 655376
---
# Dataset Card for Machine Paraphrase Dataset (MPC)
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Repository:** https://github.com/jpwahle/iconf22-paraphrase
- **Paper:** https://link.springer.com/chapter/10.1007/978-3-030-96957-8_34
- **Total size:** 533 MB
- **Train size:** 340 MB
- **Test size:** 193 MB
### Dataset Summary
The Machine Paraphrase Corpus (MPC) consists of ~200k examples of original and machine-paraphrased text, produced with two online paraphrasing tools.
It uses two paraphrasing tools (SpinnerChief, SpinBot) on three source texts (Wikipedia, arXiv, student theses).
The examples are **not** aligned, i.e., we sample different paragraphs for originals and paraphrased versions.
### How to use it
You can load the dataset using the `load_dataset` function:
```python
from datasets import load_dataset
ds = load_dataset("jpwahle/machine-paraphrase-dataset")
print(ds["train"][0])  # load_dataset returns a DatasetDict keyed by split
#OUTPUT:
{
'text': 'The commemoration was revealed on Whit Monday 16 May 1921 by the Prince of Wales later King Edward VIII with Lutyens in participation At the divulging function Lord Fortescue gave a discourse in which he evaluated that 11600 people from Devon had been slaughtered while serving in the war He later expressed that somewhere in the range of 63700 8000 regulars 36700 volunteers and 19000 recruits had served in the military The names of the fallen were recorded on a move of respect of which three duplicates were made one for Exeter Cathedral one to be held by the district chamber and one which the Prince of Wales put in an empty in the base of the war dedication The rulers visit created impressive energy in the zone A large number of individuals lined the road to welcome his motorcade and shops on the High Street hung out pennants with inviting messages After the uncovering Edward went through ten days visiting the neighborhood ',
'label': 1,
'dataset': 'wikipedia',
'method': 'spinbot'
}
```
### Supported Tasks and Leaderboards
Paraphrase Identification
### Languages
English
## Dataset Structure
### Data Instances
```json
{
'text': 'The commemoration was revealed on Whit Monday 16 May 1921 by the Prince of Wales later King Edward VIII with Lutyens in participation At the divulging function Lord Fortescue gave a discourse in which he evaluated that 11600 people from Devon had been slaughtered while serving in the war He later expressed that somewhere in the range of 63700 8000 regulars 36700 volunteers and 19000 recruits had served in the military The names of the fallen were recorded on a move of respect of which three duplicates were made one for Exeter Cathedral one to be held by the district chamber and one which the Prince of Wales put in an empty in the base of the war dedication The rulers visit created impressive energy in the zone A large number of individuals lined the road to welcome his motorcade and shops on the High Street hung out pennants with inviting messages After the uncovering Edward went through ten days visiting the neighborhood ',
'label': 1,
'dataset': 'wikipedia',
'method': 'spinbot'
}
```
### Data Fields
| Feature | Description |
| --- | --- |
| `text` | The original or machine-paraphrased paragraph. |
| `label` | Whether it is a paraphrase (1) or the original (0). |
| `dataset` | The source dataset (Wikipedia, arXiv, or theses). |
| `method` | The method used (SpinBot, SpinnerChief, original). |
### Data Splits
- train (Wikipedia x Spinbot)
- test ([Wikipedia, arXiv, theses] x [SpinBot, SpinnerChief])
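Because `dataset` and `method` are stored per example, the mixed test split can be sliced into individual source/tool combinations with a plain `filter` (the lowercase field values follow the example instance above):

```python
from datasets import load_dataset

ds = load_dataset("jpwahle/machine-paraphrase-dataset")
# e.g. keep arXiv originals plus their SpinnerChief paraphrases
arxiv_spinnerchief = ds["test"].filter(
    lambda ex: ex["dataset"] == "arxiv"
    and ex["method"] in ("original", "spinnerchief")
)
```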
## Dataset Creation
### Curation Rationale
Providing a resource for testing against machine-paraphrased plagiarism.
### Source Data
#### Initial Data Collection and Normalization
- Paragraphs from `featured articles` from the English Wikipedia dump
- Paragraphs from full-text pdfs of arXMLiv
- Paragraphs from full-text pdfs of Czech student thesis (bachelor, master, PhD).
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[Jan Philip Wahle](https://jpwahle.com/)
### Licensing Information
The Machine Paraphrase Dataset is released under CC BY-NC 4.0. By using this corpus, you agree to its usage terms.
### Citation Information
```bib
@inproceedings{10.1007/978-3-030-96957-8_34,
title = {Identifying Machine-Paraphrased Plagiarism},
author = {Wahle, Jan Philip and Ruas, Terry and Folt{\'y}nek, Tom{\'a}{\v{s}} and Meuschke, Norman and Gipp, Bela},
year = 2022,
booktitle = {Information for a Better World: Shaping the Global Future},
publisher = {Springer International Publishing},
address = {Cham},
pages = {393--413},
isbn = {978-3-030-96957-8},
editor = {Smits, Malte},
abstract = {Employing paraphrasing tools to conceal plagiarized text is a severe threat to academic integrity. To enable the detection of machine-paraphrased text, we evaluate the effectiveness of five pre-trained word embedding models combined with machine learning classifiers and state-of-the-art neural language models. We analyze preprints of research papers, graduation theses, and Wikipedia articles, which we paraphrased using different configurations of the tools SpinBot and SpinnerChief. The best performing technique, Longformer, achieved an average F1 score of 80.99{\%} (F1 = 99.68{\%} for SpinBot and F1 = 71.64{\%} for SpinnerChief cases), while human evaluators achieved F1 = 78.4{\%} for SpinBot and F1 = 65.6{\%} for SpinnerChief cases. We show that the automated classification alleviates shortcomings of widely-used text-matching systems, such as Turnitin and PlagScan.}
}
```
### Contributions
Thanks to [@jpwahle](https://github.com/jpwahle) for adding this dataset. | [
-0.47401148080825806,
-0.7597266435623169,
0.6884297132492065,
0.08764684945344925,
-0.32071536779403687,
-0.14697138965129852,
0.048032257705926895,
0.029512900859117508,
0.34213152527809143,
0.6713885068893433,
-0.34143584966659546,
-0.556086540222168,
-0.6260330677032471,
0.383979707956... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
liuyanchen1015/VALUE_mnli_been_done | liuyanchen1015 | 2022-11-28T22:28:28Z | 27 | 0 | null | [
"region:us"
] | 2022-11-28T22:28:28Z | 2022-11-28T22:28:06.000Z | 2022-11-28T22:28:06 | ---
dataset_info:
features:
- name: premise
dtype: string
- name: hypothesis
dtype: string
- name: label
dtype: int64
- name: idx
dtype: int64
- name: score
dtype: int64
splits:
- name: train
num_bytes: 11563230
num_examples: 48515
- name: dev_matched
num_bytes: 290459
num_examples: 1226
- name: dev_mismatched
num_bytes: 377910
num_examples: 1509
- name: test_matched
num_bytes: 296760
num_examples: 1199
- name: test_mismatched
num_bytes: 380324
num_examples: 1541
download_size: 8136354
dataset_size: 12908683
---
# Dataset Card for "VALUE_mnli_been_done"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.26310184597969055,
-0.3351023197174072,
0.16103476285934448,
0.2805338203907013,
-0.25379472970962524,
-0.09930228441953659,
0.3357871174812317,
-0.14720027148723602,
0.898489773273468,
0.5977532267570496,
-0.7979893088340759,
-0.5613586902618408,
-0.5324400067329407,
-0.227590605616569... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
liuyanchen1015/VALUE_mnli_drop_aux | liuyanchen1015 | 2022-11-28T22:29:36Z | 27 | 0 | null | [
"region:us"
] | 2022-11-28T22:29:36Z | 2022-11-28T22:29:12.000Z | 2022-11-28T22:29:12 | ---
dataset_info:
features:
- name: premise
dtype: string
- name: hypothesis
dtype: string
- name: label
dtype: int64
- name: idx
dtype: int64
- name: score
dtype: int64
splits:
- name: train
num_bytes: 16847569
num_examples: 78157
- name: dev_matched
num_bytes: 416576
num_examples: 1924
- name: dev_mismatched
num_bytes: 415096
num_examples: 1847
- name: test_matched
num_bytes: 402499
num_examples: 1945
- name: test_mismatched
num_bytes: 417259
num_examples: 1836
download_size: 11952293
dataset_size: 18498999
---
# Dataset Card for "VALUE_mnli_drop_aux"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.6807539463043213,
-0.10530191659927368,
-0.01710299402475357,
0.011606745421886444,
-0.16910307109355927,
-0.1012062281370163,
0.3248404264450073,
-0.31970059871673584,
0.598055362701416,
0.37715527415275574,
-1.1140284538269043,
-0.5121181607246399,
-0.5824251174926758,
-0.118791803717... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
hwchase17/compositional_celebrities | hwchase17 | 2022-11-29T01:52:15Z | 27 | 2 | null | [
"region:us"
] | 2022-11-29T01:52:15Z | 2022-11-29T01:33:34.000Z | 2022-11-29T01:33:34 | Entry not found | [
-0.3227649927139282,
-0.225684255361557,
0.862226128578186,
0.43461498618125916,
-0.5282987952232361,
0.7012963891029358,
0.7915717363357544,
0.07618629932403564,
0.7746025919914246,
0.2563219666481018,
-0.7852816581726074,
-0.2257382869720459,
-0.9104480743408203,
0.5715669393539429,
-0... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
ryvalenza/ryan_photos | ryvalenza | 2022-11-29T05:46:43Z | 27 | 0 | null | [
"region:us"
] | 2022-11-29T05:46:43Z | 2022-11-29T05:43:09.000Z | 2022-11-29T05:43:09 | Entry not found | [
-0.3227649927139282,
-0.225684255361557,
0.862226128578186,
0.43461498618125916,
-0.5282987952232361,
0.7012963891029358,
0.7915717363357544,
0.07618629932403564,
0.7746025919914246,
0.2563219666481018,
-0.7852816581726074,
-0.2257382869720459,
-0.9104480743408203,
0.5715669393539429,
-0... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
shi-labs/oneformer_demo | shi-labs | 2022-12-07T17:24:22Z | 27 | 0 | null | [
"region:us"
] | 2022-12-07T17:24:22Z | 2022-12-01T00:19:24.000Z | 2022-12-01T00:19:24 | Entry not found | [
-0.3227649927139282,
-0.225684255361557,
0.862226128578186,
0.43461498618125916,
-0.5282987952232361,
0.7012963891029358,
0.7915717363357544,
0.07618629932403564,
0.7746025919914246,
0.2563219666481018,
-0.7852816581726074,
-0.2257382869720459,
-0.9104480743408203,
0.5715669393539429,
-0... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
m-aliabbas/idrak_timit_subsample1 | m-aliabbas | 2022-12-06T14:44:44Z | 27 | 0 | null | [
"region:us"
] | 2022-12-06T14:44:44Z | 2022-12-06T14:44:32.000Z | 2022-12-06T14:44:32 | Entry not found | [
-0.3227649927139282,
-0.225684255361557,
0.862226128578186,
0.43461498618125916,
-0.5282987952232361,
0.7012963891029358,
0.7915717363357544,
0.07618629932403564,
0.7746025919914246,
0.2563219666481018,
-0.7852816581726074,
-0.2257382869720459,
-0.9104480743408203,
0.5715669393539429,
-0... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
b-mc2/wikihow_lists | b-mc2 | 2023-01-27T00:50:59Z | 27 | 7 | null | [
"task_categories:summarization",
"task_categories:question-answering",
"size_categories:10K<n<100K",
"language:en",
"license:cc-by-nc-sa-3.0",
"lists",
"bullets",
"steps",
"summary",
"region:us"
] | 2023-01-27T00:50:59Z | 2023-01-27T00:36:11.000Z | 2023-01-27T00:36:11 | ---
license: cc-by-nc-sa-3.0
task_categories:
- summarization
- question-answering
language:
- en
tags:
- lists
- bullets
- steps
- summary
pretty_name: wikihow_lists
size_categories:
- 10K<n<100K
---
# Dataset Card for WikiHow Lists
### Dataset Summary
Contains a CSV of a subset of WikiHow articles.
Subsets include articles that have summaries in numbered-list format, an unordered list of ingredients, or an unordered list of items needed for the article.
The CSV contains a pageId to reference back to the source, the title of the article, the result with the list data, and a column specifying the result type (ingredient, needed items, or summary).
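A short filtering sketch for the CSV described above; the file name and column names (`pageId`, `title`, `result`, `result_type`) are assumptions based on this description, not confirmed headers:
```python
import pandas as pd

# File name and column names are assumed from the card's description above.
df = pd.read_csv("wikihow_lists.csv")

# Keep only the numbered-list article summaries, with a reference back to the source page.
summaries = df[df["result_type"] == "summary"][["pageId", "title", "result"]]
print(summaries.head())
```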
### Licensing Information
Data is from WikiHow; the license for the content is located here:
https://www.wikihow.com/wikiHow:Creative-Commons | [
-0.08427014946937561,
-0.18315376341342926,
-0.15619540214538574,
0.00928302388638258,
-0.5028583407402039,
0.15986959636211395,
0.011799129657447338,
0.14690116047859192,
0.6019822359085083,
0.6650339961051941,
-0.7582610845565796,
-0.7913315296173096,
-0.3088788390159607,
0.0953670144081... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
dctanner/oa_recipes | dctanner | 2023-02-24T13:42:50Z | 27 | 4 | null | [
"region:us"
] | 2023-02-24T13:42:50Z | 2023-02-24T11:52:38.000Z | 2023-02-24T11:52:38 | ---
dataset_info:
features:
- name: INSTRUCTION
dtype: string
- name: RESPONSE
dtype: string
- name: SOURCE
dtype: string
splits:
- name: train
num_bytes: 7600684
num_examples: 4747
download_size: 3325663
dataset_size: 7600684
---
# Dataset Card for Recipes dialogue
Derived from the Kaggle dataset [Recipes from Tasty](https://www.kaggle.com/datasets/zeeenb/recipes-from-tasty): recipe ingredients and instructions are turned into chat dialogue using a preset list of user prompt templates.
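A hedged sketch of that templating step; the template strings, field names, and helper function below are hypothetical, since the preset template list is not published in this card:
```python
import random

# Hypothetical user prompt templates; the actual preset list is not published here.
TEMPLATES = [
    "How do I make {name}?",
    "Give me a recipe for {name}.",
    "What are the steps to cook {name}?",
]

def recipe_to_dialogue(name, ingredients, instructions):
    """Render one recipe as an INSTRUCTION/RESPONSE pair matching the card's features."""
    prompt = random.choice(TEMPLATES).format(name=name)
    response = (
        "Ingredients:\n- " + "\n- ".join(ingredients)
        + "\n\nInstructions:\n" + instructions
    )
    return {"INSTRUCTION": prompt, "RESPONSE": response, "SOURCE": "recipes-from-tasty"}

print(recipe_to_dialogue("pancakes", ["flour", "milk", "eggs"], "Mix everything and fry."))
```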
Dataset license: CC0: Public Domain. | [
-0.25733068585395813,
-0.7157063484191895,
0.33967074751853943,
-0.019563106819987297,
-0.06516436487436295,
-0.012317906133830547,
-0.074959896504879,
-0.045819927006959915,
0.6635963916778564,
1.1639690399169922,
-1.1662577390670776,
-0.6342380046844482,
-0.36776286363601685,
-0.13870339... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
vicclab/fairy_tales | vicclab | 2023-02-27T10:35:24Z | 27 | 2 | null | [
"task_categories:text-generation",
"language:en",
"region:us"
] | 2023-02-27T10:35:24Z | 2023-02-26T01:18:41.000Z | 2023-02-26T01:18:41 | ---
language:
- en
task_categories:
- text-generation
---
Concatenated and edited collection of fairy tales taken from Project Gutenberg; a download-and-concatenate sketch follows the list of source texts below.
Texts:
https://www.gutenberg.org/files/2591/2591-0.txt
https://www.gutenberg.org/files/503/503-0.txt
https://www.gutenberg.org/files/7277/7277-0.txt
https://www.gutenberg.org/cache/epub/35862/pg35862.txt
https://www.gutenberg.org/cache/epub/69739/pg69739.txt
https://www.gutenberg.org/files/2435/2435-0.txt
https://www.gutenberg.org/cache/epub/7871/pg7871.txt
https://www.gutenberg.org/files/8933/8933-0.txt
https://www.gutenberg.org/cache/epub/30834/pg30834.txt
https://www.gutenberg.org/cache/epub/68589/pg68589.txt
https://www.gutenberg.org/cache/epub/34453/pg34453.txt
https://www.gutenberg.org/cache/epub/8653/pg8653.txt
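A minimal sketch reproducing only the concatenation (the manual editing step is not reproduced); the output file name is arbitrary:
```python
import urllib.request

# The first few Gutenberg URLs from the list above; extend with the remaining ones.
URLS = [
    "https://www.gutenberg.org/files/2591/2591-0.txt",
    "https://www.gutenberg.org/files/503/503-0.txt",
    "https://www.gutenberg.org/files/7277/7277-0.txt",
]

with open("fairy_tales.txt", "w", encoding="utf-8") as out:
    for url in URLS:
        text = urllib.request.urlopen(url).read().decode("utf-8", errors="replace")
        out.write(text + "\n")
```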
-0.411450058221817,
-0.37213748693466187,
0.3779488205909729,
0.34817740321159363,
-0.5383773446083069,
0.2924421429634094,
-0.05188356339931488,
-0.7882498502731323,
0.41251054406166077,
0.9113262891769409,
-0.717158854007721,
0.06518779695034027,
-0.5751011371612549,
0.5612471699714661,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
AnanthZeke/naamapadam | AnanthZeke | 2023-03-16T05:18:15Z | 27 | 0 | null | [
"task_categories:token-classification",
"task_ids:named-entity-recognition",
"annotations_creators:machine-generated",
"language_creators:machine-generated",
"multilinguality:multilingual",
"size_categories:1M<n<10M",
"source_datasets:original",
"language:as",
"language:bn",
"language:gu",
"lang... | 2023-03-16T05:18:15Z | 2023-03-14T08:26:19.000Z | 2023-03-14T08:26:19 | ---
annotations_creators:
- machine-generated
language_creators:
- machine-generated
language:
- as
- bn
- gu
- hi
- kn
- ml
- mr
- or
- pa
- ta
- te
license:
- cc0-1.0
multilinguality:
- multilingual
pretty_name: naamapadam
size_categories:
- 1M<n<10M
source_datasets:
- original
task_categories:
- token-classification
task_ids:
- named-entity-recognition
---
# Dataset Card for naamapadam
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** [Needs More Information]
- **Repository:** https://github.com/AI4Bharat/indicner
- **Paper:** [Needs More Information]
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** Anoop Kunchukuttan
### Dataset Summary
Naamapadam is the largest publicly available named-entity-annotated dataset for 11 Indic languages. The corpus was created by projecting named entities from the English side to the Indic-language side of an English-Indic parallel corpus. The dataset additionally contains manually labelled test sets for 8 Indic languages, each containing 500-1000 sentences.
### Supported Tasks and Leaderboards
**Tasks:** NER on Indian languages.
**Leaderboards:** There is currently no leaderboard for this dataset.
### Languages
- `Assamese (as)`
- `Bengali (bn)`
- `Gujarati (gu)`
- `Kannada (kn)`
- `Hindi (hi)`
- `Malayalam (ml)`
- `Marathi (mr)`
- `Oriya (or)`
- `Punjabi (pa)`
- `Tamil (ta)`
- `Telugu (te)`
## Dataset Structure
### Data Instances
{'words': ['उन्हेनें', 'शिकांगों','में','बोरोडिन','की','पत्नी','को','तथा','वाशिंगटन','में','रूसी','व्यापार','संघ','को','पैसे','भेजे','।'],
'ner': [0, 3, 0, 1, 0, 0, 0, 0, 3, 0, 5, 6, 6, 0, 0, 0, 0],
}
### Data Fields
- `words`: Raw tokens in the dataset.
- `ner`: the NER tags for this dataset.
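The integer-to-tag mapping is not stated in this card. The sketch below uses a mapping inferred from the data instance above (1 on a person name, 3 on place names, 5/6 on an organization span); verify it against the dataset's ClassLabel features before relying on it:
```python
# Inferred mapping -- an assumption, check against the dataset's features.
ID2LABEL = {0: "O", 1: "B-PER", 2: "I-PER", 3: "B-LOC", 4: "I-LOC", 5: "B-ORG", 6: "I-ORG"}

words = ["बोरोडिन", "की", "पत्नी", "रूसी", "व्यापार", "संघ"]
ner = [1, 0, 0, 5, 6, 6]
print([(word, ID2LABEL[tag]) for word, tag in zip(words, ner)])
```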
### Data Splits
(to be updated, see paper for correct numbers)
| Language | Train | Validation | Test |
|---:|---:|---:|---:|
| as | 10266 | 52 | 51 |
| bn | 961679 | 4859 | 607 |
| gu | 472845 | 2389 | 50 |
| hi | 985787 | 13460 | 437 |
| kn | 471763 | 2381 | 1019 |
| ml | 716652 | 3618 | 974 |
| mr | 455248 | 2300 | 1080 |
| or | 196793 | 993 | 994 |
| pa | 463534 | 2340 | 2342 |
| ta | 497882 | 2795 | 49 |
| te | 507741 | 2700 | 53 |
## Usage
You need the `datasets` package installed to use the :rocket: HuggingFace datasets repository. Install it via pip:
```bash
pip install datasets
```
To load the dataset, use:<br/>
```python
from datasets import load_dataset
naamapadam = load_dataset('ai4bharat/naamapadam')
```
## Dataset Creation
We use the parallel corpus from the Samanantar Dataset between English and the 11 major Indian languages to create the NER dataset. We annotate the English portion of the parallel corpus with an existing state-of-the-art NER model. We then use word-level alignments learned from the parallel corpus to project the entity labels from English to the Indian language.
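A simplified sketch of the projection idea: copy each non-O English tag to the Indic token it is aligned with. The paper's actual pipeline handles span consistency and alignment noise more carefully, so this is illustrative only:
```python
def project_labels(src_tags, alignment, tgt_len):
    """Copy non-O source tags onto aligned target tokens; unaligned tokens stay O."""
    tgt_tags = ["O"] * tgt_len
    for src_i, tgt_i in alignment:
        if src_tags[src_i] != "O":
            tgt_tags[tgt_i] = src_tags[src_i]
    return tgt_tags

# English "Borodin 's wife" with "Borodin" (B-PER) aligned to target position 0.
print(project_labels(["B-PER", "O", "O"], [(0, 0), (2, 2)], 3))  # ['B-PER', 'O', 'O']
```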
### Curation Rationale
naamapadam was built from the [Samanantar dataset](https://indicnlp.ai4bharat.org/samanantar/) for the task of Named Entity Recognition in Indic languages. It was created to provide new resources for Indic languages, which have been under-served in Natural Language Processing.
### Source Data
[Samanantar dataset](https://indicnlp.ai4bharat.org/samanantar/)
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
NER annotations were done following the CoNLL-2003 guidelines.
#### Who are the annotators?
The annotations for the testset have been done by volunteers who are proficient in the respective languages. We would like to thank all the volunteers:
- Anil Mhaske
- Anoop Kunchukuttan
- Archana Mhaske
- Arnav Mhaske
- Gowtham Ramesh
- Harshit Kedia
- Nitin Kedia
- Rudramurthy V
- Sangeeta Rajagopal
- Sumanth Doddapaneni
- Vindhya DS
- Yash Madhani
- Kabir Ahuja
- Shallu Rani
- Armin Virk
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
The purpose of this dataset is to provide a large-scale Named Entity Recognition dataset for Indic languages. Since the information (data points) has been obtained from public resources, we do not think there is a negative social impact in releasing this data.
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
<!-- <a rel="license" float="left" href="http://creativecommons.org/publicdomain/zero/1.0/">
<img src="https://licensebuttons.net/p/zero/1.0/88x31.png" style="border-style: none;" alt="CC0" width="100" />
<img src="https://mirrors.creativecommons.org/presskit/buttons/88x31/png/by.png" style="border-style: none;" alt="CC-BY" width="100" href="http://creativecommons.org/publicdomain/zero/1.0/"/>
</a>
<br/> -->
**CC0 License Statement**
<a rel="license" float="left" href="https://creativecommons.org/about/cclicenses/">
<img src="https://licensebuttons.net/p/zero/1.0/88x31.png" style="border-style: none;" alt="CC0" width="100"/>
</a>
<br>
<br>
- We do not own any of the text from which this data has been extracted.
- We license the actual packaging of the mined data under the [Creative Commons CC0 license (“no rights reserved”)](http://creativecommons.org/publicdomain/zero/1.0).
- To the extent possible under law, <a rel="dct:publisher" href="https://ai4bharat.iitm.ac.in/"> <span property="dct:title">AI4Bharat</span></a> has waived all copyright and related or neighboring rights to <span property="dct:title">Naamapadam</span> manually collected data and existing sources.
- This work is published from: India.
### Citation Information
If you are using the Naamapadam corpus, please cite the following article:
```
@misc{mhaske2022naamapadam,
doi = {10.48550/ARXIV.2212.10168},
url = {https://arxiv.org/abs/2212.10168},
author = {Mhaske, Arnav and Kedia, Harshit and Doddapaneni, Sumanth and Khapra, Mitesh M. and Kumar, Pratyush and Murthy, Rudra and Kunchukuttan, Anoop},
title = {Naamapadam: A Large-Scale Named Entity Annotated Data for Indic Languages},
publisher = {arXiv},
year = {2022},
}
```
<!-- Contributors -->
### Contributors
- Arnav Mhaske <sub> ([AI4Bharat](https://ai4bharat.org), [IITM](https://www.iitm.ac.in)) </sub>
- Harshit Kedia <sub> ([AI4Bharat](https://ai4bharat.org), [IITM](https://www.iitm.ac.in)) </sub>
- Sumanth Doddapaneni <sub> ([AI4Bharat](https://ai4bharat.org), [IITM](https://www.iitm.ac.in)) </sub>
- Mitesh M. Khapra <sub> ([AI4Bharat](https://ai4bharat.org), [IITM](https://www.iitm.ac.in)) </sub>
- Pratyush Kumar <sub> ([AI4Bharat](https://ai4bharat.org), [Microsoft](https://www.microsoft.com/en-in/), [IITM](https://www.iitm.ac.in)) </sub>
- Rudra Murthy <sub> ([AI4Bharat](https://ai4bharat.org), [IBM](https://www.ibm.com))</sub>
- Anoop Kunchukuttan <sub> ([AI4Bharat](https://ai4bharat.org), [Microsoft](https://www.microsoft.com/en-in/), [IITM](https://www.iitm.ac.in)) </sub>
This work is the outcome of a volunteer effort as part of the [AI4Bharat initiative](https://ai4bharat.iitm.ac.in).
<!-- Contact -->
### Contact
- Anoop Kunchukuttan ([anoop.kunchukuttan@gmail.com](mailto:anoop.kunchukuttan@gmail.com))
- Rudra Murthy V ([rmurthyv@in.ibm.com](mailto:rmurthyv@in.ibm.com)) | [
-0.501878023147583,
-0.2561119496822357,
0.04048418253660202,
0.5009881258010864,
-0.2923004925251007,
0.23517614603042603,
-0.32623156905174255,
-0.5110269784927368,
0.5121564865112305,
0.2364046424627304,
-0.39804548025131226,
-0.6300385594367981,
-0.6445269584655762,
0.5891228318214417,... | null | null | null | null | null | null | null | null | null | null | null | null | null |