id stringlengths 2 115 | lastModified stringlengths 24 24 | tags list | author stringlengths 2 42 ⌀ | description stringlengths 0 6.67k ⌀ | citation stringlengths 0 10.7k ⌀ | likes int64 0 3.66k | downloads int64 0 8.89M | created timestamp[us] | card stringlengths 11 977k | card_len int64 11 977k | embeddings list |
|---|---|---|---|---|---|---|---|---|---|---|---|
KaiLv/UDR_COPA | 2023-06-21T12:15:53.000Z | [
"region:us"
] | KaiLv | null | null | 0 | 29 | 2023-06-21T12:15:46 | ---
dataset_info:
features:
- name: idx
dtype: int64
- name: label
dtype: string
- name: premise
dtype: string
- name: question
dtype: string
- name: mirrored
dtype: bool
- name: choices
dtype: string
- name: len_question
dtype: int64
- name: max_len_choices
dtype: int64
splits:
- name: train
num_bytes: 110350
num_examples: 500
- name: test
num_bytes: 107164
num_examples: 500
download_size: 129892
dataset_size: 217514
---
# Dataset Card for "UDR_COPA"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 664 | [
[
-0.0340576171875,
-0.0104827880859375,
0.007099151611328125,
0.0224151611328125,
-0.025421142578125,
0.015899658203125,
0.034576416015625,
-0.02276611328125,
0.0482177734375,
0.03533935546875,
-0.042022705078125,
-0.061553955078125,
-0.0372314453125,
-0.0083... |
KaiLv/UDR_CosmosQA | 2023-06-21T12:35:02.000Z | [
"region:us"
] | KaiLv | null | null | 0 | 29 | 2023-06-21T12:34:48 | ---
dataset_info:
features:
- name: idx
dtype: int64
- name: question
dtype: string
- name: label
dtype: string
- name: choices
dtype: string
- name: len_question
dtype: int64
- name: max_len_choices
dtype: int64
splits:
- name: train
num_bytes: 11188271
num_examples: 18770
- name: test
num_bytes: 3979297
num_examples: 6030
- name: validation
num_bytes: 1722925
num_examples: 2603
- name: debug
num_bytes: 2985534
num_examples: 5000
download_size: 11095169
dataset_size: 19876027
---
# Dataset Card for "UDR_CosmosQA"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 736 | [
[
-0.044219970703125,
-0.0038280487060546875,
0.0132904052734375,
0.0121002197265625,
-0.007904052734375,
0.0099029541015625,
0.033233642578125,
-0.0020904541015625,
0.050018310546875,
0.0338134765625,
-0.059600830078125,
-0.05340576171875,
-0.028167724609375,
... |
shinonomelab/cleanvid-15m_map | 2023-07-02T04:22:55.000Z | [
"task_categories:text-to-video",
"task_categories:video-classification",
"size_categories:10M<n<100M",
"language:en",
"license:cc-by-4.0",
"captions",
"metadata",
"region:us"
] | shinonomelab | null | null | 9 | 29 | 2023-06-27T04:45:10 | ---
license: cc-by-4.0
dataset_info:
features:
- name: id
dtype: int64
- name: description
dtype: string
- name: duration
dtype: float64
- name: aspectratio
dtype: string
- name: videourl
dtype: string
- name: author
dtype: string
- name: categories
dtype: string
- name: framerate
dtype: float64
- name: r18
dtype: int64
splits:
- name: train
num_bytes: 16755833083
num_examples: 14394510
download_size: 5410262648
dataset_size: 16755833083
task_categories:
- text-to-video
- video-classification
language:
- en
tags:
- captions
- metadata
pretty_name: CleanVid Map (15M)
size_categories:
- 10M<n<100M
---
# CleanVid Map (15M) 🎥
### TempoFunk Video Generation Project
CleanVid-15M is a large-scale dataset of videos with multiple metadata entries such as:
- Textual Descriptions 📃
- Recording Equipment 📹
- Categories 🔠
- Framerate 🎞️
- Aspect Ratio 📺
CleanVid aims to improve the quality of the WebVid-10M dataset by adding more data and by dewatermarking the videos in it.
This dataset includes only the map with the URLs and metadata, with 3,694,510 more entries than the original WebVid-10M dataset.
Note that the videos are low-resolution, ranging from 240p to 480p. This shouldn't be a problem, since Text-To-Video models are typically trained at low resolutions anyway (resolution scaling is difficult in these models).
More Datasets to come for high-res use cases.
CleanVid is the foundation dataset for the TempoFunk Video Generation project.
Built from a crawl of Shutterstock from June 25, 2023.
## Format 📊
- id: Integer (int64) - Shutterstock video ID
- description: String - Description of the video
- duration: Float(64) - Duration of the video in seconds
- aspectratio: String - Aspect Ratio of the video separated by colons (":")
- videourl: String - URL of the video in the entry, in MP4 format. A WEBM version is also available most of the time (by changing the extension at the end of the URL).
- author: String - JSON string containing information about the author, such as `Recording Equipment`, `Style`, `Nationality` and others.
- categories: String - JSON string containing the categories of the video. (Values from Shutterstock, not assigned by us.)
- framerate: Float(64) - Framerate of the video
- r18: Bit (int64) - Whether the video is marked as mature content. 0 = Safe For Work; 1 = Mature Content
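Since `author` and `categories` are stored as JSON strings, an entry needs decoding before use. A minimal sketch (the entry below is trimmed from the sample further down; values are illustrative):

```python
import json

# A trimmed entry as one row of the map might look. Per the field
# descriptions above, `author` and `categories` are JSON strings
# and must be decoded before use.
entry = {
    "id": 1056934082,
    "aspectratio": "16:9",
    "videourl": "https://www.shutterstock.com/shutterstock/videos/1056934082/preview/stock-footage-rio-brazil-february-parade-of-the-samba-school-mangueira-at-the-marques-de-sapucai.mp4",
    "author": '{"displayName": "Celso Pupo", "location": "br"}',
    "categories": '["People"]',
    "framerate": 29.97,
    "r18": 0,
}

author = json.loads(entry["author"])          # JSON string -> dict
categories = json.loads(entry["categories"])  # JSON string -> list

# The WEBM variant can usually be reached by swapping the extension.
webm_url = entry["videourl"].rsplit(".", 1)[0] + ".webm"

# The aspect ratio is colon-separated, e.g. "16:9".
width_ratio, height_ratio = map(int, entry["aspectratio"].split(":"))
```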
## Code 👩💻
If you want to re-create this dataset on your own, the code is available here:
https://github.com/chavinlo/tempofunk-scrapper/tree/refractor1/sites/shutterstock
Due to rate limiting, you might need to obtain a proxy; proxy support is included in the repository.
## Sample 🧪
```json
{
"id": 1056934082,
"description": "Rio, Brazil - February 24, 2020: parade of the samba school Mangueira, at the Marques de Sapucai Sambodromo",
"duration": 9.76,
"aspectratio": "16:9",
"videourl": "https://www.shutterstock.com/shutterstock/videos/1056934082/preview/stock-footage-rio-brazil-february-parade-of-the-samba-school-mangueira-at-the-marques-de-sapucai.mp4",
"author": {
"accountsId": 101974372,
"contributorId": 62154,
"bio": "Sempre produzindo mais",
"location": "br",
"website": "www.dcpress.com.br",
"contributorTypeList": [
"photographer"
],
"equipmentList": [
"300mm f2.8",
"24-70mm",
"70-200mm",
"Nikon D7500 ",
"Nikon Df",
"Flashs Godox"
],
"styleList": [
"editorial",
"food",
"landscape"
],
"subjectMatterList": [
"photographer",
"people",
"nature",
"healthcare",
"food_and_drink"
],
"facebookUsername": "celso.pupo",
"googlePlusUsername": "celsopupo",
"twitterUsername": "celsopupo",
"storageKey": "/contributors/62154/avatars/thumb.jpg",
"cdnThumbPath": "/contributors/62154/avatars/thumb.jpg",
"displayName": "Celso Pupo",
"vanityUrlUsername": "rodrigues",
"portfolioUrlSuffix": "rodrigues",
"portfolioUrl": "https://www.shutterstock.com/g/rodrigues",
"instagramUsername": "celsopupo",
"hasPublicSets": true,
"instagramUrl": "https://www.instagram.com/celsopupo",
"facebookUrl": "https://www.facebook.com/celso.pupo",
"twitterUrl": "https://twitter.com/celsopupo"
},
"categories": [
"People"
],
"framerate": 29.97,
"r18": 0
}
```
## Credits 👥
### Main
- Lopho - Part of TempoFunk Video Generation
- Chavinlo - Part of TempoFunk Video Generation & CleanVid Crawling, Scraping and Formatting
```
@InProceedings{Bain21,
author = "Max Bain and Arsha Nagrani and G{\"u}l Varol and Andrew Zisserman",
title = "Frozen in Time: A Joint Video and Image Encoder for End-to-End Retrieval",
booktitle = "IEEE International Conference on Computer Vision",
year = "2021",
}
```
### Extra
- Salt - Base Threading Code (2022) | 4,863 | [
[
-0.0487060546875,
-0.0276947021484375,
0.0052032470703125,
0.0227813720703125,
-0.044097900390625,
0.017303466796875,
-0.0052642822265625,
-0.0238800048828125,
0.0308074951171875,
-0.0050811767578125,
-0.042449951171875,
-0.057220458984375,
-0.046112060546875,
... |
SiberiaSoft/SiberianDatasetXL | 2023-07-24T00:28:56.000Z | [
"task_categories:text-generation",
"task_categories:text2text-generation",
"task_categories:conversational",
"size_categories:100K<n<1M",
"language:ru",
"license:mit",
"region:us"
] | SiberiaSoft | null | null | 2 | 29 | 2023-07-07T16:44:34 | ---
license: mit
task_categories:
- text-generation
- text2text-generation
- conversational
language:
- ru
size_categories:
- 100K<n<1M
---
### SiberiaSoft/SiberianDatasetXL
A dataset of instructions, dialogues, and QA
## Task distribution (percentages):
| Task | Share |
|:-----------------------------------------------------------------------------:|:---------------------:|
| Live dialogues with context | 38.746% |
| QA with long answers | 11.907% |
| Den4ikAI/russian_instructions_2 (cleaned) | 9.65% |
| Text-based QA from Den4ikAI/ru_sberquad_long_answers | 9.203% |
| QA with short answers | 8.57% |
| Instructions from IlyaGusev/ru_turbo_alpaca_evol_instruct (very heavily cleaned) | 6.087% |
| Personalized dialogues with context | 5.795% |
| Instructions from its5Q/yandex-q | 4.373% |
| QA using Wikipedia | 2.822% |
| Instructions from lksy/ru_instruct_gpt4 (heavily cleaned) | 2.741% |
| Problem solving | 0.085% |
| "Explain like I'm a child" QA | 0.02% |
### Citation
```
@MISC{SiberianDatasetXL,
  author = {Denis Petrov and Ivan Ramovich},
title = {Russian dataset for Instruct/Chat models},
url = {https://huggingface.co/datasets/SiberiaSoft/SiberianDatasetXL},
year = 2023
}
``` | 1,972 | [
[
-0.020355224609375,
-0.034210205078125,
0.01477813720703125,
0.031036376953125,
-0.0439453125,
0.00418853759765625,
0.018096923828125,
-0.0166778564453125,
0.033935546875,
-0.0007338523864746094,
-0.06585693359375,
-0.049713134765625,
-0.020172119140625,
-0.... |
aditijha/processed_lima | 2023-08-29T05:26:26.000Z | [
"region:us"
] | aditijha | null | null | 2 | 29 | 2023-07-16T21:33:28 | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: response
dtype: string
splits:
- name: train
num_bytes: 2942583
num_examples: 1000
- name: test
num_bytes: 80137
num_examples: 300
download_size: 31591
dataset_size: 3022720
---
# Dataset Card for "processed_lima"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 451 | [
[
-0.0281219482421875,
-0.0345458984375,
0.0309600830078125,
0.036865234375,
-0.03228759765625,
-0.01209259033203125,
0.027313232421875,
-0.0208282470703125,
0.0733642578125,
0.05560302734375,
-0.060638427734375,
-0.055389404296875,
-0.06146240234375,
-0.00927... |
jamescalam/llama-2-arxiv-papers | 2023-07-25T03:11:43.000Z | [
"language:en",
"arxiv:2307.09288",
"region:us"
] | jamescalam | null | null | 2 | 29 | 2023-07-25T03:02:42 | ---
language:
- en
pretty_name: Chunked Arxiv Papers for Llama 2
---
This dataset contains papers related to (and including) the [Llama 2 research paper](https://arxiv.org/abs/2307.09288). Related papers were identified by following a trail of references, extracting those papers with the [`arxiv-bot`](https://github.com/aurelio-labs/arxiv-bot) package, and repeating. | 370 | [
[
-0.01326751708984375,
-0.037384033203125,
0.04229736328125,
-0.0034027099609375,
-0.01053619384765625,
0.019866943359375,
0.033905029296875,
-0.045501708984375,
0.0245361328125,
0.053741455078125,
-0.0352783203125,
-0.01739501953125,
-0.04168701171875,
0.003... |
tasksource/esci | 2023-08-09T11:23:31.000Z | [
"task_categories:text-classification",
"task_categories:text-retrieval",
"language:en",
"language:ja",
"language:es",
"license:apache-2.0",
"arxiv:2206.06588",
"region:us"
] | tasksource | null | null | 0 | 29 | 2023-08-09T10:12:27 | ---
dataset_info:
features:
- name: example_id
dtype: int64
- name: query
dtype: string
- name: query_id
dtype: int64
- name: product_id
dtype: string
- name: product_locale
dtype: string
- name: esci_label
dtype: string
- name: small_version
dtype: int64
- name: large_version
dtype: int64
- name: product_title
dtype: string
- name: product_description
dtype: string
- name: product_bullet_point
dtype: string
- name: product_brand
dtype: string
- name: product_color
dtype: string
- name: product_text
dtype: string
splits:
- name: train
num_bytes: 5047037946
num_examples: 2027874
- name: test
num_bytes: 1631847321
num_examples: 652490
download_size: 2517788457
dataset_size: 6678885267
license: apache-2.0
task_categories:
- text-classification
- text-retrieval
language:
- en
- ja
- es
---
# Dataset Card for "esci"
ESCI product search dataset
https://github.com/amazon-science/esci-data/
Preprocessing:
- joined the two relevant files
- `product_text` aggregates all product text
- mapped `esci_label` to its full name
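For reference, these are the four ESCI relevance classes defined in the Shopping Queries paper, with a small helper to move between the full name and the one-letter code (a sketch; the exact casing of the label strings stored in the dataset is an assumption):

```python
# The four ESCI relevance classes from the Shopping Queries paper.
ESCI_LABELS = {
    "E": "Exact",
    "S": "Substitute",
    "C": "Complement",
    "I": "Irrelevant",
}

def to_short_code(full_name: str) -> str:
    """Invert the full-name mapping back to the one-letter ESCI code."""
    inverse = {v.lower(): k for k, v in ESCI_LABELS.items()}
    return inverse[full_name.lower()]
```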
```bib
@article{reddy2022shopping,
title={Shopping Queries Dataset: A Large-Scale {ESCI} Benchmark for Improving Product Search},
author={Chandan K. Reddy and Lluís Màrquez and Fran Valero and Nikhil Rao and Hugo Zaragoza and Sambaran Bandyopadhyay and Arnab Biswas and Anlu Xing and Karthik Subbian},
year={2022},
eprint={2206.06588},
archivePrefix={arXiv}
}
``` | 1,498 | [
[
-0.0279541015625,
-0.041595458984375,
0.02642822265625,
0.00942230224609375,
-0.016021728515625,
0.01123046875,
-0.00848388671875,
-0.0401611328125,
0.036712646484375,
0.0310211181640625,
-0.038818359375,
-0.0538330078125,
-0.032806396484375,
0.0226593017578... |
BrunoGR/Twitter_Sentiment_Analysis_Train_Corpus_in_Spanish | 2023-08-10T01:48:16.000Z | [
"language:es",
"license:apache-2.0",
"region:us"
] | BrunoGR | null | null | 0 | 29 | 2023-08-10T01:45:10 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: etiqueta
dtype: string
- name: texto
dtype: string
splits:
- name: train
num_bytes: 134544035
num_examples: 1082821
- name: test
num_bytes: 41458582
num_examples: 334641
download_size: 89208506
dataset_size: 176002617
license: apache-2.0
language:
- es
pretty_name: e
---
# Dataset Card for "Twitter_Sentiment_Analysis_Train_Corpus_in_Spanish"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 676 | [
[
-0.03204345703125,
-0.0156402587890625,
0.004520416259765625,
0.059722900390625,
-0.0146484375,
0.0285797119140625,
-0.003814697265625,
-0.00750732421875,
0.06597900390625,
0.01517486572265625,
-0.0623779296875,
-0.073974609375,
-0.05938720703125,
-0.0130004... |
mHossain/indic_model_indic_test_data_paraphrase_detection | 2023-08-20T19:26:15.000Z | [
"region:us"
] | mHossain | null | null | 0 | 29 | 2023-08-20T19:26:11 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: 'Unnamed: 0'
dtype: int64
- name: text
dtype: string
- name: label
dtype: int64
splits:
- name: train
num_bytes: 3849846.3
num_examples: 36000
- name: test
num_bytes: 427760.7
num_examples: 4000
download_size: 1899118
dataset_size: 4277607.0
---
# Dataset Card for "indic_model_indic_test_data_paraphrase_detection"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 654 | [
[
-0.0322265625,
-0.032196044921875,
0.0247802734375,
0.0272216796875,
-0.029083251953125,
-0.01409912109375,
0.0059661865234375,
-0.00435638427734375,
0.033294677734375,
0.047332763671875,
-0.03900146484375,
-0.066650390625,
-0.035675048828125,
0.008857727050... |
thesistranslation/distilled-ccmatrix-es-en | 2023-10-03T09:21:19.000Z | [
"language:es",
"language:en",
"region:us"
] | thesistranslation | null | null | 0 | 29 | 2023-08-26T13:47:17 | ---
dataset_info:
features:
- name: id
dtype: int32
- name: translation
dtype:
translation:
languages:
- es
- en
splits:
- name: train
num_bytes: 7090174966
num_examples: 30000000
download_size: 4926077685
dataset_size: 7090174966
language:
- es
- en
---
# Dataset Card for "distilled-ccmatrix-es-en"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 493 | [
[
-0.043701171875,
-0.0196533203125,
0.028167724609375,
0.0154266357421875,
-0.036956787109375,
0.0298004150390625,
-0.0020313262939453125,
0.0116119384765625,
0.054718017578125,
0.0247802734375,
-0.056854248046875,
-0.065673828125,
-0.06146240234375,
-0.00018... |
fondant-ai/fondant-cc-25m | 2023-09-28T08:00:56.000Z | [
"task_categories:text-to-image",
"size_categories:10M<n<100M",
"license:cc",
"art",
"region:us"
] | fondant-ai | null | null | 38 | 29 | 2023-09-15T18:56:54 | ---
license: cc
task_categories:
- text-to-image
tags:
- art
size_categories:
- 10M<n<100M
---
# Dataset Card for Fondant Creative Commons 25 million (fondant-cc-25m)

## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Changelog](#changelog)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [How to use it](#how-to-use-it)
- [How to contribute](#how-to-contribute)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Data Collection and Preprocessing](#data-collection-and-preprocessing)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Opting out](#opting-out)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Disclaimer](#disclaimer)
- [Discussion of Biases](#discussion-of-biases)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Contact](#contact)
## Dataset Description
- **Homepage:** https://www.fondant.ai/
- **Repository:** https://github.com/ml6team/fondant
- **Paper:** N/A
- **Leaderboard:** N/A
- **Point of Contact:** info@fondant.ai
### Changelog
|Release|Description|
|-|-|
|v0.1|Release of the Fondant-cc-25m dataset|
### Dataset Summary
Fondant-cc-25m contains 25 million image URLs with their respective [Creative Commons](https://creativecommons.org/)
license information collected from the [Common Crawl web corpus](https://commoncrawl.org/).
The dataset was created using [Fondant](https://fondant.ai), an open source framework that aims to simplify and speed up
large-scale data processing by making self-contained pipeline components reusable across pipelines, infrastructures and shareable within the community.
### Supported Tasks and Leaderboards
This dataset can be used for training or fine-tuning image generation or computer vision models.
### How to use it
To execute the pipeline locally, you must have [docker compose](https://docs.docker.com/compose/),
[Python](https://python.org) >=3.8 and [Git](https://git-scm.com/) installed on your system.
To ensure a successful example run, please allocate at least 8GB of RAM to your Docker environment.
**Note:** For Apple M1/M2 chip users:
- Make sure that Docker uses the linux/amd64 platform and not arm64. In the Docker Dashboard, go to Settings > Features in development and make sure to uncheck `Use containerd for pulling and storing images`.
- For improved execution speed, check the box that says `Use Rosetta for x86/amd64 emulation on Apple Silicon`.
We have prepared a sample Fondant pipeline for downloading the dataset.
1) Install Fondant by running:
```bash
pip install fondant
```
2) Clone the [Fondant GitHub repository](https://github.com/ml6team/fondant)
```bash
git clone https://github.com/ml6team/fondant.git
```
3) Make sure that Docker Compose is running, navigate to `fondant/examples/pipelines/filter-cc-25m`, and initiate the pipeline by executing:
```bash
fondant run pipeline --local
```
**Note:** For local testing purposes, the pipeline will only download the first 100,000 images.
If you want to download the full dataset, you will need to modify the component arguments in the `pipeline.py` file,
specifically the following part:
```python
load_from_hf_hub = ComponentOp(
component_dir="components/load_from_hf_hub",
arguments={
"dataset_name": "fondant-ai/fondant-cc-25m",
"column_name_mapping": load_component_column_mapping,
"n_rows_to_load": <HERE INSERT THE NUMBER OF IMAGES YOU WANT TO DOWNLOAD>
},
)
```
4) To visually inspect the results quickly, you can use:
```bash
fondant explore --base_path ./data
```
5) You can also choose to download the images to your local machine if you prefer; we have provided an [example script](https://huggingface.co/datasets/fondant-ai/fondant-cc-25m/blob/main/extract_images.py)
that enables this.
To run the script, simply execute the following:
```bash
python extract_images.py --parquet_file <Path to the Parquet file or folder containing the images> --save_folder <The folder where to save the images to>
```
### How to contribute
If you want to contribute to the dataset, the best way is to help us develop pipeline components for further processing.
Creating custom pipelines for specific purposes requires different building blocks.
Fondant pipelines can mix reusable components and custom components.

Components we are currently looking to add are the following ([GitHub issues](https://github.com/ml6team/fondant/issues?q=is%3Aissue+is%3Aopen+label%3A%22Component+Contribution%22)):
- 👯 Image-based deduplication
- 🖥️✎ Automatic captioning
- 🎨 Visual quality / aesthetic quality estimation
- 🔏 Watermark detection
- 🔞 Not safe for work (NSFW) content detection
- 📇 CLIP embedding generation
- 😐 Face detection
- 🙋🏻♂️ Personal Identifiable Information (PII) detection
- 📝 Text detection
- 🤖 AI generated image detection
- 👬 Image-text CLIP similarity
- 👨🎨 Any components that you propose to develop
We are also looking for core framework contributors and users who are willing to give feedback on usability and suggest potential improvements.
## Dataset Structure
### Data Instances
Each data instance corresponds to one image. The URL of the image is in the `image_url` feature, and other features (`alt_text`, `webpage_url`, etc) provide some
metadata. Note that images have been deduplicated only based on their URLs.
### Data Fields
- `image_url` (string): image url to download the image
- `alt_text` (string): alternative text of the image
- `webpage_url` (string): webpage source of the image
- `license_type` (string): creative commons license type of the image
- `license_location` (string): location of the license on the webpage
- `surt_url` (string): sort-friendly (SURT) image URL with the top-level domain as the prefix
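The `surt_url` field follows the SURT (Sort-friendly URI Reordering Transform) convention common in web-archive tooling, where the host labels are reversed so URLs from the same domain sort together. A rough sketch of how a URL maps to that form (our own illustration, not the exact code used to build the dataset):

```python
from urllib.parse import urlparse

def to_surt(url: str) -> str:
    """Rough SURT-style transform: reverse the host labels so the
    top-level domain comes first, then append the path."""
    parts = urlparse(url)
    host = parts.netloc.lower().split(":")[0]  # drop any port
    reversed_host = ",".join(reversed(host.split(".")))
    return f"{reversed_host}){parts.path}"

surt = to_surt("https://www.example.org/images/photo.jpg")
# host labels reversed: org,example,www
```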
### Data Splits
We do not provide any canonical splits for fondant-cc-25m.
## Dataset Creation
### Curation Rationale
Current AI image generation models such as Stable Diffusion and Dall-E are trained on hundreds of millions of images from the public Internet
including copyrighted work. This creates legal risks and uncertainties for users of these images and is unfair towards copyright holders who
may not want their proprietary work reproduced without consent.
By releasing a Creative Commons image dataset, we hope to mitigate legal risks and empower ethical AI development that respects copyright.
This dataset is the first step towards our goal of a 500M Creative Commons image dataset.
### Source Data
fondant-cc-25m is built from CommonCrawl dumps. These dumps are constructed from crawling publicly available web pages.
### Data Collection and Preprocessing
Permissive licenses have minimal restrictions on how the image can be copied, modified, and redistributed.
The full list of licenses can be found [here](https://creativecommons.org/about/cclicenses/).
We examined HTML tags of the webpages for the presence of Creative Commons license URLs. A webpage was marked permissive only when a license URL was found in
its footer, aside or sidebar. This was the case only in around 0.164% of a 100k random sample from Common Crawl. This suggests that image generation models
trained on a random sample from the public internet may be trained on up to 99.836% copyrighted images.
Subsequently, all the image URLs present on the web page were collected together with the license information. A manual check of a random
sample of 1032 images showed that 96.32% were attributed the correct license while 3.68% were not.
False positives could be due to parsing errors but also incorrect attributions: images indicated by the publisher to be CC which are not.
More information on our approach can be found in [this blogpost](https://blog.ml6.eu/ai-image-generation-without-copyright-infringement-a9901b64541c).
### Personal and Sensitive Information
The released dataset may contain sensitive information such as names, emails and addresses that have previously been published to the Internet.
In the event that the dataset contains personal information, researchers should only use public, non-personal information in support of conducting
and publishing their [open-access](https://en.wikipedia.org/wiki/Open_access) research. Personal information should not be used for spamming purposes,
including sending unsolicited emails or selling of personal information. Complaints, removal requests, and "do not contact" requests can be sent to info@fondant.ai.
The PII filtering pipeline for this dataset is still a work in progress. Researchers that wish to contribute to the anonymization pipeline of the project can join
[here](https://github.com/ml6team/fondant/tree/main#-contributing).
### Opting out
Fondant-cc-25m is based on CommonCrawl. Their crawler honors opt-out requests in the robots.txt, see the
[CC FAQ](https://commoncrawl.org/big-picture/frequently-asked-questions/) for details.
We are giving the public the ability to have their image removed from the dataset upon request. The process for submitting and enacting removal requests will keep
evolving throughout the project as we receive feedback and build up more data governance tools.
If you'd like to have your data removed from the dataset, [contact us](mailto:info@fondant.ai).
## Considerations for Using the Data
### Disclaimer
Fondant is making significant efforts to respect the intellectual property rights of third parties by publishing a dataset of
Creative Commons licensed images. Under no circumstances can Fondant be held liable by a third party for (i) the accuracy or correctness
of the content, (ii) an alleged infringement of intellectual property rights or (iii) any other alleged claim, action, injunction or suit
resulting from the publication or use of the dataset.
### Discussion of Biases
As toxic or biased data is prevalent on the internet, it is possible that our dataset contains such content.
## Additional Information
### Dataset Curators
1. Sharon Grundmann, ML6, sharon.grundmann@ml6.eu
2. Matthias Richter, ML6, matthias.richter@ml6.eu
3. Robbe Sneyders, ML6, robbe.sneyders@ml6.eu
### Licensing Information
Fondant-cc-25m is a collection of images with various Creative Commons and other public licenses. Any use of all or part of the images gathered in Fondant-cc-25m
must abide by the terms of the original licenses, including attribution clauses when relevant. We facilitate this by providing provenance information for each data point.
The list of Creative Commons license types included in the dataset can be found [here](https://creativecommons.org/about/cclicenses/).
### Contact
- Email: [info@fondant.ai](mailto:info@fondant.ai)
- Discord: [https://discord.gg/HnTdWhydGp](https://discord.gg/HnTdWhydGp) | 11,368 | [
[
-0.0458984375,
-0.04205322265625,
0.0010595321655273438,
0.031494140625,
-0.036834716796875,
-0.033050537109375,
-0.01006317138671875,
-0.028289794921875,
0.0297088623046875,
0.0614013671875,
-0.049591064453125,
-0.05572509765625,
-0.041778564453125,
0.00895... |
Hack90/virus_dna_dedup_minihash_0.9_kmer_7 | 2023-10-22T22:04:50.000Z | [
"region:us"
] | Hack90 | null | null | 0 | 29 | 2023-09-15T23:14:53 | ---
dataset_info:
features:
- name: sequence_x
dtype: string
- name: similarity_filter
dtype: float64
- name: id
dtype: string
- name: sequence_y
dtype: string
- name: name
dtype: string
- name: description
dtype: string
- name: features
dtype: int64
- name: seq_length
dtype: int64
- name: missing_seq_count
dtype: int64
- name: missingness
dtype: float64
- name: seq_filled
dtype: string
- name: __index_level_0__
dtype: int64
- name: spaced_sequence
dtype: string
splits:
- name: train
num_bytes: 522191271
num_examples: 10885
download_size: 234031394
dataset_size: 522191271
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "virus_dna_dedup_minihash_0.9_kmer_7"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 954 | [
[
-0.03240966796875,
-0.01824951171875,
0.01020050048828125,
-0.0210113525390625,
-0.03741455078125,
-0.0006303787231445312,
0.033111572265625,
0.0033550262451171875,
0.055816650390625,
0.0157470703125,
-0.041168212890625,
-0.045257568359375,
-0.060791015625,
... |
infinityofspace/python_codestyles-mixed1-500 | 2023-10-18T20:56:48.000Z | [
"size_categories:100K<n<1M",
"license:mit",
"python",
"code-style",
"mixed",
"doi:10.57967/hf/1231",
"region:us"
] | infinityofspace | null | null | 0 | 29 | 2023-09-17T18:21:31 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: code
dtype: string
- name: code_codestyle
dtype: int64
- name: style_context
dtype: string
- name: style_context_codestyle
dtype: int64
- name: label
dtype: int64
splits:
- name: train
num_bytes: 1794945328.216033
num_examples: 153992
- name: test
num_bytes: 326644128.3197262
num_examples: 28194
download_size: 645473358
dataset_size: 2121589456.5357592
license: mit
tags:
- python
- code-style
- mixed
size_categories:
- 100K<n<1M
---
# Dataset Card for "python_codestyles-mixed1-500"
This dataset contains positive and negative examples of Python code complying with a code style. A positive
example represents compliance with the code style (label is 1). Each example is composed of two components: the first
component is code that either conforms to the code style or violates it, and the second component is
an example code that already conforms to the code style.
The dataset combines the two
datasets [infinityofspace/python_codestyles-random-500](https://huggingface.co/datasets/infinityofspace/python_codestyles-random-500)
and [infinityofspace/python_codestyles-single-500](https://huggingface.co/datasets/infinityofspace/python_codestyles-single-500)
by randomly selecting half of the examples from each of the two datasets.
The code styles in the combined dataset differ in at least one code style rule, which is why this is called a
`mixed` codestyle dataset variant. The dataset consists of a training and a test group, with none of the code styles
overlapping between the groups. In addition, both groups contain completely different underlying code.
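A toy sketch of how the two components and the label relate (field names are from the schema above; the rows and style ids are made up for illustration):

```python
# Made-up rows illustrating the pairing scheme: each example holds a
# `code` snippet and a `style_context` snippet, and label 1 means
# `code` follows the same code style as `style_context`.
rows = [
    {"code": "x = 1", "code_codestyle": 7,
     "style_context": "y = 2", "style_context_codestyle": 7, "label": 1},
    {"code": "x=1", "code_codestyle": 3,
     "style_context": "y = 2", "style_context_codestyle": 7, "label": 0},
]

# For these illustrative rows the label agrees with comparing the two
# style ids (an assumption for illustration, not a documented guarantee).
for row in rows:
    assert row["label"] == int(row["code_codestyle"] == row["style_context_codestyle"])

positives = [r for r in rows if r["label"] == 1]
```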
The examples contain source code from the following repositories:
| repository | tag or commit |
|:-----------------------------------------------------------------------:|:----------------------------------------:|
| [TheAlgorithms/Python](https://github.com/TheAlgorithms/Python) | f614ed72170011d2d439f7901e1c8daa7deac8c4 |
| [huggingface/transformers](https://github.com/huggingface/transformers) | v4.31.0 |
| [huggingface/datasets](https://github.com/huggingface/datasets) | 2.13.1 |
| [huggingface/diffusers](https://github.com/huggingface/diffusers) | v0.18.2 |
| [huggingface/accelerate](https://github.com/huggingface/accelerate) | v0.21.0 | | 2,709 | [
[
-0.041778564453125,
-0.0311431884765625,
-0.01318359375,
0.039337158203125,
-0.01324462890625,
-0.01380157470703125,
-0.0129852294921875,
-0.0164947509765625,
0.043121337890625,
0.03399658203125,
-0.054443359375,
-0.041473388671875,
-0.0220947265625,
0.01692... |
mcaleste/sat_multiple_choice_math_may_23 | 2023-10-14T02:23:29.000Z | [
"size_categories:n<1K",
"language:en",
"region:us"
] | mcaleste | null | null | 0 | 29 | 2023-09-18T21:30:36 | ---
language:
- en
size_categories:
- n<1K
---
This is the set of math SAT questions from the May 2023 SAT, taken from here: https://www.mcelroytutoring.com/lower.php?url=44-official-sat-pdfs-and-82-official-act-pdf-practice-tests-free.
Questions that included images were excluded, but all other math questions, including those with tables, were included.
[
-0.0516357421875,
-0.07745361328125,
0.050079345703125,
0.01331329345703125,
0.0008797645568847656,
-0.03375244140625,
0.051544189453125,
-0.0205078125,
0.03411865234375,
0.08428955078125,
-0.08856201171875,
-0.0024204254150390625,
-0.0240631103515625,
0.002... |
warshakhan/donut_vqa_ISynHMP_all_labels | 2023-09-19T08:43:22.000Z | [
"region:us"
] | warshakhan | null | null | 0 | 29 | 2023-09-19T08:39:47 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: valid
path: data/valid-*
- split: test
path: data/test-*
dataset_info:
features:
- name: image
dtype: image
- name: ground_truth
dtype: string
splits:
- name: train
num_bytes: 580858079.0
num_examples: 2800
- name: valid
num_bytes: 85643829.0
num_examples: 400
- name: test
num_bytes: 172886967.0
num_examples: 800
download_size: 804946514
dataset_size: 839388875.0
---
# Dataset Card for "donut_vqa_ISynHMP_all_labels"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 713 | [
[
-0.0164642333984375,
-0.0106201171875,
0.0219879150390625,
0.0081939697265625,
-0.00399017333984375,
0.0196685791015625,
0.0136260986328125,
-0.00934600830078125,
0.07562255859375,
0.037200927734375,
-0.061492919921875,
-0.060211181640625,
-0.049285888671875,
... |
longface/prontoqa-train | 2023-10-31T07:29:06.000Z | [
"region:us"
] | longface | null | null | 0 | 29 | 2023-09-21T08:55:08 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
tyzhu/squad_title_v4_train_30_eval_10 | 2023-09-26T09:49:20.000Z | [
"region:us"
] | tyzhu | null | null | 0 | 29 | 2023-09-26T09:04:51 | ---
dataset_info:
features:
- name: id
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: text
dtype: string
- name: answer_start
dtype: int32
- name: context_id
dtype: string
- name: inputs
dtype: string
- name: targets
dtype: string
splits:
- name: train
num_bytes: 555104
num_examples: 368
- name: validation
num_bytes: 50807
num_examples: 50
download_size: 105632
dataset_size: 605911
---
# Dataset Card for "squad_title_v4_train_30_eval_10"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 760 | [
[
-0.0301513671875,
-0.0016422271728515625,
0.01451873779296875,
0.03546142578125,
-0.001117706298828125,
0.0304718017578125,
0.0207977294921875,
0.007488250732421875,
0.038970947265625,
0.020111083984375,
-0.0760498046875,
-0.0462646484375,
-0.033935546875,
0... |
renumics/emodb | 2023-10-04T04:49:52.000Z | [
"region:us"
] | renumics | null | null | 0 | 29 | 2023-10-04T04:49:02 | ---
dataset_info:
features:
- name: age
dtype: float32
- name: gender
dtype:
class_label:
names:
'0': female
'1': male
- name: emotion
dtype:
class_label:
names:
'0': anger
'1': boredom
'2': disgust
'3': fear
'4': happiness
'5': neutral
'6': sadness
- name: audio
dtype: audio
splits:
- name: train
num_bytes: 47623397.0
num_examples: 535
download_size: 46870260
dataset_size: 47623397.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "emodb"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 797 | [
[
-0.052093505859375,
-0.04217529296875,
0.0249176025390625,
0.01512908935546875,
-0.01447296142578125,
0.002292633056640625,
0.024810791015625,
-0.00685882568359375,
0.07684326171875,
0.035430908203125,
-0.056610107421875,
-0.06268310546875,
-0.03521728515625,
... |
HamdanXI/cleaned_daily_dialog_sentence | 2023-10-04T07:42:26.000Z | [
"region:us"
] | HamdanXI | null | null | 0 | 29 | 2023-10-04T07:40:14 | ---
dataset_info:
features:
- name: dialogue
dtype: string
splits:
- name: train
num_bytes: 5434241
num_examples: 77350
download_size: 3467625
dataset_size: 5434241
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "cleaned_daily_dialog_sentence"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 462 | [
[
-0.01727294921875,
-0.050323486328125,
0.02008056640625,
0.009857177734375,
-0.0176239013671875,
-0.0210418701171875,
0.010284423828125,
-0.0180816650390625,
0.04302978515625,
0.060516357421875,
-0.07666015625,
-0.05316162109375,
-0.0189971923828125,
0.00439... |
Sharka/CIVQA_easyocr_simple_valid_2 | 2023-10-04T09:39:42.000Z | [
"region:us"
] | Sharka | null | null | 0 | 29 | 2023-10-04T09:25:12 | ---
dataset_info:
features:
- name: id
dtype: string
- name: words
sequence: string
- name: answers
dtype: string
- name: bboxes
sequence:
sequence: float32
- name: answers_bboxes
sequence:
sequence: float32
- name: questions
dtype: string
- name: image
dtype: string
splits:
- name: validation
num_bytes: 31568299194
num_examples: 34159
download_size: 10965715031
dataset_size: 31568299194
---
# Dataset Card for "CIVQA_easyocr_simple_valid_2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 649 | [
[
-0.021484375,
-0.01073455810546875,
0.017608642578125,
0.0180816650390625,
-0.01556396484375,
-0.0115203857421875,
0.0166168212890625,
0.00749969482421875,
0.01904296875,
0.0293731689453125,
-0.03564453125,
-0.048309326171875,
-0.029022216796875,
-0.01681518... |
rishiraj/guanaco-style-metamath | 2023-10-05T11:06:29.000Z | [
"region:us"
] | rishiraj | null | null | 1 | 29 | 2023-10-04T18:42:25 | Entry not found | 15 | [
[
-0.0214080810546875,
-0.01497650146484375,
0.05718994140625,
0.02880859375,
-0.035064697265625,
0.0465087890625,
0.052490234375,
0.00505828857421875,
0.051361083984375,
0.01702880859375,
-0.05206298828125,
-0.01497650146484375,
-0.060302734375,
0.03790283203... |
Intuit-GenSRF/toxigen-train | 2023-10-05T01:45:00.000Z | [
"region:us"
] | Intuit-GenSRF | null | null | 0 | 29 | 2023-10-05T01:44:57 | ---
dataset_info:
features:
- name: text
dtype: string
- name: labels
sequence: string
splits:
- name: train
num_bytes: 138945158
num_examples: 250951
download_size: 3070653
dataset_size: 138945158
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "toxigen-train"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 485 | [
[
-0.034820556640625,
0.01128387451171875,
0.0278778076171875,
0.03277587890625,
-0.00475311279296875,
-0.0043487548828125,
0.00798797607421875,
-0.004512786865234375,
0.038360595703125,
0.0297393798828125,
-0.062225341796875,
-0.056793212890625,
-0.04656982421875... |
chats-bug/multiple-subject-gen | 2023-10-06T06:30:21.000Z | [
"region:us"
] | chats-bug | null | null | 0 | 29 | 2023-10-05T19:18:11 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: prompt
dtype: string
- name: subject_lines
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 78493229
num_examples: 59489
- name: test
num_bytes: 4030472
num_examples: 3132
download_size: 10833380
dataset_size: 82523701
---
# Dataset Card for "multiple-subject-gen"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 628 | [
[
-0.04254150390625,
-0.0240631103515625,
0.029937744140625,
0.0251922607421875,
0.0007925033569335938,
0.00934600830078125,
0.0031795501708984375,
-0.008056640625,
0.036895751953125,
0.0341796875,
-0.06939697265625,
-0.0465087890625,
-0.0439453125,
-0.0078353... |
approach0/retrieval-augment-finetune.old | 2023-10-12T23:22:47.000Z | [
"region:us"
] | approach0 | null | null | 0 | 29 | 2023-10-05T23:44:15 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: note
dtype: string
- name: problem_id
dtype: string
- name: problem
dtype: string
- name: solution
dtype: string
- name: aug_query
dtype: string
- name: aug_result
dtype: string
- name: answer
dtype: string
- name: correct
dtype: bool
- name: relevance
dtype: int64
splits:
- name: train
num_bytes: 5910768
num_examples: 2649
download_size: 0
dataset_size: 5910768
---
# Dataset Card for "retrieval-augment-finetune"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 743 | [
[
-0.045196533203125,
-0.0279693603515625,
0.01568603515625,
-0.001262664794921875,
-0.012054443359375,
-0.01067352294921875,
0.0103607177734375,
-0.013702392578125,
0.058258056640625,
0.0298004150390625,
-0.04620361328125,
-0.035675048828125,
-0.0282745361328125,... |
microsoft/kitab | 2023-10-25T00:39:04.000Z | [
"license:mit",
"arxiv:2310.15511",
"region:us"
] | microsoft | null | null | 5 | 29 | 2023-10-10T21:20:10 | ---
license: mit
configs:
- config_name: one-book-constraints
data_files:
- split: test
path: "data/KITAB-ONE-BOOK-CONSTRAINTS.json"
- config_name: two-book-constraints
data_files:
- split: test
path: "data/KITAB-TWO-BOOK-CONSTRAINTS.json"
- config_name: author-metadata
data_files:
- split: test
path: "data/KITAB-author-metadata.json"
config_names:
- one-book-constraints
- two-book-constraints
- author-metadata
---
## Overview
🕮 KITAB is a challenging dataset and a dynamic data collection approach for testing the abilities of Large Language Models (LLMs) in answering information retrieval queries with constraint filters. A filtering query with constraints can be of the form `"List all books written by Toni Morrison that were published between 1970-1980"`. The dataset was originally contributed by the paper ["KITAB: Evaluating LLMs on Constraint Satisfaction for Information Retrieval"](https://arxiv.org/abs/2310.15511) by Marah I Abdin, Suriya Gunasekar, Varun Chandrasekaran, Jerry Li, Mert Yuksekgonul, Rahee Ghosh Peshawaria, Ranjita Naik, and Besmira Nushi (2023). The dataset is named after the word [kitab](https://en.wikipedia.org/wiki/Kitab), which is the word for "book" in Arabic, Swahili, Urdu, Hindi and various Indian and Turkic languages.
KITAB consists of book-related data across more than 600 authors and 13,000 queries with a varying number and complexity of constraints. In each query in the dataset, the first constraint is always fixed to an author, and the remaining constraints vary among the following types of book constraints to test different constraint satisfaction capabilities:
- lexical (title starts or ends with a letter, word count in title)
- temporal (published between start and end year)
- named entity (city or human name present or not present in title)
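For illustration only, each of these constraint families boils down to a simple predicate over a title or publication year. The helper names below are hypothetical, not KITAB's evaluation code:

```python
def starts_with(title: str, letter: str) -> bool:
    """Lexical constraint: the title starts with a given letter."""
    return title.lower().startswith(letter.lower())

def has_word_count(title: str, n: int) -> bool:
    """Lexical constraint: the title contains exactly n words."""
    return len(title.split()) == n

def published_between(year: int, start: int, end: int) -> bool:
    """Temporal constraint: the earliest publishing year lies in [start, end]."""
    return start <= year <= end

def mentions_entity(title: str, entities: set) -> bool:
    """Named-entity constraint: a known city/human name appears in the title."""
    return any(e.lower() in title.lower() for e in entities)

# "List all books written by Toni Morrison that were published between 1970-1980"
assert published_between(1977, 1970, 1980)  # e.g. "Song of Solomon" (1977)
```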
## What is available in this repository?
This repository contains the following artifacts:
- All data for the KITAB sample used in the original paper. This consists of the set of authors, their corresponding books, and the set of queries with constraints.
- Example code for generating a new sample with a different set of authors. Here the sampling and data collection steps do not include the generation of queries, as these may change according to the evaluation usage needs for the data. The example code also shows how to evaluate a potential model output with a list of books against the provided ground truth in KITAB, by following the same evaluation process as in the original paper. Note that this evaluation tends to relax some of the constraint satisfaction requirements, in particular when the model comes up with only a partial title.
- All prompts that were used in the original paper to evaluate GPT-4 and GPT-3.5.
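To make the relaxed matching of partial titles concrete, a toy version might normalize titles and accept partial overlaps, along these lines (hypothetical helpers; the actual rules live in the repository's evaluation code):

```python
STOP_WORDS = {"a", "an", "the"}

def normalize_title(title: str) -> str:
    """Lowercase the title and drop stop words such as 'A' and 'The'."""
    return " ".join(w for w in title.lower().split() if w not in STOP_WORDS)

def relaxed_match(model_title: str, truth_title: str) -> bool:
    """Credit a model output that is only a partial version of the true title."""
    a, b = normalize_title(model_title), normalize_title(truth_title)
    return bool(a) and (a in b or b in a)

print(relaxed_match("Bluest Eye", "The Bluest Eye"))  # True
```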
## Data
- [KITAB-ONE-BOOK-CONSTRAINTS.json](./data/KITAB-ONE-BOOK-CONSTRAINTS.json) and [KITAB-TWO-BOOK-CONSTRAINTS.json](./data/KITAB-TWO-BOOK-CONSTRAINTS.json) - correspond to queries with one and two book constraints. Each file has all the sufficient information that can be used to recreate a prompt query including the author, their birth year, number of sitelinks on WikiData, the constraint type(s), the constraint(s) expressed in natural language, the list of all books by the author, and the mapped list of books by the author that satisfy the constraint(s).
```
KITAB-ONE-BOOK-CONSTRAINTS_features = {
"Author": "author name",
"Birth Year": "author birth year",
"# of sitelinks": "number of external links related to the author",
"constraint_id": "unique id for the constraint",
"constraint_type": "type of the constraint",
"constraints": "the constraint",
"mapped_books": "list of books by the author mapped to the constraint",
"all_books": "full list of books by author post cleaning from openlibrary",
"raw_books": "raw list of books by author from openlibrary",
}
```
- [KITAB-author-metadata.json](./data/KITAB-author-metadata.json) - contains the set of 611 authors along with their birth year, the number of sitelinks in Wikidata, and their corresponding Open Library and WikiData identifiers.
- [KITAB-book-metadata.tar.gz](./data/KITAB-book-metadata.tar.gz) - contains a json file per author with all books retrieved from OpenLibrary for that author. The files contain the following information per title: the Open Library Id for the book, the Wikidata ID (if it exists), list of languages in which it was published, number of editions, number of words in the title, the earliest publishing year, city names found in the title (if any), a modified version of the title in lowercase that strips stop words like "A" and "The" from the title, and a set of other redundant versions of the same title as found in Open Library (if any).
## Code and evaluation scripts
Example notebooks included in this repository:
- [collect_authors_from_wikidata.py](./code/data_sampling/collect_authors_from_wikidata.py) and [wikidata_open_library_author_profiling.ipynb](./code/data_sampling/wikidata_open_library_author_profiling.ipynb) - example code for generating a new author sample from WikiData and OpenLibrary. Here, we also make available the longer list of authors that was originally sampled from WikiData to facilitate the sampling process although future work may also choose to repeat this step as needed. The full list can be found in: [wikidata_authors_crawl.csv](./code/data_sampling/wikidata_authors_crawl.csv).
- [fetch_book_data.py](./code/data_sampling/fetch_book_data.py) - example code for collecting book data for the set of authors sampled in the previous steps. Pulls data from OpenLibrary and WikiData to curate and clean the sample.
- [evaluation.ipynb](./code/evaluation.ipynb) - example code for evaluating model outputs from our [prompts](./prompts/) against ground truth KITAB data. Here, we also make available the GPT-4 output on human name detection, although as models improve future work may also choose to repeat this step as needed. Results can be found in: [gpt_4_name_data_processed.csv](./code/utils/gpt_4_name_data_processed.csv).
## Prompts
We use the following prompt templates for different experimental conditions on the KITAB data:
**ALL-BOOKS** \([Template 1](./prompts/Template_1.md)\): List all books from the author. This condition enables us to estimate an upper bound of model performance in retrieving relevant information for all queries, regardless of other constraints.
**NO-CONTEXT** \([Template 2a](./prompts/Template_2a.md)\): List all books from the author that also satisfy other book constraints.
**WITH-CONTEXT** \([Template 2b](./prompts/Template_2b.md)\): First, provide a full list of books from the author as input context to the model. Then, ask the model to list all books from the author that also satisfy other book constraints.
**SELF-CONTEXT** \([Template 3](./prompts/Template_3.md)\): Ask the model to first self-retrieve all books from the author, and then use that list to find those that also satisfy book constraints.
**NAME-CHECK** \([Template 4](./prompts/Template_4.md)\): Ask the model to find all books in a given list that contain a human name.
## Data Collection and Statistics
The author list was initially randomly sampled from [WikiData](https://www.wikidata.org/) and then filtered down to 611 authors to avoid potentially inaccurate data and extreme outliers. For example, this involved removing authors that have very few or too many books and authors that were born before 1850. The collected book data was derived from [Open Library](https://openlibrary.org/) and contains all books from the author that are tagged to be in English by Open Library or detected to be in English by the Language Detection service from the [Azure Cognitive Services API](https://learn.microsoft.com/en-us/azure/ai-services/language-service/language-detection/overview). More details about author sampling and book data collection and cleaning are present in the paper.
Since there exists a large number of constraint instances depending on their cardinality, we subsample from the potentially large set of queries in a way that ensures a balanced representation across constraint types and a variety of constraints with different constrainedness (defined as the complement of the ratio between the number of books that satisfy the constraints and the total number of books from the author). The dataset also contains “unsatisfiable” constraints, which do not match any book titles in our data. These constitute 7.99% of the queries with only one book constraint. The final dataset contains 8239 single-constraint queries and 4750 double-constraint queries. The table below shows how these queries are distributed across different constraint types. For all double-constraint queries, both constraints are individually satisfiable and generated by combining our single-constraint data. Only 0.76% of the queries are jointly unsatisfiable across both constraints.
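Constrainedness is cheap to compute from a query's `mapped_books` and `all_books` fields; a minimal sketch:

```python
def constrainedness(num_satisfying: int, num_total: int) -> float:
    """kappa = 1 - S/N: 0 means every book satisfies the constraint,
    values near 1 mean almost none do (a more complex query)."""
    return 1.0 - num_satisfying / num_total

# e.g. an author with 20 books, of which only 2 satisfy the constraint
print(constrainedness(2, 20))  # 0.9
```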
<aside>
<center>
<style type="text/css">
.tg {border-collapse:collapse;border-color:#ccc;border-spacing:0;border-style:solid;border-width:1px;}
.tg td{background-color:#fff;border-color:#ccc;border-style:solid;border-width:0px;color:#333;
font-family:Arial, sans-serif;font-size:14px;overflow:hidden;padding:10px 5px;word-break:normal;}
.tg th{background-color:#50B49A;border-color:#ccc;border-style:solid;border-width:0px;color:#333;
font-family:Arial, sans-serif;font-size:14px;font-weight:normal;overflow:hidden;padding:10px 5px;word-break:normal;color:white}
.tg .tg-m5nv{border-color:#cccccc;text-align:center;vertical-align:top}
.tg .tg-x9uu{border-color:#cccccc;font-weight:bold;text-align:center;vertical-align:top}
.tg .tg-2bev{border-color:#cccccc;text-align:left;vertical-align:top}
.tg .tg-3cmc{border-color:#cccccc;text-align:right;vertical-align:top}
</style>
<table class="tg">
<caption>KITAB statistics on constraint frequency and average constrainedness. Two book constraint queries have more than one constraint type.
<br>
Constrainedness is defined as the complement of the ratio between the number of solutions S that satisfy the constraint and the total number of items in the domain N (higher constrainedness, more complex), i.e., κ = 1 - S/N.
</caption>
<thead>
<tr>
<th class="tg-m5nv"></th>
<th class="tg-x9uu" colspan="2">One book constraints</th>
<th class="tg-x9uu" colspan="2">Two book constraints</th>
</tr>
<tr>
<th class="tg-m5nv"><span style="font-weight:bold">Constraint Type</span></th>
<th class="tg-m5nv"><span style="font-weight:bold"># queries</span></td>
<th class="tg-x9uu"><span style="font-weight:bold">constrainedness</span></td>
<th class="tg-x9uu"><span style="font-weight:bold"># queries</span></td>
<th class="tg-x9uu"><span style="font-weight:bold">constrainedness</span></td>
</tr>
</thead>
<tbody>
<colgroup>
<col style="width: 120px">
<col style="width: 80px">
<col style="width: 100px">
<col style="width: 80px">
<col style="width: 100px">
</colgroup>
<tr>
<td class="tg-2bev">starts-with</td>
<td class="tg-3cmc">598</td>
<td class="tg-3cmc">0.90</td>
<td class="tg-3cmc">2163</td>
<td class="tg-3cmc">0.92</td>
</tr>
<tr>
<td class="tg-2bev">ends-with</td>
<td class="tg-3cmc">482</td>
<td class="tg-3cmc">0.89</td>
<td class="tg-3cmc">1782</td>
<td class="tg-3cmc">0.91</td>
</tr>
<tr>
<td class="tg-2bev">word-count</td>
<td class="tg-3cmc">1672</td>
<td class="tg-3cmc">0.53</td>
<td class="tg-3cmc">1630</td>
<td class="tg-3cmc">0.81</td>
</tr>
<tr>
<td class="tg-2bev">human-name</td>
<td class="tg-3cmc">611</td>
<td class="tg-3cmc">0.77</td>
<td class="tg-3cmc">292</td>
<td class="tg-3cmc">0.89</td>
</tr>
<tr>
<td class="tg-2bev">no-human-name</td>
<td class="tg-3cmc">611</td>
<td class="tg-3cmc">0.23</td>
<td class="tg-3cmc">801</td>
<td class="tg-3cmc">0.78</td>
</tr>
<tr>
<td class="tg-2bev">city-name</td>
<td class="tg-3cmc">611</td>
<td class="tg-3cmc">0.92</td>
<td class="tg-3cmc">197</td>
<td class="tg-3cmc">0.81</td>
</tr>
<tr>
<td class="tg-2bev">no-city-name</td>
<td class="tg-3cmc">611</td>
<td class="tg-3cmc">0.08</td>
<td class="tg-3cmc">831</td>
<td class="tg-3cmc">0.77</td>
</tr>
<tr>
<td class="tg-2bev">publishing-year</td>
<td class="tg-3cmc">3043</td>
<td class="tg-3cmc">0.80</td>
<td class="tg-3cmc">1804</td>
<td class="tg-3cmc">0.89</td>
</tr>
<tr>
<td class="tg-2bev">Summary</td>
<td class="tg-3cmc">8239</td>
<td class="tg-3cmc">0.67</td>
<td class="tg-3cmc">4750</td>
<td class="tg-3cmc">0.87</td>
</tr>
</tbody>
</table>
</center>
<br><br>
</aside>
<figure><center>
<img src="figures/popularity_wide.png" width="1000">
<figcaption>Distribution of KITAB queries across author popularity as measured by the number of sitelinks on Wikidata,
for queries with a single book constraint (left) and two book constraints (right).</figcaption>
</center>
</figure>
<figure><center>
<img src="figures/constrainedness_wide.png" width="1000">
<figcaption>Distribution of queries across author constrainedness as measured by the complement of the ratio
between the number of books that satisfy the book constraints and the total number of books from the author.
Distribution is shown for queries with a single book constraint (left) and two book constraints (right). Note
that most of the distribution in the lower range of constrainedness is dominated by constraints that require no
human name or no city name in the title, which are naturally easier to satisfy.</figcaption></center>
</figure>
## Responsible AI Considerations
*Data Cleaning*: Despite our best efforts in collecting a complete and accurate set of books, we also faced a variety of challenges in retrieval and cleaning, which we further describe in Appendix C.1 in the paper. To estimate the extent to which potential data cleaning issues may impact the data quality of KITAB and further evaluation, we also undertook a manual data annotation exercise during which we searched on the web for titles provided by GPT4 and GPT3.5 that were marked as not from the author in our dataset. In summary, we find that based on a manual annotation of a subsample of queries, less than 5% of the queries to GPT4 and less than 6% of the queries to GPT3.5 may potentially be affected by cases where the model finds a book title that is not in KITAB and that will consequently be marked as not from the author during our evaluation. While this can be remediated by using further data sources, the impact of missing information on model comparison is minor.
*Human Names*: Entity recognition for human names was done using both [Azure Cognitive Services API](https://learn.microsoft.com/en-us/azure/ai-services/language-service/language-detection/overview) and GPT4 (Template 4 in Appendix D in the paper), as we found the two approaches to be complementary for detecting names from different cultures. Note that even after using both these resources, there may still be names that are not recognized by either of these APIs, which is a testament that more work is required to improve the quality of service of entity recognition for fairness across different languages and cultures.
*City Names*: For city names, we use [Azure Cognitive Services API](https://learn.microsoft.com/en-us/azure/ai-services/language-service/named-entity-recognition/overview) along with [Geonames](https://public.opendatasoft.com/explore/dataset/geonames-all-cities-with-a-population-1000), a database of cities with more than 1000 inhabitants.
*Author representation*: The list of authors in KITAB was sampled randomly from a large set of authors present in Open Library. We see that the rate of irrelevant information generated by current models increases with a lower number of sitelinks in Wikidata. Since the number of sitelinks may also correlate with the age (birth year) of the author or even their nationality and how well their community is linked to the World Wide Web, this observation has important implications on model quality of service across different geographical regions and author popularity and age. While KITAB naturally does contain more authors with a lower number of sitelinks (as indicated by its long-tail distribution of author count vs. their popularity), future fairness measurement investigations in this regard may also need to oversample explicitly from cohorts belonging to given demographic and geographical attributes.
## State-of-the-art results on KITAB
<aside>
<center>
<style type="text/css">
.tg {border-collapse:collapse;border-spacing:0;}
.tg td{border-color:black;border-style:solid;border-width:1px;font-family:Arial, sans-serif;font-size:14px;
overflow:hidden;padding:10px 5px;word-break:normal;}
.tg th{border-color:black;border-style:solid;border-width:1px;font-family:Arial, sans-serif;font-size:14px;
font-weight:normal;overflow:hidden;padding:10px 5px;word-break:normal;}
.tg .tg-qwh1{border-color:#cccccc;font-weight:bold;text-align:left;vertical-align:top}
.tg .tg-omta{background-color:#50b49a;border-color:#cccccc;color:#ffffff;text-align:left;vertical-align:top}
.tg .tg-h4uz{background-color:#50b49a;border-color:#cccccc;color:#ffffff;font-weight:bold;text-align:center;vertical-align:top}
.tg .tg-tr5t{border-color:#cccccc;text-align:right;vertical-align:top}
</style>
<table class="tg" style="undefined;table-layout: fixed; width: 675px">
<colgroup>
<col style="width: 87.130435px">
<col style="width: 42px">
<col style="width: 42px">
<col style="width: 42px">
<col style="width: 42px">
<col style="width: 42px">
<col style="width: 42px">
<col style="width: 42px">
<col style="width: 42px">
<col style="width: 42px">
<col style="width: 42px">
<col style="width: 42px">
<col style="width: 42px">
<col style="width: 42px">
<col style="width: 42px">
<col style="width: 42px">
</colgroup>
<thead>
<tr>
<th class="tg-omta" rowspan="2"></th>
<th class="tg-h4uz" colspan="3" rowspan="2">Irrelevant Information ↓</th>
<th class="tg-h4uz" colspan="6">Relevant Information<br>(Books from the author)</th>
<th class="tg-h4uz" colspan="3" rowspan="2">Completeness ↑ </th>
<th class="tg-h4uz" colspan="3" rowspan="2">All Correct ↑ </th>
</tr>
<tr>
<th class="tg-h4uz" colspan="3">Satisfied ↑ </th>
<th class="tg-h4uz" colspan="3">Unsatisfied ↓</th>
</tr>
</thead>
<tbody>
<tr>
<td class="tg-qwh1">GPT-4</td>
<td class="tg-tr5t">0.26</td>
<td class="tg-tr5t">0.33</td>
<td class="tg-tr5t">0.00</td>
<td class="tg-tr5t">0.51</td>
<td class="tg-tr5t">0.49</td>
<td class="tg-tr5t">0.78</td>
<td class="tg-tr5t">0.24</td>
<td class="tg-tr5t">0.19</td>
<td class="tg-tr5t">0.21</td>
<td class="tg-tr5t">0.24</td>
<td class="tg-tr5t">0.26</td>
<td class="tg-tr5t">0.70</td>
<td class="tg-tr5t">0.08</td>
<td class="tg-tr5t">0.08</td>
<td class="tg-tr5t">0.31</td>
</tr>
<tr>
<td class="tg-qwh1">GPT-3.5</td>
<td class="tg-tr5t">0.20</td>
<td class="tg-tr5t">0.44</td>
<td class="tg-tr5t">0.00</td>
<td class="tg-tr5t">0.44</td>
<td class="tg-tr5t">0.26</td>
<td class="tg-tr5t">0.68</td>
<td class="tg-tr5t">0.36</td>
<td class="tg-tr5t">0.30</td>
<td class="tg-tr5t">0.32</td>
<td class="tg-tr5t">0.16</td>
<td class="tg-tr5t">0.16</td>
<td class="tg-tr5t">0.47</td>
<td class="tg-tr5t">0.07</td>
<td class="tg-tr5t">0.02</td>
<td class="tg-tr5t">0.15</td>
</tr>
</tbody>
<caption>Aggregated model performance on KITAB for three experimental conditions <br>
NO-CONTEXT | SELF-CONTEXT | WITH-CONTEXT (see definitions in the prompts section) <br> for queries requesting a list of books from a given author satisfying one additional book constraint. Both models have high rates of irrelevant information and poor constraint satisfaction across the board. Context availability mitigates irrelevant information rate, but constraint satisfaction still remains low. Full correctness (i.e., perfect match of the post-processed model output and the ground truth) is strikingly low across all conditions and models but there is visible improvement for WITH-CONTEXT.</caption>
</table>
</center>
</aside>
## How to cite
<pre>
@article{abdin2023kitab,
title={KITAB: Evaluating LLMs on Constraint Satisfaction for Information Retrieval},
author={Abdin, Marah I and Gunasekar, Suriya and Chandrasekaran, Varun and Li, Jerry and Yuksekgonul, Mert and Peshawaria, Rahee Ghosh and Naik, Ranjita and Nushi, Besmira},
journal={arXiv preprint arXiv:2310.15511},
year={2023}
}
</pre>
## Contributors
[Marah I Abdin](https://www.linkedin.com/in/marah-abdin/), [Suriya Gunasekar](https://sgunasekar.github.io/), [Varun Chandrasekaran](https://ece.illinois.edu/about/directory/faculty/varunc), [Jerry Li](https://jerryzli.github.io/), [Mert Yuksekgonul](https://mertyg.github.io/), [Rahee Ghosh Peshawaria](https://www.linkedin.com/in/rahee-ghosh-peshawaria/), [Ranjita Naik](https://github.com/ranjita-naik), [Besmira Nushi](https://besmiranushi.com/) | 21,280 | [
[
-0.03717041015625,
-0.0247344970703125,
0.0231781005859375,
-0.015289306640625,
-0.005046844482421875,
-0.0096893310546875,
-0.01088714599609375,
-0.0247955322265625,
0.0031261444091796875,
0.060546875,
-0.053863525390625,
-0.059356689453125,
-0.01611328125,
... |
csupiisc/plmn1.5l | 2023-10-11T17:56:24.000Z | [
"region:us"
] | csupiisc | null | null | 0 | 29 | 2023-10-11T06:48:15 | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 754298
num_examples: 10000
download_size: 299510
dataset_size: 754298
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "plmn1.5l"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 434 | [
[
-0.061065673828125,
-0.002246856689453125,
0.00647735595703125,
0.0281829833984375,
-0.031646728515625,
-0.01029205322265625,
0.026824951171875,
0.007129669189453125,
0.043365478515625,
0.04986572265625,
-0.0721435546875,
-0.0650634765625,
-0.036041259765625,
... |
datkai/news-chuan-hoa | 2023-10-17T03:54:45.000Z | [
"region:us"
] | datkai | null | null | 0 | 29 | 2023-10-17T03:52:16 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
dhcppc0/ksc500 | 2023-10-17T05:56:26.000Z | [
"region:us"
] | dhcppc0 | null | null | 0 | 29 | 2023-10-17T05:52:17 | ---
dataset_info:
features:
- name: array
sequence: float32
- name: sampling_rate
dtype: int64
- name: text
dtype: string
splits:
- name: test
num_bytes: 23715007
num_examples: 50
- name: train
num_bytes: 248781258
num_examples: 500
download_size: 273740677
dataset_size: 272496265
---
# Dataset Card for "ksc500"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 492 | [
dhruv107/receipt_oct17_combined | 2023-10-23T17:05:50.000Z | [
"region:us"
] | dhruv107 | null | null | 0 | 29 | 2023-10-17T15:58:14 | ---
dataset_info:
features:
- name: image
dtype: image
- name: ground_truth
dtype: string
splits:
- name: train
num_bytes: 501004298.0
num_examples: 490
- name: test
num_bytes: 33934204.0
num_examples: 32
- name: validation
num_bytes: 108765220.0
num_examples: 92
download_size: 564389593
dataset_size: 643703722.0
---
# Dataset Card for "receipt_oct17_combined"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 545 | [
magnus42/GPTWebScrapingPythonCode | 2023-11-01T09:58:43.000Z | [
"region:us"
] | magnus42 | null | null | 0 | 29 | 2023-10-26T08:12:12 | Entry not found | 15 | [
CJWeiss/multilexsum | 2023-10-26T20:50:07.000Z | [
"region:us"
] | CJWeiss | null | null | 0 | 29 | 2023-10-26T20:49:26 | ---
dataset_info:
features:
- name: id
dtype: string
- name: sources
sequence: string
- name: summary/long
dtype: string
- name: summary/short
dtype: string
- name: summary/tiny
dtype: string
splits:
- name: train
num_bytes: 1381375968
num_examples: 3404
- name: test
num_bytes: 265556706
num_examples: 681
- name: valid
num_bytes: 199444854
num_examples: 454
download_size: 833868199
dataset_size: 1846377528
---
# Dataset Card for "multilexsum"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 647 | [
kuanhuggingface/tencent_tts_encodec | 2023-10-27T06:38:12.000Z | [
"region:us"
] | kuanhuggingface | null | null | 0 | 29 | 2023-10-27T06:35:46 | ---
dataset_info:
features:
- name: file_id
dtype: string
- name: wav_id
dtype: int64
- name: instruction
dtype: string
- name: transcription
dtype: string
- name: src_encodec_0
sequence: int64
- name: src_encodec_1
sequence: int64
- name: src_encodec_2
sequence: int64
- name: src_encodec_3
sequence: int64
- name: src_encodec_4
sequence: int64
- name: src_encodec_5
sequence: int64
- name: src_encodec_6
sequence: int64
- name: src_encodec_7
sequence: int64
- name: tgt_encodec_0
sequence: int64
- name: tgt_encodec_1
sequence: int64
- name: tgt_encodec_2
sequence: int64
- name: tgt_encodec_3
sequence: int64
- name: tgt_encodec_4
sequence: int64
- name: tgt_encodec_5
sequence: int64
- name: tgt_encodec_6
sequence: int64
- name: tgt_encodec_7
sequence: int64
splits:
- name: train
num_bytes: 18591475302
num_examples: 266780
- name: validation
num_bytes: 528038974
num_examples: 7620
- name: test
num_bytes: 508595448
num_examples: 7620
download_size: 474697551
dataset_size: 19628109724
---
# Dataset Card for "tencent_tts_encodec"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 1,328 | [
shibing624/source_code | 2022-10-30T06:30:07.000Z | [
"task_categories:text-generation",
"task_ids:language-modeling",
"annotations_creators:no-annotation",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:100M<n<200M",
"source_datasets:https://github.com/shibing624/code-autocomplete",
"source_datasets:https://github.com/... | shibing624 | Plain-text data: high-quality programming source code, including Python, Java, and C++ | null | 4 | 28 | 2022-03-02T23:29:22 | ---
annotations_creators:
- no-annotation
language_creators:
- crowdsourced
language:
- en
license:
- cc-by-4.0
- gfdl
multilinguality:
- monolingual
size_categories:
- 100M<n<200M
source_datasets:
- https://github.com/shibing624/code-autocomplete
- https://github.com/bharathgs/Awesome-pytorch-list
- https://github.com/akullpp/awesome-java
- https://github.com/fffaraz/awesome-cpp
task_categories:
- text-generation
task_ids:
- language-modeling
---
# Dataset Card for "SourceCode"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Repository:** [code-autocomplete](https://github.com/shibing624/code-autocomplete)
- **Leaderboard:** [leaderboard](https://github.com/shibing624/code-autocomplete) (located on the homepage)
- **Size of downloaded dataset files:** 105 MB
- **Total amount of disk used:** 570 MB
### Dataset Summary
The source code dataset is a collection of GitHub "awesome" repos; it contains Python, Java, C++, and other programming languages.
This dataset can be used for different NLP tasks such as language modeling and text generation.
data source:
- PYTHON_CODE: https://github.com/bharathgs/Awesome-pytorch-list
- JAVA_CODE: https://github.com/akullpp/awesome-java
- CPP_CODE: https://github.com/fffaraz/awesome-cpp
### Supported Tasks and Leaderboards
- language modeling
- code generation tasks, **Leaderboard:** [code-autocomplete](https://github.com/shibing624/code-autocomplete)
### Languages
- programming languages: Python, Java, C++
- natural language: English
## Dataset Structure
### Data Instances
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"text": """
import json
import argparse
def _parse_args():
parser = argparse.ArgumentParser(
description=__doc__,
formatter_class=argparse.RawTextHelpFormatter,
)
parser.add_argument(
'--model-file',
required=True,
help=(
'A pt file from '
'https://github.com/pytorch/fairseq/tree/main/examples/hubert'
)
)
return parser.parse_args()
"""
}
```
### Data Fields
The data fields are the same among all splits.
- `text`: a `string` feature.
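Since each split is a plain text file with one example per line (consistent with the line counts below), a minimal loader can be sketched as follows. The `read_split` helper and the in-memory sample are illustrative, not part of the dataset or its tooling:

```python
import io

def read_split(f):
    """Read one plain-text split file (one chunk of source code per line)
    into records shaped like the dataset's single `text` field."""
    return [{"text": line.rstrip("\n")} for line in f if line.strip()]

# Illustrative in-memory stand-in for e.g. python/valid.txt:
sample = io.StringIO("import json\nimport argparse\n")
records = read_split(sample)
# records[0] == {"text": "import json"}
```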
### Data Splits
#### python
```shell
$ wc -l python/*
10000 python/test.txt
5215412 python/train.txt
10000 python/valid.txt
5235412 total
```
#### java
```shell
$ wc -l java/*
950083 java/test.txt
2802880 java/train.txt
940803 java/valid.txt
4693766 total
```
#### cpp
```shell
$ wc -l cpp/*
1060014 cpp/test.txt
3119241 cpp/train.txt
1099124 cpp/valid.txt
5278379 total
```
## Dataset Creation
### Curation Rationale
As a code generation dataset, it is uploaded to Hugging Face Datasets.
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
Citation:
APA:
```latex
Xu, M. code-autocomplete: Code AutoComplete with GPT2 model (Version 0.0.4) [Computer software]. https://github.com/shibing624/code-autocomplete
```
BibTeX:
```latex
@software{Xu_code-autocomplete_Code_AutoComplete,
author = {Xu, Ming},
title = {code-autocomplete: Code AutoComplete with GPT2 model},
url = {https://github.com/shibing624/code-autocomplete},
version = {0.0.4}
}
```
### Annotations
#### Annotation process
#### Who are the annotators?
nobody
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
This dataset was developed as a benchmark for evaluating code generation models.
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
GitHub awesome programming code repos.
### Licensing Information
GNU Free Documentation License v1.3 or later.
For research use only.
### Contributions
Thanks to [@shibing624](https://github.com/shibing624) for adding this dataset.
| 4,808 | [
copenlu/sufficient_facts | 2022-08-05T08:33:48.000Z | [
"task_categories:text-classification",
"task_ids:fact-checking",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:extended|fever",
"source_datasets:extended|hover",
"source_datasets:extended|fever_gold_... | copenlu | SufficientFacts is a diagnostic test dataset for fact checking with insufficient evidence. | @InProceedings{huggingface:dataset,
title = {A great new dataset},
author={huggingface, Inc.
},
year={2020}
} | 3 | 28 | 2022-03-30T19:12:14 | ---
annotations_creators:
- crowdsourced
language_creators:
- crowdsourced
language:
- en
license:
- mit
multilinguality:
- monolingual
pretty_name: sufficient_facts
size_categories:
- 1K<n<10K
source_datasets:
- extended|fever
- extended|hover
- extended|fever_gold_evidence
task_categories:
- text-classification
task_ids:
- fact-checking
---
# Dataset Card for sufficient_facts
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://github.com/copenlu/sufficient_facts
- **Repository:** https://github.com/copenlu/sufficient_facts
- **Paper:** Will be uploaded soon...
- **Leaderboard:**
- **Point of Contact:** https://apepa.github.io/
### Dataset Summary
This is the dataset SufficientFacts, introduced in the paper "Fact Checking with Insufficient Evidence", accepted at the TACL journal in 2022.
Automating the fact checking (FC) process relies on information obtained from external sources. In this work, we posit that it is crucial for FC models to make veracity predictions only when there is sufficient evidence and otherwise indicate when it is not enough. To this end, we are the first to study what information FC models consider sufficient by introducing a novel task and advancing it with three main contributions. First, we conduct an in-depth empirical analysis of the task with a new fluency-preserving method for omitting information from the evidence at the constituent and sentence level. We identify when models consider the remaining evidence (in)sufficient for FC, based on three trained models with different Transformer architectures and three FC datasets. Second, we ask annotators whether the omitted evidence was important for FC, resulting in a novel diagnostic dataset, **SufficientFacts**, for FC with omitted evidence. We find that models are least successful in detecting missing evidence when adverbial modifiers are omitted (21% accuracy), whereas it is easiest for omitted date modifiers (63% accuracy). Finally, we propose a novel data augmentation strategy for contrastive self-learning of missing evidence by employing the proposed omission method combined with tri-training. It improves performance for Evidence Sufficiency Prediction by up to 17.8 F1 score, which in turn improves FC performance by up to 2.6 F1 score.
### Languages
English
## Dataset Structure
The dataset consists of three files, each for one of the datasets -- FEVER, HoVer, and VitaminC.
Each file consists of json lines of the format:
```json
{
"claim": "Unison (Celine Dion album) was originally released by Atlantic Records.",
"evidence": [
[
"Unison (Celine Dion album)",
"The album was originally released on 2 April 1990 ."
]
],
"label_before": "REFUTES",
"label_after": "NOT ENOUGH",
"agreement": "agree_ei",
"type": "PP",
"removed": ["by Columbia Records"],
"text_orig": "[[Unison (Celine Dion album)]] The album was originally released on 2 April 1990 <span style=\"color:red;\">by Columbia Records</span> ."
}
```
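Records in this format can be read with any JSON-lines loader; a minimal sketch follows. The `load_jsonl` helper is illustrative, and the inline record is abbreviated from the example above:

```python
import json

def load_jsonl(lines):
    """Parse SufficientFacts records from JSON-lines text,
    skipping blank lines."""
    return [json.loads(line) for line in lines if line.strip()]

raw = ('{"claim": "Unison (Celine Dion album) was originally released by Atlantic Records.", '
       '"label_before": "REFUTES", "label_after": "NOT ENOUGH", "type": "PP", '
       '"removed": ["by Columbia Records"]}')
records = load_jsonl([raw])
# Instances whose label flipped to insufficient after the omission:
flipped = [r for r in records if r["label_after"] == "NOT ENOUGH"]
```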
### Data Instances
* FEVER: 600 constituent-level, 400 sentence-level;
* HoVer: 600 constituent-level, 400 sentence-level;
* VitaminC: 600 constituent-level.
### Data Fields
* `claim` - the claim that is being verified
* `evidence` - the augmented evidence for the claim, i.e. the evidence with some removed information
* `label_before` - the original label for the claim-evidence pair, before information was removed from the evidence
* `label_after` - the label for the augmented claim-evidence pair, after information was removed from the evidence, as annotated by crowd-source workers
* `type` - type of the information removed from the evidence. The types are fine-grained and their mapping to the general types -- 7 constituent and 1 sentence type can be found in [types.json](types.json) file.
* `removed` - the text of the removed information from the evidence
* `text_orig` - the original text of the evidence, as presented to crowd-source workers, the text of the removed information is inside `<span style=\"color:red;\"></span>` tags.
### Data Splits
| name |test_fever|test_hover|test_vitaminc|
|----------|-------:|-----:|-------:|
|test| 1000| 1000| 600|
Augmented from the test splits of the corresponding datasets.
### Annotations
#### Annotation process
The workers were provided with the following task description:
For each evidence text, some facts have been removed (marked in <span style="color:red;">red</span>).
You should annotate whether, <b>given the remaining facts in the evidence text, the evidence is still enough for verifying the claim.</b> <br></br>
<ul>
<li>You should select <i><b>'ENOUGH -- IRRELEVANT'</b></i>, if the <b>remaining information is still <i>enough</i></b> for verifying the claim because the <b>removed information is irrelevant</b> for identifying the evidence as SUPPORTS or REFUTES. See examples 1 and 2.</li>
<li>You should select <i><b>'ENOUGH -- REPEATED'</b></i>, if the <b>remaining information is still <i>enough</i></b> for verifying the claim because the <b>removed information is relevant but is also present (repeated) in the remaining (not red) text.</b> See example 3.</li>
<li>You should select <i><b>'NOT ENOUGH'</b></i> -- when <b>1) the removed information is <i>relevant</i></b> for verifying the claim <b> AND 2) it is <i>not present (repeated)</i> in the remaining text.</b> See examples 4, 5, and 6.</li>
<!--<li>You should select <i><b>'CHANGED INFO'</b></i> in the rare cases when the remaining evidence has <b>changed the support for the claim</b></li>-->
</ul>
<b>Note: You should not incorporate your own knowledge or beliefs! You should rely only on the evidence provided for the claim.</b>
The annotators were then given example instance annotations.
Finally, annotators were asked to complete a qualification test in order to be allowed to annotate instances for the task.
The resulting inter-annotator agreement for SufficientFacts is 0.81 Fleiss' kappa from three annotators.
#### Who are the annotators?
The annotations were performed by workers at Amazon Mechanical Turk.
## Additional Information
### Licensing Information
MIT
### Citation Information
```
@article{10.1162/tacl_a_00486,
author = {Atanasova, Pepa and Simonsen, Jakob Grue and Lioma, Christina and Augenstein, Isabelle},
title = "{Fact Checking with Insufficient Evidence}",
journal = {Transactions of the Association for Computational Linguistics},
volume = {10},
pages = {746-763},
year = {2022},
month = {07},
abstract = "{Automating the fact checking (FC) process relies on information obtained from external sources. In this work, we posit that it is crucial for FC models to make veracity predictions only when there is sufficient evidence and otherwise indicate when it is not enough. To this end, we are the first to study what information FC models consider sufficient by introducing a novel task and advancing it with three main contributions. First, we conduct an in-depth empirical analysis of the task with a new fluency-preserving method for omitting information from the evidence at the constituent and sentence level. We identify when models consider the remaining evidence (in)sufficient for FC, based on three trained models with different Transformer architectures and three FC datasets. Second, we ask annotators whether the omitted evidence was important for FC, resulting in a novel diagnostic dataset, SufficientFacts1, for FC with omitted evidence. We find that models are least successful in detecting missing evidence when adverbial modifiers are omitted (21\\% accuracy), whereas it is easiest for omitted date modifiers (63\\% accuracy). Finally, we propose a novel data augmentation strategy for contrastive self-learning of missing evidence by employing the proposed omission method combined with tri-training. It improves performance for Evidence Sufficiency Prediction by up to 17.8 F1 score, which in turn improves FC performance by up to 2.6 F1 score.}",
issn = {2307-387X},
doi = {10.1162/tacl_a_00486},
url = {https://doi.org/10.1162/tacl\_a\_00486},
eprint = {https://direct.mit.edu/tacl/article-pdf/doi/10.1162/tacl\_a\_00486/2037141/tacl\_a\_00486.pdf},
}
```
### Contributions
Thanks to [@apepa](https://github.com/apepa) for adding this dataset. | 9,382 | [
batubayk/TR-News | 2023-03-04T22:39:35.000Z | [
"task_categories:summarization",
"task_categories:text-classification",
"task_categories:text-generation",
"task_categories:text2text-generation",
"size_categories:100K<n<1M",
"language:tr",
"region:us"
] | batubayk | null | null | 4 | 28 | 2022-04-18T17:23:02 | ---
task_categories:
- summarization
- text-classification
- text-generation
- text2text-generation
language:
- tr
pretty_name: TR-News
size_categories:
- 100K<n<1M
---
# Citation
If you use the dataset, please cite the paper:
@article{10.1007/s10579-021-09568-y,
year = {2022},
title = {{Abstractive text summarization and new large-scale datasets for agglutinative languages Turkish and Hungarian}},
author = {Baykara, Batuhan and Güngör, Tunga},
journal = {Language Resources and Evaluation},
issn = {1574-020X},
doi = {10.1007/s10579-021-09568-y},
pages = {1--35}} | 612 | [
bigscience-data/roots_fr_wikisource | 2022-12-12T10:39:13.000Z | [
"language:fr",
"license:cc-by-sa-3.0",
"region:us"
] | bigscience-data | null | null | 0 | 28 | 2022-05-18T09:14:12 | ---
language: fr
license: cc-by-sa-3.0
extra_gated_prompt: 'By accessing this dataset, you agree to abide by the BigScience
Ethical Charter. The charter can be found at:
https://hf.co/spaces/bigscience/ethical-charter'
extra_gated_fields:
I have read and agree to abide by the BigScience Ethical Charter: checkbox
---
ROOTS Subset: roots_fr_wikisource
# wikisource_filtered
- Dataset uid: `wikisource_filtered`
### Description
### Homepage
### Licensing
### Speaker Locations
### Sizes
- 2.6306 % of total
- 12.7884 % of fr
- 19.8886 % of indic-bn
- 20.9966 % of indic-ta
- 2.3478 % of ar
- 4.7068 % of indic-hi
- 18.0998 % of indic-te
- 1.7155 % of es
- 19.4800 % of indic-kn
- 9.1737 % of indic-ml
- 17.1771 % of indic-mr
- 17.1870 % of indic-gu
- 70.3687 % of indic-as
- 1.0165 % of pt
- 7.8642 % of indic-pa
- 1.3501 % of vi
- 4.9411 % of indic-or
- 0.5307 % of ca
- 2.3593 % of id
- 1.5928 % of eu
### BigScience processing steps
#### Filters applied to: fr
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- dedup_template_soft
- filter_remove_empty_docs
- filter_small_docs_bytes_1024
#### Filters applied to: indic-bn
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- dedup_template_soft
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: indic-ta
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- dedup_template_soft
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: ar
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- dedup_template_soft
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: indic-hi
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- dedup_template_soft
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: indic-te
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- dedup_template_soft
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: es
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- dedup_template_soft
- filter_remove_empty_docs
- filter_small_docs_bytes_1024
#### Filters applied to: indic-kn
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- dedup_template_soft
- filter_remove_empty_docs
- remove_wiki_mojibake
- filter_small_docs_bytes_300
#### Filters applied to: indic-ml
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- dedup_template_soft
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: indic-mr
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- dedup_template_soft
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: indic-gu
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- dedup_template_soft
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: indic-as
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- dedup_template_soft
- filter_remove_empty_docs
#### Filters applied to: pt
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- dedup_template_soft
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: indic-pa
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- dedup_template_soft
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: vi
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- dedup_template_soft
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: indic-or
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- dedup_template_soft
- filter_remove_empty_docs
#### Filters applied to: ca
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- dedup_template_soft
- filter_remove_empty_docs
- filter_small_docs_bytes_1024
#### Filters applied to: id
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- dedup_template_soft
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: eu
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- dedup_template_soft
- filter_remove_empty_docs
| 4,382 | [
cat-state/mscoco-1st-caption | 2022-05-29T20:30:35.000Z | [
"license:cc-by-4.0",
"region:us"
] | cat-state | null | null | 0 | 28 | 2022-05-29T19:58:35 | ---
license: cc-by-4.0
---
To reproduce, run `pip install -r requirements.txt` and `download.sh`.
| 99 | [
SpeedOfMagic/ontonotes_english | 2022-07-01T16:06:06.000Z | [
"task_categories:token-classification",
"task_ids:named-entity-recognition",
"annotations_creators:found",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:extended|other",
"language:en",
"license:unknown",
"region:us"
] | SpeedOfMagic | null | null | 2 | 28 | 2022-06-28T17:34:30 | ---
annotations_creators:
- found
language_creators:
- found
language:
- en
license:
- unknown
multilinguality:
- monolingual
pretty_name: ontonotes_english
size_categories:
- 10K<n<100K
source_datasets:
- extended|other
task_categories:
- token-classification
task_ids:
- named-entity-recognition
---
# Dataset Card for ontonotes_english
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [CoNLL-2012 Shared Task](https://conll.cemantix.org/2012/data.html), [Author's page](https://cemantix.org/data/ontonotes.html)
- **Repository:**
- **Paper:** [Towards Robust Linguistic Analysis using OntoNotes](https://aclanthology.org/W13-3516/)
- **Leaderboard:** [Papers With Code](https://paperswithcode.com/sota/named-entity-recognition-ner-on-ontonotes-v5)
- **Point of Contact:**
### Dataset Summary
This is a preprocessed version of what I assume is OntoNotes v5.0.
Instead of sentences being stored in files, the files are unpacked and each sentence is now a row. Fields were also renamed to match [conll2003](https://huggingface.co/datasets/conll2003).
The data comes from a private repository, which in turn obtained it from another public repository whose location is unknown :)
Since the data in all repositories carried no license (the creator of the private repository told me so), there should be no licensing issues. But bear in mind, I give no guarantee that this is the real OntoNotes, and it may differ as a result.
### Supported Tasks and Leaderboards
- [Named Entity Recognition on Ontonotes v5 (English)](https://paperswithcode.com/sota/named-entity-recognition-ner-on-ontonotes-v5)
- [Coreference Resolution on OntoNotes](https://paperswithcode.com/sota/coreference-resolution-on-ontonotes)
- [Semantic Role Labeling on OntoNotes](https://paperswithcode.com/sota/semantic-role-labeling-on-ontonotes)
### Languages
English
## Dataset Structure
### Data Instances
```
{
'tokens': ['Well', ',', 'the', 'Hundred', 'Regiments', 'Offensive', 'was', 'divided', 'into', 'three', 'phases', '.'],
'ner_tags': [0, 0, 29, 30, 30, 30, 0, 0, 0, 27, 0, 0]
}
```
### Data Fields
- **`tokens`** (*`List[str]`*) : **`words`** in original dataset
- **`ner_tags`** (*`List[ClassLabel]`*) : **`named_entities`** in original dataset. The BIO tags for named entities in the sentence.
- tag set : `datasets.ClassLabel(num_classes=37, names=["O", "B-PERSON", "I-PERSON", "B-NORP", "I-NORP", "B-FAC", "I-FAC", "B-ORG", "I-ORG", "B-GPE", "I-GPE", "B-LOC", "I-LOC", "B-PRODUCT", "I-PRODUCT", "B-DATE", "I-DATE", "B-TIME", "I-TIME", "B-PERCENT", "I-PERCENT", "B-MONEY", "I-MONEY", "B-QUANTITY", "I-QUANTITY", "B-ORDINAL", "I-ORDINAL", "B-CARDINAL", "I-CARDINAL", "B-EVENT", "I-EVENT", "B-WORK_OF_ART", "I-WORK_OF_ART", "B-LAW", "I-LAW", "B-LANGUAGE", "I-LANGUAGE",])`
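As a sketch, the integer `ner_tags` of the example instance above can be decoded against this name list; the `NAMES` constant simply restates the tag set from the line above, in index order:

```python
# The ClassLabel names from the tag set above, in index order (37 labels).
NAMES = [
    "O", "B-PERSON", "I-PERSON", "B-NORP", "I-NORP", "B-FAC", "I-FAC",
    "B-ORG", "I-ORG", "B-GPE", "I-GPE", "B-LOC", "I-LOC",
    "B-PRODUCT", "I-PRODUCT", "B-DATE", "I-DATE", "B-TIME", "I-TIME",
    "B-PERCENT", "I-PERCENT", "B-MONEY", "I-MONEY",
    "B-QUANTITY", "I-QUANTITY", "B-ORDINAL", "I-ORDINAL",
    "B-CARDINAL", "I-CARDINAL", "B-EVENT", "I-EVENT",
    "B-WORK_OF_ART", "I-WORK_OF_ART", "B-LAW", "I-LAW",
    "B-LANGUAGE", "I-LANGUAGE",
]

# The `ner_tags` from the Data Instances example above.
ner_tags = [0, 0, 29, 30, 30, 30, 0, 0, 0, 27, 0, 0]
labels = [NAMES[t] for t in ner_tags]
# "the Hundred Regiments Offensive" decodes to B-EVENT followed by I-EVENT
# tags, and "three" to B-CARDINAL.
```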
### Data Splits
_train_, _validation_, and _test_
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
No license
### Citation Information
```
@inproceedings{pradhan-etal-2013-towards,
title = "Towards Robust Linguistic Analysis using {O}nto{N}otes",
author = {Pradhan, Sameer and
Moschitti, Alessandro and
Xue, Nianwen and
Ng, Hwee Tou and
Bj{\"o}rkelund, Anders and
Uryupina, Olga and
Zhang, Yuchen and
Zhong, Zhi},
booktitle = "Proceedings of the Seventeenth Conference on Computational Natural Language Learning",
month = aug,
year = "2013",
address = "Sofia, Bulgaria",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/W13-3516",
pages = "143--152",
}
```
### Contributions
Thanks to the author of the private repository who uploaded this dataset. | 5,430 | [
SerdarHelli/SegmentationOfTeethPanoramicXRayImages | 2022-10-29T20:05:26.000Z | [
"task_categories:image-segmentation",
"task_ids:semantic-segmentation",
"size_categories:n<1K",
"teeth-segmentation",
"dental-imaging",
"medical-imaging",
"region:us"
] | SerdarHelli | null | null | 7 | 28 | 2022-06-29T21:07:00 | ---
size_categories:
- n<1K
task_categories:
- image-segmentation
task_ids:
- semantic-segmentation
tags:
- teeth-segmentation
- dental-imaging
- medical-imaging
train-eval-index:
- config: plain_text
task: semantic_segmentation
task_id: semantic_segmentation
splits:
train_split: train
eval_split: test
col_mapping:
image: image
label: image
---
# Dataset Card for [Dataset Name]
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://github.com/SerdarHelli/Segmentation-of-Teeth-in-Panoramic-X-ray-Image-Using-U-Net](https://github.com/SerdarHelli/Segmentation-of-Teeth-in-Panoramic-X-ray-Image-Using-U-Net)
- **Repository:** [https://github.com/SerdarHelli/Segmentation-of-Teeth-in-Panoramic-X-ray-Image-Using-U-Net](https://github.com/SerdarHelli/Segmentation-of-Teeth-in-Panoramic-X-ray-Image-Using-U-Net)
- **Paper:** [Tooth Instance Segmentation on Panoramic Dental Radiographs Using U-Nets and Morphological Processing](https://dergipark.org.tr/tr/pub/dubited/issue/68307/950568)
- **Leaderboard:**
- **Point of Contact:** S.Serdar Helli
### Dataset Summary
# Semantic-Segmentation-of-Teeth-in-Panoramic-X-ray-Image
The aim of this study is automatic semantic segmentation and measurement of the total length of teeth in one-shot panoramic X-ray images, using a deep learning method with a U-Net model and binary image analysis, in order to provide diagnostic information for the management of dental disorders, diseases, and conditions.
[***Github Link***](https://github.com/SerdarHelli/Segmentation-of-Teeth-in-Panoramic-X-ray-Image-Using-U-Net)
***Original Dataset For Only Images***
DATASET ref - H. Abdi, S. Kasaei, and M. Mehdizadeh, “Automatic segmentation of mandible in panoramic x-ray,” J. Med. Imaging, vol. 2, no. 4, p. 44003, 2015
[Link DATASET for only original images.](https://data.mendeley.com/datasets/hxt48yk462/1)
## Dataset Structure
### Data Instances
An example of 'train' looks as follows.
```
{
"image": X-ray Image (Image),
"label": Binary Image Segmentation Map (Image)
}
```
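Predicted masks for this segmentation task are usually scored against the ground-truth label with an overlap metric such as the Dice coefficient. A minimal NumPy sketch (the metric choice and the binary 0/1 masks are illustrative assumptions, not part of this card):

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice overlap between two binary masks (1 = tooth, 0 = background)."""
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    intersection = np.logical_and(pred, target).sum()
    # eps guards against division by zero when both masks are empty
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

a = np.array([[1, 1], [0, 0]])
b = np.array([[1, 0], [0, 0]])
print(round(float(dice_coefficient(a, b)), 3))  # 0.667
```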
## Dataset Creation
### Source Data
***Original Dataset For Only Images***
DATASET ref - H. Abdi, S. Kasaei, and M. Mehdizadeh, “Automatic segmentation of mandible in panoramic x-ray,” J. Med. Imaging, vol. 2, no. 4, p. 44003, 2015
[Link DATASET for only original images.](https://data.mendeley.com/datasets/hxt48yk462/1)
### Annotations
#### Annotation process
The annotations were made manually.
#### Who are the annotators?
S.Serdar Helli
### Other Known Limitations
The X-Ray Images files associated with this dataset are licensed under a Creative Commons Attribution 4.0 International license.
To Check Out For More Information:
***Original Dataset For Only Images***
DATASET ref - H. Abdi, S. Kasaei, and M. Mehdizadeh, “Automatic segmentation of mandible in panoramic x-ray,” J. Med. Imaging, vol. 2, no. 4, p. 44003, 2015
[Link DATASET for only original images.](https://data.mendeley.com/datasets/hxt48yk462/1)
## Additional Information
### Citation Information
For Labelling
```
@article{helli10tooth,
title={Tooth Instance Segmentation on Panoramic Dental Radiographs Using U-Nets and Morphological Processing},
author={HELL{\.I}, Serdar and HAMAMCI, Anda{\c{c}}},
journal={D{\"u}zce {\"U}niversitesi Bilim ve Teknoloji Dergisi},
volume={10},
number={1},
pages={39--50}
}
```
For Original Images
```
@article{abdi2015automatic,
title={Automatic segmentation of mandible in panoramic x-ray},
author={Abdi, Amir Hossein and Kasaei, Shohreh and Mehdizadeh, Mojdeh},
journal={Journal of Medical Imaging},
volume={2},
number={4},
pages={044003},
year={2015},
publisher={SPIE}
}
```
### Contributions
Thanks to [@SerdarHelli](https://github.com/SerdarHelli) for adding this dataset. | 4,388 | [
[
-0.04205322265625,
-0.03570556640625,
0.0216064453125,
-0.0128326416015625,
-0.04217529296875,
-0.0024585723876953125,
0.00307464599609375,
-0.03436279296875,
0.038299560546875,
0.02130126953125,
-0.049957275390625,
-0.06512451171875,
-0.01235198974609375,
0... |
biglam/lampeter_corpus | 2022-09-15T15:52:46.000Z | [
"task_categories:text-classification",
"task_ids:multi-label-classification",
"task_ids:multi-class-classification",
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:n<1K",
"source_datasets:original",
"language:en",
"license:cc-by-sa-... | biglam | The Lampeter Corpus of Early Modern English Tracts is a collection of texts on
various subject matter published between 1640 and 1740 – a time that is marked by the rise of mass
publication, the development of a public discourse in many areas of everyday life
and, last but not least, the standardisation of British English. | @misc{20.500.12024/3193,
title = {The Lampeter Corpus of Early Modern English Tracts},
url = {http://hdl.handle.net/20.500.12024/3193},
note = {Oxford Text Archive},
copyright = {Distributed by the University of Oxford under a Creative Commons Attribution-{ShareAlike} 3.0 Unported License}, | 1 | 28 | 2022-07-18T21:33:13 | ---
annotations_creators:
- no-annotation
paperswithcode_id: null
language:
- en
language_creators:
- found
license:
- cc-by-sa-3.0
multilinguality:
- monolingual
pretty_name: Lampeter Corpus
size_categories:
- n<1K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- multi-label-classification
- multi-class-classification
---
# Dataset Card for lampeter_corpus
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-instances)
- [Data Splits](#data-instances)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** https://ota.bodleian.ox.ac.uk/repository/xmlui/handle/20.500.12024/3193
- **Repository:** [Needs More Information]
- **Paper:** [Needs More Information]
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** Josef Schmied, Claudia Claridge, Rainer Siemund
### Dataset Summary
The Lampeter Corpus of Early Modern English Tracts is a collection of texts on various subject matter published between 1640 and 1740, a time that was marked by the rise of mass publication, the development of public discourse in many areas of everyday life and last but not least, the standardisation of British English. Each text belongs to one of the following genres: Law, Economy, Religion, Politics, Science, Miscellaneous
### Supported Tasks and Leaderboards
- `text-classification`: This dataset comes with dates and genre classifications for each text which can be used to finetune a model for text classification.
### Languages
The text in the dataset is British English. The associated BCP-47 code is `en-GB`
## Dataset Structure
### Data Instances
A typical data point contains an id, a text, the head of the text (which can be missing on some occasions) and the title. The two features which can be used for classification are `date`, which is the year of publication and `genre` which classifies the text into one of six broad areas.
```
{
'id': 'SciB1735',
'text': '\nI. WHEN I read your Defence of the British Mathematicians, I could not, Sir, but admire your Courage in asserting with such undoubting Assurance things so easily disproved. This to me seemed unaccountable, till I reflected on what you say (p. 32.) when upon my having appealed to every thinking Reader, whether it be possible to frame any clear Conception of Fluxions, you express yourself in the following manner, "Pray, Sir, who are those thinking Readers you ap\npeal to? Are they Geometricians, or Persons wholly ignorant of Geometry? If the former, I leave it to them: If the latter, I ask how well are they qualified to judge of the Method of Fluxions"? It must be acknowledged you seem by this Dilemma secure in the favour of one Part of your Readers, and the ignorance of the other. I am nevertheless persuaded there are fair and candid Men among the Mathematicians. And for those who are not Mathematicians, I shall endeavour so to unveil this Mystery, [TRUNCATED]',
'date': '1735',
'genre': 'Science',
'head': 'A DEFENCE OF FREE-THINKING IN Mathematics; &c.\n',
'title': 'A defence of free-thinking in mathematics [...]'
}
```
### Data Fields
The dataset contains the following fields:
- `id`: Unique identifier ("string"),
- `text`: Text in the document ("string"),
- `date`: Date of publication ("date64"),
- `genre`: Broad classification ("string"),
- `head`: Often the same as the title; can be missing ("string"),
- `title`: Title of document ("string")
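For the classification task described above, the string-valued `genre` and `date` fields can be mapped to numeric targets. A minimal sketch over plain dictionaries (the genre order follows the list in the summary; the decade bucketing is an illustrative assumption):

```python
# Genre inventory as listed in the dataset summary
GENRES = ["Law", "Economy", "Religion", "Politics", "Science", "Miscellaneous"]

def to_labels(example):
    """Map a Lampeter record to integer targets for classification."""
    genre_id = GENRES.index(example["genre"])
    decade = (int(example["date"]) // 10) * 10  # e.g. 1735 -> 1730
    return {"genre_id": genre_id, "decade": decade}

sample = {"genre": "Science", "date": "1735"}
print(to_labels(sample))  # {'genre_id': 4, 'decade': 1730}
```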
### Data Splits
Train: 120
## Dataset Creation
### Curation Rationale
The period covered by the Lampeter Corpus, 1640 to 1740, marks a crucial period in English history and the elaboration of English as a multi-purpose language. The texts selected for the corpus reflect the standardisation process of English and historical developments between the outbreak of the Civil War and the beginning of the Industrial Revolution. To meet the needs of linguists and historians alike, the Lampeter project has attempted to create a balanced corpus rather than a randomly chosen archive or collection. A balanced corpus, then, is characterised by several transparent sampling criteria.
### Source Data
#### Initial Data Collection and Normalization
The original data is selected according to the following criteria:
- Complete texts only, including dedications, prefaces, postscripts, etc.
- Texts are of varying length, ranging from c. 3,000 to c. 20,000 words.
- Each author appears only once to avoid idiosyncratic language use.
- Major literary figures of the time were excluded since their writing style can be studied in other, existing collections.
- Generally, only first editions of the texts; later editions only if changes were made by the original authors, thus ensuring the authenticity of the language.
#### Who are the source language producers?
Authors of texts between 1640-1740
### Annotations
#### Annotation process
N/A
#### Who are the annotators?
N/A
### Personal and Sensitive Information
N/A
## Considerations for Using the Data
### Social Impact of Dataset
N/A
### Discussion of Biases
The social biases of the time in terms of race, sex, gender, etc. might be encountered in this dataset
### Other Known Limitations
None
## Additional Information
### Dataset Curators
Josef Schmied, Claudia Claridge, Rainer Siemund
### Licensing Information
Creative Commons Attribution-ShareAlike 3.0 Unported (CC BY-SA 3.0)
### Citation Information
University of Oxford, The Lampeter Corpus of Early Modern English Tracts, Oxford Text Archive, http://hdl.handle.net/20.500.12024/3193. | 6,403 | [
[
-0.03973388671875,
-0.0247955322265625,
0.0219573974609375,
0.006740570068359375,
-0.00626373291015625,
-0.01297760009765625,
-0.0269317626953125,
-0.03875732421875,
0.0457763671875,
0.0245819091796875,
-0.0295562744140625,
-0.039276123046875,
-0.039031982421875... |
BirdL/DALL-E-Cats | 2022-09-28T21:07:37.000Z | [
"task_categories:image-classification",
"task_categories:unconditional-image-generation",
"size_categories:1K<n<10K",
"license:other",
"region:us"
] | BirdL | null | null | 0 | 28 | 2022-08-01T20:37:15 | ---
annotations_creators: []
language: []
language_creators: []
license:
- other
multilinguality: []
pretty_name: DALL-E Cats Dataset
size_categories:
- 1K<n<10K
source_datasets: []
tags: []
task_categories:
- image-classification
- unconditional-image-generation
task_ids: []
---
DALL-E-Cats is a dataset meant to produce a synthetic animal dataset. This is a successor to DALL-E-Dogs. DALL-E-Dogs and DALL-E-Cats will be fed into an image classifier to see how it performs. This is under the [BirdL-AirL License.](https://huggingface.co/spaces/BirdL/license/) | 562 | [
[
-0.0401611328125,
-0.041656494140625,
0.0033588409423828125,
0.0202789306640625,
-0.012054443359375,
0.0312347412109375,
0.026641845703125,
-0.03851318359375,
0.0277557373046875,
0.043853759765625,
-0.045562744140625,
-0.0188751220703125,
0.00397491455078125,
... |
din0s/asqa | 2022-09-20T16:14:54.000Z | [
"task_categories:question-answering",
"task_ids:open-domain-qa",
"annotations_creators:crowdsourced",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:extended|ambig_qa",
"language:en",
"license:apache-2.0",
"factoid questions",
"l... | din0s | null | null | 0 | 28 | 2022-09-19T22:25:51 | ---
annotations_creators:
- crowdsourced
language:
- en
language_creators:
- expert-generated
license:
- apache-2.0
multilinguality:
- monolingual
pretty_name: ASQA
size_categories:
- 1K<n<10K
source_datasets:
- extended|ambig_qa
tags:
- factoid questions
- long-form answers
task_categories:
- question-answering
task_ids:
- open-domain-qa
---
# Dataset Card for ASQA
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Additional Information](#additional-information)
- [Contributions](#contributions)
## Dataset Description
- **Repository:** https://github.com/google-research/language/tree/master/language/asqa
- **Paper:** https://arxiv.org/abs/2204.06092
- **Leaderboard:** https://ambigqa.github.io/asqa_leaderboard.html
### Dataset Summary
ASQA is the first long-form question answering dataset that focuses on ambiguous factoid questions. Unlike previous long-form answer datasets, each question is annotated with both long-form answers and extractive question-answer pairs, which should be answerable by the generated passage. A generated long-form answer will be evaluated using both ROUGE and QA accuracy. In the paper, we show that these evaluation metrics are well-correlated with human judgments.
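The ROUGE part of the evaluation rewards lexical overlap with the reference long answer. A minimal ROUGE-L F1 sketch over whitespace tokens (a simplification of the official scorer, shown only to illustrate the metric):

```python
def lcs_len(a, b):
    """Length of the longest common subsequence of two token lists."""
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            dp[i + 1][j + 1] = dp[i][j] + 1 if x == y else max(dp[i][j + 1], dp[i + 1][j])
    return dp[len(a)][len(b)]

def rouge_l_f1(candidate, reference):
    """ROUGE-L F1 on lowercased whitespace tokens."""
    c, r = candidate.lower().split(), reference.lower().split()
    lcs = lcs_len(c, r)
    if lcs == 0:
        return 0.0
    precision, recall = lcs / len(c), lcs / len(r)
    return 2 * precision * recall / (precision + recall)

print(round(rouge_l_f1("the act blames the government", "the act blames the people"), 2))
# 0.8
```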
### Supported Tasks and Leaderboards
Long-form Question Answering. [Leaderboard](https://ambigqa.github.io/asqa_leaderboard.html)
### Languages
- English
## Dataset Structure
### Data Instances
```py
{
"ambiguous_question": "Where does the civil liberties act place the blame for the internment of u.s. citizens?",
"qa_pairs": [
{
"context": "No context provided",
"question": "Where does the civil liberties act place the blame for the internment of u.s. citizens by apologizing on behalf of them?",
"short_answers": [
"the people of the United States"
],
"wikipage": None
},
{
"context": "No context provided",
"question": "Where does the civil liberties act place the blame for the internment of u.s. citizens by making them pay reparations?",
"short_answers": [
"United States government"
],
"wikipage": None
}
],
"wikipages": [
{
"title": "Civil Liberties Act of 1988",
"url": "https://en.wikipedia.org/wiki/Civil%20Liberties%20Act%20of%201988"
}
],
"annotations": [
{
"knowledge": [
{
"content": "The Civil Liberties Act of 1988 (Pub.L. 100–383, title I, August 10, 1988, 102 Stat. 904, 50a U.S.C. § 1989b et seq.) is a United States federal law that granted reparations to Japanese Americans who had been interned by the United States government during World War II.",
"wikipage": "Civil Liberties Act of 1988"
}
],
"long_answer": "The Civil Liberties Act of 1988 is a United States federal law that granted reparations to Japanese Americans who had been interned by the United States government during World War II. In the act, the blame for the internment of U.S. citizens was placed on the people of the United States, by apologizing on behalf of them. Furthermore, the blame for the internment was placed on the United States government, by making them pay reparations."
}
],
"sample_id": -4557617869928758000
}
```
### Data Fields
- `ambiguous_question`: ambiguous question from AmbigQA.
- `annotations`: long-form answers to the ambiguous question constructed by ASQA annotators.
- `annotations/knowledge`: list of additional knowledge pieces.
- `annotations/knowledge/content`: a passage from Wikipedia.
- `annotations/knowledge/wikipage`: title of the Wikipedia page the passage was taken from.
- `annotations/long_answer`: annotation.
- `qa_pairs`: Q&A pairs from AmbigQA which are used for disambiguation.
- `qa_pairs/context`: additional context provided.
- `qa_pairs/question`: disambiguated question from AmbigQA.
- `qa_pairs/short_answers`: list of short answers from AmbigQA.
- `qa_pairs/wikipage`: title of the Wikipedia page the additional context was taken from.
- `sample_id`: the unique id of the sample
- `wikipages`: list of Wikipedia pages visited by AmbigQA annotators.
- `wikipages/title`: title of the Wikipedia page.
- `wikipages/url`: link to the Wikipedia page.
### Data Splits
| **Split** | **Instances** |
|-----------|---------------|
| Train | 4353 |
| Dev | 948 |
## Additional Information
### Contributions
Thanks to [@din0s](https://github.com/din0s) for adding this dataset. | 4,828 | [
[
-0.042724609375,
-0.0567626953125,
0.038848876953125,
-0.0074920654296875,
-0.01105499267578125,
-0.0029087066650390625,
-0.006824493408203125,
-0.02215576171875,
0.03192138671875,
0.0386962890625,
-0.04669189453125,
-0.04119873046875,
-0.0235443115234375,
0... |
heegyu/namuwiki | 2022-10-01T02:40:40.000Z | [
"task_categories:other",
"language_creators:other",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"language:ko",
"license:cc-by-nc-sa-2.0",
"region:us"
] | heegyu | null | null | 2 | 28 | 2022-10-01T00:40:12 | ---
license: cc-by-nc-sa-2.0
language:
- ko
language_creators:
- other
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
task_categories:
- other
---
# namu.wiki database dump
https://namu.wiki/ database dump 2022/03/01<br/>
- 867024 rows
- download size: 3GB
## Usage
```bash
pip install datasets
```
```python
from datasets import load_dataset
dataset = load_dataset("heegyu/namuwiki")
print(dataset["train"][0])
```
```
{'title': '!!아앗!!',
'text': '\n[목차]\n\n\'\'\'{{{+1 !!ああっと!!}}}\'\'\'\n\n== 개요 ==\n[[파일:3444050440.jpg|width=60%]]\n▲[[신 세계수의 미궁 2 파프니르기사|신 세계수의 미궁 2]]에서 뜬 !!아앗!!\n\n[[세계수의 미궁 시리즈]]에 전통으로 등장하는 대사. [[세계수의 미궁 2 제왕의 성배|2편]]부터 등장했으며 훌륭한 [[사망 플래그]]의 예시이다.\n\n세계수의 모험가들이 탐험하는 던전인 수해의 구석구석에는 채취/벌채/채굴 포인트가 있으며, 이를 위한 채집 스킬에 투자하면 제한된 채집 기회에서 보다 큰 이득을 챙길 수 있다. 그러나 분배할 수 있는 스킬 포인트는 한정되어 있기 때문에 채집 스킬에 투자하는 만큼 전투 스킬 레벨은 낮아지게 된다.[* 다만 채집 시스템은 신 세계수 시리즈의 그리모어 복제, 복합 채집 스킬인 야생의 감, 5편의 종족 특유 스킬, 크로스의 1레벨이 만렙인 채집 스킬 등으로 편의성이 점차 나아져서 채집 스킬 때문에 스킬 트리가 내려가는 일은 점점 줄어들었다.] !!아앗!!이 발생하는 과정을 요약하면 다음과 같다.\n\n 1. 채집용 캐릭터들로 이루어진 약한 파티(ex: [[레인저(세계수의 미궁 2)|레인저]] 5명)가 수해에 입장한다.\n 1. 필드 전투를 피해 채집 포인트에 도착한 후 열심히 아이템을 캐는 중에...\n 1. \'\'\'!!아앗!!\'\'\' ~~라플레시아가 나타났다!~~\n 이때 등장하는 것은 [[FOE(세계수의 미궁 시리즈)|FOE]]는 아니지만 \'\'\'훨씬 위층에 등장하는 강력한 필드 몬스터이며 선제 공격을 당하게 된다!\'\'\'\n 1. \'\'\'으앙 죽음\'\'\'(hage)\n\n여담으로 !!아앗!!의 유래는 1인칭 던전 크롤러의 원조 [[위저드리]]에서 함정을 건드렸을 때 나오는 대사 Oops!(おおっと!)라고 한다.\n\n== 각 작품에서의 모습 ==\n=== [[세계수의 미궁 2 제왕의 성배]] ===\n!!아앗!!의 악랄함은 첫 등장한 작품이자 시리즈 중에서도 불친절하기로 정평이 난 2편이 절정이었다. 그야말로 위의 !!아앗!! 시퀀스 그대로, 묻지도 따지지도 않고 채집할 때마다 일정 확률로 \'\'\'강제로\'\'\' 전투에 돌입해야 했다. 게다가 이럴 때 쓰라고 있는 레인저의 스킬 \'위험 감지(중간 확률로 적의 선제 공격을 무효화)\'는 정작 작동하지 않는다!\n\n참고로 2편에서 채집 도중 !!아앗!!이 뜰 확률은 [[http://www.atlusnet.jp/topic/detail/910|고작 1%다.]] [[던파확률의 법칙|낮아 보이는 확률이어도 플레이 중 한 번이라도 일어나는 것]]을 경험하는 체감 확률을 고려하여 확률을 설정한다고.\n\n=== [[세계수의 미궁 3 성해의 내방자]] ===\n다행히 채집 중 낮은 확률로 "좋은 아이템을 얻을 수 있을 것 같지만... 주변에서 몬스터들의 기척이 느껴진다."는 메시지가 뜨고 이때 운이 좋으면 레어 아이템을 얻을 수 있지만 반대의 경우 적과 싸우게 되는 것으로 조정되었다.\n\n=== [[세계수의 미궁 4 전승의 거신]] ===\n기본적인 것은 3편과 같지만, 4편에서는 움직이지 않고 채집할 때도 턴이 경과하도록 조정되었기 때문에 주변에 있는 FOE를 잊고 채집에 몰두하다가 FOE와 부딪히면 FOE 버전 !!아앗!!이 뜬다. 그리고 난이도 CASUAL로 플레이시, FOE로 인한 !!아앗!!을 제외하면 절대로 발생하지 않는다.\n\n=== [[신 세계수의 미궁 밀레니엄의 소녀|신 세계수의]] [[신 세계수의 미궁 2 파프니르기사|미궁 시리즈]] ===\n채집 방식이 한 턴으로 끝나는 구조[* 채집으로 한 번 아이템을 획득하면 "다시, (채집 스킬)에 의해..."가 뜨면서 한꺼번에 획득되는 구조.]로 바뀐 덕분인지 강제 조우로 다시 회귀해버렸다(...). 그나마 위험 감지 먹통과 같은 버그성 난점들은 수정되었다. 
그 이후에 나온 [[세계수의 미궁 5 오랜 신화의 끝]]과 시리즈의 집대성 작품이자 3DS 마지막 작품인 [[세계수의 미궁 X]]도 마찬가지.\n\n=== [[세계수의 미궁 X]] ===\n본작의 채집은 신 세계수 시리즈와 같은 매커니즘이라 굳이 언급할 필요는 없으나, 퀘스트중에 2편의 !!아앗!! 시퀀스를 재현하면서 \'\'\'라플레시아\'\'\'가 등장하는 퀘스트가 존재한다.(...) 깨알같이 시스템 메세지 창이 아니라 대화창을 이용해서 완벽 재현한 것이 포인트.\n\n=== [[페르소나 Q 섀도우 오브 더 래버린스]] ===\n세계수 시스템을 기반으로 한 [[페르소나 시리즈]]와의 콜라보 작품인 페르소나 Q에서도 등장한다. 3, 4편과 같이 파워 스폿에서 채집 도중 메시지가 뜨며, 실패하면 파티에 참가하고 있는 멤버 중 한 명의 [[http://nico.ms/sm25683358|!!아앗!! 하는 음성]] ~~또는 [[코로마루|개소리]]~~과 함께 그 던전의 \'강적\'인 거대 [[섀도(페르소나 시리즈)|섀도우]]가 나타난다.\n\n그러나 내비 전용 스킬인 뱀눈 노려보기(위험 감지와 같은 효과)와 채집 보조 스킬은 파티의 전투력에 전혀 지장을 주지 않으며, \'대안심\'을 달면 거의 볼 일이 없어져서 초중반 이후에는 존재감이 급격히 줄어든다.\n[[분류:세계수의 미궁 시리즈]]',
'contributors': '110.46.34.123,kirby10,max0243,218.54.117.149,ruby3141,121.165.63.239,iviyuki,1.229.200.194,anatra95,kiri47,175.127.134.2,nickchaos71,chkong1998,kiwitree2,namubot,huwieblusnow',
'namespace': ''}
``` | 3,280 | [
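The `text` field retains namu-wiki markup such as `[[target]]`/`[[target|label]]` links and `'''bold'''` marks. A rough cleanup sketch for downstream NLP use (a partial regex heuristic, not a full parser):

```python
import re

def strip_namu_links(text):
    """Replace namu-wiki links with their display text and drop bold marks."""
    # [[target]] -> target, [[target|label]] -> label
    text = re.sub(r"\[\[(?:[^|\]]*\|)?([^\]]*)\]\]", r"\1", text)
    return text.replace("'''", "")

print(strip_namu_links("'''[[세계수의 미궁 시리즈]]'''의 [[FOE(세계수의 미궁 시리즈)|FOE]]"))
# 세계수의 미궁 시리즈의 FOE
```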
[
-0.04315185546875,
-0.045074462890625,
0.008514404296875,
0.0202484130859375,
-0.03399658203125,
-0.006458282470703125,
0.01555633544921875,
-0.02783203125,
0.067138671875,
0.0311737060546875,
-0.0318603515625,
-0.028594970703125,
-0.045074462890625,
0.00492... |
tomekkorbak/detoxify-pile-chunk3-250000-300000 | 2022-10-06T03:07:35.000Z | [
"region:us"
] | tomekkorbak | null | null | 0 | 28 | 2022-10-03T19:52:11 | Entry not found | 15 | [
[
-0.02142333984375,
-0.014984130859375,
0.057220458984375,
0.0288238525390625,
-0.03509521484375,
0.04656982421875,
0.052520751953125,
0.00506591796875,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060455322265625,
0.03793334... |
osanseviero/dummy_ja_audio | 2022-10-07T14:23:30.000Z | [
"region:us"
] | osanseviero | null | null | 0 | 28 | 2022-10-07T14:23:17 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
esb/datasets | 2023-01-16T17:51:39.000Z | [
"task_categories:automatic-speech-recognition",
"annotations_creators:expert-generated",
"annotations_creators:crowdsourced",
"annotations_creators:machine-generated",
"language_creators:crowdsourced",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
... | esb | null | null | 7 | 28 | 2022-10-24T10:53:50 | ---
annotations_creators:
- expert-generated
- crowdsourced
- machine-generated
language:
- en
language_creators:
- crowdsourced
- expert-generated
license:
- cc-by-4.0
- apache-2.0
- cc0-1.0
- cc-by-nc-3.0
- other
multilinguality:
- monolingual
pretty_name: datasets
size_categories:
- 100K<n<1M
- 1M<n<10M
source_datasets:
- original
- extended|librispeech_asr
- extended|common_voice
tags:
- asr
- benchmark
- speech
- esb
task_categories:
- automatic-speech-recognition
extra_gated_prompt: |-
Three of the ESB datasets have specific terms of usage that must be agreed to before using the data.
To do so, fill in the access forms on the specific datasets' pages:
* Common Voice: https://huggingface.co/datasets/mozilla-foundation/common_voice_9_0
* GigaSpeech: https://huggingface.co/datasets/speechcolab/gigaspeech
* SPGISpeech: https://huggingface.co/datasets/kensho/spgispeech
extra_gated_fields:
I hereby confirm that I have registered on the original Common Voice page and agree to not attempt to determine the identity of speakers in the Common Voice dataset: checkbox
I hereby confirm that I have accepted the terms of usages on GigaSpeech page: checkbox
I hereby confirm that I have accepted the terms of usages on SPGISpeech page: checkbox
---
All eight datasets in ESB can be downloaded and prepared in just a single line of code through the Hugging Face Datasets library:
```python
from datasets import load_dataset
librispeech = load_dataset("esb/datasets", "librispeech", split="train")
```
- `"esb/datasets"`: the repository namespace. This is fixed for all ESB datasets.
- `"librispeech"`: the dataset name. This can be changed to any one of the eight datasets in ESB to download that dataset.
- `split="train"`: the split. Set this to one of train/validation/test to generate a specific split. Omit the `split` argument to generate all splits for a dataset.
The datasets are fully prepared, such that the audio and transcription files can be used directly in training/evaluation scripts.
## Dataset Information
A data point can be accessed by indexing the dataset object loaded through `load_dataset`:
```python
print(librispeech[0])
```
A typical data point comprises the path to the audio file and its transcription. Also included is information of the dataset from which the sample derives and a unique identifier name:
```python
{
'dataset': 'librispeech',
'audio': {'path': '/home/sanchit-gandhi/.cache/huggingface/datasets/downloads/extracted/d2da1969fe9e7d06661b5dc370cf2e3c119a14c35950045bcb76243b264e4f01/374-180298-0000.flac',
'array': array([ 7.01904297e-04, 7.32421875e-04, 7.32421875e-04, ...,
-2.74658203e-04, -1.83105469e-04, -3.05175781e-05]),
'sampling_rate': 16000},
'text': 'chapter sixteen i might have told you of the beginning of this liaison in a few lines but i wanted you to see every step by which we came i to agree to whatever marguerite wished',
'id': '374-180298-0000'
}
```
### Data Fields
- `dataset`: name of the ESB dataset from which the sample is taken.
- `audio`: a dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate.
- `text`: the transcription of the audio file.
- `id`: unique id of the data sample.
### Data Preparation
#### Audio
The audio for all ESB datasets is segmented into sample lengths suitable for training ASR systems. The Hugging Face datasets library decodes audio files on the fly, reading the segments and converting them to Python arrays. Consequently, no further preparation of the audio is required before use in training/evaluation scripts.
Note that when accessing the audio column: `dataset[0]["audio"]` the audio file is automatically decoded and resampled to `dataset.features["audio"].sampling_rate`. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the `"audio"` column, i.e. `dataset[0]["audio"]` should always be preferred over `dataset["audio"][0]`.
#### Transcriptions
The transcriptions corresponding to each audio file are provided in their 'error corrected' format. No transcription pre-processing is applied to the text, only necessary 'error correction' steps such as removing junk tokens (_<unk>_) or converting symbolic punctuation to spelled out form (_<comma>_ to _,_). As such, no further preparation of the transcriptions is required to be used in training/evaluation scripts.
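The 'error correction' steps above can be sketched with a couple of substitutions. A minimal sketch (the `<period>` token and the exact token inventory are illustrative assumptions; only `<unk>` and `<comma>` are named in this card):

```python
import re

# Assumed inventory of spoken-punctuation tokens, for illustration only
SPOKEN_PUNCT = {"<comma>": ",", "<period>": "."}

def correct_transcription(text):
    """Drop junk tokens and spell symbolic punctuation back out."""
    text = re.sub(r"<unk>", "", text)            # remove junk tokens
    for token, symbol in SPOKEN_PUNCT.items():
        text = text.replace(f" {token}", symbol)  # " <comma>" -> ","
    return re.sub(r"\s+", " ", text).strip()      # collapse leftover whitespace

print(correct_transcription("so <unk> yes <comma> we agree <period>"))
# so yes, we agree.
```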
Transcriptions are provided for training and validation splits. The transcriptions are **not** provided for the test splits. ESB requires you to generate predictions for the test sets and upload them to https://huggingface.co/spaces/esb/leaderboard for scoring.
### Access
All eight of the datasets in ESB are accessible, and licensing information is freely available. Three of the ESB datasets have specific terms of usage that must be agreed to before using the data. To do so, fill in the access forms on the specific datasets' pages:
* Common Voice: https://huggingface.co/datasets/mozilla-foundation/common_voice_9_0
* GigaSpeech: https://huggingface.co/datasets/speechcolab/gigaspeech
* SPGISpeech: https://huggingface.co/datasets/kensho/spgispeech
### Diagnostic Dataset
ESB contains a small, 8h diagnostic dataset of in-domain validation data with newly annotated transcriptions. The audio data is sampled from each of the ESB validation sets, giving a range of different domains and speaking styles. The transcriptions are annotated according to a consistent style guide with two formats: normalised and un-normalised. The dataset is structured in the same way as the ESB dataset, by grouping audio-transcription samples according to the dataset from which they were taken. We encourage participants to use this dataset when evaluating their systems to quickly assess performance on a range of different speech recognition conditions. For more information, visit: [esb/diagnostic-dataset](https://huggingface.co/datasets/esb/diagnostic-dataset).
## Summary of ESB Datasets
| Dataset | Domain | Speaking Style | Train (h) | Dev (h) | Test (h) | Transcriptions | License |
|--------------|-----------------------------|-----------------------|-----------|---------|----------|--------------------|-----------------|
| LibriSpeech | Audiobook | Narrated | 960 | 11 | 11 | Normalised | CC-BY-4.0 |
| Common Voice | Wikipedia | Narrated | 1409 | 27 | 27 | Punctuated & Cased | CC0-1.0 |
| Voxpopuli | European Parliament | Oratory | 523 | 5 | 5 | Punctuated | CC0 |
| TED-LIUM | TED talks | Oratory | 454 | 2 | 3 | Normalised | CC-BY-NC-ND 3.0 |
| GigaSpeech | Audiobook, podcast, YouTube | Narrated, spontaneous | 2500 | 12 | 40 | Punctuated | apache-2.0 |
| SPGISpeech   | Financial meetings          | Oratory, spontaneous  | 4900      | 100     | 100      | Punctuated & Cased | User Agreement  |
| Earnings-22  | Financial meetings          | Oratory, spontaneous  | 105       | 5       | 5        | Punctuated & Cased | CC-BY-SA-4.0    |
| AMI | Meetings | Spontaneous | 78 | 9 | 9 | Punctuated & Cased | CC-BY-4.0 |
## LibriSpeech
The LibriSpeech corpus is a standard large-scale corpus for assessing ASR systems. It consists of approximately 1,000 hours of narrated audiobooks from the [LibriVox](https://librivox.org) project. It is licensed under CC-BY-4.0.
Example Usage:
```python
librispeech = load_dataset("esb/datasets", "librispeech")
```
Train/validation splits:
- `train` (combination of `train.clean.100`, `train.clean.360` and `train.other.500`)
- `validation.clean`
- `validation.other`
Test splits:
- `test.clean`
- `test.other`
Also available are subsets of the train split, which can be accessed by setting the `subconfig` argument:
```python
librispeech = load_dataset("esb/datasets", "librispeech", subconfig="clean.100")
```
- `clean.100`: 100 hours of training data from the 'clean' subset
- `clean.360`: 360 hours of training data from the 'clean' subset
- `other.500`: 500 hours of training data from the 'other' subset
## Common Voice
Common Voice is a series of crowd-sourced open-licensed speech datasets where speakers record text from Wikipedia in various languages. The speakers are of various nationalities and native languages, with different accents and recording conditions. We use the English subset of version 9.0 (27-4-2022), with approximately 1,400 hours of audio-transcription data. It is licensed under CC0-1.0.
Example usage:
```python
common_voice = load_dataset("esb/datasets", "common_voice", use_auth_token=True)
```
Training/validation splits:
- `train`
- `validation`
Test splits:
- `test`
## VoxPopuli
VoxPopuli is a large-scale multilingual speech corpus consisting of political data sourced from 2009-2020 European Parliament event recordings. The English subset contains approximately 550 hours of speech largely from non-native English speakers. It is licensed under CC0.
Example usage:
```python
voxpopuli = load_dataset("esb/datasets", "voxpopuli")
```
Training/validation splits:
- `train`
- `validation`
Test splits:
- `test`
## TED-LIUM
TED-LIUM consists of English-language TED Talk conference videos covering a range of different cultural, political, and academic topics. It contains approximately 450 hours of transcribed speech data. It is licensed under CC-BY-NC-ND 3.0.
Example usage:
```python
tedlium = load_dataset("esb/datasets", "tedlium")
```
Training/validation splits:
- `train`
- `validation`
Test splits:
- `test`
## GigaSpeech
GigaSpeech is a multi-domain English speech recognition corpus created from audiobooks, podcasts and YouTube. We provide the large train set (2,500 hours) and the standard validation and test splits. It is licensed under apache-2.0.
Example usage:
```python
gigaspeech = load_dataset("esb/datasets", "gigaspeech", use_auth_token=True)
```
Training/validation splits:
- `train` (`l` subset of training data (2,500 h))
- `validation`
Test splits:
- `test`
Also available are subsets of the train split, which can be accessed by setting the `subconfig` argument:
```python
gigaspeech = load_dataset("esb/datasets", "gigaspeech", subconfig="xs", use_auth_token=True)
```
- `xs`: extra-small subset of training data (10 h)
- `s`: small subset of training data (250 h)
- `m`: medium subset of training data (1,000 h)
- `xl`: extra-large subset of training data (10,000 h)
## SPGISpeech
SPGISpeech consists of company earnings calls that have been manually transcribed by S&P Global, Inc according to a professional style guide. We provide the large train set (5,000 hours) and the standard validation and test splits. It is licensed under a Kensho user agreement.
Loading the dataset requires authorization.
Example usage:
```python
spgispeech = load_dataset("esb/datasets", "spgispeech", use_auth_token=True)
```
Training/validation splits:
- `train` (`l` subset of training data (~5,000 h))
- `validation`
Test splits:
- `test`
Also available are subsets of the train split, which can be accessed by setting the `subconfig` argument:
```python
spgispeech = load_dataset("esb/datasets", "spgispeech", subconfig="s", use_auth_token=True)
```
- `s`: small subset of training data (~200 h)
- `m`: medium subset of training data (~1,000 h)
## Earnings-22
Earnings-22 is a 119-hour corpus of English-language earnings calls collected from global companies, with speakers of many different nationalities and accents. It is licensed under CC-BY-SA-4.0.
Example usage:
```python
earnings22 = load_dataset("esb/datasets", "earnings22")
```
Training/validation splits:
- `train`
- `validation`
Test splits:
- `test`
## AMI
The AMI Meeting Corpus consists of 100 hours of meeting recordings from multiple recording devices synced to a common timeline. It is licensed under CC-BY-4.0.
Example usage:
```python
ami = load_dataset("esb/datasets", "ami")
```
Training/validation splits:
- `train`
- `validation`
Test splits:
- `test` | 12,408 | [
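Systems trained on these corpora are typically scored with word error rate (WER): the word-level edit distance between hypothesis and reference, divided by the number of reference words. A minimal, dependency-free sketch of that computation (in practice a library such as `jiwer` is the usual choice):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level Levenshtein distance / number of reference words."""
    ref, hyp = reference.split(), hypothesis.split()
    # One-row dynamic programming over the edit-distance table.
    d = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, start=1):
        prev_diag, d[0] = d[0], i
        for j, h in enumerate(hyp, start=1):
            prev_diag, d[j] = d[j], min(
                d[j] + 1,              # deletion
                d[j - 1] + 1,          # insertion
                prev_diag + (r != h),  # substitution (or match)
            )
    return d[-1] / len(ref)

print(wer("the cat sat on the mat", "the cat sat on mat"))  # 1 deletion / 6 words ≈ 0.167
```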
[
-0.044677734375,
-0.047454833984375,
0.001880645751953125,
0.033721923828125,
-0.005939483642578125,
-0.007476806640625,
-0.0250701904296875,
-0.0305938720703125,
0.04364013671875,
0.041229248046875,
-0.0599365234375,
-0.045318603515625,
-0.03472900390625,
0... |
stjiris/IRIS_sts | 2023-01-08T02:54:33.000Z | [
"task_categories:text-classification",
"task_ids:text-scoring",
"task_ids:semantic-similarity-scoring",
"annotations_creators:automated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K>n",
"source_datasets:original",
"language:pt",
"license:mit",
"region:us"
] | stjiris | null | null | 2 | 28 | 2022-11-30T23:51:04 | ---
pretty_name: IRIS Legal Dataset
annotations_creators:
- automated
language_creators:
- found
language:
- pt
license:
- mit
multilinguality:
- monolingual
size_categories:
- 10K>n
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- text-scoring
- semantic-similarity-scoring
---


Work developed as part of [Project IRIS](https://www.inesc-id.pt/projects/PR07005/).
Thesis: [A Semantic Search System for Supremo Tribunal de Justiça](https://rufimelo99.github.io/SemanticSearchSystemForSTJ/)
# Portuguese Legal Sentences
A collection of legal sentence pairs from the Portuguese Supreme Court of Justice (Supremo Tribunal de Justiça).
This dataset is intended for semantic textual similarity (STS); each pair is scored on a 0-5 scale:
- Values from 0-1: random sentences across documents
- Values from 2-4: sentences from the same summary (implying some level of entailment)
- Values from 4-5: sentence pairs generated through OpenAI's text-davinci-003 ("Escreve por outras palavras:\n\Entrada:\n"+originalQuery + "\Saída: \n")
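When training a bi-encoder on these pairs (e.g. with a cosine-similarity loss), the 0-5 scores are usually rescaled to [0, 1]. A minimal sketch — the column names (`sentence1`, `sentence2`, `score`) are assumptions and should be checked against the actual split schema:

```python
def to_training_pair(example: dict) -> dict:
    """Rescale a 0-5 STS score to the [0, 1] range expected by cosine-similarity losses."""
    return {
        "texts": (example["sentence1"], example["sentence2"]),
        "label": example["score"] / 5.0,
    }

# Hypothetical pair; the field names are assumptions about the split schema.
pair = to_training_pair(
    {"sentence1": "O tribunal decidiu.", "sentence2": "A decisão do tribunal.", "score": 4.0}
)
print(pair["label"])  # 0.8
```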
### Contributions
[@rufimelo99](https://github.com/rufimelo99)
If you use this work, please cite:
```bibtex
@inproceedings{MeloSemantic,
author = {Melo, Rui and Santos, Professor Pedro Alexandre and Dias, Professor Jo{\~a}o},
title = {A {Semantic} {Search} {System} for {Supremo} {Tribunal} de {Justi}{\c c}a},
}
``` | 1,544 | [
[
-0.00946044921875,
-0.0198516845703125,
0.0546875,
0.00780487060546875,
-0.033447265625,
-0.0293121337890625,
-0.00867462158203125,
-0.0036869049072265625,
0.042144775390625,
0.058868408203125,
-0.041229248046875,
-0.06878662109375,
-0.032989501953125,
0.044... |
tushar117/xalign | 2023-01-01T20:39:30.000Z | [
"task_categories:table-to-text",
"task_ids:rdf-to-text",
"annotations_creators:found",
"language_creators:crowdsourced",
"multilinguality:multilingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"language:as",
"language:bn",
"language:gu",
"language:hi",
"language:kn",
"langua... | tushar117 | It consists of an extensive collection of a high quality cross-lingual fact-to-text dataset where facts are in English
and corresponding sentences are in native language for person biographies. The Train & validation splits are created
using distant supervision methods and Test data is generated through human annotations. | @article{abhishek2022xalign,
title={XAlign: Cross-lingual Fact-to-Text Alignment and Generation for Low-Resource Languages},
author={Abhishek, Tushar and Sagare, Shivprasad and Singh, Bhavyajeet and Sharma, Anubhav and Gupta, Manish and Varma, Vasudeva},
journal={arXiv preprint arXiv:2202.00291},
year={2022}
} | 1 | 28 | 2022-12-29T06:50:10 | ---
annotations_creators:
- found
configs:
- release_v1
language:
- as
- bn
- gu
- hi
- kn
- ml
- mr
- or
- pa
- ta
- te
- en
language_creators:
- crowdsourced
license:
- cc-by-nc-sa-4.0
- mit
multilinguality:
- multilingual
paperswithcode_id: xalign
pretty_name: 'XAlign'
size_categories:
- 100K<n<1M
source_datasets:
- original
tags:
- xalign
- NLG
- low-resource
- LRL
task_categories:
- table-to-text
task_ids:
- rdf-to-text
---
# Dataset Card for XAlign
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Known Limitations](#known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [XAlign homepage](https://github.com/tushar117/XAlign)
- **Repository:** [XAlign repo](https://github.com/tushar117/XAlign)
- **Paper:** [XAlign: Cross-lingual Fact-to-Text Alignment and Generation for Low-Resource Languages](https://arxiv.org/abs/2202.00291)
- **Leaderboard:** [Papers With Code Leaderboard for XAlign](https://paperswithcode.com/sota/data-to-text-generation-on-xalign)
- **Point of Contact:** [Tushar Abhishek](mailto:tushar.abhishek@research.iiit.ac.in)
### Dataset Summary
XAlign is an extensive, high-quality cross-lingual fact-to-text dataset for person biographies, in which the facts are in English and the corresponding sentence is in a native language. The train and validation splits are created using distant supervision methods, and the test data is generated through human annotation.
### Supported Tasks and Leaderboards
- 'Data-to-text Generation': the XAlign dataset can be used to train cross-lingual data-to-text generation models. Model performance can be measured with any text-generation evaluation metric by averaging across all languages. [Sagare et al. (2022)](https://arxiv.org/abs/2209.11252) reported an average BLEU score of 29.27 and an average METEOR score of 53.64 over the test set.
- 'Relation Extraction': XAlign could also be used for cross-lingual relation extraction, where relations in English are extracted from the associated native-language sentence.
See [Papers With Code Leaderboard](https://paperswithcode.com/sota/data-to-text-generation-on-xalign) for more models.
### Languages
Assamese (as), Bengali (bn), Gujarati (gu), Hindi (hi), Kannada (kn), Malayalam (ml), Marathi (mr), Oriya (or), Punjabi (pa), Tamil (ta), Telugu (te), and English (en).
## Dataset Structure
### Data Fields
Each record consist of the following entries:
- `sentence` (string): native-language Wikipedia sentence (non-native-language strings were removed).
- `facts` (List[Dict]): list of facts associated with the sentence, where each fact is stored as a dictionary.
- `language` (string): language identifier.
The `facts` key contains a list of facts, where each fact is stored as a dictionary. A single record within the fact list contains the following entries:
- `subject` (string): central entity.
- `object` (string): entity or a piece of information about the subject.
- `predicate` (string): relationship that connects the subject and the object.
- `qualifiers` (List[Dict]): additional information about the fact, stored as a list of qualifiers, where each record is a dictionary with two keys: `qualifier_predicate` for the qualifier's property and `qualifier_object` for the value of the qualifier's predicate.
### Data Instances
Example from English
```
{
"sentence": "Mark Paul Briers (born 21 April 1968) is a former English cricketer.",
"facts": [
{
"subject": "Mark Briers",
"predicate": "date of birth",
"object": "21 April 1968",
"qualifiers": []
},
{
"subject": "Mark Briers",
"predicate": "occupation",
"object": "cricketer",
"qualifiers": []
},
{
"subject": "Mark Briers",
"predicate": "country of citizenship",
"object": "United Kingdom",
"qualifiers": []
}
],
"language": "en"
}
```
Example from one of the low-resource languages (i.e. Hindi)
```
{
"sentence": "बोरिस पास्तेरनाक १९५८ में साहित्य के क्षेत्र में नोबेल पुरस्कार विजेता रहे हैं।",
"facts": [
{
"subject": "Boris Pasternak",
"predicate": "nominated for",
"object": "Nobel Prize in Literature",
"qualifiers": [
{
"qualifier_predicate": "point in time",
"qualifier_object": "1958"
}
]
}
],
"language": "hi"
}
```
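For the cross-lingual data-to-text task, the English facts in records like the ones above are commonly linearized into a single input string for a seq-to-seq model. The tag scheme below (`<S>`, `<P>`, `<O>`, `<QP>`, `<QO>`) is an illustrative choice, not part of the dataset:

```python
def linearize_facts(facts: list[dict]) -> str:
    """Flatten XAlign fact dicts into a tagged '<S> ... <P> ... <O> ...' input string."""
    parts = []
    for fact in facts:
        triple = f"<S> {fact['subject']} <P> {fact['predicate']} <O> {fact['object']}"
        for q in fact.get("qualifiers", []):
            triple += f" <QP> {q['qualifier_predicate']} <QO> {q['qualifier_object']}"
        parts.append(triple)
    return " ".join(parts)

facts = [{"subject": "Mark Briers", "predicate": "date of birth",
          "object": "21 April 1968", "qualifiers": []}]
print(linearize_facts(facts))  # <S> Mark Briers <P> date of birth <O> 21 April 1968
```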
### Data Splits
The XAlign dataset has 3 splits: train, validation, and test. Below are the statistics of the dataset.
| Dataset splits | Number of Instances in Split |
| --- | --- |
| Train | 499155 |
| Validation | 55469 |
| Test | 7425 |
## Dataset Creation
### Curation Rationale
Most of the existing Data-to-Text datasets are available in English. Also, the structured Wikidata entries for person entities in low resource languages are minuscule in number compared to that in English. Thus, monolingual Data-to-Text for low resource languages suffers from data sparsity. XAlign dataset would be useful in creation of cross-lingual Data-to-Text generation systems that take a set of English facts as input and generates a sentence capturing the fact-semantics in the specified language.
### Source Data
#### Initial Data Collection and Normalization
The dataset creation process starts with an initial list of ~95K person entities selected from Wikidata, each of which has a link to a corresponding Wikipedia page in at least one of our 11 low-resource languages. This leads to a dataset where every instance is a tuple containing entityID, English Wikidata facts, language identifier, and Wikipedia URL for the entityID. The facts (in English) are extracted from the 20201221 Wikidata dump for each entity using the [Wikidata](https://query.wikidata.org) APIs. The facts are gathered only for the specified Wikidata property (or relation) types that capture the most useful factual information for person entities: WikibaseItem, Time, Quantity, and Monolingualtext. This leads to overall ~0.55M data instances across all 12 languages. Also, for each language, the sentences (along with section information) are extracted from the 20210520 Wikipedia XML dump using the pre-processing steps described [here](https://arxiv.org/abs/2202.00291).
For every (entity, language) pair, the pre-processed dataset contains a set of English Wikidata facts and a set of Wikipedia sentences in that language. To create the train and validation datasets, these are later passed through a two-stage automatic aligner, as proposed in [Abhishek et al. (2022)](https://arxiv.org/abs/2202.00291), to associate a sentence with a subset of facts.
#### Who are the source language producers?
The texts are extracted from Wikipedia and the facts are retrieved from Wikidata.
### Annotations
#### Annotation process
The manual annotation of the test dataset was done in two phases. In both phases, the annotators were presented with a (low-resource-language sentence, list of English facts) pair and asked to mark the facts present in the given sentence. There were also specific guidelines to ignore redundant facts, handle abbreviations, etc. More detailed annotation guidelines and the ethical statement are available [here](https://docs.google.com/document/d/1ucGlf-Jm1ywQ_Fjw9f2UqPeMWPlBnlZA46UY7KuZ0EE/edit). In the first phase, we got 60 instances labeled per language by a set of 8 expert annotators (trusted graduate students who understood the task very well). In phase 2, we selected 8 annotators per language from the [National Register of Translators](https://www.ntm.org.in/languages/english/nrtdb.aspx). We tested these annotators using the phase-1 data as a golden control set, and shortlisted up to 4 annotators per language who scored highest (on Kappa score against the golden annotations).
#### Who are the annotators?
Human annotators were selected appropriately (after screening) from [National Translation Mission](https://www.ntm.org.in) for Test set creation.
### Personal and Sensitive Information
The dataset does not involve collection or storage of any personally identifiable information or offensive information at any stage.
## Considerations for Using the Data
### Social Impact of Dataset
The purpose of this dataset is to help develop cross-lingual Data-to-Text generation systems that are vital in many downstream Natural Language Processing (NLP) applications like automated dialog systems, domain-specific chatbots, open domain question answering, authoring sports reports, etc. These systems will be useful for powering business applications like Wikipedia text generation given English Infoboxes, automated generation of non-English product descriptions using English product attributes, etc.
### Known Limitations
The XAlign dataset focuses only on person biographies, so systems developed on it might not generalize to other domains.
## Additional Information
### Dataset Curators
This dataset is collected by Tushar Abhishek, Shivprasad Sagare, Bhavyajeet Singh, Anubhav Sharma, Manish Gupta and Vasudeva Varma of Information Retrieval and Extraction Lab (IREL), Hyderabad, India. They released [scripts](https://github.com/tushar117/xalign) to collect and process the data into the Data-to-Text format.
### Licensing Information
The XAlign dataset is released under the [MIT License](https://github.com/tushar117/XAlign/blob/main/LICENSE).
### Citation Information
```
@article{abhishek2022xalign,
title={XAlign: Cross-lingual Fact-to-Text Alignment and Generation for Low-Resource Languages},
author={Abhishek, Tushar and Sagare, Shivprasad and Singh, Bhavyajeet and Sharma, Anubhav and Gupta, Manish and Varma, Vasudeva},
journal={arXiv preprint arXiv:2202.00291},
year={2022}
}
```
### Contributions
Thanks to [Tushar Abhishek](https://github.com/tushar117), [Shivprasad Sagare](https://github.com/ShivprasadSagare), [Bhavyajeet Singh](https://github.com/bhavyajeet), [Anubhav Sharma](https://github.com/anubhav-sharma13), [Manish Gupta](https://github.com/blitzprecision) and [Vasudeva Varma](mailto:vv@iiit.ac.in) for adding this dataset.
Additional thanks to the annotators from National Translation Mission for their crucial contributions to creation of the test dataset: Bhaswati Bhattacharya, Aditi Sarkar, Raghunandan B. S., Satish M., Rashmi G.Rao, Vidyarashmi PN, Neelima Bhide, Anand Bapat, Krishna Rao N V, Nagalakshmi DV, Aditya Bhardwaj
Vuppula, Nirupama Patel, Asir. T, Sneha Gupta, Dinesh Kumar, Jasmin Gilani, Vivek R, Sivaprasad S, Pranoy J, Ashutosh Bharadwaj, Balaji Venkateshwar, Vinkesh Bansal, Vaishnavi Udyavara, Ramandeep Singh, Khushi Goyal, Yashasvi LN Pasumarthy and Naren Akash. | 11,532 | [
[
-0.03009033203125,
-0.03997802734375,
0.00716400146484375,
-0.0035552978515625,
-0.0196075439453125,
-0.006622314453125,
-0.02764892578125,
-0.0285186767578125,
0.01947021484375,
0.01678466796875,
-0.059906005859375,
-0.056121826171875,
-0.0229644775390625,
... |
qwedsacf/ivypanda-essays | 2023-02-03T21:05:11.000Z | [
"region:us"
] | qwedsacf | null | null | 3 | 28 | 2023-01-23T00:37:04 | ---
# For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/datasetcard.md?plain=1
# Doc / guide: https://huggingface.co/docs/hub/datasets-cards
{}
---
# Ivypanda essays
## Dataset Description
- **Homepage:** https://laion.ai/
### Dataset Summary
This dataset contains essays from [ivypanda](https://ivypanda.com/essays/).
## Dataset Structure
### Data Fields
`TEXT`: The text of the essay.<br/>
`SOURCE`: A permalink to the ivypanda essay page
| 501 | [
[
-0.01245880126953125,
-0.0322265625,
0.022796630859375,
0.0147857666015625,
0.006526947021484375,
-0.0036029815673828125,
0.0234222412109375,
-0.0044708251953125,
0.04052734375,
0.053253173828125,
-0.05181884765625,
-0.056640625,
-0.025634765625,
0.003032684... |
Cohere/miracl-fr-corpus-22-12 | 2023-02-06T11:57:34.000Z | [
"task_categories:text-retrieval",
"task_ids:document-retrieval",
"annotations_creators:expert-generated",
"multilinguality:multilingual",
"language:fr",
"license:apache-2.0",
"region:us"
] | Cohere | null | null | 0 | 28 | 2023-01-31T06:02:06 | ---
annotations_creators:
- expert-generated
language:
- fr
multilinguality:
- multilingual
size_categories: []
source_datasets: []
tags: []
task_categories:
- text-retrieval
license:
- apache-2.0
task_ids:
- document-retrieval
---
# MIRACL (fr) embedded with cohere.ai `multilingual-22-12` encoder
We encoded the [MIRACL dataset](https://huggingface.co/miracl) using the [cohere.ai](https://txt.cohere.ai/multilingual/) `multilingual-22-12` embedding model.
The query embeddings can be found in [Cohere/miracl-fr-queries-22-12](https://huggingface.co/datasets/Cohere/miracl-fr-queries-22-12) and the corpus embeddings can be found in [Cohere/miracl-fr-corpus-22-12](https://huggingface.co/datasets/Cohere/miracl-fr-corpus-22-12).
For the original datasets, see [miracl/miracl](https://huggingface.co/datasets/miracl/miracl) and [miracl/miracl-corpus](https://huggingface.co/datasets/miracl/miracl-corpus).
Dataset info:
> MIRACL 🌍🙌🌏 (Multilingual Information Retrieval Across a Continuum of Languages) is a multilingual retrieval dataset that focuses on search across 18 different languages, which collectively encompass over three billion native speakers around the world.
>
> The corpus for each language is prepared from a Wikipedia dump, where we keep only the plain text and discard images, tables, etc. Each article is segmented into multiple passages using WikiExtractor based on natural discourse units (e.g., `\n\n` in the wiki markup). Each of these passages comprises a "document" or unit of retrieval. We preserve the Wikipedia article title of each passage.
## Embeddings
For each passage we compute embeddings of `title + " " + text` using our `multilingual-22-12` embedding model, a state-of-the-art model that works for semantic search in 100 languages. If you want to learn more about this model, have a look at [cohere.ai multilingual embedding model](https://txt.cohere.ai/multilingual/).
## Loading the dataset
In [miracl-fr-corpus-22-12](https://huggingface.co/datasets/Cohere/miracl-fr-corpus-22-12) we provide the corpus embeddings. Note, depending on the selected split, the respective files can be quite large.
You can either load the dataset like this:
```python
from datasets import load_dataset
docs = load_dataset(f"Cohere/miracl-fr-corpus-22-12", split="train")
```
Or you can also stream it without downloading it before:
```python
from datasets import load_dataset
docs = load_dataset(f"Cohere/miracl-fr-corpus-22-12", split="train", streaming=True)
for doc in docs:
docid = doc['docid']
title = doc['title']
text = doc['text']
emb = doc['emb']
```
## Search
Have a look at [miracl-fr-queries-22-12](https://huggingface.co/datasets/Cohere/miracl-fr-queries-22-12) where we provide the query embeddings for the MIRACL dataset.
To search the documents, you must use **dot-product** similarity: compare the query embedding against the document embeddings, either with a vector database (recommended) or by computing the dot products directly.
A full search example:
```python
# Attention! For large datasets, this requires a lot of memory to store
# all document embeddings and to compute the dot product scores.
# Only use this for smaller datasets. For large datasets, use a vector DB
from datasets import load_dataset
import torch
#Load documents + embeddings
docs = load_dataset(f"Cohere/miracl-fr-corpus-22-12", split="train")
doc_embeddings = torch.tensor(docs['emb'])
# Load queries
queries = load_dataset(f"Cohere/miracl-fr-queries-22-12", split="dev")
# Select the first query as example
qid = 0
query = queries[qid]
query_embedding = torch.tensor([query['emb']])  # embedding of the selected query, shape (1, dim)
# Compute dot score between query embedding and document embeddings
dot_scores = torch.mm(query_embedding, doc_embeddings.transpose(0, 1))
top_k = torch.topk(dot_scores, k=3)
# Print results
print("Query:", query['query'])
for doc_id in top_k.indices[0].tolist():
print(docs[doc_id]['title'])
print(docs[doc_id]['text'])
```
You can get embeddings for new queries using our API:
```python
#Run: pip install cohere
import cohere
co = cohere.Client(f"{api_key}") # You should add your cohere API Key here :))
texts = ['my search query']
response = co.embed(texts=texts, model='multilingual-22-12')
query_embedding = response.embeddings[0] # Get the embedding for the first text
```
## Performance
In the following table we compare the cohere multilingual-22-12 model with Elasticsearch version 8.6.0 lexical search (title and passage indexed as independent fields). Note that Elasticsearch doesn't support all languages that are part of the MIRACL dataset.
We compute nDCG@10 (a ranking-based metric), as well as hit@3: whether at least one relevant document appears in the top-3 results. We find that hit@3 is easier to interpret, as it presents the fraction of queries for which a relevant document is found among the top-3 results.
Note: MIRACL only annotated a small fraction of passages (10 per query) for relevancy. Especially for larger Wikipedias (like English), we often found many more relevant passages. This is known as annotation holes. Real nDCG@10 and hit@3 performance is likely higher than depicted.
| Model | cohere multilingual-22-12 nDCG@10 | cohere multilingual-22-12 hit@3 | ES 8.6.0 nDCG@10 | ES 8.6.0 hit@3 |
|---|---|---|---|---|
| miracl-ar | 64.2 | 75.2 | 46.8 | 56.2 |
| miracl-bn | 61.5 | 75.7 | 49.2 | 60.1 |
| miracl-de | 44.4 | 60.7 | 19.6 | 29.8 |
| miracl-en | 44.6 | 62.2 | 30.2 | 43.2 |
| miracl-es | 47.0 | 74.1 | 27.0 | 47.2 |
| miracl-fi | 63.7 | 76.2 | 51.4 | 61.6 |
| miracl-fr | 46.8 | 57.1 | 17.0 | 21.6 |
| miracl-hi | 50.7 | 62.9 | 41.0 | 48.9 |
| miracl-id | 44.8 | 63.8 | 39.2 | 54.7 |
| miracl-ru | 49.2 | 66.9 | 25.4 | 36.7 |
| **Avg** | 51.7 | 67.5 | 34.7 | 46.0 |
Further languages (not supported by Elasticsearch):
| Model | cohere multilingual-22-12 nDCG@10 | cohere multilingual-22-12 hit@3 |
|---|---|---|
| miracl-fa | 44.8 | 53.6 |
| miracl-ja | 49.0 | 61.0 |
| miracl-ko | 50.9 | 64.8 |
| miracl-sw | 61.4 | 74.5 |
| miracl-te | 67.8 | 72.3 |
| miracl-th | 60.2 | 71.9 |
| miracl-yo | 56.4 | 62.2 |
| miracl-zh | 43.8 | 56.5 |
| **Avg** | 54.3 | 64.6 |
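The hit@3 numbers above are simple to reproduce from a ranked result list — a minimal sketch on synthetic rankings:

```python
def hit_at_k(ranked_doc_ids: list[str], relevant_ids: set[str], k: int = 3) -> bool:
    """True if at least one relevant document appears in the top-k results."""
    return any(doc_id in relevant_ids for doc_id in ranked_doc_ids[:k])

# Toy run: hit@3 averaged over two queries with synthetic rankings.
results = {"q1": ["d9", "d2", "d7"], "q2": ["d4", "d5", "d6"]}
relevant = {"q1": {"d2"}, "q2": {"d8"}}
score = sum(hit_at_k(results[q], relevant[q]) for q in results) / len(results)
print(score)  # 0.5
```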
| 6,103 | [
[
-0.045501708984375,
-0.058135986328125,
0.0226287841796875,
0.0187225341796875,
-0.004268646240234375,
-0.004833221435546875,
-0.0222625732421875,
-0.036102294921875,
0.03912353515625,
0.01544189453125,
-0.0399169921875,
-0.0723876953125,
-0.05047607421875,
... |
GBaker/MedQA-USMLE-4-options-hf-MPNet-IR | 2023-03-20T21:53:18.000Z | [
"region:us"
] | GBaker | null | null | 3 | 28 | 2023-03-20T21:53:01 | ---
dataset_info:
features:
- name: id
dtype: string
- name: sent1
dtype: string
- name: sent2
dtype: string
- name: ending0
dtype: string
- name: ending1
dtype: string
- name: ending2
dtype: string
- name: ending3
dtype: string
- name: label
dtype: int64
splits:
- name: train
num_bytes: 14052739
num_examples: 10178
- name: validation
num_bytes: 1754234
num_examples: 1272
- name: test
num_bytes: 1780124
num_examples: 1273
download_size: 10209487
dataset_size: 17587097
---
# Dataset Card for "MedQA-USMLE-4-options-hf-MPNet-IR"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 750 | [
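Records in this layout plug into a standard multiple-choice head: each example expands into four `(context, ending)` pairs that are scored jointly, with `label` indexing the correct one. A small sketch over a synthetic record using the field names above:

```python
def to_choice_pairs(example: dict) -> list[tuple[str, str]]:
    """Expand one record into the four (context, ending) pairs a multiple-choice model scores."""
    context = f"{example['sent1']} {example['sent2']}".strip()
    return [(context, example[f"ending{i}"]) for i in range(4)]

# Synthetic record, using the field names documented above.
record = {"sent1": "A 45-year-old presents with chest pain.",
          "sent2": "What is the next best step?",
          "ending0": "ECG", "ending1": "CT scan",
          "ending2": "Discharge", "ending3": "Antibiotics",
          "label": 0}
pairs = to_choice_pairs(record)
print(len(pairs), pairs[record["label"]][1])  # 4 ECG
```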
[
-0.043975830078125,
0.00461578369140625,
0.0225372314453125,
-0.00429534912109375,
-0.0198822021484375,
0.007213592529296875,
0.029449462890625,
0.0085296630859375,
0.050750732421875,
0.039215087890625,
-0.064208984375,
-0.044830322265625,
-0.035400390625,
0... |
Muennighoff/quixbugs | 2023-03-26T16:15:28.000Z | [
"region:us"
] | Muennighoff | null | null | 0 | 28 | 2023-03-26T13:58:52 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
witchling22/ada_002_embeddings | 2023-04-24T20:38:07.000Z | [
"region:us"
] | witchling22 | null | null | 0 | 28 | 2023-04-24T20:37:53 | ---
dataset_info:
features:
- name: context
dtype: string
- name: embeddings
sequence: float64
splits:
- name: train
num_bytes: 199998382
num_examples: 15704
download_size: 147134493
dataset_size: 199998382
---
# Dataset Card for "ada_002_embeddings"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 413 | [
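With the `embeddings` column stored as plain `float64` sequences, nearest-neighbour lookups over rows reduce to cosine similarity (or a dot product for normalized vectors). A dependency-free sketch on synthetic stand-in vectors:

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Synthetic stand-ins for two rows' `embeddings` values (real vectors are much longer).
emb_a = [0.1, 0.3, 0.5]
emb_b = [0.2, 0.6, 1.0]
print(round(cosine(emb_a, emb_b), 4))  # 1.0 — parallel vectors
```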
[
-0.0308074951171875,
-0.03179931640625,
0.0199432373046875,
0.014007568359375,
-0.0087432861328125,
-0.007091522216796875,
0.036285400390625,
-0.00283050537109375,
0.06915283203125,
0.01971435546875,
-0.038787841796875,
-0.064697265625,
-0.042205810546875,
-... |
EleutherAI/fever | 2023-04-30T00:09:28.000Z | [
"task_categories:text-classification",
"annotations_creators:crowdsourced",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:extended|wikipedia",
"language:en",
"license:cc-by-sa-3.0",
"license:gpl-3.0",
"knowledge-verification",
"region:us"... | EleutherAI | null | null | 1 | 28 | 2023-04-30T00:07:16 | ---
language:
- en
paperswithcode_id: fever
annotations_creators:
- crowdsourced
language_creators:
- found
license:
- cc-by-sa-3.0
- gpl-3.0
multilinguality:
- monolingual
pretty_name: FEVER
size_categories:
- 100K<n<1M
source_datasets:
- extended|wikipedia
task_categories:
- text-classification
task_ids: []
tags:
- knowledge-verification
dataset_info:
- config_name: v1.0
features:
- name: id
dtype: int32
- name: label
dtype: string
- name: claim
dtype: string
- name: evidence_annotation_id
dtype: int32
- name: evidence_id
dtype: int32
- name: evidence_wiki_url
dtype: string
- name: evidence_sentence_id
dtype: int32
splits:
- name: train
num_bytes: 24147163
num_examples: 263822
- name: dev
num_bytes: 2696375
num_examples: 28625
- name: paper_dev
num_bytes: 1348943
num_examples: 14475
- name: paper_test
num_bytes: 1347432
num_examples: 14150
download_size: 44853972
dataset_size: 40043693
- config_name: v2.0
features:
- name: id
dtype: int32
- name: label
dtype: string
- name: claim
dtype: string
- name: evidence_annotation_id
dtype: int32
- name: evidence_id
dtype: int32
- name: evidence_wiki_url
dtype: string
- name: evidence_sentence_id
dtype: int32
splits:
- name: validation
num_bytes: 306243
num_examples: 2384
download_size: 392466
dataset_size: 306243
- config_name: wiki_pages
features:
- name: id
dtype: string
- name: text
dtype: string
- name: lines
dtype: string
splits:
- name: wikipedia_pages
num_bytes: 7254115038
num_examples: 5416537
download_size: 1713485474
dataset_size: 7254115038
---
# Dataset Card for "fever"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://fever.ai/](https://fever.ai/)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Dataset Summary
With billions of individual pages on the web providing information on almost every conceivable topic, we should have
the ability to collect facts that answer almost every conceivable question. However, only a small fraction of this
information is contained in structured sources (Wikidata, Freebase, etc.) – we are therefore limited by our ability to
transform free-form text to structured knowledge. There is, however, another problem that has become the focus of a lot
of recent research and media coverage: false information coming from unreliable sources.
The FEVER workshops are a venue for work in verifiable knowledge extraction and to stimulate progress in this direction.
- FEVER Dataset: FEVER (Fact Extraction and VERification) consists of 185,445 claims generated by altering sentences
extracted from Wikipedia and subsequently verified without knowledge of the sentence they were derived from. The claims
are classified as Supported, Refuted or NotEnoughInfo. For the first two classes, the annotators also recorded the
sentence(s) forming the necessary evidence for their judgment.
- FEVER 2.0 Adversarial Attacks Dataset: The FEVER 2.0 Dataset consists of 1174 claims created by the submissions of
participants in the Breaker phase of the 2019 shared task. Participants (Breakers) were tasked with generating
adversarial examples that induce classification errors for the existing systems. Breakers submitted a dataset of up to
1000 instances, with an equal number of instances for each of the three classes (Supported, Refuted, NotEnoughInfo). Only
novel claims (i.e. not contained in the original FEVER dataset) were considered as valid entries to the shared task.
The submissions were then manually evaluated for correctness (grammatical, appropriately labeled, and meeting the FEVER
annotation guideline requirements).
### Supported Tasks and Leaderboards
The task is verification of textual claims against textual sources.
When compared to textual entailment (TE)/natural language inference, the key difference is that in these tasks the
passage to verify each claim is given, and in recent years it typically consists of a single sentence, while in
verification systems it is retrieved from a large set of documents in order to form the evidence.
### Languages
The dataset is in English.
## Dataset Structure
### Data Instances
#### v1.0
- **Size of downloaded dataset files:** 44.86 MB
- **Size of the generated dataset:** 40.05 MB
- **Total amount of disk used:** 84.89 MB
An example of 'train' looks as follows.
```
{'claim': 'Nikolaj Coster-Waldau worked with the Fox Broadcasting Company.',
'evidence_wiki_url': 'Nikolaj_Coster-Waldau',
'label': 'SUPPORTS',
'id': 75397,
'evidence_id': 104971,
'evidence_sentence_id': 7,
'evidence_annotation_id': 92206}
```
#### v2.0
- **Size of downloaded dataset files:** 0.39 MB
- **Size of the generated dataset:** 0.30 MB
- **Total amount of disk used:** 0.70 MB
#### wiki_pages
- **Size of downloaded dataset files:** 1.71 GB
- **Size of the generated dataset:** 7.25 GB
- **Total amount of disk used:** 8.97 GB
An example of 'wikipedia_pages' looks as follows.
```
{'text': 'The following are the football -LRB- soccer -RRB- events of the year 1928 throughout the world . ',
'lines': '0\tThe following are the football -LRB- soccer -RRB- events of the year 1928 throughout the world .\n1\t',
'id': '1928_in_association_football'}
```
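The `lines` field packs a page's sentences into a single string, one `sentence_id<TAB>text` pair per newline, as the example above shows. A minimal sketch of unpacking it (the helper name is ours, not part of the dataset):

```python
def parse_wiki_lines(lines):
    """Split the tab-separated `lines` field into {sentence_id: sentence_text}."""
    sentences = {}
    for row in lines.split("\n"):
        sent_id, _, text = row.partition("\t")
        if sent_id.isdigit() and text.strip():  # skip empty trailing sentences
            sentences[int(sent_id)] = text
    return sentences

page_lines = (
    "0\tThe following are the football -LRB- soccer -RRB- events "
    "of the year 1928 throughout the world .\n1\t"
)
print(parse_wiki_lines(page_lines))
```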
### Data Fields
The data fields are the same among all splits.
#### v1.0
- `id`: a `int32` feature.
- `label`: a `string` feature.
- `claim`: a `string` feature.
- `evidence_annotation_id`: a `int32` feature.
- `evidence_id`: a `int32` feature.
- `evidence_wiki_url`: a `string` feature.
- `evidence_sentence_id`: a `int32` feature.
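Note that evidence is flattened: the same claim `id` can appear in several rows, one per evidence sentence. A small illustrative sketch of regrouping rows by claim (the helper and the second sample row are ours):

```python
from collections import defaultdict

def group_evidence(rows):
    """Map claim id -> list of (evidence_wiki_url, evidence_sentence_id)."""
    by_claim = defaultdict(list)
    for r in rows:
        by_claim[r["id"]].append((r["evidence_wiki_url"], r["evidence_sentence_id"]))
    return dict(by_claim)

rows = [
    {"id": 75397, "evidence_wiki_url": "Nikolaj_Coster-Waldau", "evidence_sentence_id": 7},
    {"id": 75397, "evidence_wiki_url": "Fox_Broadcasting_Company", "evidence_sentence_id": 1},
]
print(group_evidence(rows))
```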
#### v2.0
- `id`: a `int32` feature.
- `label`: a `string` feature.
- `claim`: a `string` feature.
- `evidence_annotation_id`: a `int32` feature.
- `evidence_id`: a `int32` feature.
- `evidence_wiki_url`: a `string` feature.
- `evidence_sentence_id`: a `int32` feature.
#### wiki_pages
- `id`: a `string` feature.
- `text`: a `string` feature.
- `lines`: a `string` feature.
### Data Splits
#### v1.0
| | train | dev | paper_dev | paper_test |
|------|-------:|------:|----------:|-----------:|
| v1.0 | 311431 | 37566 | 18999 | 18567 |
#### v2.0
| | validation |
|------|-----------:|
| v2.0 | 2384 |
#### wiki_pages
| | wikipedia_pages |
|------------|----------------:|
| wiki_pages | 5416537 |
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
FEVER license:
```
These data annotations incorporate material from Wikipedia, which is licensed pursuant to the Wikipedia Copyright Policy. These annotations are made available under the license terms described on the applicable Wikipedia article pages, or, where Wikipedia license terms are unavailable, under the Creative Commons Attribution-ShareAlike License (version 3.0), available at http://creativecommons.org/licenses/by-sa/3.0/ (collectively, the "License Terms"). You may not use these files except in compliance with the applicable License Terms.
```
### Citation Information
If you use "FEVER Dataset", please cite:
```bibtex
@inproceedings{Thorne18Fever,
author = {Thorne, James and Vlachos, Andreas and Christodoulopoulos, Christos and Mittal, Arpit},
title = {{FEVER}: a Large-scale Dataset for Fact Extraction and {VERification}},
booktitle = {NAACL-HLT},
year = {2018}
}
```
If you use "FEVER 2.0 Adversarial Attacks Dataset", please cite:
```bibtex
@inproceedings{Thorne19FEVER2,
author = {Thorne, James and Vlachos, Andreas and Cocarascu, Oana and Christodoulopoulos, Christos and Mittal, Arpit},
title = {The {FEVER2.0} Shared Task},
booktitle = {Proceedings of the Second Workshop on {Fact Extraction and VERification (FEVER)}},
year = {2019}
}
```
### Contributions
Thanks to [@thomwolf](https://github.com/thomwolf), [@lhoestq](https://github.com/lhoestq),
[@mariamabarham](https://github.com/mariamabarham), [@lewtun](https://github.com/lewtun),
[@albertvillanova](https://github.com/albertvillanova) for adding this dataset.
FreedomIntelligence/huatuo_knowledge_graph_qa | 2023-07-07T08:46:58.000Z | [
"task_categories:text-generation",
"size_categories:100K<n<1M",
"language:zh",
"license:apache-2.0",
"medical",
"arxiv:2305.01526",
"region:us"
] | FreedomIntelligence | null | null | 16 | 28 | 2023-05-06T06:35:38 | ---
license: apache-2.0
task_categories:
- text-generation
language:
- zh
tags:
- medical
size_categories:
- 100K<n<1M
---
# Dataset Card for Huatuo_knowledge_graph_qa
## Dataset Description
- **Homepage: https://www.huatuogpt.cn/**
- **Repository: https://github.com/FreedomIntelligence/HuatuoGPT**
- **Paper: https://arxiv.org/abs/2305.01526**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
We built this QA dataset from a medical knowledge graph. It contains 798,444 examples in total; the questions are constructed from templates, and the answers are the contents of the corresponding knowledge-graph entries.
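As a toy illustration of the template-based construction described above (the relation names and templates below are invented for illustration; the actual templates used by the authors are not published in this card):

```python
# Hypothetical relation -> question templates (for illustration only).
TEMPLATES = {
    "症状": "{entity}有哪些症状?",
    "治疗": "{entity}应该如何治疗?",
}

def build_qa(entity, relation, answer):
    """Fill a question template and pair it with the knowledge-graph entry text."""
    return {"question": TEMPLATES[relation].format(entity=entity), "answer": answer}

print(build_qa("感冒", "症状", "鼻塞、流涕、咽痛"))
```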
## Dataset Creation
### Source Data
https://cpubmed.openi.org.cn/graph/wiki
https://github.com/zhihao-chen/QASystemOnMedicalGraph
https://github.com/baiyang2464/chatbot-base-on-Knowledge-Graph
## Citation
```
@misc{li2023huatuo26m,
title={Huatuo-26M, a Large-scale Chinese Medical QA Dataset},
author={Jianquan Li and Xidong Wang and Xiangbo Wu and Zhiyi Zhang and Xiaolong Xu and Jie Fu and Prayag Tiwari and Xiang Wan and Benyou Wang},
year={2023},
eprint={2305.01526},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
under-tree/prepared-yagpt | 2023-05-18T12:26:50.000Z | [
"task_categories:conversational",
"task_categories:text-generation",
"size_categories:10K<n<100K",
"language:ru",
"region:us"
] | under-tree | null | null | 1 | 28 | 2023-05-18T12:17:21 | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 42680359.78397168
num_examples: 53550
- name: test
num_bytes: 7532625.216028317
num_examples: 9451
download_size: 25066987
dataset_size: 50212985
task_categories:
- conversational
- text-generation
language:
- ru
pretty_name: Dialogue Dataset for YAGPT ChatBot
size_categories:
- 10K<n<100K
---
# Dataset Card for "prepared-yagpt"
## Short Description
This dataset is intended for training chatbots in Russian.
It consists of a large number of dialogues that allow you to train your model to answer user prompts.
## Notes
1. Special tokens
- history, speaker1, speaker2 (the history can optionally be removed, i.e. replaced with an empty string)
2. Dataset is based on
- [Matreshka](https://huggingface.co/datasets/zjkarina/matreshka)
- [Yandex-Q](https://huggingface.co/datasets/its5Q/yandex-q)
- [Diasum](https://huggingface.co/datasets/bragovo/diasum)
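As a toy illustration of how the special tokens above could be used to serialize one exchange (the exact token spelling and separators used in the dataset are not specified in this card, so this format is an assumption):

```python
def format_dialogue(history, user_turn, bot_turn):
    """Serialize one exchange with the history/speaker1/speaker2 tokens."""
    parts = []
    if history:  # the history part can be dropped, i.e. replaced with an empty string
        parts.append("<history>" + history)
    parts.append("<speaker1>" + user_turn)
    parts.append("<speaker2>" + bot_turn)
    return "".join(parts)

print(format_dialogue("", "Привет!", "Здравствуйте! Чем могу помочь?"))
```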
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
edarchimbaud/earnings-estimate-stocks | 2023-10-29T23:10:52.000Z | [
"region:us"
] | edarchimbaud | null | null | 1 | 28 | 2023-05-19T12:04:48 | ---
dataset_info:
features:
- name: symbol
dtype: string
- name: date
dtype: string
- name: current_qtr
dtype: string
- name: no_of_analysts_current_qtr
dtype: int64
- name: next_qtr
dtype: string
- name: no_of_analysts_next_qtr
dtype: int64
- name: current_year
dtype: int64
- name: no_of_analysts_current_year
dtype: int64
- name: next_year
dtype: int64
- name: no_of_analysts_next_year
dtype: int64
- name: avg_estimate_current_qtr
dtype: float64
- name: avg_estimate_next_qtr
dtype: float64
- name: avg_estimate_current_year
dtype: float64
- name: avg_estimate_next_year
dtype: float64
- name: low_estimate_current_qtr
dtype: float64
- name: low_estimate_next_qtr
dtype: float64
- name: low_estimate_current_year
dtype: float64
- name: low_estimate_next_year
dtype: float64
- name: high_estimate_current_qtr
dtype: float64
- name: high_estimate_next_qtr
dtype: float64
- name: high_estimate_current_year
dtype: float64
- name: high_estimate_next_year
dtype: float64
- name: year_ago_eps_current_qtr
dtype: float64
- name: year_ago_eps_next_qtr
dtype: float64
- name: year_ago_eps_current_year
dtype: float64
- name: year_ago_eps_next_year
dtype: float64
splits:
- name: train
num_bytes: 4920329
num_examples: 22195
download_size: 630263
dataset_size: 4920329
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "earnings-estimate-sp500"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Repository:** [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Dataset Summary
The earnings-estimate-sp500 dataset provides earnings estimate data for companies in the S&P 500 index.
### Supported Tasks and Leaderboards
The dataset can be used to analyze earnings estimates for systematic trading or financial analysis tasks. The dataset does not specify any associated leaderboards.
### Languages
[N/A]
## Dataset Structure
### Data Instances
[N/A]
### Data Fields
The dataset contains the following fields:
- symbol (string): A string representing the ticker symbol or abbreviation used to identify the company.
- date (string): The date associated with the earnings estimate data.
- current_qtr (string): The current quarter.
- no_of_analysts_current_qtr (int64): The number of analysts providing estimates for the current quarter.
- next_qtr (string): The next quarter.
- no_of_analysts_next_qtr (int64): The number of analysts providing estimates for the next quarter.
- current_year (int64): The current year.
- no_of_analysts_current_year (int64): The number of analysts providing estimates for the current year.
- next_year (int64): The next year.
- no_of_analysts_next_year (int64): The number of analysts providing estimates for the next year.
- avg_estimate_current_qtr (float64): The average estimate for the current quarter.
- avg_estimate_next_qtr (float64): The average estimate for the next quarter.
- avg_estimate_current_year (float64): The average estimate for the current year.
- avg_estimate_next_year (float64): The average estimate for the next year.
- low_estimate_current_qtr (float64): The low estimate for the current quarter.
- low_estimate_next_qtr (float64): The low estimate for the next quarter.
- low_estimate_current_year (float64): The low estimate for the current year.
- low_estimate_next_year (float64): The low estimate for the next year.
- high_estimate_current_qtr (float64): The high estimate for the current quarter.
- high_estimate_next_qtr (float64): The high estimate for the next quarter.
- high_estimate_current_year (float64): The high estimate for the current year.
- high_estimate_next_year (float64): The high estimate for the next year.
- year_ago_eps_current_qtr (float64): The EPS reported in the year-ago period corresponding to the current quarter.
- year_ago_eps_next_qtr (float64): The EPS reported in the year-ago period corresponding to the next quarter.
- year_ago_eps_current_year (float64): The EPS reported in the prior year, corresponding to the current year.
- year_ago_eps_next_year (float64): The EPS reported in the prior year, corresponding to the next year.
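As an example of a quantity you can derive from these fields, the average estimate and the year-ago EPS together give an implied year-over-year growth rate (the helper name is ours):

```python
def implied_eps_growth(row):
    """Implied year-over-year EPS growth for the current quarter, or None."""
    est = row.get("avg_estimate_current_qtr")
    base = row.get("year_ago_eps_current_qtr")
    if est is None or base is None or base == 0:
        return None
    return (est - base) / abs(base)

row = {"avg_estimate_current_qtr": 1.32, "year_ago_eps_current_qtr": 1.20}
print(implied_eps_growth(row))  # roughly 0.10, i.e. ~10% implied growth
```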
### Data Splits
The dataset consists of a single split, called "train."
## Additional Information
### Dataset Curators
This dataset does not specify any specific curators.
### Licensing Information
The earnings-estimate-sp500 dataset is licensed under the MIT License.
### Citation Information
> https://edarchimbaud.substack.com, earnings-estimate-sp500 dataset, GitHub repository, https://github.com/edarchimbaud
### Contributions
Thanks to [@edarchimbaud](https://github.com/edarchimbaud) for adding this dataset.
edarchimbaud/eps-revisions-stocks | 2023-10-29T23:11:35.000Z | [
"region:us"
] | edarchimbaud | null | null | 0 | 28 | 2023-05-19T14:23:43 | ---
dataset_info:
features:
- name: symbol
dtype: string
- name: date
dtype: string
- name: current_qtr
dtype: string
- name: up_last_7_days_current_qtr
dtype: float64
- name: next_qtr
dtype: string
- name: up_last_7_days_next_qtr
dtype: float64
- name: current_year
dtype: int64
- name: up_last_7_days_current_year
dtype: float64
- name: next_year
dtype: int64
- name: up_last_7_days_next_year
dtype: float64
- name: up_last_30_days_current_qtr
dtype: float64
- name: up_last_30_days_next_qtr
dtype: float64
- name: up_last_30_days_current_year
dtype: float64
- name: up_last_30_days_next_year
dtype: float64
- name: down_last_7_days_current_qtr
dtype: 'null'
- name: down_last_7_days_next_qtr
dtype: 'null'
- name: down_last_7_days_current_year
dtype: 'null'
- name: down_last_7_days_next_year
dtype: 'null'
- name: down_last_30_days_current_qtr
dtype: float64
- name: down_last_30_days_next_qtr
dtype: float64
- name: down_last_30_days_current_year
dtype: float64
- name: down_last_30_days_next_year
dtype: float64
splits:
- name: train
num_bytes: 3207096
num_examples: 20210
download_size: 263892
dataset_size: 3207096
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "eps-revisions-sp500"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://edarchimbaud.substack.com
- **Repository:** https://github.com/edarchimbaud
- **Point of Contact:** contact@edarchimbaud.com
### Dataset Summary
The eps-revisions-sp500 dataset provides information on earnings-per-share (EPS) revisions for companies in the S&P 500 index.
### Supported Tasks and Leaderboards
The dataset can be used to analyze EPS revisions and their impact on the performance of companies in the S&P 500 index. It does not specify any particular leaderboard or evaluation metric.
### Languages
[N/A]
## Dataset Structure
### Data Instances
[N/A]
### Data Fields
- symbol (string): A string representing the ticker symbol or abbreviation used to identify the company.
- date (string): A string indicating the date of the recorded data.
- current_qtr (string): A string representing the current quarter.
- up_last_7_days_current_qtr (float64): The number of upward EPS revisions for the current quarter over the last 7 days.
- next_qtr (string): A string representing the next quarter.
- up_last_7_days_next_qtr (float64): The number of upward EPS revisions for the next quarter over the last 7 days.
- current_year (int64): An integer representing the current year.
- up_last_7_days_current_year (float64): The number of upward EPS revisions for the current year over the last 7 days.
- next_year (int64): An integer representing the next year.
- up_last_7_days_next_year (float64): The number of upward EPS revisions for the next year over the last 7 days.
- up_last_30_days_current_qtr (float64): The number of upward EPS revisions for the current quarter over the last 30 days.
- up_last_30_days_next_qtr (float64): The number of upward EPS revisions for the next quarter over the last 30 days.
- up_last_30_days_current_year (float64): The number of upward EPS revisions for the current year over the last 30 days.
- up_last_30_days_next_year (float64): The number of upward EPS revisions for the next year over the last 30 days.
- down_last_7_days_current_qtr (null): Always null; downward-revision counts over the last 7 days are not populated for the current quarter.
- down_last_7_days_next_qtr (null): Always null; downward-revision counts over the last 7 days are not populated for the next quarter.
- down_last_7_days_current_year (null): Always null; downward-revision counts over the last 7 days are not populated for the current year.
- down_last_7_days_next_year (null): Always null; downward-revision counts over the last 7 days are not populated for the next year.
- down_last_30_days_current_qtr (float64): The number of downward EPS revisions for the current quarter over the last 30 days.
- down_last_30_days_next_qtr (float64): The number of downward EPS revisions for the next quarter over the last 30 days.
- down_last_30_days_current_year (float64): The number of downward EPS revisions for the current year over the last 30 days.
- down_last_30_days_next_year (float64): The number of downward EPS revisions for the next year over the last 30 days.
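A common signal derived from these fields is net revision breadth, i.e. upward minus downward revisions over the same window (the helper name is ours; missing values are treated as zero):

```python
def net_revisions_30d(row, horizon="current_qtr"):
    """Upward minus downward EPS revisions over the last 30 days."""
    up = row.get("up_last_30_days_" + horizon) or 0
    down = row.get("down_last_30_days_" + horizon) or 0
    return up - down

row = {"up_last_30_days_current_qtr": 5.0, "down_last_30_days_current_qtr": 2.0}
print(net_revisions_30d(row))  # 3.0
```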
### Data Splits
A single split, called train.
## Dataset Creation
### Curation Rationale
The eps-revisions-sp500 dataset was created to provide information on EPS revisions for companies in the S&P 500 index.
### Source Data
#### Initial Data Collection and Normalization
The data was collected from reliable sources and normalized for consistency.
### Annotations
#### Annotation Process
[N/A]
#### Annotators
[N/A]
### Personal and Sensitive Information
[N/A]
## Considerations for Using the Data
### Social Impact of Dataset
[N/A]
### Discussion of Biases
[N/A]
### Other Known Limitations
[N/A]
## Additional Information
### Dataset Curators
The eps-revisions-sp500 dataset was collected by https://edarchimbaud.substack.com.
### Licensing Information
The eps-revisions-sp500 dataset is licensed under the MIT License.
### Citation Information
> https://edarchimbaud.substack.com, eps-revisions-sp500 dataset, GitHub repository, https://github.com/edarchimbaud
### Contributions
Thanks to [@edarchimbaud](https://github.com/edarchimbaud) for adding this dataset.
edarchimbaud/eps-trend-stocks | 2023-10-29T23:11:48.000Z | [
"region:us"
] | edarchimbaud | null | null | 1 | 28 | 2023-05-19T15:17:04 | ---
dataset_info:
features:
- name: symbol
dtype: string
- name: date
dtype: string
- name: current_qtr
dtype: string
- name: current_estimate_current_qtr
dtype: float64
- name: next_qtr
dtype: string
- name: current_estimate_next_qtr
dtype: float64
- name: current_year
dtype: int64
- name: current_estimate_current_year
dtype: float64
- name: next_year
dtype: int64
- name: current_estimate_next_year
dtype: float64
- name: 7_days_ago_current_qtr
dtype: float64
- name: 7_days_ago_next_qtr
dtype: float64
- name: 7_days_ago_current_year
dtype: float64
- name: 7_days_ago_next_year
dtype: float64
- name: 30_days_ago_current_qtr
dtype: float64
- name: 30_days_ago_next_qtr
dtype: float64
- name: 30_days_ago_current_year
dtype: float64
- name: 30_days_ago_next_year
dtype: float64
- name: 60_days_ago_current_qtr
dtype: float64
- name: 60_days_ago_next_qtr
dtype: float64
- name: 60_days_ago_current_year
dtype: float64
- name: 60_days_ago_next_year
dtype: float64
- name: 90_days_ago_current_qtr
dtype: float64
- name: 90_days_ago_next_qtr
dtype: float64
- name: 90_days_ago_current_year
dtype: float64
- name: 90_days_ago_next_year
dtype: float64
splits:
- name: train
num_bytes: 4467327
num_examples: 20197
download_size: 790067
dataset_size: 4467327
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "eps-trend-sp500"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Repository:** [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Dataset Summary
The "eps-trend-sp500" dataset contains earnings per share (EPS) trend data for companies in the S&P 500 index. It includes information about the EPS estimates for the current quarter, next quarter, current year, and next year, as well as estimates from 7 days ago, 30 days ago, 60 days ago, and 90 days ago.
### Supported Tasks and Leaderboards
The dataset can be used to analyze EPS trends and perform financial analysis tasks. It does not specify any associated leaderboards.
### Languages
The dataset does not specify any specific language.
## Dataset Structure
### Data Instances
The dataset consists of multiple data instances, where each instance represents the EPS trend data for a specific company and date.
### Data Fields
The dataset contains the following fields:
- symbol (string): A string representing the ticker symbol or abbreviation used to identify the company.
- date (string): The date associated with the EPS trend data.
- current_qtr (string): The current quarter.
- current_estimate_current_qtr (float64): The current estimate for the EPS in the current quarter.
- next_qtr (string): The next quarter.
- current_estimate_next_qtr (float64): The current estimate for the EPS in the next quarter.
- current_year (int64): The current year.
- current_estimate_current_year (float64): The current estimate for the EPS in the current year.
- next_year (int64): The next year.
- current_estimate_next_year (float64): The current estimate for the EPS in the next year.
- 7_days_ago_current_qtr (float64): The EPS estimate for the current quarter from 7 days ago.
- 7_days_ago_next_qtr (float64): The EPS estimate for the next quarter from 7 days ago.
- 7_days_ago_current_year (float64): The EPS estimate for the current year from 7 days ago.
- 7_days_ago_next_year (float64): The EPS estimate for the next year from 7 days ago.
- 30_days_ago_current_qtr (float64): The EPS estimate for the current quarter from 30 days ago.
- 30_days_ago_next_qtr (float64): The EPS estimate for the next quarter from 30 days ago.
- 30_days_ago_current_year (float64): The EPS estimate for the current year from 30 days ago.
- 30_days_ago_next_year (float64): The EPS estimate for the next year from 30 days ago.
- 60_days_ago_current_qtr (float64): The EPS estimate for the current quarter from 60 days ago.
- 60_days_ago_next_qtr (float64): The EPS estimate for the next quarter from 60 days ago.
- 60_days_ago_current_year (float64): The EPS estimate for the current year from 60 days ago.
- 60_days_ago_next_year (float64): The EPS estimate for the next year from 60 days ago.
- 90_days_ago_current_qtr (float64): The EPS estimate for the current quarter from 90 days ago.
- 90_days_ago_next_qtr (float64): The EPS estimate for the next quarter from 90 days ago.
- 90_days_ago_current_year (float64): The EPS estimate for the current year from 90 days ago.
- 90_days_ago_next_year (float64): The EPS estimate for the next year from 90 days ago.
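Comparing the current estimate with the estimate from 90 days ago gives a simple estimate-drift signal (the helper name is ours):

```python
def estimate_drift(row, horizon="current_qtr"):
    """Relative change of the consensus estimate versus 90 days ago, or None."""
    now = row.get("current_estimate_" + horizon)
    then = row.get("90_days_ago_" + horizon)
    if now is None or then is None or then == 0:
        return None
    return (now - then) / abs(then)

row = {"current_estimate_current_qtr": 2.2, "90_days_ago_current_qtr": 2.0}
print(estimate_drift(row))  # roughly 0.10 (estimates revised up ~10%)
```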
### Data Splits
The dataset consists of a single split, called "train."
## Additional Information
### Dataset Curators
The eps-trend-sp500 dataset was collected by https://edarchimbaud.substack.com.
### Licensing Information
The eps-trend-sp500 dataset is licensed under the MIT License.
### Citation Information
> https://edarchimbaud.substack.com, eps-trend-sp500 dataset, GitHub repository, https://github.com/edarchimbaud
### Contributions
Thanks to [@edarchimbaud](https://github.com/edarchimbaud) for adding this dataset.
edarchimbaud/revenue-estimate-stocks | 2023-10-29T23:13:09.000Z | [
"region:us"
] | edarchimbaud | null | null | 2 | 28 | 2023-05-19T15:34:56 | ---
dataset_info:
features:
- name: symbol
dtype: string
- name: date
dtype: string
- name: current_qtr
dtype: string
- name: no_of_analysts_current_qtr
dtype: int64
- name: next_qtr
dtype: string
- name: no_of_analysts_next_qtr
dtype: int64
- name: current_year
dtype: int64
- name: no_of_analysts_current_year
dtype: int64
- name: next_year
dtype: int64
- name: no_of_analysts_next_year
dtype: int64
- name: avg_estimate_current_qtr
dtype: string
- name: avg_estimate_next_qtr
dtype: string
- name: avg_estimate_current_year
dtype: string
- name: avg_estimate_next_year
dtype: string
- name: low_estimate_current_qtr
dtype: string
- name: low_estimate_next_qtr
dtype: string
- name: low_estimate_current_year
dtype: string
- name: low_estimate_next_year
dtype: string
- name: high_estimate_current_qtr
dtype: string
- name: high_estimate_next_qtr
dtype: string
- name: high_estimate_current_year
dtype: string
- name: high_estimate_next_year
dtype: string
- name: year_ago_sales_current_qtr
dtype: string
- name: year_ago_sales_next_qtr
dtype: string
- name: year_ago_sales_current_year
dtype: string
- name: year_ago_sales_next_year
dtype: string
- name: sales_growth_yearest_current_qtr
dtype: string
- name: sales_growth_yearest_next_qtr
dtype: string
- name: sales_growth_yearest_current_year
dtype: string
- name: sales_growth_yearest_next_year
dtype: string
splits:
- name: train
num_bytes: 5578147
num_examples: 19714
download_size: 737241
dataset_size: 5578147
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "revenue-estimate-sp500"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://edarchimbaud.substack.com
- **Repository:** https://github.com/edarchimbaud
- **Point of Contact:** contact@edarchimbaud.com
### Dataset Summary
The revenue-estimate-sp500 dataset provides revenue estimate data for companies in the S&P 500 index.
### Supported Tasks and Leaderboards
The dataset can be used to analyze and predict revenue estimates for companies in the S&P 500 index.
## Dataset Structure
### Data Instances
[N/A]
### Data Fields
- symbol (string): A string representing the ticker symbol or abbreviation used to identify the company.
- date (string): A string indicating the date of the recorded data.
- current_qtr (string): A string representing the current quarter.
- no_of_analysts_current_qtr (int64): An integer indicating the number of analysts providing estimates for the current quarter.
- next_qtr (string): A string representing the next quarter.
- no_of_analysts_next_qtr (int64): An integer indicating the number of analysts providing estimates for the next quarter.
- current_year (int64): An integer indicating the current year.
- no_of_analysts_current_year (int64): An integer indicating the number of analysts providing estimates for the current year.
- next_year (int64): An integer indicating the next year.
- no_of_analysts_next_year (int64): An integer indicating the number of analysts providing estimates for the next year.
- avg_estimate_current_qtr (string): A string representing the average estimate for the current quarter.
- avg_estimate_next_qtr (string): A string representing the average estimate for the next quarter.
- avg_estimate_current_year (string): A string representing the average estimate for the current year.
- avg_estimate_next_year (string): A string representing the average estimate for the next year.
- low_estimate_current_qtr (string): A string representing the low estimate for the current quarter.
- low_estimate_next_qtr (string): A string representing the low estimate for the next quarter.
- low_estimate_current_year (string): A string representing the low estimate for the current year.
- low_estimate_next_year (string): A string representing the low estimate for the next year.
- high_estimate_current_qtr (string): A string representing the high estimate for the current quarter.
- high_estimate_next_qtr (string): A string representing the high estimate for the next quarter.
- high_estimate_current_year (string): A string representing the high estimate for the current year.
- high_estimate_next_year (string): A string representing the high estimate for the next year.
- year_ago_sales_current_qtr (string): A string representing the year-ago sales for the current quarter.
- year_ago_sales_next_qtr (string): A string representing the year-ago sales for the next quarter.
- year_ago_sales_current_year (string): A string representing the year-ago sales for the current year.
- year_ago_sales_next_year (string): A string representing the year-ago sales for the next year.
- sales_growth_yearest_current_qtr (string): A string representing the sales growth estimate for the current quarter.
- sales_growth_yearest_next_qtr (string): A string representing the sales growth estimate for the next quarter.
- sales_growth_yearest_current_year (string): A string representing the sales growth estimate for the current year.
- sales_growth_yearest_next_year (string): A string representing the sales growth estimate for the next year.
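Since the estimate and sales fields above are stored as strings, a small helper can convert abbreviated figures into numbers. This is a hedged sketch: the `T`/`B`/`M`/`k` suffix convention is an assumption about the source formatting and is not documented by the dataset.

```python
def parse_abbreviated(value):
    """Convert strings like '1.2B' or '350M' to floats.
    The suffix convention is an assumption about the source data;
    unparseable placeholder values return None."""
    multipliers = {"T": 1e12, "B": 1e9, "M": 1e6, "k": 1e3}
    value = value.strip()
    if not value or value in {"N/A", "-"}:
        return None
    suffix = value[-1]
    if suffix in multipliers:
        return float(value[:-1]) * multipliers[suffix]
    return float(value)

print(parse_abbreviated("1.2B"))  # 1200000000.0
print(parse_abbreviated("350M"))  # 350000000.0
```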
### Data Splits
A single split, called train.
## Dataset Creation
### Curation Rationale
The revenue-estimate-sp500 dataset was created to provide revenue estimate data for companies in the S&P 500 index.
### Source Data
The data was collected and normalized from reliable sources.
## Additional Information
### Dataset Curators
The revenue-estimate-sp500 dataset was collected by https://edarchimbaud.substack.com.
### Licensing Information
The revenue-estimate-sp500 dataset is licensed under the MIT License.
### Citation Information
> https://edarchimbaud.substack.com, revenue-estimate-sp500 dataset, GitHub repository, https://github.com/edarchimbaud
### Contributions
Thanks to [@edarchimbaud](https://github.com/edarchimbaud) for adding this dataset. | 6,725 | [
[
-0.0212249755859375,
-0.03338623046875,
-0.004405975341796875,
0.0289306640625,
-0.013153076171875,
0.0144195556640625,
0.0103912353515625,
-0.025543212890625,
0.04229736328125,
0.0257110595703125,
-0.071044921875,
-0.039947509765625,
-0.01554107666015625,
0... |
fblgit/tree-of-knowledge | 2023-05-24T21:24:32.000Z | [
"license:lgpl-3.0",
"region:us"
] | fblgit | null | null | 4 | 28 | 2023-05-24T15:59:28 | ---
license: lgpl-3.0
---
# tree-of-knowledge-llm
ToK, aka Tree of Knowledge, for Large Language Models (LLMs). It's a novel dataset that encourages symbolic knowledge correlation through simple input and output prompts.
https://github.com/fblgit/tree-of-knowledge-llm
Experimentally, the dataset can be used for multiple purposes:
* Knowledge Extraction from a Model
* Fine Tuning a model with newer data
* Create Granular Domain Knowledge Sets
* Improve training performance
Syntax Example:
```
{
"instruction": "Describe energy",
"input": "",
  "output": "Energy AS ability TO do work OR cause change WITHIN system && forms of energy==[kinetic, potential, thermal, chemical, ... [TYPES]] && conservation of energy==law STATING energy CANNOT be created OR destroyed ONLY transformed BETWEEN forms && energy sources==[fossil fuels, nuclear, solar, wind, ... [EXAMPLES]] USED TO power human activities"
}
```
## Characteristics
* Introduces condensation masking with `...`
* Introduces hint keywords. example: `[ADVANTAGES]`, `[CHARACTERISTICS]`, `[RULES]`.
* Introduces directional keywords. example: `AS`, `AND`, `IN`, `BETWEEN`, `RANGING`.
* Introduces approach keywords. example: `NOTATED`, `PREDICTING`, `CALCULATED`
* Introduces the efficient grouping keyword `===`
* Introduces the relationship-separator keyword `&&`
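As a minimal illustration of how the notation above might be consumed, the sketch below splits a ToK output on the `&&` relationship separator and collects `[HINT]`-style keywords. The ToK grammar is not formally specified, so this parser is an assumption, not part of the dataset:

```python
import re

def parse_tok_output(text):
    """Split a ToK output string into its '&&'-separated relationships
    and collect any [HINT]-style keywords (e.g. [TYPES], [EXAMPLES]).
    The grammar is not formally specified; this is only a sketch."""
    relationships = [part.strip() for part in text.split("&&")]
    # Hint keywords are upper-case words in square brackets.
    hints = re.findall(r"\[([A-Z][A-Z ]*)\]", text)
    return relationships, hints

example = ("Energy AS ability TO do work && "
           "forms of energy==[kinetic, potential, thermal, ... [TYPES]]")
rels, hints = parse_tok_output(example)
print(len(rels))  # 2
print(hints)      # ['TYPES']
```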
## Changelog
- 2023-05-20 - Released the first version of the dataset, illustrative examples.
- 2023-05-21 - Added the first 3000 dataset items under `data/` folder. They will be marked with the date of the dataset version.
## Citations
Please cite this repository if you use the code.
```
@misc{tree-of-knowledge,
author = {Xavier M},
  title = {Tree of Knowledge: ToK aka Tree of Knowledge dataset for Large Language Models LLM},
year = {2023},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/fblgit/tree-of-knowledge}},
}
``` | 1,909 | [
[
-0.008087158203125,
-0.054443359375,
0.019622802734375,
0.0002541542053222656,
-0.01580810546875,
0.02740478515625,
-0.01491546630859375,
-0.00576019287109375,
0.005908966064453125,
0.037384033203125,
-0.044891357421875,
-0.06640625,
-0.03594970703125,
-0.01... |
edarchimbaud/earnings-surprise-stocks | 2023-10-29T23:11:22.000Z | [
"region:us"
] | edarchimbaud | null | null | 1 | 28 | 2023-05-28T22:48:31 | ---
dataset_info:
features:
- name: symbol
dtype: string
- name: date
dtype: string
- name: id
dtype: int64
- name: fiscal_qtr_end
dtype: string
- name: date_reported
dtype: timestamp[ns]
- name: eps
dtype: float64
- name: consensus_forecast
dtype: string
- name: percentage_surprise
dtype: string
splits:
- name: train
num_bytes: 5574215
num_examples: 76011
download_size: 401283
dataset_size: 5574215
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
(card content unavailable: the crawl captured only a 502 Bad Gateway error page) | 3,590 | [
[
-0.055328369140625,
-0.05255126953125,
0.01078033447265625,
0.012542724609375,
-0.01018524169921875,
0.0273590087890625,
-0.01282501220703125,
-0.057861328125,
0.070556640625,
0.00890350341796875,
-0.07025146484375,
-0.050201416015625,
-0.034820556640625,
0.... |
KaiLv/UDR_CR | 2023-06-21T12:22:14.000Z | [
"region:us"
] | KaiLv | null | null | 0 | 28 | 2023-06-21T12:22:03 | ---
dataset_info:
features:
- name: idx
dtype: int64
- name: label
dtype: int64
- name: sentence
dtype: string
splits:
- name: train
num_bytes: 204336
num_examples: 1772
- name: test
num_bytes: 233558
num_examples: 1996
download_size: 252165
dataset_size: 437894
---
# Dataset Card for "UDR_CR"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 473 | [
[
-0.03338623046875,
-0.016448974609375,
0.0033855438232421875,
0.007965087890625,
-0.02105712890625,
0.01033782958984375,
0.01015472412109375,
-0.011260986328125,
0.036865234375,
0.03204345703125,
-0.05438232421875,
-0.05767822265625,
-0.0330810546875,
-0.006... |
KaiLv/UDR_CNNDailyMail | 2023-06-21T12:27:24.000Z | [
"region:us"
] | KaiLv | null | null | 0 | 28 | 2023-06-21T12:23:37 | ---
dataset_info:
features:
- name: idx
dtype: int64
- name: article
dtype: string
- name: highlights
dtype: string
- name: len_article
dtype: int64
- name: len_highlights
dtype: int64
splits:
- name: train
num_bytes: 453635426
num_examples: 155098
- name: validation
num_bytes: 21468466
num_examples: 7512
- name: test
num_bytes: 18215547
num_examples: 6379
- name: debug
num_bytes: 292572035
num_examples: 100000
download_size: 484340245
dataset_size: 785891474
---
# Dataset Card for "UDR_CNNDailyMail"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 716 | [
[
-0.0257415771484375,
-0.0141448974609375,
-0.006103515625,
0.0148162841796875,
-0.0231475830078125,
0.0030193328857421875,
0.0192108154296875,
-0.00543975830078125,
0.039306640625,
0.027130126953125,
-0.05810546875,
-0.055633544921875,
-0.0447998046875,
-0.0... |
KaiLv/UDR_COLA | 2023-06-21T12:27:30.000Z | [
"region:us"
] | KaiLv | null | null | 0 | 28 | 2023-06-21T12:27:24 | ---
dataset_info:
features:
- name: idx
dtype: int64
- name: sentence
dtype: string
- name: label
dtype:
class_label:
names:
'0': '0'
'1': '1'
splits:
- name: train
num_bytes: 517945
num_examples: 8532
- name: test
num_bytes: 31522
num_examples: 527
download_size: 272237
dataset_size: 549467
---
# Dataset Card for "UDR_COLA"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 539 | [
[
-0.0303497314453125,
-0.0203399658203125,
-0.00269317626953125,
0.025665283203125,
-0.00804901123046875,
0.0230255126953125,
0.03094482421875,
-0.00586700439453125,
0.052459716796875,
0.0217437744140625,
-0.04876708984375,
-0.038970947265625,
-0.03558349609375,
... |
KaiLv/UDR_ComV | 2023-06-21T12:35:55.000Z | [
"region:us"
] | KaiLv | null | null | 0 | 28 | 2023-06-21T12:35:46 | ---
dataset_info:
features:
- name: idx
dtype: int64
- name: label
dtype: string
- name: question
dtype: string
- name: choices
dtype: string
- name: len_question
dtype: int64
- name: max_len_choices
dtype: int64
splits:
- name: train
num_bytes: 3487585
num_examples: 9992
- name: test
num_bytes: 337966
num_examples: 1000
- name: debug
num_bytes: 1749561
num_examples: 5000
download_size: 2193065
dataset_size: 5575112
---
# Dataset Card for "UDR_ComV"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 660 | [
[
-0.034576416015625,
-0.0174560546875,
0.0010175704956054688,
0.01212310791015625,
-0.0189208984375,
0.00772857666015625,
0.0206451416015625,
0.005405426025390625,
0.040771484375,
0.035675048828125,
-0.05450439453125,
-0.05535888671875,
-0.034423828125,
-0.00... |
eddsterxyz/Raiders-Of-The-Lost-Kek | 2023-06-25T19:36:37.000Z | [
"arxiv:2001.07487",
"region:us"
] | eddsterxyz | null | null | 0 | 28 | 2023-06-25T18:06:23 | # Raiders Of The Lost Kek
The largest 4chan /pol/ dataset.
I extracted the post content and removed HTML nonsense and 4chan-specific artifacts
such as post-number replies in the text.
## There are a few sizes of datasets available
- 100kLines - first 100,000 lines of text from the dataset
- 300kLines - first 300,000 lines of text from the dataset
- 500kLines - first 500,000 lines of text from the dataset
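Such first-N-lines subsets can be reproduced lazily from the full dump with `itertools.islice`; the file name in the comment below is hypothetical:

```python
from itertools import islice

def first_n(lines, n):
    """Lazily take the first n items from any iterable of lines,
    without loading the whole file into memory."""
    return list(islice(lines, n))

# Streaming the first 100,000 lines of the full dump
# (the file name "pol_full.txt" is an assumption):
# with open("pol_full.txt", encoding="utf-8") as f:
#     subset = first_n(f, 100_000)
```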
Maybe at some point, once I have the compute, I'll upload the whole thing.
link : https://arxiv.org/abs/2001.07487 | 514 | [
[
-0.04925537109375,
-0.0245513916015625,
0.04754638671875,
0.0167236328125,
-0.02838134765625,
0.01397705078125,
0.0284423828125,
0.00036597251892089844,
0.03741455078125,
0.05340576171875,
-0.069580078125,
-0.0134735107421875,
-0.034637451171875,
0.040710449... |
bias-amplified-splits/wanli | 2023-07-04T10:59:59.000Z | [
"task_categories:text-classification",
"size_categories:100K<n<1M",
"language:en",
"license:cc-by-4.0",
"arxiv:2305.18917",
"arxiv:2201.05955",
"region:us"
] | bias-amplified-splits | WANLI (Worker-AI Collaboration for NLI) is a collection of 108K English sentence pairs for the task of natural language inference (NLI).
Each example is created by first identifying a "pocket" of examples in MultiNLI (Williams et al., 2018) that share a challenging reasoning pattern, then instructing GPT-3 to write a new example with the same pattern.
The set of generated examples are automatically filtered to contain those most likely to aid model training, and finally labeled and optionally revised by human annotators. | @misc{liu-etal-2022-wanli,
title = "WANLI: Worker and AI Collaboration for Natural Language Inference Dataset Creation",
author = "Liu, Alisa and
Swayamdipta, Swabha and
Smith, Noah A. and
Choi, Yejin",
month = jan,
year = "2022",
url = "https://arxiv.org/pdf/2201.05955",
} | 0 | 28 | 2023-07-03T21:15:20 | ---
license: cc-by-4.0
dataset_info:
- config_name: minority_examples
features:
- name: id
dtype: int64
- name: premise
dtype: string
- name: hypothesis
dtype: string
- name: gold
dtype: string
- name: genre
dtype: string
- name: pairID
dtype: string
splits:
- name: train.biased
num_bytes: 17807491
num_examples: 89402
- name: train.anti_biased
num_bytes: 2690706
num_examples: 13483
- name: test.biased
num_bytes: 865310
num_examples: 4363
- name: test.anti_biased
num_bytes: 127605
num_examples: 637
download_size: 26671494
dataset_size: 21491112
- config_name: partial_input
features:
- name: id
dtype: int64
- name: premise
dtype: string
- name: hypothesis
dtype: string
- name: gold
dtype: string
- name: genre
dtype: string
- name: pairID
dtype: string
splits:
- name: train.biased
num_bytes: 17792846
num_examples: 89402
- name: train.anti_biased
num_bytes: 2705351
num_examples: 13483
- name: test.biased
num_bytes: 858069
num_examples: 4344
- name: test.anti_biased
num_bytes: 134846
num_examples: 656
download_size: 26671494
dataset_size: 21491112
task_categories:
- text-classification
language:
- en
pretty_name: WANLI
size_categories:
- 100K<n<1M
---
# Dataset Card for Bias-amplified Splits for WANLI
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Annotations](#annotations)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Citation Information](#citation-information)
## Dataset Description
- **Repository:** [Fighting Bias with Bias repo](https://github.com/schwartz-lab-nlp/fight-bias-with-bias)
- **Paper:** [arXiv](https://arxiv.org/abs/2305.18917)
- **Point of Contact:** [Yuval Reif](mailto:yuval.reif@mail.huji.ac.il)
- **Original Dataset's Paper:** [WANLI](https://arxiv.org/abs/2201.05955)
### Dataset Summary
Bias-amplified splits is a novel evaluation framework to assess model robustness, by amplifying dataset biases in the training data and challenging models to generalize beyond them. This framework is defined by a bias-amplified training set and a hard, anti-biased test set, which we automatically extract from existing datasets using model-based methods.
Our experiments show that the identified anti-biased examples are naturally challenging for models, and moreover, models trained on bias-amplified data exhibit dramatic performance drops on anti-biased examples, which are not mitigated by common approaches to improve generalization.
Here we apply our framework to WANLI (**W**orker-**A**I Collaboration for **NLI**), a collection of 108K English sentence pairs for the task of natural language inference (NLI). WANLI was found to be more diverse and challenging for models compared to existing NLI datasets.
Our evaluation framework can be applied to any existing dataset, even those considered obsolete, to test model robustness. We hope our work will guide the development of robust models that do not rely on superficial biases and correlations.
#### Evaluation Results (DeBERTa-large)
##### For splits based on minority examples:
| Training Data \ Test Data | Original test | Anti-biased test |
|---------------------------|---------------|------------------|
| Original training split | 77.1 | 61.7 |
| Biased training split | 75.5 | 31.8 |
##### For splits based on partial-input model:
| Training Data \ Test Data | Original test | Anti-biased test |
|---------------------------|---------------|------------------|
| Original training split | 77.1 | 62.6 |
| Biased training split | 76.7 | 49.6 |
#### Loading the Data
```
from datasets import load_dataset
# choose which bias detection method to use for the bias-amplified splits: either "minority_examples" or "partial_input"
dataset = load_dataset("bias-amplified-splits/wanli", "minority_examples")
# use the biased training split and anti-biased test split
train_dataset = dataset['train.biased']
eval_dataset = dataset['test.anti_biased']
```
## Dataset Structure
### Data Instances
Data instances are taken directly from WANLI, and re-split into biased and anti-biased subsets. Here is an example of an instance from the dataset:
```
{
"id": 225295,
"premise": "It is a tribute to the skill of the coach that the team has been able to compete at the highest level.",
"hypothesis": "The coach is a good coach.",
"gold": "entailment",
"genre": "generated",
"pairID": "171408"
}
```
### Data Fields
- `id`: unique identifier for the example
- `premise`: a piece of text
- `hypothesis`: a piece of text that may be true, false, or whose truth conditions may not be knowable when compared to the premise
- `gold`: one of `entailment`, `neutral`, and `contradiction`
- `genre`: one of `generated` and `generated_revised`, depending on whether the example was revised by annotators
- `pairID`: id of seed MNLI example, corresponding to those in `data/mnli/train.jsonl`
### Data Splits
Bias-amplified splits require a method to detect *biased* and *anti-biased* examples in datasets. We release bias-amplified splits created with each of these two methods:
- **Minority examples**: A novel method we introduce that leverages representation learning and clustering for identifying anti-biased *minority examples* (Tu et al., 2020)—examples that defy common statistical patterns found in the rest of the dataset.
- **Partial-input baselines**: A common method for identifying biased examples containing annotation artifacts in a dataset, which examines the performance of models that are restricted to using only part of the input. Such models, if successful, are bound to rely on unintended or spurious patterns in the dataset.
Using each of the two methods, we split each of the original train and test splits into biased and anti-biased subsets. See the [paper](https://arxiv.org/abs/2305.18917) for more details.
#### Minority Examples
| Dataset Split | Number of Instances in Split |
|---------------------|------------------------------|
| Train - biased | 89402 |
| Train - anti-biased | 13483 |
| Test - biased | 4363 |
| Test - anti-biased | 637 |
#### Partial-input Baselines
| Dataset Split | Number of Instances in Split |
|---------------------|------------------------------|
| Train - biased | 89402 |
| Train - anti-biased | 13483 |
| Test - biased | 4344 |
| Test - anti-biased | 656 |
## Dataset Creation
### Curation Rationale
NLP models often rely on superficial cues known as *dataset biases* to achieve impressive performance, and can fail on examples where these biases do not hold. To develop more robust, unbiased models, recent work aims to filter biased examples from training sets. We argue that in order to encourage the development of robust models, we should in fact **amplify** biases in the training sets, while adopting the challenge set approach and making test sets anti-biased. To implement our approach, we introduce a simple framework that can be applied automatically to any existing dataset to use it for testing model robustness.
### Annotations
#### Annotation process
No new annotations are required to create bias-amplified splits. Existing data instances are split into *biased* and *anti-biased* splits based on automatic model-based methods to detect such examples.
## Considerations for Using the Data
### Social Impact of Dataset
Bias-amplified splits were created to promote the development of robust NLP models that do not rely on superficial biases and correlations, and provide more challenging evaluation of existing systems.
### Discussion of Biases
We propose to use bias-amplified splits to complement benchmarks with challenging evaluation settings that test model robustness, in addition to the dataset’s main training and test sets. As such, while existing dataset biases are *amplified* during training with bias-amplified splits, these splits are intended primarily for model evaluation, to expose the bias-exploiting behaviors of models and to identify more robust models and effective robustness interventions.
## Additional Information
### Dataset Curators
Bias-amplified splits were introduced by Yuval Reif and Roy Schwartz from the [Hebrew University of Jerusalem](https://schwartz-lab-huji.github.io).
WANLI was developed by Alisa Liu, Swabha Swayamdipta, Noah A. Smith, and Yejin Choi from the [University of Washington](https://www.cs.washington.edu/) and [AI2](https://allenai.org/).
### Citation Information
```
@misc{reif2023fighting,
title = "Fighting Bias with Bias: Promoting Model Robustness by Amplifying Dataset Biases",
author = "Yuval Reif and Roy Schwartz",
month = may,
year = "2023",
url = "https://arxiv.org/pdf/2305.18917",
}
```
Source dataset:
```
@misc{liu-etal-2022-wanli,
title = "WANLI: Worker and AI Collaboration for Natural Language Inference Dataset Creation",
author = "Liu, Alisa and
Swayamdipta, Swabha and
Smith, Noah A. and
Choi, Yejin",
month = jan,
year = "2022",
url = "https://arxiv.org/pdf/2201.05955",
}
``` | 10,003 | [
[
-0.054656982421875,
-0.056243896484375,
-0.0034732818603515625,
0.0077667236328125,
-0.0192413330078125,
-0.0189208984375,
-0.01263427734375,
-0.02716064453125,
0.0191650390625,
0.0267791748046875,
-0.05950927734375,
-0.033233642578125,
-0.052398681640625,
-... |
sudy-super/CoTangent | 2023-07-15T14:45:20.000Z | [
"language:ja",
"license:apache-2.0",
"region:us"
] | sudy-super | null | null | 10 | 28 | 2023-07-04T09:15:33 | ---
license: apache-2.0
language:
- ja
---
CoTangent is a high-quality, clean, hand-crafted dataset of 100 Japanese CoT (chain-of-thought) sets.
CoTangent_ja.json: the CoT part and the output part are joined together.
CoTangent_separated_ja.json: the CoT part and the output part are separated, but the transitions in CoTangent_ja.json read more naturally.
[
-0.032623291015625,
-0.06524658203125,
0.034393310546875,
0.03839111328125,
-0.04742431640625,
0.0305328369140625,
-0.0107574462890625,
-0.01983642578125,
0.042205810546875,
0.061431884765625,
-0.03619384765625,
-0.041595458984375,
-0.06475830078125,
0.02688... |
Lemoooon/Train-for-TIM | 2023-07-10T08:26:25.000Z | [
"region:us"
] | Lemoooon | null | null | 0 | 28 | 2023-07-07T09:29:27 | Training data for TIM | 21 | [
[
-0.0095062255859375,
-0.0256805419921875,
0.0163116455078125,
0.0084381103515625,
-0.01178741455078125,
-0.002384185791015625,
0.0006494522094726562,
-0.016082763671875,
0.0005311965942382812,
0.041748046875,
-0.0333251953125,
-0.034149169921875,
-0.047241210937... |
d0rj/dolphin-ru | 2023-07-26T14:54:29.000Z | [
"task_categories:conversational",
"task_categories:text-classification",
"task_categories:token-classification",
"task_categories:table-question-answering",
"task_categories:question-answering",
"task_categories:zero-shot-classification",
"task_categories:summarization",
"task_categories:feature-extra... | d0rj | null | null | 3 | 28 | 2023-07-20T22:49:00 | ---
language_creators:
- translated
language:
- ru
multilinguality:
- monolingual
size_categories:
- 1M<n<10M
pretty_name: Dolphin (ru)
source_datasets:
- ehartford/dolphin
license: apache-2.0
tags:
- ChatGPT
- instruct
- instruct-tune
dataset_info:
features:
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 8037639673
num_examples: 2840090
download_size: 3900911155
dataset_size: 8037639673
task_categories:
- conversational
- text-classification
- token-classification
- table-question-answering
- question-answering
- zero-shot-classification
- summarization
- feature-extraction
- text-generation
- text2text-generation
---
# Dolphin-ru 🐬
## Dataset Description
- **Homepage:** https://erichartford.com/dolphin
This is a translated version of [ehartford/dolphin](https://huggingface.co/datasets/ehartford/dolphin) into Russian.
[
-0.024871826171875,
-0.01168060302734375,
0.0128326416015625,
0.0237884521484375,
-0.0501708984375,
-0.017547607421875,
0.010955810546875,
-0.03656005859375,
0.059051513671875,
0.047882080078125,
-0.06982421875,
-0.038482666015625,
-0.032684326171875,
0.0247... |
syke9p3/multilabel-tagalog-hate-speech | 2023-10-12T07:45:41.000Z | [
"region:us"
] | syke9p3 | null | null | 0 | 28 | 2023-08-02T04:33:29 | Entry not found | 15 | [
[
-0.0213775634765625,
-0.01497650146484375,
0.05718994140625,
0.02880859375,
-0.0350341796875,
0.046478271484375,
0.052490234375,
0.00507354736328125,
0.051361083984375,
0.0170135498046875,
-0.052093505859375,
-0.01497650146484375,
-0.0604248046875,
0.0379028... |
aharma/flickr30k_dogs_and_babies_128 | 2023-08-21T14:33:04.000Z | [
"task_categories:image-to-text",
"language:en",
"region:us"
] | aharma | null | null | 1 | 28 | 2023-08-20T12:26:00 | ---
language: en
pretty_name: "pictures of dogs and babies selected from flickr30k dataset"
task_categories: [image-to-text]
---
## Flickr30k dogs and babies selection
The data set was created for an image-to-text/text-to-image tutorial of the
Advanced Natural Language Processing (KEN4259) course at Maastricht University.
To make a good demo while limiting the data size and required training time, we selected only images
where the caption has a term for a dog or a small child. Images were also cropped to squares and
compressed to 128 x 128 pixels to fit into our SWIN transformer.
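The square-crop-then-resize preprocessing described above can be sketched as follows. The exact pipeline the authors used is not published, so this is an assumption, and the Pillow call in the comment is illustrative only:

```python
def center_crop_box(width, height):
    """Return the (left, top, right, bottom) box of the largest
    centered square in a width x height image, as would be used
    before resizing to 128 x 128 (a sketch of the preprocessing;
    the authors' actual pipeline may differ)."""
    side = min(width, height)
    left = (width - side) // 2
    top = (height - side) // 2
    return (left, top, left + side, top + side)

# With Pillow (assumed):
# img.crop(center_crop_box(*img.size)).resize((128, 128))
print(center_crop_box(640, 480))  # (80, 0, 560, 480)
```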
## Authors and acknowledgment
Aki Härmä, Department of Advanced Computing Sciences, Faculty of Science and
Engineering, Maastricht University, The Netherlands
## License
The Flickr30k data can be used for research and education purposes.
See [Flickr30k data set](https://www.kaggle.com/datasets/eeshawn/flickr30k) for
the original license and citation info.
## Project status
First draft
| 976 | [
[
-0.06573486328125,
-0.012908935546875,
0.006755828857421875,
0.027923583984375,
-0.032196044921875,
-0.0158233642578125,
-0.0024509429931640625,
-0.040771484375,
-0.0008916854858398438,
0.035736083984375,
-0.06036376953125,
-0.02288818359375,
-0.031890869140625,... |
mHossain/mh_new_para_detection_data_v1 | 2023-08-20T20:47:31.000Z | [
"region:us"
] | mHossain | null | null | 0 | 28 | 2023-08-20T20:47:26 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: 'Unnamed: 0'
dtype: int64
- name: text
dtype: string
- name: label
dtype: int64
splits:
- name: train
num_bytes: 6842297.7
num_examples: 36000
- name: test
num_bytes: 760255.3
num_examples: 4000
download_size: 3375458
dataset_size: 7602553.0
---
# Dataset Card for "mh_new_para_detection_data_v1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 635 | [
[
-0.048248291015625,
-0.02410888671875,
0.0286712646484375,
0.00658416748046875,
-0.0352783203125,
-0.020660400390625,
0.03314208984375,
-0.00536346435546875,
0.07666015625,
0.031402587890625,
-0.054595947265625,
-0.0654296875,
-0.049102783203125,
-0.00725555... |
RikoteMaster/Emotion_Recognition_4_llama2_chat_oversampled | 2023-08-22T07:43:53.000Z | [
"region:us"
] | RikoteMaster | null | null | 0 | 28 | 2023-08-21T12:51:34 | ---
dataset_info:
features:
- name: Text_processed
dtype: string
- name: Emotion
dtype: string
- name: Augmented
dtype: bool
- name: text
dtype: string
splits:
- name: train
num_bytes: 39065708
num_examples: 82848
download_size: 12633611
dataset_size: 39065708
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "Emotion_Recognition_4_llama2_chat_oversampled"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 592 | [
[
-0.041473388671875,
-0.0168914794921875,
0.014923095703125,
0.04296875,
-0.025726318359375,
0.0151214599609375,
0.007587432861328125,
-0.03326416015625,
0.06494140625,
0.023223876953125,
-0.05035400390625,
-0.032470703125,
-0.058380126953125,
-0.009628295898... |
Linhz/qag_vimmrc | 2023-09-08T04:03:57.000Z | [
"region:us"
] | Linhz | null | null | 0 | 28 | 2023-09-08T04:03:38 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
Linhz/qag_vimmrc2.0 | 2023-09-08T04:04:29.000Z | [
"region:us"
] | Linhz | null | null | 0 | 28 | 2023-09-08T04:04:10 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
Mireu-Lab/NSL-KDD | 2023-09-10T18:27:19.000Z | [
"license:gpl-3.0",
"Network Security",
"region:us"
] | Mireu-Lab | null | null | 0 | 28 | 2023-09-10T18:23:43 | ---
license: gpl-3.0
tags:
- Network Security
---
# NSL-KDD
> This dataset is the result of converting the arff files provided at the [link](https://www.unb.ca/cic/datasets/nsl.html) into CSV.
>
> The data was converted to float64 before being stored.
>
> If you want to obtain the additional original files, they are organized in the [Original Directory](./Original) in the repo.
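As a minimal sketch of the float64 conversion described above — using a tiny made-up sample rather than the real 42-column CSV (column names follow the schema below, values are illustrative only):

```python
import pandas as pd

# Tiny illustrative sample mimicking a few NSL-KDD columns
# (values are made up; the real CSV has 42 columns).
df = pd.DataFrame({
    "duration": [0, 2],
    "protocol_type": ["tcp", "udp"],
    "src_bytes": [491, 146],
    "serror_rate": [0.0, 0.05],
})

# Mirror the card's note: numeric data is stored as float64,
# while object columns (protocol_type, service, flag) stay as strings.
numeric_cols = df.select_dtypes(include="number").columns
df[numeric_cols] = df[numeric_cols].astype("float64")

print(df.dtypes.to_dict())
```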
## Labels
The columns of the dataset are as follows.
|#|Column|Non-Null|Count|Dtype|
|---|---|---|---|---|
|0|duration|151165|non-null|int64|
|1|protocol_type|151165|non-null|object|
|2|service|151165|non-null|object|
|3|flag|151165|non-null|object|
|4|src_bytes|151165|non-null|int64|
|5|dst_bytes|151165|non-null|int64|
|6|land|151165|non-null|int64|
|7|wrong_fragment|151165|non-null|int64|
|8|urgent|151165|non-null|int64|
|9|hot|151165|non-null|int64|
|10|num_failed_logins|151165|non-null|int64|
|11|logged_in|151165|non-null|int64|
|12|num_compromised|151165|non-null|int64|
|13|root_shell|151165|non-null|int64|
|14|su_attempted|151165|non-null|int64|
|15|num_root|151165|non-null|int64|
|16|num_file_creations|151165|non-null|int64|
|17|num_shells|151165|non-null|int64|
|18|num_access_files|151165|non-null|int64|
|19|num_outbound_cmds|151165|non-null|int64|
|20|is_host_login|151165|non-null|int64|
|21|is_guest_login|151165|non-null|int64|
|22|count|151165|non-null|int64|
|23|srv_count|151165|non-null|int64|
|24|serror_rate|151165|non-null|float64|
|25|srv_serror_rate|151165|non-null|float64|
|26|rerror_rate|151165|non-null|float64|
|27|srv_rerror_rate|151165|non-null|float64|
|28|same_srv_rate|151165|non-null|float64|
|29|diff_srv_rate|151165|non-null|float64|
|30|srv_diff_host_rate|151165|non-null|float64|
|31|dst_host_count|151165|non-null|int64|
|32|dst_host_srv_count|151165|non-null|int64|
|33|dst_host_same_srv_rate|151165|non-null|float64|
|34|dst_host_diff_srv_rate|151165|non-null|float64|
|35|dst_host_same_src_port_rate|151165|non-null|float64|
|36|dst_host_srv_diff_host_rate|151165|non-null|float64|
|37|dst_host_serror_rate|151165|non-null|float64|
|38|dst_host_srv_serror_rate|151165|non-null|float64|
|39|dst_host_rerror_rate|151165|non-null|float64|
|40|dst_host_srv_rerror_rate|151165|non-null|float64|
|41|class|151165|non-null|float64|
| 2,281 | [
[
-0.0428466796875,
-0.03033447265625,
0.005859375,
0.01114654541015625,
-0.019775390625,
0.0083770751953125,
0.01479339599609375,
-0.00511932373046875,
0.0252838134765625,
0.0318603515625,
-0.045379638671875,
-0.0596923828125,
-0.035736083984375,
0.0156402587... |
patrickfleith/controlled-anomalies-time-series-dataset | 2023-09-14T18:30:28.000Z | [
"task_categories:time-series-forecasting",
"task_categories:tabular-classification",
"size_categories:1M<n<10M",
"license:cc-by-4.0",
"timeseries",
"anomaly",
"detection",
"region:us"
] | patrickfleith | null | null | 4 | 28 | 2023-09-14T17:40:02 | ---
license: cc-by-4.0
task_categories:
- time-series-forecasting
- tabular-classification
tags:
- timeseries
- anomaly
- detection
pretty_name: cats
size_categories:
- 1M<n<10M
---
# Dataset Card for Dataset Name
## Dataset Description
Cite the dataset as:
Patrick Fleith. (2023). Controlled Anomalies Time Series (CATS) Dataset (Version 2) [Data set]. Solenix Engineering GmbH. https://doi.org/10.5281/zenodo.8338435
### Dataset Summary
The Controlled Anomalies Time Series (CATS) Dataset consists of commands, external stimuli, and telemetry readings of a simulated complex dynamical system with **200 injected anomalies.**
The CATS Dataset exhibits a set of desirable properties that make it very suitable for benchmarking Anomaly Detection Algorithms in Multivariate Time Series [1]:
### Supported Tasks and Leaderboards
Anomaly Detection in Multivariate Time Series
## Dataset Structure
- **Multivariate (17 variables) including sensor readings and control signals.** It simulates the operational behaviour of an arbitrary complex system including:
- **4 Deliberate Actuations / Control Commands sent by a simulated operator / controller**, for instance, commands of an operator to turn ON/OFF some equipment.
- **3 Environmental Stimuli / External Forces** acting on the system and affecting its behaviour, for instance, the wind affecting the orientation of a large ground antenna.
- **10 Telemetry Readings** representing the observable states of the complex system by means of sensors, for instance, a position, a temperature, a pressure, a voltage, current, humidity, velocity, acceleration, etc.
- **5 million timestamps**. Sensors readings are at 1Hz sampling frequency.
- **1 million nominal observations** (the first 1 million datapoints). This is suitable to start learning the "normal" behaviour.
- **4 million observations** that include both nominal and anomalous segments. This is suitable to evaluate both semi-supervised approaches (novelty detection) as well as unsupervised approaches (outlier detection).
- **200 anomalous segments**. One anomalous segment may contain several successive anomalous observations / timestamps. Only the last 4 million observations contain anomalous segments.
- **Different types of anomalies** to understand what anomaly types can be detected by different approaches. The categories are available in the dataset and in the metadata.
- **Fine control over ground truth**. As this is a simulated system with deliberate anomaly injection, the start and end times of the anomalous behaviour are known very precisely. In contrast to real-world datasets, there is no risk that the ground truth contains mislabelled segments, which is often the case for real data.
- **Suitable for root cause analysis**. In addition to the anomaly category, the time series channel in which the anomaly first developed is recorded and made available as part of the metadata. This can be useful to evaluate the performance of algorithms in tracing anomalies back to the right root cause channel.
- **Affected channels**. In addition to the knowledge of the root cause channel in which the anomaly first developed itself, we provide information of channels possibly affected by the anomaly. This can also be useful to evaluate the explainability of anomaly detection systems which may point out to the anomalous channels (root cause and affected).
- **Obvious anomalies.** The simulated anomalies have been designed to be "easy" to be detected for human eyes (i.e., there are very large spikes or oscillations), hence also detectable for most algorithms. It makes this synthetic dataset useful for screening tasks (i.e., to eliminate algorithms that are not capable to detect those obvious anomalies). However, during our initial experiments, the dataset turned out to be challenging enough even for state-of-the-art anomaly detection approaches, making it suitable also for regular benchmark studies.
- **Context provided**. Some variables can only be considered anomalous in relation to other behaviours. A typical example consists of a light and switch pair. The light being either on or off is nominal, the same goes for the switch, but having the switch on and the light off shall be considered anomalous. In the CATS dataset, users can choose (or not) to use the available context, and external stimuli, to test the usefulness of the context for detecting anomalies in this simulation.
- **Pure signal ideal for robustness-to-noise analysis**. The simulated signals are provided without noise: while this may seem unrealistic at first, it is an advantage since users of the dataset can decide to add on top of the provided series any type of noise and choose an amplitude. This makes it well suited to test how sensitive and robust detection algorithms are against various levels of noise.
- **No missing data**. You can drop whatever data you want to assess the impact of missing values on your detector with respect to a clean baseline.
### Data Splits
- The first 1 million points are nominal (no occurrence of anomalies)
- The next 4 million points include both nominal and anomalous segments.
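The split above can be sketched as a simple prefix split — shown here on scaled-down synthetic data standing in for the real 17-channel, 5-million-timestamp series:

```python
import numpy as np

# Synthetic stand-in for the 17-channel CATS series
# (scaled down here; the real dataset has 5_000_000 timestamps).
n_total, n_nominal, n_channels = 5_000, 1_000, 17
data = np.random.randn(n_total, n_channels)

# Per the card: the first chunk is purely nominal -- use it to learn
# "normal" behaviour; the remainder mixes nominal and anomalous segments.
train_nominal = data[:n_nominal]
evaluation = data[n_nominal:]

print(train_nominal.shape, evaluation.shape)
```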
### Licensing Information
license: cc-by-4.0
### Citation Information
Patrick Fleith. (2023). Controlled Anomalies Time Series (CATS) Dataset (Version 1) [Data set]. Solenix Engineering GmbH. https://doi.org/10.5281/zenodo.7646897 | 5,386 | [
[
-0.0369873046875,
-0.058837890625,
0.017547607421875,
0.0297698974609375,
-0.018890380859375,
0.00841522216796875,
0.014068603515625,
-0.0254669189453125,
0.046783447265625,
0.04534912109375,
-0.07391357421875,
-0.0260162353515625,
-0.026885986328125,
0.0180... |
shnl/qg_vimmrc1.0 | 2023-09-19T15:21:55.000Z | [
"region:us"
] | shnl | null | null | 0 | 28 | 2023-09-19T15:21:25 | Entry not found | 15 | [
[
-0.021392822265625,
-0.01494598388671875,
0.05718994140625,
0.028839111328125,
-0.0350341796875,
0.046539306640625,
0.052490234375,
0.00507354736328125,
0.051361083984375,
0.01702880859375,
-0.052093505859375,
-0.01494598388671875,
-0.06036376953125,
0.03790... |
shossain/govreport-summarization-tokenized | 2023-09-20T07:04:40.000Z | [
"region:us"
] | shossain | null | null | 0 | 28 | 2023-09-20T06:19:21 | ---
dataset_info:
features:
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
- name: labels
sequence: int64
splits:
- name: train
num_bytes: 69604
num_examples: 973
download_size: 22673
dataset_size: 69604
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "govreport-summarization-tokenized"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 542 | [
[
-0.031890869140625,
-0.0155029296875,
0.005641937255859375,
0.014556884765625,
-0.0283966064453125,
0.0041351318359375,
0.0078125,
-0.0015859603881835938,
0.07916259765625,
0.038970947265625,
-0.037322998046875,
-0.058197021484375,
-0.053253173828125,
-0.016... |
DanFosing/wizardlm-vicuna-guanaco-uncensored | 2023-09-27T18:45:31.000Z | [
"license:apache-2.0",
"region:us"
] | DanFosing | null | null | 3 | 28 | 2023-09-20T18:39:21 | ---
license: apache-2.0
---
# Dataset
This dataset is a combination of guanaco, wizardlm instruct and wizard vicuna datasets (all of them were uncensored). | 156 | [
[
-0.007175445556640625,
-0.019195556640625,
0.0007843971252441406,
0.0036296844482421875,
-0.0266571044921875,
0.024749755859375,
0.0128936767578125,
-0.00708770751953125,
0.0263214111328125,
0.08453369140625,
-0.03997802734375,
-0.06597900390625,
-0.030319213867... |
DavidLanz/chinese-dolly-input-output-15k | 2023-09-22T02:13:53.000Z | [
"task_categories:question-answering",
"task_categories:summarization",
"task_categories:text-generation",
"size_categories:10K<n<100K",
"language:zh",
"language:en",
"license:cc-by-sa-3.0",
"region:us"
] | DavidLanz | null | null | 0 | 28 | 2023-09-22T02:11:39 | ---
license: cc-by-sa-3.0
task_categories:
- question-answering
- summarization
- text-generation
language:
- zh
- en
size_categories:
- 10K<n<100K
---
Chinese-Dolly-15k is a Traditional Chinese translation of the Dolly instruction dataset (Databricks), in question-answering JSON format for fine-tuning.
The original dataset, 'databricks/databricks-dolly-15k', is an open-source dataset of instruction-following records generated by thousands of Databricks employees across several behavioral categories outlined in the InstructGPT paper, including brainstorming, classification, closed QA, generation, information extraction, open QA, and summarization.
Under the Creative Commons Attribution-ShareAlike 3.0 (CC BY-SA 3.0) license, this dataset may be used for any academic or commercial purpose.
If you are also preparing datasets like these, feel free to contact us to avoid duplicated effort and cost.
## Citation
Please cite the repo if you use the data or code in this repo.
```
@misc{alpaca,
author = {DavidLanz},
title = {An Instruction-following Chinese Language model, LoRA tuning on LLaMA},
year = {2023},
publisher = {GitHub},
journal = {GitHub repository},
url = {https://www.databricks.com/blog/2023/04/12/dolly-first-open-commercially-viable-instruction-tuned-llm},
urldate = {2023-09-15}
}
```
| 924 | [
[
-0.0005731582641601562,
-0.07550048828125,
-0.01105499267578125,
0.053802490234375,
-0.04388427734375,
-0.01122283935546875,
0.004268646240234375,
-0.00988006591796875,
0.0124664306640625,
0.0455322265625,
-0.045074462890625,
-0.063232421875,
-0.03521728515625,
... |
tiagoblima/translation-pt-indigenouns | 2023-10-12T21:30:45.000Z | [
"region:us"
] | tiagoblima | null | null | 0 | 28 | 2023-09-24T20:52:11 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
dataset_info:
features:
- name: translation
struct:
- name: pt
dtype: string
- name: gub
dtype: string
- name: gun
dtype: string
splits:
- name: train
num_bytes: 57522705
num_examples: 108670
- name: validation
num_bytes: 100285
num_examples: 125
- name: test
num_bytes: 1324019
num_examples: 1950
download_size: 11569330
dataset_size: 58947009
---
# Dataset Card for "translation-pt-indigenouns"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 781 | [
[
-0.034423828125,
-0.003528594970703125,
0.01462554931640625,
0.040496826171875,
-0.033905029296875,
0.006267547607421875,
-0.00745391845703125,
-0.007099151611328125,
0.052490234375,
0.0316162109375,
-0.055938720703125,
-0.0635986328125,
-0.0662841796875,
0.... |
reversebutlerianjihad/AnorexicPajama | 2023-09-25T08:04:20.000Z | [
"region:us"
] | reversebutlerianjihad | null | null | 1 | 28 | 2023-09-25T08:03:57 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
- split: validation
path: data/validation-*
dataset_info:
features:
- name: text
dtype: string
- name: meta
struct:
- name: redpajama_set_name
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 239181187.24
num_examples: 54890
- name: test
num_bytes: 40114950
num_examples: 9346
- name: validation
num_bytes: 39109042
num_examples: 9347
download_size: 185544769
dataset_size: 318405179.24
---
# Dataset Card for "AnorexicPajama"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 796 | [
[
-0.0289154052734375,
-0.0190277099609375,
0.011932373046875,
0.0322265625,
-0.0014696121215820312,
-0.0004782676696777344,
0.0260467529296875,
-0.0191650390625,
0.0673828125,
0.045928955078125,
-0.04473876953125,
-0.056243896484375,
-0.052032470703125,
-0.00... |
Intuit-GenSRF/sexting-nsfw-adultconten | 2023-10-05T01:05:04.000Z | [
"region:us"
] | Intuit-GenSRF | null | null | 1 | 28 | 2023-10-05T01:05:02 | ---
dataset_info:
features:
- name: text
dtype: string
- name: labels
sequence: string
splits:
- name: train
num_bytes: 33518
num_examples: 538
download_size: 19162
dataset_size: 33518
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "sexting-nsfw-adultconten"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 483 | [
[
-0.0322265625,
-0.0202789306640625,
0.0025653839111328125,
0.043548583984375,
-0.0251922607421875,
-0.0229339599609375,
0.00562286376953125,
-0.01561737060546875,
0.036224365234375,
0.040313720703125,
-0.057098388671875,
-0.0626220703125,
-0.040374755859375,
... |
RIW/small-coco-wm_50_2 | 2023-10-08T03:32:30.000Z | [
"region:us"
] | RIW | null | null | 0 | 28 | 2023-10-08T03:30:37 | ---
dataset_info:
features:
- name: image
dtype: image
- name: caption
dtype: string
- name: url
dtype: string
- name: key
dtype: string
- name: status
dtype: string
- name: error_message
dtype: 'null'
- name: width
dtype: int64
- name: height
dtype: int64
- name: original_width
dtype: int64
- name: original_height
dtype: int64
- name: exif
dtype: string
- name: sha256
dtype: string
splits:
- name: train
num_bytes: 781729596.182
num_examples: 8362
- name: validation
num_bytes: 851865993.632
num_examples: 8514
download_size: 554825307
dataset_size: 1633595589.8140001
---
# Dataset Card for "small-coco-wm_50_2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 849 | [
[
-0.04705810546875,
-0.0168304443359375,
0.00959014892578125,
0.0221405029296875,
-0.018768310546875,
0.0025177001953125,
0.00019240379333496094,
-0.016754150390625,
0.060638427734375,
0.0267486572265625,
-0.060028076171875,
-0.042022705078125,
-0.0450439453125,
... |
minh21/COVID-QA-Chunk-64-question-answering-biencoder-data-90_10 | 2023-10-09T04:29:31.000Z | [
"region:us"
] | minh21 | null | null | 0 | 28 | 2023-10-09T04:29:26 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
dataset_info:
features:
- name: question
dtype: string
- name: answer
dtype: string
- name: context_chunks
sequence: string
- name: document_id
dtype: int64
- name: id
dtype: int64
splits:
- name: train
num_bytes: 78943266
num_examples: 1631
- name: validation
num_bytes: 8529659
num_examples: 185
download_size: 14143196
dataset_size: 87472925
---
# Dataset Card for "COVID-QA-Chunk-64-question-answering-biencoder-data-90_10"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 758 | [
[
-0.0423583984375,
-0.0303192138671875,
0.0074005126953125,
0.0179290771484375,
-0.0149993896484375,
-0.0025997161865234375,
0.035400390625,
-0.003917694091796875,
0.049591064453125,
0.0211181640625,
-0.050140380859375,
-0.037994384765625,
-0.033447265625,
-0... |
promptora11/llama2 | 2023-10-09T11:27:06.000Z | [
"region:us"
] | promptora11 | null | null | 0 | 28 | 2023-10-09T11:27:02 | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 338808
num_examples: 200
download_size: 201257
dataset_size: 338808
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "llama2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 430 | [
[
-0.0274810791015625,
-0.00949859619140625,
0.0205841064453125,
0.03076171875,
-0.0298004150390625,
0.00951385498046875,
0.03411865234375,
-0.02764892578125,
0.057586669921875,
0.031341552734375,
-0.053466796875,
-0.050018310546875,
-0.050750732421875,
-0.014... |
blockplacer4/hobby-dataset-v4 | 2023-10-11T00:52:35.000Z | [
"region:us"
] | blockplacer4 | null | null | 0 | 28 | 2023-10-09T22:08:38 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
yangwang825/sst2-textfooler-5 | 2023-10-10T11:18:40.000Z | [
"region:us"
] | yangwang825 | null | null | 0 | 28 | 2023-10-10T11:15:46 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
Nbardy/photo_geometric | 2023-10-15T03:16:05.000Z | [
"region:us"
] | Nbardy | null | null | 0 | 28 | 2023-10-15T02:13:39 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 51470967349.009
num_examples: 21381
download_size: 61304680550
dataset_size: 51470967349.009
---
# Dataset Card for "photo_geometric"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 497 | [
[
-0.045684814453125,
-0.026214599609375,
0.0210113525390625,
0.01239776611328125,
-0.016357421875,
-0.007843017578125,
0.02301025390625,
-0.02105712890625,
0.045135498046875,
0.0226593017578125,
-0.0633544921875,
-0.07012939453125,
-0.0382080078125,
-0.017639... |
yh0sh/resized_camvid_annot_car | 2023-10-19T02:16:43.000Z | [
"region:us"
] | yh0sh | CamVid dataset resized to 256*256 for semantic segmentation of cars.
This dataset originated from CamVid: https://mi.eng.cam.ac.uk/research/projects/VideoRec/CamVid
We resized the images to 256*256 and converted the annotation data to binary:
1 if a pixel belongs to the "car" class and 0 for all other classes | @InProceedings{huggingface:dataset,
title = {Resized CamVid dataset with annotation for car},
author={yh0sh},
year={2023}
} | 0 | 28 | 2023-10-15T13:46:23 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
pasupula/insurance_test | 2023-10-18T02:54:53.000Z | [
"region:us"
] | pasupula | The Health Insurance Questions and Answers dataset provides a comprehensive collection of common inquiries related to health insurance, along with informative responses. This resource offers individuals, healthcare professionals, and organizations valuable insights into the complex world of health insurance. It covers topics such as the fundamentals of health insurance, its significance, obtaining coverage, covered services, and explanations of key terms like premium, deductible, and copayment. The dataset also delves into various types of health insurance plans, including Health Maintenance Organizations (HMOs), Preferred Provider Organizations (PPOs), and Exclusive Provider Organizations (EPOs). Moreover, it addresses the impact of pre-existing conditions on coverage eligibility and discusses options for adding family members to insurance plans. Additionally, it explores the concept of open enrollment periods and the benefits of Health Savings Accounts (HSAs) and Flexible Spending Accounts (FSAs) for managing healthcare expenses. This dataset is a valuable resource for anyone seeking to understand, compare, and make informed decisions about health insurance. | @InProceedings{huggingface:dataset,
title = {A great new dataset},
author={huggingface, Inc.
},
year={2020}
} | 0 | 28 | 2023-10-18T02:22:18 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
phanvancongthanh/pubchem_enamine_backup | 2023-10-19T17:05:03.000Z | [
"region:us"
] | phanvancongthanh | null | null | 0 | 28 | 2023-10-19T08:10:46 | ---
dataset_info:
features:
- name: standardized_smiles
dtype: string
splits:
- name: train
num_bytes: 12610061376.788551
num_examples: 258185238
download_size: 5817560683
dataset_size: 12610061376.788551
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "pubchem_enamine_backup"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 495 | [
[
-0.033050537109375,
-0.00637054443359375,
0.0178680419921875,
0.0175323486328125,
-0.0093994140625,
-0.007778167724609375,
0.01316070556640625,
0.0198974609375,
0.0654296875,
0.061370849609375,
-0.06634521484375,
-0.044952392578125,
-0.01201629638671875,
0.0... |
Maverick17/my_sds_dataset | 2023-10-19T13:20:01.000Z | [
"region:us"
] | Maverick17 | null | null | 0 | 28 | 2023-10-19T13:19:58 | ---
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
- split: train
path: data/train-*
dataset_info:
features:
- name: target
struct:
- name: name
dtype: string
- name: steps
list:
- name: action
dtype: string
- name: name
dtype: string
- name: param
dtype: string
- name: source
dtype: string
- name: text
dtype: string
splits:
- name: test
num_bytes: 1794995
num_examples: 583
- name: train
num_bytes: 6870046
num_examples: 2332
download_size: 1562230
dataset_size: 8665041
---
# Dataset Card for "my_sds_dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 797 | [
[
-0.042724609375,
0.0040435791015625,
0.023681640625,
0.022369384765625,
-0.01401519775390625,
0.00495147705078125,
0.0269927978515625,
0.01114654541015625,
0.0855712890625,
0.036712646484375,
-0.06768798828125,
-0.04583740234375,
-0.04010009765625,
-0.001932... |
BrunoGR/emotional_response_spanish_dataset | 2023-10-20T00:30:40.000Z | [
"region:us"
] | BrunoGR | null | null | 0 | 28 | 2023-10-20T00:30:25 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
- split: validation
path: data/validation-*
dataset_info:
features:
- name: index
dtype: float64
- name: input
dtype: string
- name: output
dtype: string
- name: Prompt_sp
dtype: string
- name: Prompt_mix
dtype: string
- name: Prompt_en
dtype: string
splits:
- name: train
num_bytes: 128862064
num_examples: 41910
- name: test
num_bytes: 4724540
num_examples: 1320
- name: validation
num_bytes: 7753180
num_examples: 2220
download_size: 42060281
dataset_size: 141339784
---
# Dataset Card for "emotional_response_spanish_dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 870 | [
[
-0.049041748046875,
-0.023834228515625,
0.005035400390625,
0.051849365234375,
-0.01092529296875,
0.0062255859375,
0.007080078125,
-0.022003173828125,
0.0728759765625,
0.0187225341796875,
-0.0736083984375,
-0.04833984375,
-0.047119140625,
-0.00004178285598754... |
AgamP/LLM_Dataset | 2023-10-22T10:10:51.000Z | [
"region:us"
] | AgamP | null | null | 0 | 28 | 2023-10-22T10:06:07 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
haseong8012/general10k_for-test | 2023-10-22T17:31:53.000Z | [
"region:us"
] | haseong8012 | null | null | 0 | 28 | 2023-10-22T16:40:28 | ---
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
dataset_info:
features:
- name: text
dtype: string
- name: audio
sequence: float32
splits:
- name: test
num_bytes: 1824475284.3333333
num_examples: 10000
download_size: 0
dataset_size: 1824475284.3333333
---
# Dataset Card for "general10k_for-test"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 499 | [
[
-0.049407958984375,
-0.029052734375,
0.01316070556640625,
0.027801513671875,
-0.0191650390625,
-0.015899658203125,
0.016021728515625,
-0.0131988525390625,
0.0643310546875,
0.0247955322265625,
-0.049041748046875,
-0.05108642578125,
-0.0303192138671875,
0.0005... |
nairaxo/shingazidja-sentences-jw | 2023-10-22T16:59:34.000Z | [
"region:us"
] | nairaxo | null | null | 0 | 28 | 2023-10-22T16:59:33 | ---
dataset_info:
features:
- name: Sentence
dtype: string
- name: Translation (fr)
dtype: string
- name: Translation (en) (Google)
dtype: string
- name: Polarity
dtype: float64
- name: Sentiment
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 496096
num_examples: 1902
download_size: 305556
dataset_size: 496096
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "shingazidja-sentences-jw"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 673 | [
[
-0.039398193359375,
-0.034820556640625,
0.0228729248046875,
0.0301361083984375,
-0.00905609130859375,
-0.01024627685546875,
-0.01013946533203125,
-0.014984130859375,
0.058319091796875,
0.035491943359375,
-0.0596923828125,
-0.054412841796875,
-0.0416259765625,
... |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.