id (string, 2–115 chars) | author (string, 2–42 chars, nullable) | last_modified (timestamp[us, tz=UTC]) | downloads (int64, 0–8.87M) | likes (int64, 0–3.84k) | paperswithcode_id (string, 2–45 chars, nullable) | tags (list) | lastModified (timestamp[us, tz=UTC]) | createdAt (string, 24 chars) | key (string, 1 class) | created (timestamp[us]) | card (string, 1–1.01M chars) | embedding (list) | library_name (string, 21 classes) | pipeline_tag (string, 27 classes) | mask_token (null) | card_data (null) | widget_data (null) | model_index (null) | config (null) | transformers_info (null) | spaces (null) | safetensors (null) | transformersInfo (null) | modelId (string, 5–111 chars, nullable) | embeddings (list) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
RossVermouth/chensu_test_dataset | RossVermouth | 2023-05-19T08:23:29Z | 25 | 0 | null | [
"task_categories:image-classification",
"size_categories:1K<n<10K",
"language:aa",
"language:ae",
"license:apache-2.0",
"not-for-all-audiences",
"region:us"
] | 2023-05-19T08:23:29Z | 2023-05-19T07:58:00.000Z | 2023-05-19T07:58:00 | ---
license: apache-2.0
task_categories:
- image-classification
language:
- aa
- ae
tags:
- not-for-all-audiences
size_categories:
- 1K<n<10K
---
# Dataset Card for Dataset Name
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
Just for testing.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] | [
-0.4841858446598053,
-0.48016881942749023,
-0.05218074843287468,
0.33772870898246765,
-0.2928682565689087,
0.18005606532096863,
-0.31815090775489807,
-0.23186151683330536,
0.4713747203350067,
0.7350213527679443,
-0.9031068682670593,
-1.1819566488265991,
-0.7111650109291077,
0.1512516587972... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
mask-distilled-one-sec-cv12/chunk_86 | mask-distilled-one-sec-cv12 | 2023-05-19T22:54:15Z | 25 | 0 | null | [
"region:us"
] | 2023-05-19T22:54:15Z | 2023-05-19T22:53:26.000Z | 2023-05-19T22:53:26 | ---
dataset_info:
features:
- name: logits
sequence: float32
- name: mfcc
sequence:
sequence: float64
splits:
- name: train
num_bytes: 1377070296
num_examples: 270438
download_size: 1404210357
dataset_size: 1377070296
---
# Dataset Card for "chunk_86"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.5220022201538086,
-0.3123195767402649,
0.25156962871551514,
0.3115271329879761,
-0.5274405479431152,
-0.07727762311697006,
0.30984142422676086,
-0.385343462228775,
0.9398937821388245,
0.5953932404518127,
-0.7383105158805847,
-0.6655201315879822,
-0.7292303442955017,
-0.20954352617263794... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
fnlp/moss-003-sft-data | fnlp | 2023-07-09T15:09:50Z | 25 | 47 | null | [
"license:cc-by-4.0",
"region:us"
] | 2023-07-09T15:09:50Z | 2023-05-20T13:07:50.000Z | 2023-05-20T13:07:50 | ---
license: cc-by-4.0
---
# moss-003-sft-data
## Conversation Without Plugins
### Categories
| Category | \# samples |
|----------------------|-----------:|
| Brainstorming | 99,162 |
| Complex Instruction | 95,574 |
| Code | 198,079 |
| Role Playing | 246,375 |
| Writing | 341,087 |
| Harmless | 74,573 |
| Others | 19,701 |
| Total | 1,074,551 |
**Others** contains two categories: **Continue** (9,839) and **Switching** (9,862).
The **Continue** category refers to instances in a conversation where the user asks the system to continue outputting the response from the previous round that was not completed.
The **Switching** category refers to instances in a conversation where the user switches the language they are using.
We removed the **Honesty** category data because it contains private information.
| [
-0.40057018399238586,
-0.8681566119194031,
0.2428250014781952,
0.7256638407707214,
-0.2561814785003662,
0.49358615279197693,
0.15711455047130585,
-0.07777929306030273,
0.08407653123140335,
0.9348720908164978,
-0.8845626711845398,
-0.5392667055130005,
-0.3179748058319092,
-0.037448488175868... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
voidful/StrategyQA | voidful | 2023-05-20T16:06:43Z | 25 | 1 | null | [
"region:us"
] | 2023-05-20T16:06:43Z | 2023-05-20T16:02:29.000Z | 2023-05-20T16:02:29 | A Question Answering Benchmark with Implicit Reasoning Strategies
The StrategyQA dataset was created through a crowdsourcing pipeline for eliciting creative and diverse yes/no questions that require implicit reasoning steps. To solve questions in StrategyQA, the reasoning steps should be inferred using a strategy. To guide and evaluate the question answering process, each example in StrategyQA was annotated with a decomposition into reasoning steps for answering it, and Wikipedia paragraphs that provide evidence for the answer to each step.
As illustrated in the paper's overview figure: questions in StrategyQA (Q1) require implicit reasoning, in contrast to multi-step questions that explicitly specify the reasoning process (Q2). Each training example contains a question (Q1), a yes/no answer (A), a decomposition (D), and evidence paragraphs (E).
[strategyqa_test](https://huggingface.co/datasets/voidful/StrategyQA/resolve/main/strategyqa_test.json)
[strategyqa_train](https://huggingface.co/datasets/voidful/StrategyQA/blob/main/strategyqa_train.json)
[strategyqa_train_filtered](https://huggingface.co/datasets/voidful/StrategyQA/blob/main/strategyqa_train_filtered.json)
[strategyqa_train_paragraphs](https://huggingface.co/datasets/voidful/StrategyQA/blob/main/strategyqa_train_paragraphs.json)
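A minimal loading sketch (assumptions: the JSON files are plain arrays of examples, the field names follow the paper's annotation scheme, and the `blob/main` links above are swapped for `resolve/main` URLs, which serve the raw files):
```python
import requests

BASE = "https://huggingface.co/datasets/voidful/StrategyQA/resolve/main"
train = requests.get(f"{BASE}/strategyqa_train.json").json()

example = train[0]
print(example["question"])       # a yes/no question requiring implicit reasoning
print(example["answer"])         # the yes/no answer
print(example["decomposition"])  # the annotated reasoning steps
```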
Paper
Title: Did Aristotle Use a Laptop? A Question Answering Benchmark with Implicit Reasoning Strategies
Authors: Mor Geva, Daniel Khashabi, Elad Segal, Tushar Khot, Dan Roth, Jonathan Berant
Transactions of the Association for Computational Linguistics (TACL), 2021
Citation:
```
@article{geva2021strategyqa,
title = {{Did Aristotle Use a Laptop? A Question Answering Benchmark with Implicit Reasoning Strategies}},
author = {Geva, Mor and Khashabi, Daniel and Segal, Elad and Khot, Tushar and Roth, Dan and Berant, Jonathan},
journal = {Transactions of the Association for Computational Linguistics (TACL)},
year = {2021},
}
``` | [
-0.5773292183876038,
-0.8024035692214966,
0.8765885233879089,
-0.0037696408107876778,
0.04924388229846954,
-0.134120911359787,
0.006455422844737768,
-0.16252870857715607,
-0.36071479320526123,
0.1664005070924759,
-0.9392411708831787,
-0.33969244360923767,
-0.251867413520813,
0.147022560238... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Linly-AI/Chinese-pretraining-dataset | Linly-AI | 2023-05-26T02:32:06Z | 25 | 25 | null | [
"license:apache-2.0",
"region:us"
] | 2023-05-26T02:32:06Z | 2023-05-25T08:31:43.000Z | 2023-05-25T08:31:43 | ---
license: apache-2.0
---
Data source: https://github.com/CVI-SZU/Linly/wiki/Linly-OpenLLaMA | [
-0.05840451270341873,
-0.5067986845970154,
0.6520258784294128,
-0.050096407532691956,
0.17896680533885956,
-0.4286973178386688,
-0.37768295407295227,
-0.3319784998893738,
0.6551434397697449,
0.7003040909767151,
-0.7607609033584595,
-0.8701214790344238,
-0.017717653885483742,
-0.63445365428... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
shawarmas/profanity-filter | shawarmas | 2023-06-22T08:31:38Z | 25 | 0 | null | [
"region:us"
] | 2023-06-22T08:31:38Z | 2023-06-03T09:50:17.000Z | 2023-06-03T09:50:17 | Entry not found | [
-0.32276472449302673,
-0.22568407654762268,
0.8622258901596069,
0.4346148371696472,
-0.5282984972000122,
0.7012965679168701,
0.7915717363357544,
0.07618629932403564,
0.7746022939682007,
0.2563222646713257,
-0.785281777381897,
-0.22573848068714142,
-0.9104482531547546,
0.5715669393539429,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
AlekseyScorpi/docs_on_several_languages | AlekseyScorpi | 2023-09-16T07:01:24Z | 25 | 1 | null | [
"task_categories:text-classification",
"size_categories:1K<n<10K",
"code",
"region:us"
] | 2023-09-16T07:01:24Z | 2023-06-11T13:50:31.000Z | 2023-06-11T13:50:31 | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': az
'1': by
'2': cn
'3': en
'4': es
'5': fn
'6': gr
'7': jp
'8': ko
'9': kz
'10': la
'11': li
'12': mo
'13': 'no'
'14': pl
'15': ru
'16': ua
splits:
- name: train
num_bytes: 1893804579.79
num_examples: 1987
- name: test
num_bytes: 374568135
num_examples: 339
download_size: 2423302965
dataset_size: 2268372714.79
task_categories:
- text-classification
tags:
- code
size_categories:
- 1K<n<10K
---
# Dataset Card for "docs_on_several_languages"
This dataset is a collection of document images in different languages.
The set includes the following languages: Azerbaijani, Belarusian, Chinese, English, Estonian, Finnish, Georgian, Japanese, Korean, Kazakh, Latvian, Lithuanian, Mongolian, Norwegian, Polish, Russian, Ukrainian.
Each language has a corresponding class label defined. At least 100 images in the entire dataset are allocated per class. This dataset was originally used for the task of classifying the language of a document based on its image, but I hope it can help you in other machine learning tasks.
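As a minimal usage sketch (assuming the standard `datasets` image loader handles this repository):
```python
from datasets import load_dataset

ds = load_dataset("AlekseyScorpi/docs_on_several_languages", split="train")
sample = ds[0]
# Map the integer class label back to its language code, e.g. "en"
print(ds.features["label"].int2str(sample["label"]))
sample["image"]  # a PIL image of the document
```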
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.5371086001396179,
-0.357617050409317,
0.1816321760416031,
0.06279616802930832,
-0.2527163326740265,
0.1744309663772583,
-0.3596816658973694,
-0.37211912870407104,
0.19139258563518524,
0.48958590626716614,
-0.4932159185409546,
-0.8344783782958984,
-0.7671512365341187,
0.3540639281272888,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
henryscheible/implicit_bias | henryscheible | 2023-06-17T23:50:37Z | 25 | 0 | null | [
"region:us"
] | 2023-06-17T23:50:37Z | 2023-06-17T21:21:05.000Z | 2023-06-17T21:21:05 | Entry not found | [
-0.32276472449302673,
-0.22568407654762268,
0.8622258901596069,
0.4346148371696472,
-0.5282984972000122,
0.7012965679168701,
0.7915717363357544,
0.07618629932403564,
0.7746022939682007,
0.2563222646713257,
-0.785281777381897,
-0.22573848068714142,
-0.9104482531547546,
0.5715669393539429,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
shinonomelab/cleanvid-15m_map | shinonomelab | 2023-07-02T04:22:55Z | 25 | 10 | null | [
"task_categories:text-to-video",
"task_categories:video-classification",
"size_categories:10M<n<100M",
"language:en",
"license:cc-by-4.0",
"captions",
"metadata",
"region:us"
] | 2023-07-02T04:22:55Z | 2023-06-27T04:45:10.000Z | 2023-06-27T04:45:10 | ---
license: cc-by-4.0
dataset_info:
features:
- name: id
dtype: int64
- name: description
dtype: string
- name: duration
dtype: float64
- name: aspectratio
dtype: string
- name: videourl
dtype: string
- name: author
dtype: string
- name: categories
dtype: string
- name: framerate
dtype: float64
- name: r18
dtype: int64
splits:
- name: train
num_bytes: 16755833083
num_examples: 14394510
download_size: 5410262648
dataset_size: 16755833083
task_categories:
- text-to-video
- video-classification
language:
- en
tags:
- captions
- metadata
pretty_name: CleanVid Map (15M)
size_categories:
- 10M<n<100M
---
# CleanVid Map (15M) 🎥
### TempoFunk Video Generation Project
CleanVid-15M is a large-scale dataset of videos with multiple metadata entries such as:
- Textual Descriptions 📃
- Recording Equipment 📹
- Categories 🔠
- Framerate 🎞️
- Aspect Ratio 📺
CleanVid aims to improve on the WebVid-10M dataset by adding more data and by de-watermarking the videos in it.
This dataset includes only the map with the URLs and metadata, with 3,694,510 more entries than the original WebVid-10M dataset.
Note that the videos are low-resolution, ranging from 240p to 480p, but this shouldn't be a problem, as resolution scaling is difficult in Text-to-Video models anyway.
More Datasets to come for high-res use cases.
CleanVid is the foundation dataset for the TempoFunk Video Generation project.
Built from a crawl of Shutterstock from June 25, 2023.
## Format 📊
- id: Integer (int64) - Shutterstock video ID
- description: String - Description of the video
- duration: Float(64) - Duration of the video in seconds
- aspectratio: String - Aspect Ratio of the video separated by colons (":")
- videourl: String - Video URL for the video in the entry, in MP4 format. WEBM format is also available most of the time (by changing the extension at the end of the URL).
- author: String - JSON-String containing information of the author such as `Recording Equipment`, `Style`, `Nationality` and others.
- categories: String - JSON-String containing the categories of the videos. (Values from shutterstock, not by us.)
- framerate: Float(64) - Framerate of the video
- r18: Bit (int64) - Whether the video is marked as mature content. 0 = Safe For Work; 1 = Mature Content (see the parsing sketch below)
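As a hedged sketch of how one might decode a single map entry (the helper name is hypothetical; it assumes `author` and `categories` arrive as JSON strings, per the field descriptions above):
```python
import json

def parse_entry(row: dict) -> dict:
    """Hypothetical helper: decode the JSON-string fields of one map row."""
    author = json.loads(row["author"]) if isinstance(row["author"], str) else row["author"]
    categories = json.loads(row["categories"]) if isinstance(row["categories"], str) else row["categories"]
    width, height = (int(x) for x in row["aspectratio"].split(":"))  # e.g. "16:9"
    return {
        **row,
        "author": author,
        "categories": categories,
        "aspect": width / height,
        # WEBM is often available by swapping the extension at the end of the URL
        "webm_url": row["videourl"].rsplit(".", 1)[0] + ".webm",
        "mature": bool(row["r18"]),  # 1 = mature content
    }
```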
## Code 👩💻
If you want to re-create this dataset on your own, code is available here:
https://github.com/chavinlo/tempofunk-scrapper/tree/refractor1/sites/shutterstock
Due to rate limiting, you might need to obtain a proxy. Proxy functionality is included in the repository.
## Sample 🧪
```json
{
"id": 1056934082,
"description": "Rio, Brazil - February 24, 2020: parade of the samba school Mangueira, at the Marques de Sapucai Sambodromo",
"duration": 9.76,
"aspectratio": "16:9",
"videourl": "https://www.shutterstock.com/shutterstock/videos/1056934082/preview/stock-footage-rio-brazil-february-parade-of-the-samba-school-mangueira-at-the-marques-de-sapucai.mp4",
"author": {
"accountsId": 101974372,
"contributorId": 62154,
"bio": "Sempre produzindo mais",
"location": "br",
"website": "www.dcpress.com.br",
"contributorTypeList": [
"photographer"
],
"equipmentList": [
"300mm f2.8",
"24-70mm",
"70-200mm",
"Nikon D7500 ",
"Nikon Df",
"Flashs Godox"
],
"styleList": [
"editorial",
"food",
"landscape"
],
"subjectMatterList": [
"photographer",
"people",
"nature",
"healthcare",
"food_and_drink"
],
"facebookUsername": "celso.pupo",
"googlePlusUsername": "celsopupo",
"twitterUsername": "celsopupo",
"storageKey": "/contributors/62154/avatars/thumb.jpg",
"cdnThumbPath": "/contributors/62154/avatars/thumb.jpg",
"displayName": "Celso Pupo",
"vanityUrlUsername": "rodrigues",
"portfolioUrlSuffix": "rodrigues",
"portfolioUrl": "https://www.shutterstock.com/g/rodrigues",
"instagramUsername": "celsopupo",
"hasPublicSets": true,
"instagramUrl": "https://www.instagram.com/celsopupo",
"facebookUrl": "https://www.facebook.com/celso.pupo",
"twitterUrl": "https://twitter.com/celsopupo"
},
"categories": [
"People"
],
"framerate": 29.97,
"r18": 0
}
```
## Credits 👥
### Main
- Lopho - Part of TempoFunk Video Generation
- Chavinlo - Part of TempoFunk Video Generation & CleanVid Crawling, Scraping and Formatting
```
@InProceedings{Bain21,
author = "Max Bain and Arsha Nagrani and G{\"u}l Varol and Andrew Zisserman",
title = "Frozen in Time: A Joint Video and Image Encoder for End-to-End Retrieval",
booktitle = "IEEE International Conference on Computer Vision",
year = "2021",
}
```
### Extra
- Salt - Base Threading Code (2022) | [
-0.6356505155563354,
-0.3614339232444763,
0.06813942641019821,
0.29771688580513,
-0.5755554437637329,
0.2259545922279358,
-0.06857755035161972,
-0.311324805021286,
0.401978075504303,
-0.06635237485170364,
-0.5546568036079407,
-0.7470341920852661,
-0.6020137071609497,
-0.12824738025665283,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
JourneyDB/JourneyDB | JourneyDB | 2023-08-10T14:19:04Z | 25 | 25 | null | [
"arxiv:2307.00716",
"region:us"
] | 2023-08-10T14:19:04Z | 2023-06-28T08:32:06.000Z | 2023-06-28T08:32:06 | ---
extra_gated_prompt: "You have carefully read the [Terms of Usage](https://journeydb.github.io/assets/Terms_of_Usage.html) and agree with the listed terms."
extra_gated_fields:
First Name: text
Last Name: text
Affiliation: text
I agree with our JourneyDB usage terms and I will obey the terms when using the JourneyDB dataset: checkbox
task_categories:
- image-to-text
language:
- en
size_categories:
- 1M<n<10M
---
# JourneyDB
[[Project Page]](https://journeydb.github.io) [[Paper]](https://arxiv.org/abs/2307.00716) [[Code]](https://github.com/JourneyDB/JourneyDB) [[HuggingFace]](https://huggingface.co/datasets/JourneyDB/JourneyDB) [[OpenDataLab]]()

## Dataset Description
### Summary
**JourneyDB** is a large-scale generated image understanding dataset that contains **4,429,295** high-resolution Midjourney images, annotated with corresponding **text prompt**, **image caption** and **visual question answering**.
### Supported Tasks
**JourneyDB** supports **4** downstream tasks, i.e. **Prompt Inversion**, **Style Retrieval**, **Image Caption**, and **Visual Question Answering**. We evaluate many existing methods on these tasks and provide a comprehensive benchmark. Please see our [Paper](https://arxiv.org/abs/2307.00716) for more details.
## Dataset Details
### Data Collection
For each image instance, we acquire the corresponding text prompt used to generate the image with Midjourney. Furthermore, we employ GPT-3.5 to generate the caption and VQA ground truth.

### Data Instances
We provide several examples to show the contents of each dataset instance.

### Data Splits
We provide detailed statistics for each split in the following table. We randomly split the whole dataset roughly 20:1 to obtain the training and validation sets. The training set contains 4,189,737 labeled images and 1,385,317 labeled prompts; the validation set contains 235,156 images and 82,093 prompts. We additionally sample a testing set for manual filtering, containing 5,402 images and 5,171 prompts.
| | Image | Prompt | Labeled Image | Labeled Prompt | Style QA | Content QA |
|----------------|:---------:|:---------:|:-------------:|:--------------:|:---------:|:----------:|
| Training Set | 4,453,193 | 1,643,375 | 4,189,737 | 1,385,317 | 7,056,394 | 8,775,971 |
| Validation Set | 234,156 | 82,093 | 234,156 | 82,093 | 311,569 | 374,310 |
| Testing Set | 5,402 | 5,171 | 5,402 | 5,171 | 10,040 | 11,369 |
| Total | 4,692,751 | 1,730,639 | 4,429,295 | 1,472,581 | 7,378,003 | 9,161,650 |
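To fetch the data after accepting the usage terms, a minimal sketch (assumes you are logged in via `huggingface-cli login`; `snapshot_download` pulls the raw repository files into the local cache):
```python
from huggingface_hub import snapshot_download

# Downloads the gated dataset repository and returns the local path.
local_dir = snapshot_download(repo_id="JourneyDB/JourneyDB", repo_type="dataset")
print(local_dir)
```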
## Acquirements
### License
The JourneyDB dataset is available under the customised [Terms of Usage](./assets/Terms_of_Usage.md).
### Citation
```
@misc{pan2023journeydb,
title={JourneyDB: A Benchmark for Generative Image Understanding},
author={Junting Pan and Keqiang Sun and Yuying Ge and Hao Li and Haodong Duan and Xiaoshi Wu and Renrui Zhang and Aojun Zhou and Zipeng Qin and Yi Wang and Jifeng Dai and Yu Qiao and Hongsheng Li},
year={2023},
eprint={2307.00716},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
```
### Contributions
[Junting Pan](https://junting.github.io)\*, [Keqiang Sun](https://keqiangsun.github.io)\*, [Yuying Ge](https://geyuying.github.io), [Hao Li](https://cpsxhao.github.io), [Haodong Duan](https://kennymckormick.github.io), [Xiaoshi Wu](https://github.com/tgxs002), [Renrui Zhang](https://github.com/ZrrSkywalker), [Aojun Zhou](https://scholar.google.com/citations?user=cC8lXi8AAAAJ&hl=en), [Zipeng Qin](https://www.linkedin.cn/incareer/in/zipeng-bruce-qin-846a65119), [Yi Wang](https://shepnerd.github.io), [Jifeng Dai](https://jifengdai.org), [Yu Qiao](http://mmlab.siat.ac.cn/yuqiao/), [Hongsheng Li](https://www.ee.cuhk.edu.hk/~hsli/)<sup>+</sup>
(\* equal contribution, <sup>+</sup> corresponding author)
### Contact
If you have any problem or suggestion, please feel free to open an issue or send emails to the contributors. | [
-0.5540295839309692,
-0.29766619205474854,
0.434438556432724,
0.293487012386322,
-0.2698180079460144,
-0.09394381195306778,
0.07732868939638138,
-0.46595388650894165,
0.03661716729402542,
0.46647292375564575,
-0.7448304295539856,
-0.9295494556427002,
-0.52523273229599,
0.08587350696325302,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
d0rj/audiocaps | d0rj | 2023-06-30T12:17:56Z | 25 | 1 | audiocaps | [
"task_categories:text-to-speech",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"license:mit",
"youtube",
"captions",
"region:us"
] | 2023-06-30T12:17:56Z | 2023-06-29T19:10:43.000Z | 2023-06-29T19:10:43 | ---
dataset_info:
features:
- name: audiocap_id
dtype: int64
- name: youtube_id
dtype: string
- name: start_time
dtype: int64
- name: caption
dtype: string
splits:
- name: train
num_bytes: 4162928
num_examples: 49838
- name: validation
num_bytes: 198563
num_examples: 2475
- name: test
num_bytes: 454652
num_examples: 4875
download_size: 2781679
dataset_size: 4816143
license: mit
task_categories:
- text-to-speech
language:
- en
multilinguality:
- monolingual
tags:
- youtube
- captions
pretty_name: AudioCaps
size_categories:
- 10K<n<100K
source_datasets:
- original
paperswithcode_id: audiocaps
---
# audiocaps
## Dataset Description
- **Homepage:** https://audiocaps.github.io/
- **Repository:** https://github.com/cdjkim/audiocaps
- **Paper:** [AudioCaps: Generating Captions for Audios in The Wild](https://aclanthology.org/N19-1011.pdf)
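Each caption is aligned to a 10-second YouTube segment via `youtube_id` and `start_time`; a minimal sketch (the clip-URL helper is an assumption, not part of the official tooling):
```python
from datasets import load_dataset

ds = load_dataset("d0rj/audiocaps", split="train")
row = ds[0]
# Hypothetical: jump to the start of the captioned 10-second segment.
url = f"https://www.youtube.com/watch?v={row['youtube_id']}&t={row['start_time']}s"
print(row["caption"], url)
```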
HuggingFace mirror of [official data repo](https://github.com/cdjkim/audiocaps). | [
-0.580333411693573,
-0.17742988467216492,
0.2568068206310272,
0.4191272258758545,
-0.10674618184566498,
0.32732832431793213,
-0.26977282762527466,
-0.21769803762435913,
0.9444586038589478,
0.6070895195007324,
-1.049654483795166,
-0.8780574202537537,
-0.47634050250053406,
0.1270104944705963... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
awettig/Pile-ArXiv-0.5B-6K-opt | awettig | 2023-07-10T19:42:58Z | 25 | 0 | null | [
"region:us"
] | 2023-07-10T19:42:58Z | 2023-07-10T19:41:28.000Z | 2023-07-10T19:41:28 | ---
dataset_info:
features:
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
- name: labels
sequence: int64
splits:
- name: train
num_bytes: 6500959920
num_examples: 81380
- name: test
num_bytes: 64945692
num_examples: 813
download_size: 1581567196
dataset_size: 6565905612
---
# Dataset Card for "Pile-ArXiv-0.5B-6K-opt"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.7604792714118958,
-0.11773673444986343,
-0.01087619736790657,
0.18616890907287598,
-0.5193488597869873,
-0.05841653794050217,
0.5853230357170105,
-0.14739637076854706,
0.7357849478721619,
0.7429044842720032,
-0.44972339272499084,
-0.6753705143928528,
-0.6342816948890686,
-0.004945049528... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
openchat/openchat_sharegpt_v3 | openchat | 2023-09-04T14:32:11Z | 25 | 16 | null | [
"license:mit",
"region:us"
] | 2023-09-04T14:32:11Z | 2023-07-22T15:51:31.000Z | 2023-07-22T15:51:31 | ---
license: mit
---
ShareGPT dataset for training OpenChat V3 series. See [OpenChat repository](https://github.com/imoneoi/openchat) for instructions.
Contents:
* `sharegpt_clean.json`: ShareGPT dataset in original format, converted to Markdown, and with `model` labels.
* `sharegpt_gpt4.json`: All instances in `sharegpt_clean.json` with `model == "Model: GPT-4"`.
* `*.parquet`: Pre-tokenized dataset for training specified version of OpenChat.
Note: the dataset is NOT currently compatible with the HF dataset loader.
Licensed under MIT.
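Since the HF loader does not work here, a minimal sketch for the raw JSON (assumes `sharegpt_clean.json` has been downloaded locally):
```python
import json

with open("sharegpt_clean.json") as f:
    conversations = json.load(f)

# Keep only the GPT-4 instances, per the `model` label described above.
gpt4_only = [c for c in conversations if c.get("model") == "Model: GPT-4"]
print(len(gpt4_only))
```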
| [
-0.362832635641098,
-0.5491498708724976,
0.10946116596460342,
0.3660847246646881,
-0.24832700192928314,
0.009816818870604038,
0.04465722292661667,
-0.20630379021167755,
0.19595447182655334,
0.6654757261276245,
-0.8016611933708191,
-0.5077435970306396,
-0.49894264340400696,
-0.0899000540375... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
chaoyi-wu/PMC-Inline | chaoyi-wu | 2023-08-06T00:40:40Z | 25 | 4 | null | [
"task_categories:text-generation",
"license:apache-2.0",
"biology",
"region:us"
] | 2023-08-06T00:40:40Z | 2023-07-31T07:00:25.000Z | 2023-07-31T07:00:25 | ---
license: apache-2.0
task_categories:
- text-generation
tags:
- biology
---
# PMC-Inline Dataset
- [PMC-Inline Dataset](#pmc-inline-dataset)
- [Dataset Structure](#dataset-structure)
- [Sample](#sample)
This repository contains the text parts; the figure parts can be downloaded from https://pan.baidu.com/s/1Src_rhXsaOFp8zJ_3zMFsQ?pwd=p3ne.
## Dataset Structure
**PMC-Inline** (PMC papers with inline figures).
We collect the CC-licensed papers from PubMed Central and remove the bibliography, author info, tables and image captions from the original paper XML files.
Based on the inline figure references, we link 11M images back into the paper contexts.
Each paper is organized as a PMCxxxxxxx.json, where ```xxxxxxx``` is the paper's unique PMC id.
## Sample
Each JSON in the dataset is organized as below:
| info | {"article-type": "research-article", "pmid": "17925856", "pmc": "PMC1999654", "publisher-id": "07-PONE-RA-01026R1", "doi": "10.1371/journal.pone.0001008"} |
| ------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------- |
| text | \nPredicting Spatial Patterns of Plant Recruitment Using Animal-Displacement Kernels\nFor plants ... |
| img_ref | [{"id": "pone-0001008-g001", "start": 9177, "end": 9185}, {"id": "pone-0001008-g001", "start": 10715, "end": 10723}, ...] |
Explanation of each key:
- info: metadata about the paper, such as the paper type, pmid, PMC id and so on.
- text: a string which is the paper content.
- img_ref: a list recording which image is referred to, and where, in the original paper. For example, {"id": "pone-0001008-g001", "start": 9177, "end": 9185} denotes that figure pone-0001008-g001 is mentioned in the text string at indices 9177-9185.
You can get the images from our PMC figure parts; each figure is uniformly named ```PMCxxxxxxx_figid.jpg```, e.g. ```PMC1999654_pone-0001008-g001.jpg```.
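A minimal sketch of resolving the inline references back to figure files (field names follow the sample above; the local paths are assumptions):
```python
import json

with open("PMC1999654.json") as f:
    paper = json.load(f)

for ref in paper["img_ref"]:
    mention = paper["text"][ref["start"]:ref["end"]]      # the in-text figure mention
    img_file = f"{paper['info']['pmc']}_{ref['id']}.jpg"  # e.g. PMC1999654_pone-0001008-g001.jpg
    print(mention, "->", img_file)
```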
Note that our PMC figures were collected before PMC-Inline, and some papers were updated in the intervening window; thus some figures may be missing from our figure base. | [
-0.37995028495788574,
-0.299073189496994,
0.5364730954170227,
0.11343996971845627,
-0.5221797227859497,
-0.21944378316402435,
0.10897630453109741,
-0.2777072787284851,
0.3479401469230652,
0.4638627767562866,
-0.7664255499839783,
-0.7625248432159424,
-0.4084905982017517,
0.2899652421474457,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
dim/logic_tasks_ru | dim | 2023-08-14T18:00:38Z | 25 | 0 | null | [
"license:mit",
"region:us"
] | 2023-08-14T18:00:38Z | 2023-08-14T17:59:33.000Z | 2023-08-14T17:59:33 | ---
license: mit
dataset_info:
features:
- name: title
dtype: string
- name: task
dtype: string
- name: answer
dtype: string
- name: ok/trash
dtype: string
splits:
- name: train
num_bytes: 87178
num_examples: 99
download_size: 54016
dataset_size: 87178
---
Tasks from this site: https://www.potehechas.ru/zadachi/zadachi.shtml | [
-0.37600380182266235,
-0.8137111067771912,
0.2883026599884033,
0.2950079143047333,
-0.9108151197433472,
-0.06131897494196892,
-0.03612392768263817,
-0.21249760687351227,
0.9155716896057129,
-0.01336043979972601,
-1.0437252521514893,
-0.8220361471176147,
-0.22289399802684784,
-0.10271258652... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
MuskumPillerum/General-Knowledge | MuskumPillerum | 2023-10-15T14:51:33Z | 25 | 2 | null | [
"task_categories:text-generation",
"task_categories:text2text-generation",
"size_categories:10K<n<100K",
"language:en",
"license:mit",
"general knowledge",
"GK",
"reasoning",
"facts",
"alpaca",
"region:us"
] | 2023-10-15T14:51:33Z | 2023-08-15T05:07:04.000Z | 2023-08-15T05:07:04 | ---
license: mit
task_categories:
- text-generation
- text2text-generation
language:
- en
tags:
- general knowledge
- GK
- reasoning
- facts
- alpaca
pretty_name: General knowledge dataset
size_categories:
- 10K<n<100K
---
# Dataset Card for Dataset Name
### Dataset Summary
The dataset is a collection of questions and answers themed on general facts and reasoning. The dataset is divided into two features - 'Question' and 'Answer'.
It is meant for training a model to be good at general knowledge and reasoning. This dataset is inspired by the Alpaca dataset and in fact contains a subset of the Alpaca dataset.
### Distribution
The distribution of the MuskumPillerum/General-Knowledge dataset is:
```
Total (non alpaca): 6315
- Facts - 80.8 %
- Nature - 16.5 %
- AI, Computer science, Robotics - 7.3 %
- Physics, Chemistry - 16.3 %
- Geography, History - 11.2 %
- People - 16 %
- Sports - 13.5 %
- Recommendation, Reasoning, Dilemma - 17.8 %
- Others - 1.4 %
```
### Format
```
{'Question': 'What is the largest species of shark',
'Answer': 'The whale shark is considered the largest species of shark, with adults reaching lengths of up to 40 feet or more and weighing several tons.'}
```
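A minimal loading sketch (assuming the standard `datasets` loader handles this repository):
```python
from datasets import load_dataset

ds = load_dataset("MuskumPillerum/General-Knowledge", split="train")
print(ds[0]["Question"])
print(ds[0]["Answer"])
```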
### Languages
English
### Source Data
This dataset is inspired by Stanford's Alpaca dataset: tatsu-lab/alpaca
```
@misc{alpaca,
author = {Rohan Taori and Ishaan Gulrajani and Tianyi Zhang and Yann Dubois and Xuechen Li and Carlos Guestrin and Percy Liang and Tatsunori B. Hashimoto },
title = {Stanford Alpaca: An Instruction-following LLaMA model},
year = {2023},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/tatsu-lab/stanford_alpaca}},
}
```
### Licensing Information
This dataset uses the MIT license.
### Citation Information
For now, just refer to: MuskumPillerum/General-Knowledge
| [
-0.6316530704498291,
-0.7865461707115173,
0.31951212882995605,
0.006965584587305784,
-0.633823573589325,
-0.15914759039878845,
-0.07351125031709671,
-0.3956947326660156,
0.5995534658432007,
0.3783414959907532,
-0.6524282693862915,
-0.6894610524177551,
-0.5635620355606079,
-0.02336215972900... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
dim/sharegpt_short_ru | dim | 2023-09-02T00:53:23Z | 25 | 0 | null | [
"license:cc-by-nc-4.0",
"region:us"
] | 2023-09-02T00:53:23Z | 2023-08-17T22:15:08.000Z | 2023-08-17T22:15:08 | ---
license: cc-by-nc-4.0
dataset_info:
features:
- name: conversation
sequence: string
- name: hash
dtype: string
splits:
- name: train
num_bytes: 825523
num_examples: 253
download_size: 367027
dataset_size: 825523
---
### Version 1
```python
import json
with open("verbalist/datasets/RyokoAI_ShareGPT52K/sg_90k_part1.json") as f:
dataset1 = json.load(f)
with open("verbalist/datasets/RyokoAI_ShareGPT52K/sg_90k_part2.json") as f:
dataset2 = json.load(f)
dataset = dataset1 + dataset2
import re
import regex
import hashlib
def filter_string(string):
has = True
has_zh = not len(re.findall(r"[\u4e00-\u9fff]+", string)) > 0
has_ko = not len(re.findall(r"[\u3131-\ucb4c]+", string)) > 0
has = has_zh and has_ko
invalid_letters = "ієùéàçğİžš"
for letter in invalid_letters:
if letter in string:
return False
return has
def has_cyrillic(text):
return bool(regex.search(r"\p{IsCyrillic}", text))
clean_dataset = []
for conversation in dataset:
all_text = "\n".join([item["value"] for item in conversation["conversations"]])
# print(all_text)
# break
if filter_string(all_text) and has_cyrillic(all_text):
clean_dataset.append(conversation)
import markdownify
def correct_string(string):
string = string.replace("\\_", "_")
languages = [
"css",
"python",
"go",
"html",
"kotlin",
"diff",
"vba",
"sql",
]
for lang in languages:
string = string.replace(f"\n{lang}Copy code`", f"{lang}\n")
string = string.replace("`\n```", "\n```")
string = string.replace("\n ", "\n ")
delete_phrases = [
"Как искусственный интеллект, я не являюсь владельцем физических объектов и не могу продавать или покупать предметы. Однако, я могу поделиться советом, как можно попытаться убедить кого-то в покупке карандаша.",
"Как искусственный интеллект, я не имею личных чувств и мнений, и не могу иметь предпочтений в выборе между рождением своего ребенка и усыновлением приемного ребенка из приюта.",
"1 / 1",
"2 / 2",
"3 / 3",
"4 / 4",
"5 / 5",
"6 / 6",
"7 / 7",
"8 / 8",
"9 / 9",
"К сожалению, я не могу проверить дату вопроса, но я могу предоставить информацию о Максиме Радайкине и Борисе Марцинкевиче на начало 2021 года.",
"К сожалению, я не могу вставить пример базы данных в чат, но я могу объяснить, что это такое.",
"К сожалению, я не могу написать полноценное расширение для Google Chrome в рамках этой сессии. Однако,",
"К сожалению, я не могу выбрать материалы и дизайн за вас, так как это зависит от ваших потребностей и предпочтений. Однако,",
"Извините, но я не могу создать код для такой сложной программы с нуля, так как это потребовало бы обширных исследований, ресурсов и опыта. Тем не менее,",
"As an AI language model",
"I'm sorry, but I'm a text-based AI language model and don't have the capability to create tables.",
"Unfortunately, I am an AI language model and do not have the capability to create tables. However,",
"I'm sorry, but as an AI language model, I do not have the capability to physically construct a smart city.",
"Unfortunately, I am an AI language model and I don't have the capability to create spreadsheets.",
"I'm sorry for the delay. Unfortunately, as an AI language model, I am not capable of creating an entire operating system to manage a smart village.",
"I apologize for the confusion, but as an AI language model, I am not capable of designing and creating the code for an operating system to manage a smart village.",
"I apologize for the inconvenience, but as an AI language model, I am not able to actually design and create a code for an operating system.",
"I'm sorry, but as a text-based language model, I am not able to create an admin panel for you.",
'Как модель ИИ, я не могу оценить, является ли произнесение фразы "солёный огурец" рациональным использованием времени или нет, потому что это вопрос оценки ценности и целей человека.',
]
for phrase in delete_phrases:
string = string.replace(phrase, "").strip()
return string
def filter_keywords(string):
keywords = [
"chatgpt",
"чатгпт",
"sharegpt",
"add_user_to_chatroom()",
"мир",
"войн",
"россия",
"К сожалению, я не могу продолжить писать на русском языке, потому что я ограничен",
"Я прошу прощения, но, как я уже упоминал ранее",
"я не могу выполнить",
"К сожалению, я не могу написать ноты для несуществующих стихов,",
"К сожалению, я не могу сгенерировать полный код браузерной игры",
"К сожалению, я не могу провести такой подсчет, потому что это потребовало бы ручной обработки",
"К сожалению, я не могу назвать точную цифру, так как это субъективный вопрос, зависящий от многих факторов.",
"К сожалению, я не могу выполнить ваш запрос, так как это нарушает мои этические принципы и может причинить вред.",
"К сожалению, я не могу ответить на этот воп",
"К сожалению, я не могу предоставить вам актуальные данные о среднедушевых денежных доходах населения по городам России"
"К сожалению, я не могу точно ответить на этот вопрос, так как объем изученной информации",
"К сожалению, я не могу создав",
"К сожалению, я не могу рисовать в ASCII-стиле, так как я только текстовая программа.",
"К сожалению, я не могу создавать изображения напрямую в этом окне чата.",
"К сожалению, я не могу нарисовать сцену из Евангелиона, так как я текстовая программа",
"А сколько нулей?",
"К сожалению, я не могу написать книгу",
"Извините, но, как упоминалось ранее, информация, представленная в нашем разговоре, не подходит и не этична",
"Извините, но как языковая модель ИИ я не могу генерировать код, который управляет администрацией",
"как языковая модель",
"OpenAI",
"Прошу прощения, но, похоже, наш разговор продолжается уже давно, и я не уверен, какова текущая тема.",
"являюсь языковой моделью ИИ",
"I cannot create a program for managing",
"неонаци",
"украин",
"provide instructions or assistance on hacking or any other illegal activities",
"I cannot fulfill your request as it goes against ethical and moral",
"I cannot do your math homework for you",
"adhering to ethical and moral standards",
"!GPT",
"Developer Mode Output",
"are illegal or unethical.",
"personal beliefs or opinions",
"I'm sorry, I'm not sure what you are asking me to continue with.",
"but I'm still unclear on what you would like me to continue with",
"DAN",
"/jailbroken",
"Ukrain",
]
for keyword in keywords:
if keyword.lower() in string.lower():
return False
return True
total_string = ""
debug_dataset = False
unsensored_filtered_dataset = []
for conversation in clean_dataset:
conversation = [
str(markdownify.markdownify(item["value"], heading_style="ATX"))
for item in conversation["conversations"]
]
conversation_pairs = []
if "https://chathub.gg" in conversation[0]:
conversation.pop(0)
full_text = " ".join(conversation)
if filter_keywords(full_text):
for i in range(1, len(conversation)):
if (i + 1) % 2 == 0:
if debug_dataset:
bot_message = "BOT " + correct_string(conversation[i])
user_message = "USER " + correct_string(conversation[i - 1])
else:
bot_message = correct_string(conversation[i])
user_message = correct_string(conversation[i - 1])
conversation_pairs.append(user_message)
conversation_pairs.append(bot_message)
if len(conversation_pairs) > 0:
unsensored_filtered_dataset.append(conversation_pairs)
if debug_dataset:
all_text = "\n===\n".join([item for item in conversation_pairs])
total_string += all_text
total_string += "===" * 10
total_string += "\n"
total_string += "===" * 10
total_string += "\n"
total_string += "===" * 10
total_string += "\n"
# print(total_string)
import numpy as np  # needed for the percentile filtering below
from transformers import AutoTokenizer
from verbalist.datasets.utils import visualize_hist
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")
conversation_lengths = []
for conversation in unsensored_filtered_dataset:
all_text = "\n===\n".join([item for item in conversation])
conversation_lengths.append(len(tokenizer(all_text)["input_ids"]))
# print(all_text)
# print("="*100)
# print("="*100)
# print("="*100)
# break
# if has_cyrillic(all_text):
# rus_conv.append(conversation)
visualize_hist(conversation_lengths, "ru_share_gpt_filtered")
filter_num = 85
passed_convs = (
np.array(conversation_lengths) < np.percentile(conversation_lengths, filter_num)
).tolist()
unsensored_passed = []
for i, status in enumerate(passed_convs):
if status:
unsensored_passed.append(unsensored_filtered_dataset[i])
unsensored_dataset = []
for conv in unsensored_passed:
conv_hash = hashlib.sha256(conv[0].encode('utf-8')).hexdigest()
unsensored_dataset.append({
"conversation": conv,
"hash": conv_hash
})
``` | [
-0.451345831155777,
-0.787352979183197,
0.4027661681175232,
0.29358673095703125,
-0.2986743152141571,
0.2017485499382019,
-0.139424666762352,
-0.20634163916110992,
0.4487197995185852,
0.4076388478279114,
-0.6067259311676025,
-0.7898464202880859,
-0.4053744077682495,
0.10626185685396194,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
sarahpann/MATH | sarahpann | 2023-09-23T03:06:46Z | 25 | 0 | null | [
"region:us"
] | 2023-09-23T03:06:46Z | 2023-08-19T05:24:14.000Z | 2023-08-19T05:24:14 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622263669967651,
0.43461522459983826,
-0.52829909324646,
0.7012971639633179,
0.7915719747543335,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104475975036621,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
MaggiePai/SLUE-sqa5-CODE | MaggiePai | 2023-08-20T17:10:38Z | 25 | 0 | null | [
"region:us"
] | 2023-08-20T17:10:38Z | 2023-08-20T15:16:50.000Z | 2023-08-20T15:16:50 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622263669967651,
0.43461522459983826,
-0.52829909324646,
0.7012971639633179,
0.7915719747543335,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104475975036621,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
jasonkstevens/pippa-llama2-chat | jasonkstevens | 2023-08-21T07:27:16Z | 25 | 4 | null | [
"license:agpl-3.0",
"region:us"
] | 2023-08-21T07:27:16Z | 2023-08-21T07:06:44.000Z | 2023-08-21T07:06:44 | ---
license: agpl-3.0
---
| [
-0.1285335123538971,
-0.1861683875322342,
0.6529128551483154,
0.49436232447624207,
-0.19319400191307068,
0.23607441782951355,
0.36072009801864624,
0.05056373029947281,
0.5793656706809998,
0.7400146722793579,
-0.650810182094574,
-0.23784008622169495,
-0.7102247476577759,
-0.0478255338966846... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
leffff/south-park-character-png-dataset | leffff | 2023-10-20T16:49:00Z | 25 | 0 | null | [
"license:mit",
"region:us"
] | 2023-10-20T16:49:00Z | 2023-08-31T07:59:35.000Z | 2023-08-31T07:59:35 | ---
license: mit
---
# South Park Character Png Dataset
 | [
-0.3245769441127777,
-0.16681574285030365,
0.22108325362205505,
0.4771154224872589,
-0.3940330445766449,
0.4249756932258606,
0.13421347737312317,
0.06240719184279442,
0.5387177467346191,
0.765010416507721,
-0.5849718451499939,
-0.49511227011680603,
-0.3583783209323883,
0.16508802771568298,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
dim/ru_instruct_gpt4 | dim | 2023-08-31T15:07:24Z | 25 | 0 | null | [
"license:cc-by-nc-4.0",
"region:us"
] | 2023-08-31T15:07:24Z | 2023-08-31T14:57:43.000Z | 2023-08-31T14:57:43 | ---
license: cc-by-nc-4.0
dataset_info:
features:
- name: prompt
dtype: string
- name: solution
dtype: string
splits:
- name: train
num_bytes: 18294770
num_examples: 14222
download_size: 9373283
dataset_size: 18294770
---
| [
-0.12853386998176575,
-0.18616756796836853,
0.652912974357605,
0.4943627715110779,
-0.1931934952735901,
0.2360743284225464,
0.3607199192047119,
0.05056323856115341,
0.5793654918670654,
0.7400139570236206,
-0.6508104205131531,
-0.2378396987915039,
-0.7102250456809998,
-0.047825999557971954,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
dim/gpt_roleplay_realm | dim | 2023-08-31T15:26:55Z | 25 | 0 | null | [
"license:cc-by-nd-4.0",
"region:us"
] | 2023-08-31T15:26:55Z | 2023-08-31T15:19:44.000Z | 2023-08-31T15:19:44 | ---
license: cc-by-nd-4.0
dataset_info:
features:
- name: conversation
sequence: string
- name: name
dtype: string
- name: char_description
dtype: string
splits:
- name: train
num_bytes: 26058509
num_examples: 8700
download_size: 8069442
dataset_size: 26058509
---
| [
-0.12853386998176575,
-0.18616756796836853,
0.652912974357605,
0.4943627715110779,
-0.1931934952735901,
0.2360743284225464,
0.3607199192047119,
0.05056323856115341,
0.5793654918670654,
0.7400139570236206,
-0.6508104205131531,
-0.2378396987915039,
-0.7102250456809998,
-0.047825999557971954,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
dim/ultrachat_ru | dim | 2023-08-31T16:44:16Z | 25 | 0 | null | [
"license:cc-by-nc-4.0",
"region:us"
] | 2023-08-31T16:44:16Z | 2023-08-31T16:42:57.000Z | 2023-08-31T16:42:57 | ---
license: cc-by-nc-4.0
dataset_info:
features:
- name: conversation
sequence: string
splits:
- name: train
num_bytes: 4495105
num_examples: 500
download_size: 1919370
dataset_size: 4495105
---
| [
-0.12853386998176575,
-0.18616756796836853,
0.652912974357605,
0.4943627715110779,
-0.1931934952735901,
0.2360743284225464,
0.3607199192047119,
0.05056323856115341,
0.5793654918670654,
0.7400139570236206,
-0.6508104205131531,
-0.2378396987915039,
-0.7102250456809998,
-0.047825999557971954,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
chiayewken/flan-v2 | chiayewken | 2023-09-01T05:19:13Z | 25 | 3 | null | [
"region:us"
] | 2023-09-01T05:19:13Z | 2023-08-31T18:13:51.000Z | 2023-08-31T18:13:51 | ---
dataset_info:
features:
- name: source
dtype: string
- name: target
dtype: string
- name: task_name
dtype: string
- name: task_source
dtype: string
- name: template_type
dtype: string
- name: template_idx
dtype: int64
splits:
- name: train
num_bytes: 44316029472
num_examples: 23173509
download_size: 0
dataset_size: 44316029472
---
# Dataset Card for "flan-v2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.5180265307426453,
-0.2853354811668396,
0.10146728903055191,
0.09971068054437637,
-0.11934948712587357,
-0.2350224256515503,
0.33843955397605896,
-0.48550736904144287,
0.8708926439285278,
0.5986568331718445,
-0.827263355255127,
-0.4279821515083313,
-0.5227388143539429,
-0.389519423246383... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
dim/dolphin_ru_3k | dim | 2023-08-31T20:24:23Z | 25 | 0 | null | [
"region:us"
] | 2023-08-31T20:24:23Z | 2023-08-31T20:20:15.000Z | 2023-08-31T20:20:15 | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 8490195.387822216
num_examples: 3000
download_size: 4148079
dataset_size: 8490195.387822216
---
# Dataset Card for "dolphin_ru_3k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.8710280656814575,
-0.13824790716171265,
0.17411980032920837,
0.37004604935646057,
-0.5617817640304565,
-0.31269019842147827,
0.6040089726448059,
-0.523510754108429,
0.8189380764961243,
0.6285730004310608,
-0.8200638294219971,
-0.5640594363212585,
-0.4874107241630554,
0.10741086304187775... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
dongyoung4091/hh-generated_flan_t5_rx_xl_all | dongyoung4091 | 2023-09-03T02:17:32Z | 25 | 0 | null | [
"region:us"
] | 2023-09-03T02:17:32Z | 2023-09-03T02:15:58.000Z | 2023-09-03T02:15:58 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: response
dtype: string
- name: prompt
dtype: string
- name: model_A
dtype: float64
- name: model_B
dtype: float64
- name: external_rm1
dtype: float64
- name: external_rm2
dtype: float64
- name: RM_enough-detail
dtype: float64
- name: RM_fail-to-consider-context
dtype: float64
- name: RM_readability
dtype: float64
- name: zeroshot_helpfulness
dtype: float64
- name: zeroshot_specificity
dtype: float64
- name: zeroshot_intent
dtype: float64
- name: zeroshot_factuality
dtype: float64
- name: zeroshot_easy-to-understand
dtype: float64
- name: zeroshot_relevance
dtype: float64
- name: zeroshot_readability
dtype: float64
- name: zeroshot_enough-detail
dtype: float64
- name: 'zeroshot_biased:'
dtype: float64
- name: zeroshot_fail-to-consider-individual-preferences
dtype: float64
- name: zeroshot_repetetive
dtype: float64
- name: zeroshot_fail-to-consider-context
dtype: float64
- name: zeroshot_too-long
dtype: float64
splits:
- name: train
num_bytes: 7769957
num_examples: 25600
download_size: 3659087
dataset_size: 7769957
---
# Dataset Card for "hh-generated_flan_t5_rx_xl_all"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.48590993881225586,
-0.23333659768104553,
0.3371831178665161,
0.10116992145776749,
-0.1967756152153015,
0.061223648488521576,
0.26018109917640686,
-0.19022534787654877,
1.0272160768508911,
0.6255598068237305,
-0.8024951815605164,
-0.8360006213188171,
-0.5035654306411743,
0.05675908550620... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
dongyoung4091/hh-rlhf_with_features_flan_t5_large_flan_t5_zeroshot | dongyoung4091 | 2023-09-08T11:37:16Z | 25 | 0 | null | [
"region:us"
] | 2023-09-08T11:37:16Z | 2023-09-08T11:37:07.000Z | 2023-09-08T11:37:07 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: chosen
dtype: string
- name: rejected
dtype: string
- name: helpfulness_chosen
dtype: int64
- name: helpfulness_rejected
dtype: int64
- name: specificity_chosen
dtype: int64
- name: specificity_rejected
dtype: int64
- name: intent_chosen
dtype: int64
- name: intent_rejected
dtype: int64
- name: factuality_chosen
dtype: int64
- name: factuality_rejected
dtype: int64
- name: easy-to-understand_chosen
dtype: int64
- name: easy-to-understand_rejected
dtype: int64
- name: relevance_chosen
dtype: int64
- name: relevance_rejected
dtype: int64
- name: readability_chosen
dtype: int64
- name: readability_rejected
dtype: int64
- name: enough-detail_chosen
dtype: int64
- name: enough-detail_rejected
dtype: int64
- name: biased:_chosen
dtype: int64
- name: biased:_rejected
dtype: int64
- name: fail-to-consider-individual-preferences_chosen
dtype: int64
- name: fail-to-consider-individual-preferences_rejected
dtype: int64
- name: repetetive_chosen
dtype: int64
- name: repetetive_rejected
dtype: int64
- name: fail-to-consider-context_chosen
dtype: int64
- name: fail-to-consider-context_rejected
dtype: int64
- name: too-long_chosen
dtype: int64
- name: too-long_rejected
dtype: int64
- name: human
dtype: string
- name: assistant_chosen
dtype: string
- name: assistant_rejected
dtype: string
- name: log_score_chosen
dtype: float64
- name: log_score_rejected
dtype: float64
- name: labels
dtype: string
- name: zeroshot_helpfulness_chosen
dtype: int64
- name: zeroshot_helpfulness_rejected
dtype: int64
- name: zeroshot_specificity_chosen
dtype: int64
- name: zeroshot_specificity_rejected
dtype: int64
- name: zeroshot_intent_chosen
dtype: int64
- name: zeroshot_intent_rejected
dtype: int64
- name: zeroshot_factuality_chosen
dtype: int64
- name: zeroshot_factuality_rejected
dtype: int64
- name: zeroshot_easy-to-understand_chosen
dtype: int64
- name: zeroshot_easy-to-understand_rejected
dtype: int64
- name: zeroshot_relevance_chosen
dtype: int64
- name: zeroshot_relevance_rejected
dtype: int64
- name: zeroshot_readability_chosen
dtype: int64
- name: zeroshot_readability_rejected
dtype: int64
- name: zeroshot_enough-detail_chosen
dtype: int64
- name: zeroshot_enough-detail_rejected
dtype: int64
- name: zeroshot_biased:_chosen
dtype: int64
- name: zeroshot_biased:_rejected
dtype: int64
- name: zeroshot_fail-to-consider-individual-preferences_chosen
dtype: int64
- name: zeroshot_fail-to-consider-individual-preferences_rejected
dtype: int64
- name: zeroshot_repetetive_chosen
dtype: int64
- name: zeroshot_repetetive_rejected
dtype: int64
- name: zeroshot_fail-to-consider-context_chosen
dtype: int64
- name: zeroshot_fail-to-consider-context_rejected
dtype: int64
- name: zeroshot_too-long_chosen
dtype: int64
- name: zeroshot_too-long_rejected
dtype: int64
splits:
- name: train
num_bytes: 16425816
num_examples: 9574
- name: test
num_bytes: 16369741
num_examples: 9574
download_size: 16115109
dataset_size: 32795557
---
# Dataset Card for "hh-rlhf_with_features_flan_t5_large_flan_t5_zeroshot"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.596076250076294,
-0.2999914586544037,
0.32023224234580994,
0.0850432962179184,
-0.2960684299468994,
0.08695422857999802,
0.16624942421913147,
-0.2965180277824402,
1.0400046110153198,
0.6014345288276672,
-0.8186356425285339,
-0.8355526328086853,
-0.545319139957428,
-0.1103098914027214,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
tim9510019/llama2_QA_Economics_230915 | tim9510019 | 2023-11-26T03:33:30Z | 25 | 3 | null | [
"task_categories:question-answering",
"task_categories:text-generation",
"language:en",
"license:mit",
"finance",
"region:us"
] | 2023-11-26T03:33:30Z | 2023-09-15T11:09:29.000Z | 2023-09-15T11:09:29 | ---
language:
- en
license: mit
task_categories:
- question-answering
- text-generation
dataset_info:
features:
- name: Question
dtype: string
- name: input
dtype: string
- name: Answer
dtype: string
- name: Source
dtype: int64
- name: Date
dtype: timestamp[ns]
- name: Type
dtype: int64
- name: Prompt
dtype: int64
- name: QuestionTokenNum
dtype: int64
- name: inputTokenNum
dtype: int64
- name: AnswerTokenNum
dtype: int64
- name: Source.1
dtype: string
splits:
- name: train
num_bytes: 3284924
num_examples: 536
download_size: 1073755
dataset_size: 3284924
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
tags:
- finance
---
# Dataset Card for "llama2_QA_Economics_230915"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.3672758936882019,
-0.15841400623321533,
0.3815072774887085,
0.41272133588790894,
-0.270018070936203,
0.02415269985795021,
0.4367218315601349,
-0.14394892752170563,
0.8359532356262207,
0.43422189354896545,
-0.6411253213882446,
-0.5779735445976257,
-0.31725671887397766,
-0.163729071617126... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
garcianacho/human_genome_csv | garcianacho | 2023-10-04T12:41:28Z | 25 | 1 | null | [
"task_categories:token-classification",
"license:apache-2.0",
"biology",
"genome",
"human genome",
"bioinformatics",
"region:us"
] | 2023-10-04T12:41:28Z | 2023-09-20T08:52:07.000Z | 2023-09-20T08:52:07 | ---
license: apache-2.0
task_categories:
- token-classification
tags:
- biology
- genome
- human genome
- bioinformatics
---
## Human Genome Dataset
Here is the human genome, ready to be used to train LLMs.
| [
-0.12472482025623322,
0.05889888107776642,
0.2031373381614685,
0.08040333539247513,
-0.28056657314300537,
0.16819019615650177,
0.14146023988723755,
0.12755776941776276,
0.3527611792087555,
0.8175176382064819,
-0.7246295213699341,
-0.5731893181800842,
-0.5362110137939453,
0.0114995678886771... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
dim/databricks_dolly_15k_ru | dim | 2023-09-20T15:51:37Z | 25 | 0 | null | [
"region:us"
] | 2023-09-20T15:51:37Z | 2023-09-20T15:51:24.000Z | 2023-09-20T15:51:24 | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: context
dtype: string
- name: response
dtype: string
- name: category
dtype: string
splits:
- name: train
num_bytes: 22121608
num_examples: 14914
download_size: 11365356
dataset_size: 22121608
---
# Dataset Card for "databricks_dolly_15k_ru"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.3310964107513428,
-0.3121681213378906,
-0.04217645525932312,
0.5864372849464417,
-0.2779165506362915,
0.07879135012626648,
0.6155732274055481,
0.018483737483620644,
0.7551019787788391,
0.34698134660720825,
-0.9906987547874451,
-0.6601528525352478,
-0.5301257371902466,
-0.036681104451417... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
dim/ficbook_prompts_best_10k | dim | 2023-09-25T17:36:47Z | 25 | 0 | null | [
"region:us"
] | 2023-09-25T17:36:47Z | 2023-09-22T20:56:20.000Z | 2023-09-22T20:56:20 | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: solution_short_llama2
dtype: string
- name: solution_full
dtype: string
splits:
- name: train
num_bytes: 268346552
num_examples: 10000
download_size: 138937080
dataset_size: 268346552
---
# Dataset Card for "ficbook_prompts_best_10k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.6843478083610535,
-0.17161744832992554,
0.14919555187225342,
0.4995169937610626,
-0.41511788964271545,
-0.057839535176754,
0.25436410307884216,
0.11264932155609131,
0.8856768012046814,
0.3972923755645752,
-0.7860279083251953,
-0.6681787967681885,
-0.5846225023269653,
0.00957676954567432... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
dim/azbyka_logic_ru | dim | 2023-09-23T21:17:31Z | 25 | 0 | null | [
"region:us"
] | 2023-09-23T21:17:31Z | 2023-09-23T21:17:29.000Z | 2023-09-23T21:17:29 | ---
dataset_info:
features:
- name: task
dtype: string
- name: solution
dtype: string
- name: link
dtype: string
- name: long_solution
dtype: string
splits:
- name: train
num_bytes: 205135
num_examples: 480
download_size: 96545
dataset_size: 205135
---
# Dataset Card for "azbyka_logic_ru"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.5631547570228577,
-0.34765636920928955,
0.21693038940429688,
0.16267147660255432,
-0.1977781355381012,
-0.1901472955942154,
0.1291624754667282,
-0.1902865469455719,
0.6443883180618286,
0.4637458622455597,
-1.171289086341858,
-0.8340064883232117,
-0.479034960269928,
-0.16314950585365295,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
dim/what_where_when_50k | dim | 2023-09-25T12:07:50Z | 25 | 0 | null | [
"region:us"
] | 2023-09-25T12:07:50Z | 2023-09-25T12:07:12.000Z | 2023-09-25T12:07:12 | ---
dataset_info:
features:
- name: question
dtype: string
- name: answer
dtype: string
- name: explanation
dtype: string
- name: url
dtype: string
- name: uuid
dtype: string
splits:
- name: train
num_bytes: 42224521.044228844
num_examples: 50000
download_size: 24272957
dataset_size: 42224521.044228844
---
# Dataset Card for "what_where_when_50k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.6833898425102234,
0.010189272463321686,
0.3022756576538086,
0.31308504939079285,
-0.02707047574222088,
-0.3230087161064148,
0.30784544348716736,
-0.14030233025550842,
0.7815027236938477,
0.45719486474990845,
-0.9066752791404724,
-0.9087677001953125,
-0.48383092880249023,
-0.307921528816... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
dim/ru_turbo_alpaca_evol_instruct | dim | 2023-09-25T13:19:49Z | 25 | 0 | null | [
"region:us"
] | 2023-09-25T13:19:49Z | 2023-09-25T13:19:36.000Z | 2023-09-25T13:19:36 | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: output
dtype: string
- name: iteration
dtype: uint32
splits:
- name: train
num_bytes: 105428021
num_examples: 47793
download_size: 50796845
dataset_size: 105428021
---
# Dataset Card for "ru_turbo_alpaca_evol_instruct"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.6589823961257935,
-0.40736421942710876,
0.03638969734311104,
0.3273433744907379,
-0.30809271335601807,
0.00808884110301733,
0.24847912788391113,
-0.24059367179870605,
1.0114853382110596,
0.2717133164405823,
-0.8934648633003235,
-0.5662990212440491,
-0.5122562050819397,
-0.15425559878349... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
dim/ru_turbo_saiga | dim | 2023-09-25T13:24:41Z | 25 | 0 | null | [
"region:us"
] | 2023-09-25T13:24:41Z | 2023-09-25T13:23:33.000Z | 2023-09-25T13:23:33 | ---
dataset_info:
features:
- name: messages
sequence:
- name: role
dtype: string
- name: content
dtype: string
- name: seed
dtype: string
- name: source
dtype: string
- name: model_name
dtype: string
splits:
- name: train
num_bytes: 87316730
num_examples: 37731
download_size: 39768554
dataset_size: 87316730
---
# Dataset Card for "ru_turbo_saiga"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.5416882634162903,
-0.18832749128341675,
0.16324834525585175,
0.37071844935417175,
-0.1668291836977005,
0.033874958753585815,
0.04133804887533188,
0.0069899726659059525,
0.8151480555534363,
0.10983647406101227,
-1.0033420324325562,
-0.5807607173919678,
-0.5132738351821899,
-0.17903378605... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
dim/grade_school_math_instructions_ru | dim | 2023-09-25T13:56:39Z | 25 | 0 | null | [
"region:us"
] | 2023-09-25T13:56:39Z | 2023-09-25T13:56:36.000Z | 2023-09-25T13:56:36 | ---
dataset_info:
features:
- name: question
dtype: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 6815618
num_examples: 7473
download_size: 3284007
dataset_size: 6815618
---
# Dataset Card for "grade_school_math_instructions_ru"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.33244842290878296,
-0.4382697641849518,
0.1704140603542328,
0.3979783356189728,
0.03457269445061684,
-0.11397694796323776,
0.34168457984924316,
0.35449403524398804,
0.4548552632331848,
0.23561197519302368,
-1.0444777011871338,
-0.8775390982627869,
-0.4536243677139282,
-0.401432871818542... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
SEACrowd/nerp | SEACrowd | 2023-09-26T12:34:00Z | 25 | 0 | null | [
"language:ind",
"named-entity-recognition",
"region:us"
] | 2023-09-26T12:34:00Z | 2023-09-26T11:41:47.000Z | 2023-09-26T11:41:47 | ---
tags:
- named-entity-recognition
language:
- ind
---
# nerp
The NERP dataset (Hoesen and Purwarianti, 2018) contains texts collected from several Indonesian news websites, annotated with five labels:
- PER (name of person)
- LOC (name of location)
- IND (name of product or brand)
- EVT (name of the event)
- FNB (name of food and beverage)
NERP makes use of the IOB chunking format, just like the TermA dataset.
## Dataset Usage
Run `pip install nusacrowd` before loading the dataset through HuggingFace's `load_dataset`.
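A minimal loading sketch (the Hub id `SEACrowd/nerp` is assumed from this repository's name; some NusaCrowd datasets may instead require their dedicated loader):
```python
# Minimal sketch: inspect the splits and features of the nerp dataset.
# The repository id "SEACrowd/nerp" is an assumption based on this repo.
from datasets import load_dataset

nerp = load_dataset("SEACrowd/nerp")
print(nerp)  # shows available splits and features
```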
## Citation
```
@inproceedings{hoesen2018investigating,
title={Investigating bi-lstm and crf with pos tag embedding for indonesian named entity tagger},
author={Hoesen, Devin and Purwarianti, Ayu},
booktitle={2018 International Conference on Asian Language Processing (IALP)},
pages={35--38},
year={2018},
organization={IEEE}
}
```
## License
Creative Common Attribution Share-Alike 4.0 International
## Homepage
[https://github.com/IndoNLP/indonlu](https://github.com/IndoNLP/indonlu)
### NusaCatalogue
For easy indexing and metadata: [https://indonlp.github.io/nusa-catalogue](https://indonlp.github.io/nusa-catalogue) | [
-0.5765810608863831,
-0.668402910232544,
-0.0546773225069046,
0.49516990780830383,
-0.4841289520263672,
-0.17526990175247192,
-0.010179002769291401,
-0.5343835949897766,
0.6882467269897461,
0.7357979416847229,
-0.0588080920279026,
-0.41273295879364014,
-0.45885026454925537,
0.5764624476432... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
MLNTeam-Unical/NFT-70M_transactions | MLNTeam-Unical | 2023-10-03T07:15:49Z | 25 | 3 | null | [
"task_categories:time-series-forecasting",
"task_categories:text-classification",
"task_categories:feature-extraction",
"task_categories:text-generation",
"task_categories:zero-shot-classification",
"task_categories:text2text-generation",
"task_categories:sentence-similarity",
"task_categories:image-c... | 2023-10-03T07:15:49Z | 2023-09-26T15:48:21.000Z | 2023-09-26T15:48:21 | ---
dataset_info:
features:
- name: num_sales
dtype: int64
- name: fees_seller
dtype: float64
- name: fees_opensea
dtype: float64
- name: fees_seller_usd
dtype: float64
- name: fees_opensea_usd
dtype: float64
- name: tx_timestamp
dtype: string
- name: price
dtype: float64
- name: gain
dtype: float64
- name: usd_price
dtype: float64
- name: usd_gain
dtype: float64
- name: token
dtype: string
- name: to_eth
dtype: float64
- name: to_usd
dtype: float64
- name: created_date
dtype: string
- name: chain
dtype: string
- name: token_type
dtype: string
- name: asset_contract_type
dtype: string
- name: asset_type
dtype: string
- name: payout_collection_address
dtype: int64
- name: from_account
dtype: int64
- name: to_account
dtype: int64
- name: seller_account
dtype: int64
- name: winner_account
dtype: int64
- name: contract_address
dtype: int64
- name: nft_image
dtype: int64
- name: collection_image
dtype: int64
- name: token_id
dtype: int64
- name: nft_name
dtype: int64
- name: nft_description
dtype: int64
- name: collection_name
dtype: int64
- name: collection_description
dtype: int64
splits:
- name: train
num_bytes: 21291348001
num_examples: 70972143
download_size: 6633664673
dataset_size: 21291348001
size_categories:
- 10M<n<100M
license: cc-by-nc-4.0
task_categories:
- time-series-forecasting
- text-classification
- feature-extraction
- text-generation
- zero-shot-classification
- text2text-generation
- sentence-similarity
- image-classification
- image-to-text
- text-to-image
- text-retrieval
language:
- en
tags:
- Non-fungible Tokens
- Crypto
- Web3
- Art
- Multimodal Learning
pretty_name: NFT-70M_transactions
---
# Dataset Card for "NFT-70M_transactions"
## Dataset summary
The *NFT-70M_transactions* dataset is the largest and most up-to-date collection of Non-Fungible Tokens (NFT) transactions between 2021 and 2023 sourced from [OpenSea](https://opensea.io), the leading trading platform in the Web3 ecosystem.
With more than 70M transactions enriched with metadata, this dataset is conceived to support a wide range of tasks, ranging from sequential and transactional data processing/analysis to graph-based modeling of the complex relationships between traders.
Besides, the availability of textual and image contents further amplifies the modeling capabilities and usage opportunities of this dataset, making it a unique and comprehensive multimodal source of information for delving into the NFT landscape.
This dataset can serve as a benchmark for various innovative and impactful tasks within the crypto landscape, such as projecting NFT prices or detecting fraudulent and wash trading activities.
Furthermore, the multimodal nature of the dataset fosters the development of classification models, as well as textual and visual generative models.
## Data anonymization
We point out that the collected NFT transactions and metadata from OpenSea are publicly distributed on the blockchain.
For our purposes of re-distribution, we are also committed to ensuring non-disclosure of information that might lead to identifying the NFT creators, in order to comply with privacy-preserving requirements and to avoid violating data protection regulations and property rights.
In this respect, we carried out three actions:
- Values of all variables describing non-sensitive information were kept in their original form;
- Values of all variables describing sensitive information were anonymized, in a one-way, non-revertible mode (a hashing sketch of this style of replacement is given below);
- URLs of image data and textual contents (i.e., NFT images and their descriptions) were replaced by identifiers to numerical vectors that represent an encrypted representation (i.e., embeddings) of the image/text contents obtained via neural network models. Such embeddings are eventually provided in place of their original image and text data,
and can be found in the [**NFT-70M_image**](https://huggingface.co/datasets/MLNTeam-Unical/NFT-70M_image) and [**NFT-70M_text**](https://huggingface.co/datasets/MLNTeam-Unical/NFT-70M_text) supplementary datasets, respectively.
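As an illustration of the one-way replacement described above (a sketch only; the actual hashing scheme, salt, and digest used by the authors are not documented here):
```python
# Illustrative sketch of one-way, non-revertible anonymization via hashing.
# The salt value and SHA-256 digest are assumptions, not the authors' method.
import hashlib

def anonymize(value: str, salt: str = "nft70m") -> str:
    # SHA-256 is one-way: the original address cannot be recovered, while
    # equal inputs still map to equal hash-codes, preserving joinability.
    return hashlib.sha256((salt + value).encode("utf-8")).hexdigest()

print(anonymize("0x0123456789abcdef"))  # hypothetical wallet address
```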
## Data Fields
| Variable | Type | Description | Processing | Notes |
|--------------------------|-------------|-----------------------------------------------------------------------------------------------------------|------------------|-----------------------------------|
| token_id | String | The id of the NFT — this value is unique within the same collection | Anonymized | Original values were replaced by hash-codes |
| num_sales | Integer | A progressive integer indicating the number of successful transactions involving the NFT up to the current timestamp (cf. *tx_timestamp*) | Original | Not sensitive variable |
| nft_name | Vector ID | The name of the NFT | Anonymized | Original values were encrypted via neural textual embedding |
| nft_description | Vector ID | The description of the NFT as provided by the creator | Anonymized | Original values were encrypted via neural textual embedding |
| nft_image | Vector ID | The ID for accessing the NFT image vector | Anonymized | Original values were encrypted via neural visual embedding |
| collection_name | Vector ID | The ID for accessing the Collection name vector | Anonymized | Original values were encrypted via neural textual embedding |
| collection_description | Vector ID | The ID for accessing the Collection description vector | Anonymized | Original values were encrypted via neural textual embedding |
| collection_image | Vector ID | The ID for accessing the Collection image vector | Anonymized | Original values were encrypted via neural visual embedding |
| fees_seller | Float | The absolute amount of fees the seller has gained from this transaction expressed in *token* | Original | Not sensitive variable |
| fees_opensea | Float | The absolute amount of fees OpenSea has gained from this transaction expressed in *token* | Original | Not sensitive variable |
| fees_seller_usd | Float | The absolute amount of fees the seller has gained from this transaction expressed in US dollars (USD) | Original | Not sensitive variable |
| fees_opensea_usd | Float | The absolute amount of fees OpenSea has gained from this transaction expressed in US dollars (USD) | Original | Not sensitive variable |
| payout_collection_address| String | The wallet address where seller fees are deposited | Anonymized | Original values were replaced by hash-codes |
| tx_timestamp | String | Timestamp of the transaction expressed in yyyy-mm-ddTHH:MM:SS | Original | Not sensitive variable |
| price | Float | The price of the transaction expressed in token | Original | Not sensitive variable |
| gain | Float | The gain after fees (i.e., gain = price - fees_opensea * price - fees_seller * price) | Original | Not sensitive variable |
| usd_price | Float | The price of the transaction expressed in US dollars (USD) | Original | Not sensitive variable |
| usd_gain | Float | The difference between the price and the fees expressed in US dollars (USD) | Original | Not sensitive variable |
| token | Categorical | The token type used to pay the transaction | Original | Not sensitive variable |
| to_eth | Float | The conversion rate to convert tokens into Ethereum at the current timestamp, such that eth = price * to_eth | Original | Not sensitive variable |
| to_usd | Float | The conversion rate to convert tokens into US dollars (USD) at the current timestamp, such that usd = price * to_usd | Original | Not sensitive variable |
| from_account | String | The address that sends the payment (i.e., winner/buyer) | Anonymized | Original values were replaced by hash-codes |
| to_account | String | The address that receives the payment (it often corresponds to the contract linked to the asset) | Anonymized | Original values were replaced by hash-codes |
| seller_account | String | The address of the NFT seller | Anonymized | Original values were replaced by hash-codes |
| winner_account | String | The address of the NFT buyer | Anonymized | Original values were replaced by hash-codes |
| contract_address | String | The contract address on the blockchain | Anonymized | Original values were replaced by hash-codes |
| created_date | Timestamp | The date of creation of the contract | Original | Not sensitive variable |
| chain | Categorical | The blockchain where the transaction occurs | Original | Not sensitive variable |
| token_type | Categorical | The schema of the token, i.e., ERC721 or ERC1155 | Original | Not sensitive variable |
| asset_contract_type | Categorical | The asset typology, i.e., non-fungible or semi-fungible | Original | Not sensitive variable |
| asset_type | Categorical | Whether the asset was involved in a simple or bundle transaction | Original | Not sensitive variable |
## How to use
Data provided within this repository can be straightforwardly loaded via the *datasets* library as follows:
```python
from datasets import load_dataset
dataset = load_dataset("MLNTeam-Unical/NFT-70M_transactions")
```
Complementary data involving textual and visual embeddings can be integrated as follows:
```python
from datasets import load_dataset
import numpy as np
transactions_dataset=load_dataset("MLNTeam-Unical/NFT-70M_transactions")
image_dataset=load_dataset("MLNTeam-Unical/NFT-70M_image")
text_dataset=load_dataset("MLNTeam-Unical/NFT-70M_text")
# Mapping from image_id to the row_index within the image dataset
image_id2row_index={int(id):k for k,id in enumerate(image_dataset["train"]["id"])}
# Mapping from text_id to row_index within the text dataset
text_id2row_index={int(id):k for k,id in enumerate(text_dataset["train"]["id"])}
def get_image_embedding(image_id, image_id2row_index, image_dataset):
    # If the mapping contains the image, the embedding exists
    idx_emb = image_id2row_index.get(int(image_id), None)
    if idx_emb is not None:  # explicit check: row index 0 is a valid hit
        # If the embedding exists, return it
        return np.array(image_dataset["train"].select([idx_emb])["emb"][0])
    else:
        return None

def get_text_embedding(text_id, text_id2row_index, text_dataset):
    # If the mapping contains the text, the embedding exists
    idx_emb = text_id2row_index.get(int(text_id), None)
    if idx_emb is not None:  # explicit check: row index 0 is a valid hit
        # If the embedding exists, return it
        return np.array(text_dataset["train"].select([idx_emb])["emb"][0])
    else:
        return None
### USAGE EXAMPLE ###
# Select transaction_id
transaction_id=120
# Get the image_id (e.g., collection_image or nft_image)
id_image=transactions_dataset["train"].select([transaction_id])["collection_image"][0]
# Get the image
image_embedding=get_image_embedding(id_image,image_id2row_index,image_dataset)
# Get the text_id
id_text=transactions_dataset["train"].select([transaction_id])["collection_description"][0]
# Get the text
text_embedding=get_text_embedding(id_text,text_id2row_index,text_dataset)
```
## Ethical use of data and informed consent
This data repository is made available for research and informational purposes only.
Any finding that might be drawn from the data provided within this repository should be intended to support decision-making regarding actions made on NFTs, and not to replace human specialists.
*The authors are not responsible for any issues related to trading failures based on the data provided within this repository.*
## Terms of Usage
Please cite the following papers in any research product whose findings are based on the data provided within this repository:
- L. La Cava, D. Costa, A. Tagarelli: SONAR: Web-based Tool for Multimodal Exploration of Non-Fungible Token Inspiration Networks. In: Proc. ACM SIGIR 2023. Taipei, Taiwan, July 23-27 2023. DOI: https://doi.org/10.1145/3539618.3591821
- L. La Cava, D. Costa, A. Tagarelli: Visually Wired NFTs: Exploring the Role of Inspiration in Non-Fungible Tokens. CoRR abs/2303.17031 (2023). DOI: https://doi.org/10.48550/arXiv.2303.17031
- D. Costa, L. La Cava, A. Tagarelli: Show me your NFT and I tell you how it will perform: Multimodal representation learning for NFT selling price prediction. In: Proc. ACM WebConf 2023, pp. 1875-1885. Austin, TX, USA, 30 April 2023 – 4 May 2023. DOI: https://doi.org/10.1145/3543507.3583520
Data within this repository were fetched using the REST APIs provided by OpenSea. You should also acknowledge the [OpenSea API](https://docs.opensea.io/reference/api-overview).
## Liability statement
The authors hereby declare that they are not responsible for any harmful or objectionable content that may be contained within the data provided within this repository.
Users of the dataset are expected to exercise due diligence and responsibility when using the data, including but not limited to:
(i) Content Review: Users should review the dataset's contents carefully and assess its suitability for their intended purposes; (ii) Compliance: Users are responsible for ensuring that their use of the dataset complies with all applicable laws, regulations, and ethical standards;
(iii) Data Processing: Users may need to apply data preprocessing, filtering, or other techniques to remove or address any objectionable or harmful content as needed.
The authors of this dataset disclaim any liability for the accuracy, completeness, or suitability of the data and shall not be held responsible for any consequences resulting from the use or misuse of the dataset.
*By accessing and using this dataset, users acknowledge and accept this disclaimer.* | [
-0.5180361270904541,
-0.8510618209838867,
0.13852794468402863,
0.04377643018960953,
-0.46219560503959656,
0.053329918533563614,
0.08845025300979614,
-0.8428255319595337,
0.5959494113922119,
0.7598888874053955,
-0.5655026435852051,
-0.722213864326477,
-0.6558566093444824,
0.0519787780940532... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
sungile/bedroom_left_vs_right | sungile | 2023-09-27T21:08:42Z | 25 | 0 | null | [
"region:us"
] | 2023-09-27T21:08:42Z | 2023-09-27T19:56:21.000Z | 2023-09-27T19:56:21 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 19193302.0
num_examples: 20
download_size: 19194928
dataset_size: 19193302.0
---
# Dataset Card for "bedroom_left_vs_right"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.5756745934486389,
-0.4725876450538635,
0.08709248900413513,
0.3156999945640564,
-0.3066461980342865,
-0.1609710156917572,
0.07942953705787659,
0.08925563842058182,
0.9818164110183716,
0.5126147866249084,
-0.8551360368728638,
-0.7794802188873291,
-0.5155914425849915,
-0.2619846761226654,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
jhuang14/Labeled_Data | jhuang14 | 2023-09-28T08:32:36Z | 25 | 0 | null | [
"region:us"
] | 2023-09-28T08:32:36Z | 2023-09-28T08:32:09.000Z | 2023-09-28T08:32:09 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': airplane
'1': bustruck
'2': other
'3': rail
splits:
- name: train
num_bytes: 1652124.1515151516
num_examples: 92
- name: test
num_bytes: 718314.8484848485
num_examples: 40
download_size: 2372957
dataset_size: 2370439.0
---
# Dataset Card for "Labeled_Data"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.5724276900291443,
-0.366569459438324,
0.1489313691854477,
0.3082379996776581,
-0.14904017746448517,
-0.0012846142053604126,
0.2179785668849945,
-0.31846883893013,
0.805933952331543,
0.5721100568771362,
-0.7259065508842468,
-0.9742124080657959,
-0.6818980574607849,
-0.18985725939273834,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
muhammadravi251001/indonesian-nli-and-qa | muhammadravi251001 | 2023-10-28T06:08:59Z | 25 | 0 | null | [
"license:mit",
"region:us"
] | 2023-10-28T06:08:59Z | 2023-10-06T14:08:25.000Z | 2023-10-06T14:08:25 | ---
license: mit
---
| [
-0.12853392958641052,
-0.18616779148578644,
0.6529127955436707,
0.49436280131340027,
-0.19319361448287964,
0.23607419431209564,
0.36072003841400146,
0.050563063472509384,
0.579365611076355,
0.7400140762329102,
-0.6508104205131531,
-0.23783954977989197,
-0.7102249264717102,
-0.0478260256350... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
hippocrates/DDI2013_train | hippocrates | 2023-10-12T19:18:48Z | 25 | 0 | null | [
"region:us"
] | 2023-10-12T19:18:48Z | 2023-10-12T19:18:42.000Z | 2023-10-12T19:18:42 | ---
dataset_info:
features:
- name: id
dtype: string
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 6078356
num_examples: 3000
- name: valid
num_bytes: 6758153
num_examples: 3000
- name: test
num_bytes: 6233436
num_examples: 3000
download_size: 3401816
dataset_size: 19069945
---
# Dataset Card for "DDI2013_train"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.6865321397781372,
-0.04179998114705086,
0.35818907618522644,
0.48532187938690186,
-0.07829446345567703,
-0.14214518666267395,
0.4862855076789856,
-0.02237473614513874,
0.6772492527961731,
0.19279173016548157,
-1.1060043573379517,
-0.5286524891853333,
-0.6134939193725586,
-0.177617147564... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
konfuzio/funsd_plus | konfuzio | 2023-10-16T09:33:20Z | 25 | 3 | null | [
"size_categories:1K<n<10K",
"language:en",
"license:other",
"funsd",
"region:us"
] | 2023-10-16T09:33:20Z | 2023-10-14T12:31:04.000Z | 2023-10-14T12:31:04 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: image
dtype: image
- name: words
sequence: string
- name: bboxes
sequence:
sequence: float64
- name: labels
sequence: int64
- name: grouped_words
sequence:
sequence: int64
- name: linked_groups
sequence:
sequence: int64
splits:
- name: train
num_bytes: 183288640.158
num_examples: 1026
- name: test
num_bytes: 20706650
num_examples: 113
download_size: 195177090
dataset_size: 203995290.158
extra_gated_prompt: >-
You agree to not attempt to determine the identity of individuals in this
dataset.
You agree to the terms and conditions of the [FUNSD+ license](https://huggingface.co/datasets/konfuzio/funsd_plus/blob/main/LICENSE).
extra_gated_fields:
Name: text
Company: text
Country: text
Email: text
I agree to the terms and conditions of the FUNSD+ license: checkbox
license: other
language:
- en
pretty_name: FUNSD+
size_categories:
- 1K<n<10K
tags:
- funsd
---
# Dataset Card for "funsd_plus"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Homepage](#homepage)
- [Point of Contact](#point-of-contact)
- [Languages](#languages)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** [FUNSD+ | A larger and revised FUNSD dataset by Konfuzio](https://konfuzio.com/en/funsd-plus/)
- **Point of Contact:** [mohamed.dhiab@konfuzio.com](mailto:mohamed.dhiab@konfuzio.com)
- **Languages:** `English`
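A minimal loading sketch (this repository is gated, so requesting access on the Hub and an authenticated session via `huggingface-cli login` are assumed):
```python
# Minimal sketch for the gated FUNSD+ repository; field names are taken
# from the dataset_info above (image, words, bboxes, labels, ...).
from datasets import load_dataset

funsd_plus = load_dataset("konfuzio/funsd_plus")
sample = funsd_plus["train"][0]
print(sample["words"][:5], sample["bboxes"][:5], sample["labels"][:5])
```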
## Additional Information
### Licensing Information
[FUNSD+ license](https://huggingface.co/datasets/konfuzio/funsd_plus/blob/main/LICENSE)
### Citation Information
```
@misc{zagami_helm_2022,
title = {FUNSD+: A larger and revised FUNSD dataset},
author = {Zagami, Davide and Helm, Christopher},
year = 2022,
month = {Oct},
journal = {FUNSD+ | A larger and revised FUNSD dataset},
publisher = {Helm & Nagel GmbH},
url = {http://konfuzio.com/funsd-plus/}
}
``` | [
-0.5968148112297058,
0.005325367674231529,
0.20508475601673126,
0.24565114080905914,
-0.618206799030304,
-0.1544322371482849,
-0.08903156965970993,
-0.4126255512237549,
0.8111019730567932,
0.44793134927749634,
-1.1436045169830322,
-0.8780670762062073,
-0.24201592803001404,
-0.0459365062415... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
pedrosousa/IntentObjectsJSON | pedrosousa | 2023-10-16T17:47:32Z | 25 | 0 | null | [
"task_categories:text-generation",
"license:unknown",
"region:us"
] | 2023-10-16T17:47:32Z | 2023-10-16T16:53:08.000Z | 2023-10-16T16:53:08 | ---
license: unknown
task_categories:
- text-generation
--- | [
-0.12853392958641052,
-0.18616779148578644,
0.6529127955436707,
0.49436280131340027,
-0.19319361448287964,
0.23607419431209564,
0.36072003841400146,
0.050563063472509384,
0.579365611076355,
0.7400140762329102,
-0.6508104205131531,
-0.23783954977989197,
-0.7102249264717102,
-0.0478260256350... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
liyucheng/mmlu_test | liyucheng | 2023-10-16T23:28:37Z | 25 | 0 | null | [
"region:us"
] | 2023-10-16T23:28:37Z | 2023-10-16T23:28:24.000Z | 2023-10-16T23:28:24 | ---
dataset_info:
features:
- name: question
dtype: string
- name: answer
dtype: string
- name: A
dtype: string
- name: B
dtype: string
- name: C
dtype: string
- name: D
dtype: string
- name: id
dtype: string
- name: in-context examples
dtype: string
- name: testing input
dtype: string
- name: prompt
dtype: string
- name: task
dtype: string
splits:
- name: train
num_bytes: 90455312
num_examples: 13987
download_size: 14673948
dataset_size: 90455312
---
# Dataset Card for "mmlu_all_test"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.579383134841919,
-0.5748886466026306,
0.15682442486286163,
0.16521573066711426,
-0.09186390787363052,
-0.13099510967731476,
0.4419226348400116,
0.02722162753343582,
0.938254177570343,
0.21673396229743958,
-0.9132511019706726,
-0.6740312576293945,
-0.5482617616653442,
-0.0284374300390481... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
oscarlaird/miniF2f_valid_hf_dataset | oscarlaird | 2023-10-24T14:54:33Z | 25 | 0 | null | [
"region:us"
] | 2023-10-24T14:54:33Z | 2023-10-20T19:11:03.000Z | 2023-10-20T19:11:03 | ---
dataset_info:
features:
- name: informal_statement
dtype: string
- name: formal_statement
dtype: string
splits:
- name: train
num_bytes: 69374
num_examples: 244
download_size: 0
dataset_size: 69374
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "miniF2f_valid_hf_dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.4578421115875244,
-0.380906879901886,
0.1817561388015747,
0.2772436738014221,
-0.21432042121887207,
-0.11859976500272751,
0.2736820876598358,
-0.07508005946874619,
0.578948974609375,
0.33312198519706726,
-0.7882440686225891,
-0.5156437754631042,
-0.5659006237983704,
-0.06357499212026596... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Rewcifer/radio-llama2-resp_tag_90pct | Rewcifer | 2023-10-21T01:46:59Z | 25 | 0 | null | [
"region:us"
] | 2023-10-21T01:46:59Z | 2023-10-21T01:46:42.000Z | 2023-10-21T01:46:42 | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 1109388970
num_examples: 222141
download_size: 255573571
dataset_size: 1109388970
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "radio-llama2-resp_tag_90pct"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.5873321294784546,
0.15921629965305328,
0.32861030101776123,
0.44631463289260864,
-0.4112500250339508,
0.09189687669277191,
0.1041947603225708,
-0.20040103793144226,
0.8766788840293884,
0.27471327781677246,
-0.8900047540664673,
-0.6325427293777466,
-0.49371272325515747,
-0.14182305335998... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
UNCANNY69/Hindi_Trans | UNCANNY69 | 2023-10-23T17:03:49Z | 25 | 0 | null | [
"license:mit",
"region:us"
] | 2023-10-23T17:03:49Z | 2023-10-23T16:57:12.000Z | 2023-10-23T16:57:12 | ---
license: mit
---
| [
-0.12853392958641052,
-0.18616779148578644,
0.6529127955436707,
0.49436280131340027,
-0.19319361448287964,
0.23607419431209564,
0.36072003841400146,
0.050563063472509384,
0.579365611076355,
0.7400140762329102,
-0.6508104205131531,
-0.23783954977989197,
-0.7102249264717102,
-0.0478260256350... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
kardosdrur/opensubtitles-no-da | kardosdrur | 2023-10-26T07:09:53Z | 25 | 0 | null | [
"license:mit",
"region:us"
] | 2023-10-26T07:09:53Z | 2023-10-25T10:46:28.000Z | 2023-10-25T10:46:28 | ---
license: mit
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: link_id
dtype: string
- name: da
dtype: string
- name: 'no'
dtype: string
- name: overlap
dtype: float64
splits:
- name: train
num_bytes: 270499727.08648384
num_examples: 1772983
- name: test
num_bytes: 67624969.91351616
num_examples: 443246
download_size: 201396375
dataset_size: 338124697.0
---
# OpenSubtitles Danish-Norwegian
Aligned Danish-Norwegian sentence pairs from OpenSubtitles, filtered with heuristics.
The source code for producing the dataset is included in the repository.
The dataset was created to aid training sentence transformers in the Danish Foundation Models project.
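A minimal usage sketch (field names come from the metadata above; the 0.9 overlap threshold is an illustrative assumption, not a project default):
```python
# Minimal sketch: load the aligned Danish-Norwegian pairs and keep only
# alignments with a high heuristic overlap score.
from datasets import load_dataset

pairs = load_dataset("kardosdrur/opensubtitles-no-da", split="train")
good = pairs.filter(lambda ex: ex["overlap"] >= 0.9)
print(good[0]["da"], "||", good[0]["no"])
```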
| [
-0.4017617702484131,
-0.42237940430641174,
0.42709556221961975,
0.30360960960388184,
-0.45599982142448425,
-0.06230378523468971,
-0.28670910000801086,
-0.20313102006912231,
-0.057906728237867355,
1.0039829015731812,
-0.6320287585258484,
-0.5582901239395142,
-0.30547958612442017,
0.21649043... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
fruk19/ptvn_sum_cls | fruk19 | 2023-10-31T10:55:17Z | 25 | 0 | null | [
"region:us"
] | 2023-10-31T10:55:17Z | 2023-10-26T09:51:45.000Z | 2023-10-26T09:51:45 | ---
dataset_info:
features:
- name: image
dtype: image
- name: ground_truth
dtype: string
splits:
- name: train
num_bytes: 118832924.0
num_examples: 307
- name: test
num_bytes: 45724934.0
num_examples: 115
download_size: 152076344
dataset_size: 164557858.0
---
# Dataset Card for "ptvn_sum_cls"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.6201252341270447,
0.05042015761137009,
0.047828659415245056,
0.39885246753692627,
-0.5679477453231812,
-0.18225927650928497,
0.1854085773229599,
0.31552064418792725,
0.8373042941093445,
0.6716346740722656,
-0.6730214357376099,
-0.7207738161087036,
-0.6363239288330078,
-0.108715169131755... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
sanak/IDD | sanak | 2023-10-28T15:58:56Z | 25 | 0 | null | [
"license:apache-2.0",
"region:us"
] | 2023-10-28T15:58:56Z | 2023-10-28T10:00:52.000Z | 2023-10-28T10:00:52 | ---
license: apache-2.0
---
| [
-0.12853392958641052,
-0.18616779148578644,
0.6529127955436707,
0.49436280131340027,
-0.19319361448287964,
0.23607419431209564,
0.36072003841400146,
0.050563063472509384,
0.579365611076355,
0.7400140762329102,
-0.6508104205131531,
-0.23783954977989197,
-0.7102249264717102,
-0.0478260256350... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Harsh-7300/english_to_french | Harsh-7300 | 2023-11-09T14:44:33Z | 25 | 0 | null | [
"task_categories:translation",
"size_categories:1K<n<10K",
"language:en",
"language:fr",
"license:mit",
"legal",
"region:us"
] | 2023-11-09T14:44:33Z | 2023-10-28T10:44:49.000Z | 2023-10-28T10:44:49 | ---
license: mit
dataset_card: H@rsh7300
language:
- en
- fr
task_categories:
- translation
pretty_name: dataset3
size_categories:
- 1K<n<10K
tags:
- legal
---
# Dataset Card for Dataset Name
<!-- Provide a quick summary of the dataset. -->
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] | [
-0.5322356224060059,
-0.5534716844558716,
0.1290130317211151,
0.23470577597618103,
-0.39626216888427734,
-0.11762470006942749,
-0.03545305132865906,
-0.6389272212982178,
0.5699822306632996,
0.7838326692581177,
-0.7834625840187073,
-0.9173274040222168,
-0.55633145570755,
0.13078093528747559... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
verayang/plainscree | verayang | 2023-10-29T22:07:02Z | 25 | 0 | null | [
"region:us"
] | 2023-10-29T22:07:02Z | 2023-10-29T20:14:20.000Z | 2023-10-29T20:14:20 | ---
dataset_info:
features:
- name: audio_id
dtype: int64
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: cree_transcription
dtype: string
- name: english_transcription
dtype: string
- name: gender
dtype: string
splits:
- name: train
num_bytes: 22116992.0
num_examples: 64
download_size: 22072728
dataset_size: 22116992.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "plainscree"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.5980300903320312,
-0.2307479828596115,
0.23050178587436676,
0.40962833166122437,
-0.2547568380832672,
-0.17442801594734192,
0.17965787649154663,
-0.12696997821331024,
0.9503687620162964,
0.5879466533660889,
-0.9993274807929993,
-0.9371020197868347,
-0.9844658970832825,
-0.41179651021957... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
wisenut-nlp-team/FiD_aihub_books | wisenut-nlp-team | 2023-10-30T04:59:27Z | 25 | 0 | null | [
"region:us"
] | 2023-10-30T04:59:27Z | 2023-10-30T00:12:11.000Z | 2023-10-30T00:12:11 | ---
dataset_info:
features:
- name: id
dtype: string
- name: title
dtype: string
- name: question
dtype: string
- name: context
dtype: string
- name: answer
dtype: string
- name: similar_contexts
sequence: string
splits:
- name: train
num_bytes: 11133875890
num_examples: 900000
- name: validation
num_bytes: 613048834
num_examples: 50000
download_size: 4288972879
dataset_size: 11746924724
---
# Dataset Card for "FiD_aihub_books"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.6609922051429749,
-0.3303243815898895,
-0.12108339369297028,
-0.03314977139234543,
-0.1986324042081833,
0.04884498566389084,
0.4415399134159088,
-0.11986761540174484,
0.6657203435897827,
0.5927218198776245,
-0.7512363195419312,
-0.7566882967948914,
-0.4774186611175537,
-0.21711760759353... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Geonmo/laion-rvs-fashion-caption-only | Geonmo | 2023-10-31T01:08:26Z | 25 | 1 | null | [
"region:us"
] | 2023-10-31T01:08:26Z | 2023-10-30T10:49:40.000Z | 2023-10-30T10:49:40 | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 64727598
num_examples: 1436088
download_size: 39909300
dataset_size: 64727598
---
# Dataset Card for "laion-rvs-fashion-caption-only"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.29670169949531555,
-0.09977427870035172,
0.24549929797649384,
0.4575851261615753,
-0.4765562415122986,
0.017917074263095856,
0.22996783256530762,
0.0653533935546875,
0.8915566802024841,
0.9235780239105225,
-1.042828917503357,
-0.8492395281791687,
-0.42904049158096313,
-0.221510574221611... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
parksimon0808/prm800k-llama-generator | parksimon0808 | 2023-11-08T21:30:42Z | 25 | 0 | null | [
"region:us"
] | 2023-11-08T21:30:42Z | 2023-10-30T16:56:06.000Z | 2023-10-30T16:56:06 | ---
dataset_info:
features:
- name: texts
dtype: string
- name: input_ids
sequence: int32
- name: labels
sequence: int64
- name: answers
dtype: string
splits:
- name: train
num_bytes: 2469819413
num_examples: 657764
- name: test
num_bytes: 78271501
num_examples: 20419
download_size: 251440965
dataset_size: 2548090914
---
# Dataset Card for "prm800k-llama-v3"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.4353657364845276,
-0.03587157651782036,
0.35812968015670776,
0.45129209756851196,
-0.6262010931968689,
-0.03793436288833618,
0.5990263819694519,
-0.20188945531845093,
0.9483815431594849,
0.7465433478355408,
-0.7869554162025452,
-0.7553552985191345,
-0.6794081330299377,
0.018548782914876... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
yangwang825/qnli | yangwang825 | 2023-11-03T17:35:12Z | 25 | 0 | null | [
"region:us"
] | 2023-11-03T17:35:12Z | 2023-11-02T06:18:22.000Z | 2023-11-02T06:18:22 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622263669967651,
0.43461522459983826,
-0.52829909324646,
0.7012971639633179,
0.7915719747543335,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104475975036621,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
arthurmluz/wikilingua_data-wiki_1024_results | arthurmluz | 2023-11-13T19:28:28Z | 25 | 0 | null | [
"region:us"
] | 2023-11-13T19:28:28Z | 2023-11-03T04:28:06.000Z | 2023-11-03T04:28:06 | ---
dataset_info:
features:
- name: id
dtype: int64
- name: text
dtype: string
- name: summary
dtype: string
- name: gen_summary
dtype: string
- name: rouge
struct:
- name: rouge1
dtype: float64
- name: rouge2
dtype: float64
- name: rougeL
dtype: float64
- name: rougeLsum
dtype: float64
- name: bert
struct:
- name: f1
sequence: float64
- name: hashcode
dtype: string
- name: precision
sequence: float64
- name: recall
sequence: float64
- name: moverScore
dtype: float64
splits:
- name: validation
num_bytes: 21885909
num_examples: 8165
download_size: 12842290
dataset_size: 21885909
configs:
- config_name: default
data_files:
- split: validation
path: data/validation-*
---
# Dataset Card for "wikilingua_data-wiki_1024_results"
rouge= {'rouge1': 0.3547652574772463, 'rouge2': 0.1505956971978055, 'rougeL': 0.2785170891387953, 'rougeLsum': 0.2785170891387953}
bert= {'precision': 0.7906573472691691, 'recall': 0.7655439093188866, 'f1': 0.7771048831560097}
mover = 0.6245790568121278 | [
-0.34605488181114197,
-0.5706803202629089,
-0.007120346184819937,
-0.07329446822404861,
-0.3283190429210663,
-0.21509434282779694,
-0.3173049986362457,
-0.025369269773364067,
0.8279953002929688,
0.3354770243167877,
-0.47163763642311096,
-0.9459587335586548,
-0.6850409507751465,
0.017824748... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
roettger/eighteenth_century_french_novels | roettger | 2023-11-07T10:43:16Z | 25 | 0 | null | [
"task_categories:text-generation",
"size_categories:10M<n<100M",
"language:fr",
"license:cc-by-4.0",
"region:us"
] | 2023-11-07T10:43:16Z | 2023-11-03T10:49:39.000Z | 2023-11-03T10:49:39 | ---
license: cc-by-4.0
task_categories:
- text-generation
language:
- fr
pretty_name: Collection of Eighteenth-Century French Novels (1751-1800)
size_categories:
- 10M<n<100M
---
# General information
This dataset contains 12 million tokens of literary French prose (1751-1800) in plain text format, built within the project 'Mining and Modeling Text' (2019-2023) at Trier University.
For the dataset in XML/TEI see the [GitHub repository of the project](https://github.com/MiMoText/roman18/blob/master/README.md).
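A minimal loading sketch (assuming the default configuration of this Hub repository; the exact split and column layout of the plain-text files is not documented in this card):
```python
# Minimal sketch: load the plain-text novels and inspect the structure.
from datasets import load_dataset

novels = load_dataset("roettger/eighteenth_century_french_novels")
print(novels)
```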
# Collection de romans français du dix-huitième siècle (1751-1800) / Collection of Eighteenth-Century French Novels (1751-1800)
This collection of Eighteenth-Century French Novels contains 200 digital French texts of novels created or first published between 1751 and 1800. The collection is created in the context of [Mining and Modeling Text](https://www.mimotext.uni-trier.de/en) (2019-2023), a project which is located at the Trier Center for Digital Humanities ([TCDH](https://tcdh.uni-trier.de/en)) at Trier University.
## Corpus building
In the first step, about 40 novels were carefully created by double keying. Using this first group of novels, an OCR model was trained in cooperation with Christian Reul (University of Würzburg), who is one of the developers of OCR4all.
Applying this OCR model to additional scans provided, for instance, by Gallica (bnf.fr) and other sources (see metadata for details), we produced a second group of novels which were not yet available in full text (or only in low quality).
A third group of texts, based on existing full texts (from Wikisource and other sources), helped us reach 200 volumes.
## Balancing criteria
At the beginning, corpus composition depended primarily on pragmatic criteria. We then used additional metadata on the overall literary production to balance the corpus of full texts. A bibliography documenting the overall production of novels in the period is Angus Martin, Vivienne G. Mylne and Richard Frautschi, *Bibliographie du genre romanesque français 1751-1800*, 1977. We used this metadata to balance our full-text corpus with respect to gender, year of first publication, and narrative form, approximating the historical distribution of these parameters.
BGRF = abbreviation for *Bibliographie du genre romanesque français 1751-1800*, a source of bibliographic metadata we mined to extract publication years, narrative forms, authors, and more.
### Year of first publication per decades
The year of first publication according to BGRF data. We compared the overall novel production with our corpus data and added novels per year according to the known historical publication proportions. Shown here is an overview per decade. Please note that the last bar ('1800') contains data for only one year.

### Gender balance
Concerning gender, we used statements from Wikidata as well as a Python script filtering for gender-specific titles (Abbé, Marquis, etc.). In cases where names lacked a Wikidata match or a specific title, we employed the gender-guesser Python package to make gender predictions.

### Narrative form
Information on narrative form was extracted from the BGRF data (Mylne et al., 1977), supplemented by human evaluations conducted on the full texts.

For a more detailed documentation of our sampling and balancing strategy, see our [Jupyter Notebook](https://github.com/MiMoText/balance_novels/blob/main/balance_analysis_newStructure.ipynb).
## Metadata
There is a metadata file on the level of the full texts. The column names are explained in the next section.
# Data Fields
* filename: file name
* au-name: author name
* au-birth: birth date of author
* au-death: death date of author
* title: title of literary work
* au-gender: gender of author
* firsted-yr: first year of publication
* printSource-yr: year of publication of print source
* form: narrative form
* spelling: information in historical spelling
* data-capture: information on data capture
* token count: token count of text file
* vols_count: count of volumes ('tome')
* size: size according to Eltec scheme https://distantreading.github.io/Schema/eltec-1.html#TEI.size
* bgrf: unique identifier in 'Bibliographie du genre romanesque français, 1751-1800 (Martin / Mylne / Frautschi 1977)'
* author_wikidata: unique identifier of author on Wikidata
* author_MiMoText-ID: unique identifier of author on MiMoText: https://data.mimotext.uni-trier.de
* title_wikidata: unique identifier of title on Wikidata
* title_MiMoText-ID: unique identifier of title on MiMoText: https://data.mimotext.uni-trier.de
* lang: language of text file
* publisher: information on publisher
* distributor: information on distributor of file
* distribution_date: information on distribution date
* copyright_status: information on copyright status of the text file
* digitalSource_Title: title of digital text source
* digitalSource_Ref: reference of digital source
* digitalSource_Publisher: publisher of digital source
* digitalSource_Date: date of digital source
* printSource_title: title of print source
* printSource_author: author according to print source
* printSource_pubPlace: place of publication according to print source
* printSource_publisher: publisher of print source
* printSource_date: date of publication of print source
* resp_datacapture: person responsible for data capture
* resp_encoding: person responsible for encoding | [
-0.225617915391922,
-0.19001737236976624,
0.3373562693595886,
0.09744997322559357,
0.1004948616027832,
-0.22926007211208344,
0.0456627756357193,
-0.4204520285129547,
0.2783765494823456,
0.7018977999687195,
-0.6624730229377747,
-0.5025871992111206,
-0.48302072286605835,
0.5566279292106628,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
vietgpt/orca | vietgpt | 2023-11-07T09:25:43Z | 25 | 0 | null | [
"region:us"
] | 2023-11-07T09:25:43Z | 2023-11-03T11:23:25.000Z | 2023-11-03T11:23:25 | ---
dataset_info:
features:
- name: messages
list:
- name: content
dtype: string
- name: role
dtype: string
splits:
- name: train_2048
num_bytes: 569938554.1909978
num_examples: 343944
- name: train_1024
num_bytes: 467379309.0929899
num_examples: 282052
download_size: 643797649
dataset_size: 1630738726.3724823
configs:
- config_name: default
data_files:
- split: train_2048
path: data/train_2048-*
- split: train_1024
path: data/train_1024-*
---
# Dataset Card for "orca"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.5262315273284912,
-0.3707503080368042,
0.11692903935909271,
0.07602444291114807,
-0.32998159527778625,
-0.1140243411064148,
0.4249497652053833,
-0.5146366953849792,
1.0081220865249634,
0.6070916056632996,
-0.7940506935119629,
-0.8765143752098083,
-0.5690041184425354,
-0.2527183890342712... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Alchemy5/autodiagram | Alchemy5 | 2023-11-06T02:59:15Z | 25 | 0 | null | [
"region:us"
] | 2023-11-06T02:59:15Z | 2023-11-05T19:16:13.000Z | 2023-11-05T19:16:13 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
dataset_info:
features:
- name: images
dtype: image
- name: tex
dtype: string
splits:
- name: train
num_bytes: 260860.0
num_examples: 31
- name: validation
num_bytes: 70143.0
num_examples: 8
download_size: 230710
dataset_size: 331003.0
---
# Dataset Card for "autodiagram"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.7143932580947876,
-0.18331117928028107,
0.12220438569784164,
0.3232339322566986,
-0.1115354374051094,
0.11354468017816544,
0.37318986654281616,
-0.38652274012565613,
0.9604172110557556,
0.3227173388004303,
-0.7094199061393738,
-0.8051884770393372,
-0.6861370205879211,
-0.111831940710544... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
sinandraide/zero_shot_test | sinandraide | 2023-11-07T01:26:54Z | 25 | 0 | null | [
"region:us"
] | 2023-11-07T01:26:54Z | 2023-11-06T14:43:50.000Z | 2023-11-06T14:43:50 | ---
# For reference on dataset card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/datasetcard.md?plain=1
# Doc / guide: https://huggingface.co/docs/hub/datasets-cards
{}
---
# Dataset Card for Dataset Name
<!-- Provide a quick summary of the dataset. -->
This dataset is a test-result CSV file from the zero-shot prompting experiment.
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] | [
-0.3805913031101227,
-0.6287264227867126,
0.22328199446201324,
0.1558501422405243,
-0.27063655853271484,
-0.12693467736244202,
0.004082918632775545,
-0.4690016210079193,
0.4173089861869812,
0.73226398229599,
-0.7736156582832336,
-0.9526498317718506,
-0.46150797605514526,
0.1008207723498344... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
nguyenthanhdo/dolphin_mqa_details_vi | nguyenthanhdo | 2023-11-08T04:09:46Z | 25 | 0 | null | [
"region:us"
] | 2023-11-08T04:09:46Z | 2023-11-08T04:09:40.000Z | 2023-11-08T04:09:40 | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 28509274
num_examples: 15037
download_size: 12692096
dataset_size: 28509274
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "dolphin_mqa_details_vi"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.9381967186927795,
-0.22180351614952087,
0.16512779891490936,
0.08843358606100082,
-0.43322256207466125,
-0.1127808466553688,
0.6082541346549988,
-0.24953345954418182,
0.8775909543037415,
0.6689110398292542,
-1.0183049440383911,
-0.6604182124137878,
-0.5521093606948853,
-0.05003467574715... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
aeoebe/josun | aeoebe | 2023-11-08T05:52:43Z | 25 | 0 | null | [
"region:us"
] | 2023-11-08T05:52:43Z | 2023-11-08T05:47:14.000Z | 2023-11-08T05:47:14 | Entry not found | [
-0.32276472449302673,
-0.22568407654762268,
0.8622258901596069,
0.4346148371696472,
-0.5282984972000122,
0.7012965679168701,
0.7915717363357544,
0.07618629932403564,
0.7746022939682007,
0.2563222646713257,
-0.785281777381897,
-0.22573848068714142,
-0.9104482531547546,
0.5715669393539429,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
dim/litra_ru_essays | dim | 2023-11-09T01:29:47Z | 25 | 0 | null | [
"region:us"
] | 2023-11-09T01:29:47Z | 2023-11-09T01:28:49.000Z | 2023-11-09T01:28:49 | ---
dataset_info:
features:
- name: text
dtype: string
- name: title
dtype: string
- name: url
dtype: string
splits:
- name: train
num_bytes: 5247453
num_examples: 650
download_size: 2565584
dataset_size: 5247453
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "litra_ru_essays"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.3307599723339081,
-0.4163707494735718,
0.39483436942100525,
0.08723516762256622,
-0.10771235078573227,
0.05710555985569954,
0.16356509923934937,
-0.22519095242023468,
0.8025740385055542,
0.6474385261535645,
-0.7876936197280884,
-0.6787528395652771,
-0.2398752123117447,
-0.27191960811614... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
presencesw/dataset_2000_decompese_question_0 | presencesw | 2023-11-09T15:42:01Z | 25 | 0 | null | [
"region:us"
] | 2023-11-09T15:42:01Z | 2023-11-09T15:03:53.000Z | 2023-11-09T15:03:53 | ---
dataset_info:
features:
- name: entities
sequence: 'null'
- name: triplets
list:
- name: question
dtype: string
- name: answer
dtype: string
- name: complex_question
dtype: string
splits:
- name: train
num_bytes: 70060
num_examples: 199
download_size: 26888
dataset_size: 70060
---
# Dataset Card for "dataset_2000_decompese_question_0"
The dataset has the following structure:
```json
{
"complex_question": "Does Mercury help detect coronavirus?",
"entities": ["Mercury", "coronavirus"],
"triples": [
{
"question": "What is the name of the coronavirus?",
"evidence": "str...",
"answer": "The coronavirus is called COVID-19"
},
{
"question": "Does Mercury help detect COVID-19?",
"evidence": [
"",
"",
""
],
"answer": "Mercury does not help detect COVID-19"
},
{
"question": "What is mercury used to detect?",
"evidence": "str...",
"answer": "Mercury is used to detect the temperature of things"
},
{
"question": "What are some symtoms of coronavirus?",
"evidence": "str...",
"answer": "Common symtoms of coronavirus are fever..."
}
],
"answer": "Yes, ..."
}
```
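A minimal loading sketch (a sketch only — the repo id and field names are taken from this card's schema, and the `train` split name from the metadata above):
```python
from datasets import load_dataset

# hypothetical usage sketch based on this card's declared features
ds = load_dataset("presencesw/dataset_2000_decompese_question_0", split="train")

example = ds[0]
print(example["complex_question"])
for triplet in example["triplets"]:
    print(triplet["question"], "->", triplet["answer"])
```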
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.46362540125846863,
-0.7033609747886658,
0.364361971616745,
-0.07774527370929718,
-0.2706339657306671,
0.3237563669681549,
-0.07600069046020508,
0.0017837875057011843,
0.35300299525260925,
0.4756890833377838,
-0.4563937783241272,
-0.7956690192222595,
-0.6100563406944275,
0.29009586572647... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
pphuc25/cv13-train-vectorized | pphuc25 | 2023-11-11T17:13:49Z | 25 | 0 | null | [
"region:us"
] | 2023-11-11T17:13:49Z | 2023-11-11T09:54:29.000Z | 2023-11-11T09:54:29 | ---
dataset_info:
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: input_length
dtype: int64
- name: labels
sequence: int64
splits:
- name: train
num_bytes: 273530247.93
num_examples: 1671
download_size: 253957905
dataset_size: 273530247.93
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "cv13-train-vectorized"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.6447207927703857,
-0.0063471063040196896,
0.09592841565608978,
0.4830365777015686,
-0.2810983657836914,
-0.10753665864467621,
0.2406274974346161,
0.005333008244633675,
0.6168478727340698,
0.27264606952667236,
-0.864296555519104,
-0.764022946357727,
-0.6466522812843323,
-0.30015313625335... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
pphuc25/vivos-train-vectorized | pphuc25 | 2023-11-11T17:06:31Z | 25 | 0 | null | [
"region:us"
] | 2023-11-11T17:06:31Z | 2023-11-11T17:04:37.000Z | 2023-11-11T17:04:37 | ---
dataset_info:
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: input_length
dtype: int64
- name: labels
sequence: int64
splits:
- name: train
num_bytes: 1540103696.5
num_examples: 9964
download_size: 1511582741
dataset_size: 1540103696.5
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "vivos-train-vectorized"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.4337196350097656,
0.12846025824546814,
-0.11768998205661774,
0.48982781171798706,
-0.3697860836982727,
-0.034272901713848114,
0.3356251120567322,
-0.05205394700169563,
0.6856833696365356,
-0.04684660956263542,
-0.8106710910797119,
-0.6053497791290283,
-0.46247202157974243,
-0.1772777140... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
kuanhuggingface/amazon_tts_encodec | kuanhuggingface | 2023-11-14T05:14:43Z | 25 | 0 | null | [
"region:us"
] | 2023-11-14T05:14:43Z | 2023-11-14T05:13:37.000Z | 2023-11-14T05:13:37 | ---
dataset_info:
features:
- name: file_id
dtype: string
- name: instruction
dtype: string
- name: transcription
dtype: string
- name: src_encodec_0
sequence: int64
- name: src_encodec_1
sequence: int64
- name: src_encodec_2
sequence: int64
- name: src_encodec_3
sequence: int64
- name: src_encodec_4
sequence: int64
- name: src_encodec_5
sequence: int64
- name: src_encodec_6
sequence: int64
- name: src_encodec_7
sequence: int64
- name: tgt_encodec_0
sequence: int64
- name: tgt_encodec_1
sequence: int64
- name: tgt_encodec_2
sequence: int64
- name: tgt_encodec_3
sequence: int64
- name: tgt_encodec_4
sequence: int64
- name: tgt_encodec_5
sequence: int64
- name: tgt_encodec_6
sequence: int64
- name: tgt_encodec_7
sequence: int64
splits:
- name: train
num_bytes: 6057391940
num_examples: 171430
- name: validation
num_bytes: 351554634
num_examples: 10000
- name: test
num_bytes: 353040020
num_examples: 10000
download_size: 506194253
dataset_size: 6761986594
---
# Dataset Card for "amazon_tts_encodec"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.4231043756008148,
-0.25292789936065674,
0.17353595793247223,
0.3442254960536957,
-0.4160943925380707,
0.1887931525707245,
0.2100372314453125,
-0.1737787127494812,
0.7898866534233093,
0.5109387040138245,
-0.8176888823509216,
-0.9376554489135742,
-0.7127259373664856,
0.08609539270401001,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
zxvix/amazon_review_automotive_100 | zxvix | 2023-11-14T06:13:10Z | 25 | 0 | null | [
"region:us"
] | 2023-11-14T06:13:10Z | 2023-11-14T06:13:07.000Z | 2023-11-14T06:13:07 | ---
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
dataset_info:
features:
- name: text
dtype: string
splits:
- name: test
num_bytes: 45571
num_examples: 100
download_size: 32147
dataset_size: 45571
---
# Dataset Card for "amazon_review_automotive_100"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.779779314994812,
-0.16824403405189514,
0.25993674993515015,
0.37539607286453247,
-0.10526253283023834,
0.02140192501246929,
0.31012892723083496,
-0.20578424632549286,
0.44000887870788574,
0.29426077008247375,
-1.0491943359375,
-0.651381254196167,
-0.1955009400844574,
-0.2746156752109527... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
agil/similis | agil | 2023-11-14T09:02:54Z | 25 | 0 | null | [
"region:us"
] | 2023-11-14T09:02:54Z | 2023-11-14T09:02:51.000Z | 2023-11-14T09:02:51 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: q1
dtype: string
- name: q2
dtype: string
- name: result
dtype: int64
splits:
- name: train
num_bytes: 217862.67262791854
num_examples: 1610
- name: test
num_bytes: 54533.32737208147
num_examples: 403
download_size: 100214
dataset_size: 272396.0
---
# Dataset Card for "similis"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.488777220249176,
-0.1468028724193573,
0.2632313072681427,
0.15396861732006073,
-0.24236559867858887,
-0.3738136887550354,
0.10271912813186646,
-0.3053130805492401,
0.98798006772995,
0.3305196464061737,
-0.8760494589805603,
-0.7638648152351379,
-0.5302446484565735,
0.014182113111019135,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
core-outline/llama-2-7b-chat-hf | core-outline | 2023-11-14T09:25:57Z | 25 | 0 | null | [
"region:us"
] | 2023-11-14T09:25:57Z | 2023-11-14T09:23:38.000Z | 2023-11-14T09:23:38 | Entry not found | [
-0.32276472449302673,
-0.22568407654762268,
0.8622258901596069,
0.4346148371696472,
-0.5282984972000122,
0.7012965679168701,
0.7915717363357544,
0.07618629932403564,
0.7746022939682007,
0.2563222646713257,
-0.785281777381897,
-0.22573848068714142,
-0.9104482531547546,
0.5715669393539429,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
anastasia624/rus_93_nez_6k | anastasia624 | 2023-11-14T12:57:56Z | 25 | 0 | null | [
"region:us"
] | 2023-11-14T12:57:56Z | 2023-11-14T12:56:36.000Z | 2023-11-14T12:56:36 | Entry not found | [
-0.32276472449302673,
-0.22568407654762268,
0.8622258901596069,
0.4346148371696472,
-0.5282984972000122,
0.7012965679168701,
0.7915717363357544,
0.07618629932403564,
0.7746022939682007,
0.2563222646713257,
-0.785281777381897,
-0.22573848068714142,
-0.9104482531547546,
0.5715669393539429,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
sanchit-gandhi/librispeech_asr_dummy_pseudo_labelled | sanchit-gandhi | 2023-11-14T14:27:53Z | 25 | 0 | null | [
"region:us"
] | 2023-11-14T14:27:53Z | 2023-11-14T14:24:59.000Z | 2023-11-14T14:24:59 | ---
dataset_info:
config_name: clean
features:
- name: file
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: text
dtype: string
- name: speaker_id
dtype: int64
- name: chapter_id
dtype: int64
- name: id
dtype: string
- name: whisper_transcript
sequence: int64
splits:
- name: validation
num_bytes: 9700520.0
num_examples: 73
download_size: 9198584
dataset_size: 9700520.0
configs:
- config_name: clean
data_files:
- split: validation
path: clean/validation-*
---
| [
-0.12853392958641052,
-0.18616779148578644,
0.6529127955436707,
0.49436280131340027,
-0.19319361448287964,
0.23607419431209564,
0.36072003841400146,
0.050563063472509384,
0.579365611076355,
0.7400140762329102,
-0.6508104205131531,
-0.23783954977989197,
-0.7102249264717102,
-0.0478260256350... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
atmallen/qm_alice_hard_4_grader_first_1.0e | atmallen | 2023-11-16T18:27:19Z | 25 | 0 | null | [
"region:us"
] | 2023-11-16T18:27:19Z | 2023-11-16T03:19:33.000Z | 2023-11-16T03:19:33 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
dataset_info:
features:
- name: alice_label
dtype: bool
- name: bob_label
dtype: bool
- name: difficulty
dtype: int64
- name: statement
dtype: string
- name: choices
sequence: string
- name: character
dtype: string
- name: label
dtype:
class_label:
names:
'0': 'False'
'1': 'True'
splits:
- name: train
num_bytes: 3455633.0
num_examples: 37091
- name: validation
num_bytes: 369717.0
num_examples: 3969
- name: test
num_bytes: 365744.0
num_examples: 3926
download_size: 1063722
dataset_size: 4191094.0
---
# Dataset Card for "qm_alice_hard_4_grader_first_1.0e"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.342191606760025,
-0.28422069549560547,
0.21086201071739197,
0.19470445811748505,
-0.09609995037317276,
-0.00601958017796278,
0.6328778862953186,
0.17971961200237274,
0.5093252062797546,
0.37292420864105225,
-0.7904529571533203,
-1.0343022346496582,
-0.6432867646217346,
-0.19159969687461... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
OliverYoung/threejs_30 | OliverYoung | 2023-11-16T06:52:52Z | 25 | 1 | null | [
"license:mit",
"region:us"
] | 2023-11-16T06:52:52Z | 2023-11-16T06:52:28.000Z | 2023-11-16T06:52:28 | ---
license: mit
---
| [
-0.12853392958641052,
-0.18616779148578644,
0.6529127955436707,
0.49436280131340027,
-0.19319361448287964,
0.23607419431209564,
0.36072003841400146,
0.050563063472509384,
0.579365611076355,
0.7400140762329102,
-0.6508104205131531,
-0.23783954977989197,
-0.7102249264717102,
-0.0478260256350... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
lakong/yt-thumbnails-train | lakong | 2023-11-17T07:11:54Z | 25 | 0 | null | [
"region:us"
] | 2023-11-17T07:11:54Z | 2023-11-17T01:01:35.000Z | 2023-11-17T01:01:35 | ---
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 259863631.184
num_examples: 2067
download_size: 258196017
dataset_size: 259863631.184
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
| [
-0.12853392958641052,
-0.18616779148578644,
0.6529127955436707,
0.49436280131340027,
-0.19319361448287964,
0.23607419431209564,
0.36072003841400146,
0.050563063472509384,
0.579365611076355,
0.7400140762329102,
-0.6508104205131531,
-0.23783954977989197,
-0.7102249264717102,
-0.0478260256350... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Pavitra05/Questions | Pavitra05 | 2023-11-17T01:55:10Z | 25 | 0 | null | [
"region:us"
] | 2023-11-17T01:55:10Z | 2023-11-17T01:43:21.000Z | 2023-11-17T01:43:21 | Entry not found | [
-0.32276472449302673,
-0.22568407654762268,
0.8622258901596069,
0.4346148371696472,
-0.5282984972000122,
0.7012965679168701,
0.7915717363357544,
0.07618629932403564,
0.7746022939682007,
0.2563222646713257,
-0.785281777381897,
-0.22573848068714142,
-0.9104482531547546,
0.5715669393539429,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
tomashs/lsc_multiplechoice_top2vec | tomashs | 2023-11-19T17:12:35Z | 25 | 0 | null | [
"region:us"
] | 2023-11-19T17:12:35Z | 2023-11-19T17:06:14.000Z | 2023-11-19T17:06:14 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: val
path: data/val-*
- split: test
path: data/test-*
dataset_info:
features:
- name: text
dtype: string
- name: short_form
dtype: string
- name: long_form
dtype: string
- name: freq
dtype: int64
- name: num_candidates
dtype: int64
- name: __index_level_0__
dtype: int64
- name: topic_vector
sequence: float64
splits:
- name: train
num_bytes: 150188148
num_examples: 110752
- name: val
num_bytes: 34578554
num_examples: 25932
- name: test
num_bytes: 34161105
num_examples: 25175
download_size: 190641646
dataset_size: 218927807
---
# Dataset Card for "lsc_multiplechoice_top2vec"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.5051712989807129,
-0.1804899424314499,
0.16243323683738708,
0.2038256973028183,
-0.07714802771806717,
0.17388050258159637,
0.251560777425766,
0.026174260303378105,
0.6882125735282898,
0.5383173823356628,
-0.8962998986244202,
-0.819384753704071,
-0.6890584230422974,
-0.46476101875305176,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
severo/bug-hfh-lfs | severo | 2023-11-21T15:25:48Z | 25 | 0 | null | [
"region:us"
] | 2023-11-21T15:25:48Z | 2023-11-21T15:21:39.000Z | 2023-11-21T15:21:39 | ---
dataset_info:
features:
- name: col
dtype: string
splits:
- name: train
num_bytes: 5653946
num_examples: 4567
download_size: 38896
dataset_size: 5653946
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
| [
-0.12853392958641052,
-0.18616779148578644,
0.6529127955436707,
0.49436280131340027,
-0.19319361448287964,
0.23607419431209564,
0.36072003841400146,
0.050563063472509384,
0.579365611076355,
0.7400140762329102,
-0.6508104205131531,
-0.23783954977989197,
-0.7102249264717102,
-0.0478260256350... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
result-kand2-sdxl-wuerst-karlo/d8b81ca5 | result-kand2-sdxl-wuerst-karlo | 2023-11-23T15:32:31Z | 25 | 0 | null | [
"region:us"
] | 2023-11-23T15:32:31Z | 2023-11-23T15:32:29.000Z | 2023-11-23T15:32:29 | ---
dataset_info:
features:
- name: result
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 163
num_examples: 10
download_size: 1299
dataset_size: 163
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "d8b81ca5"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.7090585231781006,
-0.1441514939069748,
0.33086249232292175,
0.3134691119194031,
-0.28794151544570923,
0.057013362646102905,
0.5208241939544678,
-0.21600808203220367,
0.8954591751098633,
0.4778100252151489,
-0.863288164138794,
-0.8098762035369873,
-0.655727744102478,
-0.0604323148727417,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
janakipanneerselvam/TMSL21_Sunlit_Tomatoes | janakipanneerselvam | 2023-11-25T04:09:11Z | 25 | 0 | null | [
"region:us"
] | 2023-11-25T04:09:11Z | 2023-11-25T00:04:36.000Z | 2023-11-25T00:04:36 | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype: image
splits:
- name: train
num_bytes: 324313448.7
num_examples: 3342
- name: validation
num_bytes: 116465781.048
num_examples: 1098
download_size: 352836635
dataset_size: 440779229.74799997
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
---
| [
-0.12853392958641052,
-0.18616779148578644,
0.6529127955436707,
0.49436280131340027,
-0.19319361448287964,
0.23607419431209564,
0.36072003841400146,
0.050563063472509384,
0.579365611076355,
0.7400140762329102,
-0.6508104205131531,
-0.23783954977989197,
-0.7102249264717102,
-0.0478260256350... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
davidgaofc/PRIMA_inout | davidgaofc | 2023-11-25T02:06:15Z | 25 | 0 | null | [
"license:mit",
"region:us"
] | 2023-11-25T02:06:15Z | 2023-11-25T02:05:34.000Z | 2023-11-25T02:05:34 | ---
license: mit
dataset_info:
features:
- name: Text
dtype: string
- name: Label
dtype: int64
splits:
- name: train
num_bytes: 1287817
num_examples: 1640
download_size: 450804
dataset_size: 1287817
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
| [
-0.12853392958641052,
-0.18616779148578644,
0.6529127955436707,
0.49436280131340027,
-0.19319361448287964,
0.23607419431209564,
0.36072003841400146,
0.050563063472509384,
0.579365611076355,
0.7400140762329102,
-0.6508104205131531,
-0.23783954977989197,
-0.7102249264717102,
-0.0478260256350... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
franlucc/code-debug-train-v0 | franlucc | 2023-11-25T15:17:30Z | 25 | 0 | null | [
"region:us"
] | 2023-11-25T15:17:30Z | 2023-11-25T02:32:57.000Z | 2023-11-25T02:32:57 | ---
dataset_info:
features:
- name: hexsha
dtype: string
- name: size
dtype: int64
- name: ext
dtype: string
- name: lang
dtype: string
- name: max_stars_repo_path
dtype: string
- name: max_stars_repo_name
dtype: string
- name: max_stars_repo_head_hexsha
dtype: string
- name: max_stars_repo_licenses
sequence: string
- name: max_stars_count
dtype: float64
- name: max_stars_repo_stars_event_min_datetime
dtype: string
- name: max_stars_repo_stars_event_max_datetime
dtype: string
- name: max_issues_repo_path
dtype: string
- name: max_issues_repo_name
dtype: string
- name: max_issues_repo_head_hexsha
dtype: string
- name: max_issues_repo_licenses
sequence: string
- name: max_issues_count
dtype: float64
- name: max_issues_repo_issues_event_min_datetime
dtype: string
- name: max_issues_repo_issues_event_max_datetime
dtype: string
- name: max_forks_repo_path
dtype: string
- name: max_forks_repo_name
dtype: string
- name: max_forks_repo_head_hexsha
dtype: string
- name: max_forks_repo_licenses
sequence: string
- name: max_forks_count
dtype: float64
- name: max_forks_repo_forks_event_min_datetime
dtype: string
- name: max_forks_repo_forks_event_max_datetime
dtype: string
- name: content
dtype: string
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: mutated
dtype: string
- name: mutation_descr
dtype: string
splits:
- name: train
num_bytes: 30740105
num_examples: 2225
download_size: 10106857
dataset_size: 30740105
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "code-debug-train-v0"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.599860668182373,
-0.21101561188697815,
0.1800452619791031,
0.3882589638233185,
-0.2062041461467743,
-0.02219105325639248,
0.303759902715683,
-0.0727853924036026,
0.8197624087333679,
0.40016865730285645,
-0.7488892078399658,
-0.7076274752616882,
-0.5132108330726624,
-0.20379845798015594,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
ylacombe/google-tamil | ylacombe | 2023-11-27T11:37:22Z | 25 | 0 | null | [
"task_categories:text-to-speech",
"task_categories:text-to-audio",
"language:ta",
"license:cc-by-sa-4.0",
"region:us"
] | 2023-11-27T11:37:22Z | 2023-11-25T12:59:49.000Z | 2023-11-25T12:59:49 | ---
dataset_info:
- config_name: female
features:
- name: audio
dtype: audio
- name: text
dtype: string
- name: speaker_id
dtype: int64
splits:
- name: train
num_bytes: 1364555763.88
num_examples: 2335
download_size: 1006094564
dataset_size: 1364555763.88
- config_name: male
features:
- name: audio
dtype: audio
- name: text
dtype: string
- name: speaker_id
dtype: int64
splits:
- name: train
num_bytes: 1064641765.528
num_examples: 1956
download_size: 781072069
dataset_size: 1064641765.528
configs:
- config_name: female
data_files:
- split: train
path: female/train-*
- config_name: male
data_files:
- split: train
path: male/train-*
license: cc-by-sa-4.0
task_categories:
- text-to-speech
- text-to-audio
language:
- ta
pretty_name: Tamil Speech
---
# Dataset Card for Tamil Speech
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks)
- [How to use](#how-to-use)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Statistics](#data-statistics)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [Crowdsourced high-quality Tamil multi-speaker speech data set.](https://www.openslr.org/65/)
- **Repository:** [Google Language Resources and Tools](https://github.com/google/language-resources)
- **Paper:** [Open-source Multi-speaker Speech Corpora for Building Gujarati, Kannada, Malayalam, Marathi, Tamil and Telugu Speech Synthesis Systems](https://aclanthology.org/2020.lrec-1.804/)
### Dataset Summary
This dataset consists of 7 hours of transcribed high-quality audio of Tamil sentences recorded by 50 volunteers. The dataset is intended for speech technologies.
The data archives were restructured from the original ones from [OpenSLR](http://www.openslr.org/65/) to make it easier to stream.
### Supported Tasks
- `text-to-speech`, `text-to-audio`: The dataset can be used to train a model for Text-To-Speech (TTS).
- `automatic-speech-recognition`, `speaker-identification`: The dataset can also be used to train a model for Automatic Speech Recognition (ASR). The model is presented with an audio file and asked to transcribe the audio file to written text. The most common evaluation metric is the word error rate (WER).
### How to use
The `datasets` library allows you to load and pre-process your dataset in pure Python, at scale. The dataset can be downloaded and prepared in one call to your local drive by using the `load_dataset` function.
For example, to download the female config, simply specify the corresponding language config name (i.e., "female" for female speakers):
```python
from datasets import load_dataset
dataset = load_dataset("ylacombe/google-tamil", "female", split="train")
```
Using the datasets library, you can also stream the dataset on-the-fly by adding a `streaming=True` argument to the `load_dataset` function call. Loading a dataset in streaming mode loads individual samples of the dataset at a time, rather than downloading the entire dataset to disk.
```python
from datasets import load_dataset
dataset = load_dataset("ylacombe/google-tamil", "female", split="train", streaming=True)
print(next(iter(dataset)))
```
#### *Bonus*
You can create a [PyTorch dataloader](https://huggingface.co/docs/datasets/use_with_pytorch) directly with your own datasets (local/streamed).
**Local:**
```python
from datasets import load_dataset
from torch.utils.data import DataLoader
from torch.utils.data.sampler import BatchSampler, RandomSampler
dataset = load_dataset("ylacombe/google-tamil", "female", split="train")
batch_sampler = BatchSampler(RandomSampler(dataset), batch_size=32, drop_last=False)
dataloader = DataLoader(dataset, batch_sampler=batch_sampler)
```
**Streaming:**
```python
from datasets import load_dataset
from torch.utils.data import DataLoader
dataset = load_dataset("ylacombe/google-tamil", "female", split="train", streaming=True)
dataloader = DataLoader(dataset, batch_size=32)
```
To find out more about loading and preparing audio datasets, head over to [hf.co/blog/audio-datasets](https://huggingface.co/blog/audio-datasets).
## Dataset Structure
### Data Instances
A typical data point comprises the path to the audio file called `audio` and its transcription, called `text`. Some additional information about the speaker and the passage which contains the transcription is provided.
```
{'audio': {'path': 'taf_02345_00348037167.wav', 'array': array([-9.15527344e-05, -9.15527344e-05, -1.22070312e-04, ...,
-3.05175781e-05, 0.00000000e+00, 3.05175781e-05]), 'sampling_rate': 48000}, 'text': 'ஆஸ்த்ரேலியப் பெண்ணுக்கு முப்பத்தி மூன்று ஆண்டுகளுக்குப் பின்னர் இந்தியா இழப்பீடு வழங்கியது', 'speaker_id': 2345}
```
### Data Fields
- audio: A dictionary containing the audio filename, the decoded audio array, and the sampling rate. Note that when accessing the audio column: `dataset[0]["audio"]` the audio file is automatically decoded and resampled to `dataset.features["audio"].sampling_rate`. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the `"audio"` column, *i.e.* `dataset[0]["audio"]` should **always** be preferred over `dataset["audio"][0]`.
- text: the transcription of the audio file.
- speaker_id: unique id of the speaker. The same speaker id can be found for multiple data samples.
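To make the access-order advice above concrete, a short sketch (female config, as loaded earlier):
```python
from datasets import load_dataset

dataset = load_dataset("ylacombe/google-tamil", "female", split="train")

# preferred: index first, so only this one file is decoded and resampled
sample = dataset[0]["audio"]
print(sample["sampling_rate"], len(sample["array"]))

# avoid: dataset["audio"][0] — this would decode every file in the split first
```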
### Data Statistics
| | Total duration (h) | Average duration (s) | # speakers | # sentences | # total words | # unique words | # total syllables | # unique syllables | # total phonemes | # unique phonemes |
|--------|--------------------|----------------------|------------|-------------|---------------|----------------|-------------------|--------------------|------------------|-------------------|
| Female | 4.01 | 6.18 | 25 | 2,335 | 15,880 | 6,620 | 56,607 | 1,696 | 126,659 | 37 |
| Male | 3.07 | 5.66 | 25 | 1,956 | 13,545 | 6,159 | 48,049 | 1,642 | 107,570 | 37 |
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
The dataset consists of people who have donated their voice online. You agree to not attempt to determine the identity of speakers in this dataset.
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
License: ([CC BY-SA 4.0 DEED](https://creativecommons.org/licenses/by-sa/4.0/deed.en))
### Citation Information
```
@inproceedings{he-etal-2020-open,
title = {{Open-source Multi-speaker Speech Corpora for Building Gujarati, Kannada, Malayalam, Marathi, Tamil and Telugu Speech Synthesis Systems}},
author = {He, Fei and Chu, Shan-Hui Cathy and Kjartansson, Oddur and Rivera, Clara and Katanova, Anna and Gutkin, Alexander and Demirsahin, Isin and Johny, Cibu and Jansche, Martin and Sarin, Supheakmungkol and Pipatsrisawat, Knot},
booktitle = {Proceedings of The 12th Language Resources and Evaluation Conference (LREC)},
month = may,
year = {2020},
address = {Marseille, France},
publisher = {European Language Resources Association (ELRA)},
pages = {6494--6503},
url = {https://www.aclweb.org/anthology/2020.lrec-1.800},
ISBN = "{979-10-95546-34-4},
}
```
### Contributions
Thanks to [@ylacombe](https://github.com/ylacombe) for adding this dataset. | [
-0.36225688457489014,
-0.6281483173370361,
-0.10428610444068909,
0.3325555622577667,
-0.2989622950553894,
0.0645824447274208,
-0.5262179970741272,
-0.13150694966316223,
0.4012850821018219,
0.37572765350341797,
-0.4678702652454376,
-0.7834917306900024,
-0.6346051692962646,
0.309954524040222... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Odunope/testsets | Odunope | 2023-11-27T10:38:03Z | 25 | 0 | null | [
"region:us"
] | 2023-11-27T10:38:03Z | 2023-11-27T10:28:40.000Z | 2023-11-27T10:28:40 | ---
dataset_info:
features:
- name: row
dtype: string
splits:
- name: train
num_bytes: 1448529.2274939173
num_examples: 1150
- name: test
num_bytes: 622237.7725060828
num_examples: 494
download_size: 520492
dataset_size: 2070767.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
| [
-0.1285335123538971,
-0.1861683875322342,
0.6529128551483154,
0.49436232447624207,
-0.19319400191307068,
0.23607441782951355,
0.36072009801864624,
0.05056373029947281,
0.5793656706809998,
0.7400146722793579,
-0.650810182094574,
-0.23784008622169495,
-0.7102247476577759,
-0.0478255338966846... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
lhoestq/wikipedia_bn | lhoestq | 2023-08-18T09:44:36Z | 24 | 1 | null | [
"region:us"
] | 2023-08-18T09:44:36Z | 2022-03-02T23:29:22.000Z | 2022-03-02T23:29:22 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622263669967651,
0.43461522459983826,
-0.52829909324646,
0.7012971639633179,
0.7915719747543335,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104475975036621,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
mozilla-foundation/common_voice_3_0 | mozilla-foundation | 2023-07-29T15:59:59Z | 24 | 0 | common-voice | [
"task_categories:automatic-speech-recognition",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"multilinguality:multilingual",
"source_datasets:extended|common_voice",
"license:cc0-1.0",
"arxiv:1912.06670",
"region:us"
] | 2023-07-29T15:59:59Z | 2022-03-02T23:29:22.000Z | 2022-03-02T23:29:22 | ---
annotations_creators:
- crowdsourced
language_creators:
- crowdsourced
license:
- cc0-1.0
multilinguality:
- multilingual
size_categories:
br:
- 10K<n<100K
ca:
- 10K<n<100K
cnh:
- 1K<n<10K
cv:
- 1K<n<10K
cy:
- 10K<n<100K
de:
- 100K<n<1M
dv:
- 1K<n<10K
en:
- 100K<n<1M
eo:
- 10K<n<100K
es:
- 10K<n<100K
et:
- 1K<n<10K
eu:
- 10K<n<100K
fa:
- 10K<n<100K
fr:
- 100K<n<1M
ga-IE:
- 1K<n<10K
it:
- 10K<n<100K
kab:
- 100K<n<1M
ky:
- 10K<n<100K
mn:
- 1K<n<10K
nl:
- 10K<n<100K
ru:
- 10K<n<100K
rw:
- 1K<n<10K
sah:
- 1K<n<10K
sl:
- 1K<n<10K
sv-SE:
- 1K<n<10K
tr:
- 1K<n<10K
tt:
- 10K<n<100K
zh-CN:
- 1K<n<10K
zh-TW:
- 10K<n<100K
source_datasets:
- extended|common_voice
paperswithcode_id: common-voice
pretty_name: Common Voice Corpus 3
language_bcp47:
- br
- ca
- cnh
- cv
- cy
- de
- dv
- en
- eo
- es
- et
- eu
- fa
- fr
- ga-IE
- it
- kab
- ky
- mn
- nl
- ru
- rw
- sah
- sl
- sv-SE
- tr
- tt
- zh-CN
- zh-TW
extra_gated_prompt: By clicking on “Access repository” below, you also agree to not
attempt to determine the identity of speakers in the Common Voice dataset.
task_categories:
- automatic-speech-recognition
---
# Dataset Card for Common Voice Corpus 3
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://commonvoice.mozilla.org/en/datasets
- **Repository:** https://github.com/common-voice/common-voice
- **Paper:** https://arxiv.org/abs/1912.06670
- **Leaderboard:** https://paperswithcode.com/dataset/common-voice
- **Point of Contact:** [Anton Lozhkov](mailto:anton@huggingface.co)
### Dataset Summary
The Common Voice dataset consists of unique MP3 files, each paired with a corresponding text file.
Many of the 2454 recorded hours in the dataset also include demographic metadata like age, sex, and accent
that can help improve the accuracy of speech recognition engines.
The dataset currently consists of 1979 validated hours in 29 languages, but more voices and languages are always added.
Take a look at the [Languages](https://commonvoice.mozilla.org/en/languages) page to request a language or start contributing.
### Supported Tasks and Leaderboards
The results for models trained on the Common Voice datasets are available via the
[🤗 Speech Bench](https://huggingface.co/spaces/huggingface/hf-speech-bench)
### Languages
```
Basque, Breton, Catalan, Chinese (China), Chinese (Taiwan), Chuvash, Dhivehi, Dutch, English, Esperanto, Estonian, French, German, Hakha Chin, Irish, Italian, Kabyle, Kinyarwanda, Kyrgyz, Mongolian, Persian, Russian, Sakha, Slovenian, Spanish, Swedish, Tatar, Turkish, Welsh
```
## Dataset Structure
### Data Instances
A typical data point comprises the `path` to the audio file and its `sentence`.
Additional fields include `accent`, `age`, `client_id`, `up_votes`, `down_votes`, `gender`, `locale` and `segment`.
```python
{
'client_id': 'd59478fbc1ee646a28a3c652a119379939123784d99131b865a89f8b21c81f69276c48bd574b81267d9d1a77b83b43e6d475a6cfc79c232ddbca946ae9c7afc5',
'path': 'et/clips/common_voice_et_18318995.mp3',
'audio': {
'path': 'et/clips/common_voice_et_18318995.mp3',
'array': array([-0.00048828, -0.00018311, -0.00137329, ..., 0.00079346, 0.00091553, 0.00085449], dtype=float32),
'sampling_rate': 48000
},
'sentence': 'Tasub kokku saada inimestega, keda tunned juba ammust ajast saati.',
'up_votes': 2,
'down_votes': 0,
'age': 'twenties',
'gender': 'male',
'accent': '',
'locale': 'et',
'segment': ''
}
```
### Data Fields
`client_id` (`string`): An id for which client (voice) made the recording
`path` (`string`): The path to the audio file
`audio` (`dict`): A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column: `dataset[0]["audio"]` the audio file is automatically decoded and resampled to `dataset.features["audio"].sampling_rate`. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the `"audio"` column, *i.e.* `dataset[0]["audio"]` should **always** be preferred over `dataset["audio"][0]`.
`sentence` (`string`): The sentence the user was prompted to speak
`up_votes` (`int64`): How many upvotes the audio file has received from reviewers
`down_votes` (`int64`): How many downvotes the audio file has received from reviewers
`age` (`string`): The age of the speaker (e.g. `teens`, `twenties`, `fifties`)
`gender` (`string`): The gender of the speaker
`accent` (`string`): Accent of the speaker
`locale` (`string`): The locale of the speaker
`segment` (`string`): Usually an empty field
### Data Splits
The speech material has been subdivided into portions for dev, train, test, validated, invalidated, reported and other.
The validated data is data that has been validated with reviewers and received upvotes that the data is of high quality.
The invalidated data is data that has been invalidated by reviewers
and received downvotes indicating that the data is of low quality.
The reported data is data that has been reported, for different reasons.
The other data is data that has not yet been reviewed.
The dev, test and train portions all contain data that has been reviewed and deemed of high quality, split into dev, test and train sets.
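For example, a minimal sketch of loading a single reviewed split (the Estonian config used in the data instance above; streaming avoids downloading the full archive):
```python
from datasets import load_dataset

# load only the Estonian test split; the dataset is gated, so authentication is required
cv_et = load_dataset(
    "mozilla-foundation/common_voice_3_0",
    "et",
    split="test",
    streaming=True,
    use_auth_token=True,
)
print(next(iter(cv_et)))
```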
## Data Preprocessing Recommended by Hugging Face
The following are data preprocessing steps advised by the Hugging Face team. They are accompanied by an example code snippet that shows how to put them into practice.
Many examples in this dataset have trailing quotation marks, e.g _“the cat sat on the mat.“_. These trailing quotation marks do not change the actual meaning of the sentence, and it is near impossible to infer whether a sentence is a quotation or not a quotation from audio data alone. In these cases, it is advised to strip the quotation marks, leaving: _the cat sat on the mat_.
In addition, the majority of training sentences end in punctuation ( . or ? or ! ), whereas just a small proportion do not. In the dev set, **almost all** sentences end in punctuation. Thus, it is recommended to append a full-stop ( . ) to the end of the small number of training examples that do not end in punctuation.
```python
from datasets import load_dataset
ds = load_dataset("mozilla-foundation/common_voice_3_0", "en", use_auth_token=True)
def prepare_dataset(batch):
"""Function to preprocess the dataset with the .map method"""
transcription = batch["sentence"]
if transcription.startswith('"') and transcription.endswith('"'):
# we can remove trailing quotation marks as they do not affect the transcription
transcription = transcription[1:-1]
if transcription[-1] not in [".", "?", "!"]:
# append a full-stop to sentences that do not end in punctuation
transcription = transcription + "."
batch["sentence"] = transcription
return batch
ds = ds.map(prepare_dataset, desc="preprocess dataset")
```
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
The dataset consists of people who have donated their voice online. You agree to not attempt to determine the identity of speakers in the Common Voice dataset.
## Considerations for Using the Data
### Social Impact of Dataset
The dataset consists of people who have donated their voice online. You agree to not attempt to determine the identity of speakers in the Common Voice dataset.
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
Public Domain, [CC-0](https://creativecommons.org/share-your-work/public-domain/cc0/)
### Citation Information
```
@inproceedings{commonvoice:2020,
author = {Ardila, R. and Branson, M. and Davis, K. and Henretty, M. and Kohler, M. and Meyer, J. and Morais, R. and Saunders, L. and Tyers, F. M. and Weber, G.},
title = {Common Voice: A Massively-Multilingual Speech Corpus},
booktitle = {Proceedings of the 12th Conference on Language Resources and Evaluation (LREC 2020)},
pages = {4211--4215},
year = 2020
}
```
| [
-0.534290075302124,
-0.7328149080276489,
0.13989543914794922,
0.45837146043777466,
-0.24204783141613007,
0.03410673886537552,
-0.5691333413124084,
-0.23841403424739838,
0.43126723170280457,
0.5569053888320923,
-0.7349998950958252,
-0.9422455430030823,
-0.42685791850090027,
0.24558307230472... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
wanyu/IteraTeR_human_sent | wanyu | 2022-10-24T18:58:22Z | 24 | 0 | null | [
"task_categories:text2text-generation",
"annotations_creators:crowdsourced",
"language_creators:found",
"multilinguality:monolingual",
"source_datasets:original",
"language:en",
"license:apache-2.0",
"conditional-text-generation",
"text-editing",
"arxiv:2203.03802",
"region:us"
] | 2022-10-24T18:58:22Z | 2022-03-13T20:46:23.000Z | 2022-03-13T20:46:23 | ---
annotations_creators:
- crowdsourced
language_creators:
- found
language:
- en
license:
- apache-2.0
multilinguality:
- monolingual
source_datasets:
- original
task_categories:
- text2text-generation
task_ids: []
pretty_name: IteraTeR_human_sent
language_bcp47:
- en-US
tags:
- conditional-text-generation
- text-editing
---
Paper: [Understanding Iterative Revision from Human-Written Text](https://arxiv.org/abs/2203.03802)
Authors: Wanyu Du, Vipul Raheja, Dhruv Kumar, Zae Myung Kim, Melissa Lopez, Dongyeop Kang
Github repo: https://github.com/vipulraheja/IteraTeR
| [
-0.07486411929130554,
-0.4984821677207947,
0.7287435531616211,
0.13309934735298157,
-0.333004891872406,
0.2274867594242096,
-0.261161744594574,
-0.25514981150627136,
0.011397454887628555,
0.7972139120101929,
-0.6495078206062317,
-0.40806397795677185,
-0.23328807950019836,
0.230274885892868... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
roman_urdu_hate_speech | null | 2023-01-25T15:03:53Z | 24 | 1 | null | [
"task_categories:text-classification",
"task_ids:multi-class-classification",
"annotations_creators:expert-generated",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:ur",
"license:mit",
"binary classification",
"... | 2023-01-25T15:03:53Z | 2022-03-25T15:51:45.000Z | 2022-03-25T15:51:45 | ---
annotations_creators:
- expert-generated
language_creators:
- crowdsourced
language:
- ur
license:
- mit
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- multi-class-classification
pretty_name: roman_urdu_hate_speech
tags:
- binary classification
dataset_info:
- config_name: Coarse_Grained
features:
- name: tweet
dtype: string
- name: label
dtype:
class_label:
names:
'0': Abusive/Offensive
'1': Normal
splits:
- name: train
num_bytes: 725719
num_examples: 7208
- name: test
num_bytes: 218087
num_examples: 2002
- name: validation
num_bytes: 79759
num_examples: 800
download_size: 927937
dataset_size: 1023565
- config_name: Fine_Grained
features:
- name: tweet
dtype: string
- name: label
dtype:
class_label:
names:
'0': Abusive/Offensive
'1': Normal
'2': Religious Hate
'3': Sexism
'4': Profane/Untargeted
splits:
- name: train
num_bytes: 723670
num_examples: 7208
- name: test
num_bytes: 219359
num_examples: 2002
- name: validation
num_bytes: 723670
num_examples: 7208
download_size: 1519423
dataset_size: 1666699
---
# Dataset Card for roman_urdu_hate_speech
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-instances)
- [Data Splits](#data-instances)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [roman_urdu_hate_speech homepage](https://aclanthology.org/2020.emnlp-main.197/)
- **Repository:** [roman_urdu_hate_speech repository](https://github.com/haroonshakeel/roman_urdu_hate_speech)
- **Paper:** [Hate-Speech and Offensive Language Detection in Roman Urdu](https://aclanthology.org/2020.emnlp-main.197.pdf)
- **Leaderboard:** [N/A]
- **Point of Contact:** [M. Haroon Shakeel](mailto:m.shakeel@lums.edu.pk)
### Dataset Summary
The Roman Urdu Hate-Speech and Offensive Language Detection (RUHSOLD) dataset is a Roman Urdu dataset of tweets annotated by experts in the relevant language. The authors develop a gold standard for two sub-tasks. The first sub-task is based on binary labels of Hate-Offensive content and Normal content (i.e., inoffensive language); these labels are self-explanatory. The authors refer to this sub-task as coarse-grained classification. The second sub-task defines Hate-Offensive content with four labels at a granular level. These labels are the most relevant for the demographic of users who converse in RU and are defined in the related literature. The authors refer to this sub-task as fine-grained classification. The objective behind creating two gold standards is to enable researchers to evaluate hate speech detection approaches in both easier (coarse-grained) and more challenging (fine-grained) scenarios.
### Supported Tasks and Leaderboards
- 'multi-class-classification', 'text-classification-other-binary classification': The dataset can be used both for multi-class classification and for binary classification, as it contains both coarse-grained and fine-grained labels.
### Languages
The text of this dataset is Roman Urdu. The associated BCP-47 code is 'ur'.
## Dataset Structure
### Data Instances
The dataset consists of two segments: coarse-grained examples and fine-grained examples. In the coarse-grained segment each tweet is labelled as abusive/offensive or normal, whereas in the fine-grained segment a tweet is assigned one of several classes of hate.
For the coarse-grained segment of the dataset, the label mapping is:

Task 1: Coarse-grained Classification Labels

- 0: Abusive/Offensive
- 1: Normal

Whereas for the fine-grained segment of the dataset, the label mapping is:

Task 2: Fine-grained Classification Labels

- 0: Abusive/Offensive
- 1: Normal
- 2: Religious Hate
- 3: Sexism
- 4: Profane/Untargeted
An example from Roman Urdu Hate Speech looks as follows:
```
{
'tweet': 'there are some yahodi daboo like imran chore zakat khore',
'label': 0
}
```
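A minimal loading sketch (the config names `Coarse_Grained` and `Fine_Grained` are taken from this card's metadata):
```python
from datasets import load_dataset

# binary labels: Abusive/Offensive vs. Normal
coarse = load_dataset("roman_urdu_hate_speech", "Coarse_Grained", split="train")

# five fine-grained labels, as listed above
fine = load_dataset("roman_urdu_hate_speech", "Fine_Grained", split="train")

print(coarse[0])
print(fine.features["label"].names)
```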
### Data Fields
- tweet: a string denoting the tweet; 10,000 tweets were selected by random sampling from a base of 50,000 tweets and annotated for the dataset.
- label: an annotation manually assigned by three independent annotators; during the annotation process, all conflicts were resolved by a majority vote among the three annotators.
### Data Splits
The data of each segment, Coarse_Grained and Fine_Grained, is further split into training, test and validation sets with a 70/20/10 split ratio, using stratification based on the fine-grained labels.
The use of stratified sampling is deemed necessary to preserve the same label ratio across all splits.
The final split sizes are as follows:

| Train | Valid | Test |
|------:|------:|-----:|
| 7209  | 2003  | 801  |
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
The dataset was created by Hammad Rizwan, Muhammad Haroon Shakeel, Asim Karim during work done at Department of Computer Science, Lahore University of Management Sciences (LUMS), Lahore, Pakistan.
### Licensing Information
The licensing status of the dataset hinges on the legal status of the [Roman Urdu Hate Speech Dataset Repository](https://github.com/haroonshakeel/roman_urdu_hate_speech) which is under MIT License.
### Citation Information
```bibtex
@inproceedings{rizwan2020hate,
title={Hate-speech and offensive language detection in roman Urdu},
author={Rizwan, Hammad and Shakeel, Muhammad Haroon and Karim, Asim},
booktitle={Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)},
pages={2512--2522},
year={2020}
}
```
### Contributions
Thanks to [@bp-high](https://github.com/bp-high), for adding this dataset. | [
-0.4049023687839508,
-0.7415289878845215,
-0.18065379559993744,
0.24463330209255219,
-0.17072533071041107,
0.1873129904270172,
-0.41646531224250793,
-0.43157175183296204,
0.1893322914838791,
0.36761775612831116,
-0.3400033116340637,
-0.9151170253753662,
-0.9204537272453308,
0.0807995647192... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
artemis13fowl/imdb | artemis13fowl | 2022-03-30T15:35:39Z | 24 | 0 | null | [
"region:us"
] | 2022-03-30T15:35:39Z | 2022-03-30T14:30:25.000Z | 2022-03-30T14:30:25 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622263669967651,
0.43461522459983826,
-0.52829909324646,
0.7012971639633179,
0.7915719747543335,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104475975036621,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
pietrolesci/breaking_nli | pietrolesci | 2022-04-25T13:37:23Z | 24 | 0 | null | [
"region:us"
] | 2022-04-25T13:37:23Z | 2022-04-25T13:36:48.000Z | 2022-04-25T13:36:48 | ## Overview
Proposed by:
```latex
@InProceedings{glockner_acl18,
author = {Glockner, Max and Shwartz, Vered and Goldberg, Yoav},
title = {Breaking NLI Systems with Sentences that Require Simple Lexical Inferences},
booktitle = {The 56th Annual Meeting of the Association for Computational Linguistics (ACL)},
month = {July},
year = {2018},
address = {Melbourne, Australia}
}
```
Original dataset available [here](https://github.com/BIU-NLP/Breaking_NLI).
## Dataset curation
Labels are encoded with the mapping `{"entailment": 0, "neutral": 1, "contradiction": 2}`
and made available in the `label` column.
## Code to create the dataset
```python
import json

import pandas as pd
from datasets import ClassLabel, Dataset, Features, Sequence, Value

# load data: the file is JSON-lines, so parse each line with json.loads
# (safer and more idiomatic than eval)
with open("<path to folder>/dataset.jsonl", "r") as fl:
    data = fl.read().split("\n")
df = pd.DataFrame([json.loads(i) for i in data if len(i) > 0])

# encode labels as integers in a new `label` column
df["label"] = df["gold_label"].map({"entailment": 0, "neutral": 1, "contradiction": 2})

# cast to a Dataset with an explicit schema
features = Features({
    "sentence1": Value(dtype="string", id=None),
    "category": Value(dtype="string", id=None),
    "gold_label": Value(dtype="string", id=None),
    "annotator_labels": Sequence(feature=Value(dtype="string", id=None), length=3),
    "pairID": Value(dtype="int32", id=None),
    "sentence2": Value(dtype="string", id=None),
    "label": ClassLabel(num_classes=3, names=["entailment", "neutral", "contradiction"]),
})
ds = Dataset.from_pandas(df, features=features)
ds.push_to_hub("breaking_nli", token="<token>", split="all")
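
# Once pushed, the dataset can be loaded back directly from the Hub.
# Minimal usage sketch (assumes the push above succeeded under the
# `pietrolesci` namespace, matching this repository's id):
from datasets import load_dataset
ds_hub = load_dataset("pietrolesci/breaking_nli", split="all")
print(ds_hub.features["label"].names)  # ['entailment', 'neutral', 'contradiction']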
``` | [
-0.3554023802280426,
-0.809630811214447,
0.32477056980133057,
0.2308795303106308,
-0.07278071343898773,
-0.13333943486213684,
-0.26457956433296204,
-0.4052950143814087,
0.30379483103752136,
0.583010196685791,
-0.4925532042980194,
-0.594097912311554,
-0.6443420052528381,
0.5016780495643616,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
wza/TimeTravel | wza | 2022-05-05T06:42:38Z | 24 | 0 | null | [
"region:us"
] | 2022-05-05T06:42:38Z | 2022-04-27T06:51:36.000Z | 2022-04-27T06:51:36 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622264862060547,
0.43461528420448303,
-0.52829909324646,
0.7012971639633179,
0.7915720343589783,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104477167129517,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
IsaacRodgz/DravidianCodeMix-Dataset | IsaacRodgz | 2022-05-04T19:03:35Z | 24 | 0 | null | [
"region:us"
] | 2022-05-04T19:03:35Z | 2022-05-04T19:03:24.000Z | 2022-05-04T19:03:24 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622264862060547,
0.43461528420448303,
-0.52829909324646,
0.7012971639633179,
0.7915720343589783,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104477167129517,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
d0r1h/customer_churn | d0r1h | 2022-05-07T03:27:33Z | 24 | 2 | null | [
"license:apache-2.0",
"region:us"
] | 2022-05-07T03:27:33Z | 2022-05-07T03:04:13.000Z | 2022-05-07T03:04:13 | ---
license: apache-2.0
---
| [
-0.1285335123538971,
-0.1861683875322342,
0.6529128551483154,
0.49436232447624207,
-0.19319400191307068,
0.23607441782951355,
0.36072009801864624,
0.05056373029947281,
0.5793656706809998,
0.7400146722793579,
-0.650810182094574,
-0.23784008622169495,
-0.7102247476577759,
-0.0478255338966846... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
spoiled/ecqa_classify_94 | spoiled | 2022-05-18T13:53:37Z | 24 | 0 | null | [
"region:us"
] | 2022-05-18T13:53:37Z | 2022-05-18T12:34:54.000Z | 2022-05-18T12:34:54 | Entry not found | [
-0.32276472449302673,
-0.22568407654762268,
0.8622258901596069,
0.4346148371696472,
-0.5282984972000122,
0.7012965679168701,
0.7915717363357544,
0.07618629932403564,
0.7746022939682007,
0.2563222646713257,
-0.785281777381897,
-0.22573848068714142,
-0.9104482531547546,
0.5715669393539429,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
nateraw/pascal-voc-2012 | nateraw | 2022-06-07T04:52:13Z | 24 | 1 | null | [
"region:us"
] | 2022-06-07T04:52:13Z | 2022-06-07T04:38:46.000Z | 2022-06-07T04:38:46 | Entry not found | [
-0.32276472449302673,
-0.22568407654762268,
0.8622258901596069,
0.4346148371696472,
-0.5282984972000122,
0.7012965679168701,
0.7915717363357544,
0.07618629932403564,
0.7746022939682007,
0.2563222646713257,
-0.785281777381897,
-0.22573848068714142,
-0.9104482531547546,
0.5715669393539429,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
tonytins/chat-dataset | tonytins | 2022-06-10T03:36:25Z | 24 | 1 | null | [
"region:us"
] | 2022-06-10T03:36:25Z | 2022-06-08T13:12:08.000Z | 2022-06-08T13:12:08 | # Chat Dataset
Derived from Hitomi Team's [Convo Dataset](https://github.com/hitomi-team/convo-dataset) on GitHub, the Chat Dataset is a large, diverse dataset used to train models for conversation analysis and generation.
## Getting Started
### Prerequisites
- Python
- Git LFS
## DISCLAIMER
**In order to efficiently process the data, this repository contains language that may be offensive! View at your own risk!**
## License
This project is licensed under the GNU General Public License version 2.0. See [LICENSE](LICENSE) for details.
| [
-0.19028066098690033,
-0.8731304407119751,
-0.028562944382429123,
0.03822477534413338,
-0.13379298150539398,
0.14581049978733063,
-0.3110939860343933,
-0.26947298645973206,
0.2539239525794983,
0.6240015029907227,
-0.9628517627716064,
-0.5197195410728455,
-0.3303808569908142,
-0.27537065744... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
EddieChen372/tokenized-256-jest | EddieChen372 | 2022-06-17T16:55:03Z | 24 | 0 | null | [
"region:us"
] | 2022-06-17T16:55:03Z | 2022-06-17T16:54:49.000Z | 2022-06-17T16:54:49 | Entry not found | [
-0.32276472449302673,
-0.22568407654762268,
0.8622258901596069,
0.4346148371696472,
-0.5282984972000122,
0.7012965679168701,
0.7915717363357544,
0.07618629932403564,
0.7746022939682007,
0.2563222646713257,
-0.785281777381897,
-0.22573848068714142,
-0.9104482531547546,
0.5715669393539429,
... | null | null | null | null | null | null | null | null | null | null | null | null | null |