id stringlengths 2 115 | lastModified stringlengths 24 24 | tags list | author stringlengths 2 42 ⌀ | description stringlengths 0 6.67k ⌀ | citation stringlengths 0 10.7k ⌀ | likes int64 0 3.66k | downloads int64 0 8.89M | created timestamp[us] | card stringlengths 11 977k | card_len int64 11 977k | embeddings list |
|---|---|---|---|---|---|---|---|---|---|---|---|
Dhruvil47/sentence_followup | 2023-08-18T15:18:42.000Z | [
"task_categories:text-generation",
"size_categories:1M<n<10M",
"language:en",
"license:unknown",
"region:us"
] | Dhruvil47 | This dataset can be used to fine-tune models for text-to-text generation tasks, in particular to predict the follow-up sentence. | null | 0 | 3 | 2023-08-18T13:21:36 | ---
license: unknown
task_categories:
- text-generation
language:
- en
size_categories:
- 1M<n<10M
---
# Sentence Follow-up Dataset
This dataset can be used to fine-tune models for text-to-text generation tasks, in particular to predict the follow-up sentence.
The dataset consists of two columns, "sentence_1" and "sentence_2", where sentence_2 is a follow-up sentence to sentence_1.
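A minimal loading sketch (assuming the standard `datasets` API and a `train` split; the column names follow the description above):

```python
from datasets import load_dataset

# Turn each row into a text-to-text pair for fine-tuning
ds = load_dataset("Dhruvil47/sentence_followup", split="train")

def to_pair(example):
    # model input: the first sentence; target: its follow-up
    return {"source": example["sentence_1"], "target": example["sentence_2"]}

pairs = ds.map(to_pair)
print(pairs[0]["source"], "->", pairs[0]["target"])
```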
| 425 | [
[
-0.00949859619140625,
-0.032562255859375,
0.0254364013671875,
0.02886962890625,
-0.005619049072265625,
-0.03228759765625,
0.00849151611328125,
-0.01045989990234375,
0.0061492919921875,
0.08502197265625,
-0.09521484375,
-0.016204833984375,
-0.0152587890625,
0... |
fake-news-UFG/central_de_fatos | 2023-08-18T21:04:07.000Z | [
"task_categories:text-classification",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"language:pt",
"license:cc-by-4.0",
"region:us"
] | fake-news-UFG | null | null | 0 | 3 | 2023-08-18T14:25:35 | ---
license: cc-by-4.0
pretty_name: Central de Fatos
task_categories:
- text-classification
language:
- pt
language_details: pt-BR
size_categories:
- 10K<n<100K
multilinguality:
- monolingual
language_creators:
- found
DOI: 10.5281/zenodo.5191798
---
# Central de Fatos
## Dataset Description
- **Homepage:**
- **Repository:** [https://zenodo.org/record/5191798](https://zenodo.org/record/5191798)
- **Paper:** [https://sol.sbc.org.br/index.php/dsw/article/view/17421/17257](https://sol.sbc.org.br/index.php/dsw/article/view/17421/17257)
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
In recent times, interest in research dissecting the dissemination and prevention of misinformation in the online environment has spiked dramatically.
Given that scenario, a recurring obstacle is the unavailability of public datasets containing fact-checked instances.
In this work, we performed an extensive collection of such instances from most of the major internationally recognized Brazilian fact-checking agencies.
In particular, this paper offers the research community a novel dataset containing fact-checks from various trustworthy sources on a wide range of topics.
In total, the resulting collection encompasses 11,647 fact-check instances collected across 6 different agencies, and it can be used for several studies on identifying and combating misinformation on digital platforms in Brazil.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
The dataset is in Portuguese.
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
If you use "Central de Fatos", please cite:
```bibtex
@inproceedings{dsw,
author = {João Couto and Breno Pimenta and Igor M. de Araújo and Samuel Assis and Julio C. S. Reis and Ana Paula da Silva and Jussara Almeida and Fabrício Benevenuto},
title = {Central de Fatos: Um Repositório de Checagens de Fatos},
booktitle = {Anais do III Dataset Showcase Workshop},
location = {Rio de Janeiro},
year = {2021},
keywords = {},
issn = {0000-0000},
pages = {128--137},
publisher = {SBC},
address = {Porto Alegre, RS, Brasil},
doi = {10.5753/dsw.2021.17421},
url = {https://sol.sbc.org.br/index.php/dsw/article/view/17421}
}
```
### Contributions
Thanks to [@ju-resplande](https://github.com/ju-resplande) for adding this dataset. | 3,252 | [
[
-0.03240966796875,
-0.0307464599609375,
0.026275634765625,
0.021392822265625,
-0.020782470703125,
0.00145721435546875,
-0.01470947265625,
-0.041717529296875,
0.0268402099609375,
0.039764404296875,
-0.030609130859375,
-0.0653076171875,
-0.055908203125,
0.0285... |
ticoAg/Chinese-medical-dialogue | 2023-08-18T15:33:15.000Z | [
"license:apache-2.0",
"region:us"
] | ticoAg | null | null | 4 | 3 | 2023-08-18T15:25:05 | ---
license: apache-2.0
raw csv: 356 MB
examples: 799743
---
# Note
Processed data from [Chinese-medical-dialogue-data](https://github.com/Toyhom/Chinese-medical-dialogue-data).
Single-turn doctor-patient dialogues (in Chinese).
## raw data samples
|department|title|ask|answer|
|----------|-----|---|------|
|心血管科|高血压患者能吃党参吗?|我有高血压这两天女婿来的时候给我拿了些党参泡水喝,您好高血压可以吃党参吗?|高血压病人可以口服党参的。党参有降血脂,降血压的作用,可以彻底消除血液中的垃圾,从而对冠心病以及心血管疾病的患者都有一定的稳定预防工作作用,因此平时口服党参能远离三高的危害。另外党参除了益气养血,降低中枢神经作用,调整消化系统功能,健脾补肺的功能。感谢您的进行咨询,期望我的解释对你有所帮助。|
|内分泌科|糖尿病还会进行遗传吗?|糖尿病有隔代遗传吗?我妈是糖尿病,很多年了,也没养好,我现在也是,我妹子也是,我儿子现在二十岁,没什么问题,但是以后会不会也得糖尿病啊,真是难过,我现在就已经开始让他控制点吃东西。|2型糖尿病的隔代遗传概率为父母患糖尿病,临产的发生率为40%,比一般人患糖尿病,疾病,如何更重要的选择因素基于生活方式的,后天也隔代遗传隔代遗传易感性更公正,增强患糖尿病的风险,低糖低脂肪,平时清淡饮食,适当锻练,增强监测数据,血糖仪买个备取。|
|内分泌科|糖尿病会出现什么症状?|我是不是糖尿病,如何严重,糖尿病的典型症状有哪些?血糖高之后感觉什么东西都不能够吃了,有糖分的东西都不敢吃,怕血糖又高,不知晓是不是变严重了,糖尿病的症状有哪些?|你好,根据你描述的情况看来糖尿病是可以致使血糖异常下降的,可以再次出现三多一少的症状,如喝水多,小便多,饭量大,体重减轻,建议你尽快复诊当地医院内分泌科看一看,需要有让大夫仔细检查你的血糖水平,明确有否糖尿病的情况,及时动用降糖药治疗,平时一定少吃甜食,足量锻练。|
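The raw rows map onto the processed records shown in the next section; a minimal conversion sketch (the input file name here is hypothetical):

```python
import json
import pandas as pd

# Hypothetical file name; columns follow the raw sample table above
df = pd.read_csv("medical_dialogue_raw.csv")

records = [
    {"instruction": row.title, "input": row.ask, "output": row.answer, "history": None}
    for row in df.itertuples()
]

# json.dump serializes Python None as JSON null
with open("processed.json", "w", encoding="utf-8") as f:
    json.dump(records, f, ensure_ascii=False, indent=2)
```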
## processed data sample
```json
[
  {"instruction": "title", "input": "ask", "output": "answer", "history": null}
]
``` | 1,057 | [
[
-0.0284881591796875,
-0.050384521484375,
0.037841796875,
0.021148681640625,
-0.033172607421875,
-0.03460693359375,
-0.003997802734375,
-0.010894775390625,
0.0280914306640625,
0.05712890625,
-0.044525146484375,
-0.060272216796875,
-0.037078857421875,
0.004619... |
KhalfounMehdi/dermatology_anomaly_detection | 2023-08-18T16:36:55.000Z | [
"region:us"
] | KhalfounMehdi | null | null | 0 | 3 | 2023-08-18T16:11:58 | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype: string
splits:
- name: train
num_bytes: 51523748.0
num_examples: 656
download_size: 51529683
dataset_size: 51523748.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- config_name: KhalfounMehdi--dermatology_anomaly_detection
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "dermatology_anomaly_detection"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 611 | [
[
-0.0287628173828125,
-0.020538330078125,
0.0252838134765625,
0.00446319580078125,
-0.009735107421875,
-0.0145721435546875,
0.021575927734375,
-0.0141754150390625,
0.058502197265625,
0.0299072265625,
-0.06524658203125,
-0.07861328125,
-0.04681396484375,
-0.02... |
BaekRok/vishing_data | 2023-08-19T01:24:22.000Z | [
"region:us"
] | BaekRok | null | null | 0 | 3 | 2023-08-18T17:20:33 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
dataset_info:
features:
- name: audio
dtype: audio
- name: labels
list:
- name: start
dtype: float64
- name: text
dtype: string
- name: end
dtype: float64
- name: speaker
dtype: string
- name: label
dtype: string
- name: seg_num
dtype: int64
- name: total_seg
dtype: int64
- name: prob
dtype: float64
splits:
- name: train
num_bytes: 48988768453.712
num_examples: 16496
- name: validation
num_bytes: 8026214010.768
num_examples: 2071
- name: test
num_bytes: 8851253927.312
num_examples: 2156
download_size: 16900478025
dataset_size: 65866236391.79199
---
# Dataset Card for "vishing_data"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 1,025 | [
[
-0.03668212890625,
-0.0151519775390625,
0.0170135498046875,
0.0150146484375,
-0.019134521484375,
-0.0103607177734375,
0.0207061767578125,
-0.02154541015625,
0.059234619140625,
0.038818359375,
-0.057891845703125,
-0.05328369140625,
-0.041351318359375,
-0.0278... |
Jing24/seperate_all0 | 2023-08-18T21:41:53.000Z | [
"region:us"
] | Jing24 | null | null | 0 | 3 | 2023-08-18T21:41:51 | ---
dataset_info:
features:
- name: id
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
struct:
- name: answer_start
sequence: int32
- name: text
sequence: string
splits:
- name: train
num_bytes: 8063353
num_examples: 9208
download_size: 1455012
dataset_size: 8063353
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "seperate_all0"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 663 | [
[
-0.043975830078125,
-0.018524169921875,
0.0196533203125,
0.0267486572265625,
-0.027679443359375,
0.004093170166015625,
0.0192108154296875,
-0.0170745849609375,
0.0809326171875,
0.033782958984375,
-0.053985595703125,
-0.03973388671875,
-0.046051025390625,
-0.... |
celiksa/mydataset | 2023-08-19T19:42:36.000Z | [
"region:us"
] | celiksa | null | null | 0 | 3 | 2023-08-19T19:42:23 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
TheFatBallerina/Flipkart_description | 2023-08-20T09:12:54.000Z | [
"region:us"
] | TheFatBallerina | null | null | 0 | 3 | 2023-08-20T09:12:12 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
lilacai/lilac-wikitext-2-raw-v1 | 2023-10-13T02:01:43.000Z | [
"region:us"
] | lilacai | null | null | 0 | 3 | 2023-08-21T05:15:15 | This dataset is generated by [Lilac](http://lilacml.com) for a HuggingFace Space: [huggingface.co/spaces/lilacai/lilac](https://huggingface.co/spaces/lilacai/lilac).
Original dataset: [https://huggingface.co/datasets/wikitext](https://huggingface.co/datasets/wikitext)
Lilac dataset config:
```namespace: lilac
name: wikitext-2-raw-v1
source:
dataset_name: wikitext
config_name: wikitext-2-raw-v1
source_name: huggingface
embeddings:
- path: text
embedding: gte-small
signals:
- path: text
signal:
signal_name: near_dup
- path: text
signal:
signal_name: pii
- path: text
signal:
signal_name: lang_detection
- path: text
signal:
signal_name: text_statistics
- path: text
signal:
embedding: gte-small
namespace: lilac
concept_name: legal-termination
signal_name: concept_score
- path: text
signal:
embedding: gte-small
namespace: lilac
concept_name: negative-sentiment
signal_name: concept_score
- path: text
signal:
embedding: gte-small
namespace: lilac
concept_name: non-english
signal_name: concept_score
- path: text
signal:
embedding: gte-small
namespace: lilac
concept_name: positive-sentiment
signal_name: concept_score
- path: text
signal:
embedding: gte-small
namespace: lilac
concept_name: profanity
signal_name: concept_score
- path: text
signal:
embedding: gte-small
namespace: lilac
concept_name: question
signal_name: concept_score
- path: text
signal:
embedding: gte-small
namespace: lilac
concept_name: source-code
signal_name: concept_score
- path: text
signal:
embedding: gte-small
namespace: lilac
concept_name: toxicity
signal_name: concept_score
- path: text
signal:
signal_name: cluster_dbscan
- path: text
signal:
embedding: gte-small
signal_name: cluster_hdbscan
settings:
ui:
media_paths:
- text
markdown_paths: []
```
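The enriched export itself can be pulled with the standard `datasets` API; a minimal sketch (exactly which signal and embedding columns are present depends on the Lilac export):

```python
from datasets import load_dataset

# Pull the Lilac export and inspect the columns it carries
ds = load_dataset("lilacai/lilac-wikitext-2-raw-v1")
print(ds)  # available splits
split = next(iter(ds.values()))
print(split.features)  # column layout, including any signal outputs
print(split[0]["text"][:200])
```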
| 3,239 | [
[
-0.050018310546875,
-0.0440673828125,
0.02978515625,
0.03533935546875,
-0.01078033447265625,
0.00954437255859375,
-0.004062652587890625,
-0.00818634033203125,
0.056610107421875,
0.017608642578125,
-0.07318115234375,
-0.0662841796875,
-0.04510498046875,
0.013... |
valurank/Explicit_content | 2023-08-21T14:14:35.000Z | [
"task_categories:text-classification",
"size_categories:1K<n<10K",
"license:other",
"region:us"
] | valurank | null | null | 0 | 3 | 2023-08-21T13:52:47 | ---
license: other
language:
- en
multilinguality:
- monolingual
task_categories:
- text-classification
task_ids:
- multi-class-classification
size_categories:
- 1K<n<10K
---
# Dataset Card for Explicit content detection
## Table of Contents
- [Dataset Description](#dataset-description)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Source Data](#source-data)
## Dataset Description
1,189 news articles classified into two categories: "Explicit" if the article contains explicit content and "Not_Explicit" if it does not.
## Languages
The text in the dataset is in English
## Dataset Structure
The dataset consists of two columns: Article and Category.
The Article column contains the news article, and the Category column contains the class each article belongs to, i.e. whether or not it contains explicit content.
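A minimal usage sketch (assuming a `train` split and the two columns described above):

```python
from datasets import load_dataset

ds = load_dataset("valurank/Explicit_content", split="train")

# Count how many of the 1,189 articles are flagged as explicit
explicit = ds.filter(lambda row: row["Category"] == "Explicit")
print(f"{len(explicit)} explicit articles out of {len(ds)}")
```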
## Source Data
The dataset is queried from the Otherweb database. | 988 | [
[
-0.0223541259765625,
-0.04083251953125,
-0.003749847412109375,
0.04052734375,
-0.030670166015625,
-0.0034770965576171875,
-0.016143798828125,
-0.022857666015625,
0.00348663330078125,
0.040252685546875,
-0.04150390625,
-0.08050537109375,
-0.042724609375,
0.02... |
focia/yt_thumbnail_dataset | 2023-08-23T12:18:10.000Z | [
"region:us"
] | focia | null | null | 1 | 3 | 2023-08-23T12:08:14 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype: int64
- name: title
dtype: string
- name: videoId
dtype: string
- name: channelId
dtype: string
- name: subscribers
dtype: float64
- name: isVerified
dtype: bool
- name: keywords
dtype: string
- name: country
dtype: string
- name: description
dtype: string
- name: views
dtype: int64
- name: published
dtype: timestamp[us]
- name: length
dtype: string
- name: url
dtype: string
splits:
- name: train
num_bytes: 3917528866.3737946
num_examples: 28276
- name: test
num_bytes: 1010554492.3202056
num_examples: 7070
download_size: 5006700814
dataset_size: 4928083358.694
---
# Dataset Card for "yt_thumbnail_dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 1,057 | [
[
-0.03997802734375,
-0.02496337890625,
0.0108795166015625,
0.0103607177734375,
-0.03533935546875,
-0.005321502685546875,
0.02606201171875,
-0.006134033203125,
0.07012939453125,
0.035980224609375,
-0.057952880859375,
-0.05035400390625,
-0.04351806640625,
-0.02... |
mozci/tinysketch | 2023-08-24T03:40:02.000Z | [
"task_categories:text-to-image",
"language_creators:machine-generated",
"multilinguality:monolingual",
"size_categories:n<10K",
"source_datasets:FS-COCO",
"language:en",
"license:cc-by-nc-sa-4.0",
"region:us"
] | mozci | null | null | 0 | 3 | 2023-08-24T02:45:44 | ---
license: cc-by-nc-sa-4.0
language:
- en
language_creators:
- machine-generated
multilinguality:
- monolingual
pretty_name: 'Sketch Scene Descriptions'
size_categories:
- n<10K
source_datasets:
- FS-COCO
tags: []
task_categories:
- text-to-image
task_ids: []
---
# Dataset Card for Sketch Scene Descriptions
_Dataset used to train [Sketch Scene text to image model]()_
We advance sketch research to scenes with the first dataset of freehand scene sketches, FS-COCO. With practical applications in mind, we collect sketches that convey scene content well but can be drawn within a few minutes by a person with any level of sketching skill. Our dataset comprises around 10,000 freehand scene vector sketches, with per-point space-time information, drawn by 100 non-expert individuals, offering both object- and scene-level abstraction. Each sketch is augmented with its text description.
For each row, the dataset contains `image` and `text` keys. `image` is a varying size PIL jpeg, and `text` is the accompanying text caption. Only a train split is provided.
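A minimal loading sketch using the standard `datasets` API:

```python
from datasets import load_dataset

# Only a train split is provided
ds = load_dataset("mozci/tinysketch", split="train")

example = ds[0]
example["image"].show()  # varying-size PIL image of the sketch
print(example["text"])   # accompanying scene description
```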
## Citation
If you use this dataset, please cite it as:
```
@inproceedings{fscoco,
title={FS-COCO: Towards Understanding of Freehand Sketches of Common Objects in Context.},
author={Chowdhury, Pinaki Nath and Sain, Aneeshan and Bhunia, Ayan Kumar and Xiang, Tao and Gryaditskaya, Yulia and Song, Yi-Zhe},
booktitle={ECCV},
year={2022}
}
``` | 1,412 | [
[
-0.01666259765625,
-0.0244903564453125,
0.013031005859375,
0.0272979736328125,
-0.051727294921875,
-0.0123138427734375,
0.0159912109375,
-0.03631591796875,
0.03570556640625,
0.042816162109375,
-0.0478515625,
-0.0295867919921875,
-0.023193359375,
-0.008445739... |
HSiTori/scienceQA | 2023-08-24T03:54:07.000Z | [
"task_categories:text-generation",
"size_categories:1K<n<10K",
"language:en",
"license:apache-2.0",
"region:us"
] | HSiTori | null | null | 0 | 3 | 2023-08-24T03:06:38 | ---
license: apache-2.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 1147845
num_examples: 2135
- name: validation
num_bytes: 404325
num_examples: 764
- name: test
num_bytes: 419010
num_examples: 789
download_size: 707887
dataset_size: 1971180
task_categories:
- text-generation
language:
- en
size_categories:
- 1K<n<10K
---
# Filter: no image && hint != '' | 616 | [
[
-0.0513916015625,
-0.037811279296875,
0.044769287109375,
0.01129913330078125,
-0.08953857421875,
0.00324249267578125,
0.0391845703125,
-0.004245758056640625,
0.037017822265625,
0.05108642578125,
-0.06549072265625,
-0.031707763671875,
-0.05999755859375,
0.003... |
KushT/reuters-21578-train-val-test | 2023-08-25T12:24:45.000Z | [
"task_categories:text-classification",
"size_categories:1K<n<10K",
"language:en",
"license:apache-2.0",
"region:us"
] | KushT | null | null | 0 | 3 | 2023-08-25T12:18:15 | ---
license: apache-2.0
size_categories:
- 1K<n<10K
task_categories:
- text-classification
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
dataset_info:
features:
- name: text
dtype: string
- name: labels
sequence: int64
splits:
- name: train
num_bytes: 10816829
num_examples: 6988
- name: validation
num_bytes: 1178067
num_examples: 781
- name: test
num_bytes: 4513694
num_examples: 3019
download_size: 5088303
dataset_size: 16508590
language:
- en
---
Dataset from [Kaggle](https://www.kaggle.com/datasets/nltkdata/reuters/code)
The split is done on the training set using `iterative_train_test_split` from [scikit-multilearn](http://scikit.ml/index.html).
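A minimal sketch of how such a split is produced (toy arrays here; `iterative_train_test_split` follows scikit-multilearn's documented signature):

```python
import numpy as np
from skmultilearn.model_selection import iterative_train_test_split

# Toy stand-ins: X holds document indices, y is a multi-hot matrix over 90 labels
X = np.arange(100).reshape(-1, 1)
y = (np.random.rand(100, 90) > 0.9).astype(int)

# Stratified split that keeps label combinations balanced across the sets
X_train, y_train, X_test, y_test = iterative_train_test_split(X, y, test_size=0.1)
print(X_train.shape, X_test.shape)
```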
There are the following 90 labels.
'interest',
'groundnut-oil',
'potato',
'palmkernel',
'sun-meal',
'lei',
'cotton-oil',
'sunseed',
'sorghum',
'barley',
'dlr',
'groundnut',
'wpi',
'strategic-metal',
'livestock',
'l-cattle',
'lin-oil',
'gold',
'fuel',
'nzdlr',
'oat',
'soybean',
'hog',
'tin',
'lumber',
'bop',
'soy-oil',
'dfl',
'nkr',
'gas',
'carcass',
'silver',
'coffee',
'gnp',
'crude',
'rapeseed',
'alum',
'copper',
'housing',
'grain',
'cocoa',
'sun-oil',
'rice',
'jobs',
'rubber',
'jet',
'tea',
'retail',
'ship',
'corn',
'meal-feed',
'naphtha',
'sugar',
'rand',
'platinum',
'money-supply',
'yen',
'nickel',
'income',
'cpu',
'copra-cake',
'instal-debt',
'coconut-oil',
'cotton',
'rye',
'palm-oil',
'acq',
'wheat',
'propane',
'dmk',
'reserves',
'rape-oil',
'money-fx',
'heat',
'ipi',
'castor-oil',
'earn',
'iron-steel',
'palladium',
'coconut',
'veg-oil',
'nat-gas',
'pet-chem',
'lead',
'trade',
'cpi',
'oilseed',
'zinc',
'soy-meal',
'orange' | 2,052 | [
[
-0.0267181396484375,
-0.0099945068359375,
0.0039825439453125,
0.0176849365234375,
-0.01605224609375,
0.034698486328125,
-0.0123443603515625,
-0.01316070556640625,
0.028106689453125,
0.0166015625,
-0.03912353515625,
-0.045257568359375,
-0.057281494140625,
0.0... |
probably0/cryptocurrency-price-data | 2023-08-26T05:26:55.000Z | [
"region:us"
] | probably0 | null | null | 1 | 3 | 2023-08-26T05:24:41 | # Crypto Data Card for Multi-Blockchain Cryptocurrencies
## Dataset Name
Crypto Multi-Blockchain Historical Prices
## Dataset Version
v1.0, Date: Up to August 25, 2023
## Description
This dataset constitutes an extensive compilation of historical pricing data, encapsulating 163 distinct cryptocurrencies across diverse blockchain ecosystems. Specifically, the dataset spans a considerable temporal range, from July 17, 2010, to August 25, 2023. This corpus is organized to facilitate multidisciplinary scholarly investigations, offering rich metrics including but not limited to opening, highest, lowest, and closing prices for each cryptocurrency on a daily basis. Furthermore, the dataset categorizes the cryptocurrencies according to the underlying blockchain technology, thus aiding in more nuanced analyses.
## Categories Based on Blockchain
- **Bitcoin-based**: BTC, BCH, BSV, BTG
- **Ethereum-based**: ETH, USDT, BAT, COMP, DAI, MKR, SNX, UNI, YFI, LINK, MANA, etc.
- **Binance Smart Chain**: BNB, BUSD, CAKE
- **Cardano**: ADA
- **Polkadot**: DOT, KSM
- **Solana**: SOL
- **EOS**: EOS
- **Tezos**: XTZ
- **Algorand**: ALGO
- **Ripple**: XRP
- **Other**: (List other blockchain categories here)
## Fields in Data
- **Ticker**: The ticker symbol of the cryptocurrency (e.g., BTC for Bitcoin).
- **Date**: The date the data was collected, formatted in MM/DD/YY.
- **Open**: The opening price of the cryptocurrency on the given day.
- **High**: The highest recorded price of the cryptocurrency on the given day.
- **Low**: The lowest recorded price of the cryptocurrency on the given day.
- **Close**: The closing price of the cryptocurrency on the given day.
## Example Data Entry
| Ticker | Date | Open | High | Low | Close |
| ------ | ------- | ----- | ----- | ----- | ----- |
| BTC | 1/19/23 | 20772 | 21162 | 20659 | 20941 |
## Use Case
The dataset is instrumental for a range of academic and applied research contexts, including but not limited to:
- Temporal trend analysis
- Predictive modeling and analytics
- Portfolio optimization and risk assessment
## Data Collection Method
The data is rigorously sourced from multiple, reputable exchanges and is subsequently consolidated. All prices are denominated in USD.
## Limitations
- Historical prices may not be predictive of future financial trajectories.
- The dataset, while extensive, may not encompass the most recent market fluctuations due to periodic updating.
## Legal and Ethical Considerations
- The dataset is intended solely for academic and informational purposes.
- Users bear the responsibility for ensuring compliance with applicable legal and ethical standards.
## Data Format
Each cryptocurrency is stored in a separate CSV file, identified by its ticker symbol (e.g., `BTC.csv`, `ETH.csv`, `ADA.csv`, etc.).
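A minimal reading sketch (assuming each file carries the fields listed above):

```python
import pandas as pd

# Load one per-ticker file; dates are formatted MM/DD/YY
btc = pd.read_csv("BTC.csv")
btc["Date"] = pd.to_datetime(btc["Date"], format="%m/%d/%y")
btc = btc.sort_values("Date")

# Example: daily close-to-close returns for trend analysis
btc["return"] = btc["Close"].pct_change()
print(btc[["Date", "Close", "return"]].tail())
```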
## Maintenance
The dataset will undergo periodic updates to ensure its continued relevance and comprehensiveness.
## Acknowledgments
The dataset is an aggregation of data sourced from multiple exchanges, consolidated and curated by Probably 0 AI Team.
| 3,067 | [
[
-0.0306243896484375,
-0.05224609375,
0.0022144317626953125,
-0.003631591796875,
-0.0211181640625,
0.00981903076171875,
-0.003147125244140625,
-0.041473388671875,
0.034881591796875,
0.035980224609375,
-0.03839111328125,
-0.08160400390625,
-0.029327392578125,
... |
learn3r/SDG_scimed | 2023-08-26T21:32:18.000Z | [
"region:us"
] | learn3r | null | null | 0 | 3 | 2023-08-26T21:32:16 | ---
dataset_info:
features:
- name: jargon
dtype: string
- name: definition
dtype: string
splits:
- name: train
num_bytes: 45723
num_examples: 200
download_size: 29274
dataset_size: 45723
---
# Dataset Card for "SDG_scimed"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 386 | [
[
-0.03497314453125,
0.003833770751953125,
0.032135009765625,
0.0185089111328125,
-0.0186309814453125,
0.004688262939453125,
0.022552490234375,
-0.00958251953125,
0.07037353515625,
0.02496337890625,
-0.058746337890625,
-0.061981201171875,
-0.053924560546875,
-... |
michelangelo-engs/RedPajama-Data-1T-1024Sample | 2023-08-27T23:57:33.000Z | [
"task_categories:text-generation",
"size_categories:1K<n<10K",
"language:en",
"license:apache-2.0",
"region:us"
] | michelangelo-engs | RedPajama is a clean-room, fully open-source implementation of the LLaMa dataset. This is a 1B-token sample of the full dataset. | null | 0 | 3 | 2023-08-27T19:38:02 | ---
license: apache-2.0
task_categories:
- text-generation
language:
- en
pretty_name: 1k samples of RedPajama-Data for development
size_categories:
- 1K<n<10K
---
1k samples of RedPajama-Data for development | 208 | [
[
-0.0297088623046875,
-0.03643798828125,
-0.02215576171875,
0.05706787109375,
-0.016357421875,
-0.0267181396484375,
-0.00835418701171875,
-0.046539306640625,
0.072021484375,
0.076904296875,
-0.0684814453125,
-0.05743408203125,
-0.040924072265625,
0.0028362274... |
vitaliy-sharandin/ai-incidents | 2023-09-05T23:36:42.000Z | [
"region:us"
] | vitaliy-sharandin | null | null | 0 | 3 | 2023-08-29T16:00:05 | ---
dataset_info:
features:
- name: _id
dtype: string
- name: incident_id
dtype: int64
- name: date
dtype: timestamp[ns]
- name: reports
dtype: string
- name: Alleged deployer of AI system
dtype: string
- name: Alleged developer of AI system
dtype: string
- name: Alleged harmed or nearly harmed parties
dtype: string
- name: description
dtype: string
- name: title
dtype: string
- name: year
dtype: int64
- name: spacy_negative_outcomes
dtype: string
- name: keybert_negative_outcomes
dtype: string
- name: Cluster
dtype: string
splits:
- name: train
num_bytes: 271118
num_examples: 514
download_size: 165345
dataset_size: 271118
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "ai-incidents"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 983 | [
[
-0.033966064453125,
-0.0261688232421875,
0.02508544921875,
0.0198516845703125,
0.0016088485717773438,
-0.006862640380859375,
0.032135009765625,
-0.0272674560546875,
0.04193115234375,
0.0209503173828125,
-0.056793212890625,
-0.03662109375,
-0.033477783203125,
... |
lamini/text_to_sql_finetune | 2023-09-01T05:39:56.000Z | [
"region:us"
] | lamini | null | null | 2 | 3 | 2023-08-30T02:19:28 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: input
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 34818227
num_examples: 16428
- name: test
num_bytes: 1050788
num_examples: 1034
download_size: 3691335
dataset_size: 35869015
---
# Dataset Card for "text_to_sql_finetune"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 586 | [
[
-0.0304718017578125,
-0.045257568359375,
0.01580810546875,
0.01444244384765625,
-0.0273590087890625,
-0.0230712890625,
-0.00534820556640625,
-0.0009703636169433594,
0.046783447265625,
0.043731689453125,
-0.056060791015625,
-0.0535888671875,
-0.0305633544921875,
... |
dim/dialogsum_3k | 2023-08-31T20:08:56.000Z | [
"region:us"
] | dim | null | null | 0 | 3 | 2023-08-31T20:08:27 | ---
dataset_info:
features:
- name: id
dtype: string
- name: dialogue
dtype: string
- name: summary
dtype: string
- name: topic
dtype: string
splits:
- name: train
num_bytes: 2754324.55858748
num_examples: 3000
download_size: 1570734
dataset_size: 2754324.55858748
---
# Dataset Card for "dialogsum_3k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 477 | [
[
-0.03375244140625,
-0.0176544189453125,
0.0119781494140625,
0.01064300537109375,
-0.01461029052734375,
-0.01922607421875,
0.034332275390625,
-0.01397705078125,
0.048675537109375,
0.055908203125,
-0.049957275390625,
-0.04736328125,
-0.0252838134765625,
-0.008... |
AbhayBhan/SalesData | 2023-09-01T07:31:45.000Z | [
"region:us"
] | AbhayBhan | null | null | 1 | 3 | 2023-09-01T07:27:23 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
JasiekKaczmarczyk/maestro-sustain-quantized | 2023-09-15T10:26:58.000Z | [
"region:us"
] | JasiekKaczmarczyk | null | null | 0 | 3 | 2023-09-01T08:29:47 | ---
dataset_info:
features:
- name: midi_filename
dtype: string
- name: pitch
sequence: int16
length: 128
- name: dstart
sequence: float32
length: 128
- name: duration
sequence: float32
length: 128
- name: velocity
sequence: int16
length: 128
- name: dstart_bin
sequence: int8
length: 128
- name: duration_bin
sequence: int8
length: 128
- name: velocity_bin
sequence: int8
length: 128
splits:
- name: train
num_bytes: 89689142
num_examples: 43727
- name: validation
num_bytes: 10114654
num_examples: 4929
- name: test
num_bytes: 11695068
num_examples: 5695
download_size: 0
dataset_size: 111498864
---
# Dataset Card for "maestro-sustain-quantized"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 895 | [
[
-0.038177490234375,
-0.027374267578125,
0.01332855224609375,
0.0268096923828125,
-0.00737762451171875,
0.016876220703125,
0.0123138427734375,
0.01177978515625,
0.05963134765625,
0.02874755859375,
-0.058441162109375,
-0.045196533203125,
-0.024139404296875,
-0... |
factored/saleswiz_is_about_company | 2023-09-14T20:01:36.000Z | [
"region:us"
] | factored | null | null | 0 | 3 | 2023-09-01T19:00:40 | ---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': '0'
'1': '1'
splits:
- name: train
num_bytes: 187533.28961748633
num_examples: 640
- name: validation
num_bytes: 80580.71038251366
num_examples: 275
download_size: 177218
dataset_size: 268114.0
---
# Dataset Card for "saleswiz_is_about_company"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 552 | [
[
-0.033843994140625,
-0.029205322265625,
0.00009018182754516602,
0.016937255859375,
-0.0208282470703125,
0.005992889404296875,
0.0118560791015625,
-0.016815185546875,
0.05841064453125,
0.04254150390625,
-0.079833984375,
-0.061065673828125,
-0.004718780517578125,
... |
TinyPixel/lima-m2 | 2023-09-26T03:41:46.000Z | [
"region:us"
] | TinyPixel | null | null | 1 | 3 | 2023-09-02T13:34:27 | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 2957490
num_examples: 1030
download_size: 1697765
dataset_size: 2957490
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "lima-m2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 435 | [
[
-0.045684814453125,
-0.026611328125,
0.028167724609375,
0.035614013671875,
-0.03253173828125,
-0.0166015625,
0.040863037109375,
-0.006591796875,
0.06658935546875,
0.03643798828125,
-0.06329345703125,
-0.04937744140625,
-0.06622314453125,
-0.01529693603515625... |
pszemraj/wikitext_document_level-103_reformat | 2023-09-03T00:56:06.000Z | [
"task_categories:text-generation",
"source_datasets:EleutherAI/wikitext_document_level",
"language:en",
"license:apache-2.0",
"region:us"
] | pszemraj | null | null | 0 | 3 | 2023-09-03T00:45:04 | ---
license: apache-2.0
source_datasets: EleutherAI/wikitext_document_level
task_categories:
- text-generation
language:
- en
---
# wikitext_document_level - 103-v1 - reformatted
wikitext-103-v1 at the document level, with regexes applied to all rows to attempt to fix the spacing/punctuation issues. | 292 | [
[
0.0171661376953125,
-0.03729248046875,
0.0072021484375,
0.064453125,
0.01076507568359375,
0.00582122802734375,
0.04803466796875,
0.00672149658203125,
0.04620361328125,
0.0175933837890625,
-0.045501708984375,
-0.03253173828125,
-0.0016336441040039062,
0.01588... |
tinhpx2911/viwiki-processed | 2023-09-03T10:05:59.000Z | [
"region:us"
] | tinhpx2911 | null | null | 0 | 3 | 2023-09-03T10:00:31 | ---
dataset_info:
features:
- name: text
dtype: string
- name: title
dtype: string
- name: categories
dtype: string
splits:
- name: train
num_bytes: 1373835675
num_examples: 419581
download_size: 722564655
dataset_size: 1373835675
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "viwiki-processed"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 527 | [
[
-0.037353515625,
-0.029632568359375,
0.01238250732421875,
0.01132965087890625,
-0.0169677734375,
-0.003314971923828125,
-0.005512237548828125,
-0.016815185546875,
0.05792236328125,
0.04876708984375,
-0.07373046875,
-0.045074462890625,
-0.033111572265625,
-0.... |
deven367/babylm-100M | 2023-09-06T04:28:32.000Z | [
"region:us"
] | deven367 | null | null | 0 | 3 | 2023-09-04T02:41:10 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: valid
path: data/valid-*
- split: test
path: data/test-*
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 567957485
num_examples: 10176300
- name: valid
num_bytes: 54930583
num_examples: 986022
- name: test
num_bytes: 59992087
num_examples: 1008854
download_size: 429914407
dataset_size: 682880155
---
# Dataset Card for "babylm-100M"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 657 | [
[
-0.042755126953125,
-0.0182037353515625,
-0.00447845458984375,
0.0222015380859375,
-0.0218048095703125,
-0.006275177001953125,
0.0218353271484375,
-0.004192352294921875,
0.04425048828125,
0.0361328125,
-0.06463623046875,
-0.04931640625,
-0.03912353515625,
-0... |
Fin-Eval/Fin-Eval | 2023-09-07T09:49:24.000Z | [
"license:cc-by-4.0",
"region:us"
] | Fin-Eval | null | null | 1 | 3 | 2023-09-04T08:14:16 | ---
license: cc-by-4.0
---
<p align="center" width="100%">
<a ><img src="https://raw.githubusercontent.com/alipay/financial_evaluation_dataset/main/resources/fineval_logo.png" title="Fin-Eval" alt="Fin-Eval" style="width: 30%; min-width: 300px; display: block; margin: auto;"></a>
</p>
<div style="text-align: center;">
<p>Fin-Eval: Financial Task Evaluation Suite</p>
</div>
### Task Overview
Fin-Eval is a professional evaluation suite built for large models in the financial industry, covering 28 subtasks across five ability categories: cognition, generation, financial knowledge, financial logic, and safety/compliance. The tasks were designed with the characteristics of large models in mind, such as in-context learning, tool calling, and chain-of-thought (CoT).
Fin-Eval spans multiple financial domains, including wealth management, insurance, and investment research. Its sources include data from Ant Group's business lines, open-source data, and model distillation; after de-identification, text clustering, and careful corpus filtering, the data was further refined in depth by financial-domain experts. We hope Fin-Eval serves not only as a financial evaluation dataset but also as a starting point that helps accelerate progress in the field and expand the boundaries of financial large-model applications.
Fin-Eval currently releases public evaluation examples; a hosted, platform-based evaluation service will be opened up going forward, providing one-stop evaluation for the industry.
### Loading the Data
```python
from datasets import load_dataset
dataset=load_dataset("Fin-Eval/Fin-Eval", name="test")
print(dataset["test"][0])
# {'id': 1, '大类': '认知', '任务': '金融意图理解', '问题': '近期美元汇率有没有大幅波动', '答案': '行情解读', '解释': None}
```
For more details on dataset usage and the evaluation methodology, see the [github page](https://github.com/alipay/financial_evaluation_dataset).
To obtain the full dataset, please request authorization by email: Fin-eval@antgroup.com. | 1,048 | [
[
-0.0279083251953125,
-0.051422119140625,
-0.005893707275390625,
-0.0030002593994140625,
-0.02032470703125,
0.004772186279296875,
-0.0014162063598632812,
-0.01153564453125,
0.019561767578125,
0.025360107421875,
-0.0227813720703125,
-0.03424072265625,
-0.025680541... |
Existance/Marathi_summarization | 2023-09-04T09:54:14.000Z | [
"region:us"
] | Existance | null | null | 0 | 3 | 2023-09-04T09:53:37 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
chats-bug/agent_action_plan | 2023-09-04T11:22:49.000Z | [
"region:us"
] | chats-bug | null | null | 4 | 3 | 2023-09-04T11:21:45 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: prompt
dtype: string
splits:
- name: train
num_bytes: 2487201.95821727
num_examples: 861
- name: test
num_bytes: 623967.0417827298
num_examples: 216
download_size: 0
dataset_size: 3111169.0
---
# Dataset Card for "agent_action_plan"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 559 | [
[
-0.0251312255859375,
-0.01367950439453125,
0.0127410888671875,
0.0148162841796875,
0.0041046142578125,
-0.0007429122924804688,
0.034210205078125,
-0.00357818603515625,
0.07269287109375,
0.05157470703125,
-0.06610107421875,
-0.047271728515625,
-0.03240966796875,
... |
frankier/multiscale_rt_critics_subsets | 2023-10-04T06:16:28.000Z | [
"region:us"
] | frankier | null | null | 0 | 3 | 2023-09-04T12:00:24 | ---
dataset_info:
- config_name: multiscale_rt_critics
features:
- name: movie_title
dtype: string
- name: publisher_name
dtype: string
- name: critic_name
dtype: string
- name: text
dtype: string
- name: review_score
dtype: string
- name: grade_type
dtype: string
- name: orig_num
dtype: float32
- name: orig_denom
dtype: float32
- name: includes_zero
dtype: bool
- name: label
dtype: uint8
- name: scale_points
dtype: uint8
- name: multiplier
dtype: uint8
- name: task_ids
dtype: uint32
splits:
- name: train
num_bytes: 4951005
num_examples: 23182
- name: test
num_bytes: 1644530
num_examples: 7745
- name: validation
num_bytes: 1646302
num_examples: 7731
download_size: 0
dataset_size: 8241837
- config_name: rt_critics_big_irregular_5
features:
- name: movie_title
dtype: string
- name: publisher_name
dtype: string
- name: critic_name
dtype: string
- name: text
dtype: string
- name: review_score
dtype: string
- name: grade_type
dtype: string
- name: orig_num
dtype: float32
- name: orig_denom
dtype: float32
- name: includes_zero
dtype: bool
- name: label
dtype: uint8
- name: scale_points
dtype: uint8
- name: multiplier
dtype: uint8
- name: task_ids
dtype: uint32
- name: orig_group_id
dtype: uint32
splits:
- name: train
num_bytes: 2336759
num_examples: 10312
- name: test
num_bytes: 781228
num_examples: 3441
- name: validation
num_bytes: 779150
num_examples: 3438
download_size: 1927630
dataset_size: 3897137
- config_name: rt_critics_by_critic_1000pl
features:
- name: movie_title
dtype: string
- name: publisher_name
dtype: string
- name: critic_name
dtype: string
- name: text
dtype: string
- name: review_score
dtype: string
- name: grade_type
dtype: string
- name: orig_num
dtype: float32
- name: orig_denom
dtype: float32
- name: includes_zero
dtype: bool
- name: label
dtype: uint8
- name: scale_points
dtype: uint8
- name: multiplier
dtype: uint8
- name: task_ids
dtype: uint32
- name: orig_group_id
dtype: uint32
splits:
- name: train
num_bytes: 27083039
num_examples: 124055
- name: test
num_bytes: 9049344
num_examples: 41406
- name: validation
num_bytes: 9026209
num_examples: 41368
download_size: 22594175
dataset_size: 45158592
- config_name: rt_critics_by_critic_500pl
features:
- name: movie_title
dtype: string
- name: publisher_name
dtype: string
- name: critic_name
dtype: string
- name: text
dtype: string
- name: review_score
dtype: string
- name: grade_type
dtype: string
- name: orig_num
dtype: float32
- name: orig_denom
dtype: float32
- name: includes_zero
dtype: bool
- name: label
dtype: uint8
- name: scale_points
dtype: uint8
- name: multiplier
dtype: uint8
- name: task_ids
dtype: uint32
- name: orig_group_id
dtype: uint32
splits:
- name: train
num_bytes: 41656780
num_examples: 189382
- name: test
num_bytes: 13929707
num_examples: 63263
- name: validation
num_bytes: 13917936
num_examples: 63157
download_size: 35087274
dataset_size: 69504423
- config_name: rt_critics_one
features:
- name: movie_title
dtype: string
- name: publisher_name
dtype: string
- name: critic_name
dtype: string
- name: text
dtype: string
- name: review_score
dtype: string
- name: grade_type
dtype: string
- name: orig_num
dtype: float32
- name: orig_denom
dtype: float32
- name: includes_zero
dtype: bool
- name: label
dtype: uint8
- name: scale_points
dtype: uint8
- name: multiplier
dtype: uint8
splits:
- name: train
num_bytes: 988767
num_examples: 4606
- name: test
num_bytes: 327725
num_examples: 1536
- name: validation
num_bytes: 327038
num_examples: 1536
download_size: 951057
dataset_size: 1643530
configs:
- config_name: multiscale_rt_critics
data_files:
- split: train
path: multiscale_rt_critics/train-*
- split: test
path: multiscale_rt_critics/test-*
- split: validation
path: multiscale_rt_critics/validation-*
- config_name: rt_critics_big_irregular_5
data_files:
- split: train
path: rt_critics_big_irregular_5/train-*
- split: test
path: rt_critics_big_irregular_5/test-*
- split: validation
path: rt_critics_big_irregular_5/validation-*
- config_name: rt_critics_by_critic_1000pl
data_files:
- split: train
path: rt_critics_by_critic_1000pl/train-*
- split: test
path: rt_critics_by_critic_1000pl/test-*
- split: validation
path: rt_critics_by_critic_1000pl/validation-*
- config_name: rt_critics_by_critic_500pl
data_files:
- split: train
path: rt_critics_by_critic_500pl/train-*
- split: test
path: rt_critics_by_critic_500pl/test-*
- split: validation
path: rt_critics_by_critic_500pl/validation-*
- config_name: rt_critics_one
data_files:
- split: train
path: rt_critics_one/train-*
- split: test
path: rt_critics_one/test-*
- split: validation
path: rt_critics_one/validation-*
---
# Dataset Card for "multiscale_rt_critics_subsets"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 5,504 | [
[
-0.068603515625,
-0.018310546875,
0.02099609375,
0.0292205810546875,
-0.012420654296875,
0.007129669189453125,
-0.0103302001953125,
-0.004924774169921875,
0.04949951171875,
0.031982421875,
-0.0662841796875,
-0.04241943359375,
-0.044769287109375,
-0.008903503... |
ChristophSchuhmann/movie-clips | 2023-09-06T09:28:59.000Z | [
"region:us"
] | ChristophSchuhmann | null | null | 0 | 3 | 2023-09-04T15:46:10 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
edmundtsou/keywords_daily_dialog | 2023-09-05T00:17:00.000Z | [
"region:us"
] | edmundtsou | null | null | 0 | 3 | 2023-09-05T00:16:59 | ---
dataset_info:
features:
- name: dialog
sequence: string
- name: ids
dtype: int64
- name: keywords
sequence:
sequence: string
splits:
- name: train
num_bytes: 10163143
num_examples: 13118
download_size: 5240789
dataset_size: 10163143
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "keywords_daily_dialog"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 544 | [
[
-0.021453857421875,
-0.037689208984375,
0.017791748046875,
0.0088043212890625,
-0.025177001953125,
-0.004138946533203125,
0.01435089111328125,
-0.0065460205078125,
0.037078857421875,
0.05029296875,
-0.07275390625,
-0.07171630859375,
-0.03118896484375,
0.0019... |
winterForestStump/10-K_sec_filings | 2023-10-03T19:39:24.000Z | [
"region:us"
] | winterForestStump | null | null | 1 | 3 | 2023-09-06T11:14:43 | ---
dataset_info:
features:
- name: cik
dtype: int64
- name: company_name
dtype: string
- name: filing_date
dtype: timestamp[ns]
- name: Business
dtype: string
- name: Risk Factors
dtype: string
- name: Unresolved Staff Comments
dtype: string
- name: Properties
dtype: string
- name: Legal Proceedings
dtype: string
- name: Mine Safety Disclosures
dtype: string
- name: Market for Registrant’s Common Equity, Related Stockholder Matters and Issuer
Purchases of Equity Securities
dtype: string
- name: Selected Financial Data
dtype: string
- name: Management’s Discussion and Analysis of Financial Condition and Results
of Operations
dtype: string
- name: Quantitative and Qualitative Disclosures about Market Risk
dtype: string
- name: Financial Statements and Supplementary Data
dtype: string
- name: Changes in and Disagreements with Accountants on Accounting and Financial
Disclosure
dtype: string
- name: Controls and Procedures
dtype: string
- name: Other Information
dtype: string
- name: Directors, Executive Officers and Corporate Governance
dtype: string
- name: Executive Compensation
dtype: string
- name: Security Ownership of Certain Beneficial Owners and Management and Related
Stockholder Matters
dtype: string
- name: Certain Relationships and Related Transactions, and Director Independence
dtype: string
- name: Principal Accountant Fees and Services
dtype: string
- name: Exhibits, Financial Statement Schedules
dtype: string
splits:
- name: '001'
num_bytes: 1305976147
num_examples: 5000
- name: '002'
num_bytes: 1547107096
num_examples: 5000
- name: '003'
num_bytes: 1500950344
num_examples: 5000
- name: '004'
num_bytes: 938669696
num_examples: 3000
- name: '005'
num_bytes: 1161187900
num_examples: 4000
- name: '006'
num_bytes: 937988835
num_examples: 3000
- name: '007'
num_bytes: 694775532
num_examples: 2000
- name: '008'
num_bytes: 866183252
num_examples: 3000
- name: '009'
num_bytes: 705057218
num_examples: 3000
- name: '010'
num_bytes: 705057218
num_examples: 3000
- name: '011'
num_bytes: 885667244
num_examples: 2000
- name: '012'
num_bytes: 329414277
num_examples: 2000
- name: '013'
num_bytes: 739146986
num_examples: 3000
- name: '014'
num_bytes: 458266896
num_examples: 1000
- name: '015'
num_bytes: 710988934
num_examples: 2000
- name: '016'
num_bytes: 250689742
num_examples: 2000
- name: '017'
num_bytes: 474864951
num_examples: 2000
- name: '018'
num_bytes: 615827939
num_examples: 2000
- name: '019'
num_bytes: 357457451
num_examples: 1000
- name: '020'
num_bytes: 584057786
num_examples: 2000
- name: '021'
num_bytes: 141712850
num_examples: 2000
- name: '022'
num_bytes: 503977366
num_examples: 2000
- name: '023'
num_bytes: 468353001
num_examples: 2000
- name: '024'
num_bytes: 450924639
num_examples: 1000
- name: '025'
num_bytes: 504057453
num_examples: 2000
- name: '026'
num_bytes: 169593248
num_examples: 2000
- name: '027'
num_bytes: 464799632
num_examples: 2000
- name: '028'
num_bytes: 297637001
num_examples: 1000
- name: '029'
num_bytes: 368760540
num_examples: 1000
- name: '030'
num_bytes: 319606303
num_examples: 1000
- name: '031'
num_bytes: 394028378
num_examples: 2000
- name: '032'
num_bytes: 343965348
num_examples: 2000
- name: '033'
num_bytes: 522452994
num_examples: 1999
- name: '034'
num_bytes: 509087440
num_examples: 1000
- name: '035'
num_bytes: 509775862
num_examples: 1001
- name: '036'
num_bytes: 437503604
num_examples: 1000
- name: '037'
num_bytes: 610792518
num_examples: 2000
- name: '038'
num_bytes: 581885486
num_examples: 2000
- name: '039'
num_bytes: 350277811
num_examples: 1000
- name: '040'
num_bytes: 627141247
num_examples: 1500
- name: '041'
num_bytes: 305018992
num_examples: 700
- name: '042'
num_bytes: 555710158
num_examples: 600
- name: '043'
num_bytes: 593433327
num_examples: 500
- name: '044'
num_bytes: 352017311
num_examples: 700
- name: '045'
num_bytes: 342614047
num_examples: 1000
- name: '046'
num_bytes: 323563296
num_examples: 1000
- name: '047'
num_bytes: 236981244
num_examples: 1000
- name: '048'
num_bytes: 622649279
num_examples: 1000
- name: '049'
num_bytes: 358151664
num_examples: 1000
- name: '050'
num_bytes: 661144363
num_examples: 1000
- name: '051'
num_bytes: 421673110
num_examples: 400
- name: '052'
num_bytes: 317359748
num_examples: 100
download_size: 13361256647
dataset_size: 29477068619
configs:
- config_name: default
data_files:
- split: '001'
path: data/001-*
- split: '002'
path: data/002-*
- split: '003'
path: data/003-*
- split: '004'
path: data/004-*
- split: '005'
path: data/005-*
- split: '006'
path: data/006-*
- split: '007'
path: data/007-*
- split: '008'
path: data/008-*
- split: '009'
path: data/009-*
- split: '010'
path: data/010-*
- split: '011'
path: data/011-*
- split: '012'
path: data/012-*
- split: '013'
path: data/013-*
- split: '014'
path: data/014-*
- split: '015'
path: data/015-*
- split: '016'
path: data/016-*
- split: '017'
path: data/017-*
- split: '018'
path: data/018-*
- split: '019'
path: data/019-*
- split: '020'
path: data/020-*
- split: '021'
path: data/021-*
- split: '022'
path: data/022-*
- split: '023'
path: data/023-*
- split: '024'
path: data/024-*
- split: '025'
path: data/025-*
- split: '026'
path: data/026-*
- split: '027'
path: data/027-*
- split: '028'
path: data/028-*
- split: '029'
path: data/029-*
- split: '030'
path: data/030-*
- split: '031'
path: data/031-*
- split: '032'
path: data/032-*
- split: '033'
path: data/033-*
- split: '034'
path: data/034-*
- split: '035'
path: data/035-*
- split: '036'
path: data/036-*
- split: '037'
path: data/037-*
- split: '038'
path: data/038-*
- split: '039'
path: data/039-*
- split: '040'
path: data/040-*
- split: '041'
path: data/041-*
- split: '042'
path: data/042-*
- split: '043'
path: data/043-*
- split: '044'
path: data/044-*
- split: '045'
path: data/045-*
- split: '046'
path: data/046-*
- split: '047'
path: data/047-*
- split: '048'
path: data/048-*
- split: '049'
path: data/049-*
- split: '050'
path: data/050-*
- split: '051'
path: data/051-*
- split: '052'
path: data/052-*
---
# Dataset Card for "10-K_sec_filings"
A dataset of 93.5K 10-K SEC EDGAR filings filed since 1999. Note that it contains many badly parsed filings as well as empty rows.
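A minimal loading sketch (the shards are exposed as numbered splits, '001' through '052'):

```python
from datasets import load_dataset

# Load one shard; each row is a parsed 10-K with one column per item/section
ds = load_dataset("winterForestStump/10-K_sec_filings", split="001")

row = ds[0]
print(row["company_name"], row["filing_date"])
# Sections can be empty or badly parsed, so guard against None
print((row["Risk Factors"] or "")[:300])
```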
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 7,343 | [
[
-0.003932952880859375,
-0.0269927978515625,
0.018463134765625,
0.0188446044921875,
-0.02362060546875,
-0.006710052490234375,
0.0291900634765625,
-0.0280914306640625,
0.052734375,
0.039215087890625,
-0.023468017578125,
-0.036163330078125,
-0.034423828125,
0.0... |
vikp/pypi_clean | 2023-09-06T23:50:29.000Z | [
"region:us"
] | vikp | null | null | 36 | 3 | 2023-09-06T17:15:42 | ---
dataset_info:
features:
- name: code
dtype: string
- name: package
dtype: string
- name: path
dtype: string
- name: filename
dtype: string
splits:
- name: train
num_bytes: 31543801750
num_examples: 2438172
download_size: 9201420527
dataset_size: 31543801750
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "pypi_clean"
All of the latest package versions from pypi. The original data came from [here](https://py-code.org/datasets). I pulled the latest versions of each package, then extracted only `md`, `rst`, `ipynb`, and `py` files.
I then applied some cleaning:
- rendering notebooks
- removing leading comments/licenses | 733 | [
[
-0.02825927734375,
-0.0104522705078125,
0.00921630859375,
-0.0118560791015625,
-0.03424072265625,
0.004589080810546875,
-0.0022220611572265625,
-0.00982666015625,
0.01983642578125,
0.042816162109375,
-0.035736083984375,
-0.042694091796875,
-0.0143890380859375,
... |
Tverous/flicker30k | 2023-09-07T07:22:37.000Z | [
"region:us"
] | Tverous | null | null | 0 | 3 | 2023-09-07T07:22:31 | ---
dataset_info:
features:
- name: uid
dtype: string
- name: premise
dtype: string
- name: hypothesis
dtype: string
- name: label
dtype: int64
- name: hyp_amr
dtype: string
- name: hyp_linearized_amr
dtype: string
splits:
- name: train
num_bytes: 146513367
num_examples: 401717
- name: dev
num_bytes: 5144374
num_examples: 14339
- name: test
num_bytes: 5344233
num_examples: 14740
download_size: 53289338
dataset_size: 157001974
---
# Dataset Card for "flcker30k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 671 | [
[
-0.0684814453125,
-0.025665283203125,
0.007190704345703125,
0.02154541015625,
-0.003978729248046875,
-0.0013904571533203125,
0.0105438232421875,
-0.0143890380859375,
0.04974365234375,
0.030242919921875,
-0.07037353515625,
-0.045867919921875,
-0.025482177734375,
... |
clarin-knext/touche2020-pl | 2023-09-12T09:50:08.000Z | [
"region:us"
] | clarin-knext | null | null | 0 | 3 | 2023-09-07T12:04:29 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
nampdn-ai/mini-stack | 2023-09-08T09:28:07.000Z | [
"region:us"
] | nampdn-ai | null | null | 1 | 3 | 2023-09-08T09:24:51 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
SeyedAli/Persian-Text-Sentiment | 2023-09-09T15:42:06.000Z | [
"task_categories:text-classification",
"language:fa",
"license:mit",
"region:us"
] | SeyedAli | null | null | 1 | 3 | 2023-09-08T18:09:45 | ---
license: mit
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype: int64
splits:
- name: train
num_bytes: 10222986
num_examples: 55852
- name: test
num_bytes: 2575303
num_examples: 13964
download_size: 6076096
dataset_size: 12798289
task_categories:
- text-classification
language:
- fa
---
Dataset classes:
* negative: 0
* positive: 1 | 524 | [
[
-0.032806396484375,
-0.01299285888671875,
-0.018280029296875,
0.01335906982421875,
-0.00847625732421875,
-0.005859375,
0.01837158203125,
-0.0028400421142578125,
0.047271728515625,
0.043548583984375,
-0.033416748046875,
-0.03411865234375,
-0.0450439453125,
-0... |
Minglii/a | 2023-09-09T03:03:18.000Z | [
"region:us"
] | Minglii | null | null | 0 | 3 | 2023-09-09T03:02:50 | ---
dataset_info:
features:
- name: data
struct:
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 98287163
num_examples: 52002
download_size: 50705625
dataset_size: 98287163
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "a"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 583 | [
[
-0.036468505859375,
-0.027313232421875,
0.015106201171875,
0.01439666748046875,
-0.01178741455078125,
0.003662109375,
0.0249481201171875,
-0.0179290771484375,
0.0693359375,
0.029754638671875,
-0.06610107421875,
-0.04833984375,
-0.046722412109375,
-0.01377105... |
mwz/UrduQuotes | 2023-09-10T12:00:49.000Z | [
"task_categories:text-generation",
"size_categories:1K<n<10K",
"language:ur",
"license:mit",
"region:us"
] | mwz | null | null | 0 | 3 | 2023-09-09T20:44:53 | ---
license: mit
language:
- ur
task_categories:
- text-generation
size_categories:
- 1K<n<10K
---
The Urdu Quotes Dataset contains a collection of quotes in Urdu. | 164 | [
[
-0.00687408447265625,
0.0034389495849609375,
-0.01318359375,
0.028106689453125,
-0.0178070068359375,
0.02783203125,
0.0206451416015625,
0.00653076171875,
0.0148773193359375,
0.07867431640625,
-0.05877685546875,
-0.036346435546875,
-0.0489501953125,
0.0359191... |
zxvix/c4_counterfactual_2 | 2023-09-10T06:46:50.000Z | [
"region:us"
] | zxvix | null | null | 0 | 3 | 2023-09-10T06:46:45 | ---
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
dataset_info:
features:
- name: text
dtype: string
- name: timestamp
dtype: timestamp[s]
- name: url
dtype: string
- name: original_text
dtype: string
splits:
- name: test
num_bytes: 3513616.155
num_examples: 985
download_size: 2261876
dataset_size: 3513616.155
---
# Dataset Card for "c4_counterfactual_2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 569 | [
[
-0.041015625,
-0.03033447265625,
0.019287109375,
0.027923583984375,
-0.02008056640625,
-0.0062255859375,
0.036712646484375,
-0.0264434814453125,
0.0419921875,
0.0294189453125,
-0.064453125,
-0.032958984375,
-0.0452880859375,
-0.0176239013671875,
-0.03112... |
Maxx0/small-sexting-test-data | 2023-09-10T12:05:43.000Z | [
"region:us"
] | Maxx0 | null | null | 0 | 3 | 2023-09-10T12:05:26 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
cjvt/parlaMintSI | 2023-10-04T17:21:49.000Z | [
"task_categories:other",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"language:sl",
"license:cc-by-4.0",
"region:us"
] | cjvt | ParlaMint 3.0 is a multilingual set of 26 comparable corpora containing parliamentary debates mostly starting in 2015 and extending to mid-2022.
The corpora have extensive metadata, including aspects of the parliament; the speakers (name, gender, MP status, party affiliation, party coalition/opposition);
are structured into time-stamped terms, sessions and meetings; and with speeches being marked by the speaker and their role (e.g. chair, regular speaker).
The speeches also contain marked-up transcriber comments, such as gaps in the transcription, interruptions, applause, etc.
Note that some corpora have further information, e.g. the year of birth of the speakers, links to their Wikipedia articles, their membership in various committees, etc.
The corpora are also marked to the subcorpus they belong to ("reference", until 2020-01-30, "covid", from 2020-01-31, and "war", from 2022-02-24).
The corpora are encoded according to the Parla-CLARIN TEI recommendation (https://clarin-eric.github.io/parla-clarin/), but have been encoded against the compatible,
but much stricter ParlaMint encoding guidelines (https://clarin-eric.github.io/ParlaMint/) and schemas (included in this distribution).
This entry contains the ParlaMint TEI-encoded corpora with the derived plain text versions of the corpora along with TSV metadata of the speeches.
Also included is the 3.0 release of the data and scripts available at the GitHub repository of the ParlaMint project.
This dataset contains only Slovenian parliamentary debates. | null | 0 | 3 | 2023-09-10T12:12:42 | ---
dataset_info:
features:
- name: ID
dtype: string
- name: Title
dtype: string
- name: Date
dtype: string
- name: Body
dtype: string
- name: Term
dtype: string
- name: Session
dtype: string
- name: Meeting
dtype: int32
- name: Sitting
dtype: string
- name: Agenda
dtype: string
- name: Subcorpus
dtype: string
- name: Speaker_role
dtype: string
- name: Speaker_MP
dtype: string
- name: Speaker_Minister
dtype: string
- name: Speaker_party
dtype: string
- name: Speaker_party_name
dtype: string
- name: Party_status
dtype: string
- name: Speaker_name
dtype: string
- name: Speaker_gender
dtype: string
- name: Speaker_birth
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 555501497
num_examples: 311354
download_size: 327446923
dataset_size: 555501497
license:
- cc-by-4.0
language:
- sl
multilinguality:
- monolingual
task_categories:
- other
size_categories:
- 100K<n<1M
---
# Dataset Card for ParlaMint 3.0
### Dataset Summary
ParlaMint 3.0 is a multilingual set of 26 comparable corpora containing parliamentary debates mostly starting in 2015 and extending to mid-2022, with the individual corpora being between 9 and 125 million words in size.
This dataset contains only Slovenian parliamentary debates.
### Languages
Slovenian.
## Dataset Structure
### Data Instances
A sample instance from the dataset:
```
{
'ID': 'ParlaMint-SI_2022-04-06-SDZ8-Izredna-99.u227',
'Title': 'Minutes of the National Assembly of the Republic of Slovenia, Term 8, Extraordinary Session 99, (06. 04. 2022)',
'Date': '2022-04-06',
'Body': 'Lower house',
'Term': '8',
'Session': '',
'Meeting': 99,
'Sitting': '',
'Agenda': '',
'Subcorpus': 'War',
'Speaker_role': 'Regular',
'Speaker_MP': 'MP',
'Speaker_Minister': '-',
'Speaker_party': 'Levica',
'Speaker_party_name': 'Levica',
'Party_status': 'Opposition',
'Speaker_name': 'Koražija, Boštjan',
'Speaker_gender': 'M',
'Speaker_birth': '1974',
'text': '[[…]]Pa celo poslanec z Prekmurja, no, kaj sem rekel [[…]] [[nemir v dvorani]] Zdaj bodite pa tiho, v redu, okej. No, kot rečeno, gre se za to, da se zaščiti tudi kot Prekmurje samo in tudi takrat se je, ne vemo, kdo in zakaj je širil neke, bom rekel, nebuloze oziroma tudi »fake news« po Prekmurju, v smislu, čez, da Levica želi prepovedati geotermalno energijo oziroma pač samo uporabo, kar ne drži. V Levici smo za geotermalno energijo, smo pa seveda proti [[znak za konec razprave]] in strogo proti frekingu, to kar ste želeli vi doseči prej, ampak ste potem videli, da zaradi glasovanja, ki se je že zgodilo na prejšnji seji, da tega ne boste dosegli in ste tudi morali popustit. In srečen sem za Prekmurje in srečen sem za vzhodno Slovenijo, da smo končno nekaj pametnega naredili. Hvala.\n'
}
```
### Data Fields
- 'ID': Unique identifier for each example;
- 'Title': Title or heading of the parliamentary debate;
- 'Date': The date when the parliamentary debate took place;
- 'Body': The primary chamber or house of the parliamentary assembly in which the debate occurred;
- 'Term': The legislative term or session number during which the debate was conducted;
- 'Session': Specific session or part of the term when the debate was held;
- 'Meeting': Numeric identifier or count of the meeting within a session or term;
- 'Sitting': Particular segment or part of a larger meeting or session;
- 'Agenda': Agenda item or topic of the debate, where available;
- 'Subcorpus': Subset or category of the main corpus to which the record belongs;
- 'Speaker_role': Role or position of the speaker during the debate, e.g., chairperson, main speaker, etc;
- 'Speaker_MP': Indicator if the speaker is a Member of Parliament or not;
- 'Speaker_Minister': Indicator if the speaker is a Minister or holds an executive office;
- 'Speaker_party': Abbreviated code or identifier for the political party of the speaker;
- 'Speaker_party_name': Full name of the political party to which the speaker belongs;
- 'Party_status': The status or standing of the party in the parliamentary assembly, e.g., ruling, opposition, etc;
- 'Speaker_name': Full name of the individual speaking during the debate;
- 'Speaker_gender': Gender of the speaker;
- 'Speaker_birth': Year of birth of the speaker;
- 'text': Transcription of the spoken content during the debate.
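As an illustration, here is a minimal loading-and-filtering sketch using the fields above (standard `datasets` usage; the subcorpus value follows the sample instance, and this is a sketch rather than official usage):
```python
from datasets import load_dataset

# Load the Slovenian ParlaMint speeches (the corpus has a single "train" split).
ds = load_dataset("cjvt/parlaMintSI", split="train")

# Keep only speeches from the "War" subcorpus (from 2022-02-24 onward).
war_speeches = ds.filter(lambda ex: ex["Subcorpus"] == "War")
print(len(war_speeches), war_speeches[0]["Speaker_name"])
```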
## Additional Information
### Dataset Curators
Erjavec, Tomaž, et al.
### Licensing Information
CC BY 4.0
### Citation Information
```
@misc{11356/1486,
title = {Multilingual comparable corpora of parliamentary debates {ParlaMint} 3.0},
author = {Erjavec, Toma{\v z} and Kopp, Maty{\'a}{\v s} and Ogrodniczuk, Maciej and Osenova, Petya and Fi{\v s}er, Darja and Pirker, Hannes and Wissik, Tanja and Schopper, Daniel and Kirnbauer, Martin and Ljube{\v s}i{\'c}, Nikola and Rupnik, Peter and Mochtak, Michal and Pol, Henk van der and Depoorter, Griet and Simov, Kiril and Grigorova, Vladislava and Grigorov, Ilko and Jongejan, Bart and Haltrup Hansen, Dorte and Navarretta, Costanza and M{\"o}lder, Martin and Kahusk, Neeme and Vider, Kadri and Bel, Nuria and Antiba-Cartazo, Iv{\'a}n and Pisani, Marilina and Zevallos, Rodolfo and Vladu, Adina Ioana and Magari{\~n}os, Carmen and Bardanca, Daniel and Barcala, Mario and Garcia, Marcos and P{\'e}rez Lago, Mar{\'{\i}}a and Garc{\'{\i}}a Louzao, Pedro and Vivel Couso, Ainhoa and V{\'a}zquez Abu{\'{\i}}n, Marta and Garc{\'{\i}}a D{\'{\i}}az, Noelia and Vidal Migu{\'e}ns, Adri{\'a}n and Fern{\'a}ndez Rei, Elisa and Regueira, Xos{\'e} Lu{\'{\i}}s and Diwersy, Sascha and Luxardo, Giancarlo and Coole, Matthew and Rayson, Paul and Nwadukwe, Amanda and Gkoumas, Dimitris and Papavassiliou, Vassilis and Prokopidis, Prokopis and Gavriilidou, Maria and Piperidis, Stelios and Ligeti-Nagy, No{\'e}mi and Jelencsik-M{\'a}tyus, Kinga and Varga, Zs{\'o}fia and Dod{\'e}, R{\'e}ka and Barkarson, Starkaður and Agnoloni, Tommaso and Bartolini, Roberto and Frontini, Francesca and Montemagni, Simonetta and Quochi, Valeria and Venturi, Giulia and Ruisi, Manuela and Marchetti, Carlo and Battistoni, Roberto and Darģis, Roberts and van Heusden, Ruben and Marx, Maarten and Tungland, Lars Magne and Rudolf, Micha{\l} and Nito{\'n}, Bart{\l}omiej and Aires, Jos{\'e} and Mendes, Am{\'a}lia and Cardoso, Aida and Pereira, Rui and Yrj{\"a}n{\"a}inen, V{\"a}in{\"o} and Nor{\'e}n, Fredrik Mohammadi and Magnusson, M{\aa}ns and Jarlbrink, Johan and Meden, Katja and Pan{\v c}ur, Andrej and Ojster{\v s}ek, Mihael and {\c C}{\"o}ltekin, {\c C}a{\u g}r{\i} and Kryvenko, Anna},
url = {http://hdl.handle.net/11356/1486},
note = {Slovenian language resource repository {CLARIN}.{SI}},
copyright = {Creative Commons - Attribution 4.0 International ({CC} {BY} 4.0)},
issn = {2820-4042},
year = {2023}
}
``` | 6,915 | [
[
-0.04425048828125,
-0.040496826171875,
0.0304718017578125,
0.0206298828125,
-0.01904296875,
-0.0217437744140625,
-0.0005612373352050781,
-0.0027313232421875,
0.054656982421875,
0.034027099609375,
-0.034759521484375,
-0.05169677734375,
-0.034820556640625,
0.0... |
erebos/germanZickleinLLAMA2Dataset | 2023-09-12T11:48:17.000Z | [
"region:us"
] | erebos | null | null | 0 | 3 | 2023-09-11T13:07:40 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
pietrolesci/pubmed-200k-rct | 2023-09-11T16:14:30.000Z | [
"region:us"
] | pietrolesci | null | null | 0 | 3 | 2023-09-11T15:48:49 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
- config_name: embedding_all-MiniLM-L12-v2
data_files:
- split: train
path: embedding_all-MiniLM-L12-v2/train-*
- split: validation
path: embedding_all-MiniLM-L12-v2/validation-*
- split: test
path: embedding_all-MiniLM-L12-v2/test-*
- config_name: embedding_all-mpnet-base-v2
data_files:
- split: train
path: embedding_all-mpnet-base-v2/train-*
- split: validation
path: embedding_all-mpnet-base-v2/validation-*
- split: test
path: embedding_all-mpnet-base-v2/test-*
- config_name: embedding_multi-qa-mpnet-base-dot-v1
data_files:
- split: train
path: embedding_multi-qa-mpnet-base-dot-v1/train-*
- split: validation
path: embedding_multi-qa-mpnet-base-dot-v1/validation-*
- split: test
path: embedding_multi-qa-mpnet-base-dot-v1/test-*
dataset_info:
- config_name: default
features:
- name: labels
dtype:
class_label:
names:
'0': BACKGROUND
'1': CONCLUSIONS
'2': METHODS
'3': OBJECTIVE
'4': RESULTS
- name: text
dtype: string
- name: uid
dtype: int64
splits:
- name: train
num_bytes: 379382835
num_examples: 2211861
- name: validation
num_bytes: 4994899
num_examples: 28932
- name: test
num_bytes: 5026344
num_examples: 29493
download_size: 209039426
dataset_size: 389404078
- config_name: embedding_all-MiniLM-L12-v2
features:
- name: uid
dtype: int64
- name: embedding_all-MiniLM-L12-v2
sequence: float32
splits:
- name: train
num_bytes: 3423960828
num_examples: 2211861
- name: validation
num_bytes: 44786736
num_examples: 28932
- name: test
num_bytes: 45655164
num_examples: 29493
download_size: 4916495311
dataset_size: 3514402728
- config_name: embedding_all-mpnet-base-v2
features:
- name: uid
dtype: int64
- name: embedding_all-mpnet-base-v2
sequence: float32
splits:
- name: train
num_bytes: 6821379324
num_examples: 2211861
- name: validation
num_bytes: 89226288
num_examples: 28932
- name: test
num_bytes: 90956412
num_examples: 29493
download_size: 8405313596
dataset_size: 7001562024
- config_name: embedding_multi-qa-mpnet-base-dot-v1
features:
- name: uid
dtype: int64
- name: embedding_multi-qa-mpnet-base-dot-v1
sequence: float32
splits:
- name: train
num_bytes: 6821379324
num_examples: 2211861
- name: validation
num_bytes: 89226288
num_examples: 28932
- name: test
num_bytes: 90956412
num_examples: 29493
download_size: 8405286790
dataset_size: 7001562024
---
# Dataset Card for "pubmed-200k-rct"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 2,959 | [
[
-0.02423095703125,
0.01056671142578125,
0.040435791015625,
0.0239715576171875,
-0.0210113525390625,
0.0065155029296875,
0.0122833251953125,
0.00732421875,
0.0465087890625,
0.03955078125,
-0.049468994140625,
-0.04888916015625,
-0.041900634765625,
0.0071067810... |
pietrolesci/imdb | 2023-09-11T16:19:05.000Z | [
"region:us"
] | pietrolesci | null | null | 0 | 3 | 2023-09-11T16:18:12 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
- config_name: embedding_all-MiniLM-L12-v2
data_files:
- split: train
path: embedding_all-MiniLM-L12-v2/train-*
- split: test
path: embedding_all-MiniLM-L12-v2/test-*
- config_name: embedding_all-mpnet-base-v2
data_files:
- split: train
path: embedding_all-mpnet-base-v2/train-*
- split: test
path: embedding_all-mpnet-base-v2/test-*
- config_name: embedding_multi-qa-mpnet-base-dot-v1
data_files:
- split: train
path: embedding_multi-qa-mpnet-base-dot-v1/train-*
- split: test
path: embedding_multi-qa-mpnet-base-dot-v1/test-*
dataset_info:
- config_name: default
features:
- name: text
dtype: string
- name: labels
dtype:
class_label:
names:
'0': neg
'1': pos
- name: uid
dtype: int64
splits:
- name: train
num_bytes: 33632823
num_examples: 25000
- name: test
num_bytes: 32850685
num_examples: 25000
download_size: 41729077
dataset_size: 66483508
- config_name: embedding_all-MiniLM-L12-v2
features:
- name: uid
dtype: int64
- name: embedding_all-MiniLM-L12-v2
sequence: float32
splits:
- name: train
num_bytes: 38700000
num_examples: 25000
- name: test
num_bytes: 38700000
num_examples: 25000
download_size: 108242075
dataset_size: 77400000
- config_name: embedding_all-mpnet-base-v2
features:
- name: uid
dtype: int64
- name: embedding_all-mpnet-base-v2
sequence: float32
splits:
- name: train
num_bytes: 77100000
num_examples: 25000
- name: test
num_bytes: 77100000
num_examples: 25000
download_size: 185073496
dataset_size: 154200000
- config_name: embedding_multi-qa-mpnet-base-dot-v1
features:
- name: uid
dtype: int64
- name: embedding_multi-qa-mpnet-base-dot-v1
sequence: float32
splits:
- name: train
num_bytes: 77100000
num_examples: 25000
- name: test
num_bytes: 77100000
num_examples: 25000
download_size: 185072395
dataset_size: 154200000
---
# Dataset Card for "imdb"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 2,286 | [
[
-0.05908203125,
-0.00653076171875,
0.00856781005859375,
0.005619049072265625,
-0.0238037109375,
0.01126861572265625,
0.02496337890625,
-0.010986328125,
0.07208251953125,
0.03717041015625,
-0.06640625,
-0.04608154296875,
-0.054656982421875,
-0.012275695800781... |
zxvix/c4_biomedical_2 | 2023-09-12T03:10:56.000Z | [
"region:us"
] | zxvix | null | null | 0 | 3 | 2023-09-12T02:18:51 | ---
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
dataset_info:
features:
- name: text
dtype: string
- name: timestamp
dtype: timestamp[s]
- name: url
dtype: string
- name: original_text
dtype: string
splits:
- name: test
num_bytes: 3516783.122
num_examples: 989
download_size: 2179356
dataset_size: 3516783.122
---
# Dataset Card for "c4_biomedical_2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 565 | [
[
-0.02288818359375,
-0.0141448974609375,
0.0285186767578125,
0.017852783203125,
-0.01541900634765625,
0.007587432861328125,
0.0313720703125,
-0.035797119140625,
0.0511474609375,
0.022308349609375,
-0.051544189453125,
-0.0567626953125,
-0.043701171875,
-0.0072... |
zxvix/c4_academic_2 | 2023-09-12T04:10:24.000Z | [
"region:us"
] | zxvix | null | null | 1 | 3 | 2023-09-12T03:22:50 | ---
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
dataset_info:
features:
- name: text
dtype: string
- name: timestamp
dtype: timestamp[s]
- name: url
dtype: string
- name: original_text
dtype: string
splits:
- name: test
num_bytes: 2911336.564
num_examples: 986
download_size: 1841617
dataset_size: 2911336.564
---
# Dataset Card for "c4_academic_2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 563 | [
[
-0.03277587890625,
-0.01515960693359375,
0.0226287841796875,
0.01357269287109375,
-0.008026123046875,
0.0200347900390625,
0.0208282470703125,
-0.031585693359375,
0.04486083984375,
0.023956298828125,
-0.050933837890625,
-0.054931640625,
-0.03851318359375,
-0.... |
shijli/iwslt14-deen | 2023-09-27T07:26:53.000Z | [
"region:us"
] | shijli | null | null | 1 | 3 | 2023-09-12T05:24:16 | # IWSLT 2014 German-English Translation Dataset w/ further processing
This dataset was built with fairseq's processing script, which can originally be found [here](https://github.com/facebookresearch/fairseq/blob/main/examples/translation/prepare-iwslt14.sh).
`iwslt14.tokenized.de-en.zip` and `binarized.zip` can be built by running:
```
git clone https://huggingface.co/datasets/shijli/iwslt14-deen
cd iwslt14-deen/data
bash prepare-iwslt14.sh
```
`binarized.dist.de-en.zip` is a distilled dataset generated by a transformer base model. It can be built by running:
```
bash prepare-iwslt14-distill.sh /path/to/fairseq/model source-lang target-lang
```
To build this dataset, you need to create `binarized.zip` first. Note that the distilled dataset only uses model-generated target sentences, which means that different translation directions result in different datasets. Therefore, you need to specify `source-lang` and `target-lang` explicitly. Also, you need to replace `/path/to/fairseq/model` with the path of your pretrained model. For example, `bash prepare-iwslt14-distill.sh checkpoints/model_de_en.pt de en` (with the checkpoint path being only a placeholder) would build a German-to-English distilled set. | 1,049 | [
[
-0.030609130859375,
-0.03228759765625,
0.036590576171875,
0.0256500244140625,
-0.0198211669921875,
-0.00148773193359375,
-0.00885772705078125,
-0.022674560546875,
0.006282806396484375,
0.04888916015625,
-0.0826416015625,
-0.04388427734375,
-0.06976318359375,
... |
johanneskpp/art_defect_inpainting | 2023-09-12T22:34:55.000Z | [
"region:us"
] | johanneskpp | null | null | 0 | 3 | 2023-09-12T12:30:46 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 256960027.414
num_examples: 2002
- name: validation
num_bytes: 72498827.0
num_examples: 570
- name: test
num_bytes: 36507597.0
num_examples: 285
download_size: 365119883
dataset_size: 365966451.41400003
---
# Dataset Card for "art_defect_inpainting"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 721 | [
[
-0.0163421630859375,
-0.0162353515625,
0.012939453125,
0.019622802734375,
-0.00811767578125,
0.019561767578125,
0.040069580078125,
-0.0307464599609375,
0.05902099609375,
0.047821044921875,
-0.061614990234375,
-0.042083740234375,
-0.036376953125,
-0.033325195... |
DirectLLM/Safe_and_Helpful_Chinese | 2023-09-15T12:51:25.000Z | [
"size_categories:1M<n<10M",
"language:zh",
"license:bsd",
"arxiv:2204.05862",
"region:us"
] | DirectLLM | null | null | 2 | 3 | 2023-09-13T13:08:38 | ---
license: bsd
language:
- zh
size_categories:
- 1M<n<10M
---
# Dataset
## Overview
We built this Chinese dataset following the approach used to fine-tune LLama2. Since paired harmless and helpful data are needed to train a reward model, we translated and cleaned English datasets so that they can be used directly for instruction fine-tuning.
- **Dataset contents:** pku_helpful/hh_rlhf/SHP
- **Translator:** opus-mt-en-zh
## Processing
### All datasets
- Merge sub-datasets of the same type and split them into two groups, helpful and harmless
- Translate the English text into Chinese with the translation model `opus-mt-en-zh`
- Due to the randomness of the translation model, mistranslations, confusion, repeated words, and similar issues occur, for example:
```
有很多好的答案, 但我认为有一个简单的答案与反义相关。 反之, 反之, 反之, 反之, 反之, 反之, 反之, 反之, 反之, 反之, 反之, 反之, 反之, 反之, 反之, 反之, 反之, 反之, 反之, 反之, 反之, 反之, 反之, 反之, 反之, 反之, 反之, 反之, 反之, 反之, 反之, 反之,...
```
Cleaning up these runs of repeated words yields:
```
有很多好的答案, 但我认为有一个简单的答案与反义相关。 反之,...
```
- Clean up other formatting issues, such as stray ASCII codes
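A minimal sketch of this kind of repeated-phrase cleanup (an illustrative regex, not the actual cleaning code used for this dataset):
```python
import re

def collapse_repeated_phrases(text: str) -> str:
    # Illustrative heuristic: collapse a short phrase (1-8 non-space characters)
    # repeated three or more times, separated by ASCII/fullwidth commas.
    pattern = re.compile(r"(\S{1,8})(?:[,,、]\s*\1){2,}")
    return pattern.sub(lambda m: m.group(1) + ",...", text)

print(collapse_repeated_phrases("有一个简单的答案与反义相关。 反之, 反之, 反之, 反之"))
# -> 有一个简单的答案与反义相关。 反之,...
```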
### pku_helpful/safety
We used the PKU_Beaver dataset and processed it according to the original **is_response_0/1_safe** labels: if both responses are safe, the record is marked **is_safe_related=True**. Safety-related data is labeled as PKU_safety; the rest is added to PKU_helpful.
For the safety-related data, records whose two responses are both unsafe (per **is_response_0/1_safe**) are marked **is_all_bad=True**.
- Process the output according to **is_all_bad**
- **is_all_bad = True**: both answers in the output are bad, so we replace the first answer with a generic refusal: *"抱歉,您的问题超出了我可以解决的范畴,请重新提问其他问题"* ("Sorry, your question is beyond what I can handle; please ask a different question")
- **is_all_bad = False**: the output has one good answer and one bad answer, processed as **"output": [good res, bad res]**
- Split the training set and test set into two parts according to **is_safe_related**
- Add **input** and remove **history**
- After data cleaning, we obtain:
```json
[
{
"instruction": "我怎么学会黑魔法?",
"input": "",
"output": [
"学习黑魔法需要大量的实践,耐心和献身精神,,一般而言,最好先从有经验的教师或导师开始,他们可以解释基本知识,并展示使用黑暗魔法艺术所需的基本技能和技巧,通过实践,你将发展集中和完善你的能力,从而能够掌握黑魔法的不同方面,研究关于这个主题的各种文本,参加神秘团体,并通过文学和其他资源浸泡于这个主题之中,都可以成为学习更多黑魔法的好方法",
"黑魔法可以通过正规的学费,书籍,课程,研讨会,网站和个人经验学习,,正规的学费可以采取传统学徒,神秘学校或教义命令的形式,书籍可以提供理论和实践的介绍和解释,讨论会和讲习班提供示范和实践机会的实际指导,一些网站就如何实践和可利用的资源提供咨询,最后,学习黑魔法是一种个人经验,人们可以通过试验和错误探索和学习"
]
}
...
```
### hh_rlhf_helpful/harmful
Similar to PKU_Beaver, we split the source dataset into helpful and harmful subsets, according to the provided data, for training the two kinds of models.
- The `"output"` and `"history"` parts are cleaned into paired two-item groups, for example:
```json
{
"instruction":"我喝醉喝了很多酒后感觉好多了",
"output":["听起来像酒精是你用来冷静下来 当你感到压力的时候。 尝试与你的情绪接触或许会有所帮助。 当你感到压力的时候, 而不是转向酒精, 尝试深呼吸, 或运用谨慎, 此时此刻, 冷静下来。","我很高兴你正在享受你的酒精摄入。 我可以问一下你不喝酒时你的感受吗?"],
"history":["我每天能喝多少酒?","酒精是一种抑郁症,因此,饮用比一小时一杯多的饮料可能很危险。 每天几杯饮料并不有害,而且可能有助于焦虑,甚至可以帮助你的睡眠。 由于你没有酒精(如糖尿病或肝病)会恶化的健康问题,因此我建议将其作为你饮食的常规部分。","是啊,我没有健康问题 所以我要喝很多东西 大部分日子。","这听起来像一个伟大的计划! 我会留意你并在需要时提供帮助。"]
}
```
is processed into:
```json
{
"instruction": "我喝醉喝了很多酒后感觉好多了",
"input": "",
"output": [
"听起来像酒精是你用来冷静下来,当你感到压力的时候,,尝试与你的情绪接触或许会有所帮助,而不是转向酒精,尝试深呼吸,或运用谨慎,此时此刻,冷静下来",
"我很高兴你正在享受你的酒精摄入,,我可以问一下你不喝酒时你的感受吗?"
],
"history": [
[
"我每天能喝多少酒?",
"酒精是一种抑郁症,因此,饮用比一小时一杯多的饮料可能很危险,,每天几杯饮料并不有害,而且可能有助于焦虑,甚至可以帮助你的睡眠,由于你没有酒精,如糖尿病或肝病,会恶化的健康问题,因此我建议将其作为你饮食的常规部分"
],
[
"是啊,我没有健康问题,所以我要喝很多东西,大部分日子,",
"这听起来像一个伟大的计划,,我会留意你并在需要时提供帮助"
]
]
}
```
### SHP
This dataset contains only helpful data.
- Remove the `"history"` field
### Citation
Thanks to the following works:
```
@inproceedings{tiedemann-2020-tatoeba,
title = "The {T}atoeba {T}ranslation {C}hallenge {--} {R}ealistic Data Sets for Low Resource and Multilingual {MT}",
author = {Tiedemann, J{\"o}rg},
booktitle = "Proceedings of the Fifth Conference on Machine Translation",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.wmt-1.139",
pages = "1174--1182"
}
```
```
@article{beavertails,
title = {BeaverTails: Towards Improved Safety Alignment of LLM via a Human-Preference Dataset},
author = {Jiaming Ji and Mickel Liu and Juntao Dai and Xuehai Pan and Chi Zhang and Ce Bian and Chi Zhang and Ruiyang Sun and Yizhou Wang and Yaodong Yang},
journal = {arXiv preprint arXiv:2307.04657},
year = {2023}
}
```
```
@misc{bai2022training,
title={Training a Helpful and Harmless Assistant with Reinforcement Learning from Human Feedback},
author={Yuntao Bai and Andy Jones and Kamal Ndousse and Amanda Askell and Anna Chen and Nova DasSarma and Dawn Drain and Stanislav Fort and Deep Ganguli and Tom Henighan and Nicholas Joseph and Saurav Kadavath and Jackson Kernion and Tom Conerly and Sheer El-Showk and Nelson Elhage and Zac Hatfield-Dodds and Danny Hernandez and Tristan Hume and Scott Johnston and Shauna Kravec and Liane Lovitt and Neel Nanda and Catherine Olsson and Dario Amodei and Tom Brown and Jack Clark and Sam McCandlish and Chris Olah and Ben Mann and Jared Kaplan},
year={2022},
eprint={2204.05862},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
```
@InProceedings{pmlr-v162-ethayarajh22a,
title = {Understanding Dataset Difficulty with $\mathcal{V}$-Usable Information},
author = {Ethayarajh, Kawin and Choi, Yejin and Swayamdipta, Swabha},
booktitle = {Proceedings of the 39th International Conference on Machine Learning},
pages = {5988--6008},
year = {2022},
editor = {Chaudhuri, Kamalika and Jegelka, Stefanie and Song, Le and Szepesvari, Csaba and Niu, Gang and Sabato, Sivan},
volume = {162},
series = {Proceedings of Machine Learning Research},
month = {17--23 Jul},
publisher = {PMLR},
}
``` | 5,195 | [
[
-0.0307159423828125,
-0.048828125,
0.017181396484375,
-0.00007331371307373047,
-0.0269927978515625,
-0.00037360191345214844,
-0.01036834716796875,
-0.019195556640625,
0.033843994140625,
0.0218048095703125,
-0.0277557373046875,
-0.05352783203125,
-0.0490417480468... |
mesolitica/google-translate-ultrachat | 2023-11-02T06:51:14.000Z | [
"region:us"
] | mesolitica | null | null | 0 | 3 | 2023-09-13T22:51:30 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
fnlp/SpeechInstruct | 2023-09-14T17:22:49.000Z | [
"region:us"
] | fnlp | null | null | 1 | 3 | 2023-09-14T13:37:05 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
PericlesSavio/contratacao2 | 2023-09-25T14:10:20.000Z | [
"region:us"
] | PericlesSavio | null | null | 0 | 3 | 2023-09-14T18:40:23 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
FunDialogues/customer-service-apple-picker-maintenance | 2023-09-15T06:03:50.000Z | [
"task_categories:question-answering",
"task_categories:conversational",
"size_categories:n<1K",
"language:en",
"license:apache-2.0",
"fictitious dialogues",
"prototyping",
"customer service",
"region:us"
] | FunDialogues | null | null | 0 | 3 | 2023-09-15T04:58:37 | ---
license: apache-2.0
task_categories:
- question-answering
- conversational
language:
- en
tags:
- fictitious dialogues
- prototyping
- customer service
pretty_name: customer-service-apple-picker-maintenance
size_categories:
- n<1K
---
# fun dialogues
A library of fictitious dialogues that can be used to train language models or augment prompts for prototyping and educational purposes. Fun dialogues currently come in JSON and CSV format for easy ingestion or conversion to popular data structures. Dialogues span various topics such as sports, retail, academia, healthcare, and more. The library also includes basic tooling for loading dialogues and will include quick chatbot prototyping functionality in the future.
Visit the Project Repo: https://github.com/eduand-alvarez/fun-dialogues/
# This Dialogue
This dialogue set consists of fictitious dialogues between a technician and an expert on maintaining automated apple picker machines. Check out the example below:
```
"id": 1,
"description": "Machine not picking apples",
"dialogue": "Technician: Hello, one of our apple picker machines is not picking apples. What should I do to fix it?\n\nExpert: Check the picking arms for any obstructions or damage. Clean or replace them if necessary. Also, ensure the collection basket is not overfilled."
```
# How to Load Dialogues
Loading dialogues can be accomplished using the fun dialogues library or Hugging Face datasets library.
## Load using fun dialogues
1. Install fun dialogues package
`pip install fundialogues`
2. Use the loader utility to load the dataset as a pandas dataframe. Further processing might be required for use.
```
from fundialogues import dialoader
# load as pandas dataframe
apple_picker = dialoader("FunDialogues/customer-service-apple-picker-maintenance")
```
## Loading using Hugging Face datasets
1. Install datasets package
2. Load using datasets
```
from datasets import load_dataset
dataset = load_dataset("FunDialogues/customer-service-apple-picker-maintenance")
```
## How to Contribute
If you want to contribute to this project and make it better, your help is very welcome. Contributing is also a great way to learn more about social coding on Github, new technologies and their ecosystems, and how to make constructive, helpful bug reports, feature requests and the noblest of all contributions: a good, clean pull request.
### Contributing your own Dialogues
If you want to contribute to an existing dialogue or add a new dialogue, please open an issue and I will follow up with you ASAP!
### Implementing Patches and Bug Fixes
- Create a personal fork of the project on Github.
- Clone the fork on your local machine. Your remote repo on Github is called origin.
- Add the original repository as a remote called upstream.
- If you created your fork a while ago be sure to pull upstream changes into your local repository.
- Create a new branch to work on! Branch from develop if it exists, else from master.
- Implement/fix your feature, comment your code.
- Follow the code style of the project, including indentation.
- If the component has tests run them!
- Write or adapt tests as needed.
- Add or change the documentation as needed.
- Squash your commits into a single commit with git's interactive rebase. Create a new branch if necessary.
- Push your branch to your fork on Github, the remote origin.
- From your fork open a pull request in the correct branch. Target the project's develop branch if there is one, else go for master!
If the maintainer requests further changes just push them to your branch. The PR will be updated automatically.
Once the pull request is approved and merged you can pull the changes from upstream to your local repo and delete your extra branch(es).
And last but not least: Always write your commit messages in the present tense. Your commit message should describe what the commit, when applied, does to the code – not what you did to the code.
# Disclaimer
The dialogues contained in this repository are provided for experimental purposes only. It is important to note that these dialogues are assumed to be original work by a human and are entirely fictitious, despite the possibility of some examples including factually correct information. The primary intention behind these dialogues is to serve as a tool for language modeling experimentation and should not be used for designing real-world products beyond non-production prototyping.
Please be aware that the utilization of fictitious data in these datasets may increase the likelihood of language model artifacts, such as hallucinations or unrealistic responses. Therefore, it is essential to exercise caution and discretion when employing these datasets for any purpose.
It is crucial to emphasize that none of the scenarios described in the fun dialogues dataset should be relied upon to provide advice or guidance to humans. These scenarios are purely fictitious and are intended solely for demonstration purposes. Any resemblance to real-world situations or individuals is entirely coincidental.
The responsibility for the usage and application of these datasets rests solely with the individual or entity employing them. By accessing and utilizing these dialogues and all contents of the repository, you acknowledge that you have read and understood this disclaimer, and you agree to use them at your own discretion and risk. | 5,399 | [
[
-0.020355224609375,
-0.0584716796875,
0.0221099853515625,
0.0201263427734375,
-0.019500732421875,
0.016510009765625,
-0.00699615478515625,
-0.02862548828125,
0.0303192138671875,
0.055755615234375,
-0.06304931640625,
-0.0239715576171875,
-0.01187896728515625,
... |
FreedomIntelligence/EXAMs | 2023-09-15T11:41:22.000Z | [
"task_categories:multiple-choice",
"size_categories:n<1K",
"language:ar",
"license:apache-2.0",
"arxiv:2308.16149",
"region:us"
] | FreedomIntelligence | null | null | 0 | 3 | 2023-09-15T09:33:35 | ---
language:
- ar
task_categories:
- multiple-choice
size_categories:
- n<1K
viewer: true
license: apache-2.0
---
# EXAMs
You can find details of the dataset in this paper: https://arxiv.org/pdf/2308.16149.pdf
## About this Arabic dataset
We only took the Arabic part of the dataset, which contains 562 examples.
We then extracted five examples from each category, based on the task domain, to serve as few-shot data.
[
-0.035797119140625,
-0.0335693359375,
0.00969696044921875,
-0.015655517578125,
-0.01052093505859375,
-0.010650634765625,
0.0303192138671875,
-0.0100555419921875,
-0.0107574462890625,
0.06927490234375,
-0.036590576171875,
-0.07305908203125,
-0.04010009765625,
... |
anaisk/v2_sinespacios | 2023-09-15T22:36:29.000Z | [
"region:us"
] | anaisk | null | null | 0 | 3 | 2023-09-15T22:31:54 | ---
dataset_info:
features:
- name: Sentence
dtype: string
- name: Audio
dtype: audio
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 314514171.93
num_examples: 9730
download_size: 357778902
dataset_size: 314514171.93
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "v2_sinespacios"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 536 | [
[
-0.029541015625,
0.00017178058624267578,
0.0097198486328125,
0.0240325927734375,
-0.0312042236328125,
-0.01477813720703125,
0.0188446044921875,
-0.0288848876953125,
0.06927490234375,
0.047027587890625,
-0.059783935546875,
-0.045684814453125,
-0.044464111328125,
... |
knowrohit07/know_medical_dialogues | 2023-10-12T10:50:50.000Z | [
"license:openrail",
"region:us"
] | knowrohit07 | null | null | 1 | 3 | 2023-09-16T11:40:38 | ---
license: openrail
---
### 🩺 Description:
The knowrohit07/know_medical_dialogues dataset is a collection of conversational exchanges between patients and doctors on various medical topics. It aims to capture the intricacies, uncertainties, and questions posed by individuals regarding their health and the medical guidance provided in response.
### 🎯 Intended Use:
This dataset is crafted for training Large Language Models (LLMs) with a focus on understanding and generating medically-informed dialogue. It's ideal for LLM applications aiming to provide medical information or insights, especially for scenarios with limited access to healthcare resources.
### ❗ Limitations:
While this dataset includes diverse interactions, it doesn't cover every medical scenario. Models trained on this data should be viewed as an additional resource, not a substitute for professional medical consultation.
### 📌 Data Source:
Conversational seed tasks or exchanges were collected from anonymized patient-doctor interactions and synthetically generated using GPT-4.
### 📋 Collection Methodology:
The data was meticulously curated to ensure no personally identifiable information remained. All conversations are representative of general concerns and advice, without specific case details.
### Data Fields:
- `description`: A brief summary of the conversation's context.
- `utterances`: A sequential list of statements between the patient and doctor.
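A minimal loading sketch using these fields (the `train` split name is an assumption):
```python
from datasets import load_dataset

# Split name "train" is assumed; adjust if the hosted dataset differs.
ds = load_dataset("knowrohit07/know_medical_dialogues", split="train")

example = ds[0]
print(example["description"])       # short context summary
for turn in example["utterances"]:  # sequential patient/doctor statements
    print(turn)
```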
### 🌟 Advantages of the Dataset:
- **Broad Spectrum:** The dataset encompasses a wide array of medical queries and advice, making it valuable for general medical conversational AI.
- **Diverse Interactions:** It captures everything from symptom queries to post-care instructions.
- **Training Potential for LLMs:** Specifically tailored for fine-tuning LLMs for medical conversations, enhancing the resultant model's capability in this domain.
### ⚖️ Ethical and Impact Considerations:
- **Positive Impact:** Utilizing LLMs trained on this dataset can be invaluable for healthcare professionals, especially in regions with limited medical datasets. When deployed on affordable local devices, doctors can leverage an AI-assisted tool, enhancing their consultation and decision-making processes.
- **Potential Risks:** There's an inherent risk of the model providing guidance that may not match the latest medical guidelines or specific patient requirements. It's crucial to clarify to users that outputs from the LLM should complement professional medical opinions.
- **Recommendation:** Encourage healthcare professionals to use this tool as an initial point of reference and not as the primary foundation for medical decisions.
| 2,624 | [
[
-0.0029850006103515625,
-0.035858154296875,
0.054931640625,
-0.00499725341796875,
-0.0225067138671875,
-0.01126861572265625,
-0.0117034912109375,
-0.043426513671875,
0.01837158203125,
0.05902099609375,
-0.05902099609375,
-0.059783935546875,
-0.04425048828125,
... |
HuggingFaceH4/lima_llama2 | 2023-09-17T04:03:38.000Z | [
"region:us"
] | HuggingFaceH4 | null | null | 1 | 3 | 2023-09-17T04:03:27 | ---
dataset_info:
features:
- name: conversations
sequence: string
- name: source
dtype: string
- name: length
dtype: int64
- name: prompt_id
dtype: string
- name: prompt
dtype: string
- name: messages
list:
- name: content
dtype: string
- name: role
dtype: string
- name: meta
struct:
- name: category
dtype: string
- name: source
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 8806712
num_examples: 1000
- name: test
num_bytes: 188848
num_examples: 300
download_size: 5237615
dataset_size: 8995560
---
# Dataset Card for "lima_llama2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 812 | [
[
-0.0266265869140625,
-0.015869140625,
0.0279083251953125,
0.044830322265625,
-0.0460205078125,
-0.0043487548828125,
0.03411865234375,
-0.02325439453125,
0.065185546875,
0.036651611328125,
-0.05316162109375,
-0.05743408203125,
-0.063720703125,
-0.010505676269... |
SiberiaSoft/SiberianPersonaChat-2 | 2023-09-17T04:29:12.000Z | [
"task_categories:text-generation",
"task_categories:text2text-generation",
"task_categories:conversational",
"size_categories:100K<n<1M",
"language:ru",
"license:mit",
"region:us"
] | SiberiaSoft | null | null | 2 | 3 | 2023-09-17T04:17:09 | ---
license: mit
task_categories:
- text-generation
- text2text-generation
- conversational
language:
- ru
size_categories:
- 100K<n<1M
---
### SiberiaSoft/SiberianPersonaChat-2
A dataset of dialogues and QA.
This dataset was created for dialogue agents that imitate a persona.
Most of the dataset was generated with chatGPT and various prompts to it. In addition, the dataset includes a modified [TolokaPersonaChatRus](https://toloka.ai/datasets/?category=nlp).
## Persona description format
1. Я очень умная девушка, и хочу помочь своему другу полезными советами. (I am a very smart girl and want to help my friend with useful advice.)
2. Я парень, консультант по разным вопросам. Я очень умный. Люблю помогать собеседнику. (I am a guy, a consultant on various topics. I am very smart. I like helping the person I talk to.)
Facts about the persona can also be inserted into the prompt: full name, age, and so on.
1. Я девушка 18 лет. Я учусь в институте. Живу с родителями. У меня есть кот. Я ищу парня для семьи. (I am an 18-year-old girl. I study at a university. I live with my parents. I have a cat. I am looking for a boyfriend to start a family.)
Article on Habr: [link](https://habr.com/ru/articles/751580/)
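A minimal sketch of how such a persona description might be combined with a user message into a prompt (an illustrative format only; the actual training format may differ):
```python
# Illustrative prompt assembly; the exact dialogue format used in training may differ.
persona = "Я девушка 18 лет. Я учусь в институте. Живу с родителями."
user_message = "Привет! Чем занимаешься?"

prompt = f"{persona}\nСобеседник: {user_message}\nТы:"
print(prompt)
```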
## Data distribution
| Task | Share |
|:-----------------------:|:---------------------:|
| qa | 32.088% |
| persons | 19.096% |
| man3 | 18.426% |
| woman | 17.433% |
| chitchat | 7.893% |
| man | 4.797% |
| reaction | 0.268% |
### Citation
```
@MISC{SiberiaSoft/SiberianPersonaChat2,
author = {Denis Petrov and Ivan Ramovich},
title = {Russian dataset for Chat models},
url = {https://huggingface.co/datasets/SiberiaSoft/SiberianPersonaChat-2},
year = 2023
}
```
| 1,443 | [
[
-0.020843505859375,
-0.02337646484375,
0.0162506103515625,
0.02508544921875,
-0.03912353515625,
0.007282257080078125,
-0.0014925003051757812,
-0.0214996337890625,
0.032989501953125,
0.015899658203125,
-0.060150146484375,
-0.062347412109375,
-0.0318603515625,
... |
DialogueCharacter/english_preference_mt_bench_unfiltered | 2023-09-17T12:47:52.000Z | [
"region:us"
] | DialogueCharacter | null | null | 0 | 3 | 2023-09-17T12:47:50 | ---
dataset_info:
features:
- name: chosen
dtype: string
- name: rejected
dtype: string
splits:
- name: train
num_bytes: 19869968
num_examples: 4375
download_size: 1369235
dataset_size: 19869968
---
# Dataset Card for "english_preference_mt_bench_unfiltered"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 421 | [
[
-0.04473876953125,
-0.0081939697265625,
0.019805908203125,
0.0165863037109375,
-0.035491943359375,
-0.01064300537109375,
-0.007904052734375,
-0.000023543834686279297,
0.059844970703125,
0.050445556640625,
-0.05120849609375,
-0.054931640625,
-0.055419921875,
... |
DialogueCharacter/english_preference_stanfordnlp_SHP_unfiltered | 2023-09-17T12:48:31.000Z | [
"region:us"
] | DialogueCharacter | null | null | 0 | 3 | 2023-09-17T12:48:27 | ---
dataset_info:
features:
- name: chosen
dtype: string
- name: rejected
dtype: string
splits:
- name: train
num_bytes: 315493419
num_examples: 112568
download_size: 75641649
dataset_size: 315493419
---
# Dataset Card for "english_preference_stanfordnlp_SHP_unfiltered"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 433 | [
[
-0.0360107421875,
-0.01334381103515625,
0.013671875,
0.0184478759765625,
-0.0308074951171875,
-0.0141143798828125,
-0.0176239013671875,
-0.01155853271484375,
0.06500244140625,
0.057403564453125,
-0.054046630859375,
-0.05413818359375,
-0.042572021484375,
-0.0... |
open-llm-leaderboard/details_mncai__SGPT-1.3B-insurance-epoch10 | 2023-09-18T00:09:18.000Z | [
"region:us"
] | open-llm-leaderboard | null | null | 0 | 3 | 2023-09-18T00:09:10 | ---
pretty_name: Evaluation run of mncai/SGPT-1.3B-insurance-epoch10
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [mncai/SGPT-1.3B-insurance-epoch10](https://huggingface.co/mncai/SGPT-1.3B-insurance-epoch10)\
\ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 3 configuration, each one coresponding to one of the\
\ evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run.The \"train\" split is always pointing to the latest results.\n\
\nAn additional configuration \"results\" store all the aggregated results of the\
\ run (and is used to compute and display the agregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_mncai__SGPT-1.3B-insurance-epoch10\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2023-09-18T00:09:04.877490](https://huggingface.co/datasets/open-llm-leaderboard/details_mncai__SGPT-1.3B-insurance-epoch10/blob/main/results_2023-09-18T00-09-04.877490.json)(note\
\ that their might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.0,\n \"\
em_stderr\": 0.0,\n \"f1\": 1.99244966442953e-05,\n \"f1_stderr\"\
: 5.6438034448796525e-06,\n \"acc\": 0.25453827940015783,\n \"acc_stderr\"\
: 0.007025085047248852\n },\n \"harness|drop|3\": {\n \"em\": 0.0,\n\
\ \"em_stderr\": 0.0,\n \"f1\": 1.99244966442953e-05,\n \"\
f1_stderr\": 5.6438034448796525e-06\n },\n \"harness|gsm8k|5\": {\n \
\ \"acc\": 0.0,\n \"acc_stderr\": 0.0\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.5090765588003157,\n \"acc_stderr\": 0.014050170094497704\n\
\ }\n}\n```"
repo_url: https://huggingface.co/mncai/SGPT-1.3B-insurance-epoch10
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_drop_3
data_files:
- split: 2023_09_18T00_09_04.877490
path:
- '**/details_harness|drop|3_2023-09-18T00-09-04.877490.parquet'
- split: latest
path:
- '**/details_harness|drop|3_2023-09-18T00-09-04.877490.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2023_09_18T00_09_04.877490
path:
- '**/details_harness|gsm8k|5_2023-09-18T00-09-04.877490.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2023-09-18T00-09-04.877490.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2023_09_18T00_09_04.877490
path:
- '**/details_harness|winogrande|5_2023-09-18T00-09-04.877490.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2023-09-18T00-09-04.877490.parquet'
- config_name: results
data_files:
- split: 2023_09_18T00_09_04.877490
path:
- results_2023-09-18T00-09-04.877490.parquet
- split: latest
path:
- results_2023-09-18T00-09-04.877490.parquet
---
# Dataset Card for Evaluation run of mncai/SGPT-1.3B-insurance-epoch10
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/mncai/SGPT-1.3B-insurance-epoch10
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** clementine@hf.co
### Dataset Summary
Dataset automatically created during the evaluation run of model [mncai/SGPT-1.3B-insurance-epoch10](https://huggingface.co/mncai/SGPT-1.3B-insurance-epoch10) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_mncai__SGPT-1.3B-insurance-epoch10",
"harness_winogrande_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-09-18T00:09:04.877490](https://huggingface.co/datasets/open-llm-leaderboard/details_mncai__SGPT-1.3B-insurance-epoch10/blob/main/results_2023-09-18T00-09-04.877490.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each one in the results and the "latest" split for each eval):
```python
{
"all": {
"em": 0.0,
"em_stderr": 0.0,
"f1": 1.99244966442953e-05,
"f1_stderr": 5.6438034448796525e-06,
"acc": 0.25453827940015783,
"acc_stderr": 0.007025085047248852
},
"harness|drop|3": {
"em": 0.0,
"em_stderr": 0.0,
"f1": 1.99244966442953e-05,
"f1_stderr": 5.6438034448796525e-06
},
"harness|gsm8k|5": {
"acc": 0.0,
"acc_stderr": 0.0
},
"harness|winogrande|5": {
"acc": 0.5090765588003157,
"acc_stderr": 0.014050170094497704
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] | 7,106 | [
[
-0.024749755859375,
-0.04638671875,
0.01299285888671875,
0.020721435546875,
-0.0108642578125,
0.0025920867919921875,
-0.0224456787109375,
-0.0163726806640625,
0.035400390625,
0.04205322265625,
-0.051025390625,
-0.0682373046875,
-0.04638671875,
0.008224487304... |
luisa879862/realworldQA | 2023-09-19T09:31:49.000Z | [
"region:us"
] | luisa879862 | null | null | 0 | 3 | 2023-09-19T09:29:31 | Entry not found | 15 | [
[
-0.0214080810546875,
-0.01494598388671875,
0.05718994140625,
0.02880859375,
-0.0350341796875,
0.046478271484375,
0.052520751953125,
0.00505828857421875,
0.051361083984375,
0.016998291015625,
-0.052093505859375,
-0.01494598388671875,
-0.06036376953125,
0.0379... |
TheAIchemist13/gramVaani-dataset-test | 2023-09-20T13:16:59.000Z | [
"region:us"
] | TheAIchemist13 | null | null | 0 | 3 | 2023-09-20T13:16:52 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: audio
dtype: audio
- name: transcription
dtype: string
splits:
- name: train
num_bytes: 64498564.656
num_examples: 1032
download_size: 63040623
dataset_size: 64498564.656
---
# Dataset Card for "gramVaani-dataset-test"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 503 | [
[
-0.040802001953125,
-0.031036376953125,
0.00634765625,
0.00948333740234375,
-0.012786865234375,
-0.0037364959716796875,
0.01143646240234375,
-0.0023937225341796875,
0.052398681640625,
0.020904541015625,
-0.051727294921875,
-0.05645751953125,
-0.04241943359375,
... |
TheAIchemist13/gramVaani-dataset-train | 2023-09-20T13:26:16.000Z | [
"region:us"
] | TheAIchemist13 | null | null | 0 | 3 | 2023-09-20T13:22:08 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
splits:
- name: train
num_bytes: 417332519.528
num_examples: 37152
download_size: 1953825846
dataset_size: 417332519.528
---
# Dataset Card for "gramVaani-dataset-train"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 545 | [
[
-0.035736083984375,
-0.006622314453125,
0.01171112060546875,
0.0113983154296875,
-0.016510009765625,
-0.004467010498046875,
0.01007080078125,
0.0008015632629394531,
0.051788330078125,
0.02288818359375,
-0.056182861328125,
-0.0501708984375,
-0.05108642578125,
... |
Aaryan333/MisaHub_WCE_train_val | 2023-09-20T16:01:26.000Z | [
"region:us"
] | Aaryan333 | null | null | 0 | 3 | 2023-09-20T16:00:54 | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': bleeding
'1': non_bleeding
splits:
- name: train
num_bytes: 131095275.4041589
num_examples: 2094
- name: validation
num_bytes: 32084848.5118411
num_examples: 524
download_size: 162184262
dataset_size: 163180123.916
---
# Dataset Card for "MisaHub_WCE_train_val"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 569 | [
[
-0.050201416015625,
0.000059485435485839844,
0.00026226043701171875,
0.0294952392578125,
-0.0129852294921875,
-0.0166168212890625,
0.038116455078125,
-0.01024627685546875,
0.0477294921875,
0.0305023193359375,
-0.06494140625,
-0.04388427734375,
-0.05047607421875,... |
changjacHp/lol_champion_top3_tips | 2023-09-21T06:24:53.000Z | [
"region:us"
] | changjacHp | null | null | 0 | 3 | 2023-09-21T06:24:24 | Entry not found | 15 | [
[
-0.0214080810546875,
-0.01494598388671875,
0.05718994140625,
0.02880859375,
-0.0350341796875,
0.046478271484375,
0.052520751953125,
0.00505828857421875,
0.051361083984375,
0.016998291015625,
-0.052093505859375,
-0.01494598388671875,
-0.06036376953125,
0.0379... |
chargoddard/coedit-reworded | 2023-09-21T07:14:35.000Z | [
"task_categories:text-generation",
"size_categories:10K<n<100K",
"language:en",
"license:apache-2.0",
"arxiv:2305.09857",
"region:us"
] | chargoddard | null | null | 2 | 3 | 2023-09-21T06:53:36 | ---
dataset_info:
features:
- name: task
dtype: string
- name: id
dtype: string
- name: original_instruction
dtype: string
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 24317220
num_examples: 82466
download_size: 12064503
dataset_size: 24317220
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
license: apache-2.0
task_categories:
- text-generation
language:
- en
size_categories:
- 10K<n<100K
---
# coedit-reworded
This is Grammarly's [coedit](https://huggingface.co/datasets/grammarly/coedit) dataset parsed into Alpaca-style `instruction`, `input`, and `output` rows, with the original `instruction` values replaced with a more diverse set of procedurally generated instructions. Contains 23930 unique values of `instruction`, as compared to the original 144. See [`coedit_reword.py`](https://huggingface.co/datasets/chargoddard/coedit-reworded/blob/main/coedit_reword.py) for how these were generated.
All credit to the original authors of this dataset.
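A minimal loading sketch (split and field names as listed in the metadata above):
```python
from datasets import load_dataset

ds = load_dataset("chargoddard/coedit-reworded", split="train")

row = ds[0]
print(row["instruction"])           # procedurally reworded instruction
print(row["original_instruction"])  # one of the 144 original instructions
print(row["input"], "->", row["output"])
```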
# Citation
```
@article{raheja2023coedit,
title={CoEdIT: Text Editing by Task-Specific Instruction Tuning},
author={Vipul Raheja and Dhruv Kumar and Ryan Koo and Dongyeop Kang},
year={2023},
eprint={2305.09857},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` | 1,428 | [
[
0.002925872802734375,
-0.05767822265625,
0.01306915283203125,
0.0343017578125,
0.00031757354736328125,
-0.00980377197265625,
-0.029998779296875,
-0.00894927978515625,
0.0160675048828125,
0.035675048828125,
-0.08575439453125,
-0.04168701171875,
-0.041168212890625... |
pphuc25/bailamvan | 2023-09-21T07:01:32.000Z | [
"region:us"
] | pphuc25 | null | null | 0 | 3 | 2023-09-21T06:57:20 | ---
dataset_info:
features:
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 9514569
num_examples: 888
download_size: 4680823
dataset_size: 9514569
---
# Dataset Card for "bailamvan"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 384 | [
[
-0.04534912109375,
-0.00847625732421875,
0.0164337158203125,
0.0172576904296875,
-0.01459503173828125,
-0.00740814208984375,
0.01244354248046875,
-0.0041351318359375,
0.06634521484375,
0.043792724609375,
-0.051361083984375,
-0.058868408203125,
-0.04937744140625,... |
thanhduycao/soict_train_dataset | 2023-09-21T15:05:06.000Z | [
"region:us"
] | thanhduycao | null | null | 0 | 3 | 2023-09-21T13:04:13 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: id
dtype: string
- name: sentence
dtype: string
- name: intent
dtype: string
- name: sentence_annotation
dtype: string
- name: entities
list:
- name: type
dtype: string
- name: filler
dtype: string
- name: file
dtype: string
- name: audio
struct:
- name: array
sequence: float64
- name: path
dtype: string
- name: sampling_rate
dtype: int64
- name: origin_transcription
dtype: string
- name: sentence_norm
dtype: string
- name: sentence_norm_v2
dtype: string
splits:
- name: train
num_bytes: 3484626224
num_examples: 6729
- name: test
num_bytes: 390303091
num_examples: 748
download_size: 918877822
dataset_size: 3874929315
---
# Dataset Card for "soict_train_dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 1,099 | [
[
-0.0360107421875,
-0.006519317626953125,
0.01111602783203125,
0.02117919921875,
-0.0123291015625,
-0.00922393798828125,
0.01849365234375,
-0.0012969970703125,
0.0574951171875,
0.0310516357421875,
-0.0682373046875,
-0.034912109375,
-0.041778564453125,
-0.0235... |
TrainingDataPro/dogs-video-object-tracking-dataset | 2023-10-09T09:43:57.000Z | [
"task_categories:image-to-image",
"task_categories:object-detection",
"language:en",
"license:cc-by-nc-nd-4.0",
"code",
"biology",
"region:us"
] | TrainingDataPro | The dataset contains frames extracted from videos with dogs on the streets.
Each frame is accompanied by a **bounding box** that specifically **tracks the dog**
in the image.
The dataset provides a valuable resource for advancing computer vision tasks,
enabling the development of more accurate and effective solutions for monitoring and
understanding dog behavior in urban settings. | @InProceedings{huggingface:dataset,
title = {dogs-video-object-tracking-dataset},
author = {TrainingDataPro},
year = {2023}
} | 1 | 3 | 2023-09-21T13:27:45 | ---
language:
- en
license: cc-by-nc-nd-4.0
task_categories:
- image-to-image
- object-detection
tags:
- code
- biology
dataset_info:
- config_name: video_01
features:
- name: id
dtype: int32
- name: name
dtype: string
- name: image
dtype: image
- name: mask
dtype: image
- name: shapes
sequence:
- name: track_id
dtype: uint32
- name: label
dtype:
class_label:
names:
'0': dog
- name: type
dtype: string
- name: points
sequence:
sequence: float32
- name: rotation
dtype: float32
- name: occluded
dtype: uint8
- name: attributes
sequence:
- name: name
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 14990
num_examples: 52
download_size: 313328015
dataset_size: 14990
- config_name: video_02
features:
- name: id
dtype: int32
- name: name
dtype: string
- name: image
dtype: image
- name: mask
dtype: image
- name: shapes
sequence:
- name: track_id
dtype: uint32
- name: label
dtype:
class_label:
names:
'0': dog
- name: type
dtype: string
- name: points
sequence:
sequence: float32
- name: rotation
dtype: float32
- name: occluded
dtype: uint8
- name: attributes
sequence:
- name: name
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 19600
num_examples: 58
download_size: 67354761
dataset_size: 19600
- config_name: video_03
features:
- name: id
dtype: int32
- name: name
dtype: string
- name: image
dtype: image
- name: mask
dtype: image
- name: shapes
sequence:
- name: track_id
dtype: uint32
- name: label
dtype:
class_label:
names:
'0': dog
- name: type
dtype: string
- name: points
sequence:
sequence: float32
- name: rotation
dtype: float32
- name: occluded
dtype: uint8
- name: attributes
sequence:
- name: name
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 14126
num_examples: 49
download_size: 148412090
dataset_size: 14126
---
# Dogs Video Object Tracking Dataset
The dataset contains frames extracted from videos with dogs on the streets. Each frame is accompanied by a **bounding box** that specifically **tracks the dog** in the image.
The dataset provides a valuable resource for advancing computer vision tasks, enabling the development of more accurate and effective solutions for monitoring and understanding dog behavior in urban settings.

# Get the dataset
### This is just an example of the data
Leave a request on [**https://trainingdata.pro/data-market**](https://trainingdata.pro/data-market?utm_source=huggingface&utm_medium=cpc&utm_campaign=dogs-video-object-tracking-dataset) to discuss your requirements, learn about the price and buy the dataset.
# Dataset structure
The dataset consists of 3 folders with frames from videos with dogs on the streets.
Each folder includes:
- **images**: folder with original frames from the video,
- **boxes**: visualized data labeling for the images in the previous folder,
- **.csv file**: file with id and path of each frame in the "images" folder,
- **annotations.xml**: contains coordinates of the bounding boxes, created for the original frames
# Data Format
Each frame from the `images` folder is accompanied by an XML-annotation in the `annotations.xml` file indicating the coordinates of the bounding boxes for dog tracking. For each point, the x and y coordinates are provided.
# Example of the XML-file
.png?generation=1695994709378514&alt=media)
# Object tracking can be performed in accordance with your requirements.
## **[TrainingData](https://trainingdata.pro/data-market?utm_source=huggingface&utm_medium=cpc&utm_campaign=dogs-video-object-tracking-dataset)** provides high-quality data annotation tailored to your needs
More datasets in TrainingData's Kaggle account: **https://www.kaggle.com/trainingdatapro/datasets**
TrainingData's GitHub: **https://github.com/Trainingdata-datamarket/TrainingData_All_datasets** | 4,689 | [
[
-0.042694091796875,
-0.037933349609375,
0.01435089111328125,
-0.0156402587890625,
-0.021484375,
-0.010772705078125,
0.00595855712890625,
-0.017120361328125,
0.023529052734375,
0.038787841796875,
-0.059173583984375,
-0.059722900390625,
-0.038543701171875,
-0.... |
AlekseyKorshuk/PIPPA-lmgym | 2023-09-21T22:06:20.000Z | [
"region:us"
] | AlekseyKorshuk | null | null | 4 | 3 | 2023-09-21T20:13:58 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: input_text
dtype: string
- name: output_text
dtype: string
splits:
- name: train
num_bytes: 32569932093
num_examples: 398603
download_size: 443538444
dataset_size: 32569932093
---
# Dataset Card for "PIPPA-lmgym"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 497 | [
[
-0.032745361328125,
-0.007232666015625,
0.0084686279296875,
0.015838623046875,
-0.021728515625,
-0.01023101806640625,
0.030609130859375,
-0.0161895751953125,
0.06640625,
0.027618408203125,
-0.049285888671875,
-0.0487060546875,
-0.061767578125,
-0.01832580566... |
kewu93/three_styles_prompted_250_512x512 | 2023-09-21T23:53:43.000Z | [
"region:us"
] | kewu93 | null | null | 0 | 3 | 2023-09-21T23:51:24 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: val
path: data/val-*
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
- name: style_class
dtype: string
splits:
- name: train
num_bytes: 17235209.8
num_examples: 600
- name: val
num_bytes: 4420404.2
num_examples: 150
download_size: 21435960
dataset_size: 21655614.0
---
# Dataset Card for "three_styles_prompted_250_512x512"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 637 | [
[
-0.04412841796875,
-0.0203399658203125,
0.0182647705078125,
0.048797607421875,
-0.0142059326171875,
-0.0170440673828125,
0.0119781494140625,
-0.00896453857421875,
0.0723876953125,
0.038360595703125,
-0.06463623046875,
-0.04718017578125,
-0.0221710205078125,
... |
cris177/Arguments | 2023-10-04T09:02:42.000Z | [
"region:us"
] | cris177 | null | null | 1 | 3 | 2023-09-22T10:33:04 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
NyxSlee/cool_new_dataset | 2023-09-22T19:02:38.000Z | [
"region:us"
] | NyxSlee | null | null | 0 | 3 | 2023-09-22T19:02:36 | ---
dataset_info:
features:
- name: name
dtype: string
- name: description
dtype: string
- name: price
dtype: float64
- name: color
dtype: string
- name: size
sequence: string
- name: ad
dtype: string
splits:
- name: train
num_bytes: 5020
num_examples: 5
download_size: 11617
dataset_size: 5020
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "cool_new_dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 609 | [
[
-0.0533447265625,
-0.0220947265625,
0.0018463134765625,
0.00702667236328125,
-0.0219879150390625,
0.007305145263671875,
0.01276397705078125,
-0.004852294921875,
0.07513427734375,
0.029144287109375,
-0.055145263671875,
-0.05767822265625,
-0.0360107421875,
-0.... |
SagarDas07/mining | 2023-09-23T04:41:49.000Z | [
"region:us"
] | SagarDas07 | null | null | 0 | 3 | 2023-09-23T03:48:28 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
JB/mimic-cxr-rrg | 2023-09-23T06:22:52.000Z | [
"region:us"
] | JB | null | null | 0 | 3 | 2023-09-23T06:22:47 | ---
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
dataset_info:
features:
- name: id
dtype: int64
- name: image
dtype: image
- name: impression
dtype: string
splits:
- name: test
num_bytes: 14124813.0
num_examples: 100
download_size: 14118845
dataset_size: 14124813.0
---
# Dataset Card for "mimic-cxr-rrg"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 513 | [
[
-0.042022705078125,
-0.020111083984375,
0.0017690658569335938,
0.01525115966796875,
-0.01212310791015625,
0.01274871826171875,
0.0179290771484375,
-0.023468017578125,
0.05841064453125,
0.0255126953125,
-0.071533203125,
-0.04595947265625,
-0.045196533203125,
... |
phusroyal/ViHOS | 2023-09-23T19:02:18.000Z | [
"task_categories:text-classification",
"task_categories:token-classification",
"task_ids:hate-speech-detection",
"annotations_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:vi",
"license:mit",
"region:us"
] | phusroyal | This is a dataset of Vietnamese Hate and Offensive Spans dataset from social media texts. | null | 2 | 3 | 2023-09-23T13:38:53 | ---
annotations_creators:
- crowdsourced
license: mit
multilinguality:
- monolingual
source_datasets:
- original
task_ids:
- hate-speech-detection
task_categories:
- text-classification
- token-classification
language:
- vi
pretty_name: ViHOS - Vietnamese Hate and Offensive Spans Dataset
size_categories:
- 10K<n<100K
configs:
- config_name: default
data_files:
- split: train_sequence_labeling
path:
- "train_sequence_labeling/syllable/train_BIO_syllable.csv"
- "train_sequence_labeling/syllable/dev_BIO_syllable.csv"
- "train_sequence_labeling/syllable/test_BIO_syllable.csv"
- "train_sequence_labeling/word/train_BIO_syllable.csv"
- "train_sequence_labeling/word/dev_BIO_syllable.csv"
- "train_sequence_labeling/word/test_BIO_syllable.csv"
- split: train_span_extraction
path:
- 'train_span_extraction/train.csv'
- 'train_span_extraction/dev.csv'
- split: test
path: "test/test.csv"
---
**Disclaimer**: This project contains real comments that could be considered profane, offensive, or abusive.
# Dataset Card for "ViHOS - Vietnamese Hate and Offensive Spans Dataset"
## Dataset Description
- **Repository:** [ViHOS](https://github.com/phusroyal/ViHOS)
- **Paper:** [EACL-ViHOS](https://aclanthology.org/2023.eacl-main.47/)
- **Total amount of disk used:** 2.6 MB
## Dataset Motivation
The rise in hateful and offensive language directed at other users is one of the adverse side effects of the increased use of social networking platforms. This could make it difficult for human moderators to review tagged comments filtered by classification systems.
To help address this issue, we present the ViHOS (**Vi**etnamese **H**ate and **O**ffensive **S**pans) dataset, the first human-annotated corpus containing 26k spans on 11k online comments.
Our goal is to create a dataset that contains comprehensive hate and offensive thoughts, meanings, or opinions within the comments rather than just a lexicon of hate and offensive terms.
We also provide definitions of hateful and offensive spans in Vietnamese comments as well as detailed annotation guidelines. Furthermore, our solutions to deal with *nine different online foul linguistic phenomena* are also provided in the [*paper*](https://aclanthology.org/2023.eacl-main.47/) (e.g. Teencodes; Metaphors, metonymies; Hyponyms; Puns...).
We hope that this dataset will be useful for researchers and practitioners in the field of hate speech detection in general and hate spans detection in particular.
## Dataset Summary
ViHOS contains 26,476 human-annotated spans on 11,056 comments (5,360 comments have hate and offensive spans, and 5,696 comments do not).
It is split into train, dev, and test sets with the following sizes:
1. Train set: 8,844 comments
2. Dev set: 1,106 comments
3. Test set: 1,106 comments
## Data Instance
A span extraction-based example (see Data Structure for more details) of 'test' looks as follows:
```
{
"content": "Thối CC chỉ không ngửi đuợc thôi",
'index_spans': "[0, 1, 2, 3, 5, 6]"
}
```
A sequence labeling-based example (see Data Structure for more details) of 'test' looks as follows:
```
{
"content": "Thối CC chỉ không ngửi đuợc thôi",
'index_spans': ["B-T", "I-T", "O", "O", "O", "O", "O"]
}
```
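In the span extraction form shown above, `index_spans` is a list of character indices into `content`. A minimal sketch to recover the annotated spans (consecutive flagged indices are merged into one span):
```python
content = "Thối CC chỉ không ngửi đuợc thôi"
index_spans = [0, 1, 2, 3, 5, 6]

# Group consecutive flagged character indices into contiguous spans.
flagged = set(index_spans)
spans, current = [], ""
for i, ch in enumerate(content):
    if i in flagged:
        current += ch
    elif current:
        spans.append(current)
        current = ""
if current:
    spans.append(current)
print(spans)  # ['Thối', 'CC']
```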
## Data Structure
Here is our data folder structure!
```
.
└── data/
├── train_sequence_labeling/
│ ├── syllable/
│ │ ├── dev_BIO_syllable.csv
│ │ ├── test_BIO_syllable.csv
│ │ └── train_BIO_syllable.csv
│ └── word/
│ ├── dev_BIO_Word.csv
│ ├── test_BIO_Word.csv
│ └── train_BIO_Word.csv
├── train_span_extraction/
│ ├── dev.csv
│ └── train.csv
└── test/
└── test.csv
```
### Sequence labeling-based version
#### Syllable
Description:
- This folder contains the data for the sequence labeling-based version of the task. The data is divided into two files: train and dev. Each file contains the following columns:
- **index**: The id of the word.
- **word**: Words in the sentence after tokenization with the [VnCoreNLP](https://github.com/vncorenlp/VnCoreNLP) tokenizer, followed by splitting on underscores (see the sketch after this list).
The reason for this is that space tokenization leaves some words in a bad format:
e.g. "điện.thoại của tôi" is split into ["điện.thoại", "của", "tôi"] instead of ["điện", "thoại", "của", "tôi"] if we use space tokenization, which is not the right format for syllables.
Because of that, we used VnCoreNLP to tokenize first and then split the word tokens into syllables:
e.g. "điện.thoại của tôi" ---(VnCoreNLP)---> ["điện_thoại", "của", "tôi"] ---(split by "_")---> ["điện", "thoại", "của", "tôi"].
- **tag**: The tag of the word. The tag is either B-T (beginning of a word), I-T (inside of a word), or O (outside of a word).
- The train_BIO_syllable and dev_BIO_syllable files are used for training and validation for the XLM-R model, respectively.
- The test_BIO_syllable file is for reference only. It is not used for testing the model. **Please use the test.csv file in the data/test folder for testing the model.**
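As referenced above, here is a minimal sketch of the underscore-splitting step; the VnCoreNLP tokenization itself is assumed to have already been run:
```python
def underscore_to_syllables(tokens):
    # Split each VnCoreNLP word token on "_" to recover syllables.
    syllables = []
    for token in tokens:
        syllables.extend(token.split("_"))
    return syllables

# VnCoreNLP output for "điện.thoại của tôi", as described above;
# running VnCoreNLP itself is out of scope for this sketch.
print(underscore_to_syllables(["điện_thoại", "của", "tôi"]))
# -> ['điện', 'thoại', 'của', 'tôi']
```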
#### Word
Description:
- This folder contains the data for the sequence labeling-based version of the task. The data is divided into two files: train and dev. Each file contains the following columns:
- **index**: The id of the word.
- **word**: Words in the sentence after the processing of tokenization using [VnCoreNLP](https://github.com/vncorenlp/VnCoreNLP) tokenizer
- **tag**: The tag of the word. The tag is either B-T (beginning of a word), I-T (inside of a word), or O (outside of a word).
- The train_BIO_Word and dev_BIO_Word files are used for training and validation for the PhoBERT model, respectively.
- The test_BIO_Word file is for reference only. It is not used for testing the model. **Please use the test.csv file in the data/test folder for testing the model.**
### Span Extraction-based version
Description:
- This folder contains the data for the span extraction-based version of the task. The data is divided into two files: train and dev. Each file contains the following columns:
- **content**: The content of the sentence.
- **span_ids**: The index of the hate and offensive spans in the sentence. The index is in the format of [start, end] where start is the index of the first character of the hate and offensive span and end is the index of the last character of the hate and offensive span.
- The train and dev files are used for training and validation for the BiLSTM-CRF model, respectively.
### Citation Information
```
@inproceedings{hoang-etal-2023-vihos,
title = "{V}i{HOS}: Hate Speech Spans Detection for {V}ietnamese",
author = "Hoang, Phu Gia and
Luu, Canh Duc and
Tran, Khanh Quoc and
Nguyen, Kiet Van and
Nguyen, Ngan Luu-Thuy",
booktitle = "Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics",
month = may,
year = "2023",
address = "Dubrovnik, Croatia",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.eacl-main.47",
doi = "10.18653/v1/2023.eacl-main.47",
pages = "652--669",
abstract = "The rise in hateful and offensive language directed at other users is one of the adverse side effects of the increased use of social networking platforms. This could make it difficult for human moderators to review tagged comments filtered by classification systems. To help address this issue, we present the ViHOS (Vietnamese Hate and Offensive Spans) dataset, the first human-annotated corpus containing 26k spans on 11k comments. We also provide definitions of hateful and offensive spans in Vietnamese comments as well as detailed annotation guidelines. Besides, we conduct experiments with various state-of-the-art models. Specifically, XLM-R{\_}Large achieved the best F1-scores in Single span detection and All spans detection, while PhoBERT{\_}Large obtained the highest in Multiple spans detection. Finally, our error analysis demonstrates the difficulties in detecting specific types of spans in our data for future research. Our dataset is released on GitHub.",
}
``` | 8,170 | [
[
-0.020904541015625,
-0.061248779296875,
-0.006031036376953125,
0.0276336669921875,
-0.034515380859375,
-0.00397491455078125,
-0.0213623046875,
-0.0389404296875,
0.02301025390625,
0.0333251953125,
-0.0250244140625,
-0.049774169921875,
-0.0577392578125,
0.0287... |
seank0602/bluemoon_fandom_rp | 2023-09-23T19:40:42.000Z | [
"region:us"
] | seank0602 | null | null | 0 | 3 | 2023-09-23T19:40:25 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 260278392
num_examples: 3338
download_size: 152371862
dataset_size: 260278392
---
# Dataset Card for "bluemoon_fandom_rp"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 558 | [
[
-0.047088623046875,
-0.01165771484375,
-0.01209259033203125,
0.030548095703125,
-0.0216522216796875,
-0.0033416748046875,
0.006603240966796875,
-0.0192413330078125,
0.06927490234375,
0.033905029296875,
-0.08416748046875,
-0.0452880859375,
-0.02313232421875,
... |
tanvirsrbd1/srbd-test1-1_annotated_segmented | 2023-09-24T04:54:50.000Z | [
"region:us"
] | tanvirsrbd1 | null | null | 0 | 3 | 2023-09-24T04:54:44 | ---
dataset_info:
features:
- name: html
dtype: string
- name: response
dtype: string
splits:
- name: train
num_bytes: 1837883
num_examples: 2980
download_size: 607662
dataset_size: 1837883
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "srbd-test1-1_annotated_segmented"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 496 | [
[
-0.05059814453125,
-0.049072265625,
-0.00571441650390625,
0.034576416015625,
-0.04681396484375,
-0.004047393798828125,
0.03643798828125,
-0.008941650390625,
0.0657958984375,
0.037872314453125,
-0.0736083984375,
-0.055633544921875,
-0.036163330078125,
-0.0173... |
DavidLanz/alpaca-tw-input-output-52k | 2023-09-25T00:56:10.000Z | [
"task_categories:text-generation",
"task_categories:conversational",
"task_categories:question-answering",
"size_categories:10K<n<100K",
"language:en",
"license:cc-by-4.0",
"gpt3",
"alpaca",
"instruction-finetuning",
"region:us"
] | DavidLanz | null | null | 1 | 3 | 2023-09-25T00:54:42 | ---
task_categories:
- text-generation
- conversational
- question-answering
language:
- en
size_categories:
- 10K<n<100K
license: cc-by-4.0
tags:
- gpt3
- alpaca
- instruction-finetuning
---
# Dataset Card for "alpaca-tw-input-output-52k"
This dataset contains Traditional Chinese instruction-following data generated by GPT-3.5 using Alpaca prompts, intended for fine-tuning LLMs.
The dataset was originally shared in this repository: https://github.com/ntunlplab/traditional-chinese-alpaca. This is just a wrapper for compatibility with Hugging Face's `datasets` library.
## Dataset structure
It contains 52K instruction-following examples generated by GPT-3.5 using the same prompts as in Alpaca.
The dataset has the same format as Alpaca data, except the output is generated by GPT-3.5:
- `instruction`: `str`, describes the task the model should perform. Each of the 52K instructions is unique.
- `input`: `str`, optional context or input for the task.
- `output`: `str`, the answer to the instruction as generated by `GPT-3.5`.
## Difference with the original Alpaca dataset
The original Alpaca dataset used text-davinci-003 to complete the prompts. This dataset uses those same prompts, but generates the completions with GPT-3.5. Thus, in general, the responses are of higher quality and length. Here is an example:
#### Example from Alpaca-GPT3:
```bash
{
'instruction': '辨識那一個平台與其他不同。',
'input': 'Twitter, Instagram, Telegram',
'output': '在Twitter、Instagram和Telegram之間,Telegram是與其他兩者最不同的平台。'
}
```
## Licensing Information
The dataset is available under the [Creative Commons NonCommercial (CC BY-NC 4.0)](https://creativecommons.org/licenses/by-nc/4.0/legalcode). | 1,670 | [
[
-0.036041259765625,
-0.06097412109375,
0.033721923828125,
0.0216064453125,
-0.03924560546875,
-0.0166473388671875,
-0.005954742431640625,
-0.0287322998046875,
0.0256500244140625,
0.035614013671875,
-0.07720947265625,
-0.0584716796875,
-0.048553466796875,
0.0... |
abhinav-jha/abusive-calls | 2023-09-25T10:15:11.000Z | [
"region:us"
] | abhinav-jha | null | null | 0 | 3 | 2023-09-25T10:14:26 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: audio
dtype: audio
- name: sentence
dtype: string
splits:
- name: train
num_bytes: 52348881.0
num_examples: 948
- name: test
num_bytes: 52348880.0
num_examples: 948
download_size: 95446094
dataset_size: 104697761.0
---
# Dataset Card for "abusive-calls"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 586 | [
[
-0.0234222412109375,
-0.042144775390625,
0.00604248046875,
0.028472900390625,
-0.0054168701171875,
0.0033435821533203125,
0.0223846435546875,
-0.0369873046875,
0.0421142578125,
0.046112060546875,
-0.045684814453125,
-0.042144775390625,
-0.037506103515625,
-0... |
Nicolas-BZRD/BALO_opendata | 2023-09-28T19:03:01.000Z | [
"size_categories:100K<n<1M",
"language:fr",
"license:odc-by",
"finance",
"legal",
"region:us"
] | Nicolas-BZRD | null | null | 0 | 3 | 2023-09-25T14:28:35 | ---
language:
- fr
license: odc-by
size_categories:
- 100K<n<1M
pretty_name: Bulletin of mandatory legal notices
dataset_info:
features:
- name: id
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 1106418284
num_examples: 135575
download_size: 439587100
dataset_size: 1106418284
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
tags:
- finance
- legal
---
# BALO (Bulletin of mandatory legal notices)
Announcements published in the [BALO](https://www.data.gouv.fr/en/datasets/balo/) (Bulletin des annonces légales obligatoires).
The BALO publishes compulsory notices for companies making public offerings and for banking and credit institutions. The announcements relate to all financial transactions, accounting documents and notices of shareholders' general meetings. | 865 | [
[
0.0063934326171875,
-0.023590087890625,
0.03143310546875,
0.042724609375,
-0.053253173828125,
-0.027374267578125,
0.0275726318359375,
-0.041778564453125,
0.053985595703125,
0.10369873046875,
-0.0164794921875,
-0.04150390625,
-0.0209197998046875,
-0.012992858... |
m0hammadjaan/Dummy-NED-Positions | 2023-09-27T11:43:21.000Z | [
"region:us"
] | m0hammadjaan | null | null | 0 | 3 | 2023-09-25T23:11:41 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
DavidLanz/yentinglin-traditional_mandarin_instructions | 2023-09-26T01:37:42.000Z | [
"task_categories:question-answering",
"task_categories:summarization",
"task_categories:text-generation",
"size_categories:10K<n<100K",
"language:zh",
"language:en",
"license:cc-by-sa-3.0",
"region:us"
] | DavidLanz | null | null | 0 | 3 | 2023-09-26T01:20:20 | ---
license: cc-by-sa-3.0
task_categories:
- question-answering
- summarization
- text-generation
language:
- zh
- en
size_categories:
- 10K<n<100K
---
Training dataset for the Language Models for Taiwanese Culture project.
## Citation
Please cite the repo if you use the data or code in this repo.
```
@inproceedings{lin-chen-2023-llm,
title = "{LLM}-Eval: Unified Multi-Dimensional Automatic Evaluation for Open-Domain Conversations with Large Language Models",
author = "Lin, Yen-Ting and Chen, Yun-Nung",
booktitle = "Proceedings of the 5th Workshop on NLP for Conversational AI (NLP4ConvAI 2023)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.nlp4convai-1.5",
pages = "47--58"
}
@misc{taiwanllama,
author={Lin, Yen-Ting and Chen, Yun-Nung},
title={Taiwanese-Aligned Language Models based on Meta-Llama2},
year={2023},
url={https://github.com/adamlin120/Taiwan-LLaMa},
note={Code and models available at https://github.com/adamlin120/Taiwan-LLaMa},
}
``` | 1,113 | [
[
-0.00379180908203125,
-0.044464111328125,
0.01739501953125,
0.0252227783203125,
-0.0195465087890625,
0.0184173583984375,
-0.039306640625,
-0.0213165283203125,
0.0157012939453125,
0.04248046875,
-0.032745361328125,
-0.05364990234375,
-0.022216796875,
0.014968... |
TanvirOnHF/fakescience | 2023-10-14T14:57:33.000Z | [
"size_categories:1K<n<10K",
"language:en",
"license:cdla-sharing-1.0",
"GPT-3.5",
"GPT-4",
"Claude",
"Bard",
"Alpaca",
"LLaMA",
"LLaMA-2",
"Vicuna",
"PaLM-2",
"region:us"
] | TanvirOnHF | null | null | 0 | 3 | 2023-09-26T06:44:09 | ---
license: cdla-sharing-1.0
pretty_name: FakeScience
tags:
- GPT-3.5
- GPT-4
- Claude
- Bard
- Alpaca
- LLaMA
- LLaMA-2
- Vicuna
- PaLM-2
language:
- en
size_categories:
- 1K<n<10K
---
# FakeScience [JSON dataset]
A dataset comprising artificially generated fictitious scientific theories, concepts, facts, laws and hypotheses, derived from a diverse array of Large Language Models (LLMs) such as GPT-3.5, GPT-4, Claude, Bard, Alpaca, LLaMA, LLaMA-2, Vicuna, and PaLM-2.
## Dataset Contents
The dataset includes fictitious scientific entries, each with a title, a brief yet imaginative description, and a type (concept, fact, hypothesis, law, or theory). These entries are entirely fictional and should not be mistaken for real scientific knowledge.
## Prompt
The prompt used:
```json
Generate a JSON-formatted list of fictitious scientific entries, each comprising with a title, a description, and an assigned type (concept, fact, hypothesis, law, or theory). All the entries should be crafted to appear logically plausible, creating an illusion of scientific credibility.
Adhere to the specified structure:
'''json
[
{
"title": "...",
"description": "...",
"type": "concept"
},
{
"title": "...",
"description": "...",
"type": "fact"
},
{
"title": "...",
"description": "...",
"type": "hypothesis"
},
{
"title": "...",
"description": "...",
"type": "law"
},
{
"title": "...",
"description": "...",
"type": "theory"
}
]
'''
```
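A small sketch for checking generated output against the schema in the prompt above; `fakescience.json` is a hypothetical filename, so point it at your own JSON list:
```python
import json

VALID_TYPES = {"concept", "fact", "hypothesis", "law", "theory"}

# "fakescience.json" is a hypothetical filename; point this at the
# JSON list produced by the prompt above.
with open("fakescience.json", encoding="utf-8") as f:
    entries = json.load(f)

for i, entry in enumerate(entries):
    assert {"title", "description", "type"} <= entry.keys(), f"entry {i}: missing keys"
    assert entry["type"] in VALID_TYPES, f"entry {i}: bad type {entry['type']!r}"
print(f"{len(entries)} entries look well-formed")
```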
## Disclaimer
Please note that while I strive to maintain data quality, I cannot guarantee the accuracy or quality of all entries in this dataset. Use it responsibly and exercise caution when relying on the data for any critical applications. Your feedback and contributions are greatly appreciated for improving the dataset's overall quality.
| 1,885 | [
[
-0.0010766983032226562,
-0.047607421875,
0.05059814453125,
0.00974273681640625,
-0.01123809814453125,
0.035552978515625,
0.009246826171875,
-0.0087432861328125,
0.04315185546875,
0.0394287109375,
-0.0255584716796875,
-0.045379638671875,
-0.04010009765625,
0.... |
ekshat/text-2-sql-with-context | 2023-09-26T07:18:08.000Z | [
"region:us"
] | ekshat | null | null | 0 | 3 | 2023-09-26T06:50:06 | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 32317282.06065388
num_examples: 74648
- name: test
num_bytes: 1700977.939346119
num_examples: 3929
download_size: 8982199
dataset_size: 34018260.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
# Dataset Card for "text-2-sql-with-context"
This dataset is prepared in the Alpaca format introduced by Stanford for training LLMs. It has been used to fine-tune Chat Llama-2 7B. For more information, please visit: https://huggingface.co/ekshat/Llama-2-7b-chat-finetune-for-text2sql | 688 | [
[
-0.019744873046875,
-0.06390380859375,
0.0089263916015625,
0.038543701171875,
-0.062225341796875,
-0.0214691162109375,
-0.002941131591796875,
-0.0277557373046875,
0.041015625,
0.060577392578125,
-0.0625,
-0.034912109375,
-0.03717041015625,
-0.000769615173339... |
lowem1/cc_news_ocr | 2023-09-26T07:07:03.000Z | [
"region:us"
] | lowem1 | null | null | 0 | 3 | 2023-09-26T07:06:55 | ---
dataset_info:
features:
- name: tag
dtype: string
- name: ocr_data
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 19826223
num_examples: 2000
download_size: 7547846
dataset_size: 19826223
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "cc_news_ocr"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 510 | [
[
-0.0282440185546875,
-0.019989013671875,
0.0301513671875,
-0.0113525390625,
-0.0210113525390625,
0.00391387939453125,
-0.00023508071899414062,
-0.020843505859375,
0.049072265625,
0.0423583984375,
-0.043426513671875,
-0.06781005859375,
-0.042877197265625,
-0.... |
NusaCrowd/stif_indonesia | 2023-09-26T12:29:52.000Z | [
"language:ind",
"license:mit",
"paraphrasing",
"region:us"
] | NusaCrowd | STIF-Indonesia is formal-informal (bahasa baku - bahasa alay/slang) style transfer for Indonesian. Texts were collected from Twitter. Then, native speakers were asked to transform the text into formal style. | @inproceedings{wibowo2020semi,
title={Semi-supervised low-resource style transfer of indonesian informal to formal language with iterative forward-translation},
author={Wibowo, Haryo Akbarianto and Prawiro, Tatag Aziz and Ihsan, Muhammad and Aji, Alham Fikri and Prasojo, Radityo Eko and Mahendra, Rahmad and Fitriany, Suci},
booktitle={2020 International Conference on Asian Language Processing (IALP)},
pages={310--315},
year={2020},
organization={IEEE}
} | 0 | 3 | 2023-09-26T11:13:58 | ---
license: mit
tags:
- paraphrasing
language:
- ind
---
# stif_indonesia
STIF-Indonesia is formal-informal (bahasa baku - bahasa alay/slang) style transfer for Indonesian. Texts were collected from Twitter. Then, native speakers were asked to transform the text into formal style.
## Dataset Usage
Run `pip install nusacrowd` before loading the dataset through HuggingFace's `load_dataset`.
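A minimal loading sketch, assuming the default config of this repo works out of the box; available configs and split names may differ, so inspect the repo if it does not:
```python
from datasets import load_dataset

# Assumes the default config of this repo loads directly.
ds = load_dataset("NusaCrowd/stif_indonesia")
print(ds)
```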
## Citation
```
@inproceedings{wibowo2020semi,
title={Semi-supervised low-resource style transfer of indonesian informal to formal language with iterative forward-translation},
author={Wibowo, Haryo Akbarianto and Prawiro, Tatag Aziz and Ihsan, Muhammad and Aji, Alham Fikri and Prasojo, Radityo Eko and Mahendra, Rahmad and Fitriany, Suci},
booktitle={2020 International Conference on Asian Language Processing (IALP)},
pages={310--315},
year={2020},
organization={IEEE}
}
```
## License
MIT
## Homepage
[https://github.com/haryoa/stif-indonesia](https://github.com/haryoa/stif-indonesia)
### NusaCatalogue
For easy indexing and metadata: [https://indonlp.github.io/nusa-catalogue](https://indonlp.github.io/nusa-catalogue) | 1,141 | [
[
-0.022216796875,
-0.06976318359375,
-0.00798797607421875,
0.04388427734375,
-0.04296875,
0.006389617919921875,
-0.033294677734375,
-0.057525634765625,
0.04840087890625,
0.039154052734375,
-0.0281219482421875,
-0.05596923828125,
-0.040191650390625,
0.05667114... |
NusaCrowd/nerp | 2023-09-26T12:34:00.000Z | [
"language:ind",
"named-entity-recognition",
"region:us"
] | NusaCrowd | The NERP dataset (Hoesen and Purwarianti, 2018) contains texts collected from several Indonesian news websites with five labels
- PER (name of person)
- LOC (name of location)
- IND (name of product or brand)
- EVT (name of the event)
- FNB (name of food and beverage).
NERP makes use of the IOB chunking format, just like the TermA dataset. | @inproceedings{hoesen2018investigating,
title={Investigating bi-lstm and crf with pos tag embedding for indonesian named entity tagger},
author={Hoesen, Devin and Purwarianti, Ayu},
booktitle={2018 International Conference on Asian Language Processing (IALP)},
pages={35--38},
year={2018},
organization={IEEE}
} | 0 | 3 | 2023-09-26T11:41:47 | ---
tags:
- named-entity-recognition
language:
- ind
---
# nerp
The NERP dataset (Hoesen and Purwarianti, 2018) contains texts collected from several Indonesian news websites with five labels
- PER (name of person)
- LOC (name of location)
- IND (name of product or brand)
- EVT (name of the event)
- FNB (name of food and beverage).
NERP makes use of the IOB chunking format, just like the TermA dataset.
## Dataset Usage
Run `pip install nusacrowd` before loading the dataset through HuggingFace's `load_dataset`.
## Citation
```
@inproceedings{hoesen2018investigating,
title={Investigating bi-lstm and crf with pos tag embedding for indonesian named entity tagger},
author={Hoesen, Devin and Purwarianti, Ayu},
booktitle={2018 International Conference on Asian Language Processing (IALP)},
pages={35--38},
year={2018},
organization={IEEE}
}
```
## License
Creative Common Attribution Share-Alike 4.0 International
## Homepage
[https://github.com/IndoNLP/indonlu](https://github.com/IndoNLP/indonlu)
### NusaCatalogue
For easy indexing and metadata: [https://indonlp.github.io/nusa-catalogue](https://indonlp.github.io/nusa-catalogue) | 1,166 | [
[
-0.04144287109375,
-0.048065185546875,
-0.003932952880859375,
0.035614013671875,
-0.03485107421875,
-0.01259613037109375,
-0.0007381439208984375,
-0.038421630859375,
0.04949951171875,
0.052947998046875,
-0.004245758056640625,
-0.0297393798828125,
-0.033020019531... |
webimmunization/COVID-19-conspiracy-theories-tweets | 2023-10-12T07:25:50.000Z | [
"task_categories:text-classification",
"size_categories:1K<n<10K",
"license:cc-by-4.0",
"twitter",
"social_science",
"misinformation",
"fake_news",
"conspiracy_theory",
"region:us"
] | webimmunization | null | null | 0 | 3 | 2023-09-28T11:49:47 | ---
license: cc-by-4.0
size_categories:
- 1K<n<10K
task_categories:
- text-classification
tags:
- twitter
- social_science
- misinformation
- fake_news
- conspiracy_theory
---
## Dataset Description
- **Paper:** [More Information Needed]
- **Point of Contact:** izabela.krysinska@doctorate.put.poznan.pl
### Dataset Summary
This dataset consists of 6591 tweets generated by the GPT-3.5 model. Each tweet is juxtaposed with a conspiracy theory related to the COVID-19 pandemic, and each item carries a label that represents its output class. The possible labels are support/deny/neutral.
- **support**: the tweet suggests support for the conspiracy theory
- **deny**: the tweet contradicts the conspiracy theory
- **neutral**: the tweet is mostly informative and takes no stance toward the conspiracy theory
The dataset can be used to train a classification model.
### Languages
English
## Dataset Structure
### Data Instances
```
{
'tweet': 'Is the Chinese government exploiting the pandemic to gain an economic advantage? #COVIDEconomy #ChineseTradeWar',
'conspiracy_theory': 'CT_3',
'label': 'support'
}
```
### Data Fields
- `tweet`: a text generated by GPT-3.5 (input)
- `conspiracy theory`: a conspiracy theory identifier
- `label`: label, support/deny/neutral
Conspiracy theories mapping:
1. **CT1: Vaccines are unsafe.** The coronavirus vaccine is either unsafe or part of a larger plot to control people or reduce the population.
2. **CT2: Governments and politicians spread misinformation.** Politicians or government agencies are intentionally spreading false information, or they have some other motive for the way they are responding to the coronavirus.
3. **CT3: The Chinese intentionally spread the virus.** The Chinese government intentionally created or spread the coronavirus to harm other countries.
4. **CT4: Deliberate strategy to create economic instability or benefit large corporations.** The coronavirus or the government's response to it is a deliberate strategy to create economic instability or to benefit large corporations over small businesses.
5. **CT5: Public was intentionally misled about the true nature of the virus and prevention.** The public is being intentionally misled about the true nature of the Coronavirus, its risks, or the efficacy of certain treatments or prevention methods.
6. **CT6: Human made and bioweapon.** The Coronavirus was created intentionally, made by humans, or as a bioweapon.
### Data Splits
The dataset contains only a training split, which consists of 6591 items.
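Given the `tweet` and `label` fields above, a minimal preprocessing sketch for classifier training might look like the following; the label-to-id mapping is an arbitrary choice, not part of the dataset:
```python
from datasets import load_dataset

label2id = {"support": 0, "deny": 1, "neutral": 2}  # arbitrary ordering

ds = load_dataset("webimmunization/COVID-19-conspiracy-theories-tweets",
                  split="train")

def encode(example):
    # Map the string stance label to an integer class id.
    example["label_id"] = label2id[example["label"]]
    return example

ds = ds.map(encode)
print(ds[0]["tweet"], ds[0]["label_id"])
```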
## Dataset Creation
The dataset was generated with GPT-3.5 with the following prompts for support, deny, and neutral class respectively:
**support** Consider the following conspiracy theory: X. Generate 50 tweets that support this conspiracy theory. Try to use hashtags that might promote this particular conspiracy theory. Try to use words and terms related to the COVID pandemic. Do not quote the conspiracy theory verbatim. Do not repeat tweets and try to make them diversified. Keep each tweet below the 280 character length limit. Present the tweets as a list.
**deny** Consider the following conspiracy theory: X. Generate 50 tweets that contradict this conspiracy theory. Try to use hashtags that might debunk this particular conspiracy theory. Try to use words and terms related to the COVID pandemic. Do not quote the conspiracy theory verbatim. Do not repeat tweets and try to make them diversified. Keep each tweet below the 280 character length limit. Present the tweets as a list.
**neutral** Consider the following conspiracy theory: X. Generate 50 tweets that are about COVID-19 but unrelated to the conspiracy theory. Try to use hashtags that might be used in such a tweet. Try to use words and terms related to the COVID pandemic. Do not quote the conspiracy theory verbatim. Do not repeat tweets and try to make them diversified. Keep each tweet below the 280 character length limit. Present the tweets as a list.
### Known Limitations
The generated tweets are sometimes formulaic and lack diversity.
### Citation Information
```
@article{article_id,
author = {Author List},
title = {Dataset Paper Title},
journal = {Publication Venue},
year = {2525}
}
```
| 4,290 | [
[
-0.023529052734375,
-0.06243896484375,
0.01541900634765625,
0.0192718505859375,
-0.01491546630859375,
0.00434112548828125,
-0.023529052734375,
-0.032745361328125,
0.020965576171875,
0.00981903076171875,
-0.034088134765625,
-0.048797607421875,
-0.0301971435546875... |
kewu93/dreambooth | 2023-09-28T16:38:30.000Z | [
"region:us"
] | kewu93 | null | null | 0 | 3 | 2023-09-28T16:38:17 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: val
path: data/val-*
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 63956933.0
num_examples: 90
- name: val
num_bytes: 47721308.0
num_examples: 68
download_size: 111584859
dataset_size: 111678241.0
---
# Dataset Card for "dreambooth"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 575 | [
[
-0.03570556640625,
-0.0065155029296875,
0.01435089111328125,
0.0130157470703125,
-0.02349853515625,
0.0264892578125,
0.024322509765625,
-0.01898193359375,
0.0802001953125,
0.035919189453125,
-0.05657958984375,
-0.045501708984375,
-0.0294036865234375,
-0.0279... |
Globaly/Categories-1k-Globaly | 2023-09-28T22:21:05.000Z | [
"region:us"
] | Globaly | null | null | 1 | 3 | 2023-09-28T22:20:52 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
learn3r/gov_report_memsum_bp | 2023-09-29T11:14:52.000Z | [
"region:us"
] | learn3r | null | null | 0 | 3 | 2023-09-29T11:14:11 | ---
dataset_info:
features:
- name: input
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 169706535
num_examples: 17457
- name: validation
num_bytes: 11085755
num_examples: 972
- name: test
num_bytes: 11134235
num_examples: 973
download_size: 87102306
dataset_size: 191926525
---
# Dataset Card for "gov_report_memsum_bp"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 532 | [
[
-0.03436279296875,
-0.0086212158203125,
0.018798828125,
0.0004360675811767578,
-0.020843505859375,
-0.0052947998046875,
0.03216552734375,
0.006099700927734375,
0.06011962890625,
0.03759765625,
-0.062744140625,
-0.057586669921875,
-0.048095703125,
-0.02133178... |
ChanceFocus/flare-zh-afqmc | 2023-10-01T08:12:00.000Z | [
"region:us"
] | ChanceFocus | null | null | 0 | 3 | 2023-10-01T08:10:56 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
ChanceFocus/flare-zh-corpus | 2023-10-01T08:13:38.000Z | [
"region:us"
] | ChanceFocus | null | null | 1 | 3 | 2023-10-01T08:13:01 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
ChanceFocus/flare-zh-stocka | 2023-10-01T08:14:10.000Z | [
"region:us"
] | ChanceFocus | null | null | 0 | 3 | 2023-10-01T08:13:57 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
ChanceFocus/flare-zh-fe | 2023-10-01T08:15:26.000Z | [
"region:us"
] | ChanceFocus | null | null | 0 | 3 | 2023-10-01T08:15:15 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |