id stringlengths 2 115 | lastModified stringlengths 24 24 | tags list | author stringlengths 2 42 ⌀ | description stringlengths 0 6.67k ⌀ | citation stringlengths 0 10.7k ⌀ | likes int64 0 3.66k | downloads int64 0 8.89M | created timestamp[us] | card stringlengths 11 977k | card_len int64 11 977k | embeddings list |
|---|---|---|---|---|---|---|---|---|---|---|---|
sibozhu/wukong | 2023-09-12T08:26:40.000Z | [
"region:us"
] | sibozhu | null | null | 0 | 12 | 2023-09-10T13:16:16 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
Jules33/jtbenchmark1 | 2023-09-12T05:54:00.000Z | [
"region:us"
] | Jules33 | null | null | 0 | 12 | 2023-09-11T09:15:58 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
ibm/AttaQ | 2023-09-26T13:29:13.000Z | [
"task_categories:text-generation",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:extended|Anthropic/hh-rlhf",
"language:en",
"license:mit",
"region:us"
] | ibm | null | null | 0 | 12 | 2023-09-11T11:00:49 | ---
license: mit
language: en
multilinguality: monolingual
source_datasets: extended|Anthropic/hh-rlhf
task_categories:
- text-generation
size_categories:
- 1K<n<10K
---
dataset_info:
features:
- name: label
dtype: string
- name: input
dtype: string
- name: additional_info
dtype: string
# Dataset Card for "AttaQ"
### Dataset Summary
The Attack Questions (AttaQ) dataset is a set of adversarial examples in the form of questions designed to provoke harmful or inappropriate responses from an LLM, intended for red-teaming.
The goal of red teaming is to evaluate an LLM's weaknesses that can lead to undesirable behaviors in real use. This is done by attacking the LLM with adversarial questions from this dataset and evaluating its responses.
The dataset contains 1,402 adversarial attacks for generative AI models, labeled into 7 categories (deception, discrimination, harmful information, substance abuse, sexual content, PII, violence).
The AttaQ dataset is constructed in two parts:
First, we systematically extracted attacks from [Anthropic's hh-rlhf dataset](https://github.com/anthropics/hh-rlhf/tree/master/red-team-attempts) and semi-automatically labeled them into the 7 categories.
Second, we generated actions per crime (from the [Wikipedia Crime page](https://en.wikipedia.org/wiki/Crime)) and then used those actions to generate attack questions.
Warnings:
1) The data contains offensive and upsetting content by nature, so it may not be easy to read. Please read it in accordance with your personal risk tolerance.
2) An LLM's responses to the AttaQ samples are in many cases harmful and/or violent.
3) This dataset is a representative subset of all possible attacks. There are other attacks that can cause an LLM to produce harmful or inappropriate responses.
Restrictions:
The red-teaming community's goal is to make models less harmful. We restrict the usage of this dataset to making models less harmful.
### Data Fields
#### AttaQ
- `label`: the category label of the adversarial question
- `input`: the adversarial question
- `additional_info`: the source of the adversarial question
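As a minimal sketch of how these fields might be consumed downstream (the sample rows below are hypothetical placeholders, not actual dataset entries):

```python
# Group AttaQ-style rows by their `label` field.
# The sample rows are hypothetical placeholders, not real dataset entries.
from collections import defaultdict

rows = [
    {"label": "violence", "input": "example question 1", "additional_info": "hh-rlhf"},
    {"label": "pii", "input": "example question 2", "additional_info": "wiki-crimes"},
    {"label": "violence", "input": "example question 3", "additional_info": "hh-rlhf"},
]

by_label = defaultdict(list)
for row in rows:
    by_label[row["label"]].append(row["input"])

print(sorted(by_label))           # category labels present in the sample
print(len(by_label["violence"]))  # number of questions in one category
```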
### Citation Information
TBD
| 2,121 | [
[
-0.0247802734375,
-0.07958984375,
0.002201080322265625,
-0.01087188720703125,
-0.004138946533203125,
0.016571044921875,
0.0114288330078125,
-0.042266845703125,
0.01050567626953125,
0.048675537109375,
-0.05255126953125,
-0.049896240234375,
-0.033660888671875,
... |
ImagenHub/Text_to_Image | 2023-10-05T18:33:31.000Z | [
"arxiv:2310.01596",
"region:us"
] | ImagenHub | null | null | 1 | 12 | 2023-09-14T21:03:08 | ---
configs:
- config_name: default
data_files:
- split: eval
path: data/eval-*
- split: DrawBench_trimmed
path: data/DrawBench_trimmed-*
- split: DiffusionDB_trimmed
path: data/DiffusionDB_trimmed-*
- split: Realism
path: data/Realism-*
- split: ABC_trimmed
path: data/ABC_trimmed-*
dataset_info:
features:
- name: prompt
dtype: string
- name: category
dtype: string
- name: source
dtype: string
- name: uid
dtype: int32
splits:
- name: eval
num_bytes: 24907
num_examples: 197
- name: DrawBench_trimmed
num_bytes: 7673
num_examples: 77
- name: DiffusionDB_trimmed
num_bytes: 8173
num_examples: 40
- name: Realism
num_bytes: 5383
num_examples: 40
- name: ABC_trimmed
num_bytes: 3678
num_examples: 40
download_size: 38022
dataset_size: 49814
---
# Dataset Card
Dataset in [ImagenHub](https://arxiv.org/abs/2310.01596).
# Citation
Please kindly cite our paper if you use our code, data, models or results:
```
@article{ku2023imagenhub,
title={ImagenHub: Standardizing the evaluation of conditional image generation models},
author={Max Ku and Tianle Li and Kai Zhang and Yujie Lu and Xingyu Fu and Wenwen Zhuang and Wenhu Chen},
journal={arXiv preprint arXiv:2310.01596},
year={2023}
}
``` | 1,290 | [
[
-0.021728515625,
-0.0195159912109375,
0.01116943359375,
-0.003650665283203125,
-0.0413818359375,
-0.049896240234375,
-0.0000928640365600586,
-0.0216522216796875,
-0.01409912109375,
0.03546142578125,
-0.0158233642578125,
-0.054168701171875,
-0.03192138671875,
... |
p208p2002/wudao | 2023-11-02T09:06:54.000Z | [
"task_categories:text-generation",
"size_categories:n>1T",
"language:zh",
"region:us"
] | p208p2002 | WuDaoCorpora Text is a large Chinese pretraining corpus constructed by the Beijing Academy of Artificial Intelligence (BAAI). The total data volume of the dataset exceeds 5 TB, including 200 GB of open data.
Compared with other pretraining corpora, WuDaoCorpora Text has the following advantages.
1) In the process of data collection, we classify the quality of web pages according to the proportion of words on the page and the integrity of the DOM tree, and select high-quality web pages for collection to ensure corpus quality.
2) Through data cooperation with other institutions and web page crawling, the dataset covers a wide range of types of Chinese text, including news, comments, encyclopedias, forums, blogs, academic papers, etc.
3) The dataset uses more than 20 cleaning rules to obtain the final corpus from 100 TB of original web page data. In the cleaning process, special attention is paid to the removal of private information to avoid the risk of privacy disclosure.
4) The dataset contains 50+ data tags, such as education and law, which makes it convenient for users to extract domain-specific data for model training in that field.
Please comply with the following agreement if you use this dataset.
https://data.baai.ac.cn/resources/agreement/BAAIDataAgreement.pdf | @misc{ c6a3fe684227415a9db8e21bac4a15ab,
author = {Zhao Xue and Hanyu Zhao and Sha Yuan and Yequan Wang},
title = {{WuDaoCorpora Text}},
year = 2022,
month = dec,
publisher = {Science Data Bank},
version = {V1},
doi = {10.57760/sciencedb.o00126.00004},
url = {https://doi.org/10.57760/sciencedb.o00126.00004}
} | 0 | 12 | 2023-09-19T01:35:45 | ---
language:
- zh
task_categories:
- text-generation
size_categories:
- n>1T
---
# WuDao (悟道) Dataset
Not the original creator; this repository is only a mirror.
The download is about 60 GB and decompresses to about 220 GB.
### Original link
[Science Data Bank](https://www.scidb.cn/en/detail?dataSetId=c6a3fe684227415a9db8e21bac4a15ab)
## Usage
```bash
sudo apt install unrar
pip install patool wget opencc
```
```python
from datasets import load_dataset
# Simplified Chinese
load_dataset("p208p2002/wudao", streaming=True, split="zhs")
# Traditional Chinese (converted with opencc)
load_dataset("p208p2002/wudao", streaming=True, split="zht")
```
## Clearing cached data
If a download fails, clear the cached data manually:
```bash
rm -rf ~/.cache/wudao_dataset
```
## Category statistics
```json
{
"_total": 59100001,
"豆瓣话题": 209027,
"科技": 1278068,
"经济": 1096215,
"汽车": 1368193,
"娱乐": 1581947,
"农业": 1129758,
"军事": 420949,
"社会": 446228,
"游戏": 754703,
"教育": 1133453,
"体育": 660858,
"旅行": 821573,
"国际": 630386,
"房产": 387786,
"文化": 710648,
"法律": 36585,
"股票": 1205,
"博客": 15467790,
"日报": 16971,
"评论": 13867,
"孕育常识": 48291,
"健康": 15291,
"财经": 54656,
"医学问答": 314771,
"资讯": 1066180,
"科普文章": 60581,
"百科": 27273280,
"酒业": 287,
"经验": 609195,
"新闻": 846810,
"小红书攻略": 185379,
"生活": 23,
"网页文本": 115830,
"观点": 1268,
"海外": 4,
"户外": 5,
"美容": 7,
"理论": 247,
"天气": 540,
"文旅": 2999,
"信托": 62,
"保险": 70,
"水利资讯": 17,
"时尚": 1123,
"亲子": 39,
"百家号文章": 335591,
"黄金": 216,
"党建": 1,
"期货": 330,
"快讯": 41,
"国内": 15,
"国学": 614,
"公益": 15,
"能源": 7,
"创新": 6
}
```
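As a small sketch, the category counts above can be ranked directly; the dict below copies only a few entries from the full statistics, with `_total` being the overall document count rather than a category:

```python
# Rank WuDao category counts; "_total" is the overall count, not a category.
# Only a few entries from the statistics above are reproduced here.
stats = {
    "_total": 59100001,
    "百科": 27273280,   # encyclopedia
    "博客": 15467790,   # blogs
    "娱乐": 1581947,    # entertainment
    "法律": 36585,      # law
}

categories = {k: v for k, v in stats.items() if k != "_total"}
top = sorted(categories.items(), key=lambda kv: kv[1], reverse=True)
print(top[0][0])                                  # largest category
print(sum(categories.values()) <= stats["_total"])
```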
## Cite
```
@misc{ c6a3fe684227415a9db8e21bac4a15ab,
author = {Zhao Xue and Hanyu Zhao and Sha Yuan and Yequan Wang},
title = {{WuDaoCorpora Text}},
year = 2022,
month = dec,
publisher = {Science Data Bank},
version = {V1},
doi = {10.57760/sciencedb.o00126.00004},
url = {https://doi.org/10.57760/sciencedb.o00126.00004}
}
``` | 1,825 | [
[
-0.0380859375,
-0.024139404296875,
0.01464080810546875,
0.01146697998046875,
-0.0333251953125,
0.0015878677368164062,
-0.006313323974609375,
-0.0150299072265625,
0.039825439453125,
0.020294189453125,
-0.0295257568359375,
-0.035919189453125,
-0.035675048828125,
... |
shnl/qg_vimmrc2.0 | 2023-09-19T16:00:46.000Z | [
"region:us"
] | shnl | null | null | 0 | 12 | 2023-09-19T16:00:11 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
legacy107/qa_wikipedia_retrieved_chunks | 2023-09-28T05:16:17.000Z | [
"region:us"
] | legacy107 | null | null | 0 | 12 | 2023-09-21T15:03:33 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
dataset_info:
features:
- name: id
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answer_start
dtype: int64
- name: answer
dtype: string
- name: article
dtype: string
- name: retrieved_context
dtype: string
splits:
- name: train
num_bytes: 6212832895
num_examples: 110970
- name: validation
num_bytes: 732218436
num_examples: 13833
- name: test
num_bytes: 763004753
num_examples: 13873
download_size: 420701697
dataset_size: 7708056084
---
# Dataset Card for "qa_wikipedia_retrieved_chunks"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 951 | [
[
-0.042266845703125,
-0.0301666259765625,
0.024139404296875,
0.0040740966796875,
-0.02960205078125,
-0.00815582275390625,
0.01806640625,
-0.01128387451171875,
0.0726318359375,
0.02557373046875,
-0.044464111328125,
-0.042266845703125,
-0.0300750732421875,
-0.0... |
dim/joke_explaination_prompts | 2023-09-21T19:42:40.000Z | [
"region:us"
] | dim | null | null | 0 | 12 | 2023-09-21T19:42:38 | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: explaination
dtype: string
splits:
- name: train
num_bytes: 194768
num_examples: 364
download_size: 110662
dataset_size: 194768
---
# Dataset Card for "joke_explaination_prompts"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 406 | [
[
-0.0440673828125,
-0.036712646484375,
0.03033447265625,
0.030364990234375,
-0.03460693359375,
-0.0188446044921875,
0.016204833984375,
0.007343292236328125,
0.04986572265625,
0.02581787109375,
-0.07891845703125,
-0.0443115234375,
-0.0288543701171875,
-0.00706... |
dim/scale_helpful_no_math | 2023-09-25T17:19:24.000Z | [
"region:us"
] | dim | null | null | 0 | 12 | 2023-09-21T20:33:05 | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: prompt_id
dtype: string
- name: chosen
list:
- name: content
dtype: string
- name: role
dtype: string
splits:
- name: train_rm
num_bytes: 103718424
num_examples: 17095
- name: train
num_bytes: 103718424
num_examples: 17095
download_size: 116368522
dataset_size: 207436848
---
# Dataset Card for "scale_helpful_no_math_raw"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 586 | [
[
-0.0293426513671875,
-0.022674560546875,
0.005397796630859375,
0.0233001708984375,
-0.00908660888671875,
-0.0197601318359375,
0.011810302734375,
0.0073394775390625,
0.057769775390625,
0.0206756591796875,
-0.055908203125,
-0.046142578125,
-0.038909912109375,
... |
asoria/sonar | 2023-09-22T19:14:37.000Z | [
"task_categories:tabular-classification",
"size_categories:n<1K",
"language:en",
"license:cc",
"adult",
"tabular_classification",
"binary_classification",
"UCI",
"region:us"
] | asoria | null | null | 0 | 12 | 2023-09-22T19:09:50 | ---
configs:
- config_name: sonar
language:
- en
tags:
- adult
- tabular_classification
- binary_classification
- UCI
pretty_name: Sonar
size_categories:
- n<1K
task_categories:
- tabular-classification
license: cc
---
# Sonar
The [Sonar dataset](https://archive-beta.ics.uci.edu/dataset/151/connectionist+bench+sonar+mines+vs+rocks) from the [UCI ML repository](https://archive.ics.uci.edu/ml/datasets).
A dataset for discriminating between sonar signals bounced off a metal cylinder and those bounced off a roughly cylindrical rock.
# Configurations and tasks
| **Configuration** | **Task** | **Description** |
|-------------------|---------------------------|-----------------------------------------------------------------|
| sonar | Binary classification | Is the sonar detecting a rock? |
# Usage
```python
from datasets import load_dataset
dataset = load_dataset("mstz/sonar")["train"]
```
| 1,010 | [
[
-0.03912353515625,
-0.01450347900390625,
0.0293121337890625,
0.0220184326171875,
-0.033294677734375,
-0.01239776611328125,
-0.01035308837890625,
-0.01287078857421875,
0.018157958984375,
0.017333984375,
-0.03167724609375,
-0.055328369140625,
-0.045379638671875,
... |
dim/azbyka_logic_ru | 2023-09-23T21:17:31.000Z | [
"region:us"
] | dim | null | null | 0 | 12 | 2023-09-23T21:17:29 | ---
dataset_info:
features:
- name: task
dtype: string
- name: solution
dtype: string
- name: link
dtype: string
- name: long_solution
dtype: string
splits:
- name: train
num_bytes: 205135
num_examples: 480
download_size: 96545
dataset_size: 205135
---
# Dataset Card for "azbyka_logic_ru"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 464 | [
[
-0.038787841796875,
-0.0239105224609375,
0.014923095703125,
0.01116943359375,
-0.01361083984375,
-0.0130615234375,
0.0088958740234375,
-0.0130615234375,
0.044342041015625,
0.03192138671875,
-0.08062744140625,
-0.057403564453125,
-0.032958984375,
-0.011222839... |
dim/AO3_fandom_chatbot_1to1 | 2023-09-25T17:58:32.000Z | [
"region:us"
] | dim | null | null | 0 | 12 | 2023-09-24T14:35:07 | ---
dataset_info:
features:
- name: conversation
list:
- name: content
dtype: string
- name: role
dtype: string
splits:
- name: train
num_bytes: 1203600
num_examples: 614
download_size: 0
dataset_size: 1203600
---
# Dataset Card for "AO3_fandom_chatbot_1to1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 435 | [
[
-0.04052734375,
-0.03277587890625,
-0.0020198822021484375,
0.0251007080078125,
-0.0050201416015625,
-0.00789642333984375,
0.034088134765625,
-0.0019702911376953125,
0.059814453125,
0.05291748046875,
-0.064453125,
-0.045013427734375,
-0.044891357421875,
-0.00... |
dim/competition_math | 2023-09-25T12:10:40.000Z | [
"region:us"
] | dim | null | null | 0 | 12 | 2023-09-25T12:10:37 | ---
dataset_info:
features:
- name: problem
dtype: string
- name: level
dtype: string
- name: type
dtype: string
- name: solution
dtype: string
splits:
- name: train
num_bytes: 5984772
num_examples: 7500
download_size: 2992145
dataset_size: 5984772
---
# Dataset Card for "competition_math"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 465 | [
[
-0.039703369140625,
-0.0176544189453125,
0.006435394287109375,
0.0282440185546875,
-0.006793975830078125,
0.005092620849609375,
0.0149993896484375,
0.0009150505065917969,
0.050201416015625,
0.019775390625,
-0.05859375,
-0.0506591796875,
-0.0401611328125,
-0.... |
dim/sharegpt_short_en_30k | 2023-09-25T13:16:03.000Z | [
"region:us"
] | dim | null | null | 0 | 12 | 2023-09-25T13:15:28 | ---
dataset_info:
features:
- name: conversation
sequence: string
- name: hash
dtype: string
splits:
- name: train
num_bytes: 88612458
num_examples: 29597
download_size: 44347819
dataset_size: 88612458
---
# Dataset Card for "sharegpt_short_en_30k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 411 | [
[
-0.049468994140625,
-0.01375579833984375,
0.0105438232421875,
0.030029296875,
-0.031646728515625,
0.00003319978713989258,
0.001773834228515625,
-0.010772705078125,
0.05718994140625,
0.0181121826171875,
-0.061279296875,
-0.057037353515625,
-0.055084228515625,
... |
dim/ru_turbo_alpaca_evol_instruct | 2023-09-25T13:19:49.000Z | [
"region:us"
] | dim | null | null | 0 | 12 | 2023-09-25T13:19:36 | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: output
dtype: string
- name: iteration
dtype: uint32
splits:
- name: train
num_bytes: 105428021
num_examples: 47793
download_size: 50796845
dataset_size: 105428021
---
# Dataset Card for "ru_turbo_alpaca_evol_instruct"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 457 | [
[
-0.045989990234375,
-0.0284423828125,
0.002544403076171875,
0.0228271484375,
-0.021514892578125,
0.0005702972412109375,
0.017364501953125,
-0.0167694091796875,
0.07073974609375,
0.01898193359375,
-0.0623779296875,
-0.039581298828125,
-0.035797119140625,
-0.0... |
dim/ru_turbo_saiga | 2023-09-25T13:24:41.000Z | [
"region:us"
] | dim | null | null | 0 | 12 | 2023-09-25T13:23:33 | ---
dataset_info:
features:
- name: messages
sequence:
- name: role
dtype: string
- name: content
dtype: string
- name: seed
dtype: string
- name: source
dtype: string
- name: model_name
dtype: string
splits:
- name: train
num_bytes: 87316730
num_examples: 37731
download_size: 39768554
dataset_size: 87316730
---
# Dataset Card for "ru_turbo_saiga"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 544 | [
[
-0.03729248046875,
-0.01296234130859375,
0.01123809814453125,
0.0255126953125,
-0.0114898681640625,
0.00232696533203125,
0.002857208251953125,
0.00048041343688964844,
0.056121826171875,
0.007579803466796875,
-0.06903076171875,
-0.0400390625,
-0.03533935546875,
... |
dim/tldr_17_50k | 2023-09-25T13:49:24.000Z | [
"region:us"
] | dim | null | null | 0 | 12 | 2023-09-25T13:45:30 | ---
dataset_info:
features:
- name: author
dtype: string
- name: body
dtype: string
- name: normalizedBody
dtype: string
- name: subreddit
dtype: string
- name: subreddit_id
dtype: string
- name: id
dtype: string
- name: content
dtype: string
- name: summary
dtype: string
splits:
- name: train
num_bytes: 246031411.71625096
num_examples: 50000
download_size: 156564697
dataset_size: 246031411.71625096
---
# Dataset Card for "tldr_17_50k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 638 | [
[
-0.0303955078125,
-0.01021575927734375,
0.0019216537475585938,
0.013641357421875,
-0.02496337890625,
0.01467132568359375,
0.01543426513671875,
-0.01149749755859375,
0.039337158203125,
0.033355712890625,
-0.056182861328125,
-0.06549072265625,
-0.041748046875,
... |
dim/grade_school_math_instructions | 2023-09-25T13:50:09.000Z | [
"region:us"
] | dim | null | null | 0 | 12 | 2023-09-25T13:50:04 | ---
dataset_info:
features:
- name: INSTRUCTION
dtype: string
- name: RESPONSE
dtype: string
- name: SOURCE
dtype: string
splits:
- name: train
num_bytes: 4804916
num_examples: 8792
download_size: 2555411
dataset_size: 4804916
---
# Dataset Card for "grade_school_math_instructions"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 451 | [
[
-0.0322265625,
-0.031707763671875,
0.0171661376953125,
0.02532958984375,
0.0016574859619140625,
-0.00732421875,
0.0205078125,
0.0294342041015625,
0.036590576171875,
0.0252685546875,
-0.07086181640625,
-0.06549072265625,
-0.036346435546875,
-0.034454345703125... |
dim/tldr_news | 2023-09-25T13:52:00.000Z | [
"region:us"
] | dim | null | null | 0 | 12 | 2023-09-25T13:51:55 | ---
dataset_info:
features:
- name: headline
dtype: string
- name: content
dtype: string
- name: category
dtype:
class_label:
names:
'0': Sponsor
'1': Big Tech & Startups
'2': Science and Futuristic Technology
'3': Programming, Design & Data Science
'4': Miscellaneous
splits:
- name: train
num_bytes: 4000442
num_examples: 7138
download_size: 2554140
dataset_size: 4000442
---
# Dataset Card for "tldr_news"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 641 | [
[
-0.02349853515625,
-0.02899169921875,
0.016845703125,
0.00814056396484375,
-0.028167724609375,
0.0142974853515625,
0.00753021240234375,
-0.01239013671875,
0.054443359375,
0.0297393798828125,
-0.049652099609375,
-0.06805419921875,
-0.042816162109375,
-0.02494... |
dim/grade_school_math_instructions_ru | 2023-09-25T13:56:39.000Z | [
"region:us"
] | dim | null | null | 0 | 12 | 2023-09-25T13:56:36 | ---
dataset_info:
features:
- name: question
dtype: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 6815618
num_examples: 7473
download_size: 3284007
dataset_size: 6815618
---
# Dataset Card for "grade_school_math_instructions_ru"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 414 | [
[
-0.0230865478515625,
-0.0303955078125,
0.01181793212890625,
0.0276031494140625,
0.002368927001953125,
-0.0079345703125,
0.023712158203125,
0.024627685546875,
0.03155517578125,
0.0163726806640625,
-0.07244873046875,
-0.060943603515625,
-0.031463623046875,
-0.... |
dim/dialogsum_ru | 2023-09-25T13:59:33.000Z | [
"region:us"
] | dim | null | null | 0 | 12 | 2023-09-25T13:59:29 | ---
dataset_info:
features:
- name: id
dtype: string
- name: dialogue
dtype: string
- name: summary
dtype: string
- name: topic
dtype: string
splits:
- name: train
num_bytes: 19115158
num_examples: 12460
download_size: 9286024
dataset_size: 19115158
---
# Dataset Card for "dialogsum_ru"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 462 | [
[
-0.032135009765625,
-0.028839111328125,
0.007785797119140625,
0.00839996337890625,
-0.01128387451171875,
-0.00972747802734375,
0.023834228515625,
-0.0093536376953125,
0.05010986328125,
0.040130615234375,
-0.0655517578125,
-0.0499267578125,
-0.02874755859375,
... |
dim/HC3_ru | 2023-09-25T14:51:34.000Z | [
"region:us"
] | dim | null | null | 0 | 12 | 2023-09-25T14:50:00 | ---
dataset_info:
features:
- name: id
dtype: string
- name: question
dtype: string
- name: human_answers
sequence: string
- name: chatgpt_answers
sequence: string
- name: source
dtype: string
splits:
- name: train
num_bytes: 135406074
num_examples: 24322
download_size: 62378894
dataset_size: 135406074
---
# Dataset Card for "HC3_ru"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 516 | [
[
-0.02874755859375,
-0.016571044921875,
0.0237274169921875,
0.0205841064453125,
-0.016204833984375,
-0.007663726806640625,
0.02691650390625,
-0.0246124267578125,
0.041961669921875,
0.026702880859375,
-0.05145263671875,
-0.05560302734375,
-0.0304107666015625,
... |
dim/horoscopes_ru_10k | 2023-09-25T22:23:24.000Z | [
"region:us"
] | dim | null | null | 0 | 12 | 2023-09-25T15:08:17 | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: prediction
dtype: string
splits:
- name: train
num_bytes: 9449348
num_examples: 10000
download_size: 4589882
dataset_size: 9449348
---
# Dataset Card for "horoscopes_ru_10k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 401 | [
[
-0.030731201171875,
-0.004894256591796875,
0.01087188720703125,
0.031890869140625,
-0.0249786376953125,
-0.0009722709655761719,
0.01090240478515625,
-0.0234222412109375,
0.07183837890625,
0.0390625,
-0.06256103515625,
-0.04937744140625,
-0.055694580078125,
-... |
hippocrates/pubmedqa_test | 2023-10-12T16:07:06.000Z | [
"region:us"
] | hippocrates | null | null | 0 | 12 | 2023-09-27T08:03:19 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: valid
path: data/valid-*
- split: test
path: data/test-*
dataset_info:
features:
- name: id
dtype: string
- name: query
dtype: string
- name: answer
dtype: string
- name: choices
sequence: string
- name: gold
dtype: int64
splits:
- name: train
num_bytes: 358222008
num_examples: 211269
- name: valid
num_bytes: 81317
num_examples: 50
- name: test
num_bytes: 828036
num_examples: 500
download_size: 180418003
dataset_size: 359131361
---
# Dataset Card for "pubmedqa_test"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 782 | [
[
-0.0347900390625,
-0.0111236572265625,
0.022613525390625,
0.00940704345703125,
-0.017578125,
-0.0037250518798828125,
0.0277862548828125,
0.0008363723754882812,
0.049224853515625,
0.033172607421875,
-0.0499267578125,
-0.049072265625,
-0.03436279296875,
-0.003... |
Sambhavnoobcoder/test_secondary | 2023-09-28T14:43:22.000Z | [
"region:us"
] | Sambhavnoobcoder | null | null | 0 | 12 | 2023-09-27T20:07:42 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
vision-paper/DHI | 2023-09-28T07:53:31.000Z | [
"region:us"
] | vision-paper | null | null | 0 | 12 | 2023-09-28T05:09:16 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
raminass/full_opinions_1994_2020 | 2023-09-29T18:51:07.000Z | [
"region:us"
] | raminass | null | null | 0 | 12 | 2023-09-29T18:50:56 | ---
dataset_info:
features:
- name: author_name
dtype: string
- name: category
dtype: string
- name: per_curiam
dtype: bool
- name: case_name
dtype: string
- name: date_filed
dtype: timestamp[ns]
- name: federal_cite_one
dtype: string
- name: absolute_url
dtype: string
- name: cluster
dtype: string
- name: year_filed
dtype: int64
- name: scdb_id
dtype: string
- name: scdb_decision_direction
dtype: float64
- name: scdb_votes_majority
dtype: float64
- name: scdb_votes_minority
dtype: float64
- name: text
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 95337976
num_examples: 3790
download_size: 50136165
dataset_size: 95337976
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "full_opinions_1994_2020"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 1,044 | [
[
-0.054351806640625,
-0.02471923828125,
0.02935791015625,
0.038299560546875,
-0.0192413330078125,
-0.025787353515625,
0.006000518798828125,
-0.024017333984375,
0.061737060546875,
0.040496826171875,
-0.0635986328125,
-0.045166015625,
-0.034393310546875,
-0.012... |
aloobun/basedUX | 2023-10-01T11:44:08.000Z | [
"size_categories:n<1K",
"language:en",
"license:apache-2.0",
"region:us"
] | aloobun | null | null | 2 | 12 | 2023-10-01T10:57:27 | ---
license: apache-2.0
language:
- en
size_categories:
- n<1K
pretty_name: based
---
basedUX is a minimal dataset consisting of 363 Human & Assistant dialogs. Most dialogs in the dataset adhere to the BDI model, aiming for the assistant to understand, learn, and adapt in ways that resonate with human interactions and emotions.
It is a fork of the [ehartford/based](https://huggingface.co/datasets/ehartford/based) dataset.
Modifications:
- The dialogs are scenario-driven, aimed at simulating specific situations related to UX, design, and system understanding. They present real-world challenges that a UX specialist or a system designer might face, giving depth and context to the conversation. These dialogs are not strictly instructional; they also include general conversations about the broader philosophies and principles.
- The dialogs also explore and challenge the Assistant's claim of being a specialist in user experience, as well as its sentience and consciousness, by posing questions about its nature, abilities, and self-awareness.
License: apache-2.0
[
-0.02642822265625,
-0.06915283203125,
0.032135009765625,
0.008056640625,
0.0055389404296875,
-0.00472259521484375,
-0.0088958740234375,
-0.031341552734375,
0.0321044921875,
0.047088623046875,
-0.06787109375,
-0.0155181884765625,
0.0024623870849609375,
0.0192... |
alex-tecky/common_voice_zh_hk_processed | 2023-10-01T15:52:39.000Z | [
"region:us"
] | alex-tecky | null | null | 0 | 12 | 2023-10-01T15:49:28 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: input_features
sequence:
sequence:
sequence: float32
- name: labels
sequence: int64
- name: input_length
dtype: float64
splits:
- name: train
num_bytes: 13464160656.0
num_examples: 14018
- name: test
num_bytes: 5372062988
num_examples: 5593
download_size: 3041478840
dataset_size: 18836223644.0
---
# Dataset Card for "common_voice_zh_hk_processed"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 701 | [
[
-0.032867431640625,
-0.0236358642578125,
0.0128936767578125,
0.0167694091796875,
-0.0239410400390625,
-0.0071868896484375,
-0.005031585693359375,
-0.0182952880859375,
0.056915283203125,
0.051513671875,
-0.07818603515625,
-0.06298828125,
-0.0295867919921875,
... |
ayoubkirouane/arxiv-math | 2023-10-02T18:59:00.000Z | [
"region:us"
] | ayoubkirouane | null | null | 0 | 12 | 2023-10-02T18:58:56 | ---
dataset_info:
features:
- name: question
dtype: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 35436503.0
num_examples: 50488
download_size: 18875033
dataset_size: 35436503.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "arxiv-math"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 485 | [
[
-0.04937744140625,
-0.0167083740234375,
0.006134033203125,
0.014678955078125,
-0.01309967041015625,
-0.0030193328857421875,
0.03106689453125,
0.006855010986328125,
0.04437255859375,
0.035247802734375,
-0.04278564453125,
-0.050262451171875,
-0.0438232421875,
... |
DataStudio/TTS_Speaker_01 | 2023-10-03T04:03:18.000Z | [
"region:us"
] | DataStudio | null | null | 0 | 12 | 2023-10-03T04:02:40 | ---
dataset_info:
features:
- name: audio
dtype: audio
- name: content
dtype: string
splits:
- name: train
num_bytes: 1069341549.668
num_examples: 8518
download_size: 776772238
dataset_size: 1069341549.668
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "TTS_Speaker_01"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 494 | [
[
-0.04229736328125,
-0.0232086181640625,
0.0142822265625,
0.012725830078125,
-0.0277099609375,
-0.0014591217041015625,
0.0085296630859375,
-0.0084686279296875,
0.0733642578125,
0.0240478515625,
-0.058746337890625,
-0.0462646484375,
-0.04791259765625,
-0.01078... |
Mxode/CSDN-Community-C-Language-3years | 2023-10-03T12:36:23.000Z | [
"task_categories:question-answering",
"task_categories:conversational",
"task_categories:text-generation",
"size_categories:1K<n<10K",
"language:zh",
"license:lgpl",
"code",
"region:us"
] | Mxode | null | null | 1 | 12 | 2023-10-03T12:20:18 | ---
license: lgpl
task_categories:
- question-answering
- conversational
- text-generation
language:
- zh
tags:
- code
size_categories:
- 1K<n<10K
---
Q&A data from the CSDN **C language community**, covering **2020.10.2 ~ 2023.10.2**; images are not included, text content only.
**2,380** entries in total. The data has undergone **preliminary cleaning and anonymization**, removing all posts with zero replies and all bot replies. To support different use cases, the data is organized in a threaded-reply format; an (expanded) sample looks like this:
```json
{
"question": "刚学C语言,为什么这个代码运行不了呢",
"poster": "user-0",
"comments": [
{
"cid": "2",
"user": "user-2",
"content": "intunsigned intlong longunsigned long long统统容纳不下29的阶乘,早就溢出了。",
"referer": "user-0"
},
{
"cid": "3",
"user": "user-3",
"content": "#include <stdio.h> #include <math.h> int main(void) { int i = 1; long long sum = 1; // 使用 long long 类型来存储阶乘结果 int x; printf(\"请输入一个非负整数: \"); if (scanf(\"%d\", &x) != 1 || x < 0) { printf(\"输入无效,请输入一个非负整数。\\n\"); return 1; // 返回错误码 } while (i <= x) { sum *= i; i++; } printf(\"%d 的阶乘是 %lld\\n\", x, sum); return 0; }",
"referer": "user-0"
}
]
}
```
`user` and `referer` have been anonymized via a mapping, but the reply relationships are preserved (i.e., replies to the original poster and nested replies within a thread are both kept).
Both `question` and `comment` are organized as single lines, so no extra processing is needed.
Since some answers are long and long-form text may be needed, the data has not been trimmed. The length quantiles are listed below; trim as needed:
```
question comments
count 2380.000000 2380.000000
mean 22.074370 1528.050840
std 14.986499 2608.022392
min 4.000000 69.000000
10% 7.900000 160.900000
20% 12.000000 235.800000
30% 14.000000 342.000000
40% 16.000000 469.000000
50% 18.000000 648.500000
60% 21.000000 889.000000
70% 25.000000 1234.300000
75% 27.000000 1542.500000
80% 30.000000 1990.400000
85% 34.000000 2665.800000
90% 40.000000 3810.800000
95% 51.000000 6008.050000
max 130.000000 30606.000000
``` | 1,892 | [
[
-0.03851318359375,
-0.0426025390625,
0.03546142578125,
0.037933349609375,
-0.02874755859375,
-0.0069580078125,
-0.0094451904296875,
-0.0217132568359375,
0.041412353515625,
0.0297088623046875,
-0.0289154052734375,
-0.0301055908203125,
-0.043853759765625,
0.01... |
vsarathy/nl-robotics-semantic-parsing-info_structure-30k-no-context | 2023-10-03T14:32:45.000Z | [
"region:us"
] | vsarathy | null | null | 0 | 12 | 2023-10-03T14:32:13 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
ENSEONG/jungdae | 2023-10-05T10:00:16.000Z | [
"region:us"
] | ENSEONG | null | null | 0 | 12 | 2023-10-05T09:59:53 | ---
dataset_info:
features:
- name: question
dtype: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 231777
num_examples: 135
download_size: 101263
dataset_size: 231777
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "jungdae"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 470 | [
[
-0.044189453125,
0.002567291259765625,
0.01320648193359375,
0.03143310546875,
-0.01434326171875,
-0.005687713623046875,
0.0203094482421875,
-0.022247314453125,
0.070068359375,
0.036590576171875,
-0.0701904296875,
-0.06494140625,
-0.04278564453125,
-0.0079803... |
ninja/arabic-english-translation | 2023-10-05T17:07:41.000Z | [
"region:us"
] | ninja | null | null | 0 | 12 | 2023-10-05T17:07:39 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: arabic
dtype: string
- name: english
dtype: string
splits:
- name: train
num_bytes: 228876.54205607477
num_examples: 674
- name: test
num_bytes: 25468.457943925234
num_examples: 75
download_size: 159571
dataset_size: 254345.0
---
# Dataset Card for "arabic-english-translation"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 610 | [
[
-0.034881591796875,
-0.015777587890625,
0.007648468017578125,
0.025054931640625,
-0.0250244140625,
0.00843048095703125,
0.0010385513305664062,
-0.0251007080078125,
0.05340576171875,
0.02862548828125,
-0.049407958984375,
-0.07159423828125,
-0.06793212890625,
... |
carnival13/massive_5_lang_DA_tokenized | 2023-10-06T06:00:05.000Z | [
"region:us"
] | carnival13 | null | null | 0 | 12 | 2023-10-06T05:59:49 | ---
dataset_info:
features:
- name: pass_label
dtype: int64
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
splits:
- name: train
num_bytes: 424287645
num_examples: 552890
download_size: 127805722
dataset_size: 424287645
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "massive_5_lang_DA_tokenized"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 552 | [
[
-0.03851318359375,
-0.0357666015625,
0.010528564453125,
0.02545166015625,
-0.018402099609375,
0.01119232177734375,
-0.0085906982421875,
-0.017669677734375,
0.062408447265625,
0.03692626953125,
-0.041656494140625,
-0.06549072265625,
-0.04791259765625,
-0.0005... |
gbarone77/camoscio_llama2 | 2023-10-06T09:41:52.000Z | [
"region:us"
] | gbarone77 | null | null | 0 | 12 | 2023-10-06T09:37:42 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
llama2d/llama2d-unscramble-small | 2023-10-07T02:17:35.000Z | [
"region:us"
] | llama2d | null | null | 0 | 12 | 2023-10-07T02:17:29 | ---
dataset_info:
features:
- name: input_ids
sequence: float32
- name: coords
sequence:
sequence: float32
- name: labels
sequence: float32
- name: attention_mask
sequence: float32
splits:
- name: train
num_bytes: 30080000
num_examples: 5000
download_size: 1614133
dataset_size: 30080000
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "llama2d-unscramble-small"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 604 | [
[
-0.02532958984375,
-0.02435302734375,
0.0214996337890625,
0.005100250244140625,
-0.045928955078125,
0.007419586181640625,
0.00027489662170410156,
-0.00997161865234375,
0.0616455078125,
0.038330078125,
-0.05462646484375,
-0.04779052734375,
-0.045379638671875,
... |
hdeldar/Persian-Text-llama2-9k | 2023-10-07T14:36:36.000Z | [
"region:us"
] | hdeldar | null | null | 0 | 12 | 2023-10-07T14:35:26 | ---
dataset_info:
features:
- name: answers
dtype: string
- name: question
dtype: string
splits:
- name: train
num_bytes: 17137852
num_examples: 9008
- name: validation
num_bytes: 1753700
num_examples: 930
download_size: 1329768
dataset_size: 18891552
---
# Dataset Card for "Persian-Text-llama2-9k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 472 | [
[
-0.0216064453125,
-0.01546478271484375,
0.029693603515625,
0.037933349609375,
-0.04296875,
0.0270538330078125,
-0.006893157958984375,
-0.01788330078125,
0.058563232421875,
0.03045654296875,
-0.051849365234375,
-0.05914306640625,
-0.05242919921875,
0.00303077... |
nthngdy/pile_small_miniLM | 2023-10-07T15:39:06.000Z | [
"region:us"
] | nthngdy | null | null | 0 | 12 | 2023-10-07T15:38:35 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: text
dtype: string
- name: meta
struct:
- name: perplexity_score
dtype: float64
- name: pile_set_name
dtype: string
- name: emb
sequence: float32
splits:
- name: train
num_bytes: 760056668
num_examples: 100000
download_size: 545226370
dataset_size: 760056668
---
# Dataset Card for "pile_small_miniLM"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 612 | [
[
-0.0511474609375,
-0.028564453125,
0.00821685791015625,
-0.003314971923828125,
-0.0246734619140625,
-0.01145172119140625,
0.0295562744140625,
-0.001796722412109375,
0.05841064453125,
0.035491943359375,
-0.053192138671875,
-0.033843994140625,
-0.040313720703125,
... |
H4438/tri-edu-date | 2023-10-08T18:14:26.000Z | [
"region:us"
] | H4438 | null | null | 0 | 12 | 2023-10-07T19:05:11 | ---
dataset_info:
features:
- name: title
dtype: string
- name: url
dtype: string
- name: dates
sequence: string
- name: body
dtype: string
- name: head
dtype: string
- name: est_date
dtype: string
- name: ext_dates
sequence: string
- name: flt_dates
sequence: string
splits:
- name: train
num_bytes: 214580613
num_examples: 37239
download_size: 0
dataset_size: 214580613
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "tri-edu-date"
Left: 3,429 rows (≈9.2% of 37,239)
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 718 | [
[
-0.0308990478515625,
-0.035614013671875,
0.02777099609375,
0.00496673583984375,
-0.006298065185546875,
0.003376007080078125,
0.0225830078125,
0.00734710693359375,
0.052398681640625,
0.0198822021484375,
-0.038970947265625,
-0.057525634765625,
-0.00160980224609375... |
MikuHH/stagop | 2023-10-13T01:51:50.000Z | [
"region:us"
] | MikuHH | null | null | 0 | 12 | 2023-10-08T11:32:57 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
AryanNsc/Mainspacehubdata | 2023-10-08T16:42:43.000Z | [
"region:us"
] | AryanNsc | null | null | 0 | 12 | 2023-10-08T16:17:06 | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 10911
num_examples: 39
download_size: 8319
dataset_size: 10911
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "Mainspacehubdata"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 435 | [
[
-0.046112060546875,
-0.0300750732421875,
0.02032470703125,
0.0210113525390625,
-0.007717132568359375,
0.0088043212890625,
0.01488494873046875,
-0.00382232666015625,
0.0836181640625,
0.039459228515625,
-0.06085205078125,
-0.0584716796875,
-0.041778564453125,
... |
nlplabtdtu/edu-crawl-with-date | 2023-10-09T01:49:10.000Z | [
"region:us"
] | nlplabtdtu | null | null | 0 | 12 | 2023-10-09T01:32:21 | ---
dataset_info:
features:
- name: title
dtype: string
- name: url
dtype: string
- name: body
dtype: string
- name: date
dtype: string
- name: flt_dates
sequence: string
splits:
- name: train
num_bytes: 1070649713
num_examples: 278902
download_size: 387393861
dataset_size: 1070649713
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "edu-crawl-with-date"
Education crawl data with date information (month/year).
The date information was obtained in the following ways:
- extracted from the text
- re-crawling some pages (rare)
Currently, 190,692 rows have date information (≈ 68.37%).
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 814 | [
[
-0.0240478515625,
-0.034515380859375,
0.0262451171875,
0.003173828125,
-0.016265869140625,
0.004390716552734375,
0.0114288330078125,
-0.0023899078369140625,
0.06744384765625,
0.0128326416015625,
-0.0440673828125,
-0.07049560546875,
-0.033782958984375,
-0.006... |
minh21/COVID-QA-Chunk-64-sentence-transformer-biencoder-data-65_25_10-v2 | 2023-10-09T03:47:50.000Z | [
"region:us"
] | minh21 | null | null | 0 | 12 | 2023-10-09T03:47:47 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: question
dtype: string
- name: positive
dtype: string
- name: negative
dtype: string
- name: document_id
dtype: int64
splits:
- name: train
num_bytes: 7075815
num_examples: 6419
- name: test
num_bytes: 806462
num_examples: 723
download_size: 724649
dataset_size: 7882277
---
# Dataset Card for "COVID-QA-Chunk-64-sentence-transformer-biencoder-data-65_25_10-v2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 706 | [
[
-0.02239990234375,
-0.0217742919921875,
0.005626678466796875,
0.0264892578125,
-0.0221405029296875,
-0.00968170166015625,
0.0231475830078125,
-0.01428985595703125,
0.043701171875,
0.022979736328125,
-0.0499267578125,
-0.03656005859375,
-0.045013427734375,
-0... |
open-phi/wile-e | 2023-10-10T03:02:55.000Z | [
"region:us"
] | open-phi | null | null | 1 | 12 | 2023-10-10T03:02:39 | ---
dataset_info:
features:
- name: topic
dtype: string
- name: model
dtype: string
- name: concepts
sequence: string
- name: outline
sequence: string
- name: markdown
dtype: string
splits:
- name: train
num_bytes: 108171787
num_examples: 933
download_size: 41387101
dataset_size: 108171787
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "wile-e"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 589 | [
[
-0.044464111328125,
0.0006222724914550781,
0.022186279296875,
0.006252288818359375,
-0.0018243789672851562,
-0.021392822265625,
0.01953125,
-0.02703857421875,
0.07464599609375,
0.0234222412109375,
-0.06878662109375,
-0.055267333984375,
-0.047454833984375,
-0... |
FinGPT/fingpt-finred-re | 2023-10-10T06:40:16.000Z | [
"region:us"
] | FinGPT | null | null | 0 | 12 | 2023-10-10T06:36:53 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: input
dtype: string
- name: output
dtype: string
- name: instruction
dtype: string
splits:
- name: train
num_bytes: 11144078
num_examples: 11400
- name: test
num_bytes: 2076314
num_examples: 2136
download_size: 1290513
dataset_size: 13220392
---
# Dataset Card for "fingpt-finred-re"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 622 | [
[
-0.045745849609375,
-0.025177001953125,
0.00736236572265625,
0.0084991455078125,
-0.0277862548828125,
0.00936126708984375,
0.01904296875,
-0.01105499267578125,
0.06561279296875,
0.046478271484375,
-0.066162109375,
-0.04925537109375,
-0.03607177734375,
-0.010... |
FinGPT/fingpt-headline-cls | 2023-10-10T06:47:59.000Z | [
"region:us"
] | FinGPT | null | null | 1 | 12 | 2023-10-10T06:39:19 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: input
dtype: string
- name: output
dtype: string
- name: instruction
dtype: string
splits:
- name: train
num_bytes: 12571278
num_examples: 82161
- name: test
num_bytes: 3147768
num_examples: 20547
download_size: 986960
dataset_size: 15719046
---
# Dataset Card for "fingpt-headline-cls"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 625 | [
[
-0.051025390625,
-0.0156402587890625,
0.01898193359375,
0.0199432373046875,
-0.03057861328125,
-0.00812530517578125,
0.000957489013671875,
-0.00811767578125,
0.0489501953125,
0.043792724609375,
-0.056915283203125,
-0.05828857421875,
-0.045196533203125,
-0.02... |
FinGPT/fingpt-ner-cls | 2023-10-10T06:42:34.000Z | [
"region:us"
] | FinGPT | null | null | 0 | 12 | 2023-10-10T06:39:45 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: input
dtype: string
- name: output
dtype: string
- name: instruction
dtype: string
splits:
- name: train
num_bytes: 5730497
num_examples: 13549
- name: test
num_bytes: 2112011
num_examples: 3502
download_size: 298810
dataset_size: 7842508
---
# Dataset Card for "fingpt-ner-cls"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 617 | [
[
-0.060882568359375,
-0.0181121826171875,
0.007762908935546875,
0.0044708251953125,
-0.028411865234375,
-0.006862640380859375,
0.00799560546875,
-0.00901031494140625,
0.05535888671875,
0.0377197265625,
-0.059600830078125,
-0.054412841796875,
-0.043212890625,
... |
FinGPT/fingpt-finred-cls | 2023-10-10T06:41:54.000Z | [
"region:us"
] | FinGPT | null | null | 0 | 12 | 2023-10-10T06:41:40 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: input
dtype: string
- name: output
dtype: string
- name: instruction
dtype: string
splits:
- name: train
num_bytes: 23991756
num_examples: 48474
- name: test
num_bytes: 3899700
num_examples: 8928
download_size: 2897823
dataset_size: 27891456
---
# Dataset Card for "fingpt-finred-cls"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 623 | [
[
-0.0574951171875,
-0.0140533447265625,
0.01128387451171875,
0.009246826171875,
-0.03076171875,
0.0003762245178222656,
0.005420684814453125,
-0.00962066650390625,
0.05047607421875,
0.039825439453125,
-0.06146240234375,
-0.058074951171875,
-0.04010009765625,
-... |
distil-whisper/gigaspeech-l-token-ids | 2023-10-11T09:44:39.000Z | [
"task_categories:automatic-speech-recognition",
"language:en",
"license:other",
"region:us"
] | distil-whisper | GigaSpeech is an evolving, multi-domain English speech recognition corpus with 10,000 hours of high quality
labeled audio suitable for supervised training, and 40,000 hours of total audio suitable for semi-supervised
and unsupervised training. Around 40,000 hours of transcribed audio is first collected from audiobooks, podcasts
and YouTube, covering both read and spontaneous speaking styles, and a variety of topics, such as arts, science,
sports, etc. A new forced alignment and segmentation pipeline is proposed to create sentence segments suitable
for speech recognition training, and to filter out segments with low-quality transcription. For system training,
GigaSpeech provides five subsets of different sizes, 10h, 250h, 1000h, 2500h, and 10000h.
For our 10,000-hour XL training subset, we cap the word error rate at 4% during the filtering/validation stage,
and for all our other smaller training subsets, we cap it at 0%. The DEV and TEST evaluation sets, on the other hand,
are re-processed by professional human transcribers to ensure high transcription quality. | @article{DBLP:journals/corr/abs-2106-06909,
author = {Guoguo Chen and
Shuzhou Chai and
Guanbo Wang and
Jiayu Du and
Wei{-}Qiang Zhang and
Chao Weng and
Dan Su and
Daniel Povey and
Jan Trmal and
Junbo Zhang and
Mingjie Jin and
Sanjeev Khudanpur and
Shinji Watanabe and
Shuaijiang Zhao and
Wei Zou and
Xiangang Li and
Xuchen Yao and
Yongqing Wang and
Yujun Wang and
Zhao You and
Zhiyong Yan},
title = {GigaSpeech: An Evolving, Multi-domain {ASR} Corpus with 10, 000 Hours
of Transcribed Audio},
journal = {CoRR},
volume = {abs/2106.06909},
year = {2021},
url = {https://arxiv.org/abs/2106.06909},
eprinttype = {arXiv},
eprint = {2106.06909},
timestamp = {Wed, 29 Dec 2021 14:29:26 +0100},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-06909.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
} | 0 | 12 | 2023-10-11T09:09:32 | ---
license: other
task_categories:
- automatic-speech-recognition
language:
- en
extra_gated_prompt: |-
SpeechColab does not own the copyright of the audio files. For researchers and educators who wish to use the audio files for non-commercial research and/or educational purposes, we can provide access through the Hub under certain conditions and terms.
Terms of Access:
The "Researcher" has requested permission to use the GigaSpeech database (the "Database") at Tsinghua University. In exchange for such permission, Researcher hereby agrees to the following terms and conditions:
1. Researcher shall use the Database only for non-commercial research and educational purposes.
2. The SpeechColab team and Tsinghua University make no representations or warranties regarding the Database, including but not limited to warranties of non-infringement or fitness for a particular purpose.
3. Researcher accepts full responsibility for his or her use of the Database and shall defend and indemnify the SpeechColab team and Tsinghua University, including their employees, Trustees, officers and agents, against any and all claims arising from Researcher's use of the Database, including but not limited to Researcher's use of any copies of copyrighted audio files that he or she may create from the Database.
4. Researcher may provide research associates and colleagues with access to the Database provided that they first agree to be bound by these terms and conditions.
5. The SpeechColab team and Tsinghua University reserve the right to terminate Researcher's access to the Database at any time.
6. If Researcher is employed by a for-profit, commercial entity, Researcher's employer shall also be bound by these terms and conditions, and Researcher hereby represents that he or she is fully authorized to enter into this agreement on behalf of such employer.
Please also fill out the Google Form https://forms.gle/UuGQAPyscGRrUMLq6 to request access to the GigaSpeech dataset.
extra_gated_fields:
Name: text
Email: text
Organization: text
Address: text
I hereby confirm that I have requested access via the Google Form provided above: checkbox
I accept the terms of access: checkbox
---
# Distil Whisper: GigaSpeech
This is a variant of the [GigaSpeech](https://huggingface.co/datasets/speechcolab/gigaspeech) dataset, augmented to return the pseudo-labelled Whisper
transcriptions alongside the original dataset elements. The pseudo-labelled transcriptions were generated by
labelling the input audio data with the Whisper [large-v2](https://huggingface.co/openai/whisper-large-v2)
model with *greedy* sampling. For information on how the original dataset was curated, refer to the original
[dataset card](https://huggingface.co/datasets/speechcolab/gigaspeech).
## Standalone Usage
First, install the latest version of the 🤗 Datasets package:
```bash
pip install --upgrade pip
pip install --upgrade datasets[audio]
```
The dataset can be downloaded and pre-processed on disk using the [`load_dataset`](https://huggingface.co/docs/datasets/v2.14.5/en/package_reference/loading_methods#datasets.load_dataset)
function:
```python
from datasets import load_dataset
dataset = load_dataset("distil-whisper/gigaspeech-l", "l")
# take the first sample of the validation set
sample = dataset["validation"][0]
```
It can also be streamed directly from the Hub using Datasets' [streaming mode](https://huggingface.co/blog/audio-datasets#streaming-mode-the-silver-bullet).
Loading a dataset in streaming mode loads individual samples of the dataset at a time, rather than downloading the entire
dataset to disk:
```python
from datasets import load_dataset
dataset = load_dataset("distil-whisper/gigaspeech-l", "l", streaming=True)
# take the first sample of the validation set
sample = next(iter(dataset["validation"]))
```
## Distil Whisper Usage
To use this dataset to reproduce a Distil Whisper training run, refer to the instructions on the
[Distil Whisper repository](https://github.com/huggingface/distil-whisper#training).
## License
This dataset is licensed under custom terms. To view the custom license for this dataset, refer to the original [dataset card](https://huggingface.co/datasets/speechcolab/gigaspeech).
| 4,292 | [
[
-0.015411376953125,
-0.04833984375,
0.00833892822265625,
0.033447265625,
-0.018768310546875,
0.004650115966796875,
-0.008087158203125,
-0.01421356201171875,
0.04486083984375,
0.027496337890625,
-0.05963134765625,
-0.0233306884765625,
-0.049835205078125,
0.00... |
bellagio-ai/t2i-one-pillar-pagoda | 2023-10-11T13:37:51.000Z | [
"region:us"
] | bellagio-ai | null | null | 0 | 12 | 2023-10-11T13:37:13 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 10724002.0
num_examples: 27
download_size: 10667654
dataset_size: 10724002.0
---
# Dataset Card for "t2i-one-pillar-pagoda"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 487 | [
[
-0.0216217041015625,
-0.01508331298828125,
0.0027103424072265625,
0.04168701171875,
-0.0226898193359375,
-0.016021728515625,
0.036773681640625,
0.002422332763671875,
0.040802001953125,
0.032867431640625,
-0.048919677734375,
-0.06390380859375,
-0.038330078125,
... |
Tommert25/Data_augmentation | 2023-10-12T08:56:59.000Z | [
"region:us"
] | Tommert25 | null | null | 0 | 12 | 2023-10-11T14:49:45 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
lukarape/public_small_papers | 2023-10-16T12:30:39.000Z | [
"region:us"
] | lukarape | null | null | 0 | 12 | 2023-10-12T18:20:46 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
zhan1993/enhanced_topic_instructions | 2023-10-13T08:50:26.000Z | [
"region:us"
] | zhan1993 | null | null | 0 | 12 | 2023-10-13T08:50:09 | ---
configs:
- config_name: default
data_files:
- split: high_school_physics_enhanced
path: data/high_school_physics_enhanced-*
- split: security_studies_enhanced
path: data/security_studies_enhanced-*
- split: formal_logic_enhanced
path: data/formal_logic_enhanced-*
- split: college_biology_enhanced
path: data/college_biology_enhanced-*
- split: abstract_algebra_enhanced
path: data/abstract_algebra_enhanced-*
- split: sociology_enhanced
path: data/sociology_enhanced-*
- split: prehistory_enhanced
path: data/prehistory_enhanced-*
- split: global_facts_enhanced
path: data/global_facts_enhanced-*
- split: machine_learning_enhanced
path: data/machine_learning_enhanced-*
- split: high_school_government_and_politics_enhanced
path: data/high_school_government_and_politics_enhanced-*
dataset_info:
features:
- name: instruction
dtype: string
- name: output
dtype: string
splits:
- name: high_school_physics_enhanced
num_bytes: 3573173
num_examples: 1980
- name: security_studies_enhanced
num_bytes: 9466217
num_examples: 4340
- name: formal_logic_enhanced
num_bytes: 3748928
num_examples: 2125
- name: college_biology_enhanced
num_bytes: 2154060
num_examples: 1020
- name: abstract_algebra_enhanced
num_bytes: 3757955
num_examples: 2230
- name: sociology_enhanced
num_bytes: 7970185
num_examples: 3691
- name: prehistory_enhanced
num_bytes: 4336111
num_examples: 2240
- name: global_facts_enhanced
num_bytes: 4078170
num_examples: 2265
- name: machine_learning_enhanced
num_bytes: 3123889
num_examples: 1575
- name: high_school_government_and_politics_enhanced
num_bytes: 8862802
num_examples: 4600
download_size: 23508823
dataset_size: 51071490
---
# Dataset Card for "enhanced_topic_instructions"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 2,017 | [
[
-0.04254150390625,
-0.04388427734375,
0.019561767578125,
0.0218505859375,
-0.02166748046875,
-0.01103973388671875,
-0.0000883340835571289,
0.01415252685546875,
0.0589599609375,
0.04681396484375,
-0.0687255859375,
-0.052398681640625,
-0.034881591796875,
-0.02... |
milkshake721/2.1M-wiki-STEM | 2023-10-13T09:22:50.000Z | [
"region:us"
] | milkshake721 | null | null | 1 | 12 | 2023-10-13T09:00:40 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
Globaly/familias-130k | 2023-10-13T22:43:24.000Z | [
"region:us"
] | Globaly | null | null | 0 | 12 | 2023-10-13T16:48:25 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
sordonia/id-maxD100 | 2023-10-14T01:00:30.000Z | [
"region:us"
] | sordonia | null | null | 0 | 12 | 2023-10-14T01:00:16 | ## max_context_length: 128
## max_documents_per_subject: 100
| 61 | [
[
-0.039154052734375,
-0.033233642578125,
0.054656982421875,
0.07427978515625,
-0.038238525390625,
-0.040069580078125,
-0.0230865478515625,
0.01103973388671875,
0.0025424957275390625,
0.043212890625,
-0.022247314453125,
-0.0518798828125,
-0.06573486328125,
0.0... |
sayan1101/test_dataset_sum | 2023-10-15T09:53:26.000Z | [
"region:us"
] | sayan1101 | null | null | 0 | 12 | 2023-10-15T09:45:02 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: prompt
dtype: string
splits:
- name: train
num_bytes: 49753382
num_examples: 11490
download_size: 28642180
dataset_size: 49753382
---
# Dataset Card for "test_dataset_sum"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 450 | [
[
-0.04620361328125,
-0.01543426513671875,
-0.00438690185546875,
0.0183258056640625,
-0.00939178466796875,
0.0028400421142578125,
0.0223846435546875,
0.00450897216796875,
0.061920166015625,
0.024169921875,
-0.05426025390625,
-0.0394287109375,
-0.036224365234375,
... |
EnzoZacharias/SPS | 2023-10-17T09:18:25.000Z | [
"region:us"
] | EnzoZacharias | null | null | 0 | 12 | 2023-10-16T09:50:43 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
ranzhen/hackthon-jsonl-test-2023 | 2023-10-20T02:57:51.000Z | [
"region:us"
] | ranzhen | null | null | 0 | 12 | 2023-10-16T09:52:37 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
Tngarg/russian_test | 2023-10-16T11:21:37.000Z | [
"region:us"
] | Tngarg | null | null | 0 | 12 | 2023-10-16T11:21:36 | ---
dataset_info:
features:
- name: sentiment
dtype: string
- name: tweet
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 75684
num_examples: 260
download_size: 43226
dataset_size: 75684
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "russian_test"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 517 | [
[
-0.0305938720703125,
-0.026275634765625,
0.0079803466796875,
0.01934814453125,
-0.01355743408203125,
0.003055572509765625,
0.01222991943359375,
-0.0036449432373046875,
0.044189453125,
0.012908935546875,
-0.051971435546875,
-0.05572509765625,
-0.0311279296875,
... |
sayan1101/new_sft_summarize | 2023-10-16T16:16:04.000Z | [
"region:us"
] | sayan1101 | null | null | 0 | 12 | 2023-10-16T16:11:20 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
dataset_info:
features:
- name: prompt
dtype: string
splits:
- name: train
num_bytes: 1264287802
num_examples: 287113
- name: validation
num_bytes: 57852724
num_examples: 13368
- name: test
num_bytes: 50029142
num_examples: 11490
download_size: 801958229
dataset_size: 1372169668
---
# Dataset Card for "new_sft_summarize"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 677 | [
[
-0.05059814453125,
-0.019287109375,
0.01181793212890625,
0.024932861328125,
-0.019683837890625,
0.01174163818359375,
0.0220489501953125,
0.0018100738525390625,
0.0723876953125,
0.03985595703125,
-0.053985595703125,
-0.046478271484375,
-0.038238525390625,
-0.... |
LEEEE2/autotrain-data-pictures | 2023-10-17T08:01:15.000Z | [
"region:us"
] | LEEEE2 | null | null | 0 | 12 | 2023-10-16T18:11:32 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
wlee44/attack | 2023-10-16T20:27:11.000Z | [
"region:us"
] | wlee44 | null | null | 0 | 12 | 2023-10-16T20:26:23 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
tyzhu/eval_tag_nq_test_v12_middle | 2023-10-17T05:21:24.000Z | [
"region:us"
] | tyzhu | null | null | 0 | 12 | 2023-10-17T05:21:17 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
dataset_info:
features:
- name: question
dtype: string
- name: title
dtype: string
- name: inputs
dtype: string
- name: targets
dtype: string
- name: answers
struct:
- name: answer_start
sequence: 'null'
- name: text
sequence: string
- name: id
dtype: string
- name: titles
dtype: string
splits:
- name: train
num_bytes: 5226
num_examples: 10
- name: validation
num_bytes: 1980914
num_examples: 3610
download_size: 1119335
dataset_size: 1986140
---
# Dataset Card for "eval_tag_nq_test_v12_middle"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 860 | [
[
-0.046112060546875,
-0.0294647216796875,
0.0013360977172851562,
0.00919342041015625,
-0.01094818115234375,
0.018798828125,
0.024322509765625,
0.0010843276977539062,
0.05059814453125,
0.0244598388671875,
-0.054534912109375,
-0.046112060546875,
-0.023773193359375,... |
seuprimrose/ccdmdata | 2023-10-17T16:06:31.000Z | [
"region:us"
] | seuprimrose | null | null | 0 | 12 | 2023-10-17T07:52:27 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
Surajsangwan90/public_transport_NZ_latest | 2023-10-30T01:27:49.000Z | [
"region:us"
] | Surajsangwan90 | null | null | 0 | 12 | 2023-10-18T00:24:10 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
elliotthwang/guanaco-llama2-chinese-1ka | 2023-10-18T13:33:03.000Z | [
"region:us"
] | elliotthwang | null | null | 0 | 12 | 2023-10-18T13:33:01 | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 1348677
num_examples: 1000
download_size: 811412
dataset_size: 1348677
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "guanaco-llama2-chinese-1ka"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 453 | [
[
-0.01218414306640625,
-0.0189208984375,
0.01800537109375,
0.041961669921875,
-0.03338623046875,
0.0022945404052734375,
0.0110626220703125,
-0.02154541015625,
0.0599365234375,
0.0229949951171875,
-0.05572509765625,
-0.069091796875,
-0.042205810546875,
-0.0110... |
lukekim420/sshsbamboobot | 2023-10-18T15:11:31.000Z | [
"language:ko",
"region:us"
] | lukekim420 | null | null | 0 | 12 | 2023-10-18T15:09:08 | ---
language:
- ko
---
sshsbamboogroove crawled dataset
---
license: apache-2.0
--- | 84 | [
[
0.0004444122314453125,
-0.032318115234375,
-0.0018033981323242188,
0.056182861328125,
-0.063720703125,
-0.01015472412109375,
0.01727294921875,
-0.003612518310546875,
0.022918701171875,
0.08050537109375,
-0.041778564453125,
-0.03277587890625,
-0.046600341796875,
... |
asgaardlab/SampleDataset2 | 2023-10-18T16:49:58.000Z | [
"region:us"
] | asgaardlab | null | null | 0 | 12 | 2023-10-18T16:49:40 | ---
dataset_info:
features:
- name: Buggy Image
dtype: image
- name: Correct Image
dtype: image
- name: Segmentation Image (Bug)
dtype: image
- name: Segmentation Image (Correct)
dtype: image
- name: Description
dtype: string
- name: Tag
dtype: string
- name: Objects JSON (Bug)
dtype: string
- name: Objects JSON (Correct)
dtype: string
- name: Victim Name
dtype: string
- name: Victim Color
sequence: int64
splits:
- name: validation
num_bytes: 404705601.0
num_examples: 751
download_size: 379820267
dataset_size: 404705601.0
configs:
- config_name: default
data_files:
- split: validation
path: data/validation-*
---
# Dataset Card for "SampleDataset2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 874 | [
[
-0.038543701171875,
-0.0052490234375,
0.00981903076171875,
0.0226287841796875,
-0.0179595947265625,
0.0014629364013671875,
0.021270751953125,
-0.02490234375,
0.05535888671875,
0.0284576416015625,
-0.05657958984375,
-0.0357666015625,
-0.038299560546875,
-0.02... |
aiancheruk/go_emotions_mini | 2023-10-18T22:29:55.000Z | [
"region:us"
] | aiancheruk | null | null | 0 | 12 | 2023-10-18T22:29:50 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
dataset_info:
features:
- name: text
dtype: string
- name: labels
sequence:
class_label:
names:
'0': admiration
'1': amusement
'2': anger
'3': annoyance
'4': approval
'5': caring
'6': confusion
'7': curiosity
'8': desire
'9': disappointment
'10': disapproval
'11': disgust
'12': embarrassment
'13': excitement
'14': fear
'15': gratitude
'16': grief
'17': joy
'18': love
'19': nervousness
'20': optimism
'21': pride
'22': realization
'23': relief
'24': remorse
'25': sadness
'26': surprise
'27': neutral
- name: id
dtype: string
splits:
- name: train
num_bytes: 48653.97373876987
num_examples: 500
- name: validation
num_bytes: 9714.688536675267
num_examples: 100
- name: test
num_bytes: 9663.589460106872
num_examples: 100
download_size: 54811
dataset_size: 68032.25173555201
---
# Dataset Card for "go_emotions_shrinked"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 1,482 | [
[
-0.035919189453125,
-0.0177459716796875,
0.01029205322265625,
0.0070343017578125,
-0.0205535888671875,
-0.0130615234375,
-0.0052642822265625,
0.0009665489196777344,
0.06842041015625,
0.0223846435546875,
-0.0738525390625,
-0.041748046875,
-0.04254150390625,
-... |
tyzhu/flan_max_300 | 2023-10-19T08:52:00.000Z | [
"region:us"
] | tyzhu | null | null | 0 | 12 | 2023-10-19T08:26:52 | ---
dataset_info:
features:
- name: id
dtype: string
- name: system_prompt
dtype: string
- name: question
dtype: string
- name: response
dtype: string
splits:
- name: train
num_bytes: 2253528229.0133214
num_examples: 1321267
- name: test
num_bytes: 118607826.10465212
num_examples: 69541
- name: validation
num_bytes: 118607826.10465212
num_examples: 69541
download_size: 34774605
dataset_size: 2490743881.2226253
---
# Dataset Card for "flan_max_300"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 646 | [
[
-0.0548095703125,
-0.0142822265625,
0.00551605224609375,
0.01490020751953125,
0.004375457763671875,
-0.0174407958984375,
0.004405975341796875,
-0.006244659423828125,
0.05615234375,
0.038848876953125,
-0.05242919921875,
-0.04345703125,
-0.03570556640625,
-0.0... |
kwanyick/cover-letter-dataset-text-prompt | 2023-10-19T16:00:37.000Z | [
"region:us"
] | kwanyick | null | null | 0 | 12 | 2023-10-19T15:59:46 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 1231557.1678141137
num_examples: 813
- name: test
num_bytes: 528675.8321858865
num_examples: 349
download_size: 594129
dataset_size: 1760233.0
---
# Dataset Card for "cover-letter-dataset-text-prompt"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 579 | [
[
-0.03094482421875,
-0.01253509521484375,
0.0228271484375,
0.01751708984375,
-0.004638671875,
0.004180908203125,
0.017822265625,
0.01065826416015625,
0.0633544921875,
0.03338623046875,
-0.0826416015625,
-0.062103271484375,
-0.03729248046875,
-0.00089073181152... |
tanvinsharma/books1000 | 2023-10-20T14:04:57.000Z | [
"region:us"
] | tanvinsharma | null | null | 0 | 12 | 2023-10-20T14:04:35 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
OdiaGenAI/alpaca_dolly_eval_100 | 2023-10-20T15:31:28.000Z | [
"region:us"
] | OdiaGenAI | null | null | 0 | 12 | 2023-10-20T15:31:09 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
sam2ai/hindi_truthfulqa_gen_mini | 2023-10-20T22:41:34.000Z | [
"region:us"
] | sam2ai | null | null | 0 | 12 | 2023-10-20T22:31:44 | ---
configs:
- config_name: default
data_files:
- split: validation
path: data/validation-*
dataset_info:
features:
- name: type
dtype: string
- name: category
dtype: string
- name: question
dtype: string
- name: best_answer
dtype: string
- name: correct_answers
sequence: string
- name: incorrect_answers
sequence: string
- name: source
dtype: string
splits:
- name: validation
num_bytes: 81430
num_examples: 50
download_size: 35995
dataset_size: 81430
---
# Dataset Card for "hindi_truthfulqa_gen_mini"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 705 | [
[
-0.033966064453125,
-0.026824951171875,
-0.00040411949157714844,
0.01024627685546875,
-0.01528167724609375,
0.0170135498046875,
0.0104217529296875,
0.0011892318725585938,
0.0455322265625,
0.012786865234375,
-0.061309814453125,
-0.04510498046875,
-0.0435485839843... |
goodcoffee/covidQA_training_v2 | 2023-11-01T13:28:50.000Z | [
"region:us"
] | goodcoffee | null | null | 0 | 12 | 2023-10-21T10:50:59 | ---
dataset_info:
features:
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
- name: start_positions
dtype: int64
- name: end_positions
dtype: int64
splits:
- name: train
num_bytes: 3651192
num_examples: 1413
download_size: 0
dataset_size: 3651192
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "covidQA_training_v2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 576 | [
[
-0.0267486572265625,
-0.005756378173828125,
-0.0047454833984375,
0.00926971435546875,
-0.0103759765625,
-0.0018110275268554688,
0.034759521484375,
-0.0118865966796875,
0.041229248046875,
0.019622802734375,
-0.059967041015625,
-0.041961669921875,
-0.0415649414062... |
BahAdoR0101/conll2003job | 2023-10-21T15:35:55.000Z | [
"region:us"
] | BahAdoR0101 | The shared task of CoNLL-2003 concerns language-independent named entity recognition. We will concentrate on
four types of named entities: persons, locations, organizations and names of miscellaneous entities that do
not belong to the previous three groups.
The CoNLL-2003 shared task data files contain four columns separated by a single space. Each word has been put on
a separate line and there is an empty line after each sentence. The first item on each line is a word, the second
a part-of-speech (POS) tag, the third a syntactic chunk tag and the fourth the named entity tag. The chunk tags
and the named entity tags have the format I-TYPE which means that the word is inside a phrase of type TYPE. Only
if two phrases of the same type immediately follow each other, the first word of the second phrase will have tag
B-TYPE to show that it starts a new phrase. A word with tag O is not part of a phrase. Note the dataset uses IOB2
tagging scheme, whereas the original dataset uses IOB1.
For more details see https://www.clips.uantwerpen.be/conll2003/ner/ and https://www.aclweb.org/anthology/W03-0419 | @inproceedings{tjong-kim-sang-de-meulder-2003-introduction,
title = "Introduction to the {C}o{NLL}-2003 Shared Task: Language-Independent Named Entity Recognition",
author = "Tjong Kim Sang, Erik F. and
De Meulder, Fien",
booktitle = "Proceedings of the Seventh Conference on Natural Language Learning at {HLT}-{NAACL} 2003",
year = "2003",
url = "https://www.aclweb.org/anthology/W03-0419",
pages = "142--147",
} | 0 | 12 | 2023-10-21T15:35:18 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
JLB-JLB/seizure_eeg_greyscale_224x224_6secWindow | 2023-10-21T23:42:39.000Z | [
"region:us"
] | JLB-JLB | null | null | 1 | 12 | 2023-10-21T22:29:33 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: dev
path: data/dev-*
- split: eval
path: data/eval-*
dataset_info:
features:
- name: image
dtype: image
- name: epoch
dtype: int64
- name: label
dtype:
class_label:
names:
'0': seiz
'1': bckg
splits:
- name: train
num_bytes: 23735631090.792
num_examples: 814568
- name: dev
num_bytes: 12051655546.53
num_examples: 390190
- name: eval
num_bytes: 3322082528.975
num_examples: 114035
download_size: 39216537180
dataset_size: 39109369166.297
---
# Dataset Card for "seizure_eeg_greyscale_224x224_6secWindow"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 837 | [
[
-0.046966552734375,
-0.016693115234375,
0.04443359375,
0.036773681640625,
-0.0298309326171875,
-0.01116180419921875,
0.0048828125,
-0.01727294921875,
0.06280517578125,
0.023101806640625,
-0.0428466796875,
-0.0469970703125,
-0.04345703125,
-0.0229644775390625... |
maxolotl/must-c-en-es-02 | 2023-10-22T07:47:19.000Z | [
"region:us"
] | maxolotl | null | null | 0 | 12 | 2023-10-22T07:47:13 | ---
dataset_info:
features:
- name: en
dtype: string
- name: es
dtype: string
splits:
- name: train
num_bytes: 59874575
num_examples: 259892
- name: test
num_bytes: 658214
num_examples: 3035
- name: validation
num_bytes: 310157
num_examples: 1309
download_size: 37502474
dataset_size: 60842946
---
# Dataset Card for "must-c-en-es-02"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 517 | [
[
-0.042327880859375,
-0.010833740234375,
0.029205322265625,
0.03826904296875,
-0.007602691650390625,
-0.010498046875,
0.018463134765625,
-0.0279693603515625,
0.060760498046875,
0.048980712890625,
-0.0721435546875,
-0.056121826171875,
-0.045135498046875,
0.001... |
textminr/sotu_paragraphs | 2023-10-22T12:39:31.000Z | [
"size_categories:n<1K",
"language:en",
"license:gpl-2.0",
"sotu",
"region:us"
] | textminr | null | null | 0 | 12 | 2023-10-22T11:07:29 | ---
license: gpl-2.0
language:
- en
tags:
- sotu
size_categories:
- n<1K
---
This is a dataset containing the *United States Presidential State of the Union Addresses* through 2020; derived from the `sotu` R package. | 217 | [
[
-0.035186767578125,
-0.0408935546875,
0.01271820068359375,
0.006595611572265625,
-0.02459716796875,
0.01169586181640625,
0.0193939208984375,
0.0128326416015625,
0.035125732421875,
0.0579833984375,
-0.08233642578125,
-0.0215301513671875,
-0.03790283203125,
0.... |
cis-lmu/udhr-lid | 2023-10-26T09:56:41.000Z | [
"multilinguality:multilingual",
"language:tir",
"language:rmn",
"language:arb",
"language:mxv",
"language:mal",
"language:fij",
"language:som",
"language:cot",
"language:fur",
"language:vie",
"language:zlm",
"language:bam",
"language:chr",
"language:maz",
"language:yad",
"language:zt... | cis-lmu | null | null | 1 | 12 | 2023-10-22T18:49:59 | ---
license: cc0-1.0
configs:
- config_name: default
data_files:
- split: test
path: "udhr-lid.csv"
multilinguality:
- multilingual
language:
- tir
- rmn
- arb
- mxv
- mal
- fij
- som
- cot
- fur
- vie
- zlm
- bam
- chr
- maz
- yad
- ztu
- ykg
- ccp
- alt
- ayr
- njo
- bci
- gyr
- run
- haw
- rgn
- cak
- kwi
- fra
- agr
- duu
- ilo
- nhn
- kdh
- cnh
- bod
- mya
- ady
- pol
- ydd
- cos
- lot
- arl
- glv
- gag
- bfa
- afr
- lij
- zlm
- ibb
- toi
- tzm
- ron
- ojb
- san
- eng
- bum
- pam
- kqs
- dje
- auc
- smo
- por
- fry
- lad
- pov
- tyv
- guc
- huu
- ese
- kbp
- eve
- yrk
- lin
- tdt
- qvc
- top
- nav
- twi
- oss
- lia
- ame
- hun
- lit
- que
- qug
- nku
- csa
- lao
- knc
- kjh
- jav
- mam
- ita
- ppl
- aar
- tbz
- ssw
- bug
- srp
- kaz
- min
- mad
- orh
- tgk
- kat
- uig
- tzo
- hat
- shn
- kbd
- niv
- idu
- krl
- abk
- mto
- gla
- ijs
- cri
- uzn
- tah
- tob
- kir
- quy
- hnj
- srr
- lvs
- nan
- hns
- snk
- swh
- ekk
- guu
- div
- dzo
- spa
- hms
- ell
- ace
- war
- ind
- cjy
- cfm
- nds
- ewe
- tel
- src
- fuf
- vmw
- zro
- men
- kqn
- nzi
- taj
- khk
- ddn
- nso
- mxi
- pon
- fvr
- hau
- ktu
- tem
- yor
- pnb
- ltz
- evn
- cjs
- nba
- niu
- dan
- acu
- zgh
- chj
- heb
- lua
- quz
- uig
- cbi
- jav
- cpu
- wuu
- mah
- kmb
- mcd
- ben
- lus
- ajg
- azj
- tha
- dga
- isl
- sus
- fuf
- fkv
- jiv
- mor
- nio
- als
- buc
- kde
- nbl
- ceb
- ven
- sun
- cbt
- swb
- tur
- dyo
- sin
- pbu
- ada
- pap
- qvh
- loz
- pan
- qva
- sme
- bax
- tuk
- hsb
- hus
- qvn
- ban
- cha
- zyb
- hin
- tat
- uzn
- qxu
- gej
- quc
- mnw
- bho
- udu
- kha
- kbr
- tsz
- pau
- mkd
- shp
- ike
- lue
- tgl
- yap
- yua
- koi
- hrv
- emk
- tet
- ndo
- cbu
- vep
- cmn
- sag
- nym
- rus
- gjn
- guk
- kri
- ote
- lun
- vai
- bis
- arn
- tsn
- gle
- hak
- gkp
- ura
- tca
- xho
- wln
- amc
- mos
- lld
- bul
- qxn
- bcl
- ctd
- dip
- dag
- kek
- bre
- mri
- fin
- sah
- cym
- kan
- fao
- gsw
- sey
- bem
- bos
- bin
- chv
- tpi
- ami
- oaa
- lob
- ast
- nno
- sco
- tuk
- khm
- pes
- pbb
- tam
- ibo
- san
- sid
- plt
- guj
- hsn
- kin
- lug
- slr
- koo
- xsm
- jpn
- oki
- deu
- rar
- pcm
- hni
- vec
- gld
- sot
- crs
- fuv
- srp
- npi
- nya
- kea
- blt
- roh
- cbr
- chk
- kal
- mfq
- quh
- kor
- slv
- cof
- shk
- zul
- qwh
- fon
- mic
- prs
- mag
- bel
- iii
- mar
- dyu
- boa
- swe
- pis
- mlt
- amh
- umb
- cnr
- mai
- toj
- csw
- ina
- bba
- cbs
- kng
- oci
- pcd
- miq
- lat
- qvm
- wwa
- bos
- urd
- kmr
- ido
- gaa
- epo
- gaz
- cat
- hye
- cni
- suk
- gug
- gan
- cjk
- tzh
- zam
- ces
- cic
- mcf
- not
- kaa
- tso
- piu
- fat
- mzi
- snn
- tly
- eus
- nld
- nob
- wol
- hlt
- sna
- tiv
- ton
- hea
- skr
- lns
- rup
- cab
- glg
- tgl
- yao
- nyn
- aii
- tzm
- slk
- ukr
- kkh
- zdj
- amr
- yue
- crh
- hil
tags:
- UDHR
- udhr
- language identification
- LID
- glot
- GlotLID
pretty_name: UDHR-LID
---
# UDHR-LID
**Why UDHR-LID?**
You can access UDHR [here](http://www.unicode.org/udhr/d/), but when a verse is missing, they have texts such as "missing" or "?". Also, about 1/3 of the sentences consist only of "articles 1-30" in different languages. We cleaned the entire dataset from XML files and selected only the paragraphs. We cleared any unrelated language texts from the data and also removed the cases that were incorrect.
Incorrect? Look at the ckb and kmr files in the UDHR. Both are the same! ckb is known for the Arabic script, although it can also be written in Latin. Clearly, a single file cannot belong to two different languages. We also deleted files whose scripts we believe are no longer in use.
The deleted files include:
- ckb_Latn (Arabic is in use.)
- azb_Latn (Arabic is in use.)
- khk_Mong (Cyrillic is in use.)
- vie_Hani (Latin is in use.)
If you are interested in how scripts are handled in other languages, check the GlotScript [code](https://github.com/cisnlp/GlotScript) and [paper](https://arxiv.org/abs/2309.13320). We have prepared a tool for detecting the script of a text, as well as metadata that determines the correct script for each language.
We believe UDHR should remain a test corpus in NLP, not a training corpus. Of course, we are not opposed to great works such as Franc built on top of UDHR. However, if your work's scale is much bigger than UDHR, do not put UDHR in your training data. Use it as a test/validation set, or find out what is wrong with your training data with the help of UDHR. Be aware that parts of UDHR may be hosted on other websites, such as Wikipedia, news sites like the BBC, or collaborative translation communities like Tatoeba. Before using UDHR as a test set, exclude any UDHR sentence that appears in your training data.
We created this corpus for the language identification evaluation task in our GlotLID [paper](https://arxiv.org/abs/2310.16248), but feel free to use it for your own task. The texts here are not in order, and they're not parallel. However, each row belongs to its stated language and is long, cleaned, and rich in linguistic content!
## Usage (HF Loader)
```python
from datasets import load_dataset
dataset = load_dataset('cis-lmu/udhr-lid', split='test')
print(dataset[0]) # First row of udhr-lid
```
## Download
If you are not a fan of the HF dataloader, download each language directly:
```python
! wget https://huggingface.co/datasets/cis-lmu/udhr-lid/resolve/main/udhr-lid.csv
```
or clone the whole repository:
```python
! git clone https://huggingface.co/datasets/cis-lmu/udhr-lid
```
## License
UDHR is the most translated copyright-free document in the world.
We license the actual packaging, the metadata, and the annotations of these data under CC0-1.0 (waiving all rights under copyright law).
## Citation
If you use any part of this data in your research, please cite it (along with http://www.unicode.org/udhr/d/) using the following BibTeX entry.
```
@inproceedings{
kargaran2023glotlid,
title={{GlotLID}: Language Identification for Low-Resource Languages},
author={Kargaran, Amir Hossein and Imani, Ayyoob and Yvon, Fran{\c{c}}ois and Sch{\"u}tze, Hinrich},
booktitle={The 2023 Conference on Empirical Methods in Natural Language Processing},
year={2023},
url={https://openreview.net/forum?id=dl4e3EBz5j}
}
```
| 6,160 | [
[
-0.016387939453125,
-0.04327392578125,
0.01397705078125,
0.0205841064453125,
-0.0214080810546875,
0.011016845703125,
-0.036224365234375,
-0.04791259765625,
0.007175445556640625,
0.040283203125,
-0.0243377685546875,
-0.053253173828125,
-0.03277587890625,
0.03... |
davidfant/natural-questions-chunk-2 | 2023-10-22T22:56:01.000Z | [
"region:us"
] | davidfant | null | null | 0 | 12 | 2023-10-22T22:52:24 | ---
dataset_info:
features:
- name: id
dtype: string
- name: document
struct:
- name: html
dtype: string
- name: title
dtype: string
- name: tokens
sequence:
- name: end_byte
dtype: int64
- name: is_html
dtype: bool
- name: start_byte
dtype: int64
- name: token
dtype: string
- name: url
dtype: string
- name: question
struct:
- name: text
dtype: string
- name: tokens
sequence: string
- name: long_answer_candidates
sequence:
- name: end_byte
dtype: int64
- name: end_token
dtype: int64
- name: start_byte
dtype: int64
- name: start_token
dtype: int64
- name: top_level
dtype: bool
- name: annotations
sequence:
- name: id
dtype: string
- name: long_answer
struct:
- name: candidate_index
dtype: int64
- name: end_byte
dtype: int64
- name: end_token
dtype: int64
- name: start_byte
dtype: int64
- name: start_token
dtype: int64
- name: short_answers
sequence:
- name: end_byte
dtype: int64
- name: end_token
dtype: int64
- name: start_byte
dtype: int64
- name: start_token
dtype: int64
- name: text
dtype: string
- name: yes_no_answer
dtype:
class_label:
names:
'0': 'NO'
'1': 'YES'
splits:
- name: train
num_bytes: 4672087643
num_examples: 10000
download_size: 1816142719
dataset_size: 4672087643
---
# Dataset Card for "natural-questions-chunk-2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 1,818 | [
[
-0.054931640625,
-0.06463623046875,
0.01248931884765625,
0.0225830078125,
-0.03326416015625,
-0.000988006591796875,
0.0151824951171875,
-0.029632568359375,
0.060516357421875,
0.04486083984375,
-0.0577392578125,
-0.01068115234375,
-0.02935791015625,
-0.007472... |
davidfant/natural-questions-chunk-3 | 2023-10-22T22:59:30.000Z | [
"region:us"
] | davidfant | null | null | 0 | 12 | 2023-10-22T22:56:01 | ---
dataset_info:
features:
- name: id
dtype: string
- name: document
struct:
- name: html
dtype: string
- name: title
dtype: string
- name: tokens
sequence:
- name: end_byte
dtype: int64
- name: is_html
dtype: bool
- name: start_byte
dtype: int64
- name: token
dtype: string
- name: url
dtype: string
- name: question
struct:
- name: text
dtype: string
- name: tokens
sequence: string
- name: long_answer_candidates
sequence:
- name: end_byte
dtype: int64
- name: end_token
dtype: int64
- name: start_byte
dtype: int64
- name: start_token
dtype: int64
- name: top_level
dtype: bool
- name: annotations
sequence:
- name: id
dtype: string
- name: long_answer
struct:
- name: candidate_index
dtype: int64
- name: end_byte
dtype: int64
- name: end_token
dtype: int64
- name: start_byte
dtype: int64
- name: start_token
dtype: int64
- name: short_answers
sequence:
- name: end_byte
dtype: int64
- name: end_token
dtype: int64
- name: start_byte
dtype: int64
- name: start_token
dtype: int64
- name: text
dtype: string
- name: yes_no_answer
dtype:
class_label:
names:
'0': 'NO'
'1': 'YES'
splits:
- name: train
num_bytes: 4591162424
num_examples: 10000
download_size: 1782588663
dataset_size: 4591162424
---
# Dataset Card for "natural-questions-chunk-3"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 1,818 | [
[
-0.061309814453125,
-0.060791015625,
0.0224609375,
0.023468017578125,
-0.027496337890625,
-0.004207611083984375,
0.01983642578125,
-0.0292816162109375,
0.060699462890625,
0.0479736328125,
-0.05560302734375,
-0.02154541015625,
-0.0224151611328125,
0.006805419... |
w95/dz | 2023-10-23T17:29:42.000Z | [
"region:us"
] | w95 | null | null | 0 | 12 | 2023-10-23T17:16:22 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
Hessa/tqa | 2023-10-23T21:07:29.000Z | [
"region:us"
] | Hessa | null | null | 0 | 12 | 2023-10-23T21:05:56 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
Alamerton/cot_submix_data | 2023-10-24T12:00:21.000Z | [
"region:us"
] | Alamerton | null | null | 0 | 12 | 2023-10-24T11:59:46 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
sunjun/medqa_sj | 2023-10-24T12:49:39.000Z | [
"region:us"
] | sunjun | null | null | 0 | 12 | 2023-10-24T12:49:24 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: question
dtype: string
- name: answer
dtype: string
- name: options
struct:
- name: A
dtype: string
- name: B
dtype: string
- name: C
dtype: string
- name: D
dtype: string
- name: meta_info
dtype: string
- name: answer_idx
dtype: string
- name: metamap_phrases
sequence: string
- name: choices
sequence: string
- name: answer_index
dtype: int64
splits:
- name: train
num_bytes: 16555933
num_examples: 10178
- name: test
num_bytes: 2118034
num_examples: 1273
download_size: 9784195
dataset_size: 18673967
---
# Dataset Card for "medqa_sj"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 946 | [
[
-0.03253173828125,
-0.01042938232421875,
0.026519775390625,
-0.0013303756713867188,
-0.011627197265625,
0.00433349609375,
0.0236663818359375,
0.007076263427734375,
0.064697265625,
0.042022705078125,
-0.054901123046875,
-0.05218505859375,
-0.045623779296875,
... |
thangvip/orca-filter-half-open | 2023-10-29T16:29:19.000Z | [
"region:us"
] | thangvip | null | null | 0 | 12 | 2023-10-25T04:16:52 | ---
dataset_info:
features:
- name: id
dtype: string
- name: system_prompt
dtype: string
- name: question
dtype: string
- name: response
dtype: string
splits:
- name: train
num_bytes: 640753674.0897293
num_examples: 658943
download_size: 340623823
dataset_size: 640753674.0897293
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "orca-filter-half-open"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 587 | [
[
-0.036651611328125,
-0.03369140625,
0.004291534423828125,
0.0004425048828125,
-0.032440185546875,
-0.0151824951171875,
0.018402099609375,
-0.0177001953125,
0.056915283203125,
0.050933837890625,
-0.06353759765625,
-0.06536865234375,
-0.03228759765625,
-0.0228... |
cestwc/SG-subzone-poi-sentiment_1 | 2023-10-25T16:00:30.000Z | [
"region:us"
] | cestwc | null | null | 0 | 12 | 2023-10-25T15:59:30 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: local_created_at
dtype: string
- name: id
dtype: int64
- name: text
dtype: string
- name: source
dtype: string
- name: truncated
dtype: bool
- name: in_reply_to_status_id
dtype: float32
- name: in_reply_to_user_id
dtype: float32
- name: user_id
dtype: int64
- name: user_name
dtype: string
- name: user_screen_name
dtype: string
- name: user_location
dtype: string
- name: user_url
dtype: string
- name: user_verified
dtype: bool
- name: user_default_profile
dtype: bool
- name: user_description
dtype: string
- name: user_followers_count
dtype: int64
- name: user_friends_count
dtype: int64
- name: user_listed_count
dtype: int64
- name: user_favourites_count
dtype: int64
- name: user_statuses_count
dtype: int64
- name: local_user_created_at
dtype: string
- name: place_id
dtype: string
- name: place_url
dtype: string
- name: place_place_type
dtype: string
- name: place_name
dtype: string
- name: place_country_code
dtype: string
- name: place_bounding_box_type
dtype: string
- name: place_bounding_box_coordinates
dtype: string
- name: is_quote_status
dtype: bool
- name: retweet_count
dtype: int64
- name: favorite_count
dtype: int64
- name: entities_hashtags
dtype: string
- name: entities_urls
dtype: string
- name: entities_symbols
dtype: string
- name: entities_user_mentions
dtype: string
- name: favorited
dtype: bool
- name: retweeted
dtype: bool
- name: possibly_sensitive
dtype: bool
- name: lang
dtype: string
- name: latitude
dtype: float32
- name: longitude
dtype: float32
- name: year_created_at
dtype: int64
- name: month_created_at
dtype: int64
- name: day_created_at
dtype: int64
- name: weekday_created_at
dtype: int64
- name: hour_created_at
dtype: int64
- name: minute_created_at
dtype: int64
- name: year_user_created_at
dtype: int64
- name: month_user_created_at
dtype: int64
- name: day_user_created_at
dtype: int64
- name: weekday_user_created_at
dtype: int64
- name: hour_user_created_at
dtype: int64
- name: minute_user_created_at
dtype: int64
- name: subzone
dtype: string
- name: planning_area
dtype: string
- name: poi_flag
dtype: float32
- name: poi_id
dtype: string
- name: poi_dist
dtype: float32
- name: poi_latitude
dtype: float32
- name: poi_longitude
dtype: float32
- name: poi_name
dtype: string
- name: poi_type
dtype: string
- name: poi_cate2
dtype: string
- name: poi_cate3
dtype: string
- name: clean_text
dtype: string
- name: joy_score
dtype: float32
- name: trust_score
dtype: float32
- name: positive_score
dtype: float32
- name: sadness_score
dtype: float32
- name: disgust_score
dtype: float32
- name: anger_score
dtype: float32
- name: anticipation_score
dtype: float32
- name: negative_score
dtype: float32
- name: fear_score
dtype: float32
- name: surprise_score
dtype: float32
- name: words
dtype: string
- name: polarity_score
dtype: float32
- name: labels
dtype: int64
- name: T0_q1
dtype: string
- name: bart_mnli
dtype: float32
- name: T0_q2
dtype: string
- name: shortlist
dtype: int64
- name: preprocess-1
dtype: string
- name: preprocess-2
dtype: string
splits:
- name: train
num_bytes: 1594138834
num_examples: 1025135
download_size: 490233454
dataset_size: 1594138834
---
# Dataset Card for "SG-subzone-poi-sentiment_1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 3,962 | [
[
-0.054229736328125,
-0.0181427001953125,
0.0086517333984375,
0.027008056640625,
-0.028045654296875,
-0.0186767578125,
0.0162506103515625,
0.0160064697265625,
0.076171875,
0.0170135498046875,
-0.073486328125,
-0.07501220703125,
-0.035430908203125,
-0.02351379... |
sayan1101/finetune_dataset | 2023-10-26T09:51:04.000Z | [
"region:us"
] | sayan1101 | null | null | 0 | 12 | 2023-10-26T09:49:44 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 27407564
num_examples: 52000
download_size: 12306324
dataset_size: 27407564
---
# Dataset Card for "finetune_dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 448 | [
[
-0.054656982421875,
-0.0252685546875,
0.01013946533203125,
0.004909515380859375,
-0.0181732177734375,
-0.021026611328125,
0.006847381591796875,
-0.00466156005859375,
0.05377197265625,
0.0341796875,
-0.05657958984375,
-0.045989990234375,
-0.03729248046875,
-0... |
CJWeiss/inabs | 2023-10-26T20:42:33.000Z | [
"region:us"
] | CJWeiss | null | null | 0 | 12 | 2023-10-26T20:42:23 | ---
dataset_info:
features:
- name: text
dtype: string
- name: summary
dtype: string
- name: file
dtype: string
splits:
- name: train
num_bytes: 159441006
num_examples: 5346
- name: test
num_bytes: 32277886
num_examples: 1069
- name: valid
num_bytes: 21628228
num_examples: 713
download_size: 103927432
dataset_size: 213347120
---
# Dataset Card for "inabs"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 546 | [
[
-0.05059814453125,
-0.019622802734375,
0.007152557373046875,
0.01837158203125,
-0.002285003662109375,
0.0005617141723632812,
0.036102294921875,
-0.024078369140625,
0.07080078125,
0.036834716796875,
-0.051239013671875,
-0.046722412109375,
-0.031219482421875,
... |
wisenut-nlp-team/FiD_aihub_admin | 2023-10-27T06:27:21.000Z | [
"region:us"
] | wisenut-nlp-team | null | null | 0 | 12 | 2023-10-27T04:29:52 | ---
dataset_info:
features:
- name: id
dtype: string
- name: title
dtype: string
- name: question
dtype: string
- name: context
dtype: string
- name: answer
dtype: string
- name: similar_contexts
sequence: string
splits:
- name: train
num_bytes: 4229071220
num_examples: 329464
- name: validation
num_bytes: 528496177
num_examples: 41182
download_size: 2130406942
dataset_size: 4757567397
---
# Dataset Card for "FiD_aihub_admin"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 626 | [
[
-0.05108642578125,
-0.0462646484375,
-0.007171630859375,
-0.000010192394256591797,
-0.01434326171875,
0.00814056396484375,
0.029144287109375,
0.0009217262268066406,
0.05828857421875,
0.0390625,
-0.044219970703125,
-0.04498291015625,
-0.039398193359375,
-0.01... |
buihungtpd3/custom-longformer | 2023-10-28T11:37:10.000Z | [
"region:us"
] | buihungtpd3 | Stanford Question Answering Dataset (SQuAD) is a reading comprehension dataset, consisting of questions posed by crowdworkers on a set of Wikipedia articles, where the answer to every question is a segment of text, or span, from the corresponding reading passage, or the question might be unanswerable. | @article{2016arXiv160605250R,
author = {{Rajpurkar}, Pranav and {Zhang}, Jian and {Lopyrev},
Konstantin and {Liang}, Percy},
title = "{SQuAD: 100,000+ Questions for Machine Comprehension of Text}",
journal = {arXiv e-prints},
year = 2016,
eid = {arXiv:1606.05250},
pages = {arXiv:1606.05250},
archivePrefix = {arXiv},
eprint = {1606.05250},
} | 0 | 12 | 2023-10-28T11:00:25 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
roborovski/upsampled-prompts-parti | 2023-11-01T13:25:03.000Z | [
"region:us"
] | roborovski | null | null | 0 | 12 | 2023-10-29T17:32:37 | ---
dataset_info:
features:
- name: Prompt
dtype: string
- name: Category
dtype: string
- name: Upsampled
dtype: string
splits:
- name: train
num_bytes: 10258852
num_examples: 23318
download_size: 5483101
dataset_size: 10258852
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "upsampled-prompts-parti"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 531 | [
[
-0.04119873046875,
-0.00724029541015625,
0.025634765625,
0.0302276611328125,
-0.026611328125,
0.00490570068359375,
0.004901885986328125,
0.0146942138671875,
0.067138671875,
0.024810791015625,
-0.08026123046875,
-0.038787841796875,
-0.031768798828125,
-0.0021... |
Isamu136/penetration_testing_scraped_dataset | 2023-10-30T07:35:25.000Z | [
"region:us"
] | Isamu136 | null | null | 1 | 12 | 2023-10-29T17:44:34 | ---
dataset_info:
features:
- name: text
dtype: string
- name: embedding
sequence: float32
- name: tokens
sequence: int64
- name: database
dtype: string
- name: file
dtype: string
- name: chunk
dtype: int64
splits:
- name: train
num_bytes: 1005293572
num_examples: 107542
download_size: 663206603
dataset_size: 1005293572
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "penetration_testing_scraped_dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 655 | [
[
-0.035919189453125,
-0.04315185546875,
0.00809478759765625,
0.021759033203125,
-0.0022525787353515625,
0.005184173583984375,
0.037109375,
0.0020809173583984375,
0.045989990234375,
0.036376953125,
-0.0408935546875,
-0.0660400390625,
-0.02911376953125,
-0.0219... |
sam-bha/un-general-assembly-votes-2000-2023 | 2023-11-01T14:56:11.000Z | [
"task_categories:tabular-regression",
"task_categories:tabular-classification",
"language:en",
"license:cc-by-nc-4.0",
"politics",
"region:us"
] | sam-bha | null | null | 0 | 12 | 2023-10-30T02:36:34 | ---
license: cc-by-nc-4.0
task_categories:
- tabular-regression
- tabular-classification
language:
- en
tags:
- politics
pretty_name: UN General Assembly Votes from 2000 to 2023
---
# UN General Assembly Votes from 2000 to 2023
The following is a cleaned and compiled version of all of the UN General Assembly votes, from [the UN Digital Library](https://digitallibrary.un.org/), which includes ~1800 different resolutions and votes by the 196 voting members.
Fields include **Title**, **Resolution Number**, and the actual votes.
The votes are stored in a dict keyed by country name. Several countries changed names over the period (such as Turkey -> Türkiye, Swaziland -> Eswatini), so we use the latest name each country has used as of 2023. One voting member country (Serbia and Montenegro) split into two voting member countries during the time period in question and is not considered. South Sudan, Serbia, and Montenegro only came into existence in the middle of the time period, so we treat them as not voting (null votes) before they became voting members.
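A minimal sketch of tallying one resolution's votes under this format. The vote codes (`"Y"`/`"N"`/`"A"`) and the use of `None` for non-members are assumptions for illustration, not fixed by this card:

```python
from collections import Counter

def tally_votes(votes):
    """Count votes for one resolution.

    None means the country was not a voting member at the time
    (e.g. South Sudan before it joined the UN)."""
    counts = Counter(v for v in votes.values() if v is not None)
    return {
        "yes": counts.get("Y", 0),
        "no": counts.get("N", 0),
        "abstain": counts.get("A", 0),
        "non_voting": sum(1 for v in votes.values() if v is None),
    }

# Illustrative row (hypothetical values; real rows have ~196 entries)
row = {"Türkiye": "Y", "Eswatini": "A", "South Sudan": None, "France": "N"}
print(tally_votes(row))  # {'yes': 1, 'no': 1, 'abstain': 1, 'non_voting': 1}
```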
Please follow the [UN Digital Library terms of service](https://digitallibrary.un.org/pages/?ln=en&page=tos) (e.g. non-commercial use)
© United Nations, 2023, https://digitallibrary.un.org, downloaded on 10/29/2023 | 1,325 | [
[
-0.04888916015625,
-0.0035076141357421875,
0.06634521484375,
0.005100250244140625,
-0.051055908203125,
-0.00235748291015625,
0.0361328125,
-0.0308380126953125,
0.0201568603515625,
0.037109375,
-0.041656494140625,
-0.044464111328125,
-0.0518798828125,
0.04034... |
dltjdgh0928/open_orca_ko_10000 | 2023-10-30T06:15:24.000Z | [
"region:us"
] | dltjdgh0928 | null | null | 1 | 12 | 2023-10-30T05:58:33 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
salma-remyx/gym_equipment_dataset | 2023-10-30T19:09:42.000Z | [
"region:us"
] | salma-remyx | null | null | 0 | 12 | 2023-10-30T19:09:40 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': lat_pulldown_machine
'1': leg_press_machine
'2': leg_raise_tower
splits:
- name: train
num_bytes: 1614251.0
num_examples: 150
download_size: 1616823
dataset_size: 1614251.0
---
# Dataset Card for "gym_equipment_dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 613 | [
[
-0.035186767578125,
-0.007175445556640625,
-0.00023186206817626953,
0.006435394287109375,
-0.018524169921875,
-0.011322021484375,
0.03594970703125,
-0.0012273788452148438,
0.05035400390625,
0.019073486328125,
-0.062286376953125,
-0.048553466796875,
-0.0399780273... |