id stringlengths 2 115 | lastModified stringlengths 24 24 | tags list | author stringlengths 2 42 ⌀ | description stringlengths 0 6.67k ⌀ | citation stringlengths 0 10.7k ⌀ | likes int64 0 3.66k | downloads int64 0 8.89M | created timestamp[us] | card stringlengths 11 977k | card_len int64 11 977k | embeddings list |
|---|---|---|---|---|---|---|---|---|---|---|---|
BrunoHays/ESLO_text_only | 2023-07-31T06:50:48.000Z | [
"license:cc-by-nc-4.0",
"region:us"
] | BrunoHays | ESLO dataset; each utterance is taken out individually | @misc{11403/eslo/v1,
title = {ESLO},
author = {LLL},
url = {https://hdl.handle.net/11403/eslo/v1},
note = {{ORTOLANG} ({Open} {Resources} {and} {TOols} {for} {LANGuage}) \textendash www.ortolang.fr},
copyright = {Licence Creative Commons Attribution - Pas d'Utilisation Commerciale - Partage dans les Mêmes Conditions 4.0 International},
year = {2023}
} | 0 | 11 | 2023-07-31T06:06:34 | ---
license: cc-by-nc-4.0
---
Eshkol-Taravella I., Baude O., Maurel D., Hriba L., Dugua C., Tellier I., (2012), Un grand corpus oral « disponible » : le corpus d’Orléans 1968-2012., in Ressources linguistiques libres, TAL. Volume 52 – n° 3/2011, 17-46 Laboratoire Ligérien de Linguistique - UMR 7270 (LLL) (2023). ESLO [Corpus]. ORTOLANG (Open Resources and TOols for LANGuage) - www.ortolang.fr, v1, https://hdl.handle.net/11403/eslo/v1. | 438 | [
[
-0.0158843994140625,
-0.058624267578125,
0.040802001953125,
0.01788330078125,
-0.0151824951171875,
0.004364013671875,
-0.01041412353515625,
-0.034637451171875,
0.053863525390625,
0.04937744140625,
-0.003520965576171875,
-0.0517578125,
-0.03759765625,
0.02507... |
TibetanAI/TibetanAI_NERv1.0 | 2023-08-03T02:18:55.000Z | [
"language:bo",
"license:apache-2.0",
"region:us"
] | TibetanAI | null | null | 0 | 11 | 2023-08-03T01:54:29 | ---
license: apache-2.0
language:
- bo
---
# Dataset Card for TibetanAI_NERv1.0
## Dataset Description
TibetanAI_NERv1.0 is a Tibetan named entity recognition (NER) dataset.
- **Paper:** 基于小样本学习的藏文命名实体识别 (Tibetan Named Entity Recognition Based on Few-Shot Learning)
### Languages
Tibetan
### Licensing Information
apache-2.0
### Citation Information
于韬,张英,拥措.基于小样本学习的藏文命名实体识别[J].计算机与现代化,2023(05):13-19.
### Contributions
Title (题名): 基于小样本学习的藏文命名实体识别 (Tibetan Named Entity Recognition Based on Few-Shot Learning)
Author (作者): 于韬;张英;拥措
Organization (单位): 西藏大学信息科学技术学院;西藏大学西藏自治区藏文信息技术人工智能重点实验室;西藏大学藏文信息技术教育部工程研究中心
| 471 | [
[
-0.006076812744140625,
-0.042144775390625,
-0.0224609375,
0.020416259765625,
-0.04034423828125,
-0.0357666015625,
-0.01390838623046875,
-0.01274871826171875,
0.0306854248046875,
0.01038360595703125,
-0.00969696044921875,
-0.0312347412109375,
-0.027191162109375,
... |
pykeio/oshichats-v1-2308 | 2023-09-06T23:07:19.000Z | [
"task_categories:text-classification",
"task_categories:conversational",
"task_categories:text-generation",
"task_categories:token-classification",
"annotations_creators:crowdsourced",
"language_creators:found",
"size_categories:1M<n<10M",
"language:en",
"license:cc-by-nc-sa-4.0",
"livestream",
... | pykeio | null | null | 2 | 11 | 2023-08-03T14:24:05 | ---
license: cc-by-nc-sa-4.0
task_categories:
- text-classification
- conversational
- text-generation
- token-classification
annotations_creators:
- crowdsourced
language_creators:
- found
language:
- en
tags:
- livestream
- stream
- chat
- messages
- vtuber
- vtubers
pretty_name: OSHIChats v1
size_categories:
- 1M<n<10M
---
## OSHIChats v1 (August 2023)
OSHIChats v1 is a dataset of 8.06 million high-quality filtered English chat messages collected from various [VTuber](https://en.wikipedia.org/wiki/VTuber) live streams.
Compared to our previous dataset, [pykeio/vtuber-chats-2023-filtered-en-8.7M](https://huggingface.co/datasets/pykeio/vtuber-chats-2023-filtered-en-8.7M), we make the following improvements:
- Include stream topic information
- Far more accurate nickname detection using NLP
- Previously we did not match names like "dad" (nickname for Mori Calliope) or "mom" (nickname for Nina Kosaka) because they were too general. Now, we analyze the context and other information about the stream to determine whether to match such nicknames.
- Detect and normalize fan names like takodachi or pentomo
## Usage
Once you gain access to the dataset, you'll also need to log in to Hugging Face CLI with `huggingface-cli login`.
```py
from datasets import load_dataset
chats_dataset = load_dataset('pykeio/oshichats-v1-2308', split='train', revision='refs/convert/parquet')
chats_dataset[0]
# {'liver': 'FgXWZOUZA2oYHNr6qDmsTQ', 'stream': {'id': 'JHBv4BA_Y84', 'topic': 'Twisted_Wonderland'}, 'is_super': False, 'message': "i think i've grown to dislike them ", 'author': 'chxrry_head', 'time': [1660106235135797, 2126652]}
```
## Samples
```json
{
"liver": "kieJGn3pgJikVW8gmMXE2w",
"stream": {
"id": "dMUhbAcI5gk",
"topic": "minecraft"
},
"is_super": false,
"message": "yay <|liver:bW9t|> is streaming while I'm awake!",
"author": "Redribbon Vicky",
"time": [1651976493761550, 44936]
}
{
"liver": "yl1z3jo3XHR1riLFKG5UAg",
"stream": {
"id": "TgEX7HFqTYc",
"topic": "Donkey_Kong"
},
"is_super": false,
"message": "Stop running <|liver:QW1l|><|:ameHeh:|><|:ameHeh:|><|:ameHeh:|>",
"author": "Anon",
"time": [1616291612238864, 889273]
}
```
## Data fields
- `liver`: ID of the YouTube channel hosting the stream which the chat message came from.
- `stream`: Information about the stream.
- `id`: Video ID of the YouTube stream.
- `topic`: Topic of the stream (or `null` if a topic could not be determined). This can be things like `talk`, `Minecraft`, `Singing`, `GTA`, `Asmr`, etc.
- `is_super`: Whether or not the message is a Superchat (donation).
- `message`: Contents of the message. For consistency and ease of use on downstream tasks, we replace certain words with easily matchable special tokens:
* `<|liver:{b64}|>`: The substring refers to the host of the stream.
* `<|liver-fans:{b64}|>`: The substring refers to a nickname given to the fanbase of the host of the stream, e.g. aloupeeps or takodachis.
* `<|known-collaborator:{channelID}:{b64}|>`: The substring refers to a fellow VTuber that is present in the stream.
* `<|maybe-collaborator:{channelID}:{b64}|>`: The substring refers to a fellow VTuber that may or may not be part of the stream.
* `<|collaborator-fans:{channelID}:{b64}|>`: The substring refers to the fanbase of a collaborator present in the stream.
* `<|:{emote}:|>`: Represents a channel emote.
* Note that `channelID` is a YouTube channel ID, and `b64` is the original substring encoded as base64.
- `author`: The username of the author.
- `time`: A tuple containing the Unix timestamp of when the message was sent, and the relative time since the start of the stream.
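The base64 payloads in the special tokens above can be expanded back into plain text for downstream processing. Below is a minimal sketch (our own helper, not part of the dataset tooling; the regex and function name are assumptions) that decodes them:
```py
import base64
import re

# Matches the person/fan tokens described above, e.g. <|liver:bW9t|> or
# <|known-collaborator:UCxxxx:aGVsbG8=|>. Emote tokens (<|:name:|>) are left untouched.
TOKEN_RE = re.compile(
    r"<\|(liver|liver-fans|known-collaborator|maybe-collaborator|collaborator-fans)"
    r"(?::([^:|]+))?:([A-Za-z0-9+/=]+)\|>"
)

def expand_special_tokens(message: str) -> str:
    """Replace each special token with its base64-decoded original substring."""
    def _decode(match):
        b64 = match.group(3)
        b64 += "=" * (-len(b64) % 4)  # restore padding if it was stripped
        return base64.b64decode(b64).decode("utf-8", errors="replace")
    return TOKEN_RE.sub(_decode, message)

print(expand_special_tokens("yay <|liver:bW9t|> is streaming while I'm awake!"))
# -> yay mom is streaming while I'm awake!
```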
## License
Licensed under [CC BY-NC-SA 4.0](https://creativecommons.org/licenses/by-nc-sa/4.0/); you must give attribution, you may not use the dataset for commercial purposes, and you must distribute any transformations or copies of the dataset under the same license. [Contact us](mailto:contact@pyke.io) for alternative/commercial licensing. | 4,030 | [
[
-0.04779052734375,
-0.0677490234375,
0.003078460693359375,
0.01171112060546875,
-0.037994384765625,
0.01450347900390625,
-0.024688720703125,
-0.0222015380859375,
0.071533203125,
0.03131103515625,
-0.0816650390625,
-0.038818359375,
-0.05889892578125,
0.001553... |
Tarklanse/Traditional_Chinese_roleplay_chat_Dataset | 2023-09-07T12:27:06.000Z | [
"task_categories:text-generation",
"task_categories:text2text-generation",
"language:zh",
"license:cc-by-sa-4.0",
"region:us"
] | Tarklanse | null | null | 7 | 11 | 2023-08-13T01:40:43 | ---
task_categories:
- text-generation
- text2text-generation
language:
- zh
license: cc-by-sa-4.0
---
# Traditional_Chinese_roleplay_chat_Dataset
This dataset is primarily in Traditional Chinese. It collects conversations generated by ChatGPT, together with a very small portion written by me, and organizes them into the alpaca dataset format.
Each conversation log is split into several records by stacking the turns layer by layer (about 1,000 conversations in total); a small illustrative sketch of this stacking follows below. In a few trial training runs this allowed llama2 to reproduce the lively conversational style of the original English data while keeping its ability to play a wide range of characters.
I have currently built a LoRA from this dataset.
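The layered stacking can be pictured as follows; this is purely an illustrative sketch (our own, with hypothetical field contents), not code shipped with the dataset:
```python
# Split one conversation log into several alpaca-format records, each carrying
# the dialogue history accumulated so far (the "layer by layer" stacking above).
def stack_conversation(turns, instruction="Roleplay as the character."):
    records, history = [], []
    for user_msg, bot_msg in turns:
        records.append({
            "instruction": instruction,
            "input": "\n".join(history + [f"User: {user_msg}"]),
            "output": bot_msg,
        })
        history += [f"User: {user_msg}", f"Character: {bot_msg}"]
    return records

print(len(stack_conversation([("Hello", "Hi!"), ("How are you?", "Great.")])))  # 2
```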
Update 2023/09/07
Added some Chinese-English translation sentences to the dataset, in the hope that the AI can describe its actions in better prose, and added some food-related conversations to lower the chance of the AI producing strange food names.
| 413 | [
[
-0.0202484130859375,
-0.08111572265625,
0.0004591941833496094,
0.0299835205078125,
-0.04345703125,
-0.027435302734375,
0.015716552734375,
-0.0250091552734375,
0.0313720703125,
0.0589599609375,
-0.037200927734375,
-0.054107666015625,
-0.046783447265625,
-0.01... |
imvladikon/QAmeleon | 2023-08-13T19:36:48.000Z | [
"task_categories:question-answering",
"size_categories:10K<n<100K",
"language:ar",
"language:bn",
"language:fi",
"language:id",
"language:ko",
"language:ru",
"language:sw",
"language:te",
"license:cc-by-4.0",
"arxiv:2211.08264",
"region:us"
] | imvladikon | null | null | 0 | 11 | 2023-08-13T19:29:03 | ---
language:
- ar
- bn
- fi
- id
- ko
- ru
- sw
- te
license: cc-by-4.0
size_categories:
- 10K<n<100K
task_categories:
- question-answering
dataset_info:
- config_name: ar
features:
- name: language
dtype: string
- name: question
dtype: string
- name: answer
dtype: string
- name: passage
dtype: string
splits:
- name: train
num_bytes: 4773335
num_examples: 6966
download_size: 0
dataset_size: 4773335
- config_name: bn
features:
- name: language
dtype: string
- name: question
dtype: string
- name: answer
dtype: string
- name: passage
dtype: string
splits:
- name: train
num_bytes: 6458441
num_examples: 6084
download_size: 0
dataset_size: 6458441
- config_name: default
features:
- name: language
dtype: string
- name: question
dtype: string
- name: answer
dtype: string
- name: passage
dtype: string
splits:
- name: train
num_bytes: 32190633
num_examples: 47173
download_size: 16811173
dataset_size: 32190633
- config_name: fi
features:
- name: language
dtype: string
- name: question
dtype: string
- name: answer
dtype: string
- name: passage
dtype: string
splits:
- name: train
num_bytes: 2158030
num_examples: 5028
download_size: 0
dataset_size: 2158030
- config_name: id
features:
- name: language
dtype: string
- name: question
dtype: string
- name: answer
dtype: string
- name: passage
dtype: string
splits:
- name: train
num_bytes: 2635540
num_examples: 6797
download_size: 0
dataset_size: 2635540
- config_name: ko
features:
- name: language
dtype: string
- name: question
dtype: string
- name: answer
dtype: string
- name: passage
dtype: string
splits:
- name: train
num_bytes: 5074624
num_examples: 6471
download_size: 0
dataset_size: 5074624
- config_name: ru
features:
- name: language
dtype: string
- name: question
dtype: string
- name: answer
dtype: string
- name: passage
dtype: string
splits:
- name: train
num_bytes: 3952632
num_examples: 5557
download_size: 0
dataset_size: 3952632
- config_name: sw
features:
- name: language
dtype: string
- name: question
dtype: string
- name: answer
dtype: string
- name: passage
dtype: string
splits:
- name: train
num_bytes: 2113909
num_examples: 5597
download_size: 0
dataset_size: 2113909
- config_name: te
features:
- name: language
dtype: string
- name: question
dtype: string
- name: answer
dtype: string
- name: passage
dtype: string
splits:
- name: train
num_bytes: 5024122
num_examples: 4673
download_size: 0
dataset_size: 5024122
configs:
- config_name: ar
data_files:
- split: train
path: ar/train-*
- config_name: bn
data_files:
- split: train
path: bn/train-*
- config_name: default
data_files:
- split: train
path: data/train-*
- config_name: fi
data_files:
- split: train
path: fi/train-*
- config_name: id
data_files:
- split: train
path: id/train-*
- config_name: ko
data_files:
- split: train
path: ko/train-*
- config_name: ru
data_files:
- split: train
path: ru/train-*
- config_name: sw
data_files:
- split: train
path: sw/train-*
- config_name: te
data_files:
- split: train
path: te/train-*
---
# Dataset Card for "QAmeleon"
QAmeleon introduces synthetic multilingual QA data in 8 languages, generated with PaLM-540B, a large language model. The dataset was produced by prompt tuning PaLM with only five examples per language. We use the synthetic data to fine-tune downstream QA models, leading to improved accuracy in comparison to English-only and translation-based baselines.
Data available at https://storage.googleapis.com/qameleon/qamelon_pt_accepted.csv
More details can be found in the [QAmeleon: Multilingual QA with Only 5 Examples](https://arxiv.org/abs/2211.08264) which can be cited as follows:
```
@misc{agrawal2022qameleon,
title={QAmeleon: Multilingual QA with Only 5 Examples},
author={Priyanka Agrawal and Chris Alberti and Fantine Huot and Joshua Maynez and Ji Ma and Sebastian Ruder and Kuzman Ganchev and Dipanjan Das and Mirella Lapata},
year={2022},
eprint={2211.08264},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
This dataset contains a total of 47173 question-answer instances across 8 languages; the per-language counts are listed below, followed by a minimal loading sketch.
|Language | Count |
|---------|------:|
|ar |6966 |
|bn |6084 |
|fi |5028 |
|id |6797 |
|ko |6471 |
|ru |5557 |
|sw |5597 |
|te |4673 |
|**Total** |**47173**|
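Each language is exposed as its own configuration (see the YAML metadata above), with `default` bundling all eight. A minimal loading sketch, assuming the `datasets` library:
```python
from datasets import load_dataset

# Load the Arabic subset; swap "ar" for any language code in the table above,
# or use "default" to load all languages at once.
arabic_qa = load_dataset("imvladikon/QAmeleon", "ar", split="train")
example = arabic_qa[0]
print(example["question"], "->", example["answer"])
```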
The QAmeleon dataset is released under the [CC-BY 4.0](https://creativecommons.org/licenses/by/4.0/) license.
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 4,983 | [
[
-0.038909912109375,
-0.041351318359375,
0.0152435302734375,
0.00920867919921875,
-0.0103759765625,
0.00868988037109375,
-0.012481689453125,
-0.03399658203125,
0.0283050537109375,
0.042938232421875,
-0.040985107421875,
-0.05291748046875,
-0.006786346435546875,
... |
KaraKaraWitch/PIPPA-ShareGPT-formatted | 2023-08-14T08:46:26.000Z | [
"task_categories:conversational",
"size_categories:10K<n<100K",
"language:en",
"license:agpl-3.0",
"not-for-all-audiences",
"conversational",
"roleplay",
"custom-format",
"a.",
"arxiv:2308.05884",
"region:us"
] | KaraKaraWitch | null | null | 2 | 11 | 2023-08-14T08:42:53 | ---
license: agpl-3.0
task_categories:
- conversational
language:
- en
tags:
- not-for-all-audiences
- conversational
- roleplay
- custom-format
- a.
pretty_name: PIPPA - Personal Interaction Pairs Between People and AI
size_categories:
- 10K<n<100K
viewer: false
---
# KaraKaraWitch/PIPPA-IHaveNeverFeltNeedToSend
```
I've never felt the need to send a photo of my <REDACTED>
To a stranger on the Internet
```
The following is the original description for PIPPA. [Consider downloading the original dataset over here!](https://huggingface.co/datasets/PygmalionAI/PIPPA)
---
# PIPPA - Personal Interaction Pairs between People and AI
It's been a long time coming, but we're proud to finally release the public portion of our conversational dataset to the public. **Personal Interaction Pairs between People and AI** (**PIPPA**) is a partially synthetic, community contributed and open-source conversational and roleplaying dataset generated from a subset of submitted logs to the Pygmalion project.
This dataset is a subset of what we have received - it consists only of the valid conversational logs in which the submitter gave consent to redistribute to the public. Furthermore, we have done our best to redact or modify any personal information that could potentially be found within PIPPA. If you have found something within PIPPA which has not been redacted properly, please contact us via email at `teargosling@pygmalion.chat` or `alpindale@pygmalion.chat` and we'll take care of it for you. You may contact us for any other purpose as well, including yelling at us about when the next model will be released.
**⚠️ CAUTION: PIPPA contains conversations, themes and scenarios which can be considered "not safe for work" (NSFW) and/or heavily disturbing in nature. Models trained purely with PIPPA may have the tendency to generate X-rated output. You have been warned.**
## Dataset Summary
PIPPA consists of just a little more than 1 million lines of dialogue spread out over 26,000 conversations between users of the popular chatbot website "Character.AI" and its large language model, obtained through a large community effort taking place over the course of several months. Tallying shows that over 1,000 unique personas simulating both real and fictional characters are represented within the dataset, allowing PIPPA and LLMs fine-tuned on it to adapt to many different roleplay domains.
The dataset is represented with a JSONL file, with a singular JSON snippet representing one entire conversation. Every snippet contains the following pieces of data:
- `submission_timestamp`: The Unix timestamp of when this particular conversation was submitted to the project, in milliseconds.
- `categories`: The categories assigned to the character on the Character.AI website, if any were assigned. If no categories were assigned, it will be `null`
- `bot_id`: The unique ID assigned to the specific character which the user was conversing with on the website.
- `bot_name`: The name of the character.
- `bot_greeting`: The introductory line of the character to the user. This is always the first utterance of dialogue in a conversation.
- `bot_definitions`: Contains whatever was typed in the **Definitions** field in the character creator on the website. This usually consists of one or more example conversations between the user and the character designed to steer the model towards emulating the persona correctly. Bot definitions required a separate effort to gather, and thus may not be present for a specific persona - if this is the case, an empty string is provided. Because the definitions were written on Character.AI, this field usually follows Character.AI's unique formatting and should be preprocessed before feeding into any model - please see **Appendix A** of the paper for further details.
- `bot_description`: Contains whatever was typed in the **Description** field in the character creator on the website. It usually consists of a few sentences which gives a brief overview of the character and any important details about them.
- `conversation`: The conversation between the user and the model. This is represented as a list of dictionaries, each dictionary representing a single utterance and containing two key-value pairs: `message`, referring to the utterance itself and `is_human`, which designates whether the dialogue was generated by the user or the LLM.
For further information about PIPPA, please refer to our [published paper](https://arxiv.org/abs/2308.05884) or contact us at the emails listed above.
## Files
We publish PIPPA in multiple variants, each a singular JSONL file:
- **pippa.jsonl**: The original dataset, almost exactly as submitted to us (barring any modifications resulting from the redaction of personally identifiable information).
- **pippa_deduped.jsonl**: The 'cleaned' version of PIPPA, with duplicate conversations as well as any conversation with less than three turns removed from the dataset. **We recommend using this file.**
- **pippa_metharme.jsonl**: A version of deduped PIPPA which is formatted in a similar way to our [Metharme instructional models](https://huggingface.co/PygmalionAI/metharme-13b), useful as an example to demonstrate how to properly format the PIPPA dataset.
If you are using HuggingFace's `datasets` library, you can choose the file you wish to use by specifying the name of it (without extension) as an argument, like so: `dataset = load_dataset("PygmalionAI/PIPPA", 'pippa_deduped')`. The default value is `pippa_deduped`.
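As a quick illustration of the fields described above, here is a minimal sketch (our own, not official tooling) that flattens one conversation into alternating speaker lines; it assumes the deduped variant and a `train` split:
```python
from datasets import load_dataset

# Minimal sketch: walk the `conversation` field (a list of {message, is_human})
# of a single PIPPA example and print it as alternating speaker lines.
pippa = load_dataset("PygmalionAI/PIPPA", "pippa_deduped", split="train")

example = pippa[0]
bot_name = example["bot_name"]
lines = []
for turn in example["conversation"]:
    speaker = "User" if turn["is_human"] else bot_name
    lines.append(f"{speaker}: {turn['message']}")
print("\n".join(lines))
```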
Thank you for your patience, everyone!
## Citation
If you're using our dataset, please consider citing our work:
```bibtex
@misc{gosling2023pippa,
title={PIPPA: A Partially Synthetic Conversational Dataset},
author={Tear Gosling and Alpin Dale and Yinhe Zheng},
year={2023},
eprint={2308.05884},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
___
Any relationship between the name of this dataset and any public personas is entirely and totally coincidental.
[
-0.020660400390625,
-0.058013916015625,
0.009918212890625,
0.0304107666015625,
-0.00689697265625,
-0.0132293701171875,
-0.005596160888671875,
-0.038421630859375,
0.036224365234375,
0.055633544921875,
-0.04559326171875,
-0.0280303955078125,
-0.035888671875,
0... |
RikoteMaster/Emotion_Recognition_4_llama2 | 2023-08-15T11:31:41.000Z | [
"region:us"
] | RikoteMaster | null | null | 2 | 11 | 2023-08-14T10:44:03 | ---
dataset_info:
features:
- name: Text_processed
dtype: string
- name: Emotion
dtype: string
- name: Augmented
dtype: bool
- name: text
dtype: string
splits:
- name: train
num_bytes: 23956262
num_examples: 61463
download_size: 8510226
dataset_size: 23956262
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "Emotion_Recognition_4_llama2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 574 | [
[
-0.034912109375,
0.0075531005859375,
0.025146484375,
0.035797119140625,
-0.0293121337890625,
0.008636474609375,
0.0232391357421875,
-0.0273895263671875,
0.06231689453125,
0.01174163818359375,
-0.05206298828125,
-0.048583984375,
-0.054779052734375,
0.00430297... |
usernamedesu/pyg_dataset_markdown | 2023-08-17T16:19:57.000Z | [
"region:us"
] | usernamedesu | null | null | 0 | 11 | 2023-08-16T16:25:27 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
Linhz/qg_viquad | 2023-08-24T16:20:25.000Z | [
"region:us"
] | Linhz | null | null | 0 | 11 | 2023-08-22T09:36:41 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
duwuonline/UIT-VSMEC | 2023-08-28T09:14:35.000Z | [
"task_categories:text-classification",
"language:vi",
"license:other",
"sentiment",
"classificati",
"region:us"
] | duwuonline | null | null | 0 | 11 | 2023-08-28T09:03:57 | ---
license: other
language:
- vi
tags:
- sentiment
- classificati
task_categories:
- text-classification
---
## Dataset description
This dataset comes from UIT (University of Information Technology).
It contains 7 classes: 'Other', 'Disgust', 'Enjoyment', 'Anger', 'Surprise', 'Sadness', 'Fear'.
## Contributions
Thanks to ViDataset - Vietnamese Datasets for Natural Language Processing for sharing this dataset.
| 401 | [
[
-0.010650634765625,
-0.03741455078125,
0.0098876953125,
-0.0007538795471191406,
-0.0281524658203125,
-0.0015821456909179688,
0.0296478271484375,
-0.006328582763671875,
-0.007411956787109375,
0.053985595703125,
-0.027801513671875,
-0.04156494140625,
-0.0281829833... |
imoxto/prompt_injection_hackaprompt_gpt35 | 2023-08-29T13:21:20.000Z | [
"region:us"
] | imoxto | null | null | 0 | 11 | 2023-08-29T13:21:17 | ---
dataset_info:
features:
- name: labels
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 271856355
num_examples: 227042
download_size: 35972535
dataset_size: 271856355
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "prompt_injection_hackaprompt_gpt35"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 503 | [
[
-0.04266357421875,
-0.0256195068359375,
0.0325927734375,
0.0170440673828125,
-0.0157318115234375,
0.013031005859375,
0.0287017822265625,
-0.003032684326171875,
0.03167724609375,
0.024322509765625,
-0.045196533203125,
-0.04998779296875,
-0.0360107421875,
-0.0... |
rohanbalkondekar/HealthCare | 2023-09-01T08:30:48.000Z | [
"region:us"
] | rohanbalkondekar | null | null | 0 | 11 | 2023-09-01T08:30:18 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
aqubed/kub_tickets_small | 2023-09-04T23:08:41.000Z | [
"region:us"
] | aqubed | null | null | 0 | 11 | 2023-09-04T22:58:08 | ---
dataset_info:
features:
- name: number
dtype: int64
- name: title
dtype: string
- name: state
dtype: string
- name: created_at
dtype: string
- name: updated_at
dtype: string
- name: closed_at
dtype: string
- name: assignees
sequence: string
- name: labels
sequence: string
- name: reporter
dtype: string
- name: comments
list:
- name: body
dtype: string
- name: created_at
dtype: string
- name: events
list:
- name: author
dtype: string
- name: created_at
dtype: string
- name: type
dtype: string
splits:
- name: train
num_bytes: 5967498
num_examples: 1099
download_size: 1380020
dataset_size: 5967498
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "kub_tickets_small"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 1,001 | [
[
-0.03814697265625,
-0.012664794921875,
0.02557373046875,
0.01605224609375,
-0.0209197998046875,
-0.0147247314453125,
0.01465606689453125,
0.0004165172576904297,
0.057769775390625,
0.03533935546875,
-0.052093505859375,
-0.045684814453125,
-0.0224456787109375,
... |
mtkinit/testAR | 2023-09-19T14:05:33.000Z | [
"region:us"
] | mtkinit | null | null | 0 | 11 | 2023-09-07T16:56:48 | ---
pretty_name: testAR
---
# testAR
Created from AIOD platform | 63 | [
[
-0.0302581787109375,
-0.0015478134155273438,
-0.0004439353942871094,
0.021087646484375,
-0.0338134765625,
0.053741455078125,
0.04034423828125,
-0.01776123046875,
0.0295257568359375,
0.0287628173828125,
-0.0033473968505859375,
-0.004547119140625,
-0.03125,
-0... |
rukkuhru/LoRAData | 2023-09-26T06:08:59.000Z | [
"region:us"
] | rukkuhru | null | null | 0 | 11 | 2023-09-13T07:28:32 | ---
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 11257.0
num_examples: 5
download_size: 23185
dataset_size: 11257.0
---
# Dataset Card for "LoRAData"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 378 | [
[
-0.0433349609375,
-0.0322265625,
0.0160064697265625,
0.0238037109375,
-0.0260162353515625,
-0.006679534912109375,
0.0292205810546875,
-0.0259857177734375,
0.0849609375,
0.047821044921875,
-0.051055908203125,
-0.054046630859375,
-0.040740966796875,
-0.0252227... |
deven367/babylm-10M-cbt | 2023-09-15T17:06:48.000Z | [
"region:us"
] | deven367 | null | null | 0 | 11 | 2023-09-15T17:06:43 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: valid
path: data/valid-*
- split: test
path: data/test-*
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 2705697
num_examples: 26000
- name: valid
num_bytes: 1220938
num_examples: 12747
- name: test
num_bytes: 1578682
num_examples: 16646
download_size: 3370383
dataset_size: 5505317
---
# Dataset Card for "babylm-10M-cbt"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 646 | [
[
-0.03814697265625,
-0.0216522216796875,
-0.0015497207641601562,
0.0275421142578125,
-0.03265380859375,
0.005886077880859375,
0.0220489501953125,
-0.008331298828125,
0.04754638671875,
0.0310821533203125,
-0.066162109375,
-0.051055908203125,
-0.046051025390625,
... |
Surajsangwan90/NZTA | 2023-09-17T01:49:24.000Z | [
"region:us"
] | Surajsangwan90 | null | null | 0 | 11 | 2023-09-15T20:38:09 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
HLaci/RaftSub | 2023-09-18T13:03:43.000Z | [
"benchmark:raft",
"region:us"
] | HLaci | @InProceedings{huggingface:dataset,
title = {A great new dataset},
author={huggingface, Inc.
},
year={2020}
} | 0 | 11 | 2023-09-16T15:21:47 | ---
benchmark: raft
type: prediction
submission_name: SetFitBase
---
# RAFT submissions for RaftSub
## Submitting to the leaderboard
To make a submission to the [leaderboard](https://huggingface.co/spaces/ought/raft-leaderboard), there are three main steps:
1. Generate predictions on the unlabeled test set of each task
2. Validate the predictions are compatible with the evaluation framework
3. Push the predictions to the Hub!
See the instructions below for more details.
### Rules
1. To prevent overfitting to the public leaderboard, we only evaluate **one submission per week**. You can push predictions to the Hub as many times as you wish, but we will only evaluate the most recent commit in a given week.
2. Transfer or meta-learning using other datasets, including further pre-training on other corpora, is allowed.
3. Use of unlabeled test data is allowed, as it is always available in the applied setting. For example, further pre-training using the unlabeled data for a task would be permitted.
4. Systems may be augmented with information retrieved from the internet, e.g. via automated web searches.
### Submission file format
For each task in RAFT, you should create a CSV file called `predictions.csv` with your model's predictions on the unlabeled test set. Each file should have exactly 2 columns:
* ID (int)
* Label (string)
See the dummy predictions in the `data` folder for examples with the expected format. Here is a simple example that creates a majority-class baseline:
```python
from pathlib import Path
import pandas as pd
from collections import Counter
from datasets import load_dataset, get_dataset_config_names
tasks = get_dataset_config_names("ought/raft")
for task in tasks:
# Load dataset
raft_subset = load_dataset("ought/raft", task)
# Compute majority class over training set
counter = Counter(raft_subset["train"]["Label"])
majority_class = counter.most_common(1)[0][0]
# Load predictions file
preds = pd.read_csv(f"data/{task}/predictions.csv")
# Convert label IDs to label names
preds["Label"] = raft_subset["train"].features["Label"].int2str(majority_class)
# Save predictions
preds.to_csv(f"data/{task}/predictions.csv", index=False)
```
As you can see in the example, each `predictions.csv` file should be stored in the task's subfolder in `data` and at the end you should have something like the following:
```
data
├── ade_corpus_v2
│ ├── predictions.csv
│ └── task.json
├── banking_77
│ ├── predictions.csv
│ └── task.json
├── neurips_impact_statement_risks
│ ├── predictions.csv
│ └── task.json
├── one_stop_english
│ ├── predictions.csv
│ └── task.json
├── overruling
│ ├── predictions.csv
│ └── task.json
├── semiconductor_org_types
│ ├── predictions.csv
│ └── task.json
├── systematic_review_inclusion
│ ├── predictions.csv
│ └── task.json
├── tai_safety_research
│ ├── predictions.csv
│ └── task.json
├── terms_of_service
│ ├── predictions.csv
│ └── task.json
├── tweet_eval_hate
│ ├── predictions.csv
│ └── task.json
└── twitter_complaints
├── predictions.csv
└── task.json
```
### Validate your submission
To ensure that your submission files are correctly formatted, run the following command from the root of the repository:
```
python cli.py validate
```
If everything is correct, you should see the following message:
```
All submission files validated! ✨ 🚀 ✨
Now you can make a submission 🤗
```
### Push your submission to the Hugging Face Hub!
The final step is to commit your files and push them to the Hub:
```
python cli.py submit
```
If there are no errors, you should see the following message:
```
Submission successful! 🎉 🥳 🎉
Your submission will be evaluated on Sunday 05 September 2021 ⏳
```
where the evaluation is run every Sunday and your results will be visible on the leaderboard. | 3,873 | [
[
-0.0297088623046875,
-0.038604736328125,
0.016204833984375,
0.037811279296875,
-0.0036754608154296875,
-0.011383056640625,
-0.0220184326171875,
-0.0165557861328125,
0.0236968994140625,
0.03289794921875,
-0.05010986328125,
-0.048614501953125,
-0.04510498046875,
... | |
asun17904/imdb-test | 2023-09-17T16:15:11.000Z | [
"region:us"
] | asun17904 | null | null | 0 | 11 | 2023-09-17T16:15:03 | ---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': neg
'1': pos
splits:
- name: test
num_bytes: 19590411.0
num_examples: 15000
download_size: 12828803
dataset_size: 19590411.0
---
# Dataset Card for "imdb-test"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 457 | [
[
-0.06549072265625,
-0.01873779296875,
-0.0003108978271484375,
0.003162384033203125,
-0.020904541015625,
0.00939178466796875,
0.0229644775390625,
-0.004192352294921875,
0.06378173828125,
0.0286102294921875,
-0.067626953125,
-0.03857421875,
-0.044677734375,
-0... |
Loie/Auto-ACD | 2023-09-20T12:53:29.000Z | [
"region:us"
] | Loie | null | null | 7 | 11 | 2023-09-18T08:24:55 |
# Auto-ACD
Auto-ACD is a large-scale, high-quality, audio-language dataset, building on the prior of robust audio-visual correspondence in existing video datasets, VGGSound and AudioSet.
- **Homepage:** https://auto-acd.github.io/
- **Paper:**
- **Github:** https://github.com/LoieSun/Auto-ACD
## Analysis

Auto-ACD comprises over **1.9M** audio-text pairs.
As shown in the figure, the text descriptions in Auto-ACD contain **long texts (18 words)** and **diverse vocabularies (23K)**, and provide information about the **surrounding auditory environment** (data points with **shadow**) in which the sounds take place.
## Download
We provide a CSV file. For each data pair, we provide the YouTube URL and the generated caption. Each line in the CSV file has the columns defined below.
```
# YouTube ID, caption
```
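A minimal reading sketch (the file name and column names below are assumptions, since the card only states that each line holds a YouTube ID and a caption):
```python
import pandas as pd

# Hypothetical file name; replace with the CSV provided in this repository.
pairs = pd.read_csv("auto_acd.csv", names=["youtube_id", "caption"])
for youtube_id, caption in pairs.head().itertuples(index=False):
    # Reconstruct a watchable URL from the YouTube ID.
    print(f"https://www.youtube.com/watch?v={youtube_id}\t{caption}")
```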
## Dataset Preview

| 950 | [
[
-0.056365966796875,
-0.051605224609375,
0.00632476806640625,
0.0200958251953125,
-0.01013946533203125,
0.020660400390625,
-0.025177001953125,
-0.0167999267578125,
0.0290374755859375,
0.0211334228515625,
-0.05194091796875,
-0.0574951171875,
-0.03924560546875,
... |
mirfan899/urdu-ner | 2023-09-18T17:57:31.000Z | [
"region:us"
] | mirfan899 | null | null | 0 | 11 | 2023-09-18T17:57:06 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
dataset_info:
features:
- name: id
dtype: string
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': TIME
'1': PERSON
'2': ORGANIZATION
'3': O
'4': NUMBER
'5': LOCATION
'6': DESIGNATION
'7': DATE
splits:
- name: train
num_bytes: 12556540
num_examples: 18172
- name: validation
num_bytes: 5412660
num_examples: 7788
- name: test
num_bytes: 5412660
num_examples: 7788
download_size: 4173687
dataset_size: 23381860
---
# Dataset Card for "urdu-ner"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 938 | [
[
-0.034698486328125,
-0.01120758056640625,
-0.004486083984375,
0.0268707275390625,
-0.0065460205078125,
0.007556915283203125,
0.0079803466796875,
-0.004901885986328125,
0.059844970703125,
0.037109375,
-0.04522705078125,
-0.056396484375,
-0.06109619140625,
0.0... |
dim/law_stackexchange_prompts | 2023-09-21T21:00:28.000Z | [
"region:us"
] | dim | null | null | 0 | 11 | 2023-09-21T20:59:57 | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: solution
dtype: string
splits:
- name: train
num_bytes: 64447591
num_examples: 24343
download_size: 38111723
dataset_size: 64447591
---
# Dataset Card for "law_stackexchange_prompts"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 410 | [
[
-0.03192138671875,
-0.014190673828125,
0.021575927734375,
0.02227783203125,
-0.0233001708984375,
-0.016265869140625,
0.0237579345703125,
0.007213592529296875,
0.05072021484375,
0.041168212890625,
-0.05975341796875,
-0.05010986328125,
-0.0285186767578125,
-0.... |
Brecon/Train_Test | 2023-10-10T23:25:44.000Z | [
"region:us"
] | Brecon | null | null | 0 | 11 | 2023-09-21T22:50:18 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: label
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 195875.8617511521
num_examples: 173
- name: test
num_bytes: 49818.13824884793
num_examples: 44
download_size: 143188
dataset_size: 245694.0
---
# Dataset Card for "Train_Test"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 587 | [
[
-0.04742431640625,
-0.01531982421875,
0.0025539398193359375,
0.021697998046875,
-0.0025882720947265625,
-0.01061248779296875,
0.0125732421875,
-0.00543212890625,
0.045440673828125,
0.0163726806640625,
-0.05731201171875,
-0.032745361328125,
-0.032684326171875,
... |
dim/forum_uristov_rf_prompts | 2023-09-21T23:06:22.000Z | [
"region:us"
] | dim | null | null | 0 | 11 | 2023-09-21T23:06:19 | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: solution
dtype: string
- name: link
dtype: string
splits:
- name: train
num_bytes: 3043144
num_examples: 1849
download_size: 1343977
dataset_size: 3043144
---
# Dataset Card for "forum_uristov_rf_prompts"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 438 | [
[
-0.048797607421875,
-0.020721435546875,
0.0190582275390625,
0.0238494873046875,
-0.0198516845703125,
-0.005222320556640625,
0.01030731201171875,
0.0149993896484375,
0.050079345703125,
0.038818359375,
-0.0845947265625,
-0.061279296875,
-0.0177764892578125,
0.... |
linhtran92/infer_fix_76 | 2023-09-22T13:41:49.000Z | [
"region:us"
] | linhtran92 | null | null | 0 | 11 | 2023-09-22T13:41:16 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
TohidA/MONA | 2023-09-24T00:17:48.000Z | [
"task_categories:tabular-classification",
"task_categories:tabular-regression",
"region:us"
] | TohidA | null | null | 0 | 11 | 2023-09-23T21:02:13 | ---
dataset_name: MONA
dataset_type: tabular
task_categories: [tabular-classification, tabular-regression]
---
# MONA Arrangements Dataset
A publicly available dataset published here: https://www.imf.org/external/np/pdr/mona/QueryReportLabelsAndDescriptions.aspx
license: openrail
dataset_info:
features:
- name: Arrangement Number
dtype: int64
- name: Country Name
dtype: string
- name: Country Code
dtype: int64
- name: Arrangement Type
dtype: string
- name: Approval date
dtype: string
- name: Approval Year
dtype: int64
- name: Initial End Date
dtype: string
- name: Initial End Year
dtype: int64
- name: Revised End Date
dtype: string
- name: Duration Of Annual Arrangement From
dtype: string
- name: Duration Of Annual Arrangement To
dtype: string
- name: Board Action Date
dtype: string
- name: Program Type
dtype: string
- name: Review Type
dtype: string
- name: Review Status
dtype: string
- name: Key Code
dtype: string
- name: Economic Code
dtype: float64
- name: Economic Descriptor
dtype: string
- name: Description
dtype: string
- name: Description Code
dtype: int64
- name: Test Date
dtype: string
- name: PC Status
dtype: string
- name: Comments
dtype: string
- name: Sort
dtype: int64
- name: EsOrder
dtype: int64
- name: NewTestDate
dtype: string
- name: Added At
dtype: string
- name: Assessed At
dtype: string
- name: Unique ID
dtype: string
- name: Parent ID
dtype: string
splits:
- name: train
num_bytes: 25540700
num_examples: 48988
download_size: 0
dataset_size: 25540700
configs:
- config_name: default
data_files:
- split: train
path: data/train-* | 1,694 | [
[
-0.03594970703125,
-0.005596160888671875,
0.0255584716796875,
0.0266571044921875,
-0.02178955078125,
-0.02996826171875,
0.0143890380859375,
0.00905609130859375,
0.034637451171875,
0.059814453125,
-0.06494140625,
-0.05010986328125,
-0.0279998779296875,
0.0028... |
dim/povarenok | 2023-09-24T03:26:10.000Z | [
"region:us"
] | dim | null | null | 0 | 11 | 2023-09-24T03:25:59 | ---
dataset_info:
features:
- name: full_receipt_text
dtype: string
- name: steps
sequence: string
- name: title_receipt
dtype: string
- name: title
dtype: string
- name: ingridients
sequence: string
- name: views
dtype: int64
- name: likes
dtype: int64
- name: ups
dtype: int64
- name: link
dtype: string
splits:
- name: train
num_bytes: 176339660
num_examples: 46500
download_size: 49568770
dataset_size: 176339660
---
# Dataset Card for "povarenok"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 656 | [
[
-0.0380859375,
-0.017791748046875,
0.0170745849609375,
0.014068603515625,
-0.02984619140625,
-0.0053863525390625,
0.0245513916015625,
-0.005207061767578125,
0.060882568359375,
0.040313720703125,
-0.057159423828125,
-0.0792236328125,
-0.039581298828125,
-0.01... |
dim/habr_prompts_5k | 2023-09-25T18:21:34.000Z | [
"region:us"
] | dim | null | null | 0 | 11 | 2023-09-25T00:25:09 | ---
dataset_info:
features:
- name: solution_short_llama2
dtype: string
- name: id
dtype: int64
- name: language
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text_markdown
dtype: string
- name: text_html
dtype: string
- name: author
dtype: string
- name: original_author
dtype: string
- name: original_url
dtype: string
- name: lead_html
dtype: string
- name: lead_markdown
dtype: string
- name: type
dtype: string
- name: time_published
dtype: int64
- name: statistics
struct:
- name: commentsCount
dtype: int64
- name: favoritesCount
dtype: int64
- name: readingCount
dtype: int64
- name: score
dtype: int64
- name: votesCount
dtype: int64
- name: votesCountMinus
dtype: int64
- name: votesCountPlus
dtype: int64
- name: labels
sequence: string
- name: hubs
sequence: string
- name: flows
sequence: string
- name: tags
sequence: string
- name: reading_time
dtype: int64
- name: format
dtype: string
- name: complexity
dtype: string
- name: comments
struct:
- name: author
sequence: string
- name: children
sequence:
sequence: int64
- name: id
sequence: int64
- name: level
sequence: int64
- name: message_html
sequence: string
- name: message_markdown
sequence: string
- name: parent_id
sequence: int64
- name: score
sequence: int64
- name: time_published
sequence: int64
- name: votes
sequence: int64
- name: readingCount
dtype: int64
- name: prompts
dtype: string
splits:
- name: train
num_bytes: 1032739347
num_examples: 5000
download_size: 495188038
dataset_size: 1032739347
---
# Dataset Card for "habr_prompts_5k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 2,031 | [
[
-0.04840087890625,
-0.023590087890625,
0.00733184814453125,
0.0321044921875,
-0.017333984375,
-0.0007462501525878906,
0.0307159423828125,
-0.01428985595703125,
0.05804443359375,
0.033050537109375,
-0.0595703125,
-0.053466796875,
-0.0248870849609375,
0.000368... |
Brecon/Master_Train_Test | 2023-09-25T02:29:22.000Z | [
"region:us"
] | Brecon | null | null | 0 | 11 | 2023-09-25T02:29:16 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: label
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 446853.7995594714
num_examples: 363
- name: test
num_bytes: 112021.20044052863
num_examples: 91
download_size: 319014
dataset_size: 558875.0
---
# Dataset Card for "Master_Train_Test"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 595 | [
[
-0.050262451171875,
-0.0191192626953125,
0.0011472702026367188,
0.0260467529296875,
-0.00481414794921875,
-0.00565338134765625,
0.017578125,
0.00720977783203125,
0.048980712890625,
0.0183868408203125,
-0.06695556640625,
-0.0316162109375,
-0.034759521484375,
... |
minh21/COVID-QA-sentence-transformer-data | 2023-10-06T07:10:21.000Z | [
"region:us"
] | minh21 | null | null | 0 | 11 | 2023-09-25T06:57:02 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: question
dtype: string
- name: positive
dtype: string
- name: negative
dtype: string
- name: document_id
dtype: int64
splits:
- name: train
num_bytes: 4863851
num_examples: 2378
- name: test
num_bytes: 510126
num_examples: 269
download_size: 0
dataset_size: 5373977
---
# Dataset Card for "COVID-QA-sentence-transformer-data"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 670 | [
[
-0.0248260498046875,
-0.02294921875,
0.00811767578125,
0.018157958984375,
-0.007007598876953125,
-0.002899169921875,
0.0189361572265625,
-0.00278472900390625,
0.0513916015625,
0.0244140625,
-0.054229736328125,
-0.043121337890625,
-0.034576416015625,
-0.00798... |
dim/what_where_when_50k | 2023-09-25T12:07:50.000Z | [
"region:us"
] | dim | null | null | 0 | 11 | 2023-09-25T12:07:12 | ---
dataset_info:
features:
- name: question
dtype: string
- name: answer
dtype: string
- name: explanation
dtype: string
- name: url
dtype: string
- name: uuid
dtype: string
splits:
- name: train
num_bytes: 42224521.044228844
num_examples: 50000
download_size: 24272957
dataset_size: 42224521.044228844
---
# Dataset Card for "what_where_when_50k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 529 | [
[
-0.047576904296875,
0.0007109642028808594,
0.02105712890625,
0.0218048095703125,
-0.0019245147705078125,
-0.022491455078125,
0.02142333984375,
-0.0097503662109375,
0.054412841796875,
0.031829833984375,
-0.0631103515625,
-0.063232421875,
-0.03369140625,
-0.02... |
polinaeterna/tabular-benchmark | 2023-09-28T12:11:36.000Z | [
"task_categories:tabular-classification",
"task_categories:tabular-regression",
"region:us"
] | polinaeterna | null | null | 0 | 11 | 2023-09-27T11:30:57 |
---
annotations_creators: []
license: []
pretty_name: tabular_benchmark
tags: []
task_categories:
- tabular-classification
- tabular-regression
configs:
- config_name: clf_cat_covertype
data_files: clf_cat/covertype.csv
- config_name: clf_num_Higgs
data_files: clf_num/Higgs.csv
---
# Tabular Benchmark
## Dataset Description
This dataset is a curation of various datasets from [openML](https://www.openml.org/), assembled to benchmark the performance of various machine learning algorithms.
- **Repository:** https://github.com/LeoGrin/tabular-benchmark/community
- **Paper:** https://hal.archives-ouvertes.fr/hal-03723551v2/document
### Dataset Summary
A benchmark made from a curation of various tabular data learning tasks, including:
- Regression from Numerical and Categorical Features
- Regression from Numerical Features
- Classification from Numerical and Categorical Features
- Classification from Numerical Features
### Supported Tasks and Leaderboards
- `tabular-regression`
- `tabular-classification`
## Dataset Structure
### Data Splits
This dataset consists of four splits (folders) based on tasks and datasets included in tasks.
- reg_num: Task identifier for regression on numerical features.
- reg_cat: Task identifier for regression on numerical and categorical features.
- clf_num: Task identifier for classification on numerical features.
- clf_cat: Task identifier for classification on categorical features.
Depending on the dataset you want to load, you can load it by passing `task_name/dataset_name` to the `data_files` argument of `load_dataset`, as below:
```python
from datasets import load_dataset
dataset = load_dataset("inria-soda/tabular-benchmark", data_files="reg_cat/house_sales.csv")
```
## Dataset Creation
### Curation Rationale
This dataset is curated to benchmark performance of tree based models against neural networks. The process of picking the datasets for curation is mentioned in the paper as below:
- **Heterogeneous columns**. Columns should correspond to features of different nature. This excludes
images or signal datasets where each column corresponds to the same signal on different sensors.
- **Not high dimensional**. We only keep datasets with a d/n ratio below 1/10.
- **Undocumented datasets** We remove datasets where too little information is available. We did keep
datasets with hidden column names if it was clear that the features were heterogeneous.
- **I.I.D. data**. We remove stream-like datasets or time series.
- **Real-world data**. We remove artificial datasets but keep some simulated datasets. The difference is
subtle, but we try to keep simulated datasets if learning these datasets are of practical importance
(like the Higgs dataset), and not just a toy example to test specific model capabilities.
- **Not too small**. We remove datasets with too few features (< 4) and too few samples (< 3 000). For
benchmarks on numerical features only, we remove categorical features before checking if enough
features and samples are remaining.
- **Not too easy**. We remove datasets which are too easy. Specifically, we remove a dataset if a simple model (max of a single tree and a regression, logistic or OLS)
reaches a score whose relative difference with the score of both a default Resnet (from Gorishniy et al. [2021]) and a default HistGradientBoosting model (from scikit learn)
is below 5%. Other benchmarks use different metrics to remove too easy datasets, like removing datasets perfectly separated by a single decision classifier [Bischl et al., 2021],
but this ignores varying Bayes rate across datasets. As tree ensembles are superior to simple trees and logistic regression [Fernández-Delgado et al., 2014],
a close score for the simple and powerful models suggests that we are already close to the best achievable score.
- **Not deterministic**. We remove datasets where the target is a deterministic function of the data. This
mostly means removing datasets on games like poker and chess. Indeed, we believe that these
datasets are very different from most real-world tabular datasets, and should be studied separately.
### Source Data
**Numerical Classification**
|dataset_name|n_samples|n_features|original_link|new_link|
|---|---|---|---|---|
|electricity|38474.0|7.0|https://www.openml.org/d/151|https://www.openml.org/d/44120|
|covertype|566602.0|10.0|https://www.openml.org/d/293|https://www.openml.org/d/44121|
|pol|10082.0|26.0|https://www.openml.org/d/722|https://www.openml.org/d/44122|
|house_16H|13488.0|16.0|https://www.openml.org/d/821|https://www.openml.org/d/44123|
|MagicTelescope|13376.0|10.0|https://www.openml.org/d/1120|https://www.openml.org/d/44125|
|bank-marketing|10578.0|7.0|https://www.openml.org/d/1461|https://www.openml.org/d/44126|
|Bioresponse|3434.0|419.0|https://www.openml.org/d/4134|https://www.openml.org/d/45019|
|MiniBooNE|72998.0|50.0|https://www.openml.org/d/41150|https://www.openml.org/d/44128|
|default-of-credit-card-clients|13272.0|20.0|https://www.openml.org/d/42477|https://www.openml.org/d/45020|
|Higgs|940160.0|24.0|https://www.openml.org/d/42769|https://www.openml.org/d/44129|
|eye_movements|7608.0|20.0|https://www.openml.org/d/1044|https://www.openml.org/d/44130|
|Diabetes130US|71090.0|7.0|https://www.openml.org/d/4541|https://www.openml.org/d/45022|
|jannis|57580.0|54.0|https://www.openml.org/d/41168|https://www.openml.org/d/45021|
|heloc|10000.0|22.0|"https://www.kaggle.com/datasets/averkiyoliabev/home-equity-line-of-creditheloc?select=heloc_dataset_v1+%281%29.csv"|https://www.openml.org/d/45026|
|credit|16714.0|10.0|"https://www.kaggle.com/c/GiveMeSomeCredit/data?select=cs-training.csv"|https://www.openml.org/d/44089|
|california|20634.0|8.0|"https://www.dcc.fc.up.pt/ltorgo/Regression/cal_housing.html"|https://www.openml.org/d/45028|
**Categorical Classification**
|dataset_name|n_samples|n_features|original_link|new_link|
|---|---|---|---|---|
|electricity|38474.0|8.0|https://www.openml.org/d/151|https://www.openml.org/d/44156|
|eye_movements|7608.0|23.0|https://www.openml.org/d/1044|https://www.openml.org/d/44157|
|covertype|423680.0|54.0|https://www.openml.org/d/1596|https://www.openml.org/d/44159|
|albert|58252.0|31.0|https://www.openml.org/d/41147|https://www.openml.org/d/45035|
|compas-two-years|4966.0|11.0|https://www.openml.org/d/42192|https://www.openml.org/d/45039|
|default-of-credit-card-clients|13272.0|21.0|https://www.openml.org/d/42477|https://www.openml.org/d/45036|
|road-safety|111762.0|32.0|https://www.openml.org/d/42803|https://www.openml.org/d/45038|
**Numerical Regression**
|dataset_name|n_samples|n_features|original_link|new_link|
|---|---|---|---|---|
|cpu_act|8192.0|21.0|https://www.openml.org/d/197|https://www.openml.org/d/44132|
|pol|15000.0|26.0|https://www.openml.org/d/201|https://www.openml.org/d/44133|
|elevators|16599.0|16.0|https://www.openml.org/d/216|https://www.openml.org/d/44134|
|wine_quality|6497.0|11.0|https://www.openml.org/d/287|https://www.openml.org/d/44136|
|Ailerons|13750.0|33.0|https://www.openml.org/d/296|https://www.openml.org/d/44137|
|yprop_4_1|8885.0|42.0|https://www.openml.org/d/416|https://www.openml.org/d/45032|
|houses|20640.0|8.0|https://www.openml.org/d/537|https://www.openml.org/d/44138|
|house_16H|22784.0|16.0|https://www.openml.org/d/574|https://www.openml.org/d/44139|
|delays_zurich_transport|5465575.0|9.0|https://www.openml.org/d/40753|https://www.openml.org/d/45034|
|diamonds|53940.0|6.0|https://www.openml.org/d/42225|https://www.openml.org/d/44140|
|Brazilian_houses|10692.0|8.0|https://www.openml.org/d/42688|https://www.openml.org/d/44141|
|Bike_Sharing_Demand|17379.0|6.0|https://www.openml.org/d/42712|https://www.openml.org/d/44142|
|nyc-taxi-green-dec-2016|581835.0|9.0|https://www.openml.org/d/42729|https://www.openml.org/d/44143|
|house_sales|21613.0|15.0|https://www.openml.org/d/42731|https://www.openml.org/d/44144|
|sulfur|10081.0|6.0|https://www.openml.org/d/23515|https://www.openml.org/d/44145|
|medical_charges|163065.0|5.0|https://www.openml.org/d/42720|https://www.openml.org/d/44146|
|MiamiHousing2016|13932.0|14.0|https://www.openml.org/d/43093|https://www.openml.org/d/44147|
|superconduct|21263.0|79.0|https://www.openml.org/d/43174|https://www.openml.org/d/44148|
**Categorical Regression**
|dataset_name|n_samples|n_features|original_link|new_link|
|---|---|---|---|---|
|topo_2_1|8885.0|255.0|https://www.openml.org/d/422|https://www.openml.org/d/45041|
|analcatdata_supreme|4052.0|7.0|https://www.openml.org/d/504|https://www.openml.org/d/44055|
|visualizing_soil|8641.0|4.0|https://www.openml.org/d/688|https://www.openml.org/d/44056|
|delays_zurich_transport|5465575.0|12.0|https://www.openml.org/d/40753|https://www.openml.org/d/45045|
|diamonds|53940.0|9.0|https://www.openml.org/d/42225|https://www.openml.org/d/44059|
|Allstate_Claims_Severity|188318.0|124.0|https://www.openml.org/d/42571|https://www.openml.org/d/45046|
|Mercedes_Benz_Greener_Manufacturing|4209.0|359.0|https://www.openml.org/d/42570|https://www.openml.org/d/44061|
|Brazilian_houses|10692.0|11.0|https://www.openml.org/d/42688|https://www.openml.org/d/44062|
|Bike_Sharing_Demand|17379.0|11.0|https://www.openml.org/d/42712|https://www.openml.org/d/44063|
|Airlines_DepDelay_1M|1000000.0|5.0|https://www.openml.org/d/42721|https://www.openml.org/d/45047|
|nyc-taxi-green-dec-2016|581835.0|16.0|https://www.openml.org/d/42729|https://www.openml.org/d/44065|
|abalone|4177.0|8.0|https://www.openml.org/d/42726|https://www.openml.org/d/45042|
|house_sales|21613.0|17.0|https://www.openml.org/d/42731|https://www.openml.org/d/44066|
|seattlecrime6|52031.0|4.0|https://www.openml.org/d/42496|https://www.openml.org/d/45043|
|medical_charges|163065.0|5.0|https://www.openml.org/d/42720|https://www.openml.org/d/45048|
|particulate-matter-ukair-2017|394299.0|6.0|https://www.openml.org/d/42207|https://www.openml.org/d/44068|
|SGEMM_GPU_kernel_performance|241600.0|9.0|https://www.openml.org/d/43144|https://www.openml.org/d/44069|
### Dataset Curators
Léo Grinsztajn, Edouard Oyallon, Gaël Varoquaux.
### Licensing Information
[More Information Needed]
### Citation Information
Léo Grinsztajn, Edouard Oyallon, Gaël Varoquaux. Why do tree-based models still outperform deep
learning on typical tabular data?. NeurIPS 2022 Datasets and Benchmarks Track, Nov 2022, New
Orleans, United States. ffhal-03723551v2f
| 10,405 | [
[
-0.053070068359375,
-0.0552978515625,
0.027435302734375,
0.00916290283203125,
-0.00677490234375,
-0.00832366943359375,
-0.0116424560546875,
-0.0296630859375,
0.0186004638671875,
0.03167724609375,
-0.0167083740234375,
-0.0732421875,
-0.03326416015625,
0.00751... |
YL95/naive_chunk0 | 2023-09-27T17:21:00.000Z | [
"region:us"
] | YL95 | null | null | 0 | 11 | 2023-09-27T16:43:03 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
manu/theses_fr_2013_2023 | 2023-09-30T16:45:34.000Z | [
"region:us"
] | manu | null | null | 0 | 11 | 2023-09-30T16:44:39 | ---
dataset_info:
features:
- name: title_fr
dtype: string
- name: abstract_fr
dtype: string
- name: title_en
dtype: string
- name: abstract_en
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 392127399
num_examples: 97320
download_size: 224948329
dataset_size: 392127399
---
# Dataset Card for "theses_fr_2013_2023"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 520 | [
[
-0.04608154296875,
-0.009735107421875,
0.0269012451171875,
0.032623291015625,
-0.0066070556640625,
-0.009613037109375,
0.030853271484375,
-0.02337646484375,
0.059326171875,
0.0498046875,
-0.07330322265625,
-0.04248046875,
-0.0258636474609375,
-0.006729125976... |
AayushShah/SQL_ProcessedInputs | 2023-10-01T10:08:37.000Z | [
"region:us"
] | AayushShah | null | null | 1 | 11 | 2023-10-01T10:04:31 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: val
path: data/val-*
- split: test
path: data/test-*
dataset_info:
features:
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
- name: labels
sequence: int64
splits:
- name: train
num_bytes: 169263247.6591853
num_examples: 207341
- name: val
num_bytes: 43524625.19326676
num_examples: 53316
- name: test
num_bytes: 29017233.147547957
num_examples: 35545
download_size: 50460134
dataset_size: 241805106.0
---
# Dataset Card for "SQL_ProcessedInputs"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 770 | [
[
-0.026763916015625,
-0.031707763671875,
0.0287628173828125,
0.004673004150390625,
-0.01448822021484375,
0.00751495361328125,
0.01056671142578125,
-0.00475311279296875,
0.068115234375,
0.052276611328125,
-0.0665283203125,
-0.0447998046875,
-0.03125,
-0.012687... |
fernandoperes/py_legislation | 2023-10-04T12:10:16.000Z | [
"task_categories:text-classification",
"size_categories:1K<n<10K",
"language:es",
"license:apache-2.0",
"legal",
"region:us"
] | fernandoperes | null | null | 0 | 11 | 2023-10-02T13:43:17 | ---
language:
- es
license: apache-2.0
size_categories:
- 1K<n<10K
task_categories:
- text-classification
tags:
- legal
configs:
- config_name: default
data_files:
- split: train
path: "/raw_text/train.parquet"
- config_name: raw_text
data_files:
- split: train
path: "/raw_text/train.parquet"
- config_name: unlabeled_sentences
data_files:
- split: train
path: "/unlabeled_sentences/train.parquet"
dataset_info:
- config_name: raw_text
features:
- name: source_id
dtype: int64
- name: source_name
dtype: string
- name: text
dtype: string
- name: text_id
dtype: int64
- name: extension
dtype:
class_label:
names:
'0': docx
'1': pdf
'2': html
'3': txt
'4': doc
split: train
- config_name: unlabeled_sentences
features:
- name: source_id
dtype: int64
- name: source_name
dtype: string
- name: text
dtype: string
- name: text_id
dtype: int64
- name: cost_type
dtype:
class_label:
names:
'0': no_cost
'1': adm_cost
'2': direct_cost
'3': other_cost
- name: affected_entity
dtype:
class_label:
names:
'0': no_affected_ent
'1': companies
'2': citizens
'3': public_adm
- name: io_categories
sequence:
class_label:
names:
'0': prestacao_info_empresarial_e_fiscal
'1': pedidos_de_licencas_e_outros
'2': registos_e_notificacoes
'3': candidatura_a_subsidios_e_outros
'4': disponibilizacao_de_manuais_e_outros
'5': cooperacao_com_auditorias_e_outros
'6': prestacao_info_a_consumidores
'7': outras_ois
- name: aa_categories
sequence:
class_label:
names:
'0': aa_1_familiarizacao_com_oi
'1': aa_1_recolha_e_organizacao_de_info
'2': aa_1_processamento_de_info
'3': aa_1_tempos_de_espera
'4': aa_1_deslocacoes
'5': aa_1_submissao_de_info
'6': aa_1_preservacao_de_info
'7': aa_2_familiarizacao_com_oi
'8': aa_2_recolha_e_organizacao_de_info
'9': aa_2_processamento_de_info
'10': aa_2_tempos_de_espera
'11': aa_2_deslocacoes
'12': aa_2_submissao_de_info
'13': aa_2_preservacao_de_info
'14': aa_3_familiarizacao_com_oi
'15': aa_3_recolha_e_organizacao_de_info
'16': aa_3_processamento_de_info
'17': aa_3_tempos_de_espera
'18': aa_3_deslocacoes
'19': aa_3_submissao_de_info
'20': aa_3_preservacao_de_info
'21': aa_4_familiarizacao_com_oi
'22': aa_4_recolha_e_organizacao_de_info
'23': aa_4_processamento_de_info
'24': aa_4_tempos_de_espera
'25': aa_4_deslocacoes
'26': aa_4_submissao_de_info
'27': aa_4_preservacao_de_info
'28': aa_5_familiarizacao_com_oi
'29': aa_5_recolha_e_organizacao_de_info
'30': aa_5_processamento_de_info
'31': aa_5_tempos_de_espera
'32': aa_5_deslocacoes
'33': aa_5_submissao_de_info
'34': aa_5_preservacao_de_info
'35': aa_6_familiarizacao_com_oi
'36': aa_6_recolha_e_organizacao_de_info
'37': aa_6_processamento_de_info
'38': aa_6_tempos_de_espera
'39': aa_6_deslocacoes
'40': aa_6_submissao_de_info
'41': aa_6_preservacao_de_info
'42': aa_7_familiarizacao_com_oi
'43': aa_7_recolha_e_organizacao_de_info
'44': aa_7_processamento_de_info
'45': aa_7_tempos_de_espera
'46': aa_7_deslocacoes
'47': aa_7_submissao_de_info
'48': aa_7_preservacao_de_info
- name: aa_categories_unique
sequence:
class_label:
names:
'0': familiarizacao_com_oi
'1': recolha_e_organizacao_de_info
'2': processamento_de_info
'3': tempos_de_espera
'4': deslocacoes
'5': submissao_de_info
'6': preservacao_de_info
splits:
- name: train
---
# Paraguay Legislation
The Paraguay Legislation dataset is a comprehensive collection of legal documents sourced from the legislative framework of Paraguay, including resolutions, decrees, laws, and other kinds of legislative texts.
This dataset has been curated as a resource for Natural Language Processing (NLP) research focused on text classification. The classification task is divided into two objectives:
1. Binary classification: 0 - no cost and 1 - cost (the legislation imposes costs on society)
2. Multi-classification: classify the document into several hierarchical categories of costs.
For more information about multi-classification definitions, please check this link: <todo: link to>.
## Subsets
The dataset contains several subsets, each representing a different stage of data quality and preparation. Across these subsets you will find multiple versions of the same data, with variations primarily reflecting differences in data quality, metadata columns, and the preprocessing steps applied to the data.
The subsets are the following:
**1. Raw:** Data extracted from the source files (URLs, PDFs and Word files) without any transformation or sentence splitting. This subset is useful because it gives access to the raw text extracted from the seeds (PDFs and Word files), so you can apply your own preprocessing from this point without having to re-extract text from the source files.
**2. Sentences:** Normalized data split by sentence, mainly addressing issues in text extracted from PDFs. This stage also adds metadata about each sentence, for example whether it is a title.
**3. Sentence Unlabeled:** Unlabeled corpora of Paraguay legislation, prepared to be labeled by experts. Each instance of the dataset represents a specific text passage, split according to its original formatting as extracted from the raw text of the original documents.
**4. Sentence Labeled (Ground Truth):** The labeled data is the ground truth used to train the models. It is annotated by legal experts, who indicate whether administrative costs (and other cost types) are present in the legislation. Each instance of the dataset represents a specific text passage.
This dataset has the following data splits:
* Training Set: This portion of the data is used to train and fine-tune machine learning models.
* Test Set: The test set is reserved for assessing the model's accuracy, generalization, and effectiveness. It remains unseen during training and helps gauge how well the model performs on new, unseen data.
Together, these labeled data subsets provide a crucial reference point for building and evaluating models, ensuring they can make informed predictions and classifications with high accuracy and reliability.
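As a usage sketch (not part of the original card), the configs declared in the YAML header above can be loaded with the `datasets` library. The repository id `fernandoperes/py_legislation` and the config names `raw_text` and `unlabeled_sentences` come from this card; everything else is illustrative.
```python
from datasets import load_dataset

# Raw texts extracted from the source files (config names come from the YAML header above).
raw = load_dataset("fernandoperes/py_legislation", "raw_text", split="train")

# Sentence-level corpus prepared for expert annotation.
sentences = load_dataset("fernandoperes/py_legislation", "unlabeled_sentences", split="train")

print(raw[0]["source_name"], raw[0]["extension"])
print(sentences[0]["text"])
```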
| 7,076 | [
[
-0.009307861328125,
-0.044036865234375,
0.0161895751953125,
0.016937255859375,
-0.03717041015625,
-0.004573822021484375,
-0.006305694580078125,
-0.0237579345703125,
-0.00583648681640625,
0.077880859375,
-0.0296783447265625,
-0.050689697265625,
-0.039642333984375... |
tanvirsrbd1/sample_dataset1_1 | 2023-10-03T05:23:29.000Z | [
"region:us"
] | tanvirsrbd1 | null | null | 0 | 11 | 2023-10-03T05:23:24 | ---
dataset_info:
features:
- name: html
dtype: string
- name: response
dtype: string
splits:
- name: train
num_bytes: 1837883
num_examples: 2980
download_size: 607662
dataset_size: 1837883
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "sample_dataset1_1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 481 | [
[
-0.04052734375,
-0.02496337890625,
0.00045108795166015625,
0.0199737548828125,
-0.0242156982421875,
-0.010894775390625,
0.033050537109375,
0.0026035308837890625,
0.06854248046875,
0.0321044921875,
-0.07647705078125,
-0.054229736328125,
-0.041351318359375,
-0... |
FelixdoingAI/IP2P-hiddenwm-200 | 2023-10-03T14:09:13.000Z | [
"region:us"
] | FelixdoingAI | null | null | 0 | 11 | 2023-10-03T13:44:07 | ---
dataset_info:
features:
- name: original_prompt
dtype: string
- name: original_image
dtype: image
- name: edit_prompt
dtype: string
- name: edited_prompt
dtype: string
- name: edited_image
dtype: image
- name: adversarial_image
dtype: image
splits:
- name: train
num_bytes: 104484241.0
num_examples: 200
download_size: 104481659
dataset_size: 104484241.0
---
# Dataset Card for "IP2P-hiddenwm-200"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 588 | [
[
-0.046630859375,
0.00685882568359375,
0.013458251953125,
0.034271240234375,
-0.01206207275390625,
0.003856658935546875,
0.0269775390625,
-0.01155853271484375,
0.0404052734375,
0.0386962890625,
-0.05499267578125,
-0.036895751953125,
-0.04669189453125,
-0.0266... |
AlekseyKorshuk/rl-bench-test-crowdsource | 2023-10-03T22:05:47.000Z | [
"region:us"
] | AlekseyKorshuk | null | null | 0 | 11 | 2023-10-03T21:40:37 | ---
dataset_info:
features:
- name: user_name
dtype: string
- name: bot_name
dtype: string
- name: memory
dtype: string
- name: prompt
dtype: string
- name: chat_history
list:
- name: message
dtype: string
- name: sender
dtype: string
splits:
- name: train
num_bytes: 292785
num_examples: 200
download_size: 190141
dataset_size: 292785
---
# Dataset Card for "rl-bench-test-crowdsource"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 587 | [
[
-0.050811767578125,
-0.0213470458984375,
0.0062713623046875,
0.0210418701171875,
-0.01526641845703125,
-0.0010557174682617188,
0.01430511474609375,
-0.01983642578125,
0.03814697265625,
0.0293121337890625,
-0.0697021484375,
-0.039886474609375,
-0.0263824462890625... |
Musa22/llma | 2023-10-04T09:59:01.000Z | [
"region:us"
] | Musa22 | null | null | 0 | 11 | 2023-10-04T09:56:41 | Entry not found | 15 | [
[
-0.0213775634765625,
-0.01497650146484375,
0.05718994140625,
0.02880859375,
-0.0350341796875,
0.046478271484375,
0.052490234375,
0.00507354736328125,
0.051361083984375,
0.0170135498046875,
-0.052093505859375,
-0.01497650146484375,
-0.0604248046875,
0.0379028... |
DeepPavlov/verbalist_prompts | 2023-10-21T20:14:45.000Z | [
"language:ru",
"language:en",
"arxiv:2305.11206",
"region:us"
] | DeepPavlov | null | null | 1 | 11 | 2023-10-04T12:23:47 | ---
configs:
- config_name: default
data_files:
- split: dim_oasst_en
path: data/dim_oasst_en-*
- split: dim_oasst_ru
path: data/dim_oasst_ru-*
- split: dim_lima
path: data/dim_lima-*
- split: dim_logic_tasks_ru
path: data/dim_logic_tasks_ru-*
- split: dim_wikihow_en
path: data/dim_wikihow_en-*
- split: dim_wikihow_ru
path: data/dim_wikihow_ru-*
- split: dim_essayforum_writing_prompts_6k
path: data/dim_essayforum_writing_prompts_6k-*
- split: dim_sharegpt_short_ru
path: data/dim_sharegpt_short_ru-*
- split: dim_openreview_prompts_65
path: data/dim_openreview_prompts_65-*
- split: dim_roleplay_instruct_v2_final
path: data/dim_roleplay_instruct_v2_final-*
- split: dim_kinomania_scripts
path: data/dim_kinomania_scripts-*
- split: dim_bugurt_thread_prompts
path: data/dim_bugurt_thread_prompts-*
- split: dim_russian_lyrics_prompts
path: data/dim_russian_lyrics_prompts-*
- split: dim_ru_instruct_gpt4
path: data/dim_ru_instruct_gpt4-*
- split: dim_gpt_roleplay_realm
path: data/dim_gpt_roleplay_realm-*
- split: dim_ultrachat_ru
path: data/dim_ultrachat_ru-*
- split: dim_scitldr
path: data/dim_scitldr-*
- split: dim_linux_man_pages_tldr_summarized
path: data/dim_linux_man_pages_tldr_summarized-*
- split: dim_dolphin_ru_3k
path: data/dim_dolphin_ru_3k-*
- split: dim_runne_prompts
path: data/dim_runne_prompts-*
- split: dim_lurk_prompts
path: data/dim_lurk_prompts-*
- split: dim_panorama_prompts_10k
path: data/dim_panorama_prompts_10k-*
- split: dim_resh_edu_short_prompts
path: data/dim_resh_edu_short_prompts-*
- split: dim_databricks_dolly_15k_ru
path: data/dim_databricks_dolly_15k_ru-*
- split: dim_databricks_dolly_15k_en
path: data/dim_databricks_dolly_15k_en-*
- split: dim_grammarly_coedit
path: data/dim_grammarly_coedit-*
- split: dim_kinopoisk_prompts
path: data/dim_kinopoisk_prompts-*
- split: dim_medical_qa_ru_prompts
path: data/dim_medical_qa_ru_prompts-*
- split: dim_joke_explaination_prompts
path: data/dim_joke_explaination_prompts-*
- split: dim_oa_stackexchange_200k
path: data/dim_oa_stackexchange_200k-*
- split: dim_scale_helpful_no_math
path: data/dim_scale_helpful_no_math-*
- split: dim_law_stackexchange_prompts
path: data/dim_law_stackexchange_prompts-*
- split: dim_ficbook_prompts_best_10k
path: data/dim_ficbook_prompts_best_10k-*
- split: dim_azbyka_logic_ru
path: data/dim_azbyka_logic_ru-*
- split: dim_povarenok
path: data/dim_povarenok-*
- split: dim_AO3_fandom_chatbot_1to1
path: data/dim_AO3_fandom_chatbot_1to1-*
- split: dim_habr_prompts_5k
path: data/dim_habr_prompts_5k-*
- split: dim_what_where_when_50k
path: data/dim_what_where_when_50k-*
- split: dim_competition_math
path: data/dim_competition_math-*
- split: dim_sharegpt_short_en_30k
path: data/dim_sharegpt_short_en_30k-*
- split: dim_ru_turbo_alpaca_evol_instruct
path: data/dim_ru_turbo_alpaca_evol_instruct-*
- split: dim_ru_turbo_saiga
path: data/dim_ru_turbo_saiga-*
- split: dim_bugurt_completion_prompts
path: data/dim_bugurt_completion_prompts-*
- split: dim_tldr_17_50k
path: data/dim_tldr_17_50k-*
- split: dim_grade_school_math_instructions
path: data/dim_grade_school_math_instructions-*
- split: dim_tldr_news
path: data/dim_tldr_news-*
- split: dim_grade_school_math_instructions_ru
path: data/dim_grade_school_math_instructions_ru-*
- split: dim_dialogsum
path: data/dim_dialogsum-*
- split: dim_HC3_ru
path: data/dim_HC3_ru-*
- split: dim_horoscopes_ru_10k
path: data/dim_horoscopes_ru_10k-*
- split: dim_yandex_q_200k
path: data/dim_yandex_q_200k-*
- split: dim_leetcodesolutions_en_2k
path: data/dim_leetcodesolutions_en_2k-*
- split: dim_forum_uristov_rf_prompts
path: data/dim_forum_uristov_rf_prompts-*
- split: dim_dialogsum_ru
path: data/dim_dialogsum_ru-*
- split: dim_huggingartists_prompts
path: data/dim_huggingartists_prompts-*
dataset_info:
features:
- name: conversation_text
sequence: string
splits:
- name: dim_oasst_en
num_bytes: 4335500
num_examples: 2289
- name: dim_oasst_ru
num_bytes: 6206378
num_examples: 2220
- name: dim_lima
num_bytes: 2892267
num_examples: 1030
- name: dim_logic_tasks_ru
num_bytes: 76915
num_examples: 86
- name: dim_wikihow_en
num_bytes: 16008199
num_examples: 1995
- name: dim_wikihow_ru
num_bytes: 24451573
num_examples: 2058
- name: dim_essayforum_writing_prompts_6k
num_bytes: 22326330
num_examples: 6361
- name: dim_sharegpt_short_ru
num_bytes: 808319
num_examples: 253
- name: dim_openreview_prompts_65
num_bytes: 6739952
num_examples: 150
- name: dim_roleplay_instruct_v2_final
num_bytes: 4389286
num_examples: 7188
- name: dim_kinomania_scripts
num_bytes: 238731
num_examples: 27
- name: dim_bugurt_thread_prompts
num_bytes: 302191
num_examples: 223
- name: dim_russian_lyrics_prompts
num_bytes: 18676
num_examples: 43
- name: dim_ru_instruct_gpt4
num_bytes: 18351658
num_examples: 14222
- name: dim_gpt_roleplay_realm
num_bytes: 20163429
num_examples: 8700
- name: dim_ultrachat_ru
num_bytes: 4495105
num_examples: 500
- name: dim_scitldr
num_bytes: 4049209
num_examples: 3229
- name: dim_linux_man_pages_tldr_summarized
num_bytes: 3006631
num_examples: 481
- name: dim_dolphin_ru_3k
num_bytes: 7976776
num_examples: 3000
- name: dim_runne_prompts
num_bytes: 2686148
num_examples: 537
- name: dim_lurk_prompts
num_bytes: 92012533
num_examples: 5671
- name: dim_panorama_prompts_10k
num_bytes: 28964132
num_examples: 11024
- name: dim_resh_edu_short_prompts
num_bytes: 12380000
num_examples: 2106
- name: dim_databricks_dolly_15k_ru
num_bytes: 21900617
num_examples: 14914
- name: dim_databricks_dolly_15k_en
num_bytes: 11973713
num_examples: 15011
- name: dim_grammarly_coedit
num_bytes: 18500223
num_examples: 82466
- name: dim_kinopoisk_prompts
num_bytes: 136323982
num_examples: 36591
- name: dim_medical_qa_ru_prompts
num_bytes: 75634717
num_examples: 80101
- name: dim_joke_explaination_prompts
num_bytes: 196224
num_examples: 364
- name: dim_oa_stackexchange_200k
num_bytes: 192535277
num_examples: 200000
- name: dim_scale_helpful_no_math
num_bytes: 85610911
num_examples: 17095
- name: dim_law_stackexchange_prompts
num_bytes: 64544963
num_examples: 24343
- name: dim_ficbook_prompts_best_10k
num_bytes: 75867114
num_examples: 10000
- name: dim_azbyka_logic_ru
num_bytes: 173101
num_examples: 480
- name: dim_povarenok
num_bytes: 93518909
num_examples: 46500
- name: dim_AO3_fandom_chatbot_1to1
num_bytes: 1162058
num_examples: 614
- name: dim_habr_prompts_5k
num_bytes: 40224997
num_examples: 5000
- name: dim_what_where_when_50k
num_bytes: 38385243
num_examples: 50000
- name: dim_competition_math
num_bytes: 5808689
num_examples: 7500
- name: dim_sharegpt_short_en_30k
num_bytes: 86599862
num_examples: 29597
- name: dim_ru_turbo_alpaca_evol_instruct
num_bytes: 105340901
num_examples: 47793
- name: dim_ru_turbo_saiga
num_bytes: 79875722
num_examples: 37699
- name: dim_bugurt_completion_prompts
num_bytes: 5471066
num_examples: 5000
- name: dim_tldr_17_50k
num_bytes: 81185070
num_examples: 50000
- name: dim_grade_school_math_instructions
num_bytes: 4655452
num_examples: 8792
- name: dim_tldr_news
num_bytes: 4014718
num_examples: 7138
- name: dim_grade_school_math_instructions_ru
num_bytes: 6845510
num_examples: 7473
- name: dim_dialogsum
num_bytes: 11176807
num_examples: 12460
- name: dim_HC3_ru
num_bytes: 43395731
num_examples: 24322
- name: dim_horoscopes_ru_10k
num_bytes: 9489348
num_examples: 10000
- name: dim_yandex_q_200k
num_bytes: 292443135
num_examples: 200000
- name: dim_leetcodesolutions_en_2k
num_bytes: 4708692
num_examples: 2048
- name: dim_forum_uristov_rf_prompts
num_bytes: 2757263
num_examples: 1849
- name: dim_dialogsum_ru
num_bytes: 18657989
num_examples: 12460
- name: dim_huggingartists_prompts
num_bytes: 121909835
num_examples: 64006
download_size: 0
dataset_size: 2023767777
language:
- ru
- en
---
# Verbalist (буквоед) - a Russian-language assistant.
A project in many ways inspired by [Saiga](https://huggingface.co/IlyaGusev/saiga2_7b_lora).
I collected all the highest-quality datasets from [huggingface.datasets](https://huggingface.co/datasets), and additionally gathered data from sites that I considered very useful for building a ChatGPT analogue. The licenses of the datasets all differ: some, such as [OpenAssistant/oasst1](https://huggingface.co/datasets/OpenAssistant/oasst1), were created specifically for training models of this kind, while others are direct exports of dialogues with ChatGPT ([RyokoAI/ShareGPT52K](https://huggingface.co/datasets/RyokoAI/ShareGPT52K)).
The contribution of this repository is the systematization and standardization of the existing datasets, the addition of new ones, and the training of models on this data.
- [Google Sheets table with the datasets and their descriptions](https://docs.google.com/spreadsheets/d/10xcsINF_c_zUZchT8p-8xIuHDgcuwg63jjl2ortBP9I/edit?usp=sharing)
### Datasets
- **[Combined dataset in which all data is already prepared for training a dialogue model](https://huggingface.co/datasets/dim/verbalist_prompts)**
|name |link |description |original_name |original_source |preparation_script |language|amount_examples|mean_llama_tokens|std |min_llama_tokens|25% |50% |75% |max_llama_tokens|
|-------------------------------------|---------------------------------------------------------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|--------------------------------------------------------------|-------------------------------------------------------------------------------------------------------|---------------------------------------------------------------------------------------------------------|--------|---------------|-----------------|-----------|----------------|-------|-------|-------|----------------|
|dim/oasst_en |https://huggingface.co/datasets/dim/oasst_en |OpenAssistant Conversations Dataset на английском языке, который был вручную отфильтрован мной. В исходном датасете около 30% диалогов оказались не корректными. Иногда пользователь, играющий роль ассистента, использовал грубый тон в общении с пользователем, иногда люди просто отвечали "не знаю" на вопросы, и некоторые из вопросов были недостаточно научными или слишком краткими. Вы можете ознакомиться с этой разметкой по следующей ссылке: https://docs.google.com/spreadsheets/d/117t5-Tr-dxdODpyFBkBg5R8GklYBlsvBfeDyjqwz2pA/edit?usp=sharing|2023-04-12_oasst_ready.messages.jsonl.gz |https://huggingface.co/datasets/OpenAssistant/oasst1/blob/main/2023-04-12_oasst_ready.messages.jsonl.gz|https://github.com/dmitrymailk/verbalist/tree/master/verbalist/datasets/oasst |en |2289 |468.6788991 |295.0864391|17 |264 |410 |618 |2332 |
|dim/oasst_ru |https://huggingface.co/datasets/dim/oasst_ru |OpenAssistant Conversations Dataset на русском языке, который был вручную отфильтрован мной. В исходном датасете около 30% диалогов оказались не корректными. Иногда пользователь, играющий роль ассистента, использовал грубый тон в общении с пользователем, иногда люди просто отвечали "не знаю" на вопросы, и некоторые из вопросов были недостаточно научными или слишком краткими. Вы можете ознакомиться с этой разметкой по следующей ссылке: https://docs.google.com/spreadsheets/d/1uiOnqxiytuxrB6u6q2pMSdnMfqjT3arfg8DlT-OWlb0/edit?usp=sharing |2023-04-12_oasst_ready.messages.jsonl.gz |https://huggingface.co/datasets/OpenAssistant/oasst1/blob/main/2023-04-12_oasst_ready.messages.jsonl.gz|https://github.com/dmitrymailk/verbalist/tree/master/verbalist/datasets/oasst |ru |2220 |589.6112613 |479.835392 |7 |278 |465 |763.5 |5028 |
|dim/lima |https://huggingface.co/datasets/dim/lima |Данный датасет включает в себя 1000 высококачественных обучающих примеров на английском языке. Он собран из различных источников, включая Stack Exchange (STEM), Stack Exchange (Other), wikiHow, Pushshift r/WritingPrompts, Natural Instructions, а также уникальные инструкции, созданные авторами статей. Более подробную информацию о датасете можно найти в [соответствующей статье](https://arxiv.org/pdf/2305.11206.pdf). |GAIR/lima |https://huggingface.co/datasets/GAIR/lima |https://github.com/dmitrymailk/verbalist/tree/master/verbalist/datasets/lima |en |1030 |712.9456311 |671.179319 |29 |312.75 |488.5 |825 |3920 |
|dim/logic_tasks_ru |https://huggingface.co/datasets/dim/logic_tasks_ru |Данный набор задач по логике для детей взят с веб-сайта https://www.potehechas.ru/zadachi/zadachi.shtml. |Логические задачи - Логика и нестандартное мышление |https://www.potehechas.ru/zadachi/zadachi.shtml |https://github.com/dmitrymailk/verbalist/tree/master/verbalist/datasets/logic_tasks_ru |ru |86 |193.0697674 |76.69048422|58 |133.75 |185 |243.5 |432 |
|dim/wikihow_en |https://huggingface.co/datasets/dim/wikihow_en |Данный датасет содержит англоязычные статьи, извлеченные с веб-сайта Wikihow. |0x22almostEvil/multilingual-wikihow-qa-16k |https://huggingface.co/datasets/0x22almostEvil/multilingual-wikihow-qa-16k |https://github.com/dmitrymailk/verbalist/tree/master/verbalist/datasets/wiki_how |en |1995 |2037.86416 |870.1910713|265 |1463 |1913 |2461.5 |8988 |
|dim/wikihow_ru |https://huggingface.co/datasets/dim/wikihow_ru |Данный датасет включает в себя русскоязычные статьи, полученные с веб-сайта Wikihow. |0x22almostEvil/multilingual-wikihow-qa-16k |https://huggingface.co/datasets/0x22almostEvil/multilingual-wikihow-qa-16k |https://github.com/dmitrymailk/verbalist/tree/master/verbalist/datasets/wiki_how |ru |2058 |2498.119534 |1587.851549|139 |1236.25|2264 |3421.75|10217 |
|dim/essayforum_writing_prompts_6k |https://huggingface.co/datasets/dim/essayforum_writing_prompts_6k |Данный датасет включает в себя запросы на помощь с написанием небольших эссе, размещенные на данном сайте. Ответы в датасете предоставлены исключительно главным администратором сайта. Его ответы были отобраны, поскольку чаще всего они являются наиболее качественными и вдумчивыми. |EssayForum |https://essayforum.com/writing/ |https://github.com/dmitrymailk/verbalist/tree/master/verbalist/datasets/essayforum |en |6361 |783.1760729 |285.4314176|258 |629 |742 |879 |4966 |
|dim/sharegpt_short_ru |https://huggingface.co/datasets/dim/sharegpt_short_ru |Очищенная версия русская версия sharegpt. Я попытался вырезать из текста все промпты, где модель извиняется что что-то не может сделать, что она не имеет доступа в интернет. Диалоги, которые противоречат морали модели я просто исключил. Постарался убрать упоминания о том что она модель AI, так как за ролеплейные характеристики отвечают другие датасеты. |RyokoAI/ShareGPT52K |https://huggingface.co/datasets/RyokoAI/ShareGPT52K |https://github.com/dmitrymailk/verbalist/tree/master/verbalist/datasets/sharegpt |ru |253 |706.6521739 |494.7437584|13 |310 |628 |1078 |1861 |
|dim/openreview_prompts_65 |https://huggingface.co/datasets/dim/openreview_prompts_65 |Датасет рецензий на реальные научные статьи с сайта openreview. Вышло на самом деле не так много, так как многие статьи не выложенны на arxiv или просто не имеют рецензий. Плюс я собрал только малую часть данного сайта, а не все что там было. |https://openreview.net/ |https://openreview.net/ |https://github.com/dmitrymailk/verbalist/tree/master/verbalist/datasets/openreview |en |150 |13531.51333 |6966.623686|4893 |8279 |12648.5|15833.5|41494 |
|dim/roleplay_instruct_v2_final |https://huggingface.co/datasets/dim/roleplay_instruct_v2_final |Датасет ролеплея от GPT-4 на различных персонажей на английском языке. |roleplay-instruct-v2-final |https://github.com/teknium1/GPTeacher |https://github.com/dmitrymailk/verbalist/tree/master/verbalist/datasets/gpt_roleplay_realm |en |7188 |155.1413467 |97.71215667|14 |88 |125 |192 |1291 |
|dim/kinomania_scripts |https://huggingface.co/datasets/dim/kinomania_scripts |Небольшой датасет, который содержит в себе сценарии фильмов целиком и их краткое содержание |https://www.kinomania.ru/scripts |https://www.kinomania.ru/scripts |https://github.com/dmitrymailk/verbalist/tree/master/verbalist/datasets/kinomania_scripts |ru\en |27 |2603.407407 |510.375447 |1887 |2175 |2370 |3069 |3616 |
|dim/bugurt_thread_prompts |https://huggingface.co/datasets/dim/bugurt_thread_prompts |Небольшой набор размеченных бугуртов вместе с моим другом, для того чтобы модель научилась писать бугурты на конкретную ситуацию. Собраны из телеграм паблика БУГУРТ ТРЕД(https://t.me/bugurtthread) |https://t.me/bugurtthread |https://t.me/bugurtthread |https://github.com/dmitrymailk/verbalist/tree/master/verbalist/datasets/bugurt_thread |ru |223 |334.4529148 |271.2557988|48 |148.5 |254 |434.5 |1645 |
|dim/russian_lyrics_prompts |https://huggingface.co/datasets/dim/russian_lyrics_prompts |Небольшой датасет промптов собранный мною из различных учебников по стихосложению, чтобы модель научилась писать стихи, используя необходимый литературный прием на конкретную тему. |Учебник стихосложения |https://stihi.ru/uchebnik/ |https://github.com/dmitrymailk/verbalist/tree/master/verbalist/datasets/russian_lyrics_prompts |ru |43 |106.1395349 |71.00220701|45 |71 |83 |96.5 |411 |
|dim/ru_instruct_gpt4 |https://huggingface.co/datasets/dim/ru_instruct_gpt4 |Датасет каких-то инструкций на русском сгенерированных GPT-4 |lksy/ru_instruct_gpt4 |https://huggingface.co/datasets/lksy/ru_instruct_gpt4 |https://github.com/dmitrymailk/verbalist/tree/master/verbalist/datasets/ru_instruct_gpt4 |ru |14222 |259.2173393 |237.9433891|16 |109 |175 |271 |1374 |
|dim/gpt_roleplay_realm |https://huggingface.co/datasets/dim/gpt_roleplay_realm |Диалоги выдуманных персонажей при помощи GPT-4, диалоги были сгенерированны при помощи GPT-3.5. Русский и английский. |IlyaGusev/gpt_roleplay_realm |https://huggingface.co/datasets/IlyaGusev/gpt_roleplay_realm |https://github.com/dmitrymailk/verbalist/tree/master/verbalist/datasets/gpt_roleplay_realm |ru\en |8700 |504.2424138 |117.6228987|180 |424 |489 |569 |1207 |
|dim/ultrachat_ru |https://huggingface.co/datasets/dim/ultrachat_ru |Какой-то рандомный датасет диалогов от chatgpt, который я нашел на huggingface. Из текста диалогов были вырезаны шаблонные фразы по типу: "я не могу выполнить", "как языковая модель" и тд. Потому что обычно после этого следовало вменяемое решение задачи. |kaleinaNyan/UltraChat_ru |https://huggingface.co/datasets/kaleinaNyan/UltraChat_ru |https://github.com/dmitrymailk/verbalist/tree/master/verbalist/datasets/ultrachat_ru |ru |500 |1781.782 |901.1212735|267 |1113.25|1648 |2250.25|7303 |
|dim/scitldr |https://huggingface.co/datasets/dim/scitldr |Саммаризация научных статей на английском языке, выполненная экспертами. |allenai/scitldr |https://huggingface.co/datasets/allenai/scitldr |https://github.com/dmitrymailk/verbalist/tree/master/verbalist/datasets/scitldr |en |3229 |258.748529 |71.41209752|60 |209 |252 |303 |689 |
|dim/linux_man_pages_tldr_summarized |https://huggingface.co/datasets/dim/linux_man_pages_tldr_summarized |Саммаризация мануалов для инструментов линукс в удобный набор команд с их кратким описанием. |tmskss/linux-man-pages-tldr-summarized |https://huggingface.co/datasets/tmskss/linux-man-pages-tldr-summarized |https://github.com/dmitrymailk/verbalist/tree/master/verbalist/datasets/linux-man-pages-tldr-summarized |en |481 |1567.727651 |3590.30871 |96 |405 |765 |1386 |49888 |
|dim/dolphin_ru_3k |https://huggingface.co/datasets/dim/dolphin_ru_3k |Подвыборка размера 3000 переведенных заданий dolphin. Примеры из оригинального датасета это промпты из FLANv2 и решения при помощи GPT-4 или GPT-3.5. |d0rj/dolphin-ru |https://huggingface.co/datasets/d0rj/dolphin-ru |https://github.com/dmitrymailk/verbalist/tree/master/verbalist/datasets/dolphin_ru |ru |3000 |556.1133333 |650.0962612|19 |207 |369.5 |720.25 |6787 |
|dim/runne_prompts |https://huggingface.co/datasets/dim/runne_prompts |Промпты составленные из датасета RuNNE. Лично я при обучении сотавил промпт следующим образом. Сначала идет текст "Найди все именованные сущности в данном тексте:", а затем шел сам текст. В качестве выхода модели нужно сгенерировать JSON где содержатся все найденные именованные сущности. К примеру так [{"name": "PERSON", "ent": "Ким Чен Нама", "pos": "0 12"}, {"name": "ORGANIZATION", "ent": "Полиция Малайзии", "pos": "56 72"}] |iluvvatar/RuNNE |https://huggingface.co/datasets/iluvvatar/RuNNE |https://github.com/dmitrymailk/verbalist/tree/master/verbalist/datasets/RuNNE |ru |537 |1479.750466 |230.0259174|581 |1337 |1480 |1635 |1988 |
|dim/lurk_prompts |https://huggingface.co/datasets/dim/lurk_prompts |Набор определений различных терминов с сайта lurk. Сами промпты были составлены автоматически следующим образом. напиши определение для (ОПРЕДЕЛЕНИЕ) в стиле lurk |averoo/lurk |https://huggingface.co/datasets/averoo/lurk/viewer/default/train?p=2 |https://github.com/dmitrymailk/verbalist/tree/master/verbalist/datasets/lurk |ru |5671 |3450.34262 |4147.897824|35 |710.5 |2010 |4593 |55098 |
|dim/panorama_prompts_10k |https://huggingface.co/datasets/dim/panorama_prompts_10k |Набор юмористических заголовков и текстов новостей с сайта панорама. |its5Q/panorama |https://huggingface.co/datasets/its5Q/panorama |https://github.com/dmitrymailk/verbalist/tree/master/verbalist/datasets/panorama |ru |11024 |516.9588171 |191.3774023|36 |422 |498 |585 |3496 |
|dim/resh_edu_short_prompts |https://huggingface.co/datasets/dim/resh_edu_short_prompts |Набор уроков с сайта resh.edu.ru включающих в себя название урока, тему, класс и текст урока с заданиями. |its5Q/resh-edu |https://huggingface.co/datasets/its5Q/resh-edu |https://github.com/dmitrymailk/verbalist/tree/master/verbalist/datasets/resh_edu |ru |2106 |1431.510921 |435.7847102|56 |1175.5 |1517 |1777 |2029 |
|dim/databricks_dolly_15k_ru |https://huggingface.co/datasets/dim/databricks_dolly_15k_ru |Переведенный датасет dolly на русский язык. Включает в себя набор инструкций на обширное количество тематик. |dwarf2/databricks-dolly-15k-ru |https://huggingface.co/dwarf2/databricks-dolly-15k-ru |https://github.com/dmitrymailk/verbalist/tree/master/verbalist/datasets/databricks_dolly_15k_ru |ru |14914 |305.4638595 |405.874049 |8 |87 |182 |370 |9268 |
|dim/databricks_dolly_15k_en |https://huggingface.co/datasets/dim/databricks_dolly_15k_en |databricks-dolly-15k — это набор данных с открытым исходным кодом, содержащий записи о выполнении инструкций, созданные тысячами сотрудников Databricks в нескольких поведенческих категориях, изложенных в документе InstructGPT, включая мозговой штурм, классификацию, закрытый контроль качества, генерацию, извлечение информации, открытый контроль качества и обобщение. |databricks/databricks-dolly-15k |https://huggingface.co/datasets/databricks/databricks-dolly-15k |https://github.com/dmitrymailk/verbalist/tree/master/verbalist/datasets/databricks_dolly_15k_en |en |15011 |204.7264006 |302.5539423|6 |57 |119 |242 |8883 |
|dim/grammarly_coedit |https://huggingface.co/datasets/dim/grammarly_coedit |Набор промптов, которые просят исправить грамматические, стилистические ошибки на английском. |grammarly/coedit |https://huggingface.co/datasets/grammarly/coedit |https://github.com/dmitrymailk/verbalist/tree/master/verbalist/datasets/grammarly_coedit |en |82466 |53.7128271 |26.73822864|10 |35 |46 |64 |694 |
|dim/kinopoisk_prompts |https://huggingface.co/datasets/dim/kinopoisk_prompts |Отзывы с кинопоиска на топ 250 фильмов. В промптах я прошу написать хороший, плохой или нейтральный отзыв на определенный фильм. |blinoff/kinopoisk |https://huggingface.co/datasets/blinoff/kinopoisk |https://github.com/dmitrymailk/verbalist/tree/master/verbalist/datasets/kinopoisk |ru |36591 |875.0955973 |565.3212035|48 |484 |733 |1117 |8628 |
|dim/medical_qa_ru_prompts |https://huggingface.co/datasets/dim/medical_qa_ru_prompts |Какие-то вопросы и ответы с какого-то медицинского форума. В данной версии датасета только первый ответ из оригинала. |blinoff/medical_qa_ru_data |https://huggingface.co/datasets/blinoff/medical_qa_ru_data |https://github.com/dmitrymailk/verbalist/tree/master/verbalist/datasets/medical_qa_ru_data |ru |80101 |206.710528 |175.4343973|12 |106 |161 |247 |5062 |
|dim/joke_explaination_prompts |https://huggingface.co/datasets/dim/joke_explaination_prompts |Объяснение шуток на английском. От изначального датасета отличается тем, что я убрал последнее предложение из объяснения, так как оно ссылается на видео на сайте. |theblackcat102/joke_explaination |https://huggingface.co/datasets/theblackcat102/joke_explaination |https://github.com/dmitrymailk/verbalist/tree/master/verbalist/datasets/joke_explaination |en |364 |143.5741758 |68.90275411|21 |99 |137.5 |189.25 |334 |
|dim/oa_stackexchange_200k |https://huggingface.co/datasets/dim/oa_stackexchange_200k |Вопросы-ответы со stackexchange. Оригинальный датасет был составлен следующим образом: были выбраны только темы с принятым ответом, для которых длина вопроса и ответа составляет менее 1000 символов. Другие ответы, вопросы без принятых ответов или длинные записи были удалены. Так как оригинальный датасет слишком большой, я рандомно выбрал 200k семплов. |donfu/oa-stackexchange |https://huggingface.co/datasets/donfu/oa-stackexchange |https://github.com/dmitrymailk/verbalist/tree/master/verbalist/datasets/oa_stackexchange |en |200000 |276.29862 |112.5004436|22 |194 |265 |345 |1226 |
|dim/scale_helpful_no_math |https://huggingface.co/datasets/dim/scale_helpful_no_math |Какой-то набор диалогов с вопросами-ответами на английском, происхождение неизвестно. |HuggingFaceH4/scale_helpful_no_math |https://huggingface.co/datasets/HuggingFaceH4/scale_helpful_no_math/viewer/default/train_rm |https://github.com/dmitrymailk/verbalist/tree/master/verbalist/datasets/scale_helpful_no_math |en |17095 |1235.302603 |838.1097885|53 |663 |1063 |1617 |34480 |
|dim/law_stackexchange_prompts |https://huggingface.co/datasets/dim/law_stackexchange_prompts |Вопросы про закон на английском языке со StackExchange. Оригинальный датасет был преобразован в markdown. |ymoslem/Law-StackExchange |https://huggingface.co/datasets/ymoslem/Law-StackExchange |https://github.com/dmitrymailk/verbalist/tree/master/verbalist/datasets/law_stackexchange |en |24343 |689.1184324 |565.0316906|43 |354 |540 |836 |8969 |
|dim/ficbook_prompts_best_10k |https://huggingface.co/datasets/dim/ficbook_prompts_best_10k |Топ 10k лучших фанфиков с сайта ficbook.net. Все промпты выглядят следующим образом: напиши фанфик с названием {title} и следующим описанием {description}, с тегами {tags}, Где title это оригинальное название, description оригинальное описание, tags это теги данного произведения. |AlexWortega/FicBook |https://huggingface.co/datasets/AlexWortega/FicBook |https://github.com/dmitrymailk/verbalist/tree/master/verbalist/datasets/ficbook |ru |10000 |1737.8214 |402.0748161|166 |1716 |1950 |1950 |1952 |
|dim/azbyka_logic_ru |https://huggingface.co/datasets/dim/azbyka_logic_ru |Небольшой набор детских логических и православных задач, взятых с сайта https://azbyka.ru/deti/logicheskie-i-zanimatelnye-zadachi . Обычно у них почти нет развернутого решения, только ответ. Я пытался расписать решение некоторых задач, но меня хватило только на 35, если кто-то займется подобным буду рад https://docs.google.com/spreadsheets/d/1JRbtppbZCUbV_Eqd0nKbRDQEuPnJIAgJ70cUILEDUI4/edit?usp=sharing . |Логические и занимательные задачи (300 задач) |https://azbyka.ru/deti/logicheskie-i-zanimatelnye-zadachi |https://github.com/dmitrymailk/verbalist/tree/master/verbalist/datasets/azbyka_logic_ru |ru |480 |77.4375 |77.56990416|14 |31 |50 |91 |652 |
|dim/povarenok |https://huggingface.co/datasets/dim/povarenok |46k лучших рецептов с сайта povarenok.ru, содержит текст рецепта, список ингридиентов, название блюда |https://www.povarenok.ru/recipes/ |https://www.povarenok.ru/recipes/ |https://github.com/dmitrymailk/verbalist/tree/master/verbalist/datasets/povarenok |ru |46500 |488.9118495 |344.8563249|31 |281 |440 |632 |5542 |
|dim/AO3_fandom_chatbot_1to1 |https://huggingface.co/datasets/dim/AO3_fandom_chatbot_1to1 |Какой-то набор ролеплейных диалогов с описанием персонажей и их отыгрышем. Происхождение неизвестно. |ebony59/AO3_fandom_chatbot_1to1 |https://huggingface.co/datasets/ebony59/AO3_fandom_chatbot_1to1 |https://github.com/dmitrymailk/verbalist/tree/master/verbalist/datasets/AO3_fandom_chatbot_1to1 |en |614 |493.7166124 |226.3885365|129 |328.25 |432.5 |611.75 |1272 |
|dim/habr_prompts_5k |https://huggingface.co/datasets/dim/habr_prompts_5k |Статьи с хабра. Датасет был составлен с помощью chatgpt, chatgpt преобразовывал заголовки таким образом чтобы они звучали как вопросы от пользователя, в качестве таргета выступала сама статья. |IlyaGusev/habr |https://huggingface.co/datasets/IlyaGusev/habr |https://github.com/dmitrymailk/verbalist/tree/master/verbalist/datasets/habr |ru |5000 |1732.892 |454.8418369|19 |1920.75|1950 |1951 |1952 |
|dim/what_where_when_50k |https://huggingface.co/datasets/dim/what_where_when_50k |50k вопросов с решениями с сайта что где когда. В качестве промпта выступает вопрос, в качестве ответа конкатенация объяснения и краткого ответа. Все вопросы-ответы вы можете найти по этой ссылке https://huggingface.co/datasets/dim/what_where_when_ru |https://db.chgk.info |https://db.chgk.info |https://github.com/dmitrymailk/verbalist/tree/master/verbalist/datasets/what_where_when |ru |50000 |169.1862 |68.91119898|18 |122 |158 |202 |1167 |
|dim/competition_math |https://huggingface.co/datasets/dim/competition_math |Датасет олимпиадной математики на английском. The Mathematics Aptitude Test of Heuristics (MATH) dataset. |competition_math |https://huggingface.co/datasets/competition_math |https://github.com/dmitrymailk/verbalist/tree/master/verbalist/datasets/competition_math |en |7500 |317.5254667 |267.8583731|34 |147 |234 |393 |3029 |
|dim/sharegpt_short_en_30k |https://huggingface.co/datasets/dim/sharegpt_short_en_30k |Короткие диалоги на английском из sharegpt |RyokoAI/ShareGPT52K |https://huggingface.co/datasets/RyokoAI/ShareGPT52K |https://github.com/dmitrymailk/verbalist/tree/master/verbalist/datasets/sharegpt |en |29597 |749.3149981 |516.3702473|3 |336 |630 |1095 |2021 |
|dim/ru_turbo_alpaca_evol_instruct |https://huggingface.co/datasets/dim/ru_turbo_alpaca_evol_instruct |Набор инструкций различной тематики на русском языке, сгенерированных при помощи chatgpt. |IlyaGusev/ru_turbo_alpaca_evol_instruct |https://huggingface.co/datasets/IlyaGusev/ru_turbo_alpaca_evol_instruct |https://github.com/dmitrymailk/verbalist/tree/master/verbalist/datasets/ru_turbo_alpaca_evol_instruct |ru |47793 |453.0887996 |289.5498356|17 |221 |430 |623 |4647 |
|dim/ru_turbo_saiga |https://huggingface.co/datasets/dim/ru_turbo_saiga |Набор инструкций различной тематики на русском языке, сгенерированных при помощи chatgpt. |IlyaGusev/ru_turbo_saiga |https://huggingface.co/datasets/IlyaGusev/ru_turbo_saiga |https://github.com/dmitrymailk/verbalist/tree/master/verbalist/datasets/ru_turbo_saiga |ru |37699 |412.7508687 |113.346917 |87 |339 |398 |466 |1427 |
|dim/bugurt_completion_prompts |https://huggingface.co/datasets/dim/bugurt_completion_prompts |Обрезанные бугурты, где в качестве промпта используется строка вида - продолжи бугурт: первая строчка бугурта |https://t.me/bugurtthread |https://t.me/bugurtthread |https://github.com/dmitrymailk/verbalist/tree/master/verbalist/datasets/bugurt_thread |ru |5000 |280.2466 |320.4353681|32 |111 |178 |331 |11333 |
|dim/tldr_17_50k |https://huggingface.co/datasets/dim/tldr_17_50k |Очень вольная абстрактная саммаризация постов с реддита в одну строчку |webis/tldr-17 |https://huggingface.co/datasets/webis/tldr-17 |https://github.com/dmitrymailk/verbalist/tree/master/verbalist/datasets/tldr_17 |en |50000 |421.12752 |403.346214 |10 |177 |303 |525 |9592 |
|dim/grade_school_math_instructions |https://huggingface.co/datasets/dim/grade_school_math_instructions |OpenAI's grade-school-math датасет преобразованный в промпты. |qwedsacf/grade-school-math-instructions |https://huggingface.co/datasets/qwedsacf/grade-school-math-instructions |https://github.com/dmitrymailk/verbalist/tree/master/verbalist/datasets/grade-school-math-instructions |en |8792 |171.6310282 |63.09232668|50 |124 |161 |206 |511 |
|dim/tldr_news |https://huggingface.co/datasets/dim/tldr_news |Хедлайны и текст новостей на различную тематику. |JulesBelveze/tldr_news |https://huggingface.co/datasets/JulesBelveze/tldr_news |https://github.com/dmitrymailk/verbalist/tree/master/verbalist/datasets/tldr_news |en |7138 |133.1004483 |46.48736493|23 |100 |133 |161 |476 |
|dim/grade_school_math_instructions_ru|https://huggingface.co/datasets/dim/grade_school_math_instructions_ru|OpenAI's grade-school-math датасет переведенный на русский. |d0rj/gsm8k-ru |https://huggingface.co/datasets/d0rj/gsm8k-ru |https://github.com/dmitrymailk/verbalist/tree/master/verbalist/datasets/grade_school_math_instructions_ru|ru |7473 |259.8321959 |100.1229127|78 |185 |241 |314 |838 |
|dim/dialogsum |https://huggingface.co/datasets/dim/dialogsum |Саммаризация диалогов на английском языке, разметка выполнялась вручную. |knkarthick/dialogsum |https://huggingface.co/datasets/knkarthick/dialogsum |https://github.com/dmitrymailk/verbalist/tree/master/verbalist/datasets/dialogsum |en |12460 |269.6467095 |126.285664 |75 |191 |245 |327 |1725 |
|dim/HC3_ru |https://huggingface.co/datasets/dim/HC3_ru |Вопросы-ответы с реддита, есть ответы сгенерированные chatgpt и реальные ответы пользователей. Я использовал только реальные ответы пользователей. |d0rj/HC3-ru |https://huggingface.co/datasets/d0rj/HC3-ru |https://github.com/dmitrymailk/verbalist/tree/master/verbalist/datasets/HC3_ru |ru |24322 |360.5608503 |330.2285903|15 |168 |267 |435 |10025 |
|dim/horoscopes_ru_10k |https://huggingface.co/datasets/dim/horoscopes_ru_10k |10k гороскопов, с промптами где я прошу сгенерировать гороском для определенного знака зодиака |dkagramanyan/horoscopes_ru |https://huggingface.co/datasets/dkagramanyan/horoscopes_ru |https://github.com/dmitrymailk/verbalist/tree/master/verbalist/datasets/horoscopes_ru |ru |10000 |183.1443 |31.62023184|55 |159 |187 |201 |464 |
|dim/yandex_q_200k |https://huggingface.co/datasets/dim/yandex_q_200k |200k рандомно выбранных вопросов-ответов с сайта yandex q. |its5Q/yandex-q |https://huggingface.co/datasets/its5Q/yandex-q |https://github.com/dmitrymailk/verbalist/tree/master/verbalist/datasets/yandex_q |ru |200000 |304.569005 |340.7808288|18 |127 |202 |353 |19294 |
|dim/leetcodesolutions_en_2k |https://huggingface.co/datasets/dim/leetcodesolutions_en_2k |Решения задач с leetcode на разных языках. |TigerResearch/tigerbot-kaggle-leetcodesolutions-en-2k |https://huggingface.co/datasets/TigerResearch/tigerbot-kaggle-leetcodesolutions-en-2k |https://github.com/dmitrymailk/verbalist/tree/master/verbalist/datasets/leetcodesolutions_en_2k |en |2048 |740.7441406 |253.2493282|297 |565 |685 |857 |1960 |
|dim/forum_uristov_rf_prompts |https://huggingface.co/datasets/dim/forum_uristov_rf_prompts |Вопросы-ответы с российского юридического форума. |https://xn----dtbrojdkckkfj9k.xn--p1ai/vopros-yuristu?page=560|https://xn----dtbrojdkckkfj9k.xn--p1ai/vopros-yuristu?page=560 |https://github.com/dmitrymailk/verbalist/tree/master/verbalist/datasets/forum_uristov_rf |ru |1849 |321.0540833 |429.58896 |31 |134 |210 |349 |6470 |
|dim/dialogsum_ru |https://huggingface.co/datasets/dim/dialogsum_ru |Саммаризация диалогов на русском языке, перевод dialogsum. |d0rj/dialogsum-ru |https://huggingface.co/datasets/d0rj/dialogsum-ru |https://github.com/dmitrymailk/verbalist/tree/master/verbalist/datasets/dialogsum-ru |ru |12460 |364.2813804 |178.7117754|98 |250 |329 |446 |2300 |
|dim/huggingartists_prompts |https://huggingface.co/datasets/dim/huggingartists_prompts |Промпты, которые просят продолжить песню в стиле определенного исполнителя. В данном наборе содержатся почти все исполнители, которых вы можете найти в этой организации https://huggingface.co/huggingartists |https://huggingface.co/huggingartists |https://huggingface.co/huggingartists |https://github.com/dmitrymailk/verbalist/tree/master/verbalist/datasets/huggingartists |ru |64006 |561.6732025 |586.18458 |28 |297 |453 |720 |32949 |
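As a usage sketch (not part of the original card), each source listed above is also available as a separate split of the combined dataset declared in the YAML header. The split names come from this card's YAML; the repository id used below is this card's (`DeepPavlov/verbalist_prompts`), while the bullet above links to `dim/verbalist_prompts`.
```python
from datasets import load_dataset

# Every source corpus is stored as its own split; split names come from the YAML header above.
oasst_ru = load_dataset("DeepPavlov/verbalist_prompts", split="dim_oasst_ru")

# Each example is a list of conversation turns stored in the "conversation_text" feature.
print(oasst_ru[0]["conversation_text"])
```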
### Models
At the moment 3 models are being trained: llama2_7b, llama2_13b and llama1_30b.
You can follow their training curves live at https://api.wandb.ai/links/dimweb/7rh0c7iz
### Training code
- [overall training loop](https://github.com/dmitrymailk/verbalist/blob/master/verbalist/model/src/train.py)
- [construction of the training datasets](https://github.com/dmitrymailk/verbalist/blob/master/verbalist/model/src/dataset.py#L176)
### Hardware
All training and inference is performed on an A100 GPU; on other GPUs a substantial degradation of inference quality was observed, and this aspect requires further investigation.
- NVIDIA A100-SXM4-40GB
- NVIDIA-SMI 535.54.03
- Driver Version: 535.54.03
- CUDA Version: 12.2
- torch==2.0.1+cu118
### Further development
The simplest next step is to translate the existing good English datasets into Russian with the help of GPT-4.
A harder one is to collect more diverse data from different domains. I can only suggest ideas for which datasets could still be collected.
- solution books for literature, Russian and other school subjects
- tasks from various freelance job boards
- [short retellings of literary works, analyses of works, and essays about them](http://www.litra.ru/shortwork/)
- [tutorials from DigitalOcean (more than 7000)](https://www.digitalocean.com/community/tutorials)
- [tutorials from Selectel](https://selectel.ru/blog/tutorials/)
- more forums on various topics
- [free essays from IvyPanda Essays](https://ivypanda.com/essays/), followed by their translation into Russian
- more poems and songs
- [Russian olympiad problems](https://math.ru/problems/) - these are very hard to collect, since most of them exist only as PDF or docx. There are quite a lot of them, though, and they differ noticeably from English-language olympiad math. But I have no time to work on this.
- fanfiction in foreign languages
- rewrite the current automatically generated prompts into more diverse ones with the help of ChatGPT
[
-0.04425048828125,
-0.04620361328125,
0.006317138671875,
0.0172882080078125,
-0.00687408447265625,
0.00643157958984375,
-0.0217437744140625,
-0.0204620361328125,
0.05938720703125,
0.012969970703125,
-0.0501708984375,
-0.05487060546875,
-0.049041748046875,
-0... |
autoevaluate/autoeval-eval-tweet_eval-sentiment-45124a-38605145054 | 2023-10-04T14:23:31.000Z | [
"autotrain",
"evaluation",
"region:us"
] | autoevaluate | null | null | 0 | 11 | 2023-10-04T14:20:04 | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- tweet_eval
eval_info:
task: multi_class_classification
model: siberett/roberta-sentiment-analysis-finetune
metrics: []
dataset_name: tweet_eval
dataset_config: sentiment
dataset_split: train
col_mapping:
text: text
target: label
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Multi-class Text Classification
* Model: siberett/roberta-sentiment-analysis-finetune
* Dataset: tweet_eval
* Config: sentiment
* Split: train
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@emuggins](https://huggingface.co/emuggins) for evaluating this model. | 889 | [
[
-0.031524658203125,
-0.0241546630859375,
0.0208587646484375,
0.0209503173828125,
-0.0031280517578125,
0.0047149658203125,
-0.01294708251953125,
-0.0243988037109375,
0.00847625732421875,
0.0162353515625,
-0.0634765625,
-0.0217437744140625,
-0.060791015625,
-0... |
Sharka/CIVQA_easyocr_simple_train_half | 2023-10-04T15:48:19.000Z | [
"region:us"
] | Sharka | null | null | 0 | 11 | 2023-10-04T15:48:08 | ---
dataset_info:
features:
- name: id
dtype: string
- name: words
sequence: string
- name: answers
dtype: string
- name: bboxes
sequence:
sequence: float32
- name: answers_bboxes
sequence:
sequence: float32
- name: questions
dtype: string
- name: image
dtype: string
splits:
- name: train
num_bytes: 963207990
num_examples: 143765
download_size: 41076905
dataset_size: 963207990
---
# Dataset Card for "CIVQA_easyocr_simple_train_half"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 641 | [
[
-0.0360107421875,
0.0008172988891601562,
0.01338958740234375,
0.01517486572265625,
-0.01371002197265625,
-0.005199432373046875,
0.0223846435546875,
0.0146942138671875,
0.040008544921875,
0.0211639404296875,
-0.0533447265625,
-0.038604736328125,
-0.02890014648437... |
philschmid/markdown-documentation-transformers | 2023-10-05T13:42:59.000Z | [
"license:apache-2.0",
"region:us"
] | philschmid | null | null | 0 | 11 | 2023-10-05T13:38:10 | ---
license: apache-2.0
---
# Hugging Face Transformers documentation as markdown dataset
This dataset was created using [Clipper.js](https://github.com/philschmid/clipper.js). Clipper is a Node.js command-line tool that makes it easy to clip content from web pages and convert it to Markdown; under the hood it uses Mozilla's Readability library and Turndown to parse and convert the page content.
This dataset can be used to build RAG applications that want to draw on the Transformers documentation.
Example document: https://huggingface.co/docs/transformers/peft
```
# Load adapters with 🤗 PEFT
[Parameter-Efficient Fine Tuning (PEFT)](https://huggingface.co/blog/peft) methods freeze the pretrained model parameters during fine-tuning and add a small number of trainable parameters (the adapters) on top of it. The adapters are trained to learn task-specific information. This approach has been shown to be very memory-efficient with lower compute usage while producing results comparable to a fully fine-tuned model.
Adapters trained with PEFT are also usually an order of magnitude smaller than the full model, making it convenient to share, store, and load them.

The adapter weights for a OPTForCausalLM model stored on the Hub are only ~6MB compared to the full size of the model weights, which can be ~700MB.
If you’re interested in learning more about the 🤗 PEFT library, check out the [documentation](https://huggingface.co/docs/peft/index).
## Setup
Get started by installing 🤗 PEFT:
If you want to try out the brand new features, you might be interested in installing the library from source:
....
``` | 1,761 | [
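As an illustrative sketch (not part of the original card), the documents can be loaded and split into chunks for retrieval; the repository id comes from this card, while the split name, the `text` column name, and the chunking heuristic are assumptions.
```python
from datasets import load_dataset

# Repository id from this card; split and column names are assumptions - check ds.column_names.
ds = load_dataset("philschmid/markdown-documentation-transformers", split="train")

def chunk_markdown(markdown: str, max_chars: int = 1500) -> list[str]:
    """Naively split a markdown document at headings once a chunk grows past max_chars."""
    chunks, current = [], ""
    for line in markdown.splitlines(keepends=True):
        if line.startswith("#") and len(current) >= max_chars:
            chunks.append(current)
            current = ""
        current += line
    if current:
        chunks.append(current)
    return chunks

passages = [c for doc in ds["text"] for c in chunk_markdown(doc)]
print(f"{len(ds)} documents -> {len(passages)} retrieval chunks")
```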
[
-0.06304931640625,
-0.035430908203125,
0.0235443115234375,
0.013427734375,
-0.00872802734375,
0.00406646728515625,
-0.00868988037109375,
-0.023895263671875,
0.032684326171875,
0.05255126953125,
-0.053558349609375,
-0.02410888671875,
-0.038360595703125,
-0.00... |
shengqin/web-attacks-old | 2023-10-05T15:38:36.000Z | [
"region:us"
] | shengqin | null | null | 0 | 11 | 2023-10-05T15:37:30 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
Intuit-GenSRF/jigsaw-toxic-comment-train-es | 2023-10-05T19:27:34.000Z | [
"region:us"
] | Intuit-GenSRF | null | null | 0 | 11 | 2023-10-05T19:27:29 | ---
dataset_info:
features:
- name: text
dtype: string
- name: labels
sequence: string
splits:
- name: train
num_bytes: 98022741
num_examples: 223378
download_size: 60601678
dataset_size: 98022741
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "jigsaw-train-es"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 486 | [
[
-0.0350341796875,
-0.0078277587890625,
0.01453399658203125,
0.012939453125,
-0.0288848876953125,
-0.00008600950241088867,
0.0217132568359375,
-0.008453369140625,
0.0716552734375,
0.0194244384765625,
-0.059295654296875,
-0.03955078125,
-0.047119140625,
-0.016... |
Intuit-GenSRF/hackathon-somos-nlp-2023-suicide-comments-es-en | 2023-10-06T22:27:58.000Z | [
"region:us"
] | Intuit-GenSRF | null | null | 0 | 11 | 2023-10-06T22:27:56 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: text
dtype: string
- name: labels
sequence: string
- name: processed_text
sequence: string
- name: num_tokens
dtype: int64
- name: text_en
dtype: string
splits:
- name: train
num_bytes: 2629537
num_examples: 8824
download_size: 1693102
dataset_size: 2629537
---
# Dataset Card for "hackathon-somos-nlp-2023-suicide-comments-es-en"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 633 | [
[
-0.0280914306640625,
-0.01442718505859375,
0.032562255859375,
0.034332275390625,
-0.00888824462890625,
-0.003665924072265625,
0.005664825439453125,
-0.0007200241088867188,
0.059173583984375,
0.0286102294921875,
-0.0931396484375,
-0.0443115234375,
-0.034881591796... |
chiualfredo/oil_origin | 2023-10-07T04:59:08.000Z | [
"region:us"
] | chiualfredo | null | null | 0 | 11 | 2023-10-07T04:56:43 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
chargoddard/rpguild | 2023-10-18T00:34:26.000Z | [
"task_categories:conversational",
"task_categories:text-generation",
"size_categories:100K<n<1M",
"language:en",
"license:cc-by-nc-4.0",
"roleplay",
"not-for-all-audiences",
"region:us"
] | chargoddard | null | null | 1 | 11 | 2023-10-07T08:04:37 | ---
dataset_info:
- config_name: default
features:
- name: username
dtype: string
- name: char_name
dtype: string
- name: bio
dtype: string
- name: context
list:
- name: text
dtype: string
- name: username
dtype: string
- name: char_name
dtype: string
- name: reply
dtype: string
- name: has_nameless
dtype: bool
- name: char_confidence
dtype: float64
splits:
- name: train
num_bytes: 1921588254
num_examples: 140469
download_size: 764073630
dataset_size: 1921588254
- config_name: high_confidence
features:
- name: username
dtype: string
- name: char_name
dtype: string
- name: bio
dtype: string
- name: context
list:
- name: text
dtype: string
- name: username
dtype: string
- name: char_name
dtype: string
- name: reply
dtype: string
- name: has_nameless
dtype: bool
- name: char_confidence
dtype: float64
splits:
- name: train
num_bytes: 949419370.7676569
num_examples: 69403
download_size: 386317057
dataset_size: 949419370.7676569
- config_name: pruned
features:
- name: username
dtype: string
- name: char_name
dtype: string
- name: bio
dtype: string
- name: context
list:
- name: text
dtype: string
- name: username
dtype: string
- name: char_name
dtype: string
- name: reply
dtype: string
- name: has_nameless
dtype: bool
- name: char_confidence
dtype: float64
splits:
- name: train
num_bytes: 782484734.2032762
num_examples: 57200
download_size: 326987882
dataset_size: 782484734.2032762
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- config_name: high_confidence
data_files:
- split: train
path: high_confidence/train-*
- config_name: pruned
data_files:
- split: train
path: pruned/train-*
license: cc-by-nc-4.0
task_categories:
- conversational
- text-generation
tags:
- roleplay
- not-for-all-audiences
size_categories:
- 100K<n<1M
language:
- en
---
Data scraped from [roleplayerguild](https://www.roleplayerguild.com/) and parsed into prompts with a conversation history and associated character bio.
As usernames can be associated with multiple biographies, assignment of characters is a little fuzzy. The `char_confidence` feature reflects how likely this assignment is to be correct. Not all posts in the conversation history necessarily have an associated character name. The column `has_nameless` reflects this.
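For example, rows can be filtered on these columns to approximate something like the `high_confidence` config; this is a minimal sketch, and the 0.5 cutoff is an arbitrary illustration rather than the threshold actually used for that config.
```
from datasets import load_dataset

ds = load_dataset("chargoddard/rpguild", split="train")

# Keep rows with a likely-correct character/bio assignment and no nameless posts
# in the conversation history.
confident = ds.filter(
    lambda ex: ex["char_confidence"] >= 0.5 and not ex["has_nameless"]
)
print(len(confident), "of", len(ds), "examples kept")
```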
Each row should fit into 4096 Llama tokens, depending on your prompt format - there's built-in slack of 128 tokens + 8 per message. | 2,693 | [
[
-0.01268768310546875,
-0.050811767578125,
0.055694580078125,
0.029144287109375,
-0.00983428955078125,
0.0183563232421875,
0.02362060546875,
-0.02520751953125,
0.0565185546875,
0.044189453125,
-0.0762939453125,
-0.038055419921875,
-0.0305633544921875,
0.02610... |
sidthip/testquiz | 2023-10-07T10:20:05.000Z | [
"region:us"
] | sidthip | null | null | 0 | 11 | 2023-10-07T10:06:01 | Entry not found | 15 | [
[
-0.0214080810546875,
-0.01497650146484375,
0.05718994140625,
0.028839111328125,
-0.0350341796875,
0.046539306640625,
0.052490234375,
0.00507354736328125,
0.051361083984375,
0.016998291015625,
-0.05206298828125,
-0.01496124267578125,
-0.06036376953125,
0.0379... |
towhid/aesir-train-420 | 2023-10-07T18:10:39.000Z | [
"region:us"
] | towhid | null | null | 0 | 11 | 2023-10-07T17:11:13 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
carnival13/test_DA_tokenized2 | 2023-10-08T03:43:15.000Z | [
"region:us"
] | carnival13 | null | null | 0 | 11 | 2023-10-08T03:43:06 | ---
dataset_info:
features:
- name: pass_label
dtype: int64
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
splits:
- name: train
num_bytes: 456736095
num_examples: 335850
download_size: 104506387
dataset_size: 456736095
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "test_DA_tokenized2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 543 | [
[
-0.029449462890625,
-0.03521728515625,
-0.00040793418884277344,
0.01474761962890625,
-0.0181121826171875,
0.00457000732421875,
0.0232391357421875,
-0.007457733154296875,
0.054351806640625,
0.0206756591796875,
-0.0404052734375,
-0.044219970703125,
-0.049377441406... |
pythainlp/thaisum | 2023-10-08T14:06:17.000Z | [
"task_categories:summarization",
"task_categories:text-generation",
"task_categories:fill-mask",
"task_ids:language-modeling",
"task_ids:masked-language-modeling",
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_d... | pythainlp | null | null | 0 | 11 | 2023-10-08T11:06:14 | ---
annotations_creators:
- no-annotation
language_creators:
- found
language:
- th
license:
- mit
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
source_datasets:
- original
task_categories:
- summarization
- text-generation
- fill-mask
task_ids:
- language-modeling
- masked-language-modeling
paperswithcode_id: null
pretty_name: ThaiSum
---
# Dataset Card for ThaiSum
This dataset was forked from [thaisum](https://huggingface.co/datasets/thaisum) to HF hub.
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://github.com/nakhunchumpolsathien/ThaiSum
- **Repository:** https://github.com/nakhunchumpolsathien/ThaiSum
- **Paper:**
- **Leaderboard:**
- **Point of Contact:** https://github.com/nakhunchumpolsathien
### Dataset Summary
ThaiSum is a large-scale corpus for Thai text summarization obtained from several online news websites namely Thairath, ThaiPBS, Prachathai, and The Standard. This dataset consists of over 350,000 article and summary pairs written by journalists.
### Supported Tasks and Leaderboards
summarization, language modeling
### Languages
Thai
## Dataset Structure
### Data Instances
```
{'body': 'กีเก ซานเชซ ฟลอเรส\xa0 กุนซือเลือดกระทิงของทีมวัตฟอร์ด\xa0 เมินประเด็นจุดโทษปัญหาในเกมพรีเมียร์ลีก อังกฤษ นัดที่แตนอาละวาดเปิดบ้านพ่าย คริสตัล พาเลซ 0-1ชี้ทีมของเขาเล่นไม่ดีพอเอง,สำนักข่าวต่างประเทศรายงานวันที่ 27 ก.ย. ว่า กีเก ซานเชซ ฟลอเรส\xa0 ผู้จัดการทีมชาวสเปน ของ แตนอาละวาด วัตฟอร์ด\xa0 ยอมรับทีมของเขาเล่นได้ไม่ดีพอเอง ในเกมพรีเมียร์ลีก อังกฤษ นัดเปิดบ้านพ่าย อินทรีผงาด คริสตัล พาเลซ 0-1 เมื่อคืนวันอาทิตย์ที่ผ่านมา,เกมนี้จุดเปลี่ยนมาอยู่ที่การได้จุดโทษในช่วงครึ่งหลังของ คริสตัล พาเลซ ซึ่งไม่ค่อยชัดเจนเท่าไหร่ว่า อัลลัน นียอม นั้นไปทำฟาล์วใส่ วิลฟรีด ซาฮา ในเขตโทษหรือไม่ แต่ผู้ตัดสินก็ชี้เป็นจุดโทษ ซึ่ง โยอัน กาบาย สังหารไม่พลาด และเป็นประตูชัยช่วยให้ คริสตัล พาเลซ เอาชนะ วัตฟอร์ด ไป 1-0 และเป็นการพ่ายแพ้ในบ้านนัดแรกของวัตฟอร์ดในฤดูกาลนี้อีกด้วย,ฟลอเรส กล่าวว่า มันเป็นเรื่องยากในการหยุดเกมรุกของคริสตัล พาเลซ ซึ่งมันอึดอัดจริงๆสำหรับเรา เราเล่นกันได้ไม่ดีนักในตอนที่ได้ครองบอล เราต้องเล่นทางริมเส้นให้มากกว่านี้ เราไม่สามารถหยุดเกมสวนกลับของพวกเขาได้ และแนวรับของเราก็ยืนไม่เป็นระเบียบสักเท่าไหร่ในช่วงครึ่งแรก ส่วนเรื่องจุดโทษการตัดสินใจขั้นสุดท้ายมันอยู่ที่ผู้ตัดสิน ซึ่งมันเป็นการตัดสินใจที่สำคัญ ผมเองก็ไม่รู้ว่าเขาตัดสินถูกหรือเปล่า บางทีมันอาจเป็นจุดที่ตัดสินเกมนี้เลย แต่เราไม่ได้แพ้เกมนี้เพราะจุดโทษ เราแพ้ในวันนี้เพราะเราเล่นไม่ดีและคริสตัล พาเลซ เล่นดีกว่าเรา เราไม่ได้มีฟอร์มการเล่นที่ดีในเกมนี้เลย', 'summary': 'กีเก ซานเชซ ฟลอเรส กุนซือเลือดกระทิงของทีมวัตฟอร์ด เมินประเด็นจุดโทษปัญหาในเกมพรีเมียร์ลีก อังกฤษ นัดที่แตนอาละวาดเปิดบ้านพ่าย คริสตัล พาเลซ 0-1ชี้ทีมของเขาเล่นไม่ดีพอเอง', 'tags': 'พรีเมียร์ลีก,วัตฟอร์ด,คริสตัล พาเลซ,กีเก ซานเชซ ฟลอเรส,ข่าวกีฬา,ข่าว,ไทยรัฐออนไลน์', 'title': 'ฟลอเรส รับ วัตฟอร์ดห่วยเองเกมพ่ายพาเลซคาบ้าน', 'type': '', 'url': 'https://www.thairath.co.th/content/528322'}
```
### Data Fields
- `title`: title of article
- `body`: body of article
- `summary`: summary of article
- `type`: type of article, if any
- `tags`: tags of article, separated by `,`
- `url`: URL of article
### Data Splits
train/valid/test: 358868 / 11000 / 11000
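A minimal loading sketch; the split names below follow the counts above but are an assumption, so verify them against the repository.
```
from datasets import load_dataset

# Load the forked copy of ThaiSum from the Hub (split names assumed).
thaisum = load_dataset("pythainlp/thaisum")

example = thaisum["train"][0]
print(example["title"])
print(example["summary"])
print(example["url"])
```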
## Dataset Creation
### Curation Rationale
Sequence-to-sequence (Seq2Seq) models have shown great achievements in text summarization. However, Seq2Seq models often require large-scale training data to achieve effective results. Although many impressive advancements have been made in the text summarization field, most summarization studies focus on resource-rich languages. The progress of Thai text summarization is still far behind. The dearth of large-scale datasets keeps Thai text summarization in its infancy. To the best of our knowledge, no large-scale dataset for Thai text summarization is available anywhere. Thus, we present ThaiSum, a large-scale corpus for Thai text summarization obtained from several online news websites, namely Thairath, ThaiPBS, Prachathai, and The Standard.
### Source Data
#### Initial Data Collection and Normalization
We used the Python library Scrapy to crawl articles from several news websites, namely Thairath, Prachatai, ThaiPBS and The Standard. We first collected news URLs provided in their sitemaps. During web crawling, we used HTML markup and metadata available in HTML pages to identify the article text, summary, headline, tags and label. Collected articles were published online from 2014 to August 2020. <br> <br>
We further performed a data-cleansing process to minimize noisy data. We filtered out articles whose article text or summary is missing. Articles whose article text has fewer than 150 words or whose summary has fewer than 15 words were removed. We also discarded articles that contain at least one of the following tags: ‘ดวง’ (horoscope), ‘นิยาย’ (novel), ‘อินสตราแกรมดารา’ (celebrity Instagram), ‘คลิปสุดฮา’ (funny video) and ‘สรุปข่าว’ (highlight news). Some summaries were completely irrelevant to their original article texts. To eliminate those irrelevant summaries, we calculated an abstractedness score between each summary and its article text. The abstractedness score is written formally as: <br>
$$\frac{|S-A|}{r} \times 100$$
<br>where 𝑆 denotes the set of article tokens, 𝐴 denotes the set of summary tokens, and 𝑟 denotes the total number of summary tokens. We omitted articles with a 1-gram abstractedness score higher than 60%.
<br><br>
It is important to point out that we used [PyThaiNLP](https://github.com/PyThaiNLP/pythainlp), version 2.2.4, with the tokenizing engine `newmm`, to process Thai texts in this study. It is challenging to tokenize running Thai text into words or sentences because there are no clear word/sentence delimiters in the Thai language. Therefore, using different tokenization engines may result in different word/sentence segmentations.
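A small sketch of that abstractedness computation, using PyThaiNLP's `newmm` tokenizer as stated above. Note that the formula is interpreted here as the share of summary tokens that never appear in the article (the usual novel-n-gram reading), which is an assumption about the authors' intent, and the placeholder strings are not real data.
```
from pythainlp.tokenize import word_tokenize

def abstractedness(article: str, summary: str) -> float:
    # Tokenize both texts with the same engine the authors used (newmm).
    article_tokens = set(word_tokenize(article, engine="newmm"))
    summary_tokens = word_tokenize(summary, engine="newmm")
    # Share (in %) of summary tokens that do not occur in the article.
    novel = [t for t in summary_tokens if t not in article_tokens]
    return 100.0 * len(novel) / max(len(summary_tokens), 1)

article_text = "ข้อความบทความ"  # placeholder article body
summary_text = "ข้อความสรุป"    # placeholder summary
# Articles with a 1-gram abstractedness score above 60% were discarded.
print(abstractedness(article_text, summary_text) <= 60.0)
```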
After the data-cleansing process, the ThaiSum dataset contains over 358,000 articles. The size of this dataset is comparable to a well-known English document summarization dataset, the CNN/Daily Mail dataset. Moreover, we analyse the characteristics of this dataset by measuring the abstractedness level, compression rate, and content diversity. For more details, see [thaisum_exploration.ipynb](https://github.com/nakhunchumpolsathien/ThaiSum/blob/master/thaisum_exploration.ipynb).
#### Dataset Statistics
The ThaiSum dataset consists of 358,868 articles. The average lengths of article texts and summaries are approximately 530 and 37 words respectively. As mentioned earlier, we also collected the headlines, tags and labels provided in each article. Tags are similar to keywords of the article. An article normally contains several tags but only a few labels. Tags can be the names of places or persons that the article is about, while labels indicate the news category (politics, entertainment, etc.). Ultimately, ThaiSum contains 538,059 unique tags and 59 unique labels. Note that not every article contains tags or labels.
|Dataset Size| 358,868 | articles |
|:---|---:|---:|
|Avg. Article Length| 529.5 | words|
|Avg. Summary Length | 37.3 | words|
|Avg. Headline Length | 12.6 | words|
|Unique Vocabulary Size | 407,355 | words|
|Occurring > 10 times | 81,761 | words|
|Unique News Tag Size | 538,059 | tags|
|Unique News Label Size | 59 | labels|
#### Who are the source language producers?
Journalists of respective articles
### Annotations
#### Annotation process
`summary`, `type` and `tags` are created by journalists who wrote the articles and/or their publishers.
#### Who are the annotators?
`summary`, `type` and `tags` are created by journalists who wrote the articles and/or their publishers.
### Personal and Sensitive Information
All data are public news articles. No personal and sensitive information is expected to be included.
## Considerations for Using the Data
### Social Impact of Dataset
- News summarization in Thai
- Language modeling for Thai news
### Discussion of Biases
- [ThaiPBS](https://www.thaipbs.or.th/home) [receives funding from Thai government](https://www.bangkokbiznews.com/blog/detail/648740).
- [Thairath](https://www.thairath.co.th/) is known as [the most popular newspaper in Thailand](https://mgronline.com/onlinesection/detail/9620000058532); no clear political leaning.
- [The Standard](https://thestandard.co/) is a left-leaning online magazine.
- [Prachathai](https://prachatai.com/) is a left-leaning, human-right-focused news site.
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[@nakhunchumpolsathien](https://github.com/nakhunchumpolsathien/)
[@caramelWaffle](https://github.com/caramelWaffle)
### Licensing Information
MIT License
### Citation Information
```
@mastersthesis{chumpolsathien_2020,
title={Using Knowledge Distillation from Keyword Extraction to Improve the Informativeness of Neural Cross-lingual Summarization},
author={Chumpolsathien, Nakhun},
year={2020},
school={Beijing Institute of Technology}
}
```
### Contributions
Thanks to [@cstorm125](https://github.com/cstorm125) for adding this dataset. | 10,429 | [
[
-0.027099609375,
-0.04632568359375,
0.01507568359375,
0.03582763671875,
-0.05157470703125,
-0.004680633544921875,
-0.02227783203125,
-0.02301025390625,
0.055023193359375,
0.0253143310546875,
-0.007480621337890625,
-0.0487060546875,
-0.050567626953125,
0.0416... |
darcycao/en2zh_specaildataset | 2023-10-09T09:45:26.000Z | [
"region:us"
] | darcycao | null | null | 0 | 11 | 2023-10-09T09:44:40 | Entry not found | 15 | [
[
-0.0214080810546875,
-0.01497650146484375,
0.05718994140625,
0.028839111328125,
-0.0350341796875,
0.046539306640625,
0.052490234375,
0.00507354736328125,
0.051361083984375,
0.016998291015625,
-0.05206298828125,
-0.01496124267578125,
-0.06036376953125,
0.0379... |
mychen76/openwebtext-100k | 2023-10-09T13:37:50.000Z | [
"region:us"
] | mychen76 | null | null | 0 | 11 | 2023-10-09T13:32:49 | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 497257202
num_examples: 100000
download_size: 302557845
dataset_size: 497257202
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "openwebtext-100k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 452 | [
[
-0.05218505859375,
-0.0120086669921875,
0.0005750656127929688,
0.0165863037109375,
-0.0182342529296875,
-0.01120758056640625,
0.00543975830078125,
-0.01039886474609375,
0.052947998046875,
0.0273590087890625,
-0.0511474609375,
-0.05181884765625,
-0.033447265625,
... |
Harshithacj123/CCU_Midterm | 2023-10-10T17:08:47.000Z | [
"region:us"
] | Harshithacj123 | null | null | 0 | 11 | 2023-10-10T17:08:46 | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 41353
num_examples: 50
download_size: 23370
dataset_size: 41353
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "CCU_Midterm"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 431 | [
[
-0.041534423828125,
-0.0341796875,
0.0299835205078125,
0.01568603515625,
-0.0218963623046875,
0.004650115966796875,
-0.0008449554443359375,
0.0095062255859375,
0.057647705078125,
0.0252838134765625,
-0.06195068359375,
-0.047882080078125,
-0.035430908203125,
... |
Shiveswarran/llm_instruction_code_manual_yolo_lc | 2023-10-12T05:17:45.000Z | [
"region:us"
] | Shiveswarran | null | null | 0 | 11 | 2023-10-12T05:16:49 | Entry not found | 15 | [
[
-0.0214080810546875,
-0.01497650146484375,
0.05718994140625,
0.028839111328125,
-0.0350341796875,
0.046539306640625,
0.052490234375,
0.00507354736328125,
0.051361083984375,
0.016998291015625,
-0.05206298828125,
-0.01496124267578125,
-0.06036376953125,
0.0379... |
kristinashemet/German_datasets | 2023-10-17T11:43:51.000Z | [
"region:us"
] | kristinashemet | null | null | 0 | 11 | 2023-10-12T10:05:47 | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 259881583
num_examples: 346965
download_size: 137269817
dataset_size: 259881583
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "German_datasets"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 451 | [
[
-0.051239013671875,
-0.0259246826171875,
0.018310546875,
0.0212554931640625,
-0.0147552490234375,
-0.00629425048828125,
0.01343536376953125,
-0.012237548828125,
0.058685302734375,
0.0245208740234375,
-0.06463623046875,
-0.0684814453125,
-0.0487060546875,
-0.... |
sordonia/platy_icl0_maxD-1_maxC-1_0 | 2023-10-12T13:21:54.000Z | [
"region:us"
] | sordonia | null | null | 0 | 11 | 2023-10-12T13:21:35 | ---
configs:
- config_name: default
data_files:
- split: formal_logic
path: data/formal_logic-*
- split: machine_learning
path: data/machine_learning-*
- split: global_facts
path: data/global_facts-*
- split: abstract_algebra
path: data/abstract_algebra-*
- split: high_school_physics
path: data/high_school_physics-*
- split: college_biology
path: data/college_biology-*
- split: high_school_government_and_politics
path: data/high_school_government_and_politics-*
- split: prehistory
path: data/prehistory-*
- split: security_studies
path: data/security_studies-*
- split: sociology
path: data/sociology-*
dataset_info:
features:
- name: id
dtype: string
- name: context
dtype: string
- name: docno
dtype: string
- name: subject
dtype: string
- name: icl_examples
dtype: 'null'
- name: instruction
dtype: string
- name: author_instr
dtype: string
- name: response
dtype: string
- name: author_response
dtype: string
- name: normalized_cumul_logprob_response
dtype: float64
splits:
- name: formal_logic
num_bytes: 8952043.353426639
num_examples: 2589
- name: machine_learning
num_bytes: 12651806.34615221
num_examples: 3659
- name: global_facts
num_bytes: 13211957.378695708
num_examples: 3821
- name: abstract_algebra
num_bytes: 7520546.270259922
num_examples: 2175
- name: high_school_physics
num_bytes: 21309943.293614667
num_examples: 6163
- name: college_biology
num_bytes: 16410350.620070618
num_examples: 4746
- name: high_school_government_and_politics
num_bytes: 17077691.047730464
num_examples: 4939
- name: prehistory
num_bytes: 24836820.165184837
num_examples: 7183
- name: security_studies
num_bytes: 22067184.504275322
num_examples: 6382
- name: sociology
num_bytes: 18523019.020589612
num_examples: 5357
download_size: 89203875
dataset_size: 162561362.00000003
---
# Dataset Card for "platy_icl0_maxD-1_maxC-1_0"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 2,191 | [
[
-0.045623779296875,
-0.009857177734375,
0.002269744873046875,
0.031494140625,
-0.021728515625,
-0.00264739990234375,
0.0224456787109375,
0.00910186767578125,
0.05352783203125,
0.042694091796875,
-0.052337646484375,
-0.067138671875,
-0.049591064453125,
-0.006... |
carnival13/xlmr_int_hard_trn | 2023-10-12T13:28:54.000Z | [
"region:us"
] | carnival13 | null | null | 0 | 11 | 2023-10-12T13:28:44 | ---
dataset_info:
features:
- name: domain_label
dtype: int64
- name: pass_label
dtype: int64
- name: input
dtype: string
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
splits:
- name: train
num_bytes: 142739369
num_examples: 113100
download_size: 40732989
dataset_size: 142739369
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "xlmr_int_hard_trn"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 615 | [
[
-0.023193359375,
-0.0200042724609375,
0.0176544189453125,
0.01035308837890625,
-0.0245819091796875,
0.01678466796875,
0.010040283203125,
0.01103973388671875,
0.0377197265625,
0.04095458984375,
-0.038116455078125,
-0.055877685546875,
-0.0419921875,
-0.0033111... |
Abira1/testjson | 2023-10-12T13:58:28.000Z | [
"region:us"
] | Abira1 | null | null | 0 | 11 | 2023-10-12T13:58:09 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
surathisin/nvso-test-1 | 2023-10-17T02:12:19.000Z | [
"region:us"
] | surathisin | null | null | 0 | 11 | 2023-10-13T05:28:08 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
anujpaudel/rote-ping-1 | 2023-10-14T12:08:59.000Z | [
"region:us"
] | anujpaudel | null | null | 0 | 11 | 2023-10-13T06:44:22 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 1620341.0
num_examples: 31
download_size: 1621661
dataset_size: 1620341.0
---
# Dataset Card for "rote-ping-1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 474 | [
[
-0.048431396484375,
-0.0219879150390625,
0.002166748046875,
0.022369384765625,
-0.0228729248046875,
-0.022125244140625,
0.0330810546875,
-0.004848480224609375,
0.0751953125,
0.0364990234375,
-0.07818603515625,
-0.055511474609375,
-0.027008056640625,
-0.01072... |
tinhpx2911/vietnamese_general_data_processed | 2023-10-14T08:15:30.000Z | [
"region:us"
] | tinhpx2911 | null | null | 0 | 11 | 2023-10-14T05:12:02 | ---
dataset_info:
- config_name: train_1
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 13070931261
num_examples: 32434667
download_size: 6902902017
dataset_size: 13070931261
- config_name: train_2
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 13079301675
num_examples: 32444361
download_size: 6907570478
dataset_size: 13079301675
- config_name: train_3
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 13083262611
num_examples: 32455485
download_size: 6908687251
dataset_size: 13083262611
- config_name: train_4
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 13083227441
num_examples: 32440768
download_size: 6909612652
dataset_size: 13083227441
- config_name: train_5
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 10862029760
num_examples: 26942980
download_size: 5736766203
dataset_size: 10862029760
configs:
- config_name: train_1
data_files:
- split: train
path: train_1/train-*
- config_name: train_2
data_files:
- split: train
path: train_2/train-*
- config_name: train_3
data_files:
- split: train
path: train_3/train-*
- config_name: train_4
data_files:
- split: train
path: train_4/train-*
- config_name: train_5
data_files:
- split: train
path: train_5/train-*
---
# Dataset Card for "vietnamese_general_data_processed"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 1,638 | [
[
-0.0222320556640625,
-0.042205810546875,
0.02691650390625,
0.01837158203125,
-0.0261383056640625,
-0.010498046875,
0.013275146484375,
-0.00501251220703125,
0.04925537109375,
0.059539794921875,
-0.051544189453125,
-0.07977294921875,
-0.046142578125,
-0.002653... |
laion/strategic_game_maze | 2023-10-20T04:13:19.000Z | [
"license:cc-by-4.0",
"region:us"
] | laion | null | null | 6 | 11 | 2023-10-15T02:44:07 | ---
license: cc-by-4.0
---
NOTICE: in some of the games, the length and width columns are mistakenly labelled as 40; they are actually 30.
# maze
This dataset contains 350,000 mazes, representing over 39.29 billion moves.
Each maze is a 30x30 ASCII representation, with solutions derived using BFS (breadth-first search).
It has two columns:
- 'Maze': representation of the maze as a list of strings; the shape is 30*30
- visual example
<image src="https://cdn-uploads.huggingface.co/production/uploads/644b983f0fbe4830f192c4f5/BGplH40fK5wQzpofPocMK.png" alt="drawing" width="200"/>
- 'Path': the solution from start point to end point as a list of strings; each item represents a position in the maze.
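A minimal inspection sketch for the two columns above, assuming a `train` split; the exact encoding of each position string is not specified here, so it is only printed.
```
from datasets import load_dataset

# Stream the dataset so a single example can be inspected without a full download.
ds = load_dataset("laion/strategic_game_maze", split="train", streaming=True)
example = next(iter(ds))

# Print the ASCII maze row by row (each row should be a 30-character string).
for row in example["Maze"]:
    print(row)

# The BFS solution is an ordered list of position strings from start to end.
path = example["Path"]
print("solution length:", len(path), "steps")
print("start:", path[0], "-> end:", path[-1])
```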
| 673 | [
[
-0.032745361328125,
-0.03509521484375,
0.0188751220703125,
0.041107177734375,
-0.0291748046875,
-0.0016660690307617188,
-0.007518768310546875,
-0.0246734619140625,
0.0282440185546875,
0.045196533203125,
-0.061676025390625,
-0.04644775390625,
-0.0390625,
0.01... |
pbaoo2705/cpgqa_processed-2 | 2023-10-16T06:02:40.000Z | [
"region:us"
] | pbaoo2705 | null | null | 0 | 11 | 2023-10-16T06:02:38 | ---
dataset_info:
features:
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
- name: answer
dtype: string
- name: start_positions
dtype: int64
- name: end_positions
dtype: int64
splits:
- name: train
num_bytes: 9148601
num_examples: 884
download_size: 190231
dataset_size: 9148601
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "cpgqa_processed-2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 613 | [
[
-0.0298309326171875,
-0.0247039794921875,
0.026611328125,
0.022979736328125,
-0.0270538330078125,
0.00218963623046875,
0.0220184326171875,
-0.0124969482421875,
0.034637451171875,
0.042724609375,
-0.0557861328125,
-0.040496826171875,
-0.05255126953125,
-0.021... |
MemGPT/example-sec-filings | 2023-10-19T02:56:38.000Z | [
"region:us"
] | MemGPT | null | null | 6 | 11 | 2023-10-16T23:47:27 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
Data-Lab/vkusvill_search_sft_v0.3.5 | 2023-10-17T12:43:11.000Z | [
"region:us"
] | Data-Lab | null | null | 0 | 11 | 2023-10-17T12:43:00 | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: output
dtype: string
- name: input
dtype: string
splits:
- name: train
num_bytes: 3910654
num_examples: 400
download_size: 1271716
dataset_size: 3910654
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "vkusvill_search_sft_v0.3.5"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 529 | [
[
-0.037261962890625,
-0.00885772705078125,
0.032989501953125,
0.018035888671875,
-0.0285186767578125,
-0.01080322265625,
0.029998779296875,
-0.004344940185546875,
0.062255859375,
0.033660888671875,
-0.07208251953125,
-0.04779052734375,
-0.0264739990234375,
-0... |
yaygomii/FYP_cv13_w2v_processor_output | 2023-10-18T14:50:41.000Z | [
"region:us"
] | yaygomii | null | null | 0 | 11 | 2023-10-18T14:40:27 | ---
configs:
- config_name: default
data_files:
- split: train_w2v
path: data/train_w2v-*
- split: test_w2v
path: data/test_w2v-*
dataset_info:
features:
- name: input_values
sequence: float32
- name: labels
sequence: int64
splits:
- name: train_w2v
num_bytes: 12064642120
num_examples: 43350
- name: test_w2v
num_bytes: 3246847096
num_examples: 11973
download_size: 15200350363
dataset_size: 15311489216
---
# Dataset Card for "FYP_cv13_w2v_processor_output"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 646 | [
[
-0.02752685546875,
-0.0118255615234375,
0.01265716552734375,
0.019317626953125,
-0.0224456787109375,
-0.005584716796875,
0.009002685546875,
-0.0099945068359375,
0.03558349609375,
0.022552490234375,
-0.059783935546875,
-0.041168212890625,
-0.056793212890625,
... |
tyzhu/eval_tag_nq_test_v12_first_1 | 2023-10-18T16:09:07.000Z | [
"region:us"
] | tyzhu | null | null | 0 | 11 | 2023-10-18T16:07:59 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
dataset_info:
features:
- name: question
dtype: string
- name: title
dtype: string
- name: inputs
dtype: string
- name: targets
dtype: string
- name: answers
struct:
- name: answer_start
sequence: 'null'
- name: text
sequence: string
- name: id
dtype: string
- name: titles
dtype: string
splits:
- name: train
num_bytes: 3310
num_examples: 10
- name: validation
num_bytes: 1306262
num_examples: 3610
download_size: 0
dataset_size: 1309572
---
# Dataset Card for "eval_tag_nq_test_v12_first_1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 855 | [
[
-0.0445556640625,
-0.03369140625,
-0.006664276123046875,
0.0101470947265625,
-0.0169677734375,
0.01134490966796875,
0.0369873046875,
0.0032901763916015625,
0.0562744140625,
0.02587890625,
-0.06231689453125,
-0.04766845703125,
-0.02032470703125,
-0.0054244995... |
irlab-udc/alpaca_data_galician | 2023-10-19T13:27:01.000Z | [
"task_categories:conversational",
"size_categories:10K<n<100K",
"language:gl",
"license:apache-2.0",
"region:us"
] | irlab-udc | null | null | 4 | 11 | 2023-10-19T10:34:07 | ---
license: apache-2.0
task_categories:
- conversational
language:
- gl
pretty_name: alpaca_data_galician
size_categories:
- 10K<n<100K
---
# Galician version of `alpaca_data.json`
This is a Galician translation of the Stanford [alpaca_data.json](https://github.com/tatsu-lab/stanford_alpaca/blob/main/alpaca_data.json) dataset, produced with the Python package [`googletranslatepy`](https://suqingdong.github.io/googletranslatepy/).
## Dataset Structure
The dataset contains 52K instruction-following elements in a JSON file with a list of dictionaries. Each dictionary contains the following fields:
- `instruction`: `str`, describes the task the model should perform. Each of the 52K instructions is unique.
- `input`: `str`, optional context or input for the task. For example, when the instruction is "Resume o seguinte artigo", the input is the article. Around 40% of the examples have an input.
- `output`: `str`, the answer to the instruction as generated by `text-davinci-003`. | 986 | [
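A small sketch of turning one element into a training prompt; the template is the common Alpaca-style layout and the example values are placeholders, neither of which is prescribed by this dataset.
```
def build_prompt(example: dict) -> str:
    # Alpaca-style template; adapt the header wording (e.g. to Galician) as needed.
    if example.get("input"):
        return (
            f"Instruction: {example['instruction']}\n"
            f"Input: {example['input']}\n"
            f"Response: {example['output']}"
        )
    return f"Instruction: {example['instruction']}\nResponse: {example['output']}"

example = {
    "instruction": "Resume o seguinte artigo",
    "input": "(texto do artigo)",
    "output": "(resumo do artigo)",
}
print(build_prompt(example))
```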
[
-0.00417327880859375,
-0.05047607421875,
0.0293121337890625,
0.039886474609375,
-0.0231781005859375,
-0.006450653076171875,
-0.0028514862060546875,
-0.0283355712890625,
0.032501220703125,
0.06121826171875,
-0.062286376953125,
-0.06500244140625,
-0.05145263671875... |
carles-undergrad-thesis/en-id-parallel-sentences-embedding | 2023-10-20T02:02:07.000Z | [
"region:us"
] | carles-undergrad-thesis | null | null | 0 | 11 | 2023-10-20T01:57:16 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: text_en
dtype: string
- name: text_id
dtype: string
- name: target_embedding
sequence: float32
- name: input_ids_en
sequence: int64
- name: attention_mask_en
sequence: int64
- name: input_ids_id
sequence: int64
- name: attention_mask_id
sequence: int64
splits:
- name: train
num_bytes: 11676096944
num_examples: 1000000
download_size: 4112187708
dataset_size: 11676096944
---
# Dataset Card for "en-id-parallel-sentences-embedding"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 746 | [
[
-0.041534423828125,
-0.041351318359375,
0.026885986328125,
0.036041259765625,
-0.012237548828125,
-0.0017833709716796875,
-0.005367279052734375,
-0.0010395050048828125,
0.0594482421875,
0.022064208984375,
-0.045989990234375,
-0.062255859375,
-0.046844482421875,
... |
haseong8012/child-10k-adult-6k_for_test | 2023-10-20T09:11:21.000Z | [
"region:us"
] | haseong8012 | null | null | 0 | 11 | 2023-10-20T08:53:37 | ---
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
dataset_info:
features:
- name: text
dtype: string
- name: audio
sequence: float32
splits:
- name: test
num_bytes: 2883700590
num_examples: 16000
download_size: 2489316623
dataset_size: 2883700590
---
# Dataset Card for "child-adult-16k_for-test"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 497 | [
[
-0.040313720703125,
-0.0114593505859375,
-0.010650634765625,
0.0222320556640625,
-0.0183258056640625,
0.005420684814453125,
0.015045166015625,
-0.0244903564453125,
0.0276947021484375,
0.02197265625,
-0.0714111328125,
-0.046478271484375,
-0.03533935546875,
-0... |
jay401521/twolabels | 2023-10-21T09:26:15.000Z | [
"region:us"
] | jay401521 | null | null | 0 | 11 | 2023-10-21T08:34:10 | ---
dataset_info:
features:
- name: id
dtype: int64
- name: domain
dtype: string
- name: label
dtype: int64
- name: rank
dtype: int64
- name: sentence
dtype: string
splits:
- name: train
num_bytes: 6505957
num_examples: 70594
download_size: 0
dataset_size: 6505957
---
# Dataset Card for "twolabels"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 480 | [
[
-0.036865234375,
-0.019439697265625,
0.007404327392578125,
0.02313232421875,
-0.010711669921875,
0.006099700927734375,
0.0157470703125,
-0.0280914306640625,
0.052215576171875,
0.03363037109375,
-0.04852294921875,
-0.04833984375,
-0.053558349609375,
-0.030166... |
traveler-leon1/my_dataset | 2023-10-21T12:56:31.000Z | [
"region:us"
] | traveler-leon1 | null | null | 0 | 11 | 2023-10-21T12:22:44 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
crumb/c4-subset-for-humaneval | 2023-10-22T00:27:44.000Z | [
"region:us"
] | crumb | null | null | 0 | 11 | 2023-10-21T19:06:56 | ---
dataset_info:
features:
- name: text
dtype: string
- name: score
dtype: float64
splits:
- name: train
num_bytes: 411199548
num_examples: 302361
download_size: 245218649
dataset_size: 411199548
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "c4-subset-for-humaneval"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 494 | [
[
-0.03875732421875,
-0.00669097900390625,
0.01355743408203125,
0.01207733154296875,
-0.026519775390625,
0.00925445556640625,
0.01873779296875,
-0.0211029052734375,
0.051116943359375,
0.0374755859375,
-0.058685302734375,
-0.060882568359375,
-0.029266357421875,
... |
crumb/c4-subset-for-arc | 2023-10-21T19:23:01.000Z | [
"region:us"
] | crumb | null | null | 0 | 11 | 2023-10-21T19:21:39 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
crumb/c4-subset-for-truthfulqa | 2023-10-22T00:27:51.000Z | [
"region:us"
] | crumb | null | null | 0 | 11 | 2023-10-21T19:23:23 | ---
dataset_info:
features:
- name: text
dtype: string
- name: score
dtype: float64
splits:
- name: train
num_bytes: 577836714
num_examples: 321153
download_size: 352256147
dataset_size: 577836714
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "c4-subset-for-truthfulqa"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 495 | [
[
-0.035003662109375,
-0.01137542724609375,
0.0295257568359375,
0.01322174072265625,
-0.01318359375,
0.0162506103515625,
0.0226287841796875,
-0.01165771484375,
0.036834716796875,
0.045166015625,
-0.06298828125,
-0.0595703125,
-0.027069091796875,
-0.00782012939... |
crumb/c4-subset-for-hellaswag-approx | 2023-10-22T00:42:48.000Z | [
"region:us"
] | crumb | null | null | 0 | 11 | 2023-10-22T00:40:42 | ---
dataset_info:
features:
- name: text
dtype: string
- name: score
dtype: float64
splits:
- name: train
num_bytes: 618206614
num_examples: 291894
download_size: 364064080
dataset_size: 618206614
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "c4-subset-for-hellaswag-approx"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 501 | [
[
-0.04547119140625,
-0.01641845703125,
0.0280609130859375,
0.0152130126953125,
-0.02862548828125,
0.0009241104125976562,
0.0100555419921875,
-0.017913818359375,
0.048309326171875,
0.0240020751953125,
-0.0787353515625,
-0.057525634765625,
-0.041900634765625,
-... |
crumb/c4-subset-for-mmlu-approx | 2023-10-22T01:31:25.000Z | [
"region:us"
] | crumb | null | null | 0 | 11 | 2023-10-22T01:29:29 | ---
dataset_info:
features:
- name: text
dtype: string
- name: score
dtype: float64
splits:
- name: train
num_bytes: 557757084
num_examples: 262665
download_size: 339106702
dataset_size: 557757084
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "c4-subset-for-mmlu-approx"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 496 | [
[
-0.0462646484375,
-0.0252227783203125,
0.0254669189453125,
0.0103912353515625,
-0.00966644287109375,
0.005283355712890625,
0.0167388916015625,
-0.01512908935546875,
0.054901123046875,
0.0065155029296875,
-0.0787353515625,
-0.034271240234375,
-0.03594970703125,
... |
antareepdey/Medical_chat_Llama-chat-50k | 2023-10-22T03:16:54.000Z | [
"region:us"
] | antareepdey | null | null | 0 | 11 | 2023-10-22T03:15:55 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: Text
dtype: string
splits:
- name: train
num_bytes: 50561249
num_examples: 50000
download_size: 31132221
dataset_size: 50561249
---
# Dataset Card for "Medical_chat_Llama-chat-50k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 459 | [
[
-0.015289306640625,
-0.01113128662109375,
0.01047515869140625,
0.03466796875,
-0.036285400390625,
0.0167388916015625,
0.0196685791015625,
-0.024566650390625,
0.07366943359375,
0.033416748046875,
-0.0574951171875,
-0.06451416015625,
-0.055419921875,
-0.005405... |
atmallen/mmlu_aux_binary | 2023-10-22T21:41:19.000Z | [
"region:us"
] | atmallen | null | null | 0 | 11 | 2023-10-22T20:06:00 | ---
dataset_info:
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: choices
sequence: string
- name: answer
dtype: int32
- name: statement
dtype: string
- name: label
dtype:
class_label:
names:
'0': 'false'
'1': 'true'
splits:
- name: validation
num_bytes: 7300371
num_examples: 4036
- name: test
num_bytes: 69452850
num_examples: 37506
download_size: 46452233
dataset_size: 76753221
configs:
- config_name: default
data_files:
- split: validation
path: data/validation-*
- split: test
path: data/test-*
---
# Dataset Card for "mmlu_aux_binary"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 818 | [
[
-0.04974365234375,
-0.0228271484375,
0.0143585205078125,
0.018798828125,
-0.0192413330078125,
0.0053253173828125,
0.02685546875,
-0.0174102783203125,
0.06500244140625,
0.01271820068359375,
-0.0714111328125,
-0.049560546875,
-0.04327392578125,
-0.002061843872... |
aiancheruk/womens_clothing_ecommerce_reviews_mini | 2023-10-22T22:52:46.000Z | [
"region:us"
] | aiancheruk | null | null | 0 | 11 | 2023-10-22T22:52:40 | ---
dataset_info:
features:
- name: review_text
dtype: string
- name: age
dtype: int64
- name: rating
dtype: int64
- name: positive_feedback_count
dtype: int64
- name: division_name
dtype: string
- name: department_name
dtype: string
- name: class_name
dtype: string
- name: recommended_ind
dtype:
class_label:
names:
'0': '0'
'1': '1'
splits:
- name: train
num_bytes: 1894592.0740274212
num_examples: 5000
- name: test
num_bytes: 373295
num_examples: 1000
- name: val
num_bytes: 373636
num_examples: 1000
download_size: 1342313
dataset_size: 2641523.074027421
---
# Dataset Card for "womens_clothing_ecommerce_reviews_mini"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 877 | [
[
-0.03692626953125,
-0.026519775390625,
0.0023670196533203125,
0.00884246826171875,
-0.028900146484375,
-0.007701873779296875,
0.02325439453125,
-0.02105712890625,
0.047149658203125,
0.0273284912109375,
-0.08984375,
-0.0560302734375,
-0.021148681640625,
-0.00... |
davidfant/natural-questions-chunk-4 | 2023-10-22T23:03:02.000Z | [
"region:us"
] | davidfant | null | null | 0 | 11 | 2023-10-22T22:59:31 | ---
dataset_info:
features:
- name: id
dtype: string
- name: document
struct:
- name: html
dtype: string
- name: title
dtype: string
- name: tokens
sequence:
- name: end_byte
dtype: int64
- name: is_html
dtype: bool
- name: start_byte
dtype: int64
- name: token
dtype: string
- name: url
dtype: string
- name: question
struct:
- name: text
dtype: string
- name: tokens
sequence: string
- name: long_answer_candidates
sequence:
- name: end_byte
dtype: int64
- name: end_token
dtype: int64
- name: start_byte
dtype: int64
- name: start_token
dtype: int64
- name: top_level
dtype: bool
- name: annotations
sequence:
- name: id
dtype: string
- name: long_answer
struct:
- name: candidate_index
dtype: int64
- name: end_byte
dtype: int64
- name: end_token
dtype: int64
- name: start_byte
dtype: int64
- name: start_token
dtype: int64
- name: short_answers
sequence:
- name: end_byte
dtype: int64
- name: end_token
dtype: int64
- name: start_byte
dtype: int64
- name: start_token
dtype: int64
- name: text
dtype: string
- name: yes_no_answer
dtype:
class_label:
names:
'0': 'NO'
'1': 'YES'
splits:
- name: train
num_bytes: 4529920148
num_examples: 10000
download_size: 1759288585
dataset_size: 4529920148
---
# Dataset Card for "natural-questions-chunk-4"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 1,818 | [
[
-0.061614990234375,
-0.056854248046875,
0.0200958251953125,
0.0217437744140625,
-0.0279388427734375,
0.003276824951171875,
0.0181121826171875,
-0.0257720947265625,
0.0645751953125,
0.04541015625,
-0.05999755859375,
-0.022247314453125,
-0.0204315185546875,
0.... |
davidfant/natural-questions-chunk-5 | 2023-10-22T23:06:32.000Z | [
"region:us"
] | davidfant | null | null | 0 | 11 | 2023-10-22T23:03:02 | ---
dataset_info:
features:
- name: id
dtype: string
- name: document
struct:
- name: html
dtype: string
- name: title
dtype: string
- name: tokens
sequence:
- name: end_byte
dtype: int64
- name: is_html
dtype: bool
- name: start_byte
dtype: int64
- name: token
dtype: string
- name: url
dtype: string
- name: question
struct:
- name: text
dtype: string
- name: tokens
sequence: string
- name: long_answer_candidates
sequence:
- name: end_byte
dtype: int64
- name: end_token
dtype: int64
- name: start_byte
dtype: int64
- name: start_token
dtype: int64
- name: top_level
dtype: bool
- name: annotations
sequence:
- name: id
dtype: string
- name: long_answer
struct:
- name: candidate_index
dtype: int64
- name: end_byte
dtype: int64
- name: end_token
dtype: int64
- name: start_byte
dtype: int64
- name: start_token
dtype: int64
- name: short_answers
sequence:
- name: end_byte
dtype: int64
- name: end_token
dtype: int64
- name: start_byte
dtype: int64
- name: start_token
dtype: int64
- name: text
dtype: string
- name: yes_no_answer
dtype:
class_label:
names:
'0': 'NO'
'1': 'YES'
splits:
- name: train
num_bytes: 4651468477
num_examples: 10000
download_size: 1807817811
dataset_size: 4651468477
---
# Dataset Card for "natural-questions-chunk-5"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 1,818 | [
[
-0.0679931640625,
-0.05120849609375,
0.0154571533203125,
0.0218505859375,
-0.0318603515625,
0.001682281494140625,
0.0178985595703125,
-0.028228759765625,
0.0584716796875,
0.0433349609375,
-0.06341552734375,
-0.03094482421875,
-0.0250091552734375,
0.013664245... |
davidfant/natural-questions-chunk-6 | 2023-10-22T23:10:03.000Z | [
"region:us"
] | davidfant | null | null | 0 | 11 | 2023-10-22T23:06:32 | ---
dataset_info:
features:
- name: id
dtype: string
- name: document
struct:
- name: html
dtype: string
- name: title
dtype: string
- name: tokens
sequence:
- name: end_byte
dtype: int64
- name: is_html
dtype: bool
- name: start_byte
dtype: int64
- name: token
dtype: string
- name: url
dtype: string
- name: question
struct:
- name: text
dtype: string
- name: tokens
sequence: string
- name: long_answer_candidates
sequence:
- name: end_byte
dtype: int64
- name: end_token
dtype: int64
- name: start_byte
dtype: int64
- name: start_token
dtype: int64
- name: top_level
dtype: bool
- name: annotations
sequence:
- name: id
dtype: string
- name: long_answer
struct:
- name: candidate_index
dtype: int64
- name: end_byte
dtype: int64
- name: end_token
dtype: int64
- name: start_byte
dtype: int64
- name: start_token
dtype: int64
- name: short_answers
sequence:
- name: end_byte
dtype: int64
- name: end_token
dtype: int64
- name: start_byte
dtype: int64
- name: start_token
dtype: int64
- name: text
dtype: string
- name: yes_no_answer
dtype:
class_label:
names:
'0': 'NO'
'1': 'YES'
splits:
- name: train
num_bytes: 4655306372
num_examples: 10000
download_size: 1805442960
dataset_size: 4655306372
---
# Dataset Card for "natural-questions-chunk-6"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 1,818 | [
[
-0.059600830078125,
-0.05291748046875,
0.01464080810546875,
0.0155029296875,
-0.0291748046875,
-0.00484466552734375,
0.015869140625,
-0.02838134765625,
0.062469482421875,
0.041168212890625,
-0.06182861328125,
-0.01995849609375,
-0.0244140625,
0.0081939697265... |
davidfant/natural-questions-chunk-7 | 2023-10-22T23:13:42.000Z | [
"region:us"
] | davidfant | null | null | 0 | 11 | 2023-10-22T23:10:03 | ---
dataset_info:
features:
- name: id
dtype: string
- name: document
struct:
- name: html
dtype: string
- name: title
dtype: string
- name: tokens
sequence:
- name: end_byte
dtype: int64
- name: is_html
dtype: bool
- name: start_byte
dtype: int64
- name: token
dtype: string
- name: url
dtype: string
- name: question
struct:
- name: text
dtype: string
- name: tokens
sequence: string
- name: long_answer_candidates
sequence:
- name: end_byte
dtype: int64
- name: end_token
dtype: int64
- name: start_byte
dtype: int64
- name: start_token
dtype: int64
- name: top_level
dtype: bool
- name: annotations
sequence:
- name: id
dtype: string
- name: long_answer
struct:
- name: candidate_index
dtype: int64
- name: end_byte
dtype: int64
- name: end_token
dtype: int64
- name: start_byte
dtype: int64
- name: start_token
dtype: int64
- name: short_answers
sequence:
- name: end_byte
dtype: int64
- name: end_token
dtype: int64
- name: start_byte
dtype: int64
- name: start_token
dtype: int64
- name: text
dtype: string
- name: yes_no_answer
dtype:
class_label:
names:
'0': 'NO'
'1': 'YES'
splits:
- name: train
num_bytes: 4648515125
num_examples: 10000
download_size: 1806671077
dataset_size: 4648515125
---
# Dataset Card for "natural-questions-chunk-7"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 1,818 | [
[
-0.06536865234375,
-0.05218505859375,
0.01543426513671875,
0.0205078125,
-0.03790283203125,
0.0010766983032226562,
0.0167083740234375,
-0.0260162353515625,
0.060272216796875,
0.050537109375,
-0.05157470703125,
-0.025543212890625,
-0.0306854248046875,
0.01125... |
LosHuesitos9-9/Huesitos | 2023-10-24T19:58:07.000Z | [
"task_categories:object-detection",
"annotations_creators:crowdsourced",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:n<1K",
"source_datasets:original",
"language:en",
"language:es",
"license:cc",
"rf100",
"medical",
"code",
"region:us"
] | LosHuesitos9-9 | null | null | 1 | 11 | 2023-10-23T14:33:42 | ---
dataset_info:
features:
- name: image_id
dtype: int64
- name: image
dtype: image
- name: width
dtype: int32
- name: height
dtype: int32
- name: objects
sequence:
- name: id
dtype: int64
- name: area
dtype: int64
- name: bbox
sequence: float32
length: 4
- name: category
dtype:
class_label:
names:
'0': bone-fracture
'1': angle
'2': fracture
'3': line
'4': messed_up_angle
annotations_creators:
- crowdsourced
language_creators:
- found
language:
- en
- es
license:
- cc
multilinguality:
- monolingual
size_categories:
- n<1K
source_datasets:
- original
task_categories:
- object-detection
task_ids: []
pretty_name: Huesitos
tags:
- rf100
- medical
- code
---
## Dataset Structure
### Data Instances
A data point comprises an image and its object annotations.
```
{
'image_id': 15,
'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=640x640 at 0x2373B065C18>,
'width': 964043,
'height': 640,
'objects': {
'id': [114, 115, 116, 117],
'area': [3796, 1596, 152768, 81002],
'bbox': [
[302.0, 109.0, 73.0, 52.0],
[810.0, 100.0, 57.0, 28.0],
[160.0, 31.0, 248.0, 616.0],
[741.0, 68.0, 202.0, 401.0]
],
'category': [4, 4, 0, 0]
}
}
```
### Data Fields
- `image_id`: the image id
- `image`: `PIL.Image.Image` object containing the image. Note that when accessing the image column: `dataset[0]["image"]` the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. Thus it is important to first query the sample index before the `"image"` column, *i.e.* `dataset[0]["image"]` should **always** be preferred over `dataset["image"][0]`
- `width`: the image width
- `height`: the image height
- `objects`: a dictionary containing bounding box metadata for the objects present on the image
- `id`: the annotation id
- `area`: the area of the bounding box
- `bbox`: the object's bounding box (in the [coco](https://albumentations.ai/docs/getting_started/bounding_boxes_augmentation/#coco) format)
- `category`: the object's category.
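For example, the COCO-style boxes can be drawn on the decoded image with PIL; this is a minimal sketch, assuming a `train` split exists and the first example carries at least one annotation.
```
from datasets import load_dataset
from PIL import ImageDraw

ds = load_dataset("LosHuesitos9-9/Huesitos", split="train")
example = ds[0]

image = example["image"].copy()
draw = ImageDraw.Draw(image)

# COCO bbox format: [x_min, y_min, width, height].
for (x, y, w, h), category in zip(example["objects"]["bbox"], example["objects"]["category"]):
    draw.rectangle([x, y, x + w, y + h], outline="red", width=2)
    draw.text((x, y), str(category), fill="red")

image.save("annotated_example.jpg")
```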
## Licensing Information
See original homepage https://universe.roboflow.com/object-detection/bone-fracture-7fylg
### Citation Information
```
@misc{ bone-fracture-7fylg,
title = { bone fracture 7fylg Dataset },
type = { Open Source Dataset },
author = { Roboflow 100 },
howpublished = { \url{ https://universe.roboflow.com/object-detection/bone-fracture-7fylg } },
url = { https://universe.roboflow.com/object-detection/bone-fracture-7fylg },
journal = { Roboflow Universe },
publisher = { Roboflow },
year = { 2022 },
month = { nov },
note = { visited on 2023-03-29 },
contributions = {[@mariosasko](https://github.com/mariosasko)}
}
``` | 2,922 | [
[
-0.029998779296875,
-0.049407958984375,
0.02520751953125,
-0.0009336471557617188,
-0.03460693359375,
-0.022491455078125,
0.00977325439453125,
-0.036163330078125,
0.0219268798828125,
0.0288238525390625,
-0.03662109375,
-0.0743408203125,
-0.0340576171875,
0.01... |
dyliu/VIST | 2023-10-23T15:26:40.000Z | [
"region:us"
] | dyliu | null | null | 0 | 11 | 2023-10-23T15:14:49 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
danielaivanova/damaged-media | 2023-10-24T00:52:48.000Z | [
"region:us"
] | danielaivanova | null | null | 0 | 11 | 2023-10-23T21:25:28 | ---
dataset_info:
features:
- name: id
dtype: string
- name: image
dtype: image
- name: annotation
dtype: image
- name: annotation_rgb
dtype: image
- name: material
dtype: string
- name: content
dtype: string
splits:
- name: train
num_bytes: 3620215529.0
num_examples: 418
download_size: 3615768892
dataset_size: 3620215529.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "damage-analogue-media"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 645 | [
[
-0.047882080078125,
-0.032379150390625,
0.0186309814453125,
0.00559234619140625,
-0.007305145263671875,
-0.002094268798828125,
0.02850341796875,
-0.020599365234375,
0.0670166015625,
0.0301513671875,
-0.070556640625,
-0.032440185546875,
-0.038055419921875,
-0... |
ManuBansal/33param_snp500 | 2023-10-24T12:25:37.000Z | [
"region:us"
] | ManuBansal | null | null | 0 | 11 | 2023-10-24T12:24:26 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
desarrolloasesoreslocales/prompts | 2023-10-24T12:54:31.000Z | [
"region:us"
] | desarrolloasesoreslocales | null | null | 0 | 11 | 2023-10-24T12:50:46 | Entry not found | 15 | [
[ … embedding vector truncated … ] |
jayashri710/mental-health-dataset | 2023-10-25T09:58:11.000Z | [
"region:us"
] | jayashri710 | null | null | 0 | 11 | 2023-10-25T09:57:20 | Entry not found | 15 | [
[ … embedding vector truncated … ] |
kardosdrur/opensubtitles-da-sv | 2023-10-26T07:12:17.000Z | [
"license:mit",
"region:us"
] | kardosdrur | null | null | 0 | 11 | 2023-10-25T13:45:35 | ---
license: mit
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: link_id
dtype: string
- name: da
dtype: string
- name: 'no'
dtype: string
- name: overlap
dtype: float64
splits:
- name: train
num_bytes: 270499727.08648384
num_examples: 1772983
- name: test
num_bytes: 67624969.91351616
num_examples: 443246
download_size: 201404638
dataset_size: 338124697.0
---
# OpenSubtitles Danish-Swedish
Sentence pairs from OpenSubtitles in Danish and Swedish, aligned and filtered with heuristics.
The source code for producing the dataset is included in the repository.
The dataset was created to aid training sentence transformers in the Danish Foundation Models project.
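As a rough usage sketch (the repo id is taken from this entry and the column names from the `dataset_info` block above; note that the second text column is literally named `'no'` even though the card title says Swedish):

```python
from datasets import load_dataset

# Repo id taken from this entry; column names follow the dataset_info block above.
ds = load_dataset("kardosdrur/opensubtitles-da-sv", split="train")

pair = ds[0]
print(pair["da"])       # Danish sentence
print(pair["no"])       # aligned sentence (column is named 'no' in the card metadata)
print(pair["overlap"])  # heuristic alignment overlap score
```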
| 823 | [
[ … embedding vector truncated … ] |
Ka4on/ultrasound_test | 2023-10-25T20:16:13.000Z | [
"region:us"
] | Ka4on | null | null | 0 | 11 | 2023-10-25T20:08:59 | Entry not found | 15 | [
[ … embedding vector truncated … ] |
vishnusr/code_searchnet_reduced_val | 2023-10-26T17:08:36.000Z | [
"region:us"
] | vishnusr | null | null | 0 | 11 | 2023-10-26T17:08:32 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: 'Unnamed: 0.1'
dtype: int64
- name: 'Unnamed: 0'
dtype: int64
- name: code
dtype: string
- name: docstring
dtype: string
- name: prompt
dtype: string
splits:
- name: train
num_bytes: 1078734
num_examples: 500
download_size: 483209
dataset_size: 1078734
---
# Dataset Card for "code_searchnet_reduced_val"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 607 | [
[ … embedding vector truncated … ] |
CJWeiss/multilong | 2023-10-26T21:38:41.000Z | [
"region:us"
] | CJWeiss | null | null | 0 | 11 | 2023-10-26T21:38:00 | ---
dataset_info:
features:
- name: id
dtype: string
- name: sources
sequence: string
- name: summary/long
dtype: string
- name: summary/short
dtype: string
- name: summary/tiny
dtype: string
splits:
- name: train
num_bytes: 1381375966.0
num_examples: 3404
- name: test
num_bytes: 265556700.0
num_examples: 681
- name: valid
num_bytes: 199444850.0
num_examples: 454
download_size: 835227494
dataset_size: 1846377516.0
---
# Dataset Card for "multilong"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 653 | [
[ … embedding vector truncated … ] |
baohuynhbk14/vietnamese-guanaco-llama2-1k | 2023-10-27T08:01:25.000Z | [
"region:us"
] | baohuynhbk14 | null | null | 0 | 11 | 2023-10-27T07:39:39 | Entry not found | 15 | [
[ … embedding vector truncated … ] |
Phaedrus/rsna_5k_512_a | 2023-10-27T09:16:28.000Z | [
"region:us"
] | Phaedrus | null | null | 0 | 11 | 2023-10-27T09:10:55 | ---
dataset_info:
features:
- name: image
dtype: image
- name: label1
dtype: image
- name: label2
dtype: image
- name: label3
dtype: image
- name: label4
dtype: image
splits:
- name: train
num_bytes: 8605017463.0
num_examples: 2000
download_size: 574221474
dataset_size: 8605017463.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "rsna_5k_512_a"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 589 | [
[ … embedding vector truncated … ] |
felipeoes/filtered_qa_blue_amazon_legislation_v2_19k | 2023-10-28T00:50:40.000Z | [
"region:us"
] | felipeoes | null | null | 0 | 11 | 2023-10-28T00:50:39 | ---
dataset_info:
features:
- name: file_name
dtype: string
- name: prompt
dtype: string
- name: question
dtype: string
- name: answer
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 157786543
num_examples: 19302
download_size: 14666842
dataset_size: 157786543
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "filtered_qa_blue_amazon_legislation_v2_19k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 621 | [
[ … embedding vector truncated … ] |