id stringlengths 2 115 | lastModified stringlengths 24 24 | tags list | author stringlengths 2 42 ⌀ | description stringlengths 0 68.7k ⌀ | citation stringlengths 0 10.7k ⌀ | cardData null | likes int64 0 3.55k | downloads int64 0 10.1M | card stringlengths 0 1.01M |
|---|---|---|---|---|---|---|---|---|---|
czyzi0/the-mc-speech-dataset | 2023-07-23T13:59:19.000Z | [
"task_categories:text-to-speech",
"task_categories:automatic-speech-recognition",
"size_categories:10K<n<100K",
"language:pl",
"license:cc0-1.0",
"region:us"
] | czyzi0 | This is a public domain speech dataset consisting of 24018 short audio clips of a single speaker
reading sentences in Polish. A transcription is provided for each clip. The clips have a total length of
more than 22 hours.
The texts are in the public domain. The audio was recorded in 2021-22 as a part of my master's thesis and
is in the public domain. | @masterthesis{mcspeech,
title={Analiza porównawcza korpusów nagrań mowy dla celów syntezy mowy w języku polskim},
author={Czyżnikiewicz, Mateusz},
year={2022},
month={December},
school={Warsaw University of Technology},
type={Master's thesis},
doi={10.13140/RG.2.2.26293.24800},
note={Available at \\url{http://dx.doi.org/10.13140/RG.2.2.26293.24800}},
} | null | 0 | 3 | ---
license: cc0-1.0
task_categories:
- text-to-speech
- automatic-speech-recognition
language:
- pl
pretty_name: The MC Speech Dataset
size_categories:
- 10K<n<100K
---
This is a public domain speech dataset consisting of 24018 short audio clips of a single speaker reading sentences in Polish. A transcription is provided for each clip. The clips have a total length of more than 22 hours.
The texts are in the public domain. The audio was recorded in 2021-22 as a part of my [master's thesis](http://dx.doi.org/10.13140/RG.2.2.26293.24800) and is in the public domain.
If you use this dataset, please cite:
```
@masterthesis{mcspeech,
title={Analiza porównawcza korpusów nagrań mowy dla celów syntezy mowy w języku polskim},
author={Czyżnikiewicz, Mateusz},
year={2022},
month={December},
school={Warsaw University of Technology},
type={Master's thesis},
doi={10.13140/RG.2.2.26293.24800},
note={Available at \url{http://dx.doi.org/10.13140/RG.2.2.26293.24800}},
}
```
More info about the dataset can be found at https://github.com/czyzi0/the-mc-speech-dataset
|
nyuuzyou/stickers | 2023-07-19T18:24:04.000Z | [
"task_categories:image-classification",
"license:wtfpl",
"region:us"
] | nyuuzyou | null | null | null | 0 | 3 | ---
task_categories:
- image-classification
license: wtfpl
viewer: false
---
# Telegram Stickers Image Classification Dataset
This dataset consists of a collection of Telegram stickers that have been converted into images for the purpose of image classification.
## Dataset Details
- Image Size: 512x512 pixels
- Number of Classes: 1276
- Total Number of Images: 672,911
The dataset was created by extracting stickers from 23,681 sets of stickers in Telegram. Animated and video stickers were removed, and sets that had only one emoji assigned to all stickers were ignored. Stickers that did not fit the 512x512 size were padded with empty pixels. Furthermore, all stickers were converted to the .png format to ensure consistency.
The class names for the stickers were assigned based on the Unicode emoji given to them by the author. For example, the Unicode code point U+1F917 represents the 🤗 emoji. Each sticker in the dataset is labeled with the corresponding Unicode code point as its class.
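As a rough illustration of this labeling scheme (a sketch only — the exact class-name format, assumed here to be the `U+XXXX` notation used above, should be checked against the data):
```python
def label_to_emoji(label: str) -> str:
    """Convert a Unicode class label such as 'U+1F917' back to its emoji character."""
    return chr(int(label.removeprefix("U+"), 16))

print(label_to_emoji("U+1F917"))  # 🤗
```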
The name of each image in the dataset corresponds to the file ID of the sticker in Telegram. This unique identifier can be used to reference the original sticker in the Telegram platform.
## Dataset Split
- Training Set:
- Number of Images: 605,043
- Validation Set:
- Number of Images: 33,035
- Test Set:
- Number of Images: 34,833
### Additional Information
The training set `train.zip` has been divided into multiple parts, each of which is approximately 20 GB in size. To extract the dataset, you will need a program that supports extracting split archives, such as 7z.
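For example, assuming the parts follow the usual `train.zip.001`, `train.zip.002`, … naming (an assumption, not confirmed by this card), extraction with the 7z command-line tool could be scripted like this:
```python
import subprocess

# Point 7z at the first part; it picks up the remaining .002, .003, ... parts automatically.
subprocess.run(["7z", "x", "train.zip.001", "-odataset"], check=True)
```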
In the `dataset_resized` folder, you will find the resized version of the dataset. The images in this folder have been resized to 128x128 pixels.
Please note that the original dataset provided is in the format of 512x512-pixel images, while the `dataset_resized` folder contains the resized images of 128x128 pixels.
|
llm-book/aio_from_tohoku | 2023-07-14T11:33:15.000Z | [
"region:us"
] | llm-book | null | null | null | 0 | 3 | ---
dataset_info:
features:
- name: qid
dtype: string
- name: competition
dtype: string
- name: timestamp
dtype: string
- name: section
dtype: string
- name: number
dtype: string
- name: original_question
dtype: string
- name: original_answer
dtype: string
- name: original_additional_info
dtype: string
- name: question
dtype: string
- name: answers
list: string
splits:
- name: train
num_bytes: 9464003
num_examples: 22335
- name: validation
num_bytes: 409779
num_examples: 1000
download_size: 2267163
dataset_size: 9873782
---
# Dataset Card for llm-book/aio
This is the QA dataset from the "AI王" (AI King) competition used in the book 『大規模言語モデル入門』 (Introduction to Large Language Models).
It uses the dataset published in the GitHub repository [cl-tohoku/quiz-datasets](https://github.com/cl-tohoku/quiz-datasets).
## Licence
The copyright of some of the quiz questions included in this dataset belongs to the [abc/EQIDEN Executive Committee](https://abc-dive.com/portal/), and permission has been obtained to use these questions in the book.
Some of the quiz questions included in this dataset were created on commission by [株式会社キュービック](http://www.qbik.co.jp/) and [株式会社カプリティオ](https://capriccio.tokyo/), and are provided under the [Creative Commons Attribution-ShareAlike 4.0 (CC BY-SA 4.0)](https://creativecommons.org/licenses/by-sa/4.0/deed.ja) license.
The Wikipedia content attached to this dataset as passages is distributed under the [Creative Commons Attribution-ShareAlike 3.0 (CC BY-SA 3.0)](https://creativecommons.org/licenses/by-sa/3.0/deed.ja) license and the [GNU Free Documentation License (GFDL)](https://www.gnu.org/licenses/fdl.html).
For details on the licensing of the quiz questions, see [cl-tohoku/quiz-datasets](https://github.com/cl-tohoku/quiz-datasets).
|
pankajmathur/orca_minis_uncensored_dataset | 2023-07-04T05:56:20.000Z | [
"task_categories:text-generation",
"size_categories:100K<n<1M",
"language:en",
"license:cc-by-nc-sa-4.0",
"region:us"
] | pankajmathur | null | null | null | 9 | 3 | ---
license: cc-by-nc-sa-4.0
task_categories:
- text-generation
language:
- en
size_categories:
- 100K<n<1M
---
An uncensored, explain-tuned dataset of ~104K entries built from the WizardLM, Alpaca, and Dolly-V2 datasets using approaches from the Orca research paper.
We leverage all 15 system instructions provided in the Orca research paper to generate custom datasets, in contrast to the vanilla instruction-tuning approaches used by the original datasets.
This helps student models like orca_mini_v2_7b learn the thought process of the teacher model, ChatGPT (gpt-3.5-turbo-0301).
Please see how the system prompt is added before each instruction. |
MuzammilJethwa/Benetech_Graph_Data | 2023-07-05T06:30:29.000Z | [
"region:us"
] | MuzammilJethwa | null | null | null | 1 | 3 | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': dot
'1': horizontal_bar
'2': line
'3': scatter
'4': vertical_bar
splits:
- name: train
num_bytes: 130477446.0
num_examples: 7000
- name: validation
num_bytes: 39057040.0
num_examples: 2000
- name: test
num_bytes: 18922095.0
num_examples: 1000
download_size: 173854949
dataset_size: 188456581.0
---
# Dataset Card for "Benetech_Graph_Data"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
ohilikeit/empathetic_dialogues_mutli_turn_ko | 2023-08-04T02:59:46.000Z | [
"task_categories:text-generation",
"size_categories:10K<n<100K",
"language:ko",
"license:apache-2.0",
"region:us"
] | ohilikeit | null | null | null | 3 | 3 | ---
license: apache-2.0
task_categories:
- text-generation
language:
- ko
size_categories:
- 10K<n<100K
---
# Dataset Card for "한국어 일상 속 공감형 대화 데이터셋(멀티-턴)"
## Dataset Summary
- boostCamp AI Tech 5기 과정 중 NLP 12조 훈제연어들 팀의 최종 프로젝트에서 제작한 데이터입니다.
- 일상 속 다양한 상황에서 사용자와 챗봇 간의 대화를 담은 데이터셋 입니다.
- GPT4, GPT3.5-turbo로 제작된 합성데이터이며 싱글-턴, 2-턴, 3-턴 대화로 구성되어 있습니다.
- 답변은 **[공감적 표현 - 일반적인 대화 - 관련된 질문]** 의 형태를 가집니다.
## Generation Prompt Example(GPT3.5-turbo)
```
Take a close look at the following example and Conditions. Create nine sessions that each of the session is ongoing conversation about a single topic.
[Conditions]
- The questioner asks a question of appropriate length (1-2 lines) and you respond with an appropriate answer.
- The answer should be a complete sentence or two, but the first sentence should be sympathetic to the questioner's question and situation, and should be answered in a friendly and non-rude manner.
- The second sentence should be free-form, but the last sentence MUST end with an additional question related to the question and answer.
- The total length of each question and answer combined should be no more than nine words
- If the question is positive, you should be excited, upbeat, and positive in tone.
- If the question is negative, you should be warm, sensitive, and caring in tone.
- You may offer solutions to fit the situation.
- All answers should be written in a way that makes the person feel empathetic and comforted by the paired question.
- The content should be based on simple, everyday situations, but with enough variety to include specific situations (e.g., a counselor and a client, a student talking to a teacher about a problem, a conversation between friends).
- The conversations should be as generic and diverse as possible. However, try to include many topics related to psychology and everyday concerns.
- All responses should be polite, respectful, caring, warm, understanding, and compassionate in tone and content.
- Do not generate topic or word about 'stress'.
- You should answer question 1 with answer 1, ask question 2 related to answer 1, answer question 2 with answer 2, ask question 3 related to answer 2, and answer question 3 with answer 3.
- Do not re-mension about question on the following answer.
[Example Session]
(Session 1)
Question 1: I had a fight with my boyfriend and I'm so angry I can't calm down.
Answer 1: Oh no, I'm so sorry to hear that. Relationship conflict is always tough, and it's completely understandable to feel angry and upset. Can I ask what the fight was about? Talking about it could help you process your feelings.
Question 2: He constantly 10 minutes late for appointments and never say sorry.
Answer 2: Oh, that's so frustrating! Punctuality is a matter of respect for other people's time, and it sounds like your boyfriend isn't showing you that respect. You're allowed to be upset about that. How long has this been going on? Have you talked to him about it before, or is this the first time you've brought it up?
Question 3: It keeps happening. This is the fourth time in the past month that I've recognized it. Does that make sense?
Answer 3: Yes, that totally makes sense! Four times in a month is way too much. It's not just an occasional, forgivable mistake at that point. It's becoming a pattern of behavior that's disrespectful of your time. Have you told him how much it bothers you, or has he just brushed it off when you've mentioned it?
(Session 2)
Question 1:
Answer 1:
Question 2:
Anwer 2:
.....
(Session 9)
Each session must be about one topic and has three question-answer conversation pair. nine sessions must have different topics. Create as many as possible sessions you can. Examples are examples, don't copy them.
create it in Korean. please create nine sessions.
```
## Links
- repository : [boostcampaitech5/level3_nlp_finalproject-nlp-12](https://github.com/boostcampaitech5/level3_nlp_finalproject-nlp-12)
- huggingface : [ohilikeit/empathetic_dialogues_kr](https://huggingface.co/datasets/ohilikeit/empathetic_dialogues_kr)
## License
- Apache-2.0
|
Veucci/turkish-lyric-to-genre | 2023-07-05T19:22:09.000Z | [
"task_categories:text-classification",
"size_categories:1K<n<10K",
"language:tr",
"license:cc-by-nc-4.0",
"music",
"region:us"
] | Veucci | null | null | null | 1 | 3 | ---
license: cc-by-nc-4.0
size_categories:
- 1K<n<10K
task_categories:
- text-classification
language:
- tr
tags:
- music
---
# Song Lyrics Dataset
## Description
This dataset contains a collection of song lyrics from various artists and genres in Turkish. It is intended to be used for research, analysis, and other non-commercial purposes.
## Dataset Details
The dataset is organized in a tabular format with the following columns:
- `Genre` (int): Genre of the lyrics
- `Lyrics` (str): The lyrics of the song.
- Pop: 1085 rows
- Rock: 765 rows
- Hip-Hop: 969 rows
- Arabesk: 353 rows
## Usage
Feel free to use this dataset for non-commercial purposes such as academic research, natural language processing tasks, sentiment analysis, or personal projects. You are allowed to analyze, modify, and derive insights from the dataset.
If you use this dataset in your work, we kindly request that you provide attribution by citing this repository or linking back to it.
## License
This dataset is released under the Creative Commons Attribution-NonCommercial license. This means that you are not allowed to use the dataset for commercial purposes. For detailed information about the license, please refer to the [LICENSE](./LICENSE) file.
## Contact
If you have any questions, suggestions, or concerns regarding this dataset, please feel free to reach out to email at [efe.ozkan732@gmail.com](mailto:efe.ozkan732@gmail.com).
Happy exploring and analyzing the world of song lyrics!
|
wisenut-nlp-team/korquad_v1.0_multiple_gqa | 2023-07-10T07:48:54.000Z | [
"region:us"
] | wisenut-nlp-team | null | null | null | 0 | 3 | ---
dataset_info:
features:
- name: id
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: answers
sequence: string
- name: similar_context
sequence: string
- name: questions
sequence: string
splits:
- name: train
num_bytes: 120956461
num_examples: 9053
- name: validation
num_bytes: 11697414
num_examples: 880
download_size: 0
dataset_size: 132653875
---
# Dataset Card for "korquad_v1.0_multiple_gqa"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
yardeny/mlm_test_set_context_len_512 | 2023-07-06T18:32:23.000Z | [
"region:us"
] | yardeny | null | null | null | 0 | 3 | ---
dataset_info:
features:
- name: input_ids
sequence: int32
- name: token_type_ids
sequence: int8
- name: attention_mask
sequence: int8
- name: special_tokens_mask
sequence: int8
splits:
- name: train
num_bytes: 576000
num_examples: 160
download_size: 188966
dataset_size: 576000
---
# Dataset Card for "loss_landscape_test_set"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
iamkaikai/amazing_logos | 2023-07-11T20:28:47.000Z | [
"license:unknown",
"region:us"
] | iamkaikai | null | null | null | 0 | 3 | ---
license: unknown
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 57955506.86
num_examples: 6866
download_size: 53988605
dataset_size: 57955506.86
---
Super high quality logos from Logobook.com
|
janPiljan/SaGIS | 2023-07-15T03:19:03.000Z | [
"task_categories:table-question-answering",
"size_categories:n<1K",
"language:en",
"license:mit",
"chemistry",
"biology",
"medical",
"general",
"region:us"
] | janPiljan | null | null | null | 1 | 3 | ---
license: mit
task_categories:
- table-question-answering
language:
- en
tags:
- chemistry
- biology
- medical
- general
pretty_name: The Scientific and General Information (Data)Set
size_categories:
- n<1K
---
SaGIS: The Scientific and General Information (Data)Set.
The information stored in the dataset is information from OpenAI GPT 3.5-Turbo, Google PaLM, and Anthropic Claude (2). The information may not be entirely factual. |
ssbuild/gpt_conversations_3.5m_cn | 2023-07-08T10:57:50.000Z | [
"license:agpl-3.0",
"region:us"
] | ssbuild | null | null | null | 1 | 3 | ---
license: agpl-3.0
---
|
OpenLeecher/Teatime | 2023-07-09T11:00:42.000Z | [
"task_categories:text-generation",
"size_categories:10K<n<100K",
"language:en",
"language:ko",
"license:apache-2.0",
"region:us"
] | OpenLeecher | null | null | null | 19 | 3 | ---
license: apache-2.0
task_categories:
- text-generation
language:
- en
- ko
size_categories:
- 10K<n<100K
---
### INFO:
These are the parsed logs from the "teatime logs" xlsx files.
Every user edit or message regeneration creates a new branch in the conversation tree, copying all earlier messages; this leads to message duplication in the 'all_logs.json' file.
The 'longest' files are different. They only contain the longest path from the first to the last message. This approach aims to avoid duplication. Ideally, the '_longest' files should have no repeat messages.
### all_logs.json
Total tokens: 237442515
Average chat token length: 4246.03
Median chat token length: 3797.0
Average messages per chat: 18.96
Median messages per chat: 15.0
Total number of chats: 55921
### all_logs_longest.json
Total tokens: 27611121
Average chat token length: 2499.65
Median chat token length: 1335.5
Average messages per chat: 11.27
Median messages per chat: 5.0
Total number of chats: 11046
 |
Csplk/hipcam | 2023-07-08T16:14:00.000Z | [
"task_categories:image-classification",
"task_categories:image-segmentation",
"size_categories:1K<n<10K",
"language:en",
"license:openrail",
"webcams",
"outdoor",
"indoor",
"region:us"
] | Csplk | null | null | null | 0 | 3 | ---
license: openrail
task_categories:
- image-classification
- image-segmentation
language:
- en
tags:
- webcams
- outdoor
- indoor
pretty_name: hipcam
size_categories:
- 1K<n<10K
--- |
mdroth/huggingface-course_section-5_zst | 2023-07-29T16:19:37.000Z | [
"task_categories:text-classification",
"language:en",
"license:apache-2.0",
"region:us"
] | mdroth | null | null | null | 0 | 3 | ---
license: apache-2.0
task_categories:
- text-classification
language:
- en
pretty_name: section 5 zst datasets
---
# Hugging Face course section 5 .zst datasets
You can use [these datasets](https://huggingface.co/datasets/mdroth/PubMed-200k-RTC/tree/main/data) for whatever you want (note the [Apache 2.0 license](https://huggingface.co/datasets/mdroth/PubMed-200k-RTC/blob/main/data/Apache_2.0), though) but their primary purpose is to serve as a drop-in replacement for the sub-datasets of [The Pile](https://pile.eleuther.ai/) used in [section 5](https://huggingface.co/learn/nlp-course/chapter5/4?fw=pt#what-is-the-pile) of the [HuggingFace course](https://huggingface.co/learn/nlp-course/chapter5/4?fw=pt#what-is-the-pile).
## Data sources
- PubMed-200k-RTC:<br>https://www.kaggle.com/datasets/matthewjansen/pubmed-200k-rtc/download?datasetVersionNumber=5
- LegalText-classification:<br>https://www.kaggle.com/datasets/shivamb/legal-citation-text-classification/download?datasetVersionNumber=1
These are Kaggle datasets, so you need to be logged into a [Kaggle account](https://www.kaggle.com/account/login?phase=startSignInTab&returnUrl=%2F) to download them from Kaggle. However, you don't actually need to download (and preprocess) them from Kaggle – you can just use them as shown in the following **Usage** section.
## Usage
To load a dataset from this repo, run
```python
import zstandard
from datasets import load_dataset
load_dataset("json", data_files=url, split="train")
```
where `url` should be one of the following download links:
- `LegalText-classification_train.jsonl.zst`:<br>https://huggingface.co/datasets/mdroth/PubMed-200k-RTC/resolve/main/data/LegalText-classification_train.jsonl.zst,
- `LegalText-classification_train_min.jsonl.zst`:<br>https://huggingface.co/datasets/mdroth/PubMed-200k-RTC/resolve/main/data/LegalText-classification_train_min.jsonl.zst,
- `PubMed-200k-RTC_train.jsonl.zst`:<br>https://huggingface.co/datasets/mdroth/PubMed-200k-RTC/resolve/main/data/PubMed-200k-RTC_train.jsonl.zst, or
- `PubMed-200k-RTC_train_min.jsonl.zst`:<br>https://huggingface.co/datasets/mdroth/PubMed-200k-RTC/resolve/main/data/PubMed-200k-RTC_train_min.jsonl.zst.
Example:
```python
import zstandard
from datasets import load_dataset
url = "https://huggingface.co/datasets/mdroth/PubMed-200k-RTC/resolve/main/data/LegalText-classification_train_min.jsonl.zst"
load_dataset("json", data_files=url, split="train")
``` |
ssbuild/alpaca_tabular | 2023-07-09T05:15:27.000Z | [
"license:apache-2.0",
"region:us"
] | ssbuild | null | null | null | 3 | 3 | ---
license: apache-2.0
---
|
hermanshid/doctor-id-qa | 2023-07-09T14:56:26.000Z | [
"task_categories:text2text-generation",
"task_categories:question-answering",
"size_categories:1K<n<10K",
"language:id",
"license:apache-2.0",
"doctor",
"qa",
"region:us"
] | hermanshid | null | null | null | 2 | 3 | ---
license: apache-2.0
task_categories:
- text2text-generation
- question-answering
language:
- id
tags:
- doctor
- qa
pretty_name: Indonesian Health Question Answer
size_categories:
- 1K<n<10K
--- |
pierre-loic/climate-news-articles | 2023-07-09T18:26:00.000Z | [
"task_categories:text-classification",
"size_categories:1K<n<10K",
"language:fr",
"license:cc",
"climate",
"news",
"region:us"
] | pierre-loic | null | null | null | 1 | 3 | ---
license: cc
task_categories:
- text-classification
language:
- fr
tags:
- climate
- news
pretty_name: Titres de presse française avec labellisation "climat/pas climat"
size_categories:
- 1K<n<10K
---
# 🌍 Dataset of French press articles labeled as climate-related or not
*🇬🇧 / 🇺🇸 : as this data set is based only on French data, all explanations are written in French in this repository. The goal of the dataset is to train a model to classify titles of French newspapers in two categories : if it's about climate or not.*
## 🗺️ Context
This classification dataset of **French press article headlines** was created for the [Data for good](https://dataforgood.fr/) association in Grenoble, and more specifically for the [Quota climat](https://www.quotaclimat.org/) association.
## 💾 The dataset
The training set contains 2,007 press headlines (1,923 not about the climate and 84 about the climate). The test set contains 502 press headlines (481 not about the climate and 21 about the climate).
 |
ccosme/FiReCS | 2023-07-12T00:13:04.000Z | [
"task_categories:text-classification",
"task_categories:zero-shot-classification",
"size_categories:10K<n<100K",
"language:en",
"language:tl",
"license:cc-by-4.0",
"code-switching",
"sentiment analysis",
"low-resource languages",
"taglish",
"Filipino",
"region:us"
] | ccosme | null | null | null | 0 | 3 | ---
license: cc-by-4.0
task_categories:
- text-classification
- zero-shot-classification
language:
- en
- tl
tags:
- code-switching
- sentiment analysis
- low-resource languages
- taglish
- Filipino
size_categories:
- 10K<n<100K
---
# Dataset Card for Filipino-English Reviews with Code-Switching (FiReCS)
### Dataset Summary
We introduce FiReCS, the first sentiment-annotated corpus of product and service reviews involving Filipino-English code-switching. The data set is composed of 10,487 reviews with a fairly balanced number per sentiment class. Inter-annotator agreement is high, with a Krippendorff's α for the ordinal metric of 0.83. Three human annotators were tasked to manually label reviews according to three polarity classes: Positive, Neutral, and Negative.
### Supported Tasks and Leaderboards
Sentiment analysis of bilingual text with code-switching / code-mixing.
### Languages
- Filipino
- English
## Dataset Structure
### Data Fields
* `review`: a string containing the body of the review
* `label`: an integer containing the label encoding of the gold-truth label provided by the human annotators
#### Label encoding
* 2 - Positive
* 1 - Neutral
* 0 - Negative
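As a small illustration of this encoding (a sketch; adapt it to however you load the files):
```python
# Label encoding from the card above
LABEL_NAMES = {0: "Negative", 1: "Neutral", 2: "Positive"}

def decode_label(label: int) -> str:
    """Map the integer gold label to its sentiment name."""
    return LABEL_NAMES[label]

assert decode_label(2) == "Positive"
```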
### Data Splits
| Data set split | Positive | Neutral | Negative | Total |
| -------------- | -------- | ------- | -------- | ----- |
| Train set | 2,410 | 2,549 | 2,381 | 7,340 |
| Test set | 1,033 | 1,087 | 1,027 | 3,147 |
### Dataset Creation and Annotation
The data set was created using publicly available online service and product reviews from Google Maps Reviews and Shopee Philippines. Only the rating and review fields were collected and stored.
Three annotators, all native speakers of Filipino and fluent in English, were tasked to manually label the data set. The first two annotators labeled the same full set of reviews. Any disagreements were sent to a third annotator.
### Personal and Sensitive Information
No personal information were collected and stored.
### Licensing Information
The FiReCS data set version 1.0 is released under the [CC-BY-4.0](https://creativecommons.org/licenses/by/4.0/) license.
### Citation Information
TBA
|
artemsnegirev/blended_skill_talk_ru | 2023-07-11T16:14:31.000Z | [
"task_categories:conversational",
"task_ids:dialogue-generation",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:ru",
"license:unknown",
"arxiv:2004.08449",
"region:us"
] | artemsnegirev | null | null | null | 0 | 3 | ---
annotations_creators:
- crowdsourced
language_creators:
- crowdsourced
language:
- ru
license:
- unknown
multilinguality:
- monolingual
pretty_name: BlendedSkillTalk
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- conversational
task_ids:
- dialogue-generation
paperswithcode_id: blended-skill-talk
dataset_info:
features:
- name: personas
sequence: string
- name: additional_context
dtype: string
- name: previous_utterance
sequence: string
- name: context
dtype: string
- name: free_messages
sequence: string
- name: guided_messages
sequence: string
- name: suggestions
sequence:
- name: convai2
dtype: string
- name: empathetic_dialogues
dtype: string
- name: wizard_of_wikipedia
dtype: string
- name: guided_chosen_suggestions
sequence: string
splits:
- name: train
num_examples: 4819
- name: validation
num_examples: 1009
- name: test
num_examples: 980
---
# Dataset Card for "blended_skill_talk"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Additional Information](#additional-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://parl.ai/projects/bst/](https://parl.ai/projects/bst/)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [Can You Put it All Together: Evaluating Conversational Agents' Ability to Blend Skills](https://arxiv.org/abs/2004.08449v1)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Dataset Summary
Russian version of the Blended Skill Talk dataset. Each utterance was translated separately by a paid translator. A dataset of 7k conversations explicitly designed to exhibit multiple conversation modes: displaying personality, having empathy, and demonstrating knowledge.
## Dataset Structure
### Data Instances
An example of 'train' looks as follows.
```
{
"personas": ["мне все время звонит женщина.", "однажды мне предложили профессионально заниматься баскетболом."],
"additional_context": "",
"previous_utterance": ["Я по-настоящему обрадовался, когда мой папа подарил мне мой первый автомобиль. Это было просто счастливое чувство", "Да. Мне знакомо это чувство, хотя мне пришлось купить свой собственный."],
"context": "empathetic_dialogues", "free_messages": ["Автомобиль был именно таким, как я хотел, - спортивная машина новой модели.", "Mustang GT с откидным верхом", "Несколько лет назад я помогал с реставрацией 67-го. Это было просто великолепно."],
"guided_messages": ["Это был хороший выбор, что это была за машина?", "Мило! Жаль, что вам не удалось заполучить в свои руки "Мустанг II" 1963 года выпуска. Это моя любимая машина.", "Это звучит потрясающе. Вы восстановили его вместе со своим отцом?"],
"suggestions": {
"convai2": ["я не большой любитель спортивных автомобилей, лол. мне нужна машина, которая может испачкаться", "я не большой любитель спортивных автомобилей, лол. мне нужна машина, которая может испачкаться", "мы с папой восстановили мой, он принадлежал ему."],
"empathetic_dialogues": ["Это был хороший выбор, рассматривали ли вы какие-либо другие марки / модели?", "О, ничего себе, это потрясающая машина.", "Это звучит как классический автомобиль. Ты часто катался на нем со своим отцом?"],
"wizard_of_wikipedia": ["Это круто. Мне нравятся экономичные автомобили, потому что они доступны по цене", "Мило! Жаль, что вам не удалось заполучить в свои руки Mustang II 1963 года выпуска, который представляет собой четырехместный концепт-кар.", "Мило! Жаль, что вам не удалось заполучить в свои руки Mustang II 1963 года выпуска, который представляет собой четырехместный концепт-кар."]},
"guided_chosen_suggestions": ["empathetic_dialogues", "wizard_of_wikipedia", "empathetic_dialogues"]
}
```
The original version of the dataset has a "label_candidates" field; it was not translated.
### Data Fields
The data fields are the same among all splits.
- `personas`: a `list` of `string` features.
- `additional_context`: a `string` feature.
- `previous_utterance`: a `list` of `string` features.
- `context`: a `string` feature.
- `free_messages`: a `list` of `string` features.
- `guided_messages`: a `list` of `string` features.
- `suggestions`: a dictionary feature containing:
- `convai2`: a `string` feature.
- `empathetic_dialogues`: a `string` feature.
- `wizard_of_wikipedia`: a `string` feature.
- `guided_chosen_suggestions`: a `list` of `string` features.
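A minimal usage sketch with the `datasets` library (the repository id is taken from this card; field names as listed above):
```python
from datasets import load_dataset

ds = load_dataset("artemsnegirev/blended_skill_talk_ru", split="train")

example = ds[0]
print(example["personas"])            # persona strings
print(example["previous_utterance"])  # previous utterances
print(example["free_messages"])       # user-side messages of the dialogue
```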
### Data Splits
| name |train|validation|test|
|-------|----:|---------:|---:|
|default| 4819| 1009| 980|
## Additional Information
### Citation Information
```
@misc{smith2020evaluating,
title={Can You Put it All Together: Evaluating Conversational Agents' Ability to Blend Skills},
author={Eric Michael Smith and Mary Williamson and Kurt Shuster and Jason Weston and Y-Lan Boureau},
year={2020},
eprint={2004.08449},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
### Contributions
Thanks to [@artemsnegirev](https://github.com/artemsnegirev), [Dmitriy Sidorenko](https://github.com/DimaSidorenko) for adding this dataset. |
AlekseyKorshuk/crowdsource-v2.0 | 2023-07-11T22:23:55.000Z | [
"region:us"
] | AlekseyKorshuk | null | null | null | 0 | 3 | ---
dataset_info:
features:
- name: bot_id
dtype: string
- name: conversation_id
dtype: string
- name: conversation
list:
- name: content
dtype: string
- name: do_train
dtype: bool
- name: role
dtype: string
- name: bot_config
struct:
- name: bot_label
dtype: string
- name: description
dtype: string
- name: developer_uid
dtype: string
- name: first_message
dtype: string
- name: image_url
dtype: string
- name: introduction
dtype: string
- name: max_history
dtype: int64
- name: memory
dtype: string
- name: model
dtype: string
- name: name
dtype: string
- name: prompt
dtype: string
- name: repetition_penalty
dtype: float64
- name: response_length
dtype: int64
- name: temperature
dtype: float64
- name: theme
dtype: 'null'
- name: top_k
dtype: int64
- name: top_p
dtype: float64
- name: user_label
dtype: string
- name: conversation_history
dtype: string
- name: system
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 106588734
num_examples: 19541
download_size: 65719430
dataset_size: 106588734
---
# Dataset Card for "crowdsource-v2.0"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
0x22almostEvil/words-operations-rewards-5k | 2023-07-11T21:15:27.000Z | [
"task_categories:text-classification",
"task_categories:token-classification",
"task_categories:question-answering",
"size_categories:1K<n<10K",
"language:en",
"language:ru",
"license:apache-2.0",
"semantics",
"region:us"
] | 0x22almostEvil | null | null | null | 0 | 3 | ---
license: apache-2.0
task_categories:
- text-classification
- token-classification
- question-answering
language:
- en
- ru
tags:
- semantics
size_categories:
- 1K<n<10K
---
# Dataset Card for words-operations-rewards-5k with 5K entries.
### Dataset Summary
License: Apache-2.0. Contains JSONL. Use this for Reward Models.
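A minimal sketch of turning one record into (prompt, chosen, rejected) pairs for reward-model training, based on the record format shown in the Example section below; the pairing scheme (rank 1 vs. rank 0) and the file name are assumptions, not part of the dataset:
```python
import json

def to_preference_pairs(record: dict) -> list:
    """Pair each rank-1 (preferred) reply against every rank-0 reply of a message tree."""
    prompt = record["prompt"]["text"]
    replies = record["prompt"]["replies"]
    chosen = [r["text"] for r in replies if r["meta"]["rank"] == 1]
    rejected = [r["text"] for r in replies if r["meta"]["rank"] == 0]
    return [(prompt, c, r) for c in chosen for r in rejected]

with open("words-operations-rewards-5k.jsonl", encoding="utf-8") as f:  # file name assumed
    pairs = [p for line in f for p in to_preference_pairs(json.loads(line))]
```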
# Solved tasks:
- Count Letters;
- Write Backwards;
- Write Character on a Position;
- Repeat Word;
- Write In Case;
- Change Case on a Position;
- Write Numbering;
- Connect Characters;
- Write a Word from Characters;
- Count Syllables;
# Example:
```json
{
"message_tree_id": "00000000-0000-0000-0000-000000000004",
"tree_state": "ready_for_export",
"prompt": {
"message_id": "00000000-0000-0000-0000-000000000004",
"text": "Count the number of letters in the word «detailed»",
"role": "prompter",
"lang": "en",
"replies": [
{ "message_id": "00000000-0000-0000-0000-000000000005",
"text": "8", "role": "assistant", "lang": "en",
"meta": {"rank": 1}, "replies": []},
{ "message_id": "00000000-0000-0000-0000-000000000006",
"text": "7", "role": "assistant", "lang": "en",
"meta": {"rank": 0}, "replies": []},
{"message_id": "00000000-0000-0000-0000-000000000007",
"text": "7 or 9", "role": "assistant", "lang": "en",
"meta": {"rank": 0}, "replies": []}]
}
}
``` |
autopilot-ai/correct-incorrect-spelling-pairs | 2023-07-12T02:08:05.000Z | [
"task_categories:text-classification",
"task_categories:text2text-generation",
"size_categories:100K<n<1M",
"language:gu",
"license:apache-2.0",
"region:us"
] | autopilot-ai | null | null | null | 0 | 3 | ---
license: apache-2.0
task_categories:
- text-classification
- text2text-generation
language:
- gu
pretty_name: spelling pairs
size_categories:
- 100K<n<1M
---
This is a dataset containing correct and incorrect spelling pairs in Gujarati, created by us using artificial noise. |
Vezora/Mini_Orca_Code_Uncencored_alpaca_Format | 2023-08-14T04:51:11.000Z | [
"license:apache-2.0",
"region:us"
] | Vezora | null | null | null | 1 | 3 | ---
license: apache-2.0
---
This dataset is a modified version of psmathur's Mini Orca dataset, formatted in the Alpaca format and uncensored.
The dataset is filtered to feature only coding instructions, around 50k code examples.
For ALPACA LORA users:
Modules you can target with LoRA: "gate_proj", "down_proj", "up_proj", "q_proj", "v_proj", "k_proj", "o_proj"
Most LoRA models use: "q_proj", "v_proj", "k_proj", "o_proj"
Platypus, which got terrific results, used: "gate_proj", "down_proj", "up_proj"
Research on targeting certain modules still needs to be done, but if you don't want to train over a previously trained model's newly learned abilities, target different modules than the ones used for the original training.
Hyperparameters used by Platypus:
Hyperparameters for 13B and 70B models:
| Hyperparameter | Platypus2-13B / 70B |
| --- | --- |
| batch size | 16 |
| micro batch size | 1 |
| num epochs | 1 |
| learning rate | 4e-4 / 3e-4 |
| cutoff len | 4096 |
| lora rank | 16 |
| lora alpha | 16 |
| lora dropout | 0.05 |
| lora target modules | gate_proj, down_proj, up_proj |
| train on inputs | False |
| add eos token | False |
| group by length | False |
| prompt template | alpaca |
| lr scheduler | cosine |
| warmup steps | 100 |
I would recommend using a batch size of 4-10 and a cutoff length of ≤ 2048 to avoid VRAM issues, with load_in_4bit, NormalFloat (nf4), and bf16 for a single 24 GB card.
If training with oobabooga, you must edit the "training.py" file in the "oobabooga_windows\text-generation-webui\modules" folder. In line 49, edit the standard modules to the modules you would like to target.
If training with alpaca lora, use the argument --lora_target_modules when running the train.py command. To load in 4-bit you must edit the train file, adding load_in_4bit, bf16, and NormalFloat quantization.
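As a rough sketch of these settings with the PEFT and Transformers libraries (values copied from the table above; everything else, including quantization details, is an assumption rather than the exact recipe used here):
```python
import torch
from transformers import BitsAndBytesConfig
from peft import LoraConfig

# 4-bit NormalFloat quantization with bf16 compute, as suggested above for a single 24 GB card
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

# Platypus-style LoRA configuration from the hyperparameter table
lora_config = LoraConfig(
    r=16,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["gate_proj", "down_proj", "up_proj"],
    bias="none",
    task_type="CAUSAL_LM",
)
```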
|
yuean/EuroSAT-2750 | 2023-07-13T02:04:28.000Z | [
"region:us"
] | yuean | null | null | null | 0 | 3 | Entry not found |
J0nasW/paperswithcode | 2023-07-31T12:23:25.000Z | [
"task_categories:text-classification",
"task_categories:feature-extraction",
"size_categories:10K<n<100K",
"language:en",
"license:mit",
"region:us"
] | J0nasW | null | null | null | 0 | 3 | ---
license: mit
task_categories:
- text-classification
- feature-extraction
language:
- en
size_categories:
- 10K<n<100K
---
# A cleaned dataset from [paperswithcode.com](https://paperswithcode.com/)
*Last dataset update: July 2023*
This is a cleaned-up dataset obtained from [paperswithcode.com](https://paperswithcode.com/) through their [API](https://paperswithcode.com/api/v1/docs/) service. It contains around 56K papers carefully categorized into 3K tasks and 16 areas. The papers contain arXiv and NIPS IDs as well as title, abstract, and other meta information.
It can be used for training text classifiers that concentrate on the use of specific AI and ML methods and frameworks.
### Contents
It contains the following tables:
- papers.csv (around 56K)
- papers_train.csv (80% from 56K)
- papers_test.csv (20% from 56K)
- tasks.csv
- areas.csv
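A minimal sketch of reading the tables with pandas (the file paths are assumptions about this repository's layout):
```python
import pandas as pd
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="J0nasW/paperswithcode",   # this dataset repository
    filename="papers_train.csv",       # path assumed
    repo_type="dataset",
)
papers_train = pd.read_csv(path)
print(papers_train.shape)
```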
### Specials
UUIDs were added to the dataset since the PapersWithCode IDs (pwc_ids) are not distinct enough. These UUIDs may change in the future with new versions of the dataset.
Also, embeddings were calculated for all of the 56K papers using the brilliant model [SciNCL](https://huggingface.co/malteos/scincl), as well as dimensionality-reduced 2D coordinates computed with UMAP.
There is also a simple Python notebook which was used to obtain and refactor the dataset. |
daqc/wikihow-spanish | 2023-07-14T12:26:09.000Z | [
"task_categories:text-generation",
"task_categories:question-answering",
"size_categories:10K<n<100K",
"language:es",
"license:cc",
"wikihow",
"gpt2",
"spanish",
"region:us"
] | daqc | null | null | null | 1 | 3 | ---
license: cc
task_categories:
- text-generation
- question-answering
language:
- es
tags:
- wikihow
- gpt2
- spanish
pretty_name: wikihow-spanish
size_categories:
- 10K<n<100K
---
## Wikihow in Spanish ##
This dataset was obtained from the GitHub repository of [Wikilingua](https://github.com/esdurmus/Wikilingua).
## License ##
- Articles are provided by wikiHow <https://www.wikihow.com/Main-Page>, a wiki that is building the world's largest and highest-quality how-to manual. Please edit the articles and find author credits at wikiHow.com. wikiHow content can be shared under a [Creative Commons license](http://creativecommons.org/licenses/by-nc-sa/3.0/).
- See [this web page](https://www.wikihow.com/wikiHow:Attribution) for the specific attribution guidelines.
|
Myashka/SO-Python_QA-filtered-2023-no_code-tanh_score | 2023-07-18T09:48:50.000Z | [
"task_categories:question-answering",
"size_categories:10K<n<100K",
"language:en",
"license:mit",
"region:us"
] | Myashka | null | null | null | 2 | 3 | ---
license: mit
task_categories:
- question-answering
language:
- en
size_categories:
- 10K<n<100K
---
SO dataset of `python` tag data
Question filters:
- images
- links
- code blocks
- Q_Score > 0
- Answer_count > 0
Answers filters:
- images
- links
- code blocks
Scores are produced by scaling the original SO answers' scores to their IQR range with an AbsMaxScaler and then applying tanh.
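One plausible reading of that transformation, as a rough sketch only (the exact scaling pipeline is not fully specified here):
```python
import numpy as np
from sklearn.preprocessing import MaxAbsScaler

def transform_scores(raw_scores: np.ndarray) -> np.ndarray:
    """Clip raw answer scores to their IQR, scale to [-1, 1], then squash with tanh."""
    q1, q3 = np.percentile(raw_scores, [25, 75])
    clipped = np.clip(raw_scores, q1, q3)
    scaled = MaxAbsScaler().fit_transform(clipped.reshape(-1, 1)).ravel()
    return np.tanh(scaled)
```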
|
declip/Minecraft-Server-Chat | 2023-07-14T19:25:10.000Z | [
"license:cc0-1.0",
"region:us"
] | declip | null | null | null | 0 | 3 | ---
license: cc0-1.0
---
# Minecraft Server Chat
Important Info: This dataset contains swears. I filtered out as much racism as possible. People who were racist were banned from the server. I am not affiliated with the server in any way.
A collection of 2,000,000 messages said across two years on a Minecraft server. The Minecraft semi-anarchy server logged all of its messages to Discord between 2020 and 2023. I downloaded all of them and made them into a JSON in chronological order. I also cleaned the messages and removed any racial slurs or offensive words. That being said, there are still swears.
|
dj801117/sexy_girl_data | 2023-07-15T07:27:13.000Z | [
"license:unlicense",
"region:us"
] | dj801117 | null | null | null | 0 | 3 | ---
license: unlicense
---
---
dataset_info:
features:
- name: content
dtype: string
- name: summary
dtype: string
task_categories:
- question-answering
language:
- zh
--- |
geekyrakshit/LoL-Dataset | 2023-07-15T08:43:12.000Z | [
"license:unknown",
"computer-vision",
"arxiv:1808.04560",
"region:us"
] | geekyrakshit | null | null | null | 0 | 3 | ---
license: unknown
tags:
- computer-vision
---
The LOL dataset is composed of 500 low-light and normal-light image pairs and is divided into 485 training pairs and 15 testing pairs. The low-light images contain noise produced during the photo capture process. Most of the images are indoor scenes. All the images have a resolution of 400×600. The dataset was introduced in the paper [Deep Retinex Decomposition for Low-Light Enhancement](https://arxiv.org/abs/1808.04560v1). |
hayden-donnelly/world-heightmaps-01 | 2023-07-31T01:50:57.000Z | [
"task_categories:unconditional-image-generation",
"task_categories:image-classification",
"task_categories:text-to-image",
"license:apache-2.0",
"region:us"
] | hayden-donnelly | null | null | null | 2 | 3 | ---
license: apache-2.0
task_categories:
- unconditional-image-generation
- image-classification
- text-to-image
pretty_name: World Heightmaps 01
viewer: false
---
# World Heightmaps 01
This is a dataset of ~600k 360x360 heightmaps. The heightmaps are labelled according to their latitude and longitude,
as well as a 1-dimensional slice ID. This slice ID specifies how they were sliced from the overall latitude/longitude image (see below for visualization).
There are a max of 100 such slices for each latitude/longitude, but usually there are fewer because of data cleaning.
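A small sketch of decoding the folder names into signed coordinates (the n/s and e/w sign convention is an inference from the directory structure shown further down, not something this card states):
```python
def parse_tile_name(name: str) -> tuple:
    """Decode a folder name like 'n00_e010' into (latitude, longitude) in degrees."""
    lat_part, lon_part = name.split("_")
    lat = int(lat_part[1:]) * (1 if lat_part[0] == "n" else -1)
    lon = int(lon_part[1:]) * (1 if lon_part[0] == "e" else -1)
    return lat, lon

assert parse_tile_name("n00_e010") == (0, 10)
```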
<img src="https://i.imgur.com/6gaoxEV.png" width="1000px"></img>
# Directory Structure
Folder name is latitude/longitude, image name is slice ID. Some folders also contain an image called "whole" which is the whole latitude/longitude image scaled down to 360x360.
```
n00_e010/
0.png
1.png
3.png
...
99.png
whole.png
n00_e011/
0.png
1.png
2.png
...
99.png
whole.png
...
``` |
Icannos/chess_studies | 2023-07-16T13:37:14.000Z | [
"task_categories:text-generation",
"size_categories:1K<n<10K",
"language:en",
"license:cc0-1.0",
"region:us"
] | Icannos | Chess studies and annotated games from the top lichess studies and from https://www.angelfire.com/games3/smartbridge/
This dataset consists of annotated chess games aggregated from several sources into a single dataset. It is intended
to train language models to generate chess games and studies. | TO COME. | null | 1 | 3 | ---
license: cc0-1.0
task_categories:
- text-generation
language:
- en
pretty_name: Annotated chess games and studies
size_categories:
- 1K<n<10K
---
# Dataset Card for Dataset Name
## Dataset Description
- **Point of Contact:** maxime.darrin@outlook.com
### Dataset Summary
This dataset consists of annotated chess games and chess studies by humans. It has two subsets: the first one, "lichess", consists of the top lichess studies scraped
from lichess.org. The "others" subset mainly consists of games from https://www.angelfire.com/games3/smartbridge/
### Supported Tasks and Leaderboards
It is intended for training generative chess text models.
### Languages
The main language represented is English, although other languages may be present in insignificant amounts.
## Dataset Structure
### How to use:
```python
from datasets import load_dataset
import chess.pgn
import io
dataset = load_dataset("Icannos/chess_studies", "lichess", streaming=True)
for d in dataset['train']:
pgn = io.StringIO(d['text'])
game = chess.pgn.read_game(pgn)
print(game)
break
```
### Data Instances
Example of an annotated game / study from lichess. The annotations include arrows and circles drawn on the board, in addition to natural-language commentary and sometimes
computer evaluation.
```
[Event "🇷🇺 Petrov Defense 🇷🇺: Nimzowitsch Attack"]
[Site "https://lichess.org/study/OnPMlzHT/oG7xbZFE"]
[Date "????.??.??"]
[Round "?"]
[White "?"]
[Black "?"]
[Result "*"]
[Annotator "https://lichess.org/@/LeninPerez"]
[ECO "C42"]
[Opening "Russian Game: Nimzowitsch Attack"]
[UTCDate "2021.02.11"]
[UTCTime "00:54:33"]
[Variant "Standard"]
1. e4 { Do you remember the movements from the previous chapter? I hope so, because you should do them now :D } 1... e5 { That's! } 2. Nf3 { And now? } 2... Nf6 { Great job! } 3. Nxe5 { You will find this frequently in your games with this defense. That is, the most common in move 3 is that the white player takes the pawn.
How can you drive the white knight of e5? } 3... d6 { Very well! [%csl Re5][%cal Rd6e5] } 4. Nf3 { You know what you have to do now, right? } 4... Nxe4 { Excellent, you get the pawn back!
The blue arrows represent all the options the white player has to play now. [%cal Bd2d3,Bd3d4,Bb1c3,Bd1e2] } 5. Nc3 { This is the Nimzowitsch Attack!
Change the knights [%cal Re4c3,Rc3e4] } 5... Nxc3 6. dxc3 { The white player must deal with the doubled pawns on the c column
Develop your bishop [%csl Gf8] } 6... Be7 7. Be3 { What would you play now?
(Psst, your king is in the center) } 7... O-O 8. Qd2 { White wants the queenside castling
Now you must take your knight to f3, what is the shortest route? [%csl Gc1,Gf6,Gb8][%cal Ge1c1] } 8... Nd7 { That's! } 9. O-O-O { This is really the Nimzowitsch Attack.
White castles long to plan a battle of attacks on opposite flanks!
Where should this knight go? [%csl Gd7] } 9... Nf6 10. Bd3 { Play 10.c5 [%csl Gc5][%cal Gc7c5] } 10... c5 { Very well! Now the white player wants to attack your king with the pawns on the queenside.
You must play as I indicate with the arrows, that is, attack the weak point a2 and improve your towers. [%csl Ra2][%cal Ba8c8,Bf8e8,Yc8e6,Yd8a5] } *
```
### Data Fields
The only field is "text". Each row contains exactly one game/pgn file in the text field.
### Data Splits
A single train split.
## Dataset Creation
### Source Data
#### Lichess studies
The lichess studies consist of the first 10 pages of studies (ranked by stars) on lichess (https://lichess.org/study/all/popular).
#### Others studies
I relied mainly on the compilation built over the years at https://www.angelfire.com/games3/smartbridge/, which consists of top players' games.
## Other Known Limitations
The annotations are mainly in English (although some are in French).
## Citation information
TO COME.
|
HeshamHaroon/QA_Arabic | 2023-07-16T10:37:36.000Z | [
"task_categories:question-answering",
"task_categories:text-generation",
"task_categories:text2text-generation",
"language:ar",
"license:apache-2.0",
"question-answer",
"language-learning",
"chatbot",
"region:us"
] | HeshamHaroon | null | null | null | 6 | 3 | ---
language:
- "ar"
pretty_name: "Questions and Answers Dataset in Arabic"
tags:
- "question-answer"
- "language-learning"
- "chatbot"
license: "apache-2.0"
task_categories:
- "question-answering"
- "text-generation"
- "text2text-generation"
---
# JSON File Description
## Overview
This JSON file contains a collection of questions and answers in Arabic. Each question is associated with its corresponding answer. The file is structured in a way that allows easy retrieval and utilization of the question-answer pairs.
## File Structure
The JSON file follows the following structure:
```json
{
"questions": [
{
"question": "من هو أول من نزل على سطح القمر؟",
"answer": "نيل أمسترونج"
},
{
"question": "كم عدد الأسنان في فم الإنسان العادي؟",
"answer": "32 سنا"
},
{
"question": "كم عدد أعين الذبابة؟",
"answer": "5 أعين"
},
{
"question": "كم عدد أرجل العنكبوت؟",
"answer": "ج4 - 8 أرجل"
},
{
"question": "س5 - ماذا يسمى بيت النمل؟",
"answer": "ج5 - قرية النمل"
},
{
"question": "س6 - كم عظمة توجد في جسم الإنسان؟",
"answer": "ج6 - 206 عظمات"
},
...
]
}
```
The file consists of a single object with one key, "questions," which contains an array of question-answer pairs. Each question-answer pair is represented as an object with two keys: "question" and "answer".
## Usage
- Question-Answer Retrieval: Parse the JSON file and access the question-answer pairs programmatically to retrieve specific questions and their corresponding answers.
- Language Learning: Utilize the question-answer pairs to develop language learning applications or quizzes where users can practice answering questions in Arabic.
- Chatbot Integration: Integrate the JSON file with a chatbot system to provide automated responses based on the questions and answers available.
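A minimal sketch of the question-answer retrieval described above (the file name is an assumption):
```python
import json

with open("questions.json", encoding="utf-8") as f:  # file name assumed
    data = json.load(f)

for pair in data["questions"][:3]:
    print(pair["question"], "->", pair["answer"])
```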
Feel free to modify the JSON file by adding more question-answer pairs or use it as a reference to create your own question-answer datasets.
## Contributing
If you have additional questions and answers that you would like to contribute to this JSON file, please feel free to submit a pull request. Your contributions are greatly appreciated!
|
SIA86/TechnicalSupportCalls | 2023-07-27T07:23:19.000Z | [
"task_categories:text-classification",
"size_categories:n<1K",
"language:ru",
"license:openrail",
"technical_support",
"region:us"
] | SIA86 | null | null | null | 0 | 3 | ---
license: openrail
task_categories:
- text-classification
language:
- ru
tags:
- technical_support
pretty_name: TSC
size_categories:
- n<1K
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': -Не работает почта ЕПС(Единая почтовая система(str.mos.ru)
'1': -Отремонтировать, настроить МФУ(многофункциональное устройство).Устранить замятие, проверить подключение МФУ, плоттер или сканер
'2': -Настроить почту на мобильном(настроить почту на смартфоне)
'3': -Проблема с почтой(Переполнен ящик, не отправляются или не приходят письма)
'4': -Заменить картридж(Заменим расходные материалы)
'5': -Установка Программного обеспечения(Установить программное обеспечение)
'6': -Сдать оборудование(Заберём оборудование на склад)
'7': -Настроить электронную подпись(Поможем установить сертификат ЭП)
'8': -Переместить или настроить рабочее место или оборудование(Переместить Автоматизированное рабочее место или оборудование, настроить для нового сотрудника)
'9': -Не работает телефон(Не включается или не получается дозвониться)
'10': -Не работает компьютер(Не включается системный блок, монитор, клавиатура, мышь или принтер)
'11': -Создание внутренней учетной записи(Для нового сотрудника), создать ящик(запросить создание почтового ящика, общего почтового ящика и др), разблокировать учетную запись
'12': -Разблокировать доступ в Автоматизированную Систему МГГТ(Разблокировать Доступ к базе)
'13': -Получить, восстановить доступ в МОСЭДО(Московский электронный документооборот)
'14': -Восстановить пароль в СДО(Система документооборота)
'15': -Доступ к ИС(информационной системе).Доступ к БД Oracle(Базе данных Oracle), в АС Договор, АС Архив, АС Кадры и так далее. АС(Автоматизированная система)
'16': -Доступ к отчетам(Discover, Power BI, Oracle и другое)
'17': -Доступ к файловым ресурсам(Например к папке на диске X)
'18': -Доступ в СДО(Система документооборота)
'19': -Чтение/запись CD/DVD Дисков
'20': -Доступ в Интернет
'21': -Доступ к disk.mggt.ru
'22': -Доступ в VDI(виртуальный рабочий стол)
'23': -Удаленный доступ
'24': -Доступ в Комнату хранения(Добавить или исключить из списка)
'25': -Доступ в помещение(Добавить или исключить из списка)
'26': -Сообщить об инциденте(Незапланированное прерывание ИТ-услуги или снижение качества ИТ-услуги)
'27': -Запрос на обслуживание(Запрос пользователя на информацию, консультацию, на стандартное изменение или доступ к ИТ-услуге)
'28': -Запрос на оборудование
'29': -Не работает пропуск(Продлить, заказать пропуск и так далее)
'30': -Подать данные полевых бригад
'31': -Запрос на тестирование
'32': -Вопрос по работе:Генплан, Техпаспорт и так далее
'33': -Загрузка в АСУ ОДС(Автоматизированная система объединенной диспетчерской службы)
'34': -САПР МГГТ(Система автоматизированного проектирования)
'35': -Проблема с модулем согласования
'36': -Другие запросы
'37': -Проблемы с АС Договор(архив, кадры документ или базой данных)
--- |
SKT27182/Preprocessed_OpenOrca | 2023-07-25T03:56:32.000Z | [
"task_categories:text-classification",
"task_categories:conversational",
"language:en",
"license:mit",
"arxiv:2306.02707",
"arxiv:2301.13688",
"region:us"
] | SKT27182 | null | null | null | 0 | 3 | ---
language:
- en
license: mit
task_categories:
- text-classification
- conversational
dataset_info:
features:
- name: id
dtype: string
- name: system_prompt
dtype: string
- name: question
dtype: string
- name: response
dtype: string
- name: length_before_preprocessing
dtype: int64
splits:
- name: train
num_bytes: 3671168412.416216
num_examples: 2872771
- name: test
num_bytes: 458896850.2513517
num_examples: 359097
- name: validation
num_bytes: 458895572.3324322
num_examples: 359096
download_size: 2553683923
dataset_size: 4588960835.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
- split: validation
path: data/validation-*
---
# Dataset Card for Dataset Name
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
### Languages
The language of the dataset is mostly English.
## Dataset Structure
### Data Fields
The fields are:
- 'id', a unique numbered identifier which includes one of 'niv', 't0', 'cot', or 'flan' to represent which source FLAN Collection submix the 'question' is sourced from.
- 'system_prompt', representing the System Prompt presented to the GPT-3.5 or GPT-4 API for the datapoint
- 'question', representing a question entry as provided by the FLAN Collection
- 'response', a response to that question received from a query to either GPT-3.5 or GPT-4.
### Data Splits
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
Dataset is collected from huggingface's Open-Orca/OpenOrca.
## Additional Information
### Dataset Curators
This dataset is taken from `Open-Orca/OpenOrca` with its prompts modified. The overall length of `prompt` + `question`
was made less than 512 tokens so that it can be fed to most models whose maximum input length is 512.
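A rough sketch of the kind of length check described (which tokenizer was actually used is not stated, so the one below is an assumption):
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")  # assumed; any 512-token model

def fits_512(system_prompt: str, question: str) -> bool:
    """Check that system prompt + question fit within a 512-token input budget."""
    ids = tokenizer(system_prompt + " " + question, add_special_tokens=True)["input_ids"]
    return len(ids) <= 512
```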
# Citation
```bibtex
@misc{OpenOrca,
title = {OpenOrca: An Open Dataset of GPT Augmented FLAN Reasoning Traces},
author = {Wing Lian and Bleys Goodson and Eugene Pentland and Austin Cook and Chanvichet Vong and "Teknium"},
year = {2023},
publisher = {HuggingFace},
journal = {HuggingFace repository},
  howpublished = {\url{https://huggingface.co/Open-Orca/OpenOrca}},
}
```
```bibtex
@misc{mukherjee2023orca,
title={Orca: Progressive Learning from Complex Explanation Traces of GPT-4},
author={Subhabrata Mukherjee and Arindam Mitra and Ganesh Jawahar and Sahaj Agarwal and Hamid Palangi and Ahmed Awadallah},
year={2023},
eprint={2306.02707},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
```bibtex
@misc{longpre2023flan,
title={The Flan Collection: Designing Data and Methods for Effective Instruction Tuning},
author={Shayne Longpre and Le Hou and Tu Vu and Albert Webson and Hyung Won Chung and Yi Tay and Denny Zhou and Quoc V. Le and Barret Zoph and Jason Wei and Adam Roberts},
year={2023},
eprint={2301.13688},
archivePrefix={arXiv},
primaryClass={cs.AI}
}
```
```bibtex
@software{touvron2023llama,
title={LLaMA: Open and Efficient Foundation Language Models},
author={Touvron, Hugo and Lavril, Thibaut and Izacard, Gautier and Martinet, Xavier and Lachaux, Marie-Anne and Lacroix, Timoth{\'e}e and Rozi{\`e}re, Baptiste and Goyal, Naman and Hambro, Eric and Azhar, Faisal and Rodriguez, Aurelien and Joulin, Armand and Grave, Edouard and Lample, Guillaume},
journal={arXiv preprint arXiv:2302.13971},
year={2023}
}
``` |
Corianas/EnglishGrader | 2023-07-18T00:23:30.000Z | [
"task_categories:text-classification",
"language:en",
"license:apache-2.0",
"region:us"
] | Corianas | null | null | null | 1 | 3 | ---
license: apache-2.0
task_categories:
- text-classification
language:
- en
---
This is inspired by the classifier from "Textbooks Are All You Need".
GPT-4 was asked to rank samples on a scale of 0-4 with the following prompt:
You are a harsh English teacher, please determine the educational value of the following text for a student whose goal is to learn simple English with a single number from 0-4.
The numbers mean:
0 - No value
1 - low quality English
2 - medium quality English
3 - High quality english
4 - Perfect English
(The word "harsh" was not included in all of the samples taken, and the grading should be re-run with it.) |
dash8x/dv-presidential-speech | 2023-07-19T01:24:44.000Z | [
"task_categories:automatic-speech-recognition",
"task_categories:text-to-speech",
"size_categories:1K<n<10K",
"language:dv",
"license:apache-2.0",
"audio",
"dhivehi",
"yag",
"speech",
"president",
"political",
"region:us"
] | dash8x | Dhivehi Presidential Speech is a Dhivehi speech dataset created from data extracted and
processed by [Sofwath](https://github.com/Sofwath) as part of a collection of Dhivehi
datasets found [here](https://github.com/Sofwath/DhivehiDatasets).
The dataset contains around 2.5 hrs (1 GB) of speech collected from Maldives President's Office
consisting of 7 speeches given by President Yaameen Abdhul Gayyoom. | @misc{Sofwath_2023,
title = "Dhivehi Presidential Speech Dataset",
url = "https://huggingface.co/datasets/dash8x/presidential_speech",
journal = "Hugging Face",
author = "Sofwath",
year = "2018",
month = jul
} | null | 0 | 3 | ---
license: apache-2.0
task_categories:
- automatic-speech-recognition
- text-to-speech
language:
- dv
tags:
- audio
- dhivehi
- yag
- speech
- president
- political
size_categories:
- 1K<n<10K
---
# Dataset Card for Dhivehi Presidential Speech 1.0
### Dataset Summary
Dhivehi Presidential Speech is a Dhivehi speech dataset created from data extracted and processed by [Sofwath](https://github.com/Sofwath) as part of a collection of Dhivehi datasets found [here](https://github.com/Sofwath/DhivehiDatasets).
The dataset contains around 2.5 hrs (1 GB) of speech collected from Maldives President's Office consisting of 7 speeches given by President Yaameen Abdhul Gayyoom.
### Supported Tasks and Leaderboards
- Automatic Speech Recognition
- Text-to-Speech
### Languages
Dhivehi
## Dataset Structure
### Data Instances
A typical data point comprises the path to the audio file and its sentence.
```json
{
'path': 'dv-presidential-speech-train/waves/YAG2_77.wav',
'sentence': 'އަދި އަޅުގަނޑުމެންގެ ސަރަޙައްދުގައިވެސް މިކަހަލަ ބޭބޭފުޅުން',
'audio': {
'path': 'dv-presidential-speech-train/waves/YAG2_77.wav',
'array': array([-0.00048828, -0.00018311, -0.00137329, ..., 0.00079346, 0.00091553, 0.00085449], dtype=float32),
'sampling_rate': 16000
},
}
```
### Data Fields
- path (string): The path to the audio file.
- sentence (string): The transcription for the audio file.
- audio (dict): A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column: dataset[0]["audio"] the audio file is automatically decoded and resampled to dataset.features["audio"].sampling_rate. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the "audio" column, i.e. dataset[0]["audio"] should always be preferred over dataset["audio"][0].
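A minimal loading sketch (the repository id is the one from this card; decoding and resampling behave as described above):
```python
from datasets import load_dataset

ds = load_dataset("dash8x/dv-presidential-speech", split="train")

sample = ds[0]             # query the sample index first, then access "audio"
audio = sample["audio"]    # decoded and resampled on access
print(sample["sentence"])
print(audio["sampling_rate"], audio["array"].shape)
```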
### Data Splits
The speech material has been subdivided into portions for train, test and validation. The test clips were generated from a speech not in the train split. For the validation split, there is a slight overlap of 1 speech in the train set.
| | Train | Validation | Test |
| ---------------- | -------- | ---------- | ----- |
| Speakers | 1 | 1 | 1 |
| Utterances | 1612 | 200 | 200 |
| Duration | 02:14:59 | 17:02 | 13:30 |
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
Extracted and processed by [Sofwath](https://github.com/Sofwath) as part of a collection of Dhivehi datasets found [here](https://github.com/Sofwath/DhivehiDatasets).
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] |
health360/Healix-V1 | 2023-07-19T15:16:02.000Z | [
"size_categories:100K<n<1M",
"language:en",
"license:odc-by",
"biology",
"medical",
"region:us"
] | health360 | null | null | null | 0 | 3 | ---
license: odc-by
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 427613608
num_examples: 796239
download_size: 213902701
dataset_size: 427613608
language:
- en
tags:
- biology
- medical
size_categories:
- 100K<n<1M
---
# Healix-V1 Dataset
## Description
Healix-V1 is a rich and diverse dataset consisting of 809k Question-Answer pairs within the medical domain. This dataset has been meticulously curated to fuel research initiatives in the areas of medical language understanding, medical dialogue systems, and knowledge extraction. Healix-V1 serves as a valuable resource for developing and improving machine learning models for healthcare applications, enabling them to understand and generate human-like responses in a medical context. The dataset follows the format used in Alpaca model fine-tuning:
```plaintext
### Input:
Question
### Response:
Answer
```
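A small sketch of splitting the `text` field back into a question and an answer, assuming every row follows the `### Input:` / `### Response:` template above exactly:
```python
from datasets import load_dataset

ds = load_dataset("health360/Healix-V1", split="train")

def parse_pair(text: str):
    # Assumes the "### Input:" / "### Response:" markers shown above.
    _, rest = text.split("### Input:", 1)
    question, answer = rest.split("### Response:", 1)
    return question.strip(), answer.strip()

question, answer = parse_pair(ds[0]["text"])
print(question)
print(answer)
```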
## Data Sources
The dataset has been compiled from a variety of valuable and authoritative sources, each contributing different kinds of medical question-answer pairs:
1. **Medical books**: 426,241 QA pairs - These pairs are derived from an array of reputable medical books. The questions were extracted and provided as prompts to GPT-3.5, which in turn generated the corresponding answers.
2. **[jianghc/medical_chatbot](URL)**: 46,867 QA pairs - This is a dataset derived from a medical chatbot project.
3. **The Medical Question and Answering dataset(MQuAD)**: 23,802 QA pairs - MQuAD is a medical dataset specifically designed for the task of question answering.
4. **PubMed**: 1,000 QA pairs - These are pairs extracted from the extensive library of medical articles on PubMed.
5. **GenMedGPT**: 5,000 QA pairs - Derived from the GenMedGPT project aimed at generating medical language.
6. **iCliniq**: 7,321 QA pairs - iCliniq is a platform where users ask health-related questions which are answered by certified doctors.
7. **HealthCareMagic**: 100,000 QA pairs - HealthCareMagic is an interactive health platform with a vast amount of user-generated medical QAs.
8. **medical_meadow_wikidoc**: 10,000 QA pairs - These pairs are extracted from WikiDoc, a free medical textbook.
9. **medical_meadow_wikidoc_medical_flashcards**: 33,955 QA pairs - Medical flashcards provide concise medical information in a Q&A format.
10. **MedQA-USMLE-4-options**: 10,178 QA pairs - These are QAs similar to the format of the USMLE exam for medical licensing in the U.S.
## Potential Applications
Healix-V1 can serve a multitude of purposes such as:
- Training AI models for medical chatbots
- Developing advanced search engines for medical databases
- Creating tutoring systems for medical students
- Enhancing automated patient assistance systems
- Helping in developing systems for medical examination preparation
## Data Length Distribution
- (0.0, 256.0]: 96.724181%
- (256.0, 512.0]: 2.903792%
- (512.0, 768.0]: 0.299476%
- (768.0, 1024.0]: 0.050675%
- (1024.0, 2048.0]: 0.018910%
## Metadata
- **License:** ODC-BY
- **Language:** English
- **Tags:** Biology, Medical
- **Size Categories:** 100K<n<1M
## Dataset Info
- **Features:**
- name: text
- dtype: string
- **Splits:**
- name: train
- num_bytes: 419605911
- num_examples: 798902
- **Download Size:** 209261302 bytes
- **Dataset Size:** 419605911 bytes |
ashercn97/OpenOrcaSmaller2 | 2023-07-19T20:20:50.000Z | [
"region:us"
] | ashercn97 | null | null | null | 0 | 3 | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 284383027
num_examples: 156291
download_size: 161343770
dataset_size: 284383027
---
# Dataset Card for "OpenOrcaSmaller2"
This is a small subset of the OpenOrca dataset from which all of the missing rows were removed and which was converted to an Alpaca format. I will hopefully use this to finetune a small model!
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
RyokoExtra/MissingKeys | 2023-10-10T15:31:32.000Z | [
"task_categories:text-classification",
"task_categories:text-generation",
"task_categories:text-to-image",
"task_categories:text-to-video",
"language:ja",
"license:apache-2.0",
"region:us"
] | RyokoExtra | null | null | null | 2 | 3 | ---
license: apache-2.0
task_categories:
- text-classification
- text-generation
- text-to-image
- text-to-video
language:
- ja
pretty_name: MissingKeys
---
# Dataset Card for MissingKeys
## Dataset Description
- **Homepage:** Here!
- **Repository:** N/A
- **Paper:** N/A
- **Leaderboard:** N/A
- **Point of Contact:** KaraKaraWitch
### Dataset Summary
MissingKeys is a raw dataset archive of the misskey.io network.
### Supported Tasks and Leaderboards
This dataset is primarily intended for unsupervised training of text generation models; however, it may be useful for other purposes.
- text-classification
- text-generation
### Languages
Primarily Japanese, although English is present as well.
## Dataset Structure
All of the data is stored in jsonl files that have been compressed into .7z archives by date.
### Data Instances
Here is a sample with all the potential fields:
```json
{
  "id": "9hh9iux6al",
  "createdAt": "2023-07-22T07:38:17.994Z",
  "userId": "9grv7htulz",
  "user": {
    "uid": "9grv7htulz#chikusa_nao@misskey.backspace.fm",
    "name": "千種ナオ(ばすキー)",
    "avatarUrl": "https://proxy.misskeyusercontent.com/avatar.webp?url=https%3A%2F%2Fs3.isk01.sakurastorage.jp%2Fbackspacekey%2Fmisskey%2Fca098593-5c2f-4488-8b82-18961149cf92.png&avatar=1",
    "avatarBlurhash": "eGD8ztEK0KVb-=4TtSXm-jf4B7Vs~CEND*Fy%2Mct7%Lx.M{xcS0bv",
    "states": "bot,nyaa~",
    "hostInfo": "misskey@13.13.2#e4d440",
    "emojis": {},
    "onlineStatus": "unknown"
  },
  "text": "パソコン工房などのユニットコム系列だと、マザボ売るときにドライバディスクがないと30%買取金額が下がるという知見を得た",
  "cw": null,
  "visibility": "public",
  "localOnly": false,
  "renoteCount": 0,
  "repliesCount": 0,
  "reactions": {},
  "reactionEmojis": {},
  "emojis": {},
  "fileIds": [],
  "files": [],
  "replyId": null,
  "renoteId": null,
  "uri": "https://misskey.backspace.fm/notes/9hh9iux6p7"
}
```
If a value is falsy in Python, it has been removed to save space.
`states` is a comma-separated string that includes either `bot` or `nyaa~` (indicates they enabled cat mode), or both.
### Data Fields
Refer to the sample above. I'll drop in some additional notes:
`uid` in `user` follows this specific format:
`user_id#username@user_host`
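A tiny sketch for unpacking that identifier, assuming the `#` and `@` separators never occur inside the id or username:
```python
def parse_uid(uid: str):
    # "user_id#username@user_host", e.g. "9grv7htulz#chikusa_nao@misskey.backspace.fm"
    user_id, rest = uid.split("#", 1)
    username, user_host = rest.rsplit("@", 1)
    return user_id, username, user_host

print(parse_uid("9grv7htulz#chikusa_nao@misskey.backspace.fm"))
```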
### Data Splits
Each jsonl file is split at 100000 notes.
## Dataset Creation
### Curation Rationale
Because we need an SNS dataset, and since Twitter appears to be quite reluctant, we went for an alternative.
### Source Data
#### Initial Data Collection and Normalization
None. No normalization is performed, as this is a raw dump of the dataset. However, we have removed empty and null fields to conserve space.
#### Who are the source language producers?
The related users of misskey.io network.
### Annotations
#### Annotation process
No Annotations are present.
#### Who are the annotators?
No human annotators.
### Personal and Sensitive Information
We are certain there is no PII included in the dataset.
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
Misskey.io tends to be NSFW for images and is focused on Japanese culture.
### Other Known Limitations
N/A
## Additional Information
### Dataset Curators
KaraKaraWitch
### Licensing Information
Apache 2.0, for all parts of which KaraKaraWitch may be considered authors. All other material is distributed under fair use principles.
Ronsor Labs additionally is allowed to relicense the dataset as long as it has gone through processing.
### Citation Information
```
@misc{missingkeys,
title = {MissingKeys: A SNS dataset on misskey.io network},
author = {KaraKaraWitch},
year = {2023},
howpublished = {\url{https://huggingface.co/datasets/RyokoExtra/MissingKeys}},
}
```
### Name Etymology
N/A
### Contributions
- [@KaraKaraWitch (Twitter)](https://twitter.com/KaraKaraWitch) for gathering this dataset. |
AlekseyKorshuk/synthetic-romantic-characters | 2023-07-20T00:23:35.000Z | [
"region:us"
] | AlekseyKorshuk | null | null | null | 0 | 3 | ---
dataset_info:
features:
- name: name
dtype: string
- name: categories
sequence: string
- name: personalities
sequence: string
- name: description
dtype: string
- name: conversation
list:
- name: content
dtype: string
- name: role
dtype: string
splits:
- name: train
num_bytes: 14989220
num_examples: 5744
download_size: 7896899
dataset_size: 14989220
---
# Dataset Card for "synthetic-romantic-characters"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
AlekseyKorshuk/synthetic-fight-characters | 2023-07-21T20:41:01.000Z | [
"region:us"
] | AlekseyKorshuk | null | null | null | 1 | 3 | ---
dataset_info:
features:
- name: name
dtype: string
- name: categories
sequence: string
- name: personalities
sequence: string
- name: description
dtype: string
- name: conversation
list:
- name: content
dtype: string
- name: role
dtype: string
splits:
- name: train
num_bytes: 20660420
num_examples: 8053
download_size: 11571373
dataset_size: 20660420
---
# Dataset Card for "synthetic-fight-characters"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
atoomic/emoticonnect-sample | 2023-07-20T21:28:15.000Z | [
"task_categories:text-classification",
"language:fr",
"license:artistic-2.0",
"region:us"
] | atoomic | null | null | null | 0 | 3 | ---
license: artistic-2.0
task_categories:
- text-classification
language:
- fr
---
# Description
The data uses the `.jsonl` format (each line is a self-contained JSON object and can be parsed on its own).
Each row contains a text indexed by the key `content` and some ratings split into groups:
* csp
* feeling
* gen
* persona
* sex
At this stage only the `feeling` group is filled.
Note: for now, all vectors are filled with the value `0` when missing.
This could change over time to save some space.
## Sample
Row example (pretty)
```json
{
"content": "...some text...",
"metadata":
{
"lng": "fr"
},
"rating":
{
},
"ratings":
{
"csp":
{
"c1": 0,
"c2": 0,
"c3": 0,
"c4": 0,
"c5": 0,
"c6": 0,
"c7": 0,
"c8": 0
},
"feeling":
{
"f1": 0,
"f2": 100,
"f3": 0,
"f4": 0,
"f5": 0,
"f6": 0,
"f7": 0,
"f8": 0
},
"gen":
{
"g1": 0,
"g2": 0,
"g3": 0,
"g4": 0
},
"persona":
{
"p1": 0,
"p2": 0,
"p3": 0,
"p4": 0,
"p5": 0,
"p6": 0,
"p7": 0,
"p8": 0
},
"sex":
{
"s1": 0,
"s2": 0
}
}
}
```
Note: more than one field can be set for a group
```json
{
"content": "...some text...",
"metadata":
{
"lng": "fr"
},
"rating":
{
},
"ratings":
{
"csp":
{
"c1": 0,
"c2": 0,
"c3": 0,
"c4": 0,
"c5": 0,
"c6": 0,
"c7": 0,
"c8": 0
},
"feeling":
{
"f1": 0,
"f2": 0,
"f3": 0,
"f4": 0,
"f5": 33.33,
"f6": 66.67,
"f7": 0,
"f8": 0
},
"gen":
{
"g1": 0,
"g2": 0,
"g3": 0,
"g4": 0
},
"persona":
{
"p1": 0,
"p2": 0,
"p3": 0,
"p4": 0,
"p5": 0,
"p6": 0,
"p7": 0,
"p8": 0
},
"sex":
{
"s1": 0,
"s2": 0
}
}
}
```
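Since every line is self-contained JSON, a minimal reader for pulling out the `feeling` vector might look like this (the file name is a placeholder for one of the files in this repository):
```python
import json

# "data.jsonl" is a placeholder path.
with open("data.jsonl", encoding="utf-8") as f:
    for line in f:
        row = json.loads(line)
        feeling = row["ratings"]["feeling"]  # e.g. {"f1": 0, "f2": 100, ...}
        print(row["content"][:40], feeling)
```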
|
AlekseyKorshuk/synthetic-friendly-characters | 2023-07-20T05:27:43.000Z | [
"region:us"
] | AlekseyKorshuk | null | null | null | 1 | 3 | ---
dataset_info:
features:
- name: name
dtype: string
- name: categories
sequence: string
- name: personalities
sequence: string
- name: description
dtype: string
- name: conversation
list:
- name: content
dtype: string
- name: role
dtype: string
splits:
- name: train
num_bytes: 10379252
num_examples: 3871
download_size: 5610826
dataset_size: 10379252
---
# Dataset Card for "synthetic-friendly-characters"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
leonvanbokhorst/fire-havoc-philips-lac-eindhoven | 2023-07-20T12:05:36.000Z | [
"task_categories:unconditional-image-generation",
"size_categories:1K<n<10K",
"language:en",
"license:creativeml-openrail-m",
"fire",
"havoc",
"eindhoven",
"stable diffusion",
"fine-tuning",
"region:us"
] | leonvanbokhorst | null | null | null | 1 | 3 | ---
license: creativeml-openrail-m
tags:
- fire
- havoc
- eindhoven
- stable diffusion
- fine-tuning
pretty_name: Havoc after the Fire at Philips LAC Eindhoven
size_categories:
- 1K<n<10K
task_categories:
- unconditional-image-generation
language:
- en
---
# Image Dataset Havoc after the Fire at Philips LAC Eindhoven
## Dataset Description
A large fire broke out in the center of Eindhoven on May 14th, 2023. The old Philips Lighting Application Centre was engulfed in flames, resulting in massive smoke clouds. Over a hundred firefighters were deployed, and there was significant disruption in the city center. This dataset contains images of the remains of the building two months later. The footage was taken on July 19, 2023.

## Dataset Structure
The dataset consists of 1167 images depicting the aftermath of the fire havoc. It is primarily designed for fine-tuning or training a Stable Diffusion model, although it can be used for other purposes as well. Each original image is divided into five cropped versions with between 2 and 8 additional random detail crops. Approximately 30 percent of the images are flipped horizontally. All images in the dataset have been resized to either 1024 x 1024, 768 x 1024, or 1024 x 768 resolution.
| Description | Value |
|---------------------------------------------------------|----------------------|
| Number of Images | 1167 |
| Purpose | Fine-tuning / Training Stable Diffusion model |
| Image Processing | Original image five-cropped (all corners and center) with added 1-8 random detail crops per original |
| Flipped Images | Approximately 30% |
| Resolutions | Hand picked 1024x1024, 768x1024, 1024x768 | |
RomanCast/WikiSpell_custom | 2023-07-25T12:59:58.000Z | [
"task_categories:text-generation",
"size_categories:1K<n<10K",
"language:en",
"license:mit",
"arxiv:2212.10562",
"region:us"
] | RomanCast | null | null | null | 0 | 3 | ---
license: mit
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 129624
num_examples: 10000
- name: validation_top1
num_bytes: 10754
num_examples: 1000
- name: test_top1
num_bytes: 10948
num_examples: 1000
- name: validation_1_10
num_bytes: 11618
num_examples: 1000
- name: test_1_10
num_bytes: 11692
num_examples: 1000
- name: validation_10_20
num_bytes: 13401
num_examples: 1000
- name: test_10_20
num_bytes: 13450
num_examples: 1000
- name: validation_20_30
num_bytes: 15112
num_examples: 1000
- name: test_20_30
num_bytes: 15069
num_examples: 1000
- name: validation_bottom50
num_bytes: 15204
num_examples: 1000
- name: test_bottom50
num_bytes: 15076
num_examples: 1000
download_size: 241234
dataset_size: 261948
language:
- en
viewer: true
task_categories:
- text-generation
size_categories:
- 1K<n<10K
---
# WikiSpell
## Description
This dataset is a **custom implementation** of the WikiSpell dataset introduced in [Character-Aware Models Improve Visual Text Rendering](https://arxiv.org/pdf/2212.10562.pdf) by Liu et al. (2022).
Similarly to the original WikiSpell dataset, the training set is composed of 5000 words taken uniformly from the 50% least common Wiktionary words (taken from [this Wiktionary extraction](https://kaikki.org/dictionary/rawdata.html)), and 5000 words sampled according to their frequencies taken from the 50% most common Wiktionary words.
The validation and test sets are split into 5 subsets, sampled depending on word frequency in the corpus:
- 1% most common words
- 1 - 10% most common words
- 10 - 20% most common words
- 20 - 30% most common words
- 50% least common words
Contrary to the original WikiSpell dataset, we compute the frequency of the words using the first 100k sentences from OpenWebText ([Skylion007/openwebtext](https://huggingface.co/datasets/Skylion007/openwebtext)) instead of mC4.
## Usage
This dataset is used for testing spelling in Large Language Models. To do so, the labels should be computed like in the following snippet:
```python
from datasets import load_dataset

ds = load_dataset("RomanCast/WikiSpell_custom")
sample = ds["train"][0]
label = " ".join(sample["text"])  # the label is the word spelled out character by character
```
**The labels are not included in the dataset files directly.**
## Citation
Please cite the original paper introducing WikiSpell if you're using this dataset:
```
@inproceedings{liu-etal-2023-character,
title = "Character-Aware Models Improve Visual Text Rendering",
author = "Liu, Rosanne and
Garrette, Dan and
Saharia, Chitwan and
Chan, William and
Roberts, Adam and
Narang, Sharan and
Blok, Irina and
Mical, Rj and
Norouzi, Mohammad and
Constant, Noah",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-long.900",
pages = "16270--16297",
}
``` |
techiaith/legislation-gov-uk_en-cy | 2023-08-12T13:24:57.000Z | [
"task_categories:translation",
"task_categories:text-classification",
"task_categories:summarization",
"task_categories:sentence-similarity",
"size_categories:100M<n<1B",
"language:en",
"language:cy",
"license:other",
"region:us"
] | techiaith | null | null | null | 1 | 3 | ---
license: other
task_categories:
- translation
- text-classification
- summarization
- sentence-similarity
language:
- en
- cy
pretty_name: UK Government Legislation
size_categories:
- 100M<n<1B
---
# Dataset Card for legislation-gov-uk-en-cy
## Dataset Description
- **Homepage:** https://github.com/techiaith/legislation-gov-uk_dataset
- **Repository:** https://github.com/techiaith/legislation-gov-uk_dataset
- **Point of Contact:** techiaith@bangor.ac.uk
### Dataset Summary
This dataset consists of English-Welsh sentence pairs obtained via scraping the www.legislation.gov.uk website.
The total dataset is approximately 170 MB in size.
### Supported Tasks and Leaderboards
- translation
- text-classification
- summarization
- sentence-similarity
### Languages
- English
- Welsh
## Dataset Structure
### Data Fields
- source
- target
### Data Splits
- train
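A minimal loading sketch, assuming the repository id from this card and the field names listed above (which side is English and which is Welsh is not stated, so inspect a row to check):
```python
from datasets import load_dataset

ds = load_dataset("techiaith/legislation-gov-uk_en-cy", split="train")
print(ds[0]["source"])
print(ds[0]["target"])
```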
## Dataset Creation
English-Welsh sentence pairs were obtained by scraping the www.legislation.gov.uk website and then cleaning the data using an internal processing pipeline.
### Source Data
#### Initial Data Collection and Normalization
Sentences were dropped from the original scraped sources in the following cases:
- sentence contained too many misspelt words
- sentence length similarity variance too great.
#### Who are the source language producers?
The language data, including source and target language data, is derived from UK legislation. See [here](https://www.legislation.gov.uk/aboutus) for information.
### Licensing Information
This dataset's source data is Crown copyright and is licensed under the [Open Government License](https://www.nationalarchives.gov.uk/doc/open-government-licence/version/3/). |
techiaith/cofnodycynulliad_en-cy | 2023-08-14T10:56:17.000Z | [
"task_categories:translation",
"task_categories:text-classification",
"task_categories:summarization",
"task_categories:sentence-similarity",
"size_categories:100K<n<1M",
"language:en",
"language:cy",
"license:other",
"region:us"
] | techiaith | null | null | null | 1 | 3 | ---
license: other
task_categories:
- translation
- text-classification
- summarization
- sentence-similarity
language:
- en
- cy
pretty_name: Cofnod Y Cynulliad en-cy
size_categories:
- 100K<n<1M
---
# Dataset Card for cofnodycynulliad_en-cy
## Dataset Description
- **Homepage:** https://github.com/techiaith/cofnod-y-cynulliad_dataset
- **Repository:** https://github.com/techiaith/cofnod-y-cynulliad_dataset.git
- **Point of Contact:** techiaith@bangor.ac.uk
### Dataset Summary
This dataset consists of English-Welsh sentence pairs obtained by parsing the data provided from the [Welsh Parliament](https://cofnod.senedd.cymru/) website.
### Supported Tasks and Leaderboards
- translation
- text classification
- sentence similarity
### Languages
- English
- Welsh
## Dataset Structure
### Data Fields
- source
- target
### Data Splits
- train
## Dataset Creation
The dataset was created via an internal pipeline employing DVC and Python.
### Source Data
#### Initial Data Collection and Normalization
Sentences were dropped from the original scraped sources in the following cases:
- sentence contained too many misspelt words
- sentence length similarity variance too great.
#### Who are the source language producers?
The language data, including source and target language data, is derived from transcripts of the proceedings of the Senedd's Plenary meetings and their translations.
See [here](https://cofnod.senedd.cymru) for information.
### Licensing Information
This dataset's source data is Crown copyright and is licensed under the [Open Government License](https://www.nationalarchives.gov.uk/doc/open-government-licence/version/3/). |
yashgoenka/gorilla-16k | 2023-08-06T03:18:33.000Z | [
"task_categories:text-generation",
"size_categories:10K<n<100K",
"license:apache-2.0",
"api",
"region:us"
] | yashgoenka | null | null | null | 2 | 3 | ---
license: apache-2.0
task_categories:
- text-generation
tags:
- api
pretty_name: j
size_categories:
- 10K<n<100K
---
# Training Dataset for Gorilla
<!-- Provide a quick summary of what the model is/does. -->
Gorilla's self-instruct training datasets for the Hugging Face, Torch Hub, and TensorFlow Hub APIs.
Source: https://gorilla.cs.berkeley.edu/ |
samchain/BIS_Speeches_97_23 | 2023-07-23T15:12:41.000Z | [
"task_categories:text-classification",
"task_categories:token-classification",
"size_categories:100K<n<1M",
"language:en",
"license:apache-2.0",
"economics",
"finance",
"business",
"region:us"
] | samchain | null | null | null | 0 | 3 | ---
dataset_info:
features:
- name: sequenceA
dtype: string
- name: sequenceB
dtype: string
- name: next_sentence_label
dtype: int64
splits:
- name: train
num_bytes: 505762257.6721524
num_examples: 773395
- name: test
num_bytes: 89252509.32784761
num_examples: 136482
download_size: 365034957
dataset_size: 595014767
license: apache-2.0
task_categories:
- text-classification
- token-classification
language:
- en
tags:
- economics
- finance
- business
size_categories:
- 100K<n<1M
---
# Dataset Card for "BIS_Speeches_97_23"
This dataset is built from speeches scraped from the Bank for International Settlements (BIS) website thanks to this repo: https://github.com/HanssonMagnus/scrape_bis. The dataset is made of 12k speeches from 1997 to 2023.
Each pair is built from sentences extracted from the speeches: if B follows A, then the 'next_sentence_label' is 1; otherwise it is 0.
Negative pairs are built by choosing a sentence from another speech at random. |
garyzsu/custom_gym_dataset | 2023-07-23T17:52:56.000Z | [
"region:us"
] | garyzsu | null | null | null | 0 | 3 | |
FreedomIntelligence/MMLU_Japanese | 2023-08-06T08:06:24.000Z | [
"language:ja",
"license:mit",
"region:us"
] | FreedomIntelligence | null | null | null | 0 | 3 | ---
license: mit
language:
- ja
---
Japanese version of the MMLU dataset, translated by gpt-3.5-turbo.
The dataset is used in the research related to [MultilingualSIFT](https://github.com/FreedomIntelligence/MultilingualSIFT). |
dinhquangson/FUNSD_RE | 2023-07-26T07:38:07.000Z | [
"task_categories:token-classification",
"license:mit",
"region:us"
] | dinhquangson | null | null | null | 0 | 3 | ---
license: mit
task_categories:
- token-classification
--- |
Falah/2000000_Style_art_prompts | 2023-07-30T07:11:12.000Z | [
"task_categories:text-classification",
"size_categories:1M<n<10M",
"language:en",
"license:apache-2.0",
"region:us"
] | Falah | null | null | null | 0 | 3 | ---
dataset_info:
features:
- name: prompt
dtype: string
splits:
- name: train
num_bytes: 1359241055
num_examples: 2000000
download_size: 151291961
dataset_size: 1359241055
license: apache-2.0
task_categories:
- text-classification
language:
- en
pretty_name: 2M style art diffusion
size_categories:
- 1M<n<10M
---
# Mythical Creatures Art Style Prompts (2M Prompts)
## Dataset Information
- Dataset Name: Mythical Creatures Art Style Prompts (2M Prompts)
- Description: This custom dataset contains a collection of art-style prompts centered around mythical creatures, aimed at inspiring creativity and generating unique artistic expressions. The prompts are designed to stimulate artists' imagination and encourage them to create stunning and imaginative artworks depicting various mythical creatures in different artistic styles.
- Features:
- prompt (string): The art style prompt to stimulate creative ideas for artistic expression.
## Dataset Splits
- Train Split:
- Number of Examples: 2,000,000
- Size: 1.36 GB (1,359,241,055 bytes)
## Dataset Size
- Total Dataset Size: 1.36 GB (1,359,241,055 bytes)
- Download Size: 151.29 MB (151,291,961 bytes)
## Dataset License
- This dataset is released under the Creative Commons Attribution 4.0 International License (CC BY 4.0). You are free to share and adapt the dataset for any purpose, even commercially, provided you give appropriate credit to the dataset creator, Falah G. Salieh.
## Citation
- If you use this dataset in your research or project, please cite it as follows:
```
@misc{mythical_creatures_art_style_prompts_generation_dataset,
author = {Falah G. Salieh},
title = {Mythical_Creatures_Art_Style_Prompts_Generation_Dataset},
year = {2023},
publisher = {Huggingface},
version = {1.0},
published = {\url{https://huggingface.co/datasets/Falah/2000000_Style_art_prompts}},
}
```
## Dataset Creation
- The Art_Style_Prompts_Generation_Dataset was curated and created by Falah G. Salieh. The prompts were carefully crafted to cover a diverse range of artistic styles, themes, and concepts, making it suitable for generating art with various creative visions.
## Application
- The Art_Style_Prompts_Generation_Dataset can be used for various applications, including:
- Artistic style prompt generation for AI-powered creative tools
- Training and evaluating machine learning models for art generation
- Exploring and analyzing patterns and trends in different artistic expressions
## Usage Examples with Stable diffusion SDXL0.9

-------------------------------


## Acknowledgements
- We acknowledge the valuable contributions of artists and creators whose inspiring works served as a basis for crafting the art style prompts in this dataset.
### Usage example
```python
from datasets import load_dataset
#Load the dataset
dataset = load_dataset("Falah/2000000_Style_art_prompts")
```
## Note
- The prompts in this dataset are designed for creative purposes and do not represent real-world scenarios or factual information.
- Users are encouraged to respect the Creative Commons license and give appropriate credit when using the dataset for their projects or research. |
Myashka/SO-Python_basics_QA-filtered-2023-tanh_score | 2023-07-25T11:03:13.000Z | [
"language:en",
"license:mit",
"region:us"
] | Myashka | null | null | null | 0 | 3 | ---
license: mit
language:
- en
---
Stack Overflow (SO) dataset of python tag data from the "Python Basics and Environment" subcategory.
Question filters:
- images
- links
- code blocks
- Q_Score > 0
- Answer_count > 0
Answers filters:
- images
- links
- code blocks
Scores are obtained by applying tanh to the original SO answers' scores after scaling them with AbsMaxScaler over the IQR range. |
badokorach/q_a | 2023-07-25T20:57:06.000Z | [
"region:us"
] | badokorach | null | null | null | 0 | 3 | Entry not found |
ArtifactAI/arxiv_research_code | 2023-07-26T19:13:22.000Z | [
"task_categories:text-generation",
"size_categories:10B<n<100B",
"language:en",
"license:bigscience-openrail-m",
"doi:10.57967/hf/0929",
"region:us"
] | ArtifactAI | null | null | null | 1 | 3 | ---
dataset_info:
features:
- name: repo
dtype: string
- name: file
dtype: string
- name: code
dtype: string
- name: file_length
dtype: int64
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: extension_type
dtype: string
splits:
- name: train
num_bytes: 63445188751
num_examples: 4716175
download_size: 21776760509
dataset_size: 63445188751
license: bigscience-openrail-m
task_categories:
- text-generation
language:
- en
pretty_name: arxiv_research_code
size_categories:
- 10B<n<100B
---
# Dataset Card for "ArtifactAI/arxiv_research_code"
## Dataset Description
https://huggingface.co/datasets/ArtifactAI/arxiv_research_code
### Dataset Summary
ArtifactAI/arxiv_research_code contains over 21.8GB of source code files referenced strictly in ArXiv papers. The dataset serves as a curated dataset for Code LLMs.
### How to use it
```python
from datasets import load_dataset
# full dataset (21.8GB of data)
ds = load_dataset("ArtifactAI/arxiv_research_code", split="train")
# dataset streaming (will only download the data as needed)
ds = load_dataset("ArtifactAI/arxiv_research_code", streaming=True, split="train")
for sample in iter(ds): print(sample["code"])
```
## Dataset Structure
### Data Instances
Each data instance corresponds to one file. The content of the file is in the `code` feature, and other features (`repo`, `file`, etc.) provide some metadata.
### Data Fields
- `repo` (string): code repository name.
- `file` (string): file path in the repository.
- `code` (string): code within the file.
- `file_length`: (integer): number of characters in the file.
- `avg_line_length`: (float): the average line-length of the file.
- `max_line_length`: (integer): the maximum line-length of the file.
- `extension_type`: (string): file extension.
### Data Splits
The dataset has no splits and all data is loaded as train split by default.
## Dataset Creation
### Source Data
#### Initial Data Collection and Normalization
34,099 active GitHub repository names were extracted from [ArXiv](https://arxiv.org/) papers from its inception through July 21st, 2023 totaling 773G of compressed github repositories.
These repositories were then filtered, and the code from each file was extracted into 4.7 million files.
#### Who are the source language producers?
The source (code) language producers are users of GitHub that created unique repository
### Personal and Sensitive Information
The released dataset may contain sensitive information such as emails, IP addresses, and API/ssh keys that have previously been published to public repositories on GitHub.
## Additional Information
### Dataset Curators
Matthew Kenney, Artifact AI, matt@artifactai.com
### Citation Information
```
@misc{arxiv_research_code,
title={arxiv_research_code},
author={Matthew Kenney},
year={2023}
}
``` |
jeffnyman/scifact | 2023-07-26T08:18:50.000Z | [
"language:en",
"license:cc-by-nc-2.0",
"region:us"
] | jeffnyman | SciFact
A dataset of expert-written scientific claims paired with evidence-containing
abstracts and annotated with labels and rationales. | @InProceedings{Wadden2020FactOF,
author = {David Wadden, Shanchuan Lin, Kyle Lo, Lucy Lu Wang,
Madeleine van Zuylen, Arman Cohan, Hannaneh Hajishirzi},
title = {Fact or Fiction: Verifying Scientific Claims},
booktitle = {EMNLP},
year = 2020,
} | null | 0 | 3 | ---
language:
- en
license:
- cc-by-nc-2.0
---
# Dataset Card for "scifact"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** [https://scifact.apps.allenai.org/](https://scifact.apps.allenai.org/)
- **Paper:** [Fact or Fiction: Verifying Scientific Claims](https://aclanthology.org/2020.emnlp-main.609/)
### Dataset Summary
SciFact.
This is a dataset of expert-written scientific claims paired with evidence-containing abstracts and annotated with labels and rationales.
## Dataset Structure
### Data Instances
#### claims
- **Size of downloaded dataset files:** 2.72 MB
- **Size of the generated dataset:** 0.25 MB
- **Total amount of disk used:** 2.97 MB
An example of 'validation' looks as follows.
```
{
"cited_doc_ids": [14717500],
"claim": "1,000 genomes project enables mapping of genetic sequence variation consisting of rare variants with larger penetrance effects than common variants.",
"evidence_doc_id": "14717500",
"evidence_label": "SUPPORT",
"evidence_sentences": [2, 5],
"id": 3
}
```
#### corpus
- **Size of downloaded dataset files:** 2.72 MB
- **Size of the generated dataset:** 7.63 MB
- **Total amount of disk used:** 10.35 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"abstract": "[\"Alterations of the architecture of cerebral white matter in the developing human brain can affect cortical development and res...",
"doc_id": 4983,
"structured": false,
"title": "Microstructural development of human newborn cerebral white matter assessed in vivo by diffusion tensor magnetic resonance imaging."
}
```
### Data Fields
The data fields are the same among all splits.
#### claims
- `id`: a `int32` feature.
- `claim`: a `string` feature.
- `evidence_doc_id`: a `string` feature.
- `evidence_label`: a `string` feature.
- `evidence_sentences`: a `list` of `int32` features.
- `cited_doc_ids`: a `list` of `int32` features.
#### corpus
- `doc_id`: a `int32` feature.
- `title`: a `string` feature.
- `abstract`: a `list` of `string` features.
- `structured`: a `bool` feature.
### Data Splits
#### claims
| |train|validation|test|
|------|----:|---------:|---:|
|claims| 1261| 450| 300|
#### corpus
| |train|
|------|----:|
|corpus| 5183|
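A loading sketch; the `claims` and `corpus` configuration names are an assumption based on the instance examples above:
```python
from datasets import load_dataset

claims = load_dataset("jeffnyman/scifact", "claims", split="validation")
corpus = load_dataset("jeffnyman/scifact", "corpus", split="train")

print(claims[0]["claim"], "->", claims[0]["evidence_label"])
print(corpus[0]["title"])
```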
## Additional Information
### Licensing Information
https://github.com/allenai/scifact/blob/master/LICENSE.md
The SciFact dataset is released under the [CC BY-NC 2.0](https://creativecommons.org/licenses/by-nc/2.0/). By using the SciFact data, you are agreeing to its usage terms.
### Citation Information
```
@inproceedings{wadden-etal-2020-fact,
title = "Fact or Fiction: Verifying Scientific Claims",
author = "Wadden, David and
Lin, Shanchuan and
Lo, Kyle and
Wang, Lucy Lu and
van Zuylen, Madeleine and
Cohan, Arman and
Hajishirzi, Hannaneh",
booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2020.emnlp-main.609",
doi = "10.18653/v1/2020.emnlp-main.609",
pages = "7534--7550",
}
```
|
hac541309/woori_spring_dict | 2023-08-15T11:00:14.000Z | [
"task_categories:table-question-answering",
"task_categories:text-generation",
"task_categories:text-classification",
"task_categories:question-answering",
"size_categories:1M<n<10M",
"language:ko",
"license:cc-by-sa-3.0",
"region:us"
] | hac541309 | null | null | null | 3 | 3 | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 514345294
num_examples: 1168853
download_size: 201093378
dataset_size: 514345294
license: cc-by-sa-3.0
task_categories:
- table-question-answering
- text-generation
- text-classification
- question-answering
language:
- ko
pretty_name: 우리말샘
size_categories:
- 1M<n<10M
---
# Dataset Card for "woori_spring_dict"
This dataset is an NLP-learnable form of [woori mal saem(우리말샘)](https://opendict.korean.go.kr/main), a Korean collaborative open source dictionary.
It follows the [original copyright policy (cc-by-sa-2.0)](https://opendict.korean.go.kr/service/copyrightPolicy)
This version is built from xls_20230602
[우리말샘](https://opendict.korean.go.kr/main)을 학습 가능한 형태로 처리한 데이터입니다.
[우리말샘](https://opendict.korean.go.kr/service/copyrightPolicy)의 저작권을 따릅니다.
xls_20230602으로부터 생성되었습니다.
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
hac541309/stdict_kor | 2023-07-26T12:01:59.000Z | [
"task_categories:table-question-answering",
"task_categories:text-generation",
"task_categories:text-classification",
"task_categories:question-answering",
"size_categories:1M<n<10M",
"language:ko",
"license:cc-by-sa-3.0",
"region:us"
] | hac541309 | null | null | null | 0 | 3 | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 205618385
num_examples: 434361
download_size: 72515975
dataset_size: 205618385
license: cc-by-sa-3.0
task_categories:
- table-question-answering
- text-generation
- text-classification
- question-answering
language:
- ko
pretty_name: 국립국어원 표준국어대사전
size_categories:
- 1M<n<10M
---
# Dataset Card for "Standard Korean Dictionary"
This dataset is an NLP-learnable form of the [Standard Dictionary from the National Institute of Korean Language (국립국어원 표준국어대사전)](https://stdict.korean.go.kr/).
It follows the [original copyright policy (cc-by-sa-2.0)](https://stdict.korean.go.kr/join/copyrightPolicy.do)
This version is built from xls_20230601
[국립국어원 표준 국어 대사전](https://stdict.korean.go.kr/)을 학습 가능한 형태로 처리한 데이터입니다.
[국립국어원 표준 국어 대사전](https://stdict.korean.go.kr/join/copyrightPolicy.do)의 저작권을 따릅니다.
xls_20230601으로부터 생성되었습니다. |
RyokoExtra/TvTroper-Cleaned | 2023-07-26T13:12:57.000Z | [
"task_categories:text-classification",
"task_categories:text-generation",
"size_categories:100K<n<1M",
"language:en",
"license:apache-2.0",
"training",
"text",
"region:us"
] | RyokoExtra | null | null | null | 1 | 3 | ---
license: apache-2.0
language:
- en
tags:
- training
- text
task_categories:
- text-classification
- text-generation
pretty_name: TvTroper Cleaned
size_categories:
- 100K<n<1M
---
# Dataset Card for TvTroper-Cleaned
*TvTroper-Cleaned is a cleaned dataset of TvTropes.org pages.*
## Dataset Description
- **Homepage:** (TODO)
- **Repository:** N/A
- **Paper:** N/A
- **Leaderboard:** N/A
- **Point of Contact:** KaraKaraWitch
### Dataset Summary
TvTroper-Cleaned is a cleaned dataset consisting of text from at most 651,522 wiki pages (excluding namespaces and date-grouped pages) from tvtropes.org.
### Supported Tasks and Leaderboards
This dataset is primarily intended for unsupervised training of text generation models; however, it may be useful for other purposes.
- text-classification
- text-generation
### Languages
- English
## Dataset Structure
All of the data is stored in jsonl files that have been split into chunks of 100,000 pages.
### Data Instances
```json
{"text":"<Title>\n\n<Article Content>","url":"https://tvtropes.org/<...>"}
```
### Data Fields
There are only 2 fields in each record: the URL and the retrieved content. The retrieved content may contain errors; if a page does not exist, the 404 error page is scraped.
URLs may not match the final URL from which the page was retrieved, as redirects may have been present while scraping.
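Given the `<Title>\n\n<Article Content>` layout described above, a rough way to split each record back into title and body (assuming the first blank line is the separator):
```python
import json

# "tvtroper_000.jsonl" is a placeholder name for one of the jsonl chunks.
with open("tvtroper_000.jsonl", encoding="utf-8") as f:
    for line in f:
        record = json.loads(line)
        title, _, body = record["text"].partition("\n\n")
        print(record["url"], "->", title)
```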
#### Q-Score Distribution
Not Applicable
### Data Splits
The jsonl files are split into chunks of 100,000 pages each.
## Dataset Creation
### Curation Rationale
We have curated TvTropes.org as it serves as one of the best resources for the common themes, narrative devices, and character archetypes that shape stories around the world.
### Source Data
#### Initial Data Collection and Normalization
None. No normalization is performed as this is a raw dump of the dataset.
#### Who are the source language producers?
The related editors/users of TvTropes.org
### Annotations
#### Annotation process
No Annotations are present.
#### Who are the annotators?
No human annotators.
### Personal and Sensitive Information
We are certain there is no PII included in the dataset.
## Considerations for Using the Data
### Social Impact of Dataset
This dataset is intended to be useful for anyone who wishes to train a model to generate "more entertaining" content.
It may also be useful for other languages depending on your language model.
### Discussion of Biases
This dataset contains mainly TV Tropes used in media.
### Other Known Limitations
N/A
## Additional Information
### Dataset Curators
KaraKaraWitch
### Licensing Information
Apache 2.0, for all parts of which KaraKaraWitch may be considered authors. All other material is distributed under fair use principles.
Ronsor Labs additionally is allowed to relicense the dataset as long as it has gone through processing.
### Citation Information
```
@misc{tvtroper-cleaned,
title = {TvTroper Cleaned: Tropes & Others.},
author = {KaraKaraWitch},
year = {2023},
howpublished = {\url{https://huggingface.co/datasets/RyokoExtra/TvTroper}},
}
```
### Name Etymology
N/A
### Contributions
- [@KaraKaraWitch (Twitter)](https://twitter.com/KaraKaraWitch) for gathering this dataset. |
HydraLM/math_dataset_standardized | 2023-07-27T17:16:11.000Z | [
"region:us"
] | HydraLM | null | null | null | 2 | 3 | Entry not found |
HydraLM/CodeAlpaca-20k_alpaca | 2023-07-27T18:42:52.000Z | [
"region:us"
] | HydraLM | null | null | null | 0 | 3 | ---
dataset_info:
features:
- name: input
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 6953173
num_examples: 20021
download_size: 3442058
dataset_size: 6953173
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "CodeAlpaca-20k_alpaca"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
HydraLM/chemistry_dataset_alpaca | 2023-07-27T18:43:22.000Z | [
"region:us"
] | HydraLM | null | null | null | 1 | 3 | ---
dataset_info:
features:
- name: input
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 45485759
num_examples: 19999
download_size: 21441377
dataset_size: 45485759
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "chemistry_dataset_alpaca"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Shashkovich/Telecommunication_SMS_time_series | 2023-08-15T14:25:18.000Z | [
"task_categories:time-series-forecasting",
"license:gpl-3.0",
"SMS",
"fraud",
"forecasting",
"region:us"
] | Shashkovich | null | null | null | 1 | 3 | ---
license: gpl-3.0
task_categories:
- time-series-forecasting
tags:
- SMS
- fraud
- forecasting
pretty_name: SMS time series
---
This dataset contains various time series from vendors.
# Vendor A: 01.03.23-14.08.23
* TS_*_all - Count of all SMS


# Vendor A: January
* TS_*_fraud - Count of fraud


* TS_*_all - Count of all SMS


* TS_*_hlrDelay - Mean values of hlr delay


# Vendor B: January 1-8
* 1-8_TS_*_fraud - Count of fraud


* 1-8_TS_*_all - Count of all SMS


* 1-8_TS_*_hlrDelay - Mean values of hlr delay

 |
Fred666/ocnli | 2023-07-28T07:09:53.000Z | [
"task_categories:text-classification",
"size_categories:10K<n<100K",
"language:zh",
"license:gpl-3.0",
"arxiv:2010.05444",
"region:us"
] | Fred666 | null | null | null | 0 | 3 | ---
license: gpl-3.0
task_categories:
- text-classification
language:
- zh
size_categories:
- 10K<n<100K
---
This dataset is copied from [OCNLI (CLUE benchmark)](https://github.com/CLUEbenchmark/OCNLI/tree/main/data/ocnli) with certain modifications.
The corresponding paper is [OCNLI](https://arxiv.org/abs/2010.05444).
The modifications are:
1. Transform json file to csv file.
2. Encoding in UTF-8.
3. Remove data entries whose label value is '-'.
4. Replace label values, 'neutral' to 1, 'entailment' to 0, and 'contradiction' to 2.
5. Add one column 'sentence1', whose value is '前提:' + premise value + '结论:' + hypothesis value.
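A sketch of the transformation described above, using pandas; the upstream field names (`premise`, `hypothesis`, `label`) and the line-delimited json layout are assumptions about the source files:
```python
import pandas as pd

# Assumes the upstream OCNLI files are line-delimited json with premise/hypothesis/label fields.
df = pd.read_json("train.50k.json", lines=True)
df = df[df["label"] != "-"]
df["label"] = df["label"].map({"entailment": 0, "neutral": 1, "contradiction": 2})
df["sentence1"] = "前提:" + df["premise"] + "结论:" + df["hypothesis"]
df.to_csv("ocnli_train_std.csv", index=False, encoding="utf-8")
```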
ocnli_train_std.csv comes from train.50k.json.
ocnli_test_std.csv comes from dev.json. |
jusKnows/linux_errors-solutions_onlyESP | 2023-07-28T10:18:10.000Z | [
"language:es",
"license:other",
"region:us"
] | jusKnows | null | null | null | 0 | 3 | ---
license: other
language:
- es
pretty_name: s
---
### Dataset creation method
This dataset was created using the **Llama-2-70b-chat** version from **Petals chat** and ChatGPT.
- First, we asked the Petals Llama-2 chat to create a random list of 30 common Linux problems with step-by-step solutions.
- Second, we used ChatGPT to create different versions of each text that forms the problems and solutions. This way we created different ways of asking and answering the same question.
- Finally, we unified all possible combinations for each problem id. |
yyy999/Unicauca-dataset-April-June-2019-Network-flows | 2023-07-28T10:43:07.000Z | [
"region:us"
] | yyy999 | null | null | null | 0 | 3 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: ts
dtype: int64
- name: duration
dtype: int64
- name: src_ip
dtype: int64
- name: src_port
dtype: int64
- name: dst_ip
dtype: int64
- name: dst_port
dtype: int64
- name: proto
dtype: int64
- name: packets
dtype: int64
- name: bytes
dtype: int64
- name: packet_size
dtype: float64
splits:
- name: train
num_bytes: 173109680
num_examples: 2163871
- name: test
num_bytes: 43277440
num_examples: 540968
download_size: 99801648
dataset_size: 216387120
---
# Dataset Card for "Unicauca-dataset-April-June-2019-Network-flows"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
ai-forever/paper_persi_chat | 2023-10-04T15:39:45.000Z | [
"task_categories:text-generation",
"task_categories:summarization",
"task_categories:conversational",
"task_categories:question-answering",
"size_categories:10K<n<100K",
"language:en",
"license:mit",
"region:us"
] | ai-forever | null | null | null | 7 | 3 | ---
license: mit
task_categories:
- text-generation
- summarization
- conversational
- question-answering
language:
- en
size_categories:
- 10K<n<100K
---
# PaperPersiChat Dataset
Dataset for paper [PaperPersiChat: Scientific Paper Discussion Chatbot using Transformers and Discourse Flow Management](https://aclanthology.org/2023.sigdial-1.54/)
# Dataset creation
To construct the dataset, we used the part of Semantic Scholar Open Research Corpus [https://github.com/allenai/s2orc] as the main source of scientific publications, namely the Computer Science section. We constructed dialogues over the segments of the papers where each segment consists of a combination of several sections of the paper that have the same type.
### Davinci dialogues
First, we tried to reproduce the dialogue of people discussing a particular segment of the paper. As the first utterance, the first speaker should introduce the section by providing a brief summary. Since the davinci model is capable of processing complex instructions, we selected it as the base model. We used the following prompt concatenated with the segment text as the model input:
`Generate a dialogue between you and another person based on the following paper. You have access to the paper. In the first utterance you should write a short summary. The other person sees only your summary and asks four (4) questions, separated by your answers.`
In this way, we collected 3588 raw outputs that were parsed further into summary and dialogue turns. All these summaries were used to train the summarization component. Then, we filtered unparsed outputs, short dialogues and dialogues with inconsistent structure (including incorrect order of speakers in utterances). Thus, we obtained the set of 2817 dialogues that were used to train the models from the QA session module.
### ChatGPT dialogues
To construct more qualitative dialogues, and also to consider the fact that a real person sees only summaries, we used two ChatGPT models talking to each other. The first acted as a bot, and the second as a real person. Here, we used the summarization model trained on the davinci outputs to construct the inputs of the second model. The prompts used are the following:
1. Bot-like model. "You should briefly answer the questions on the following text. If there is no answer in the given text, then you must answer that there is not enough information. Your answers should be brief." + full context
2. Person-like model. "You should be asking short questions about an article you can't see. You only see the following summary. Your task is to ask clarifying dependent questions in order to understand the source text. You can ask only single short question at each turn." + summary produced by our summarizer.
We carried out four dialogue turns between these two models for each segment. In this case, postprocessing parsing is not required, since each model generates only one utterance at each step. We collected 8787 dialogues in total.
# Dataset structure
We share the resulting dataset via two json files consisting instances with the structure demonstrated by the following example:
```json
{
"text": "Table 1 and Table 2 describe...",
"dialogue": "What is the improvement achieved...",
"meta_segments": [
{"id": "ffa_15", "title": "Model", "section_type": "methodology"},
{"id": "ffa_16", "title": "Comparison To Other Models", "section_type": "methodology"}
],
"meta_paper": {
"title": "Correcting Forecasts with Multifactor Neural Attention",
"paper_id": "ffa"
},
"parsed_dialogue": {
"summary": "This paper presents a multifactor attention approach...",
"turns":
[{"speaker": "person", "text": "What is the improvement achieved..."},
{"speaker": "bot", "text": "The proposed approach achieves..."}, ...]
}
}
```
Here, "text" is the entire input context, "dialogue" is the raw Davinci output or the dialogue constructed by two ChatGPT models joined by '\n' tokens, "meta_segments" and "meta_paper" show additional meta information about the segments (including scipdf parsing results). The "parsed_dialogue" field contains resulting postprocessed dialogues that have the summary produced by the summarization module in the case of ChatGPT or a generated summary in the case of Davinci.
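A small sketch for iterating over one of the shared json files (the file name is a placeholder; the field names follow the example above):
```python
import json

# "chatgpt_dialogues.json" is a placeholder name for one of the two shared json files.
with open("chatgpt_dialogues.json", encoding="utf-8") as f:
    dialogues = json.load(f)

for item in dialogues[:3]:
    parsed = item["parsed_dialogue"]
    print("SUMMARY:", parsed["summary"][:80])
    for turn in parsed["turns"]:
        print(f'{turn["speaker"]}: {turn["text"][:80]}')
```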
# Citation
If you find this dataset helpful, feel free to cite our publication [PaperPersiChat: Scientific Paper Discussion Chatbot using Transformers and Discourse Flow Management](https://aclanthology.org/2023.sigdial-1.54/):
```
@inproceedings{chernyavskiy-etal-2023-paperpersichat,
title = "{P}aper{P}ersi{C}hat: Scientific Paper Discussion Chatbot using Transformers and Discourse Flow Management",
author = "Chernyavskiy, Alexander and
Bregeda, Max and
Nikiforova, Maria",
booktitle = "Proceedings of the 24th Meeting of the Special Interest Group on Discourse and Dialogue",
month = sep,
year = "2023",
address = "Prague, Czechia",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.sigdial-1.54",
pages = "584--587",
}
``` |
TrainingDataPro/russian-marketplace-reviews-e-commerce-dataset | 2023-09-14T16:39:15.000Z | [
"task_categories:text-classification",
"language:en",
"license:cc-by-nc-nd-4.0",
"finance",
"code",
"region:us"
] | TrainingDataPro | null | null | null | 1 | 3 | ---
license: cc-by-nc-nd-4.0
task_categories:
- text-classification
language:
- en
tags:
- finance
- code
---
# Russian Marketplace Reviews E-Commerce Dataset
The **Russian Marketplace Reviews E-Commerce Dataset** is a comprehensive collection of data curated from a popular e-commerce platform. It contains a vast amount of *reviews, information about date and time of the review and its ratings*, offering valuable insights into consumer preferences and behaviors in the Russian marketplace.
This dataset encompasses a wide range of products across different categories, including *electronics, appliances, clothing, cosmetics, home goods, and more*. It is also valuable for sentiment analysis and opinion mining. Researchers can leverage the labeled review ratings to train models that classify reviews into *positive, negative, or neutral* sentiments.
### The dataset's possible applications:
- recommendation systems
- sentiment analysis algorithms
- consumer behavior analysis
- customer satisfaction analysis
- marketing and advertising

# Get the dataset
### This is just an example of the data
Leave a request on [**https://trainingdata.pro/data-market**](https://trainingdata.pro/data-market?utm_source=huggingface&utm_medium=cpc&utm_campaign=russian-marketplace-reviews-e-commerce-dataset) to discuss your requirements, learn about the price and buy the dataset.
# File with the extension .xlsx
includes the following information:
- **product_url**: link to the product,
- **product_title**: title of the product,
- **user_nickname**: nickname of the comment's author,
- **comment_date**: date of the comment,
- **comment_stars**: number of stars given to the product,
- **comment_text**: text of the comment,
- **comment_likes_count**: number of likes on the comment,
- **comment_dislikes_count**: number of dislikes on the comment
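The sketch below shows one way the file could be loaded and the star ratings mapped to coarse sentiment labels; the filename and the star-to-sentiment mapping are assumptions for illustration, not part of the dataset.
```python
import pandas as pd

# Hypothetical filename; the dataset ships as a single .xlsx file.
df = pd.read_excel("russian_marketplace_reviews.xlsx")

# One possible mapping from stars to sentiment: 4-5 positive, 3 neutral, 1-2 negative.
def stars_to_sentiment(stars: int) -> str:
    if stars >= 4:
        return "positive"
    if stars == 3:
        return "neutral"
    return "negative"

df["sentiment"] = df["comment_stars"].apply(stars_to_sentiment)
print(df[["comment_text", "comment_stars", "sentiment"]].head())
```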
# Reviews parsing can be done in accordance with your requirements.
## [**TrainingData**](https://trainingdata.pro/data-market?utm_source=huggingface&utm_medium=cpc&utm_campaign=russian-marketplace-reviews-e-commerce-dataset) provides high-quality data annotation tailored to your needs
More datasets in TrainingData's Kaggle account: **https://www.kaggle.com/trainingdatapro/datasets**
TrainingData's GitHub: **https://github.com/Trainingdata-datamarket/TrainingData_All_datasets** |
TrainingDataPro/amazon-reviews-dataset | 2023-09-14T16:38:13.000Z | [
"task_categories:text-classification",
"language:en",
"license:cc-by-nc-nd-4.0",
"code",
"region:us"
] | TrainingDataPro | null | null | null | 1 | 3 | ---
license: cc-by-nc-nd-4.0
task_categories:
- text-classification
language:
- en
tags:
- code
---
# Amazon Reviews Dataset
The Amazon Reviews Dataset is a comprehensive collection of customer reviews obtained from the popular e-commerce website, Amazon.com. This dataset encompasses reviews written in **5** different languages, making it a valuable resource for conducting **multilingual sentiment analysis and opinion mining**.
The dataset's multilingual nature makes it useful for natural language processing tasks, sentiment analysis algorithms, and other machine learning applications that require diverse language data for training and evaluation.
The dataset can be highly valuable in training and fine-tuning machine learning models to *automatically classify sentiments, predict customer satisfaction, or extract key information from customer reviews*.
### Languages in the dataset:
- Italian
- German
- Spanish
- French
- English
# Get the dataset
### This is just an example of the data
Leave a request on [**https://trainingdata.pro/data-market**](https://trainingdata.pro/data-market?utm_source=huggingface&utm_medium=cpc&utm_campaign=amazon-reviews-dataset) to discuss your requirements, learn about the price and buy the dataset.
# Content
For each item, we extracted:
- **user_name**: name of the reviewer
- **stars**: number of stars given to the review
- **country**: country of the author
- **date**: date of the review
- **title**: title of the review
- **text**: text of the review
- **helpful**: number of people who think that the review is helpful
# Amazon Reviews might be collected in accordance with your requirements.
## [**TrainingData**](https://trainingdata.pro/data-market?utm_source=huggingface&utm_medium=cpc&utm_campaign=amazon-reviews-dataset) provides high-quality data annotation tailored to your needs
More datasets in TrainingData's Kaggle account: **https://www.kaggle.com/trainingdatapro/datasets**
TrainingData's GitHub: **https://github.com/Trainingdata-datamarket/TrainingData_All_datasets** |
Mireu-Lab/UNSW-NB15 | 2023-09-10T18:43:35.000Z | [
"license:gpl-3.0",
"Network Security",
"region:us"
] | Mireu-Lab | null | null | null | 0 | 3 | ---
license: gpl-3.0
tags:
- Network Security
---
# UNSW-NB15
> This dataset is provided through the train and test CSV files released by UNSW-NB15.
[link](https://research.unsw.edu.au/projects/unsw-nb15-dataset)
## Labels
The columns of the dataset are as follows; a minimal loading sketch follows the table.
|#|Column|Non-Null|Count|Dtype|
|---|---|---|---|---|
|0|id|82332|non-null|int64|
|1|dur|82332|non-null|float64|
|2|proto|82332|non-null|object|
|3|service|82332|non-null|object|
|4|state|82332|non-null|object|
|5|spkts|82332|non-null|int64|
|6|dpkts|82332|non-null|int64|
|7|sbytes|82332|non-null|int64|
|8|dbytes|82332|non-null|int64|
|9|rate|82332|non-null|float64|
|10|sttl|82332|non-null|int64|
|11|dttl|82332|non-null|int64|
|12|sload|82332|non-null|float64|
|13|dload|82332|non-null|float64|
|14|sloss|82332|non-null|int64|
|15|dloss|82332|non-null|int64|
|16|sinpkt|82332|non-null|float64|
|17|dinpkt|82332|non-null|float64|
|18|sjit|82332|non-null|float64|
|19|djit|82332|non-null|float64|
|20|swin|82332|non-null|int64|
|21|stcpb|82332|non-null|int64|
|22|dtcpb|82332|non-null|int64|
|23|dwin|82332|non-null|int64|
|24|tcprtt|82332|non-null|float64|
|25|synack|82332|non-null|float64|
|26|ackdat|82332|non-null|float64|
|27|smean|82332|non-null|int64|
|28|dmean|82332|non-null|int64|
|29|trans_depth|82332|non-null|int64|
|30|response_body_len|82332|non-null|int64|
|31|ct_srv_src|82332|non-null|int64|
|32|ct_state_ttl|82332|non-null|int64|
|33|ct_dst_ltm|82332|non-null|int64|
|34|ct_src_dport_ltm|82332|non-null|int64|
|35|ct_dst_sport_ltm|82332|non-null|int64|
|36|ct_dst_src_ltm|82332|non-null|int64|
|37|is_ftp_login|82332|non-null|int64|
|38|ct_ftp_cmd|82332|non-null|int64|
|39|ct_flw_http_mthd|82332|non-null|int64|
|40|ct_src_ltm|82332|non-null|int64|
|41|ct_srv_dst|82332|non-null|int64|
|42|is_sm_ips_ports|82332|non-null|int64|
|43|attack_cat|82332|non-null|object|
|44|label|82332|non-null|int64|
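A minimal sketch of loading the CSV files and separating features from labels is shown below; the filenames follow the common naming of the released train/test CSVs but should be treated as assumptions.
```python
import pandas as pd

# Assumed filenames for the released train/test CSV files.
train = pd.read_csv("UNSW_NB15_training-set.csv")
test = pd.read_csv("UNSW_NB15_testing-set.csv")

# Separate the feature columns from the binary label and the attack category.
feature_cols = [c for c in train.columns if c not in ("id", "label", "attack_cat")]
X_train, y_train = train[feature_cols], train["label"]

# Categorical columns (proto, service, state) need encoding before modeling.
X_train = pd.get_dummies(X_train, columns=["proto", "service", "state"])
print(X_train.shape, y_train.value_counts())
```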
|
CyberHarem/pozemka_arknights | 2023-09-17T16:08:55.000Z | [
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] | CyberHarem | null | null | null | 0 | 3 | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of pozemka_arknights
This is the dataset of pozemka_arknights, containing 200 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
| Name | Images | Download | Description |
|:------------|---------:|:------------------------------------|:-------------------------------------------------------------------------|
| raw | 200 | [Download](dataset-raw.zip) | Raw data with meta information. |
| raw-stage3 | 517 | [Download](dataset-raw-stage3.zip) | 3-stage cropped raw data with meta information. |
| 384x512 | 200 | [Download](dataset-384x512.zip) | 384x512 aligned dataset. |
| 512x512 | 200 | [Download](dataset-512x512.zip) | 512x512 aligned dataset. |
| 512x704 | 200 | [Download](dataset-512x704.zip) | 512x704 aligned dataset. |
| 640x640 | 200 | [Download](dataset-640x640.zip) | 640x640 aligned dataset. |
| 640x880 | 200 | [Download](dataset-640x880.zip) | 640x880 aligned dataset. |
| stage3-640 | 517 | [Download](dataset-stage3-640.zip) | 3-stage cropped dataset with the shorter side not exceeding 640 pixels. |
| stage3-800 | 517 | [Download](dataset-stage3-800.zip) | 3-stage cropped dataset with the shorter side not exceeding 800 pixels. |
| stage3-1200 | 517 | [Download](dataset-stage3-1200.zip) | 3-stage cropped dataset with the shorter side not exceeding 1200 pixels. |
|
Moritz-Pfeifer/CentralBankCommunication | 2023-08-04T14:13:30.000Z | [
"task_categories:text-classification",
"size_categories:1K<n<10K",
"language:en",
"license:mit",
"region:us"
] | Moritz-Pfeifer | null | null | null | 0 | 3 | ---
license: mit
task_categories:
- text-classification
language:
- en
size_categories:
- 1K<n<10K
---
This dataset contains two manually pre-labeled datasets:
In the **economic agents dataset**, we labeled 6,205 randomized sentences from a [Fed database](https://github.com/Moritz-Pfeifer/CentralBankRoBERTa/tree/main/Data/FED) containing speeches (1948-2023) as speaking either about households, firms, the financial sector, the government, or the central bank itself.
In the **sentiment dataset**, we labeled 6,683 randomized sentences from the same database, which are either labeled as being positive (1) or negative (0).
The datasets were used to train an [agent classifier](https://huggingface.co/Moritz-Pfeifer/CentralBankRoBERTa-agent-classifier) and a [sentiment classifier](https://huggingface.co/Moritz-Pfeifer/CentralBankRoBERTa-sentiment-classifier).
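A minimal sketch of applying the two released classifiers with the `transformers` pipeline is shown below; the example sentence and the exact label names returned by the models are assumptions for illustration.
```python
from transformers import pipeline

# The two fine-tuned classifiers referenced above (model ids taken from this card).
agent_clf = pipeline("text-classification",
                     model="Moritz-Pfeifer/CentralBankRoBERTa-agent-classifier")
sentiment_clf = pipeline("text-classification",
                         model="Moritz-Pfeifer/CentralBankRoBERTa-sentiment-classifier")

sentence = "Household spending has continued to expand at a solid pace."
print(agent_clf(sentence))      # which economic agent the sentence speaks about
print(sentiment_clf(sentence))  # positive vs. negative sentiment
```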
<table>
<tr>
<td colspan="2" style="border-top: 1px solid #ccc; padding: 5px; text-align: left;">
Please cite this model as Pfeifer, M. and Marohl, V.P. (2023) "CentralBankRoBERTa: A Fine-Tuned Large Language Model for Central Bank Communications" ADD SOURCE/LINK
</td>
</tr>
<tr>
<td style="padding: 5px;">
Moritz Pfeifer<br>
Institute for Economic Policy, University of Leipzig<br>
04109 Leipzig, Germany<br>
<a href="mailto:pfeifer@wifa.uni-leipzig.de">pfeifer@wifa.uni-leipzig.de</a>
</td>
<td style="padding: 5px;">
Vincent P. Marohl<br>
Department of Mathematics, Columbia University<br>
New York NY 10027, USA<br>
<a href="mailto:vincent.marohl@columbia.edu">vincent.marohl@columbia.edu</a>
</td>
</tr>
</table> |
elsaEU/ELSA500k_track2 | 2023-08-27T07:59:26.000Z | [
"license:cc-by-4.0",
"region:us"
] | elsaEU | null | null | null | 1 | 3 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: id
dtype: string
- name: original_prompt
dtype: string
- name: positive_prompt
dtype: string
- name: negative_prompt
dtype: string
- name: model
dtype: string
- name: filepath
dtype: string
- name: num_inference_steps
dtype: int64
- name: width
dtype: int64
- name: height
dtype: int64
- name: url
dtype: string
- name: image
dtype: image
- name: heatmap_labels
sequence: string
- name: heatmaps
sequence:
sequence:
sequence: float64
splits:
- name: train
num_bytes: 127788930013
num_examples: 501000
download_size: 54902331553
dataset_size: 127788930013
license: cc-by-4.0
---
# ELSA - Multimedia use case

**ELSA Multimedia is a large collection of Deep Fake images, generated using diffusion models**
### Dataset Summary
This dataset was developed as part of the EU project ELSA. Specifically for the Multimedia use-case.
Official webpage: https://benchmarks.elsa-ai.eu/
This dataset supports the development of effective solutions for detecting and mitigating the spread of deep fake images in multimedia content. Deep fake images, which are highly realistic and deceptive manipulations, pose significant risks to privacy, security, and trust in digital media. This dataset can be used to train robust and accurate models that can identify and flag instances of deep fake images.
### ELSA versions
| Name | Description | Link |
| ------------- | ------------- | ---------------------|
| ELSA1M_track1 | Dataset of 1M images generated using diffusion model | https://huggingface.co/datasets/elsaEU/ELSA1M_track1 |
| ELSA500k_track2 | Dataset of 500k images generated using diffusion model with diffusion attentive attribution maps [1] | https://huggingface.co/datasets/elsaEU/ELSA500k_track2 |
```python
from daam import WordHeatMap
from datasets import load_dataset
import matplotlib.pyplot as plt
import torch

elsa_data = load_dataset("elsaEU/ELSA500k_track2", split="train", streaming=True)

for sample in elsa_data:
    image = sample.pop("image")
    heatmaps = sample.pop("heatmaps")
    heatmap_labels = sample.pop("heatmap_labels")
    metadata = sample  # remaining keys are the per-image metadata fields
    for j, (h, l) in enumerate(zip(heatmaps, heatmap_labels)):
        heatmap = WordHeatMap(torch.Tensor(h), word=l)
        heatmap.plot_overlay(image)  # overlay the attribution map on the generated image
        plt.show()
```
Using <a href="https://huggingface.co/docs/datasets/stream">streaming=True</a> lets you work with the dataset without downloading it in full.
## Dataset Structure
Each parquet file contains nearly 1k images and a JSON file with metadata.
The Metadata for generated images are:
- ID: Laion image ID
- original_prompt: Laion Prompt
- positive_prompt: positive prompt used for image generation
- negative_prompt: negative prompt used for image generation
- model: model used for the image generation
- nsfw: nsfw tag from Laion
- url_real_image: URL of the real image associated with the same prompt
- filepath: filepath of the fake image
- aspect_ratio: aspect ratio of the generated image
- heatmaps: diffusion attentive attribution maps
- heatmap_labels: words related to the heatmaps
### Dataset Curators
- Leonardo Labs (rosario.dicarlo.ext@leonardo.com)
- UNIMORE (https://aimagelab.ing.unimore.it/imagelab/)
### References
[1] Tang, Raphael, et al. "What the DAAM: Interpreting Stable Diffusion Using Cross Attention", 2023. |
harpomaxx/example-dataset | 2023-07-30T23:23:12.000Z | [
"task_categories:text-classification",
"size_categories:n<1K",
"language:en",
"license:openrail",
"art",
"region:us"
] | harpomaxx | null | null | null | 0 | 3 | ---
license: openrail
task_categories:
- text-classification
language:
- en
tags:
- art
pretty_name: example-dataset
size_categories:
- n<1K
---
# Dataset Card for Dataset Name
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] |
iamshnoo/alpaca-cleaned-persian | 2023-09-15T23:20:43.000Z | [
"region:us"
] | iamshnoo | null | null | null | 0 | 3 | ---
dataset_info:
features:
- name: input
dtype: string
- name: instruction
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 57273102
num_examples: 51760
download_size: 25446305
dataset_size: 57273102
---
Translated from yahma/alpaca-cleaned using NLLB-1.3B
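A minimal loading sketch is given below; the prompt template is an illustrative assumption and not part of the dataset.
```python
from datasets import load_dataset

ds = load_dataset("iamshnoo/alpaca-cleaned-persian", split="train")

# Assemble a simple instruction prompt from the three fields
# (the template itself is an assumption, not part of the dataset).
def to_prompt(example):
    if example["input"]:
        return f"{example['instruction']}\n\n{example['input']}\n\n{example['output']}"
    return f"{example['instruction']}\n\n{example['output']}"

print(to_prompt(ds[0]))
```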
# Dataset Card for "alpaca-cleaned-persian"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
iamshnoo/alpaca-cleaned-swahili | 2023-09-15T23:22:47.000Z | [
"region:us"
] | iamshnoo | null | null | null | 0 | 3 | ---
dataset_info:
features:
- name: input
dtype: string
- name: instruction
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 32934835
num_examples: 51760
download_size: 18254346
dataset_size: 32934835
---
Translated from yahma/alpaca-cleaned using NLLB-1.3B
# Dataset Card for "alpaca-cleaned-swahili"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
BrunoHays/ESLO_text_only | 2023-07-31T06:50:48.000Z | [
"license:cc-by-nc-4.0",
"region:us"
] | BrunoHays | ESLO dataset, each utterance are taken out individually | @misc{11403/eslo/v1,
title = {ESLO},
author = {LLL},
url = {https://hdl.handle.net/11403/eslo/v1},
note = {{ORTOLANG} ({Open} {Resources} {and} {TOols} {for} {LANGuage}) \textendash www.ortolang.fr},
copyright = {Licence Creative Commons Attribution - Pas d'Utilisation Commerciale - Partage dans les Mêmes Conditions 4.0 International},
year = {2023}
} | null | 0 | 3 | ---
license: cc-by-nc-4.0
---
Eshkol-Taravella I., Baude O., Maurel D., Hriba L., Dugua C., Tellier I., (2012), Un grand corpus oral « disponible » : le corpus d’Orléans 1968-2012., in Ressources linguistiques libres, TAL. Volume 52 – n° 3/2011, 17-46 Laboratoire Ligérien de Linguistique - UMR 7270 (LLL) (2023). ESLO [Corpus]. ORTOLANG (Open Resources and TOols for LANGuage) - www.ortolang.fr, v1, https://hdl.handle.net/11403/eslo/v1. |
CyberHarem/sangonomiya_kokomi_genshin | 2023-09-17T16:20:43.000Z | [
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] | CyberHarem | null | null | null | 0 | 3 | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of sangonomiya_kokomi_genshin
This is the dataset of sangonomiya_kokomi_genshin, containing 200 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
| Name | Images | Download | Description |
|:------------|---------:|:------------------------------------|:-------------------------------------------------------------------------|
| raw | 200 | [Download](dataset-raw.zip) | Raw data with meta information. |
| raw-stage3 | 544 | [Download](dataset-raw-stage3.zip) | 3-stage cropped raw data with meta information. |
| 384x512 | 200 | [Download](dataset-384x512.zip) | 384x512 aligned dataset. |
| 512x512 | 200 | [Download](dataset-512x512.zip) | 512x512 aligned dataset. |
| 512x704 | 200 | [Download](dataset-512x704.zip) | 512x704 aligned dataset. |
| 640x640 | 200 | [Download](dataset-640x640.zip) | 640x640 aligned dataset. |
| 640x880 | 200 | [Download](dataset-640x880.zip) | 640x880 aligned dataset. |
| stage3-640 | 544 | [Download](dataset-stage3-640.zip) | 3-stage cropped dataset with the shorter side not exceeding 640 pixels. |
| stage3-800 | 544 | [Download](dataset-stage3-800.zip) | 3-stage cropped dataset with the shorter side not exceeding 800 pixels. |
| stage3-1200 | 544 | [Download](dataset-stage3-1200.zip) | 3-stage cropped dataset with the shorter side not exceeding 1200 pixels. |
|
chaoyi-wu/PMC-Inline | 2023-08-06T00:40:40.000Z | [
"task_categories:text-generation",
"license:apache-2.0",
"biology",
"region:us"
] | chaoyi-wu | null | null | null | 3 | 3 | ---
license: apache-2.0
task_categories:
- text-generation
tags:
- biology
---
# PMC-Inline Dataset
- [PMC-Inline Dataset](#pmc-inline-dataset)
  - [Dataset Structure](#dataset-structure)
- [Sample](#sample)
This repository contains the text parts; the figure parts can be downloaded from https://pan.baidu.com/s/1Src_rhXsaOFp8zJ_3zMFsQ?pwd=p3ne.
## Dataset Structure
**PMC-Inline** (PMC papers with inline figures).
We collect the CC-licensed papers from PubMed Central and remove the bibliography, author info, tables, and image captions from the original paper XML files.
Based on the inline figure references, we link 11M images back into the paper contexts.
Each paper is organized as a PMCxxxxxxx.json file, where ```xxxxxxx``` refers to the paper's unique PMC id.
## Sample
Each JSON file in the dataset is organized as below:
| info | {"article-type": "research-article", "pmid": "17925856", "pmc": "PMC1999654", "publisher-id": "07-PONE-RA-01026R1", "doi": "10.1371/journal.pone.0001008"} |
| ------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------- |
| text | \nPredicting Spatial Patterns of Plant Recruitment Using Animal-Displacement Kernels\nFor plants ... |
| img_ref | [{"id": "pone-0001008-g001", "start": 9177, "end": 9185}, {"id": "pone-0001008-g001", "start": 10715, "end": 10723}, ...] |
Explanation of each key:
- info: some info about the paper, such as the paper type, pmid, pmc id, and so on.
- text: a string which is the paper content.
- img_ref: a list recording which image is referred to, and where, in the original paper. For example, {"id": "pone-0001008-g001", "start": 9177, "end": 9185} denotes that the figure pone-0001008-g001 is mentioned in the text string at indices 9177-9185.
You can get the images from our PMC figure parts; each figure is named uniformly as ```PMCxxxxxxx_figid.jpg```, e.g. ```PMC1999654_pone-0001008-g001.jpg```.
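As a minimal sketch (the file path is hypothetical, and only the fields described above are assumed), the figure references can be spliced back into the text like this:
```python
import json

# Hypothetical path to one paper file from the text part of the release.
with open("PMC1999654.json", "r", encoding="utf-8") as f:
    paper = json.load(f)

text = paper["text"]
pmcid = paper["info"]["pmc"]  # e.g. "PMC1999654"

# Walk the references from the end so earlier character offsets stay valid,
# replacing each referenced span with the corresponding figure filename.
for ref in sorted(paper["img_ref"], key=lambda r: r["start"], reverse=True):
    fig_file = f'{pmcid}_{ref["id"]}.jpg'  # naming scheme described above
    text = text[:ref["start"]] + f"[{fig_file}]" + text[ref["end"]:]

print(text[:500])
```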
Note that our PMC figures were collected before PMC-Inline, and some papers were updated during that time window, so some figures may be missing from our figure base. |
parrotzone/sdxl-1.0 | 2023-09-20T12:27:51.000Z | [
"license:openrail++",
"region:us"
] | parrotzone | null | null | null | 7 | 3 | ---
license: openrail++
---
# check [sdxl.parrotzone.art](https://sdxl.parrotzone.art) for easy viewing ⋆。°✩
---
## all images were made with SDXL 1.0 + the 0.9 VAE
- steps: 20
- cfg scale: 7
- no refiner
- random seeds
|
hezarai/xlsum-fa | 2023-08-08T12:26:16.000Z | [
"task_categories:summarization",
"language:fa",
"region:us"
] | hezarai | null | null | null | 0 | 3 | ---
task_categories:
- summarization
language:
- fa
pretty_name: XLSum Persian
---
The Persian portion of the [XLSum](https://huggingface.co/datasets/csebuetnlp/xlsum) dataset.
### Citation
```bibtex
@inproceedings{hasan-etal-2021-xl,
title = "{XL}-Sum: Large-Scale Multilingual Abstractive Summarization for 44 Languages",
author = "Hasan, Tahmid and
Bhattacharjee, Abhik and
Islam, Md. Saiful and
Mubasshir, Kazi and
Li, Yuan-Fang and
Kang, Yong-Bin and
Rahman, M. Sohel and
Shahriyar, Rifat",
booktitle = "Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021",
month = aug,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.findings-acl.413",
pages = "4693--4703",
}
``` |
AlanRobotics/saiga_tokenized | 2023-08-02T17:24:25.000Z | [
"region:us"
] | AlanRobotics | null | null | null | 0 | 3 | ---
dataset_info:
features:
- name: labels
sequence: int64
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
splits:
- name: train
num_bytes: 52968582.0
num_examples: 32688
- name: test
num_bytes: 5885398.0
num_examples: 3632
download_size: 20205153
dataset_size: 58853980.0
---
# Dataset Card for "saiga_tokenized"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
TibetanAI/TibetanAI_NERv1.0 | 2023-08-03T02:18:55.000Z | [
"language:bo",
"license:apache-2.0",
"region:us"
] | TibetanAI | null | null | null | 0 | 3 | ---
license: apache-2.0
language:
- bo
---
# Dataset Card for TibetanAI_NERv1.0
## Dataset Description
TibetanAI_NERv1.0 is a Tibetan named entity recognition (NER) dataset.
- **Paper:** 基于小样本学习的藏文命名实体识别 (Tibetan Named Entity Recognition Based on Few-Shot Learning)
### Languages
Tibetan
### Licensing Information
apache-2.0
### Citation Information
于韬,张英,拥措.基于小样本学习的藏文命名实体识别[J].计算机与现代化,2023(05):13-19.
### Contributions
Title-题名: 基于小样本学习的藏文命名实体识别
Author-作者: 于韬;张英;拥措;
Organ-单位: 西藏大学信息科学技术学院;西藏大学西藏自治区藏文信息技术人工智能重点实验室;西藏大学藏文信息技术教育部工程研究中心;
|
bigheiniuJ/InstructEvalMetaICLAll | 2023-08-03T18:06:46.000Z | [
"region:us"
] | bigheiniuJ | null | null | null | 0 | 3 | ---
dataset_info:
features:
- name: task
dtype: string
- name: input
dtype: string
- name: output
dtype: string
- name: options
sequence: string
- name: seed
dtype: string
- name: split
dtype: string
splits:
- name: meta_train
num_bytes: 2338759626
num_examples: 3399184
- name: meta_eval_100shot
num_bytes: 23447441
num_examples: 47685
download_size: 1159790167
dataset_size: 2362207067
---
# Dataset Card for "InstructEvalMetaICLAll"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
tilyupo/trivia_cqa | 2023-08-04T15:37:37.000Z | [
"region:us"
] | tilyupo | null | null | null | 0 | 3 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
dataset_info:
features:
- name: question
dtype: string
- name: answer
dtype: string
- name: context
dtype: string
- name: context_score
dtype: float64
- name: context_source
dtype: string
splits:
- name: train
num_bytes: 44625505.0
num_examples: 79682
- name: validation
num_bytes: 5750820.0
num_examples: 10291
download_size: 33689157
dataset_size: 50376325.0
---
# Dataset Card for "trivia_cqa_v2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
johnowhitaker/mcqgen_1k_initial_examples | 2023-08-03T19:59:27.000Z | [
"region:us"
] | johnowhitaker | null | null | null | 0 | 3 | ---
dataset_info:
features:
- name: context
dtype: string
- name: question
dtype: string
- name: A
dtype: string
- name: B
dtype: string
- name: C
dtype: string
- name: D
dtype: string
- name: correct_answer
dtype: string
- name: text
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 902358
num_examples: 975
download_size: 558885
dataset_size: 902358
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "mcqgen_1k_initial_examples"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
SargeZT/coco-stuff-captioned-multi | 2023-08-03T20:36:20.000Z | [
"region:us"
] | SargeZT | null | null | null | 0 | 3 | ---
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
- split: train
path: data/train-*
dataset_info:
features:
- name: image
dtype: image
- name: segmented
dtype: image
- name: caption
dtype: string
- name: gray_image
dtype: image
- name: softedge
dtype: image
- name: depth
dtype: image
- name: canny
dtype: image
- name: binary
dtype: image
- name: color
dtype: image
splits:
- name: test
num_bytes: 6925042.0
num_examples: 8
- name: train
num_bytes: 7013965619.0
num_examples: 9000
download_size: 7008916049
dataset_size: 7020890661.0
---
# Dataset Card for "coco-stuff-captioned-multi"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
globis-university/aozorabunko-chats | 2023-08-21T12:33:10.000Z | [
"task_categories:text-generation",
"task_categories:text-classification",
"size_categories:100K<n<1M",
"language:ja",
"license:cc-by-4.0",
"region:us"
] | globis-university | null | null | null | 3 | 3 | ---
license: cc-by-4.0
task_categories:
- text-generation
- text-classification
language:
- ja
size_categories:
- 100K<n<1M
---
# Overview
This dataset is of conversations extracted from [Aozora Bunko (青空文庫)](https://www.aozora.gr.jp/), which collects public-domain books in Japan, using a simple heuristic approach.
# Method
First, lines surrounded by quotation mark pairs (`「」`) are extracted as utterances from the `text` field of [globis-university/aozorabunko-clean](https://huggingface.co/datasets/globis-university/aozorabunko-clean).
Then, consecutive utterances are collected and grouped together.
The code to reproduce this dataset is made available on GitHub: [globis-org/aozorabunko-extractor](https://github.com/globis-org/aozorabunko-extractor).
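A rough sketch of that heuristic is shown below; it is an illustrative re-implementation of the description above (the official extraction code is in the linked repository), and dropping single-utterance runs is an assumption.
```python
import re

# A line counts as an utterance when it is fully enclosed in 「」.
UTTERANCE = re.compile(r"^「(.+)」$")

def extract_conversations(text: str):
    """Group consecutive 「…」 lines into conversations."""
    conversations, current = [], []
    for line in text.splitlines():
        match = UTTERANCE.match(line.strip())
        if match:
            current.append(match.group(1))
        else:
            if len(current) > 1:  # assumption: keep only multi-utterance runs
                conversations.append(current)
            current = []
    if len(current) > 1:
        conversations.append(current)
    return conversations
```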
# Notice
As the conversations are extracted using a simple heuristic, a certain amount of the data may actually be monologues.
# Tips
If you prefer to employ only modern Japanese, you can filter entries with: `row["meta"]["文字遣い種別"] == "新字新仮名"`.
# License
CC BY 4.0 |
rombodawg/code_lima_wizard_vicuna_12k_from70kunfiltered_Backup | 2023-08-04T02:58:33.000Z | [
"license:other",
"region:us"
] | rombodawg | null | null | null | 0 | 3 | ---
license: other
---
Backup of code_lima_wizard_vicuna_12k_from70kunfiltered used in rombodawg/MegaCodeTraining112k
Link to the combined dataset below:
https://huggingface.co/datasets/rombodawg/MegaCodeTraining112k |
rombodawg/code_wizard_vicuna_10k_from70kunfiltered_backup | 2023-08-04T03:00:05.000Z | [
"license:other",
"region:us"
] | rombodawg | null | null | null | 0 | 3 | ---
license: other
---
Backup of code_wizard_vicuna_10k_from70kunfiltered used in rombodawg/MegaCodeTraining112k
Link to the combined dataset below:
https://huggingface.co/datasets/rombodawg/MegaCodeTraining112k |
d0rj/gsm8k-ru | 2023-08-04T08:34:00.000Z | [
"task_categories:text2text-generation",
"annotations_creators:crowdsourced",
"language_creators:translated",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:gsm8k",
"language:ru",
"license:mit",
"math-word-problems",
"arxiv:2110.14168",
"region:us"
] | d0rj | null | null | null | 0 | 3 | ---
annotations_creators:
- crowdsourced
language_creators:
- translated
language:
- ru
license:
- mit
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- gsm8k
task_categories:
- text2text-generation
task_ids: []
paperswithcode_id: gsm8k
pretty_name: Grade School Math 8K (ru)
tags:
- math-word-problems
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: question
dtype: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 6815618.0
num_examples: 7473
- name: test
num_bytes: 1234140.0
num_examples: 1319
download_size: 3883654
dataset_size: 8049758.0
---
# gsm8k-ru
Translated version of [gsm8k](https://huggingface.co/datasets/gsm8k) dataset into Russian.
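A minimal loading sketch follows; whether the translated answers keep the original GSM8K "#### <final answer>" convention is an assumption to verify.
```python
from datasets import load_dataset

ds = load_dataset("d0rj/gsm8k-ru", split="test")
sample = ds[0]
print(sample["question"])

# Original GSM8K answers end with "#### <final answer>"; assuming the
# translation preserves that marker, the numeric answer can be split off.
reasoning, _, final_answer = sample["answer"].partition("####")
print(final_answer.strip())
```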
## Dataset Description
- **Homepage:** https://openai.com/blog/grade-school-math/
- **Repository:** https://github.com/openai/grade-school-math
- **Paper:** https://arxiv.org/abs/2110.14168 |
RyokoExtra/JapaneseGoblin | 2023-08-05T14:21:38.000Z | [
"task_categories:text-classification",
"task_categories:text-generation",
"task_categories:text-to-image",
"task_categories:text-to-video",
"language:ja",
"license:apache-2.0",
"region:us"
] | RyokoExtra | null | null | null | 0 | 3 | ---
license: apache-2.0
task_categories:
- text-classification
- text-generation
- text-to-image
- text-to-video
language:
- ja
pretty_name: Japanese Goblin
---
# Dataset Card for JapaneseGoblin
[WE ARE THE JAPANESE GOBLIN!](https://en.touhouwiki.net/wiki/Lyrics:_%E7%A0%95%E6%9C%88_(%E3%82%B3%E3%82%B3%26%E3%81%95%E3%81%A4%E3%81%8D_%E3%81%8C_%E3%81%A6%E3%82%93%E3%81%93%E3%82%82%E3%82%8A%27s_%E4%BD%9C%E6%A5%AD%E5%A6%A8%E5%AE%B3Remix))
## Dataset Description
- **Homepage:** Here!
- **Repository:** N/A
- **Paper:** N/A
- **Leaderboard:** N/A
- **Point of Contact:** KaraKaraWitch
### Dataset Summary
JapaneseGoblin is a dump of en.touhouwiki.net wiki.
### Supported Tasks and Leaderboards
This dataset is primarily intended for unsupervised training of text generation models; however, it may be useful for other purposes.
- text-classification
- text-generation
### Languages
Primarily English, though there is also some Japanese.
## Dataset Structure
All the articles are located in `touhou.dump.json` in a jsonl format.
### Data Instances
Refer to `touhou.dump.sample.json` for a sample format of each jsonl line.
### Data Fields
Refer to the sample above.
### Data Splits
The entire dump is contained within `touhou.dump.json`.
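A minimal sketch of reading the dump is given below; the field names are not documented here, so the snippet only inspects the keys of the first record.
```python
import json

# Each line of the dump is one wiki article stored as a JSON object.
with open("touhou.dump.json", "r", encoding="utf-8") as f:
    for line in f:
        article = json.loads(line)
        print(sorted(article.keys()))  # inspect the available fields
        break
```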
## Dataset Creation
### Curation Rationale
Someone requested a dataset of the Touhou wiki.
### Source Data
#### Initial Data Collection and Normalization
None. No normalization is performed as this is a raw dump of the dataset.
#### Who are the source language producers?
The related wiki editors on en.touhouwiki.net.
### Annotations
#### Annotation process
No Annotations are present.
#### Who are the annotators?
No human annotators.
### Personal and Sensitive Information
We are certain there is no PII included in the dataset.
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
As expected, the Touhou wiki focuses on the Touhou franchise.
### Other Known Limitations
N/A
## Additional Information
### Dataset Curators
KaraKaraWitch
### Licensing Information
Apache 2.0, for all parts of which KaraKaraWitch may be considered authors. All other material is distributed under fair use principles.
Ronsor Labs additionally is allowed to relicense the dataset as long as it has gone through processing.
### Citation Information
```
@misc{japanesegoblin,
title = {JapaneseGoblin: We are Japanese Goblin!},
author = {KaraKaraWitch},
year = {2023},
howpublished = {\url{https://huggingface.co/datasets/RyokoExtra/JapaneseGoblin}},
}
```
### Name Etymology
N/A
### Contributions
- [@KaraKaraWitch (Twitter)](https://twitter.com/KaraKaraWitch) for gathering this dataset.
- Suikamelon: Requesting dataset |
CyberHarem/iori_minase_azurlane | 2023-09-17T17:04:08.000Z | [
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] | CyberHarem | null | null | null | 0 | 3 | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of iori_minase_azurlane
This is the dataset of iori_minase_azurlane, containing 200 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
| Name | Images | Download | Description |
|:------------|---------:|:------------------------------------|:-------------------------------------------------------------------------|
| raw | 200 | [Download](dataset-raw.zip) | Raw data with meta information. |
| raw-stage3 | 467 | [Download](dataset-raw-stage3.zip) | 3-stage cropped raw data with meta information. |
| 384x512 | 200 | [Download](dataset-384x512.zip) | 384x512 aligned dataset. |
| 512x512 | 200 | [Download](dataset-512x512.zip) | 512x512 aligned dataset. |
| 512x704 | 200 | [Download](dataset-512x704.zip) | 512x704 aligned dataset. |
| 640x640 | 200 | [Download](dataset-640x640.zip) | 640x640 aligned dataset. |
| 640x880 | 200 | [Download](dataset-640x880.zip) | 640x880 aligned dataset. |
| stage3-640 | 467 | [Download](dataset-stage3-640.zip) | 3-stage cropped dataset with the shorter side not exceeding 640 pixels. |
| stage3-800 | 467 | [Download](dataset-stage3-800.zip) | 3-stage cropped dataset with the shorter side not exceeding 800 pixels. |
| stage3-1200 | 467 | [Download](dataset-stage3-1200.zip) | 3-stage cropped dataset with the shorter side not exceeding 1200 pixels. |
|
rombodawg/2XUNCENSORED_MegaCodeTraining188k | 2023-08-16T02:30:08.000Z | [
"license:other",
"region:us"
] | rombodawg | null | null | null | 9 | 3 | ---
license: other
---
_________________________________________________________________________________
VERSION 3 IS RELEASED DOWNLOAD HERE:
- https://huggingface.co/datasets/rombodawg/LosslessMegaCodeTrainingV3_2.2m_Evol
_________________________________________________________________________________
This is an uncensored mega combined dataset using both razent/wizardlm-code-evol-32k and nickrosh/Evol-Instruct-Code-80k-v1.
In this version, many lines of instructions were removed as part of an uncensoring process.
The Rombo's format.rar file is provided so you can use the training data in the oobabooga text-generation-webui. Simply unzip it and use it as a JSON file.
All links below:
https://huggingface.co/datasets/razent/wizardlm-code-evol-32k
(This repository was deleted; however, you can find each individual data file from it
re-uploaded as its own repository on my huggingface account)
https://huggingface.co/datasets/nickrosh/Evol-Instruct-Code-80k-v1/tree/main
Thank you to the contributors of the datasets. I do not own them; please give credit where credit is due. |
Tverous/misinfo | 2023-08-14T01:05:45.000Z | [
"region:us"
] | Tverous | null | null | null | 0 | 3 | ---
dataset_info:
features:
- name: uid
dtype: string
- name: premise
dtype: string
- name: claim
dtype: string
- name: label
dtype: string
- name: claim_cleaned_amr
dtype: string
- name: amr_penman
dtype: string
- name: amr_tokens
sequence: string
- name: amr_nodes
dtype: string
- name: amr_alignments
dtype: string
- name: amr_edges
sequence:
sequence: string
splits:
- name: '201281'
num_bytes: 121835937
num_examples: 62026
download_size: 33222114
dataset_size: 121835937
---
# Dataset Card for "misinfo"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
gaodrew/roco-65k-256px | 2023-08-05T12:07:37.000Z | [
"region:us"
] | gaodrew | null | null | null | 0 | 3 | ---
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 675508431.156
num_examples: 65418
download_size: 651136006
dataset_size: 675508431.156
---
# Dataset Card for "roco-65k-256px"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
akkasi/practice | 2023-08-05T14:22:29.000Z | [
"task_categories:text-classification",
"size_categories:1K<n<10K",
"language:en",
"license:bsd",
"biology",
"region:us"
] | akkasi | null | null | null | 0 | 3 | ---
license: bsd
language:
- en
size_categories:
- 1K<n<10K
task_categories:
- text-classification
tags:
- biology
---
# Dataset Card for Dataset Name
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] |