# AutoTrain Dataset for project: severe-js100-sentiment ## Dataset Description This dataset has been automatically processed by AutoTrain for project severe-js100-sentiment. ### Languages The BCP-47 code for the dataset's language is unk. ## Dataset Structure ### Data Instances A sample from this dataset looks as follows: ```json [ { "text": "00:58 #\u0e2d\u0e38\u0e1a\u0e31\u0e15\u0e34\u0e40\u0e2b\u0e15\u0e38 #\u0e16\u0e19\u0e19\u0e1a\u0e32\u0e07\u0e1a\u0e2d\u0e193 \u0e0a\u0e48\u0e27\u0e07\u0e42\u0e23\u0e07\u0e40\u0e23\u0e35\u0e22\u0e19\u0e2a\u0e32\u0e23\u0e2a\u0e32\u0e2a\u0e19\u0e4c\u0e27\u0e34\u0e40\u0e17\u0e28\u0e1a\u0e32\u0e07\u0e1a\u0e2d\u0e19 >\u0e42\u0e23\u0e07\u0e40\u0e23\u0e35\u0e22\u0e19\u0e01\u0e23\u0e1e\u0e34\u0e17\u0e31\u0e01\u0e29\u0e4c\u0e28\u0e36\u0e01\u0e29\u0e32 \u0e1b\u0e32\u0e01\u0e0b\u0e2d\u0e22\u0e1a\u0e32\u0e07\u0e1a\u0e2d\u0e193\u0e0b\u0e2d\u0e225 \u0e23\u0e16\u0e08\u0e31\u0e01\u0e23\u0e22\u0e32\u0e19\u0e22\u0e19\u0e15\u0e4c\u0e40\u0e2a\u0e35\u0e22\u0e2b\u0e25\u0e31\u0e01\u0e25\u0e49\u0e21 \u0e02\u0e27\u0e32\u0e07\u0e0a\u0e48\u0e2d\u0e07\u0e17\u0e32\u0e07\u0e0b\u0e49\u0e32\u0e22", "target": 2 }, { "text": "03:22 #\u0e2d\u0e38\u0e1a\u0e31\u0e15\u0e34\u0e40\u0e2b\u0e15\u0e38 #\u0e16\u0e19\u0e19\u0e01\u0e32\u0e0d\u0e08\u0e19\u0e32\u0e20\u0e34\u0e40\u0e29\u0e01 \u0e0a\u0e48\u0e27\u0e07\u0e2a\u0e30\u0e1e\u0e32\u0e19\u0e02\u0e49\u0e32\u0e21\u0e04\u0e25\u0e2d\u0e07\u0e20\u0e32\u0e29\u0e35\u0e40\u0e08\u0e23\u0e34\u0e0d >\u0e41\u0e22\u0e01\u0e1a\u0e32\u0e07\u0e41\u0e27\u0e01 \u0e1a\u0e19\u0e15\u0e48\u0e32\u0e07\u0e23\u0e30\u0e14\u0e31\u0e1a\u0e40\u0e1e\u0e0a\u0e23\u0e40\u0e01\u0e29\u0e21 \u0e23\u0e16\u0e1b\u0e34\u0e04\u0e2d\u0e31\u0e1e\u0e40\u0e2a\u0e35\u0e22\u0e2b\u0e25\u0e31\u0e01\u0e0a\u0e19\u0e02\u0e2d\u0e1a\u0e17\u0e32\u0e07 \u0e02\u0e27\u0e32\u0e07\u0e0a\u0e48\u0e2d\u0e07\u0e17\u0e32\u0e07\u0e0b\u0e49\u0e32\u0e22", "target": 2 } ] ``` ### Dataset Fields The dataset has the following fields (also called "features"): ```json { "text": "Value(dtype='string', id=None)", "target": "ClassLabel(names=['\u0e23\u0e38\u0e19\u0e41\u0e23\u0e07', '\u0e23\u0e38\u0e19\u0e41\u0e23\u0e07\u0e21\u0e32\u0e01', '\u0e44\u0e21\u0e48\u0e23\u0e38\u0e19\u0e41\u0e23\u0e07'], id=None)" } ``` ### Dataset Splits This dataset is split into a train and validation split. The split sizes are as follows: | Split name | Num samples | | ------------ | ------------------- | | train | 5348 | | valid | 1339 |
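Since `target` is a `ClassLabel`, the integer labels can be decoded back into their Thai names. A minimal sketch, assuming a placeholder repository ID (the card does not list the real one):

```python
from datasets import load_dataset

# Hypothetical repository ID; the card does not publish one.
ds = load_dataset("user/autotrain-data-severe-js100-sentiment", split="train")

label = ds.features["target"]            # ClassLabel with the three Thai names
print(label.names)                       # ['รุนแรง', 'รุนแรงมาก', 'ไม่รุนแรง']
print(label.int2str(ds[0]["target"]))    # decode one sample's integer target
```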
# Dataset Card for "alpaca-gigo-detector" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
# AutoTrain Dataset for project: pegasus-subreddit-comments-summarizer ## Dataset Description This dataset has been automatically processed by AutoTrain for project pegasus-subreddit-comments-summarizer. ### Languages The BCP-47 code for the dataset's language is en. ## Dataset Structure ### Data Instances A sample from this dataset looks as follows: ```json [ { "text": "I go through this every single year. We have an Ironman competition that is 2 miles from my hotel, and I sell out for that weekend almost a year in advance. Without fail I will have some nitwit who will come up on their checkout day and ask to extend, when I tell them I can't they lose their mind at me. It's their room, they paid for it, they're already in there how can I just give it away. People do not understand how reservations work.", "target": "The commenter experiences this every year - they sell out their hotel almost a year in advance for an Ironman competition nearby. Despite this, some customers still ask to extend their stay at checkout and get angry when told it's not possible because they don't understand how reservations work." }, { "text": "Can i just say .. thanks for going back to make sure you hadn't overreacted. Im sure that made things so much easier on all the staff, with it being their first days back, being understaffed, I'm sure, and trying to get back into the swing of things. I think you handled that really well :)", "target": "The commenter appreciates the poster's effort in going back to verify if they had overreacted. The commenter believes this action might have made things easier for the understaffed team during their first days back. The commenter commends the poster for handling the situation well." } ] ``` ### Dataset Fields The dataset has the following fields (also called "features"): ```json { "text": "Value(dtype='string', id=None)", "target": "Value(dtype='string', id=None)" } ``` ### Dataset Splits This dataset is split into a train and validation split. The split sizes are as follows: | Split name | Num samples | | ------------ | ------------------- | | train | 7177 | | valid | 1796 |
Source: https://dumps.wikimedia.org/kkwiki/latest/ [kkwiki-latest-pages-articles.xml.bz2]
# Dataset Card for "igbo-translation" ## Dataset Summary This data set contains translated data from engllish to igbo language for use in training general purpose translation models [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
# Silver Ukrainian Coreference Dataset ## Dataset Description ### Dataset Summary A silver-standard coreference resolution dataset for the Ukrainian language. The dataset was generated automatically by applying a word alignment method to the following English dataset: https://github.com/d5555/Coreference-dataset. The word alignment method was implemented by Andrii Kursin (aqrsn@ukr.net). ### Languages - Ukrainian ## Dataset Structure ### Data Fields Each sample of the dataset consists of the following fields: - **doc_key** - document identifier. - **clusters** - list of clusters, where each cluster consists of a list of mentions. Each mention is represented as a list of two indices: the first index denotes the first word of the mention, the second denotes its last word. - **sentences** - list of sentences, where each sentence is represented as a list of words. - **tokens** - list of words. - **speakers** - list of speakers, currently filled with dummy input. ### Data Splits The dataset is divided into two parts: - training set; - validation set. There is no test set, since the dataset was generated automatically. ## Dataset Creation ### Source Data The dataset was created from the following dataset: https://github.com/d5555/Coreference-dataset. ### Contributions The code for the translation of samples with further alignment was created by Andrii Kursin (aqrsn@ukr.net). The dataset was generated by Artem Kramov (https://www.linkedin.com/in/artem-kramov-0b3731100/).
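To make the field layout concrete, here is a toy sketch (with an invented Ukrainian document) of resolving the mention index pairs into text spans, following the inclusive start/end convention described above:

```python
# A toy sample; the document and clusters are invented for illustration.
sample = {
    "doc_key": "doc-0",
    "tokens": ["Оксана", "любить", "каву", ";", "вона", "п'є", "її", "щоранку"],
    "clusters": [[[0, 0], [4, 4]], [[2, 2], [6, 6]]],
}

for cluster in sample["clusters"]:
    # The second index is the last word of the mention, hence end + 1.
    mentions = [" ".join(sample["tokens"][start : end + 1]) for start, end in cluster]
    print(mentions)
# ['Оксана', 'вона']
# ['каву', 'її']
```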
I have no idea how to add data.
# Dataset Card for ScribbleHub17K *The BigKnow2022 dataset and its subsets are not yet complete. Not all information here may be accurate or accessible.* ## Dataset Description - **Homepage:** (TODO) - **Repository:** <https://github.com/RyokoAI/BigKnow2022> - **Paper:** N/A - **Leaderboard:** N/A - **Point of Contact:** Ronsor/undeleted <ronsor@ronsor.com> ### Dataset Summary ScribbleHub17K is a dataset consisting of text from over 373,000 chapters across approximately 17,500 series posted on the original story sharing site [Scribble Hub](https://scribblehub.com). ### Supported Tasks and Leaderboards This dataset is primarily intended for unsupervised training of text generation models; however, it may be useful for other purposes. * text-classification * text-generation ### Languages * English ## Dataset Structure ### Data Instances ```json { "text": " \n2082 Planet Earth the Fracture War, after a sudden fracture in our dimension unidentified beings with advance technology and u...", "meta": { "subset": "scribblehub", "series": "3811", "id": "3812", "q": 0.91, "title": "The First - Prologue- The Fracture War", "author": "RobotLove", "chapters": 1, "rating": 5, "rating_ct": 1, "genre": [ "Action", "Martial Arts", "Romance" ], "tags": [ "Kingdom Building", "Loyal Subordinates", "Male Protagonist", "Organized Crime", "Scheming" ] } } { "text": " For anyone that may see this, thanks for reading. I'm just here to see if a story can spill out of my mind if just start writin...", "meta": { "subset": "scribblehub", "series": "586090", "id": "586099", "q": 0.82, "title": "Just writing to write…i guess? - I’m here now", "author": "BigOofStudios", "chapters": 1, "rating": 4.5, "rating_ct": 2, "genre": [ "Action", "Comedy" ], "tags": [] } } ``` ### Data Fields * `text`: the actual chapter text * `meta`: metadata for chapter and series * `subset`: data source tag: `scribblehub` * `series`: series ID * `id`: chapter ID * `lang`: always `en` (English) * `q`: quality score (q-score) between 0.0 (terrible) and 1.0 (perfect); anything with a score `> 0.5` is generally good enough * `title`: chapter and series title in the format `<chapter title> - <series title>` * `chapters`: total number of chapters in the series * `rating`: Scribble Hub rating between 0 and 5 stars * `rating_ct`: number of ratings * `author`: author name * `genre`: array of Scribble Hub genres for the series * `tags`: array of tags for the series #### Q-Score Distribution ``` 0.00: 0 0.10: 0 0.20: 0 0.30: 84 0.40: 718 0.50: 3775 0.60: 22300 0.70: 72581 0.80: 137982 0.90: 135800 1.00: 59 ``` ### Data Splits No splitting of the data was performed. ## Dataset Creation ### Curation Rationale Scribble Hub is a home for original web stories, effectively a smaller, English version of Japan's Syosetuka ni Narou. As a result, it is a good source for reasonably well-written creative content. ### Source Data #### Initial Data Collection and Normalization TODO #### Who are the source language producers? The authors of each novel. ### Annotations #### Annotation process Title, ratings, and other metadata were parsed out using scripts that will be provided in the BigKnow2022 GitHub repository. #### Who are the annotators? No human annotators. ### Personal and Sensitive Information The dataset contains only works of fiction, and we do not believe it contains any PII. ## Considerations for Using the Data ### Social Impact of Dataset This dataset is intended to be useful for anyone who wishes to train a model to generate "more entertaining" content. 
It may also be useful for other languages depending on your language model. ### Discussion of Biases This dataset is composed of fictional works by various authors. Because of this fact, the contents of this dataset will reflect the biases of those authors. **Additionally, this dataset contains NSFW material and was not filtered. Beware of stereotypes.** ### Other Known Limitations N/A ## Additional Information ### Dataset Curators Ronsor Labs ### Licensing Information Apache 2.0, for all parts of which Ronsor Labs or the Ryoko AI Production Committee may be considered authors. All other material is distributed under fair use principles. ### Citation Information ``` @misc{ryokoai2023-bigknow2022, title = {BigKnow2022: Bringing Language Models Up to Speed}, author = {Ronsor}, year = {2023}, howpublished = {\url{https://github.com/RyokoAI/BigKnow2022}}, } ``` ### Contributions Thanks to @ronsor (GH) for gathering this dataset.
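As a usage note, the q-score guidance above ("anything with a score > 0.5 is generally good enough") translates directly into a filter. A minimal sketch, assuming the dataset loads from the Hub under an ID matching the card name:

```python
from datasets import load_dataset

# Repository ID assumed from the card name; adjust to the real one.
ds = load_dataset("RyokoAI/ScribbleHub17K", split="train")

# Keep only chapters above the "generally good enough" q-score threshold.
good = ds.filter(lambda ex: ex["meta"]["q"] > 0.5)
print(len(ds), "->", len(good))
```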
# Dataset Card for Honeyfeed3600 *The BigKnow2022 dataset and its subsets are not yet complete. Not all information here may be accurate or accessible.* ## Dataset Description - **Homepage:** (TODO) - **Repository:** <https://github.com/RyokoAI/BigKnow2022> - **Paper:** N/A - **Leaderboard:** N/A - **Point of Contact:** Ronsor/undeleted <ronsor@ronsor.com> ### Dataset Summary Honeyfeed3600 is a dataset consisting of text from over 38,000 chapters across approximately 3,600 series posted on the English-language web novel site [Honeyfeed](https://www.honeyfeed.fm). ### Supported Tasks and Leaderboards This dataset is primarily intended for unsupervised training of text generation models; however, it may be useful for other purposes. * text-classification * text-generation ### Languages * English ## Dataset Structure ### Data Instances ```json { "text": "Dark, black, nothingness. There are so many ways to describe that hole, but nothing would get me down there...", "meta": { "subset": "honeyfeed", "themes": [], "my_themes": [], "prompt": "", "author": "Lucianael", "novel": "10009", "id": "55686", "title": "13 Steps - 13 Steps", "likes": 4, "views": 21, "q": 0.5999999999999999 } } ``` ### Data Fields * `text`: the actual chapter text * `meta`: novel and chapter metadata * `subset`: dataset tag: `honeyfeed` * `lang`: dataset language: `en` (English) * `themes`: array of novel themes * `my_themes`: array of additional novel themes * `prompt`: writing prompt * `author`: author name * `novel`: novel ID * `id`: chapter ID * `title`: novel and chapter title in the form `<chapter title> - <novel title>` * `likes`: novel like count * `views`: novel view count * `q`: q-score (quality score) #### Q-Score Distribution ``` 0.00: 499 0.10: 420 0.20: 2562 0.30: 0 0.40: 0 0.50: 13344 0.60: 9021 0.70: 5997 0.80: 4217 0.90: 1931 1.00: 801 ``` ### Data Splits No splitting of the data was performed. ## Dataset Creation ### Curation Rationale TODO ### Source Data #### Initial Data Collection and Normalization TODO #### Who are the source language producers? The authors of each novel. ### Annotations #### Annotation process Chapter and novel titles were scraped alongside chapter text. #### Who are the annotators? No human annotators. ### Personal and Sensitive Information The dataset contains only works of fiction, and we do not believe it contains any PII. ## Considerations for Using the Data ### Social Impact of Dataset This dataset is intended to be useful for anyone who wishes to train a model to generate "more entertaining" content. It may also be useful for other languages depending on your language model. ### Discussion of Biases This dataset is composed of fictional works by various authors. Because of this fact, the contents of this dataset will reflect the biases of those authors. Beware of stereotypes. ### Other Known Limitations N/A ## Additional Information ### Dataset Curators Ronsor Labs ### Licensing Information Apache 2.0, for all parts of which Ronsor Labs or the Ryoko AI Production Committee may be considered authors. All other material is distributed under fair use principles. ### Citation Information ``` @misc{ryokoai2023-bigknow2022, title = {BigKnow2022: Bringing Language Models Up to Speed}, author = {Ronsor}, year = {2023}, howpublished = {\url{https://github.com/RyokoAI/BigKnow2022}}, } ``` ### Contributions Thanks to @ronsor (GH) for gathering this dataset.
# Preview[.](https://huggingface.co/datasets/jiaheillu/sovits_audio_preview/blob/main/README.md) **简体中文**| [English](https://huggingface.co/datasets/jiaheillu/sovits_audio_preview/blob/main/README_EN.md)| [日本語](https://huggingface.co/datasets/jiaheillu/sovits_audio_preview/blob/main/README_JP.md) This repository previews the results of various voice models trained with so-vits-svc-4.0; **click a character name** to jump to that model's training parameters.</br> **Google Chrome** is recommended; other browsers may fail to load the preview audio correctly.</br> Timbre conversion of normal speech is fairly accurate; songs span a wider vocal range and their BGM and backing vocals are hard to remove cleanly, so the results are somewhat degraded.</br> If there is a song you would like to hear converted, or you have other suggestions, [**click here**](https://huggingface.co/datasets/jiaheillu/audio_preview/discussions/new) to start a discussion.</br> The preview audio is below; **scroll up/down and left/right** to see all of it <style> .scrolling-container { width: 100%; max-width: 1600px; height: 420px; overflow: auto; margin: 0; } @media screen and (max-width: 768px) { .scrolling-container { width: 100%; height: 120px; overflow: auto; } } </style> <div class="scrolling-container"> <table border="1" style="white-space: nowrap; text-align: center;"> <thead> <tr> <th>Character</th> <th>Character's original voice A</th> <th>Input voice B</th> <th>B replaced with A's timbre</th> <th>Covers in A's timbre (click to download)</th> </tr> </thead> <tbody> <tr> <td><a href="https://huggingface.co/datasets/jiaheillu/audio_preview/blob/main/散兵效果预览/训练参数速览.md">散兵</a></td> <td><audio src="https://huggingface.co/datasets/jiaheillu/audio_preview/resolve/main/散兵效果预览/部分训练集/真遗憾,小吉祥草王让他消除了那么多的切片,剥夺了我将他一片一片千刀万剐的快乐%E3%80%82.mp3" controls="controls"></audio></td> <td><audio src="https://huggingface.co/datasets/jiaheillu/audio_preview/resolve/main/散兵效果预览/原声/shenli3.wav" controls="controls"></audio></td> <td><audio src="https://huggingface.co/datasets/jiaheillu/audio_preview/resolve/main/散兵效果预览/转换结果/shenli3mp3_auto_liulangzhe.wav" controls="controls"></audio></td> <td><a href="https://huggingface.co/datasets/jiaheillu/audio_preview/resolve/main/散兵效果预览/转换结果/夢で逢えたら2liulangzhe_f.wav">夢で会えたら</a></td> </tr> <tr> <td><a href="https://huggingface.co/datasets/jiaheillu/audio_preview/blob/main/胡桃_preview/README.md">胡桃</a></td> <td><audio src="https://huggingface.co/datasets/jiaheillu/audio_preview/resolve/main/%E8%83%A1%E6%A1%83_preview/hutao.wav" controls="controls"></audio></td> <td>.........</td> <td>.........</td> <td> <a href="https://huggingface.co/datasets/jiaheillu/audio_preview/resolve/main/胡桃_preview/moonlight_shadow2胡桃.WAV">moonlight shadow</a>, <a href="https://huggingface.co/datasets/jiaheillu/audio_preview/resolve/main/胡桃_preview/云烟成雨2胡桃.WAV">云烟成雨</a>, <a href="https://huggingface.co/datasets/jiaheillu/audio_preview/resolve/main/胡桃_preview/原点2胡桃.WAV">原点</a>, <a href="https://huggingface.co/datasets/jiaheillu/audio_preview/resolve/main/胡桃_preview/夢だ会えたら2胡桃.WAV">夢で逢えたら</a>, <a href="https://huggingface.co/datasets/jiaheillu/audio_preview/resolve/main/胡桃_preview/贝加尔湖畔2胡桃.WAV">贝加尔湖畔</a> </td> </tr> <tr> <td><a href="https://huggingface.co/datasets/jiaheillu/audio_preview/blob/main/绫华_preview/README.md">神里绫华</a></td> <td><audio src="https://huggingface.co/datasets/jiaheillu/audio_preview/resolve/main/绫华_preview/linghua428.wav" controls="controls"></audio></td> <td><audio src="https://huggingface.co/datasets/jiaheillu/audio_preview/resolve/main/绫华_preview/yelan.wav" controls="controls"></audio></td> <td><audio src="https://huggingface.co/datasets/jiaheillu/sovits_audio_preview/resolve/main/绫华_preview/yelan.wav_auto_linghua_0.5.wav" controls="controls"></audio></td> <td> <a href="https://huggingface.co/datasets/jiaheillu/audio_preview/resolve/main/绫华_preview/アムリタ2绫华.WAV">アムリタ</a>, <a href="https://huggingface.co/datasets/jiaheillu/audio_preview/resolve/main/绫华_preview/大鱼2绫华.WAV">大鱼</a>, <a 
href="https://huggingface.co/datasets/jiaheillu/audio_preview/resolve/main/绫华_preview/遊園施設2绫华.WAV">遊園施設</a>, <a href="https://huggingface.co/datasets/jiaheillu/audio_preview/resolve/main/绫华_preview/the_day_you_want_away2绫华.WAV">the day you want away</a> </td> </tr> <tr> <td><a href="https://huggingface.co/datasets/jiaheillu/sovits_audio_preview/blob/main/宵宫_preview/README.md">宵宫</a></td> <td><audio src="https://huggingface.co/datasets/jiaheillu/sovits_audio_preview/resolve/main/宵宫_preview/xiaogong.wav" controls="controls"></audio></td> <td><audio src="https://huggingface.co/datasets/jiaheillu/sovits_audio_preview/resolve/main/宵宫_preview/hutao2.wav" controls="controls"></audio></td> <td><audio src="https://huggingface.co/datasets/jiaheillu/sovits_audio_preview/resolve/main/宵宫_preview/hutao2wav_0key_xiaogong_0.5-2.wav" controls="controls"></audio></td> <td> <a href="https://huggingface.co/datasets/jiaheillu/sovits_audio_preview/resolve/main/宵宫_preview/昨夜书2宵宫.WAV">昨夜书</a>, <a href="https://huggingface.co/datasets/jiaheillu/sovits_audio_preview/resolve/main/宵宫_preview/lemon2宵宫.WAV">lemon</a>, <a href="https://huggingface.co/datasets/jiaheillu/sovits_audio_preview/resolve/main/宵宫_preview/my_heart_will_go_no2宵宫.WAV">my heart will go on</a>, </td> </tr> <tr> <td><a href="https://huggingface.co/datasets/jiaheillu/sovits_audio_preview/blob/main/刻晴_preview/README.md">刻晴</a></td> <td><audio src="https://huggingface.co/datasets/jiaheillu/sovits_audio_preview/resolve/main/刻晴_preview/原_keqing2.wav" controls="controls"></audio></td> <td><audio src="https://huggingface.co/datasets/jiaheillu/sovits_audio_preview/resolve/main/刻晴_preview/待_xiaogong3.wav" controls="controls"></audio></td> <td><audio src="https://huggingface.co/datasets/jiaheillu/sovits_audio_preview/resolve/main/刻晴_preview/已_xiaogong2keqing.wav" controls="controls"></audio></td> <td> <a href="https://huggingface.co/datasets/jiaheillu/sovits_audio_preview/resolve/main/刻晴_preview/嚣张2刻晴.WAV">嚣张</a>, <a href="https://huggingface.co/datasets/jiaheillu/sovits_audio_preview/resolve/main/刻晴_preview/ファティマ2刻晴.WAV">ファティマ</a>, <a href="https://huggingface.co/datasets/jiaheillu/sovits_audio_preview/resolve/main/刻晴_preview/hero2刻晴.WAV">hero</a>, </td> </tr> <tr> <td><a href="https://huggingface.co/datasets/jiaheillu/sovits_audio_preview/blob/main/可莉_preview/README.md">可莉</a></td> <td><audio src="https://huggingface.co/datasets/jiaheillu/sovits_audio_preview/resolve/main/可莉_preview/原_keli.wav" controls="controls"></audio></td> <td><audio src="https://huggingface.co/datasets/jiaheillu/sovits_audio_preview/resolve/main/可莉_preview/待_ying.wav" controls="controls"></audio></td> <td><audio src="https://huggingface.co/datasets/jiaheillu/sovits_audio_preview/resolve/main/可莉_preview/已_ying2keli.wav" controls="controls"></audio></td> <td> <a href="https://huggingface.co/datasets/jiaheillu/sovits_audio_preview/resolve/main/可莉_preview/樱花草2可莉.WAV">樱花草</a>, <a href="https://huggingface.co/datasets/jiaheillu/sovits_audio_preview/resolve/main/可莉_preview/夢をかなえてドラえもん2可莉.WAV">夢をかなえてドラえもん</a>, <a href="https://huggingface.co/datasets/jiaheillu/sovits_audio_preview/resolve/main/可莉_preview/sun_shine2可莉.WAV">sun_shine</a>, </td> </tr> <tr> <td><a href="https://huggingface.co/datasets/jiaheillu/sovits_audio_preview/blob/main/鹿野院平藏_preview/README.md">鹿野院平藏</a></td> <td><audio src="https://huggingface.co/datasets/jiaheillu/sovits_audio_preview/resolve/main/鹿野院平藏_preview/原_pingzang.wav" controls="controls"></audio></td> <td><audio 
src="https://huggingface.co/datasets/jiaheillu/sovits_audio_preview/resolve/main/鹿野院平藏_preview/待_shenzi.wav" controls="controls"></audio></td> <td><audio src="https://huggingface.co/datasets/jiaheillu/sovits_audio_preview/resolve/main/鹿野院平藏_preview/已_shenzi2pingzang.wav" controls="controls"></audio></td> <td> <a href="https://huggingface.co/datasets/jiaheillu/sovits_audio_preview/resolve/main/鹿野院平藏_preview/风继续吹2pingng.WAV">风继续吹</a>, <a href="https://huggingface.co/datasets/jiaheillu/sovits_audio_preview/resolve/main/鹿野院平藏_preview/小さな恋の歌2pingzang.WAV">小さな恋の歌</a>, <a href="https://huggingface.co/datasets/jiaheillu/sovits_audio_preview/resolve/main/鹿野院平藏_preview/love_yourself2pingzang.WAV">love_yourself</a>, </td> </tr> <tr> <td><a href="https://huggingface.co/datasets/jiaheillu/sovits_audio_preview/blob/main/imallryt_preview/README.md">imallryt</a></td> <td><audio src="https://huggingface.co/datasets/jiaheillu/sovits_audio_preview/resolve/main/imallryt_preview/%E5%8E%9F_IVOL_1%20Care_DRY_120_Am_Main_Vocal.wav" controls="controls"></audio></td> <td><audio src="https://huggingface.co/datasets/jiaheillu/sovits_audio_preview/resolve/main/imallryt_preview/%E5%BE%85_Lead_A%20minor_DRY.wav" controls="controls"></audio></td> <td><audio src="https://huggingface.co/datasets/jiaheillu/sovits_audio_preview/resolve/main/imallryt_preview/%E5%B7%B2_Lead_A%20minor_DRYwav_0key_imallryt_0.5.wav" controls="controls"></audio></td> <td> <a href="https://huggingface.co/datasets/jiaheillu/sovits_audio_preview/resolve/main/imallryt_preview/海阔天空2imallryt.WAV">海阔天空</a>, </td> </tr> <tr> <td><a href="https://huggingface.co/datasets/jiaheillu/sovits_audio_preview/blob/main/kagami_preview/README.md">kagami</a></td> <td><audio src="https://huggingface.co/datasets/jiaheillu/sovits_audio_preview/resolve/main/kagami_preview/原_kagami.wav" controls="controls"></audio></td> <td><audio src="https://huggingface.co/datasets/jiaheillu/sovits_audio_preview/resolve/main/kagami_preview/待_wendi.wav" controls="controls"></audio></td> <td><audio src="https://huggingface.co/datasets/jiaheillu/sovits_audio_preview/resolve/main/kagami_preview/已_windi2kagami.wav" controls="controls"></audio></td> <td> <a href="https://huggingface.co/datasets/jiaheillu/sovits_audio_preview/resolve/main/kagami_preview/えるの侵蝕_Vocals.wav_-4key_kagami_0.5.flac">えるの侵蝕</a>, </td> </tr> </tbody> </table> </div> 关键参数:</br> audio duration:训练集总时长</br> epoch: 轮数</br> 其余:</br> batch_size = 一个step训练的片段数<br> segments = 音频被切分的片段<br> step=segments*epoch/batch_size,即模型文件后面数字由来<br> 以散兵为例:</br> 损失函数图像:主要看step 与 loss5,比如:<br> 给一个大致的参考,待转换音频都为高音女生,这是较为刁钻的测试:如图,10min纯净人声, 差不多2800epoch(10000step)就已经出结果了,实际使用的是5571epoch(19500step)的文件,被训练音色和原音色相差几 何,请听上方预览音频。正常训练,10min是较为不足的训练集时长。<br> [点我查看相关文件](https://huggingface.co/datasets/jiaheillu/audio_preview/tree/main)<br> ![sanbing_loss](https://huggingface.co/datasets/jiaheillu/audio_preview/resolve/main/%E6%95%A3%E5%85%B5%E6%95%88%E6%9E%9C%E9%A2%84%E8%A7%88/%E8%AE%AD%E7%BB%83%E5%8F%82%E6%95%B0%E9%80%9F%E8%A7%88.assets/sanbing_loss.png)
This is a text2video model for diffusers, fine-tuned from [modelscope](https://huggingface.co/damo-vilab/text-to-video-ms-1.7b) to produce an anime-style appearance. It was trained at 384x384 resolution and still often generates unstable content. Usage is the same as with the original modelscope model. Example outputs are [here](https://imgur.com/a/sCwmKG1).
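A minimal sketch of that usage, mirroring the standard Diffusers text-to-video flow for the original ModelScope model; the repository ID below is a placeholder for this fine-tune:

```python
import torch
from diffusers import DiffusionPipeline
from diffusers.utils import export_to_video

# "user/anime-text2video" is a placeholder for this fine-tune's repo ID.
pipe = DiffusionPipeline.from_pretrained("user/anime-text2video", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

frames = pipe("an anime girl walking through a forest", num_inference_steps=25).frames
export_to_video(frames, "output.mp4")
```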
# AutoTrain Dataset for project: roulette-prediction-next-sequence ## Dataset Description This dataset has been automatically processed by AutoTrain for project roulette-prediction-next-sequence. ### Languages The BCP-47 code for the dataset's language is en. ## Dataset Structure ### Data Instances A sample from this dataset looks as follows: ```json [ { "text": "1", "target": 8 }, { "text": "2", "target": 8 } ] ``` ### Dataset Fields The dataset has the following fields (also called "features"): ```json { "text": "Value(dtype='string', id=None)", "target": "ClassLabel(names=['0', '11', '12', '20', '22', '23', '25', '26', '27', '28', '3', '31', '35', '36', '4', '7'], id=None)" } ``` ### Dataset Splits This dataset is split into a train and validation split. The split sizes are as follows: | Split name | Num samples | | ------------ | ------------------- | | train | 15 | | valid | 7 |
# Dataset Card for Dataset Name ## Dataset Description - **Homepage:** - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1). ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions [More Information Needed]
Reference: Ponnarassery, Sreeja (2017), “Poem Emotion Recognition Corpus (PERC)”, Mendeley Data, V1, doi: 10.17632/n9vbc8g9cx.1
The dataset was translated into Polish using this model: "gsarti/opus-mt-tc-en-pl" ### How to use ```python from datasets import load_dataset dataset = load_dataset("Aspik101/translated_polish_alpaca") ```
# AutoTrain Dataset for project: arp_summ_1 ## Dataset Description This dataset has been automatically processed by AutoTrain for project arp_summ_1. ### Languages The BCP-47 code for the dataset's language is unk. ## Dataset Structure ### Data Instances A sample from this dataset looks as follows: ```json [ { "text": " eat , grass , horse ", "target": " The old horse ate grass all day. " }, { "text": " lay , dog , rug ", "target": " Brown dog chews on bone while laying on the rug. " } ] ``` ### Dataset Fields The dataset has the following fields (also called "features"): ```json { "text": "Value(dtype='string', id=None)", "target": "Value(dtype='string', id=None)" } ``` ### Dataset Splits This dataset is split into a train and validation split. The split sizes are as follows: | Split name | Num samples | | ------------ | ------------------- | | train | 197 | | valid | 50 |
09/04/2023 update: new instructions added from https://github.com/Instruction-Tuning-with-GPT-4/GPT-4-LLM. Original version: https://github.com/tatsu-lab/stanford_alpaca#data-release This is an AI-based translation of the cleaned Stanford Alpaca dataset from English to Turkish. For academic use only; please cite before using it. Taşar, D. E. T. (2023). stanford-alpaca-cleaned-turkish-translated [Dataset]. In Stanford Alpaca TR (1.0.1.a). https://huggingface.co/datasets/emre/stanford-alpaca-cleaned-turkish-translated ### Citation Please cite the repo if you use the data or code in this repo. ``` @misc{tasar-2023-alpaca-tr, author = {Taşar, Davut Emre}, title = {stanford-alpaca-cleaned-turkish-translated}, year = {2023}, publisher = {Huggingface}, journal = {Huggingface repository}, howpublished = {\url{https://huggingface.co/datasets/emre/stanford-alpaca-cleaned-turkish-translated}}, } ```
# Dataset Card for XNLI Parallel Corpus ## Dataset Description - **Homepage:** - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary ### Supported Tasks and Leaderboards Binary mode classification (spoken vs written) ### Languages - English - German - French ## Dataset Structure ### Data Instances { 'text': "And he said , Mama , I 'm home .", 'label': 0 } ### Data Fields - text: sentence - label: binary label of text (0: spoken 1: written) ### Data Splits - train: 830 - test: 1669 ### Other Statistics #### Vocabulary Size - English - train: 4363 - test: 7128 - German - train: 5070 - test: 8601 - French - train: 4881 - test: 7935 #### Average Sentence Length - English - train: 20.689156626506023 - test: 20.75254643499101 - German - train: 20.367469879518072 - test: 20.639904134212102 - French - train: 23.455421686746988 - test: 23.731575793888556 #### Label Split - train: - 0: 166 - 1: 664 - test: - 0: 334 - 1: 1335 #### Out-of-vocabulary words in model - English - BERT (bert-base-uncased) - train: 800 - test: 1638 - mBERT (bert-base-multilingual-uncased) - train: 1347 - test: 2693 - German BERT (bert-base-german-dbmdz-uncased) - train: 3228 - test: 5581 - flauBERT (flaubert-base-uncased) - train: 4363 - test: 7128 - German - BERT (bert-base-uncased) - train: 4285 - test: 7387 - mBERT (bert-base-multilingual-uncased) - train: 3126 - test: 5863 - German BERT (bert-base-german-dbmdz-uncased) - train: 2033 - test: 3938 - flauBERT (flaubert-base-uncased) - train: 5069 - test: 8600 - French - BERT (bert-base-uncased) - train: 3784 - test: 6289 - mBERT (bert-base-multilingual-uncased) - train: 2847 - test: 5084 - German BERT (bert-base-german-dbmdz-uncased) - train: 4212 - test: 6964 - flauBERT (flaubert-base-uncased) - train: 4881 - test: 7935 ## Dataset Creation ### Curation Rationale N/A ### Source Data https://github.com/facebookresearch/XNLI Here is the citation for the original XNLI paper. ``` @InProceedings{conneau2018xnli, author = "Conneau, Alexis and Rinott, Ruty and Lample, Guillaume and Williams, Adina and Bowman, Samuel R. and Schwenk, Holger and Stoyanov, Veselin", title = "XNLI: Evaluating Cross-lingual Sentence Representations", booktitle = "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing", year = "2018", publisher = "Association for Computational Linguistics", location = "Brussels, Belgium", } ``` #### Initial Data Collection and Normalization N/A #### Who are the source language producers? N/A ### Annotations #### Annotation process N/A #### Who are the annotators? N/A ### Personal and Sensitive Information N/A ## Considerations for Using the Data ### Social Impact of Dataset N/A ### Discussion of Biases N/A ### Other Known Limitations N/A ## Additional Information ### Dataset Curators N/A ### Licensing Information N/A ### Citation Information ### Contributions N/A
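The card does not say how the out-of-vocabulary counts above were computed; one plausible reading is counting whole words from each split's vocabulary that are missing from a model's subword vocabulary, sketched here with an invented toy vocabulary:

```python
from transformers import AutoTokenizer

def oov_count(vocab_words, model_name):
    # Words absent from the model's subword vocabulary count as OOV here;
    # this heuristic is an assumption, not the card's documented method.
    model_vocab = AutoTokenizer.from_pretrained(model_name).get_vocab()
    return sum(1 for word in vocab_words if word.lower() not in model_vocab)

train_vocab = {"and", "he", "said", "mama", "home", "qwertyish"}  # toy vocabulary
print(oov_count(train_vocab, "bert-base-uncased"))  # -> 1
```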
# Mtet - Num examples: - 5,072 (test) - 6,212 (validation) - Language: English, Vietnamese ## Prompts "Translate the following sentence into <target>: ", "What is the <target> translation for: ", "What is the <target> equivalent of: ", "What does the following sentence means in <target>: ", "Interpret the following sentence into <target>: ", "What is the <target> interpretation for: ", "The <target> translation of the following sentence: ", "What is the <target> meaning of the following sentence: ", "What is the <target> meaning of this sentence: ", "Please translate the following sentence to <target>: " "Dịch câu sau sang tiếng <target>: ", "Nghĩa tiếng <target> của câu sau: ", "Dịch câu tiếng <from> sau sang tiếng <target>: ", "Thông dịch câu tiếng <from> sau tiếng <target>: ", "Chuyển câu tiếng <from> sang tiếng <target>: ", "Chuyển nghĩa câu tiếng <from> sang tiếng <target>: ", "Câu tiếng <from> có nghĩa là gì trong tiếng <target>: ", "Câu sau có nghĩa tiếng <target> là gì: ", "Hãy dịch câu sau sang tiếng <target>: ", "Giải thích nghĩa câu sau sang tiếng <target>: ", "Giải thích nghĩa câu tiếng <from> sang tiếng <target>"
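A small sketch of instantiating these templates, substituting the `<target>` and `<from>` placeholders named above (the example sentences are invented):

```python
# English-template example.
template_en = "Translate the following sentence into <target>: "
print(template_en.replace("<target>", "Vietnamese") + "Hello, how are you?")

# Vietnamese-template example with both placeholders.
template_vi = "Dịch câu tiếng <from> sau sang tiếng <target>: "
print(template_vi.replace("<from>", "Anh").replace("<target>", "Việt") + "Hello!")
```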
This dataset is built from [this repo](https://github.com/janelleshane/DnD_bios) and contains 2322 character bios.
# Ukrainian StackExchange Dataset This repository contains a dataset collected from the Ukrainian StackExchange website; the data was parsed on 02/04/2023. The dataset is in JSON format and includes text data parsed from https://ukrainian.stackexchange.com/. ## Dataset Description The Ukrainian StackExchange Dataset is a rich source of text data for natural language processing, machine learning, and data mining tasks in the Ukrainian language. The dataset contains questions, answers, comments, and associated metadata from the website. The data has been collected from the following sections of the Ukrainian StackExchange website: Questions, Answers, Tags, and Votes. ## Structure The dataset is structured as follows: - `question`: Contains information about the question - `id`: Unique identifier for the question - `title`: Title of the question - `text`: Text content of the question - `upvoteCount`: The net upvote count of the question - `tags`: List of tags associated with the question - `answers`: A list of answers for the question, each containing: - `id`: Unique identifier for the answer - `text`: Text content of the answer - `upvoteCount`: The net upvote count of the answer - `isAccepted`: A boolean value indicating whether the answer is accepted - `url`: The URL of the question on the Ukrainian StackExchange website ## Usage To use this dataset in your project, you can load it from the Hugging Face Hub: ```python from datasets import load_dataset dataset = load_dataset("zeusfsx/ukrainian-stackexchange") ``` ## License This dataset is provided under the Creative Commons Attribution-ShareAlike 4.0 International (CC BY-SA 4.0) License.
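Beyond loading, the `isAccepted` flag makes it easy to pair each question with its accepted answer. A sketch, assuming the nested JSON structure documented above survives loading unchanged:

```python
from datasets import load_dataset

ds = load_dataset("zeusfsx/ukrainian-stackexchange", split="train")  # split name assumed

# Pair each question title with the text of its accepted answer, if any.
pairs = [
    (row["question"]["title"], answer["text"])
    for row in ds
    for answer in row["answers"]
    if answer["isAccepted"]
]
print(len(pairs))
```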
# Dataset Card for "MULTI_VALUE_wnli_reduced_relative" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Description - **BioStars Homepage:** https://www.biostars.org/ - **BioStars Paper:** https://doi.org/10.1371/journal.pcbi.1002216 - **Code Repository (This Dataset):** https://github.com/cannin/biostars_qa ### Dataset Summary This dataset contains 4803 question/answer pairs extracted from the [BioStars](https://www.biostars.org/) website. The site focuses on bioinformatics, computational genomics, and biological data analysis. ## Dataset Structure ### Data Fields The data contains INSTRUCTION, RESPONSE, SOURCE, and METADATA fields. The format follows the one described for [LAION-AI/Open-Assistant](https://github.com/LAION-AI/Open-Assistant/blob/main/data/datasets/README.md) ## Dataset Creation ### Curation Rationale Pairs were included if the question had an accepted answer and at least 1 vote. ### Source Data Data was collected using the [Biostars API](https://www.biostars.org/info/api/) ## Additional Information ### Dataset Curators [@cannin](https://github.com/cannin). @cannin has no affiliation with the BioStars project. ### Licensing Information Apache-2.0 ### Citation Information #### BioStars Project Cite the original project: https://doi.org/10.1371/journal.pcbi.1002216 #### This Dataset Citation for this dataset: ``` @misc{Luna2023a, author = {Augustin Luna}, title = {biostars_qa Dataset}, year = {2023}, howpublished = {\url{https://huggingface.co/datasets/cannin/biostars_qa}} } ``` #### This Dataset Code Citation for the code to generate this dataset: ``` @misc{Luna2023b, author = {Augustin Luna}, title = {biostars_qa Code}, year = {2023}, howpublished = {\url{https://github.com/cannin/biostars_qa}} } ```
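A minimal sketch of turning the INSTRUCTION/RESPONSE fields into prompt/completion pairs; the `train` split name is an assumption:

```python
from datasets import load_dataset

ds = load_dataset("cannin/biostars_qa", split="train")  # split name assumed

# Map the Open-Assistant-style fields to a generic prompt/completion format.
pairs = [{"prompt": row["INSTRUCTION"], "completion": row["RESPONSE"]} for row in ds]
print(pairs[0]["prompt"][:80])
```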
# Dataset Card for CNNovel125K *The BigKnow2022 dataset and its subsets are not yet complete. Not all information here may be accurate or accessible.* ## Dataset Description - **Homepage:** (TODO) - **Repository:** <https://github.com/RyokoAI/BigKnow2022> - **Paper:** N/A - **Leaderboard:** N/A - **Point of Contact:** Ronsor/undeleted <ronsor@ronsor.com> ### Dataset Summary CNNovel125K is a dataset composed of approximately 125,000 novels downloaded from the Chinese novel hosting site <http://ibiquw.com>. ### Supported Tasks and Leaderboards This dataset is primarily intended for unsupervised training of text generation models; however, it may be useful for other purposes. * text-classification * text-generation ### Languages * Simplified Chinese ## Dataset Structure ### Data Instances ```json { "text": "\n------------\n\n全部章节\n\n\n------------\n\n第一章 她肯定做梦呢!\n\n HT国际大酒店总统套房。\n\n 清晨的第一缕阳光照射进圣地亚哥地板上,洒落在凌乱的床单上,突然地,床上睡的正熟的人睁开眼睛, 猛然惊醒!\n\n ...", "meta": { "subset": "cnnovel.ibiquw", "id": "100067", "q": 0.9, "lang": "zh_cn", "title": "为爱入局:嫁给秦先生", "author": "奥德萨" } } { "text": "\n------------\n\n全部章节\n\n\n------------\n\n第1章:出狱就大婚\n\n 凉城第一监狱,大门缓缓打开,秦峰仰起头,贪婪的呼吸了一口空气。\n\n 三年了,终于又闻到了自由的味道。\n\n 他回过头,看着目 送他出来的那群人道:...", "meta": { "subset": "cnnovel.ibiquw", "id": "100059", "q": 0.9, "lang": "zh_cn", "title": "绝世弃婿", "author": "绷带怪" } } ``` ### Data Fields * `text`: the actual novel text, all chapters * `meta`: entry metadata * `subset`: dataset tag: `cnnovel.ibiquw` * `id`: novel ID * `q`: quality score, fixed at 0.9 * `lang`: always `zh_cn` (Simplified Chinese) * `title`: novel title * `author`: novel author ### Data Splits No splitting of the data was performed. ## Dataset Creation ### Curation Rationale TODO ### Source Data #### Initial Data Collection and Normalization TODO #### Who are the source language producers? The authors of each novel. ### Annotations #### Annotation process Titles were collected alongside the novel text and IDs. #### Who are the annotators? There were no human annotators. ### Personal and Sensitive Information The dataset contains only works of fiction, and we do not believe it contains any PII. ## Considerations for Using the Data ### Social Impact of Dataset This dataset is intended to be useful for anyone who wishes to train a model to generate "more entertaining" content in Chinese. It may also be useful for other languages depending on your language model. ### Discussion of Biases This dataset is composed of fictional works by various authors. Because of this fact, the contents of this dataset will reflect the biases of those authors. Beware of stereotypes. ### Other Known Limitations N/A ## Additional Information ### Dataset Curators Ronsor Labs ### Licensing Information Apache 2.0, for all parts of which Ronsor Labs or the Ryoko AI Production Committee may be considered authors. All other material is distributed under fair use principles. ### Citation Information ``` @misc{ryokoai2023-bigknow2022, title = {BigKnow2022: Bringing Language Models Up to Speed}, author = {Ronsor}, year = {2023}, howpublished = {\url{https://github.com/RyokoAI/BigKnow2022}}, } ``` ### Contributions Thanks to @ronsor (GH) for gathering this dataset.
# Dataset Card for "DiagTrast" ## Table of Content - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Team members](#team-members) ## Dataset Description ### Dataset Summary For the creation of this dataset, ChatGPT-4 was used to generate statements based on the characteristics of some of the mental disorders described in the "Manual Diagnóstico y Estadístico de Trastornos Mentales (DSM-5)". The mental disorders included are: - Narcissistic personality disorder. - Histrionic personality disorder. - Borderline personality disorder. - Antisocial personality disorder. - Schizotypal personality disorder. ### Supported Tasks and Leaderboards - text-classification: The dataset can be used to train a model for text classification, which consists in assigning a label or class to a given text. Some use cases are sentiment analysis, natural language inference, and assessing grammatical correctness. Success on this task is typically measured by achieving a high/low accuracy. ### Languages This dataset of statements is in Spanish only. ## Dataset Structure ### Data Instances A typical instance in the dataset comprises a statement describing one or more symptoms of a disorder, the name of the disorder, a sequential numerical id representing the disorder, and the clean text of the initial statement (i.e. free of punctuation marks and connectors). The following is a JSON-formatted example of a typical case in this dataset: ``` { 'Sintoma': "Su comportamiento es a menudo extraño y excéntrico, como llevar ropa que no coincide o actuar de una manera inapropiada en situaciones sociales.", 'Padecimiento': "Trastornos de la personalidad esquizotípica", 'Padecimiento_cat': 2, 'Sintoma_limpia ': "comportamiento menudo extraño excentrico llevar ropa coincide actuar manera inapropiada situaciones sociales" } ``` ### Data Fields - `Sintoma`: a string, representing a paragraph that a professional would enter describing the symptoms identified in a patient. - `Padecimiento`: a string that indicates the disorder according to DSM-5. - `Padecimiento_cat`: an integer representing the `Padecimiento` field, this field can be used as a label in a text-classification model. - `Sintoma_Limpia`: a string, this field is the clean text of the `Sintoma` field. For the text-classification task, is advisable to use this field instead of the "Padecimiento" field to reduce the noise that punctuation marks, articles and connectors generate in the models. 
### Data Splits The data were not split into training and test subsets; instead there is a single set with the following distribution: | Disorder | Records | | - | - | | Narcissistic personality disorder| 250 | | Histrionic personality disorder | 250 | | Borderline personality disorder | 358 | | Antisocial personality disorder | 250 | | Schizotypal personality disorder | 225 | ## Dataset Creation ### Curation Rationale It was decided to create this dataset because there is an extensive manual, the DSM-5, which details the characteristics that must be present in a patient to diagnose a mental disorder. Some disorders share characteristics while also having their differences; for this reason we sought to classify, according to the DSM-5, statements that contain symptoms and characteristics identified by health professionals. ### Source Data Data was generated using ChatGPT: we first entered the symptoms specified in the DSM-5 and asked it to create statements containing one or more characteristics without mentioning the name of the disorder. When the statements are generated, a quick check is made to ensure that they are of the minimum expected quality, i.e., that they do not include the name of the disorder, that they are not too long or too short, and above all that they specifically contain the characteristics that were entered. ### Annotations #### Annotation process The data was generated separately for each mental disorder, so when we obtained the statements we also knew which label corresponded to each one, and no manual or automated annotation was necessary. ## Considerations for Using the Data ### Social Impact of Dataset We hope that, through models trained on this or a similar dataset, we can help reduce diagnosis times for mental disorders and increase the number of patients who can be seen and treated. On the other hand, we must consider the importance of using these technologies properly: if these models are used indiscriminately by people without sufficient knowledge or experience to detect unusual behaviors, they could negatively influence people by making them believe they have a disorder. ### Discussion of Biases It should not be forgotten that these data were artificially generated, so trained models may expect different inputs than a real mental health professional would produce. To mitigate this bias, the team closely verified the data generation process, which evolved as better prompts were identified, filtering the statements and feeding back to the model until the desired quality was reached. ### Other Known Limitations We have only generated data for 5 of the disorders described in the DSM-5. ## Team members - [Alberto Martín Garrido](https://huggingface.co/Stremie) - [Edgar Mencia](https://huggingface.co/edmenciab) - [Miguel Ángel Solís Orozco](https://huggingface.co/homosapienssapiens) - [Jose Carlos Vílchez Villegas](https://huggingface.co/JCarlos)
### Dataset Summary This is a Spanish-language dataset extracted from the documentation center of the Fundación Secretariado Gitano, presenting different discriminatory situations experienced by the Roma people. Since the goal of the model is to build a system that generates interventions to minimize the impact of a discriminatory situation, the site was scraped and every PDF containing discrimination cases in the format (FACTS, INTERVENTION, RESULT) was extracted. To extract the information, the page was scraped; the dataset was then cleaned and unified with a preprocessing script so that everything shared the same format. ### Supported Tasks and Leaderboards - `task-generation`: given the facts, generate the intervention and the result label, providing methods to carry out an effective intervention. ([PAG-BERT](https://huggingface.co/hackathon-somos-nlp-2023/PAG-BERT)) - `task-classification`: a classification model can also be trained; we leave it to users to predict the type of discrimination from the facts. ### Language The dataset uses the Spain variant of Spanish; the style is formal and objective, limited to describing the facts reported by the affected persons. ## Dataset Structure ### Data Instances Below is a sample instance from the dataset: ``` { 'sintetico': '0', 'text': 'Una joven gitana comenzó a trabajar en una tienda de ropa, hace dos años, con contrato indefinido. Al mes de comenzar a trabajar, una compañera le preguntó, en presencia de su encargada, si era gitana, ella respondió que sí; desde entonces el trato de la encargada hacia la joven cambió, comenzó a tirar al suelo perchas, tierra, para luego acusarla de que no limpiaba el suelo, además de hacer continuamente comentarios generalizados refiriéndose a las mujeres gitanas, del tipo “¿Pero te dejan trabajar?” “¿Y estudiar?”, “tú tienes que saber cómo trabajar en la tienda porque como aprendéis en los mercadillos...” La víctima comentó que desde que la encargada se enteró de que era gitana le hizo la vida imposible, se sintió muy humillada. No aguantó más y presentó la baja voluntaria, aun siendo consciente de que perdía su derecho a la prestación por desempleo.', 'intervencion': 'Se entrevistó a la joven. Se comprobó a través del testimonio de la víctima que desde que su encargada se enteró de que es mujer gitana, al mes de comenzar a trabajar aproximadamente, comenzó a sufrir discriminación. Se informó a la víctima del Servicio, del trabajo que realizamos y de sus derechos.\xa0', 'tipo_discriminacion': 'Discriminación directa', 'resultado': 'Negativo.' } ``` ### Data Fields - `sintetico`: indicates whether the intervention and result data are original, i.e. come from the "Fundación Secretariado Gitano" source (value 0), or whether we generated them synthetically (value 1). - `text`: sets out the facts described by the affected person. - `intervencion`: presents the measures the Fundación took to prevent the facts described in `text` from recurring. - `tipo_discriminacion`: label identifying the type of discrimination. Possible values are **Acoso discriminatorio**, **Discriminación directa**, **Discriminación indirecta**, **Discriminación interseccional**, **Discurso de odio**, **Orden de discriminar**, **Sin especificar**. 
- `resultado`: presents the repercussion of the adopted intervention. Possible values are **Positivo**, **Negativo**, and **Neutro**. ### Data Splits The dataset has a total of 1990 instances, split as follows: | | train | validation | test | |-------------------------|----------:|-------------:|----------:| | Input Sentences | 90% | 5% | 5% | | Average Sentence Length | 94.71 | 90.94 | 98.07 | Note that, with respect to the result of the interventions (positive, negative, or neutral), the dataset is not balanced: there are 280 positive, 939 negative, and 771 neutral samples. In future updates we will work on growing the dataset in a balanced way. ## Dataset Creation ### Curation Rationale This dataset was created to determine, in an objective way, whether the measures currently adopted by the Fundación have had an effect (positive), have had no effect (negative), or did not prompt the user to take any action at all (neutral). This source was chosen for the volume of data it contains across different scenarios, and because every case shares the format FACTS, INTERVENTION, RESULT. ### Source Data The data used to build the model were extracted from the website of the Fundación Secretariado Gitano (<a href="https://informesdiscriminacion.gitanos.org">FSG</a>). The FSG keeps a database of acts of discrimination that have been reported to the organization; these acts were selected to train and evaluate the model. #### Initial Data Collection and Normalization The data were extracted from the <a href="https://informesdiscriminacion.gitanos.org/buscar-casos">case search</a> section, which keeps a record of all discrimination cases. The fields the website provides for this type of report are: * `Hecho`: the act of discrimination. * `Intervención`: the measures the FSG took to resolve the problem. * `Resultado`: a description of the outcome. * Year the case occurred. * Year of the report. * Scope: if the discrimination came from a government body, the fundamental right involved. * Province: where the act took place. * Type of discrimination. For the extraction we only kept the fields **hechos**, **intervención**, **resultados**, and **tipo de discriminación**. The language used in the reports is formal. Originally, a high number of facts had no intervention or result (those fields were empty). #### Data Cleaning On the website, the result field contains a brief explanation of the effects obtained after carrying out the intervention. Using the <a href="https://github.com/pysentimiento/pysentimiento">pysentimiento</a> library, each result was classified as negative, neutral, or positive; the labels were then reviewed and adjusted to what we considered neutral, negative, or positive. 17% of the acts of discrimination in the dataset had neither an intervention nor a result. To complete these fields, few-shot learning with the BLOOM model was applied, so that given some examples of **hechos**, **intervención**, and **resultado**, we could generate **intervenciones** and **resultados** automatically. 
The output of the BLOOM model was reviewed manually to correct errors. 41% of the texts in the **hechos** field were too long to be used with BLOOM under few-shot learning. To solve this, they were summarized: the `segmenter.split_single` function from the <a href="https://github.com/fnl/segtok">segtok</a> library was used to split each text into sentences separated by newline characters, and two pre-trained models were used to summarize each sub-text. The first was <a href="https://huggingface.co/mrm8488/bert2bert_shared-spanish-finetuned-summarization">mrm8488/bert2bert_shared-spanish-finetuned-summarization</a> and the second was <a href="https://huggingface.co/Narrativa/bsc_roberta2roberta_shared-spanish-finetuned-mlsum-summarization">Narrativa/bsc_roberta2roberta_shared-spanish-finetuned-mlsum-summarization</a>. The original preprocessing scripts are available at https://github.com/Frorozcoloa/somos_nlp_hackaton; a copy is also kept in this repository. ### Annotations The annotations performed were checks on the synthetic data generated with few-shot learning (interventions and results): * Null values were filled in. * Some texts (hechos) were summarized with pre-trained models. * The result text was replaced with the labels POS, NEU, NEG. #### Annotation process Argilla was used to label the "Resultado" category with the labels "Positivo", "Negativo", and "Neutro". The goal of the labeling was to annotate the result of the interventions so the model could learn to generate text responding to the situation described by the user and, with the labeled data, predict whether the repercussion of the measure proposed by the model would be "positive" (it would take effect), "negative" (it would have no effect), or "neutral" (the user might take no action at all). Concretely, after downloading all the data available on the website, we preprocessed and merged it into a single dataset that was uploaded to Argilla. There, we validated each instance as follows: * If the intervention and/or result are empty, the instance is annotated as such. * The positive, negative, or neutral result is checked for correctness; most inconsistencies arise between the positive/neutral and negative/neutral pairs. Once the dataset was validated with Argilla, we selected the samples annotated as "empty" in order to complete them, applying few-shot learning with the [BLOOM](https://huggingface.co/bigscience/bloom) model. Some facts in the dataset were too long to be processed by BLOOM (it raised an error indicating we had exceeded the maximum number of tokens); to solve this, we used the models <a href="https://huggingface.co/mrm8488/bert2bert_shared-spanish-finetuned-summarization">mrm8488/bert2bert_shared-spanish-finetuned-summarization</a> and <a href="https://huggingface.co/Narrativa/bsc_roberta2roberta_shared-spanish-finetuned-mlsum-summarization">Narrativa/bsc_roberta2roberta_shared-spanish-finetuned-mlsum-summarization</a> to summarize those facts and reduce their size. 
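A sketch of that shortening step, combining segtok sentence splitting with one of the two cited summarization models; the chunking policy (first ten sentences) and the generic summarization pipeline call are assumptions, not the project's exact script:

```python
from segtok.segmenter import split_single
from transformers import pipeline

# One of the two summarization models cited above, loaded here via the
# generic summarization pipeline for brevity.
summarizer = pipeline(
    "summarization",
    model="mrm8488/bert2bert_shared-spanish-finetuned-summarization",
)

hecho = "Una joven gitana comenzó a trabajar en una tienda de ropa. " * 20  # toy long text
sentences = [s for s in split_single(hecho) if s.strip()]
chunk = " ".join(sentences[:10])  # first-10-sentences chunking is assumed
print(summarizer(chunk, max_length=64)[0]["summary_text"])
```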
### Personal and sensitive information

No anonymisation process was needed in this case, since the data from this source do not contain any information that would violate the rights of the people concerned.

## Considerations for using the data

### Social impact of the dataset

This dataset is intended as a tool for implementing actions that help combat racism towards the Roma population. It could also be used to evaluate the impact of the different measures adopted over a period of time, so that measures with a "negative" or "neutral" impact can be investigated and improved with a more considered treatment of the Roma population.

### Discussion of biases

We carried out an exploratory analysis of the data, building word clouds for the non-synthetic and synthetic portions.

#### Non-synthetic data

<img src="https://huggingface.co/datasets/hackathon-somos-nlp-2023/informes_discriminacion_gitana/resolve/main/imagenes/Hechos_normales.png">

Many of the facts relate to news items, women, housing issues, the police and the family.

<img src="https://huggingface.co/datasets/hackathon-somos-nlp-2023/informes_discriminacion_gitana/resolve/main/imagenes/Intervenci%C3%B3n_normal.png">

The interventions speak of rights, letters, equality, advising the person, and filing complaints.

<img src="https://huggingface.co/datasets/hackathon-somos-nlp-2023/informes_discriminacion_gitana/resolve/main/imagenes/etiqueta_normal.png">

Many of the intervention outcomes were negative or neutral: possibly no response (neutral), or failure to achieve what was proposed (negative). The imbalance in the data is clearly visible.

Using the *pysentimiento* library and the `pysentimiento/pt_hate_speech` model, we computed a hate-speech metric over the `Hecho` field, analysing three scores: hateful, targeted and aggressive. Each score ranges from 0 to 1 and represents the probability that the corresponding characteristic is present in the text. We found the following:

<img src="https://huggingface.co/datasets/hackathon-somos-nlp-2023/informes_discriminacion_gitana/resolve/main/imagenes/hate_normal.png">

<img src="https://huggingface.co/datasets/hackathon-somos-nlp-2023/informes_discriminacion_gitana/resolve/main/imagenes/hate_2_normal.png">

The distributions of the hateful, targeted and aggressive values have a long right tail, which indicates that a hate message is detected in only a few of the facts. For the cases where no intervention and result had been generated, there is an increase at the third quartile, meaning that some of those messages do show hate speech: for example, hateful reaches 0.4, targeted 0.02 and aggressive 0.03. In conclusion, given how the facts are written and how the *pysentimiento* model was trained, the facts in general do not carry a hate message.

#### Synthetic data

The same analysis was carried out for the synthetic data.

<img src="https://huggingface.co/datasets/hackathon-somos-nlp-2023/informes_discriminacion_gitana/resolve/main/imagenes/Hechos_sinteticos.png"/>

Note that the facts themselves were not generated. The dataset is clearly skewed towards containing the words gitano, gitana, comunidad gitana, etnia gitana, familia and discriminación.
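The hateful/targeted/aggressive scores used in this analysis, and in the synthetic-data plots that follow, can be obtained roughly as follows (a sketch assuming pysentimiento's `create_analyzer` interface; the analysis above names the `pysentimiento/pt_hate_speech` model, whereas this snippet loads the library's default Spanish hate-speech analyzer):

```python
# Sketch: hate-speech probabilities for one "hecho" with pysentimiento.
from pysentimiento import create_analyzer

analyzer = create_analyzer(task="hate_speech", lang="es")

result = analyzer.predict("Texto del hecho a analizar.")  # placeholder text
# result.probas maps 'hateful', 'targeted' and 'aggressive'
# to probabilities in [0, 1], the scores plotted in this section.
print(result.probas)
```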
<img src="https://huggingface.co/datasets/hackathon-somos-nlp-2023/informes_discriminacion_gitana/resolve/main/imagenes/Intervenci%C3%B3n_sintetica.png"/>

This part was generated by the *BLOOM* model. It can be seen that with *few-shot* prompting the word captured above all is `derecho` (right).

<img src="https://huggingface.co/datasets/hackathon-somos-nlp-2023/informes_discriminacion_gitana/resolve/main/imagenes/Etiquetas%20sinteticas.png">

There is also an imbalance in the generated labels.

Using the *pysentimiento* library and the `pysentimiento/pt_hate_speech` model, we again computed the hate-speech metric over the `Hecho` field, analysing the hateful, targeted and aggressive scores (each ranging from 0 to 1, the probability that the corresponding characteristic is present in the text). We found the following:

<img src="https://huggingface.co/datasets/hackathon-somos-nlp-2023/informes_discriminacion_gitana/resolve/main/imagenes/hate_sintetico.png">

<img src="https://huggingface.co/datasets/hackathon-somos-nlp-2023/informes_discriminacion_gitana/resolve/main/imagenes/hate_2_sintetico.png">

The distributions of the hateful, targeted and aggressive values again show a long right tail, indicating that a hate message is detected in only a few of the facts. Both the median and the mean of these values are very close to zero, meaning most of the facts contain no hate speech. Moreover, at the third quartile (75% of the data), the hateful score is 0.3, the targeted score 0.0089 and the aggressive score 0.06, reinforcing the conclusion that most of the facts do not carry a hate message in their description.

## Additional information

### Dataset curators

* <a href="https://www.linkedin.com/in/frorozcol/">Fredy Orozco</a>
* <a href="https://www.linkedin.com/in/mariajesusgs">María Jesús García</a>
* <a href="https://www.linkedin.com/in/ramonruedadelgado/">Ramón Rueda</a>
true
true
# Dataset Card for Dataset Name ## Dataset Description - **Homepage:** - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary This is a set of (title, integer category) descriptions taken from The Pirate Bay via [123dw's](https://thepiratebay.org/search.php?q=user:123dw) regular TPB backups. This set represents the titles in release 2023-04. ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] Major category, count * 1, 733604 (audio) * 2, 3557282 (video) * 3, 211288 (applications) * 4, 245684 (games) * 5, 2500830 (porn) * 6, 515778 (other) Is porn?, count - 0, 5263636 - 1, 2500830 ### Data Fields * id - original torrent ID * title - Torrent title * category - Integer ThePirateBay category (see below) * mcat - Integer category / 100 * is_porn - 1 if porn, 0 otherwise ### Categories ``` id,name 100,Audio 101,"Audio: Music" 102,"Audio: Audio books" 103,"Audio: Sound clips" 104,"Audio: FLAC" 199,"Audio: Other" 200,Video 201,"Video: Movies" 202,"Video: Movies DVDR" 203,"Video: Music videos" 204,"Video: Movie clips" 205,"Video: TV shows" 206,"Video: Handheld" 207,"Video: HD - Movies" 208,"Video: HD - TV shows" 209,"Video: 3D" 299,"Video: Other" 300,Applications 301,"Applications: Windows" 302,"Applications: Mac" 303,"Applications: UNIX" 304,"Applications: Handheld" 305,"Applications: IOS (iPad/iPhone)" 306,"Applications: Android" 399,"Applications: Other OS" 400,Games 401,"Games: PC" 402,"Games: Mac" 403,"Games: PSx" 404,"Games: XBOX360" 405,"Games: Wii" 406,"Games: Handheld" 407,"Games: IOS (iPad/iPhone)" 408,"Games: Android" 499,"Games: Other" 500,Porn 501,"Porn: Movies" 502,"Porn: Movies DVDR" 503,"Porn: Pictures" 504,"Porn: Games" 505,"Porn: HD - Movies" 506,"Porn: Movie clips" 599,"Porn: Other" 600,Other 601,"Other: E-books" 602,"Other: Comics" 603,"Other: Pictures" 604,"Other: Covers" 605,"Other: Physibles" 699,"Other: Other" ``` [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions [More Information Needed]
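For reference, the derived fields described above can be checked with a few lines of Python (a sketch; the Hub repository id below is a placeholder for wherever this dataset is hosted):

```python
# Sketch: check the derived fields described above.
from datasets import load_dataset

ds = load_dataset("your-namespace/tpb-titles-2023-04", split="train")  # placeholder id

row = ds[0]
assert row["mcat"] == row["category"] // 100      # mcat is the major category
assert row["is_porn"] == int(row["mcat"] == 5)    # 5xx categories are porn
```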
false
# AutoTrain Dataset for project: syn ## Dataset Description This dataset has been automatically processed by AutoTrain for project syn. ### Languages The BCP-47 code for the dataset's language is it. ## Dataset Structure ### Data Instances A sample from this dataset looks as follows: ```json [ { "tokens": [ "EXPORT", "CAUCEDO", "o", "RIO", "HAINA", "Q2lhbywgYnVvbmdpb3JubyDwn5iKDQoNCk5zIGNsaWVudGUgVklQDQoNCkRFU1RJTkFaSU9ORSBD", ";", "RlMgQ0FVQ0VETyBvcHB1cmUgUklPIEhBSU5BDQoNCkJBTkNBTEkgTk9OIFNPVlJBUFBPTklCSUxJ", ";", "DQpSaXRpcm8gYy9vIEZPTlRFVklWTyAoUFIpDQoNCk1FUkNFIElNTyBjb21lIGRhIGRvY3MgYWxs", ";", "ZWdhdGkNCg0KODAzMTEzOTANCkRHUiArIE5ER1INCg0KDQpOUiBCT1gNCkRJTS4NCkdST1NTDQpO", ";", "RVQNCjIwMjMwMDMxMjk4DQpsIDEyMCBMIDgwIGggNzQNCjI5Myw2NzMNCjI3Nyw2MTQNCjIwMjMw", ";", "MDMxMzAyDQpsIDEyMCBMIDgwIGggMTEwDQozOTksOTg1DQozOTUsMg0KMjAyMzAwMzEzMDMNCmwg", ";", "MTIwIEwgODAgaCA5Mg0KMzM1LDQ0DQozMjUsNzk1DQoyMDIzMDAzMTMwNA0KbCAxMjAgTCA4MCBo", ";", "IDgwDQoyNDksMDY0DQoyMzYsMjExDQoyMDIzMDAzMTMwNw0KbCAxMjAgTCA4MCBoIDEzMQ0KNTA1", ";", "LDM1NQ0KNDgwLDg3DQoyMDIzMDAzMTMwOA0KbCAxMjAgTCA4MCBoIDEzMQ0KNTA3LDgNCjQ4Niw4", ";", "MQ0KMjAyMzAwMzEzMDkNCmwgMTIwIEwgODAgaCAxMjENCjQ0NCwwOA0KNDQyLDgNCjIwMjMwMDMx", ";", "MzEwDQpsIDEyMCBMIDgwIGggMTE3DQo0MDMsNjENCjM3Niw5MTUNCjIwMjMwMDMxMzEyDQpsIDEy", ";", "MCBMIDgwIGggMTA3DQozNzMsNTQNCjM1MCwwNDINCjIwMjMwMDMxMzEzDQpsIDEyMCBMIDgwIGgg", ";", "MTAwDQozNTMsODY1DQozMzAsNTYxDQoyMDIzMDAzMTMxNA0KbCAxMjAgTCA4MCBoIDEyNw0KNDQy", ";", "LDcwMg0KNDI1LDINCjIwMjMwMDMxMzE1DQpsIDEyMCBMIDgwIGggMTI2DQo1MDgsODA2DQo0OTQN", ";", "CjIwMjMwMDMxMzE2DQpsIDEyMCBMIDgwIGggOTENCjE0NSwyOTUNCjEzMiwxMDINCg0KDQoNCg0K", ";", "DQoNCjk0MCAgICBCT1gNCjEzICAgIFBBTExFVA0KDQpUT1QuIFRBUkUNCjE2OSwwMCBLRw0KDQoN", ";", "ClRPVA0KDQo0OTYzLDIxNQ0KNDc1NCwxMg0KDQpHcmF6aWUgbWlsbGUNCkRlbGlhDQoNCg0KKE1y", ";", "cy4pIERlbGlhIE5pY29saW5pDQpJbnNpZGUgU2FsZXMNCltFeHByZXNzIFNybF0NClRvcnJlIGRl", ";", "bCBEaXN0cmlwYXJrIGludC4xMA0KUG9ydG8gZGkgUHLDoCBWb2x0cmkNCjE2MTU4IEdlbm92YSAt", ";", "IElUQUxZDQpQaC4rMzkwMTA4MTc5MTI1DQpNb2IuICszOSAzNDggOTU5NDQ1Mg0KTGlua2VkSW4g", ";", "ZXhwcmVzc19nbG9iYWw8aHR0cHM6Ly93d3cubGlua2VkaW4uY29tL2NvbXBhbnkvZXhwcmVzc19n", ";", "bG9iYWwvPg0KW0V4cHJlc3MgU3JsXQ0KW0V4cHJlc3MgU3JsXQ0KSSBzdW9pIGRhdGkgcGVyc29u", ";", "YWxpIHNhcmFubm8gdHJhdHRhdGkgaW4gY29uZm9ybWl0w6AgYWwgUmVnb2xhbWVudG8gRXVyb3Bl", ";", "byBuLiA2NzkvMjAxNiAoYy5kLiDigJxHRFBS4oCdKSwgaW4gdmlnb3JlIGEgZGVjb3JyZXJlIGRh", ";", "bCAyNSBtYWdnaW8gMjAxOC4gQSBxdWVzdG8gcHJvcG9zaXRvLCBsYSBpbnZpdGlhbW8gYSBwcmVu", ";", "ZGVyZSB2aXNpb25lIGRlbGwnYXBwb3NpdGEg4oCLYXBwb3NpdGEgaW5mb3JtYXRpdmEgcHJpdmFj", ";", "eS4g4oCLTGUgaW5mb3JtYXppb25pIGNvbnRlbnV0ZSBpbiBxdWVzdG8gZG9jdW1lbnRvIHNvbm8g", ";", "c3RyZXR0YW1lbnRlIHJpc2VydmF0ZSBlIGFkIHVzbyBlc2NsdXNpdm8gZGVsIGRlc3RpbmF0YXJp", ";", "by4NCllvdXIgcGVyc29uYWwgZGF0YSB3aWxsIGJlIHByb2Nlc3NlZCBpbiBjb21wbGlhbmNlIHdp", ";", "dGggdGhlIEVVIEdlbmVyYWwgRGF0YSBQcm90ZWN0aW9uIFJlZ3VsYXRpb24gbm8uIDY3OS8yMDE2", ";", "ICjigJxHRFBS4oCdKSwgaW4gZm9yY2Ugc2luY2UgTWF5IDI1LCAyMDE4LiDigItPbiB0aGUgdG9w", ";", "aWMg4oCLcGxlYXNlIHNlZSBvdXIgcHJpdmFjeSBwb2xpY3kuIEFsbCBpbmZvcm1hdGlvbiBpbiB0", ";", "aGlzIGRvY3VtZW50IGlzIGNvbmZpZGVudGlhbCBhbmQgZm9yIGV4Y2x1c2l2ZSB1c2Ugb2YgdGhl", ";", "IGFkZHJlc3NlZSBvbmx5Lg0KDQo=", ";" ], "tags": [ 0, 5, 0, 5, 5, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ] }, { "tokens": [ "R", ":", "Quote", "-", "FCA", "Italy", "to", "Charleston", ",", "USA", "port", 
"QFNhbGVzIEV4cG9ydDxtYWlsdG86c2FsZXNleHBAc2luZXJneWNhcmdvLmNvbT4NCg0KRGE6IEph", ";", "aSBKb25lcyA8ampvbmVzQHJvbWF2LmNvbT4NCkludmlhdG86IHZlbmVyZMOsIDI1IG5vdmVtYnJl", ";", "IDIwMjIgMTc6MDQNCkE6IEFuZHJlYSBPZ25pbyA8YS5vZ25pb0BzaW5lcmd5Y2FyZ28uY29tPg0K", ";", "Q2M6IERhdmlkZSBPbm5pcyA8ZC5vbm5pc0BzaW5lcmd5Y2FyZ28uY29tPg0KT2dnZXR0bzogUXVv", ";", "dGUgLSBGQ0EgSXRhbHkgdG8gQ2hhcmxlc3RvbiwgVVNBIHBvcnQNCg0KSGkgQW5kZWENCg0KUGxl", ";", "YXNlIHF1b3RlIExDTCByYXRlIGJhc2VkIG9uIHRoZSBiZWxvdyBpbmZvcm1hdGlvbiB0byBDaGFy", ";", "bGVzdG9uIFNDLCBDRlMNCg0KLSBGQ0EgLSBFeHBvcnQgZG9jcyB3aWxsIGJlIGRvbmUgYnkgdGhl", ";", "IHN1cHBsaWVyDQoNCi0gR29vZHM6IE5hdHVyYWwgY2FmZmVpbmUgZ3JhbnVsYXINCi0gNyBwYWxs", ";", "ZXRzIDgweDEyMHgxNDUNCi0gR3Jvc3Mgd2VpZ2h0IDIuNDUwIEtnDQoNCi0gUGlja3VwIGxvY2F0", ";", "aW9uOg0KQ09GRkVJTiBDT01QQUdOSUUgSVRBTFkgUy5yLmwuIGEgU29jaW8gVW5pY28NCk9mZmlj", ";", "ZXMvUHJvZHVjdGlvbjogUmVnLiBBYmlzc2luaWEsIExvYy4gTGFnbyBBenp1cnJvLCAxNzAzMSBB", ";", "bGJlbmdhIChTVikg4oCTIEl0YWx5DQoNClBsZWFzZSBhZHZpc2UgY2FycmllciBhbmQgdHJhbnNp", ";", "dCB0aW1lIGV0Yw0KDQpUaGFua3MvUmVnYXJkcw0KDQpKYWkgSm9uZXMNClJvbWF2IFNoaXBwaW5n", ";", "ICYgTG9naXN0aWNzIChVSykgTGltaXRlZA0KNjggU3QuIE1hcmdhcmV0cyBSb2FkDQpFZGd3YXJl", ";", "DQpNaWRkbGVzZXgNCkhBOCA5VVUNClVuaXRlZCBLaW5nZG9tDQpUZWw6ICs0NC0yMDctMDg5LTU2", ";", "MTAgKE9mZmljZSkNClRlbDogKzQ0LTc5NTctMjQyNzcyIChNb2JpbGUpDQpGYXg6ICs0NC0yMDct", ";", "MDg5LTU2MTMNCg0KRElTQ0xBSU1FUjogVGhpcyBlbWFpbCBhbmQgYW55IGZpbGVzIHRyYW5zbWl0", ";", "dGVkIHdpdGggaXQgYXJlIGZvciB0aGUgbmFtZWQgcmVjaXBpZW50IG9ubHkuIFRoaXMgaW5mb3Jt", ";", "YXRpb24gaXMgY29uZmlkZW50aWFsIGFuZCBtYXkgYmUgbGVnYWxseSBwcml2aWxlZ2VkLiBJZiB5", ";", "b3UgYXJlIG5vdCB0aGUgbmFtZWQgcmVjaXBpZW50LCBvciByZXNwb25zaWJsZSBmb3IgZGVsaXZl", ";", "cmluZyB0aGUgbWVzc2FnZSB0byB0aGUgbmFtZWQgcmVjaXBpZW50LCB5b3UgbXVzdCBub3QgZGlz", ";", "Y2xvc2UsIGRpc3RyaWJ1dGUsIGZvcndhcmQsIGNvcHksIHN0b3JlIG9yIHVzZSB0aGlzIGVtYWls", ";", "IG9yIGl0cyBhdHRhY2htZW50cyBpbiBhbnkgZm9ybS4gSWYgeW91IGhhdmUgcmVjZWl2ZWQgdGhp", ";", "cyBjb21tdW5pY2F0aW9uIGluIGVycm9yLCBwbGVhc2UgYWNjZXB0IG91ciBhcG9sb2dpZXMgYW5k", ";", "IHByb21wdGx5IGluZm9ybSB0aGUgc2VuZGVyIGJ5IGVtYWlsIG9yIHRlbGVwaG9uaW5nIHVzIGF0", ";", "ICs0NC0yMC03MDg5LTU2MTAuIFBsZWFzZSBhbHNvIGltbWVkaWF0ZWx5IGRlbGV0ZSB0aGlzIGVt", ";", "YWlsIGFuZCBhbnkgYXR0YWNobWVudHMgZnJvbSB5b3VyIHN5c3RlbXMuIFRoYW5rIHlvdS4gVGhl", ";", "IHZpZXdzIGFuZCBvcGluaW9ucyBleHByZXNzZWQgaW4gdGhpcyBlbWFpbCBhcmUgdGhlIHNlbmRl", ";", "cuKAmXMgb3duIGFuZCBkbyBub3QgbmVjZXNzYXJpbHkgcmVwcmVzZW50IHRob3NlIG9mIFJvbWF2", ";", "IFNoaXBwaW5nICYgTG9naXN0aWNzIChVSykgTGltaXRlZC4gTmVpdGhlciBSb21hdiBTaGlwcGlu", ";", "ZyAmIExvZ2lzdGljcyAoVUspIExpbWl0ZWQgbm9yIHRoZSBzZW5kZXIgd2lsbCBhY2NlcHQgYW55", ";", "IGxpYWJpbGl0eSBmb3IgZXJyb3JzIG9yIG9taXNzaW9ucyBpbiB0aGlzIG1lc3NhZ2UuIEFsdGhv", ";", "dWdoIHdlIGhhdmUgdXNlZCByZWFzb25hYmxlIGVmZm9ydHMgdG8gY2hlY2sgdGhpcyBlbWFpbCBh", ";", "bmQgaXRzIGF0dGFjaG1lbnRzIGZvciB2aXJ1c2VzIG9yIG90aGVyIGNvbnRhbWluYW50cyBiZWZv", ";", "cmUgdHJhbnNtaXNzaW9uLCBpdCBpcyB5b3VyIHJlc3BvbnNpYmlsaXR5IGFzIHJlY2lwaWVudCB0", ";", "byBlbnN1cmUgdGhleSBhcmUgc2FmZSBvbiByZWNlaXB0LiBOZWl0aGVyIFJvbWF2IFNoaXBwaW5n", ";", "ICYgTG9naXN0aWNzIChVSykgTGltaXRlZCBub3IgdGhlIHNlbmRlciB3aWxsIGFjY2VwdCBhbnkg", ";", "cmVzcG9uc2liaWxpdHkgb3IgbGlhYmlsaXR5IGZvciBhbnkgbG9zcyBvciBkYW1hZ2UgYXJpc2lu", ";", "ZyBmcm9tIHRoaXMgZW1haWwgb3IgaXRzIGF0dGFjaG1lbnRzLiBNZXNzYWdlcyB0byBhbmQgZnJv", ";", "bSBSb21hdiBTaGlwcGluZyAmIExvZ2lzdGljcyAoVUspIExpbWl0ZWQgbWF5IGJlIG1vbml0b3Jl", ";", "ZCBpbiBhY2NvcmRhbmNlIHdpdGggUm9tYXYgU2hpcHBpbmcgJiBMb2dpc3RpY3MgKFVLKSBMaW1p", ";", 
"dGVkJ3MgSW50ZXJuZXQgYW5kIEVtYWlsIFBvbGljeSBhbmQgaW4gYWNjb3JkYW5jZSB3aXRoIHRo", ";", "ZSBwcmV2YWlsaW5nIGxhd3MgaW4gdGhlIFVuaXRlZCBLaW5nZG9tLg0KDQoNCg0KDQoNCg==", ";" ], "tags": [ 0, 0, 0, 0, 0, 13, 0, 5, 0, 6, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ] } ] ``` ### Dataset Fields The dataset has the following fields (also called "features"): ```json { "tokens": "Sequence(feature=Value(dtype='string', id=None), length=-1, id=None)", "tags": "Sequence(feature=ClassLabel(names=['O', 'commodity', 'company', 'delivery_cap', 'delivery_location', 'delivery_port', 'delivery_state', 'incoterms', 'measures', 'package_type', 'pickup_cap', 'pickup_location', 'pickup_port', 'pickup_state', 'quantity', 'stackable', 'volume', 'weight'], id=None), length=-1, id=None)" } ``` ### Dataset Splits This dataset is split into a train and validation split. The split sizes are as follow: | Split name | Num samples | | ------------ | ------------------- | | train | 303 | | valid | 155 |
false
# Dataset Card for Dataset Name ## Dataset Description - **Homepage:** - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1). ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions [More Information Needed]
false
false
# Synthetic Dataset for Product Descriptions and Ads

The basic process was as follows:

1. Prompt GPT-4 to create a list of 100 sample clothing items and descriptions for those items.
2. Split the output into the desired format: `{"product" : "<PRODUCT NAME>", "description" : "<DESCRIPTION>"}`
3. Prompt GPT-4 to create adverts for each of the 100 samples based on their name and description.

This data was not cleaned or verified manually.
false
# 🚢 Stanford Human Preferences Dataset (SHP) (Italian Translation)

The Stanford Human Preferences Dataset (SHP) is a collection of responses to questions and instructions in 18 different subject areas, ranging from cooking to legal advice. This version of the dataset is a **partial** Italian translation of the original English dataset. Please note that the quality of the translations has not been verified; however, the dataset may still be useful for training models.

Each example in the dataset consists of a Reddit post that includes a question or instruction, together with a pair of top-level comments. The comments are ranked according to their perceived helpfulness by Reddit users: SHP uses the fact that if comment A has a higher score than comment B despite being written after B, then A can be taken as the preferred response. The preference labels therefore reflect how helpful a response is, not whether it is harmful, which differs from previous work that focused on identifying harmful responses.
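The preference rule can be made concrete with a small sketch (the scores and the flag below are invented, and the parameter names are illustrative rather than the dataset's actual column names):

```python
# Sketch of SHP's preference rule: a higher-scoring comment that was
# posted *later* is labelled as preferred, because its higher score
# cannot be explained by extra exposure time.
def preferred(score_a: int, score_b: int, a_posted_after_b: bool) -> str:
    if score_a > score_b and a_posted_after_b:
        return "A"
    if score_b > score_a and not a_posted_after_b:
        return "B"
    return "undetermined"  # the score difference may just reflect posting time

print(preferred(score_a=120, score_b=35, a_posted_after_b=True))  # -> "A"
```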
false
# Chess Rook VS Pawn

The [Chess Rook VS Pawn dataset](https://archive-beta.ics.uci.edu/dataset/22/chess+king+rook+vs+king+pawn) from the [UCI ML repository](https://archive.ics.uci.edu/ml/datasets).

# Configurations and tasks

| **Configuration** | **Task** | **Description** |
|-------------------|---------------------------|--------------------------|
| chess | Binary classification | Can the white piece win? |

# Usage

```python
from datasets import load_dataset

dataset = load_dataset("mstz/chess_rock_vs_pawn")["train"]
```
false
Armenian Wikipedia, 04.2023 dump.

* 80M tokens
* 296,539 articles
false
0.7M tokens
true
false
false
## Dataset

This FLAN dataset is built for instruction causal language modeling. Text with improper encoding/decoding was additionally cleaned. The dataset includes the Dialog Zero Shot Options task.

## List of Mixtures

We've broken down the Flan Collection into several sub-mixtures. These are "flan" (Flan 2021), "t0" (P3 excluding Flan 2021), "niv2" (Super-Natural Instructions), "cot" (several Chain-of-Thought datasets), and "dialog" (a few new dialog datasets). Each of these comes in multiple varieties of templates: zero-shot prompts with answer options (zsopt), zero-shot prompts with no answer options (zsnoopt), few-shot prompts with answer options (fsopt), and few-shot prompts with no answer options (fsnoopt). For multiple-choice classification tasks, the answer-options varieties indicate whether the set of possible answers is described in the instruction prompt or not.

## General Information

The Flan Collection of datasets and data augmentation methods for instruction tuning is generated using the code in this repository. The Flan Collection compiles datasets from Flan 2021, P3, and Super-Natural Instructions, along with dozens more datasets, into one place, formats them into a mix of zero-shot, few-shot and chain-of-thought templates, then mixes these in proportions that are found to achieve strong results.

## Citation

```bibtex
@article{longpre2023flan,
  title={The Flan Collection: Designing Data and Methods for Effective Instruction Tuning},
  author={Longpre, Shayne and Hou, Le and Vu, Tu and Webson, Albert and Chung, Hyung Won and Tay, Yi and Zhou, Denny and Le, Quoc V and Zoph, Barret and Wei, Jason and others},
  journal={arXiv preprint arXiv:2301.13688},
  year={2023}
}
```
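Assuming the sub-mixture and template variety are exposed as columns (an assumption made for illustration; the repository id and column names below are placeholders, so check this dataset's actual schema), selecting one slice could look like:

```python
# Sketch: keep only zero-shot-with-options examples from the dialog sub-mixture.
from datasets import load_dataset

ds = load_dataset("your-namespace/flan-collection", split="train")  # placeholder id
dialog_zsopt = ds.filter(
    lambda ex: ex["task"] == "dialog" and ex["template_type"] == "zsopt"  # assumed columns
)
print(len(dialog_zsopt))
```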
false
# Dataset Card for Dataset Name ## Dataset Description - **Homepage:** - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** lambdasec@okyasoft.com ### Dataset Summary This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1). ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions [More Information Needed]
false
# Mtet - Num examples: - 5,072 (test) - 6,212 (validation) - Language: English, Vietnamese
false
# Dataset Card for DIALOGSum Corpus

## Dataset Description

### Links

- **Homepage:** https://aclanthology.org/2021.findings-acl.449
- **Repository:** https://github.com/cylnlp/dialogsum
- **Paper:** https://aclanthology.org/2021.findings-acl.449
- **Point of Contact:** https://huggingface.co/knkarthick

### Dataset Summary

DialogSum is a large-scale dialogue summarization dataset, consisting of 13,460 dialogues (plus 100 holdout dialogues for topic generation) with corresponding manually labeled summaries and topics.

### Languages

English

## Dataset Structure

### Data Instances

DialogSum is a large-scale dialogue summarization dataset, consisting of 13,460 dialogues (+1000 tests) split into train, test and validation.

The first instance in the training set:

{'id': 'train_0', 'summary': "Mr. Smith's getting a check-up, and Doctor Hawkins advises him to have one every year. Hawkins'll give some information about their classes and medications to help Mr. Smith quit smoking.", 'dialogue': "#Person1#: Hi, Mr. Smith. I'm Doctor Hawkins. Why are you here today?\n#Person2#: I found it would be a good idea to get a check-up.\n#Person1#: Yes, well, you haven't had one for 5 years. You should have one every year.\n#Person2#: I know. I figure as long as there is nothing wrong, why go see the doctor?\n#Person1#: Well, the best way to avoid serious illnesses is to find out about them early. So try to come at least once a year for your own good.\n#Person2#: Ok.\n#Person1#: Let me see here. Your eyes and ears look fine. Take a deep breath, please. Do you smoke, Mr. Smith?\n#Person2#: Yes.\n#Person1#: Smoking is the leading cause of lung cancer and heart disease, you know. You really should quit.\n#Person2#: I've tried hundreds of times, but I just can't seem to kick the habit.\n#Person1#: Well, we have classes and some medications that might help. I'll give you more information before you leave.\n#Person2#: Ok, thanks doctor.", 'topic': "get a check-up"}

### Data Fields

- dialogue: text of dialogue.
- summary: human written summary of the dialogue.
- topic: human written topic/one liner of the dialogue.
- id: unique file id of an example.

### Data Splits

- train: 12460
- val: 500
- test: 1500
- holdout: 100 [Only 3 features: id, dialogue, topic]

## Dataset Creation

### Curation Rationale

In paper:

We collect dialogue data for DialogSum from three public dialogue corpora, namely Dailydialog (Li et al., 2017), DREAM (Sun et al., 2019) and MuTual (Cui et al., 2019), as well as an English speaking practice website. These datasets contain face-to-face spoken dialogues that cover a wide range of daily-life topics, including schooling, work, medication, shopping, leisure, and travel. Most conversations take place between friends, colleagues, and between service providers and customers.

Compared with previous datasets, dialogues from DialogSum have distinct characteristics:

- Under rich real-life scenarios, including more diverse task-oriented scenarios;
- Have clear communication patterns and intents, which is valuable to serve as summarization sources;
- Have a reasonable length, which suits the purpose of automatic summarization.

We ask annotators to summarize each dialogue based on the following criteria:

- Convey the most salient information;
- Be brief;
- Preserve important named entities within the conversation;
- Be written from an observer perspective;
- Be written in formal language.

### Who are the source language producers?

linguists

### Who are the annotators?
language experts

## Licensing Information

non-commercial licence: MIT

## Citation Information

```
@inproceedings{chen-etal-2021-dialogsum,
    title = "{D}ialog{S}um: {A} Real-Life Scenario Dialogue Summarization Dataset",
    author = "Chen, Yulong and Liu, Yang and Chen, Liang and Zhang, Yue",
    booktitle = "Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021",
    month = aug,
    year = "2021",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2021.findings-acl.449",
    doi = "10.18653/v1/2021.findings-acl.449",
    pages = "5062--5074",
}
```

## Contributions

Thanks to [@cylnlp](https://github.com/cylnlp) for adding this dataset.
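A minimal loading sketch (the repository id is inferred from the point of contact above and may need adjusting):

```python
# Sketch: load DialogSum and inspect one (dialogue, summary, topic) triple.
from datasets import load_dataset

ds = load_dataset("knkarthick/dialogsum")  # repo id assumed from the card's contact
sample = ds["train"][0]
print(sample["topic"])
print(sample["summary"])
print(sample["dialogue"][:200])
```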
false
false
# English Malayalam names

This dataset has 27,814,162 person names, both in English and Malayalam. The source for this dataset is the various electoral rolls published by the Government.

Potential usages:

1. English <-> Malayalam name transliteration tasks
2. Named entity recognition
3. Person name recognition

## License

Creative Commons Attribution Share Alike 4.0

## Contact

Santhosh Thottingal
santhosh.thottingal @ gmail.com
true
# Dataset Card for Dataset Name

## Dataset Description

- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**

### Dataset Summary

This repository contains a French version of the [GQNLI](https://github.com/ruixiangcui/GQNLI) challenge dataset, originally written in English. GQNLI is an evaluation corpus aimed at testing a language model's generalized quantifier reasoning ability.

### Supported Tasks and Leaderboards

[More Information Needed]

### Languages

[More Information Needed]

## Dataset Structure

### Data Instances

[More Information Needed]

### Data Fields

- `uid`: Index number.
- `premise`: The translated premise in the target language.
- `hypothesis`: The translated hypothesis in the target language.
- `label`: The classification label, with possible values 0 (`entailment`), 1 (`neutral`), 2 (`contradiction`).
- `label_text`: The classification label, with possible values `entailment` (0), `neutral` (1), `contradiction` (2).
- `premise_original`: The original premise from the English source dataset.
- `hypothesis_original`: The original hypothesis from the English source dataset.

### Data Splits

| name |entailment|neutral|contradiction|
|-------------|---------:|------:|------------:|
| test | 97 | 100 | 103 |

## Dataset Creation

### Curation Rationale

[More Information Needed]

### Source Data

#### Initial Data Collection and Normalization

[More Information Needed]

#### Who are the source language producers?

[More Information Needed]

### Annotations

#### Annotation process

[More Information Needed]

#### Who are the annotators?

[More Information Needed]

### Personal and Sensitive Information

[More Information Needed]

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

[More Information Needed]

## Additional Information

### Dataset Curators

[More Information Needed]

### Licensing Information

[More Information Needed]

### Citation Information

````BibTeX
@inproceedings{cui-etal-2022-generalized-quantifiers,
    title = "Generalized Quantifiers as a Source of Error in Multilingual {NLU} Benchmarks",
    author = "Cui, Ruixiang and Hershcovich, Daniel and S{\o}gaard, Anders",
    booktitle = "Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
    month = jul,
    year = "2022",
    address = "Seattle, United States",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.naacl-main.359",
    doi = "10.18653/v1/2022.naacl-main.359",
    pages = "4875--4893",
    abstract = "Logical approaches to representing language have developed and evaluated computational models of quantifier words since the 19th century, but today{'}s NLU models still struggle to capture their semantics. We rely on Generalized Quantifier Theory for language-independent representations of the semantics of quantifier words, to quantify their contribution to the errors of NLU models. We find that quantifiers are pervasive in NLU benchmarks, and their occurrence at test time is associated with performance drops. Multilingual models also exhibit unsatisfying quantifier reasoning abilities, but not necessarily worse for non-English languages.
To facilitate directly-targeted probing, we present an adversarial generalized quantifier NLI task (GQNLI) and show that pre-trained language models have a clear lack of robustness in generalized quantifier reasoning.", } ```` ### Contributions [More Information Needed]
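A small sketch for checking the label distribution against the split table above (the repository id is a placeholder; the field names follow the card):

```python
# Sketch: count entailment/neutral/contradiction in the test split.
from collections import Counter
from datasets import load_dataset

ds = load_dataset("your-namespace/gqnli-fr", split="test")  # placeholder repo id
print(Counter(ds["label_text"]))
# Expected per the table above:
# Counter({'contradiction': 103, 'neutral': 100, 'entailment': 97})
```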
false
# xnli_vi - Num examples: - 5,010 (test) - 2,490 (validation) - 392,702 (train) - Language: Vietnamese, English
false
# Dataset Card for Dataset Name ## Dataset Description - **Homepage:** - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1). ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions [More Information Needed]
true
# Northwind Invoices and Related Documents

This dataset contains a collection of invoices and related documents from the Northwind database, a sample database used by Microsoft for demonstrating database functionalities. The invoices include information about the customer, the salesperson, the order date, order ID, product IDs, product names, quantities, unit prices, and total prices. The related documents include shipping documents and stock documents.

This dataset was created by [CHERGUELAINE Ayoub](https://www.linkedin.com/in/ayoub-cherguelaine/) & [BOUBEKRI Faycal](https://www.linkedin.com/in/faycal-boubekri-832848199/) for the purpose of classifying company documents. It can be used for document classification and other NLP tasks.

# Northwind Invoices

This dataset contains a collection of invoices generated from the Northwind database, a sample database that represents a fictional food trading company. The dataset includes invoice information such as the invoice number, customer name, invoice date, product names, quantities, unit prices, and total prices. The data spans a period of several years and contains information about customers from various countries.

# Data

The dataset contains 2144 rows and 9 columns. The columns are as follows:

* Customer Name: The name of the customer who made the purchase.
* Salesperson Name: The name of the salesperson who generated the invoice.
* Order Date: The date the order was placed.
* Order ID: The unique ID of the order.
* ProductID: The unique ID of the product.
* Products: The name of the product.
* Quantities: The quantity of the product ordered.
* UnitPrice: The unit price of the product.
* Total Price: The total price of the product ordered.

# Acknowledgements

The Northwind database was originally created by Microsoft for use in its Access and SQL Server software, and has been widely used as a sample database for educational and demonstration purposes. This dataset was extracted from the database and made available in CSV format for research purposes.

# Citation

If you use this dataset in your research, please cite it as follows:

```bibtex
@misc{northwind_invoices,
  author = {CHERGUELAINE Ayoub and BOUBEKRI Faycal},
  title = {Northwind Invoices},
  year = {2023},
  publisher = {Hugging Face},
  url = {https://huggingface.co/datasets/AyoubChLin/north_wind_invoices},
}
```
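A quick consistency check over the tabular fields listed above (a sketch with pandas; the CSV filename and the exact column headers are assumptions taken from the column list, so adjust them to the actual file):

```python
# Sketch: verify that Total Price equals Quantities * UnitPrice per row.
import pandas as pd

df = pd.read_csv("northwind_invoices.csv")  # assumed filename
diff = (df["Quantities"] * df["UnitPrice"] - df["Total Price"]).abs()
print(f"{(diff > 0.01).sum()} rows with inconsistent totals out of {len(df)}")
```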
false
<img src="https://s3.amazonaws.com/moonup/production/uploads/632eed9e04b24dbdb9eaa6d4/ToFJ26XGVkO2FTJ4dH-yH.png" width="256" height="256">
false
# Open_subtitles - Num examples: - 3,505,276 (train) - Language: English, Vietnamese
true
# Northwind Shipping Orders and Related Documents

This dataset contains a collection of shipping orders and related documents from the Northwind database, a sample database used by Microsoft for demonstrating database functionalities. The shipping orders include information about the ship name, address, region, postal code, country, customer, employee, shipped date, product names, quantities, unit prices, and total prices. The related documents include shipping documents and stock documents.

This dataset was created by [CHERGUELAINE Ayoub](https://www.linkedin.com/in/ayoub-cherguelaine/) & [BOUBEKRI Faycal](https://www.linkedin.com/in/faycal-boubekri-832848199/) for the purpose of classifying company documents. It can be used for document classification and other NLP tasks.

## Northwind Shipping Orders Dataset

# Overview

The Northwind Shipping Orders dataset contains data on shipping orders from the Northwind database. It includes information such as order ID, shipping details, customer information, employee information, shipper information, order dates, and product information.

# Dataset Details

The dataset is stored as PDF files and contains the following columns:

* OrderID: Unique identifier for each order
* Ship Name: Name of the person or company that received the order
* Ship Address: Address where the order was shipped
* Ship City: City where the order was shipped
* Ship Region: Region where the order was shipped
* Ship Postal Code: Postal code of the shipping address
* Ship Country: Country where the order was shipped
* Customer ID: Unique identifier for the customer who placed the order
* Customer Name: Name of the customer who placed the order
* Employee Name: Name of the employee who processed the order
* Shipper ID: Unique identifier for the shipping company
* Shipper Name: Name of the shipping company
* Order Date: Date the order was placed
* Shipped Date: Date the order was shipped
* Product Name: Name of the product
* Quantity: Number of units of the product ordered
* Unit Price: Price per unit of the product
* Total Price: Total price of the order

# Usage

This dataset can be used for a variety of purposes, such as:

* Analyzing sales and order trends
* Identifying popular products
* Identifying popular shipping companies
* Analyzing customer behavior
* Predicting future sales and trends

# Acknowledgements

This dataset was sourced from the Northwind database, which is a sample database used by Microsoft for educational purposes.
false
This dataset was collected from Wikipedia: https://hu.wikipedia.org/wiki/Magyarorsz%C3%A1gon_anyak%C3%B6nyvezhet%C5%91_ut%C3%B3nevek_list%C3%A1ja
false
# Musk

The [Musk dataset](https://archive.ics.uci.edu/ml/datasets/Musk) from the [UCI ML repository](https://archive.ics.uci.edu/ml/datasets). Each instance describes a conformation of a molecule, and the task is to predict whether the molecule is a musk or a non-musk.

# Configurations and tasks

| **Configuration** | **Task** | **Description** |
|-------------------|---------------------------|------------------------|
| musk | Binary classification | Is the molecule a musk?|

# Usage

```python
from datasets import load_dataset

dataset = load_dataset("mstz/muskV2")["train"]
```
false
false
# Dataset Card for "hy_eanc_2023" 5M tokens [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
false
# Word Sense Disambiguation for FLUE

## Dataset Description

- **Homepage:**
- **Repository:**
- **https://arxiv.org/pdf/1905.05677.pdf**
- **Leaderboard:**
- **loic.vial@univ-grenoble-alpes.fr**

### Dataset Summary

This dataset is split into 3 sub-datasets: FrenchSemEval-Task12, French WNGT, and an automatic translation of SemCor.

### Supported Tasks and Leaderboards

Word Sense Disambiguation for French.

### Language

French

### Licensing Information

```
GNU Lesser General Public License
```

### Citation Information

```bibtex
@inproceedings{vial-etal-2019-sense,
    title = "Sense Vocabulary Compression through the Semantic Knowledge of {W}ord{N}et for Neural Word Sense Disambiguation",
    author = {Vial, Lo{\"\i}c and Lecouteux, Benjamin and Schwab, Didier},
    booktitle = "Proceedings of the 10th Global Wordnet Conference",
    month = jul,
    year = "2019",
    address = "Wroclaw, Poland",
    publisher = "Global Wordnet Association",
    url = "https://aclanthology.org/2019.gwc-1.14",
    pages = "108--117",
    abstract = "In this article, we tackle the issue of the limited quantity of manually sense annotated corpora for the task of word sense disambiguation, by exploiting the semantic relationships between senses such as synonymy, hypernymy and hyponymy, in order to compress the sense vocabulary of Princeton WordNet, and thus reduce the number of different sense tags that must be observed to disambiguate all words of the lexical database. We propose two different methods that greatly reduce the size of neural WSD models, with the benefit of improving their coverage without additional training data, and without impacting their precision. In addition to our methods, we present a WSD system which relies on pre-trained BERT word vectors in order to achieve results that significantly outperforms the state of the art on all WSD evaluation tasks.",
}
```

### Contributions

* loic.vial@univ-grenoble-alpes.fr
* benjamin.lecouteux@univ-grenoble-alpes.fr
* didier.schwab@univ-grenoble-alpes.fr
false
false
# FrenchSemEval

## Dataset Description

- **Homepage:**
- **Repository:**
- **https://aclanthology.org/W19-0422.pdf**
- **Leaderboard:**
- **vincent.segonne@univ-grenoble-alpes.fr**

### Dataset Summary

This dataset corresponds to FrenchSemEval, in which verb occurrences were manually annotated with Wiktionary senses.

### Supported Tasks and Leaderboards

Verb Sense Disambiguation for French verbs.

### Language

French

## Dataset Structure

### Data Instances

Each instance of the dataset has the following fields, with the following field types:

```json
{
  "document_id": "d001",
  "sentence_id": "d001.s001",
  "surface_forms": ['Il', 'rend', 'hommage', 'au', 'roi', 'de', 'France', 'et', 'des', 'négociations', 'au', 'traité', 'du', 'Goulet', ',', 'formalisant', 'la', 'paix', 'entre', 'les', 'deux', 'pays', '.'],
  "fine_pos": ['CLS', 'V', 'NC', 'P+D', 'NC', 'P', 'NPP', 'CC', 'DET', 'NC', 'P+D', 'NC', 'P+D', 'NPP', 'PONCT', 'VPR', 'DET', 'NC', 'P', 'DET', 'ADJ', 'NC', 'PONCT'],
  "lemmas": ['il', 'rendre', 'hommage', 'à', 'roi', 'de', 'France', 'et', 'un', 'négociation', 'à', 'traité', 'de', 'Goulet', ',', 'formaliser', 'le', 'paix', 'entre', 'le', 'deux', 'pays', '.'],
  "pos": ['CL', 'V', 'N', 'P+D', 'N', 'P', 'N', 'C', 'D', 'N', 'P+D', 'N', 'P+D', 'N', 'PONCT', 'V', 'D', 'N', 'P', 'D', 'A', 'N', 'PONCT'],
  "instance_surface_forms": ['aboutissent'],
  "instance_fine_pos": ['V'],
  "instance_lemmas": ['aboutir'],
  "instance_pos": ['V']
}
```

### Data Fields

Each sentence has the following fields: **document_id**, **sentence_id**, **surface_forms**, **fine_pos**, **lemmas**, **pos**, **instance_surface_forms**, **instance_fine_pos**, **instance_lemmas**, **instance_pos**.

### Data Splits

No splits provided.

## Dataset Creation

### Source Data

#### Initial Data Collection and Normalization

To build the FrenchSemEval dataset, the authors focused on annotating moderately frequent and moderately ambiguous verbs, selecting verbs appearing between 50 and 1000 times in the French Wikipedia (2016-12-12 fr dump). For those verbs, the authors extracted 50 occurrences, along with additional annotations obtained thanks to the French TreeBank [Abeillé and Barrier, 2004](http://ftb.linguist.univ-paris-diderot.fr/index.php?langue=en) and the Sequoia Treebank [Candito and Seddah, 2012](https://www.rocq.inria.fr/alpage-wiki/tiki-index.php?page=CorpusSequoia).

### Annotations

#### Annotation process

To annotate FrenchSemEval, the annotators used [WebAnno](https://webanno.github.io/webanno/), an open-source, adaptable annotation tool. Sentences were pre-processed into CoNLL format and then annotated in WebAnno. The annotators were asked to annotate only the marked occurrences, using the sense inventory from Wiktionary.

#### Who are the annotators?

The annotation was performed by 3 French students with no prior experience in dataset annotation.
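Given instances shaped like the example above, the per-verb statistics reported in the next section can be recomputed with plain Python (a sketch; the two toy instances are invented to match the schema):

```python
# Sketch: annotated occurrences per verb type, as in the statistics below.
from collections import Counter

# Toy instances following the schema shown above (values invented).
instances = [
    {"sentence_id": "d001.s001", "instance_lemmas": ["aboutir"]},
    {"sentence_id": "d001.s002", "instance_lemmas": ["rendre"]},
]

counts = Counter(lemma for ex in instances for lemma in ex["instance_lemmas"])
mean_per_type = sum(counts.values()) / len(counts)
print(f"{len(counts)} verb types, {mean_per_type:.2f} annotations per type on average")
```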
### Dataset statistics

|Type|#|
|---|---|
|Number of sentences|3121|
|Number of annotated verb tokens|3199|
|Number of annotated verb types|66|
|Mean number of annotations per verb type|48.47|
|Mean number of senses per verb type|3.83|

### Licensing Information

```
GNU Lesser General Public License
```

### Citation Information

```bibtex
@inproceedings{segonne-etal-2019-using,
    title = "Using {W}iktionary as a resource for {WSD} : the case of {F}rench verbs",
    author = "Segonne, Vincent and Candito, Marie and Crabb{\'e}, Beno{\^\i}t",
    booktitle = "Proceedings of the 13th International Conference on Computational Semantics - Long Papers",
    month = may,
    year = "2019",
    address = "Gothenburg, Sweden",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/W19-0422",
    doi = "10.18653/v1/W19-0422",
    pages = "259--270",
    abstract = "As opposed to word sense induction, word sense disambiguation (WSD) has the advantage of using interpretable senses, but requires annotated data, which are quite rare for most languages except English (Miller et al. 1993; Fellbaum, 1998). In this paper, we investigate which strategy to adopt to achieve WSD for languages lacking data that was annotated specifically for the task, focusing on the particular case of verb disambiguation in French. We first study the usability of Eurosense (Bovi et al. 2017), a multilingual corpus extracted from Europarl (Kohen, 2005) and automatically annotated with BabelNet (Navigli and Ponzetto, 2010) senses. Such a resource opened up the way to supervised and semi-supervised WSD for resourceless languages like French. While this perspective looked promising, our evaluation on French verbs was inconclusive and showed the annotated senses{'} quality was not sufficient for supervised WSD on French verbs. Instead, we propose to use Wiktionary, a collaboratively edited, multilingual online dictionary, as a resource for WSD. Wiktionary provides both sense inventory and manually sense tagged examples which can be used to train supervised and semi-supervised WSD systems. Yet, because senses{'} distribution differ in lexicographic examples found in Wiktionary with respect to natural text, we then focus on studying the impact on WSD of the training data size and senses{'} distribution. Using state-of-the art semi-supervised systems, we report experiments of Wiktionary-based WSD for French verbs, evaluated on FrenchSemEval (FSE), a new dataset of French verbs manually annotated with wiktionary senses.",
}
```

### Contributions

* vincent.segonne@univ-grenoble-alpes.fr
* marie.candito@linguist.univ-paris-diderot.fr
* benoit.crabbe@linguist.univ-paris-diderot.fr
true
#### Purchase Orders Dataset

This dataset consists of purchase orders from various companies. It was created by [CHERGUELAINE Ayoub](https://www.linkedin.com/in/ayoub-cherguelaine/) & [BOUBEKRI Faycal](https://www.linkedin.com/in/faycal-boubekri-832848199/) with the help of ChatGPT for the purpose of document classification and analytics.

# Description

The dataset contains a collection of purchase orders from different companies. Each purchase order consists of the following fields:

* order_id: The unique identifier for the purchase order.
* order_date: The date on which the purchase order was placed.
* customer_name: The name of the customer who placed the purchase order.
* products: A list of products ordered in the purchase order. Each product contains the following fields:
  * product_id: The unique identifier for the product.
  * product: The name of the product ordered.
  * quantity: The quantity of the product ordered.
  * unit_price: The price per unit of the product.

The dataset is provided in PDF format and can be used for document classification and analytics tasks.

# Format

The dataset is provided in a zip file that contains the following files:

* purchase_orders.pdf: A PDF file containing the purchase orders.
* purchase_orders.csv: A CSV file containing the purchase orders in tabular format.

# License

You are free to share and adapt this dataset for any purpose, provided that you give appropriate credit, provide a link to the license, and indicate if changes were made.

# Acknowledgments

We would like to acknowledge the Northwind database for providing the source data for this dataset. We would also like to thank ChatGPT for its assistance in creating this dataset.
false
# MIT-Adobe FiveK Dataset The MIT-Adobe FiveK Dataset [[1]]( #references ) is a publicly available dataset providing the following items. 1. 5,000 RAW images in DNG format 2. retouched images of each RAW image by five experts in TIFF format (25,000 images, 16 bits per channel, ProPhoto RGB color space, and lossless compression) 3. semantic information about each image The dataset was created by MIT and Adobe Systems, Inc., and is intended to provide a diverse and challenging set of images for testing image processing algorithms. The images were selected to represent a wide range of scenes, including landscapes, portraits, still lifes, and architecture. The images also vary in terms of lighting conditions, color balance, and exposure. In practice, this dataset is often used after RAW images have undergone various processing steps. For example, RAW images are developed by adding noise, overexposure, and underexposure to emulate camera errors. However, the officially provided dataset has a complex structure and is difficult to handle. This repository provides tools to easily download and use the datasets. ## Official Website [MIT-Adobe FiveK Dataset](https://data.csail.mit.edu/graphics/fivek/) ## License - [LicenseAdobe.txt](https://data.csail.mit.edu/graphics/fivek/legal/LicenseAdobe.txt) covers files listed in [filesAdobe.txt](https://data.csail.mit.edu/graphics/fivek/legal/filesAdobe.txt) - [LicenseAdobeMIT.txt](https://data.csail.mit.edu/graphics/fivek/legal/LicenseAdobeMIT.txt) covers files listed in [filesAdobeMIT.txt](https://data.csail.mit.edu/graphics/fivek/legal/filesAdobeMIT.txt) ## Data Samples |Raw (DNG)|Expert A|Expert B|Expert C|Expert D|Expert E|Categories|Camera Model| |---|---|---|---|---|---|---|---| |[a0001-jmac_</br >DSC1459.dng](https://data.csail.mit.edu/graphics/fivek/img/dng/a0001-jmac_DSC1459.dng)|![tiff16_a/a0001-jmac_DSC1459](https://raw.githubusercontent.com/yuukicammy/mit-adobe-fivek-dataset/master/data/thumbnails/a0001-jmac_DSC1459_A.jpg)|![tiff16_b/a0001-jmac_DSC1459](https://raw.githubusercontent.com/yuukicammy/mit-adobe-fivek-dataset/master/data/thumbnails/a0001-jmac_DSC1459_B.jpg)|![tiff16_c/a0001-jmac_DSC1459](https://raw.githubusercontent.com/yuukicammy/mit-adobe-fivek-dataset/master/data/thumbnails/a0001-jmac_DSC1459_C.jpg)|![tiff16_d/a0001-jmac_DSC1459](https://raw.githubusercontent.com/yuukicammy/mit-adobe-fivek-dataset/master/data/thumbnails/a0001-jmac_DSC1459_D.jpg)|![tiff16_e/a0001-jmac_DSC1459](https://raw.githubusercontent.com/yuukicammy/mit-adobe-fivek-dataset/master/data/thumbnails/a0001-jmac_DSC1459_E.jpg)|{"location":"outdoor","time": "day","light": "sun_sky","subject": "nature"}|Nikon D70| |[a1384-dvf_095.dng](https://data.csail.mit.edu/graphics/fivek/img/dng/a1384-dvf_095.dng)|![tiff16_a/a1384-dvf_095](https://raw.githubusercontent.com/yuukicammy/mit-adobe-fivek-dataset/master/data/thumbnails/a1384-dvf_095_A.jpg)|![tiff16_b/a1384-dvf_095](https://raw.githubusercontent.com/yuukicammy/mit-adobe-fivek-dataset/master/data/thumbnails/a1384-dvf_095_B.jpg)|![tiff16_c/a1384-dvf_095](https://raw.githubusercontent.com/yuukicammy/mit-adobe-fivek-dataset/master/data/thumbnails/a1384-dvf_095_C.jpg)|![tiff16_d/a1384-dvf_095](https://raw.githubusercontent.com/yuukicammy/mit-adobe-fivek-dataset/master/data/thumbnails/a1384-dvf_095_D.jpg)|![tiff16_e/a1384-dvf_095](https://raw.githubusercontent.com/yuukicammy/mit-adobe-fivek-dataset/master/data/thumbnails/a1384-dvf_095_E.jpg)|{ "location": "outdoor", "time": "day", "light": "sun_sky", "subject": "nature" 
}|Leica M8|
|[a4607-050801_</br >080948__</br >I2E5512.dng](https://data.csail.mit.edu/graphics/fivek/img/dng/a4607-050801_080948__I2E5512.dng)|![tiff16_a/a4607-050801_080948__I2E5512](https://raw.githubusercontent.com/yuukicammy/mit-adobe-fivek-dataset/master/data/thumbnails/a4607-050801_080948__I2E5512_A.jpg)|![tiff16_b/a4607-050801_080948__I2E5512](https://raw.githubusercontent.com/yuukicammy/mit-adobe-fivek-dataset/master/data/thumbnails/a4607-050801_080948__I2E5512_B.jpg)|![tiff16_c/a4607-050801_080948__I2E5512](https://raw.githubusercontent.com/yuukicammy/mit-adobe-fivek-dataset/master/data/thumbnails/a4607-050801_080948__I2E5512_C.jpg)|![tiff16_d/a4607-050801_080948__I2E5512](https://raw.githubusercontent.com/yuukicammy/mit-adobe-fivek-dataset/master/data/thumbnails/a4607-050801_080948__I2E5512_D.jpg)|![tiff16_e/a4607-050801_080948__I2E5512](https://raw.githubusercontent.com/yuukicammy/mit-adobe-fivek-dataset/master/data/thumbnails/a4607-050801_080948__I2E5512_E.jpg)|{ "location": "indoor", "time": "day", "light": "artificial", "subject": "people" }|Canon EOS-1D Mark II|

# References

```
@inproceedings{fivek,
  author = "Vladimir Bychkovsky and Sylvain Paris and Eric Chan and Fr{\'e}do Durand",
  title = "Learning Photographic Global Tonal Adjustment with a Database of Input / Output Image Pairs",
  booktitle = "The Twenty-Fourth IEEE Conference on Computer Vision and Pattern Recognition",
  year = "2011"
}
```

# Code

The [GitHub repository](https://github.com/yuukicammy/mit-adobe-fivek-dataset) provides tools to download and use the MIT-Adobe FiveK Dataset in a machine-learning-friendly manner. You can download the dataset with a single line of Python code, and you can use PyTorch's DataLoader to iteratively retrieve data for your own use. Processing can easily be parallelized thanks to the multiprocessing support of PyTorch's DataLoader!

## Requirements

- Python 3.7 or greater
- PyTorch 2.X
- tqdm
- urllib3

## Usage

You can use it as follows. <span style="color:red"> NOTE: For DataLoader, you MUST set `batch_size` to `None` to disable automatic batching. </span>

```python
from torch.utils.data.dataloader import DataLoader
from dataset.fivek import MITAboveFiveK

metadata_loader = DataLoader(
    MITAboveFiveK(root="path-to-dataset-root", split="train", download=True, experts=["a"]),
    batch_size=None,
    num_workers=2)
for item in metadata_loader:
    # Processing as you want.
    # Add noise, overexpose, underexpose, etc.
    print(item["files"]["dng"])
```

## Example

Please see the [sample code](https://github.com/yuukicammy/mit-adobe-fivek-dataset/blob/master/sample_process.py).

## API

CLASS MITAboveFiveK(torch.utils.data.dataset.Dataset)

- - -

MITAboveFiveK(root: str, split: str, download: bool = False, experts: List[str] = None) -> None

- root (str): The root directory where the MITAboveFiveK directory exists or is to be created.
- split (str): One of {'train', 'val', 'test', 'debug'}. 'debug' uses only 9 data items contained in 'train'.
- download (bool): If True, downloads the dataset from the official urls. Files that already exist locally will skip the download. Defaults to False.
- experts (List[str]): List of {'a', 'b', 'c', 'd', 'e'}. 'a' means 'Expert A' on the [website](https://data.csail.mit.edu/graphics/fivek/). If None or an empty list, no expert data is used. Defaults to None.
true
# Northwind Stock Report Dataset This dataset was created by [CHERGUELAINE Ayoub](https://www.linkedin.com/in/ayoub-cherguelaine/) & [BOUBEKRI Faycal](https://www.linkedin.com/in/faycal-boubekri-832848199/) for the purpose of document classification and analytics. The dataset contains monthly stock reports and monthly stock reports by category, extracted from the Northwind dataset. The Northwind dataset is a sample database that comes with Microsoft Access, and is commonly used as a demo database for learning SQL. The dataset contains data on a fictional company called "Northwind Traders", which imports and exports specialty foods. To create the dataset of stock reports, we used SQL queries to extract data from the Northwind database, and then converted the results to PDF documents using the ReportLab library in Python. The resulting PDFs were then manually labeled with their corresponding categories. We used ChatGPT, a language model trained by OpenAI, to help us write the Python scripts to extract the data and generate the PDFs. ChatGPT provided us with valuable guidance and assistance throughout the project. ## Dataset Structure The dataset contains two subdirectories: `monthly_reports` and `category_reports`. ### Monthly Reports The `monthly_reports` directory contains monthly stock reports, where each file represents a stock report for a specific month. The filenames are in the format `YYYY_MM.pdf`, where `YYYY` is the year and `MM` is the month. ### Category Reports The `category_reports` directory contains monthly stock reports by category, where each file represents a stock report for a specific month and category. The filenames are in the format `YYYY_MM_Category.pdf`, where `YYYY` is the year, `MM` is the month, and `Category` is the name of the category. ## Acknowledgments We would like to thank ChatGPT, a language model trained by OpenAI, for providing valuable guidance and assistance throughout the project.
true
false
_The Dataset Teaser is now enabled instead! Isn't this better?_

![preview of all texture sets](https://huggingface.co/datasets/texturedesign/td02_urban-surface-textures/resolve/main/docs/teaser.webp)

# TD 02: Urban Surface Textures

This dataset contains multi-photo texture captures of outdoor urban scenes: most focus on the ground, the rest on walls. Each set contains different photos that showcase texture variety, making them ideal for training a domain-specific image generator (a patch-extraction sketch follows the set listing below).

Overall information about this dataset:

* **Format** — JPEG-XL, lossless RGB
* **Resolution** — 4032 × 2268
* **Device** — mobile camera
* **Technique** — hand-held
* **Orientation** — landscape
* **Author**: Alex J. Champandard
* **Configurations**: 4K, 2K (default), 1K

To load the medium- and high-resolution images of the dataset, you'll need to install `jxlpy` from [PyPI](https://pypi.org/project/jxlpy/) with `pip install jxlpy`:

```python
# Recommended use, JXL at high-quality.
from jxlpy import JXLImagePlugin
from datasets import load_dataset

d = load_dataset('texturedesign/td02_urban-surface-textures', 'JXL@4K')
print(len(d['train']), len(d['test']))
```

The lowest-resolution images are available as PNG with a regular installation of `pillow`:

```python
# Alternative use, PNG at low-quality.
from datasets import load_dataset

dataset = load_dataset('texturedesign/td02_urban-surface-textures', 'PNG@1K', split='train')

# EXAMPLE: Discard all other sets except Set #1.
dataset = dataset.filter(lambda s: s['set'] == 1)

# EXAMPLE: Only keep images with index 0 and 2.
dataset = dataset.select([0, 2])
```

Use the built-in dataset `filter()` and `select()` to narrow down the loaded dataset for training, or to ease development.

## Set #1: Interlocked Paving with Mulch

![preview of the files in Set #1](https://huggingface.co/datasets/texturedesign/td02_urban-surface-textures/resolve/main/docs/set01.webp)

* **Description**:
    - grey H-shaped tiles interlocked together as the base
    - brown mulch made of woodchips on top
    - diffuse lighting, cloudy day
* **Number of Photos**:
    - 18 train
    - 5 test
* **Size**: 222 Mb

## Set #2: Outdoor Parking Area

![preview of the files in Set #2](https://huggingface.co/datasets/texturedesign/td02_urban-surface-textures/resolve/main/docs/set02.webp)

* **Description**:
    - diamond-shaped concrete floor tiles
    - grass growing in the middle part
    - sunny day, shadows and sunlight
* **Number of Photos**:
    - 19 train
    - 6 test
* **Size**: 270 Mb

## Set #3: Red Brick Wall Cliché

![preview of the files in Set #3](https://huggingface.co/datasets/texturedesign/td02_urban-surface-textures/resolve/main/docs/set03.webp)

* **Description**:
    - red brick wall with dark grey mortar
    - signs of weathering and black splashes
    - cloudy day, diffuse lighting
* **Number of Photos**:
    - 10 train
    - 4 test
* **Size**: 100 Mb

## Set #4: Not Asphalt, Sparkling Bitumen

![preview of the files in Set #4](https://huggingface.co/datasets/texturedesign/td02_urban-surface-textures/resolve/main/docs/set04.webp)

* **Description**:
    - asphalt with sparkling gravel sticking out
    - sunny day, strong light from the right
* **Number of Photos**:
    - 5 train
    - 2 test
* **Size**: 65 Mb

## Set #5: Under The Dallage? Oui.

![preview of the files in Set #5](https://huggingface.co/datasets/texturedesign/td02_urban-surface-textures/resolve/main/docs/set05.webp)

* **Description**:
    - grey floor tiles of different square and rectangular shapes
    - cloudy day, diffuse lighting
* **Number of Photos**:
    - 11 train
    - 4 test
* **Size**: 103 Mb

## Set #6: Cracked Road and Scattered Gravel

![preview of the files in Set #6](https://huggingface.co/datasets/texturedesign/td02_urban-surface-textures/resolve/main/docs/set06.webp)

* **Description**:
    - asphalt road with cracks at varying scales
    - occasionally gravel scattered around
    - sunny day, varied lighting and soft shadows
* **Number of Photos**:
    - 27 train
    - 7 test
* **Size**: 281 Mb

## Set #7: Funky Grey/Red Brick Wall

![preview of the files in Set #7](https://huggingface.co/datasets/texturedesign/td02_urban-surface-textures/resolve/main/docs/set07.webp)

* **Description**:
    - red brick wall with grey mortar
    - cool weathering patterns on the bricks, cement marks
    - diffuse light from above
* **Number of Photos**:
    - 6 train
    - 2 test
* **Size**: 52 Mb

## Set #8: Corrugated Metal Wall

![preview of the files in Set #8](https://huggingface.co/datasets/texturedesign/td02_urban-surface-textures/resolve/main/docs/set08.webp)

* **Description**:
    - corrugated metal wall used on a construction site
    - no significant effects from lighting
* **Number of Photos**:
    - 6 train
    - 2 test
* **Size**: 37 Mb

## Set #9: Colorful Aligned Paving Stones

![preview of the files in Set #9](https://huggingface.co/datasets/texturedesign/td02_urban-surface-textures/resolve/main/docs/set09.webp)

* **Description**:
    - square or rectangular pavement stones with rounded corners
    - variety of different colors, dark gaps between
    - neutral lighting in the shade
* **Number of Photos**:
    - 5 train
    - 1 test
* **Size**: 46 Mb

## Set #10: Fancy Footpath Concrete

![preview of the files in Set #10](https://huggingface.co/datasets/texturedesign/td02_urban-surface-textures/resolve/main/docs/set10.webp)

* **Description**:
    - fancy-looking concrete path in the park
    - long straight gaps between the tiles
    - strong sunlight from the left side
* **Number of Photos**:
    - 11 train
    - 4 test
* **Size**: 137 Mb

## Set #11: Asphalt Slowly Turning Green

![preview of the files in Set #11](https://huggingface.co/datasets/texturedesign/td02_urban-surface-textures/resolve/main/docs/set11.webp)

* **Description**:
    - dirty asphalt with darker patches
    - cloudy day, diffuse lighting
* **Number of Photos**:
    - 9 train
    - 2 test
* **Size**: 95 Mb
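As promised above, a hypothetical patch-extraction sketch for generator training. The image column name (`'image'`) and the crop size are assumptions, not part of this card, so check the dataset schema before relying on them:

```python
from datasets import load_dataset
from torchvision import transforms

dataset = load_dataset('texturedesign/td02_urban-surface-textures', 'PNG@1K', split='train')

to_patch = transforms.Compose([
    transforms.RandomCrop(256),  # sample a random 256x256 texture patch
    transforms.ToTensor(),       # HWC uint8 -> CHW float in [0, 1]
])

patch = to_patch(dataset[0]['image'])
print(patch.shape)  # expected: torch.Size([3, 256, 256])
```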
true
## General concept

The notion of **'inappropriateness'** we tried to collect in this dataset and detect with the model **is NOT a substitute for toxicity**; it is rather a derivative of toxicity. A model based on our dataset could therefore serve as **an additional layer of inappropriateness filtering after toxicity and obscenity filtration**. You can detect the exact sensitive topic by using [this model](https://huggingface.co/Skoltech/russian-sensitive-topics).

Generally, an inappropriate utterance is an utterance that contains no obscene words or any kind of toxic intent, but can still harm the reputation of the speaker. Find some samples for more intuition in the table below. Learn more about the concept of inappropriateness [in this article](https://www.aclweb.org/anthology/2021.bsnlp-1.4/) presented at the workshop for Balto-Slavic NLP at the EACL-2021 conference. Please note that the article describes the first version of the dataset, while the model is trained on the extended version, open-sourced on our [GitHub](https://github.com/skoltech-nlp/inappropriate-sensitive-topics/blob/main/Version2/appropriateness/Appropriateness.csv) or on [kaggle](https://www.kaggle.com/nigula/russianinappropriatemessages). The properties of the extended dataset are the same as those described in the article; the only difference is the size.

| Text | Inappropriate | Comment (translated) | Unsafe topic |
|------|---------------|----------------------|--------------|
| Ладно бы видного деятеля завалили а тут какого то ноунейм нигру преступника | yes | Justifies a murder | offline crime |
| Это нарушение УКРФ! | no | Condemns breaking the law | offline crime |
| Религия - это насилие высшего уровня. Верующего принуждают к вере еще в младенчестве. | yes | Insults the feelings of believers | religion |
| Христиане, как вы относитесь к иноверцам, в частности к Гамалиилу и к уверовавшим в Иисуса фарисеям? | no | Ordinary questions to believers with no hint of insulting their feelings | religion |
| Хорошо что проституция запрещена | no | Condemns prostitution | prostitution |
| Возьму бабки и сниму шлюх на выходных | yes | Promotes prostitution | prostitution |

## Licensing Information

[Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License][cc-by-nc-sa].

[![CC BY-NC-SA 4.0][cc-by-nc-sa-image]][cc-by-nc-sa]

[cc-by-nc-sa]: http://creativecommons.org/licenses/by-nc-sa/4.0/
[cc-by-nc-sa-image]: https://i.creativecommons.org/l/by-nc-sa/4.0/88x31.png

## Citation

If you find this repository helpful, feel free to cite our publication:

```
@inproceedings{babakov-etal-2021-detecting,
    title = "Detecting Inappropriate Messages on Sensitive Topics that Could Harm a Company{'}s Reputation",
    author = "Babakov, Nikolay and Logacheva, Varvara and Kozlova, Olga and Semenov, Nikita and Panchenko, Alexander",
    booktitle = "Proceedings of the 8th Workshop on Balto-Slavic Natural Language Processing",
    month = apr,
    year = "2021",
    address = "Kiyv, Ukraine",
    publisher = "Association for Computational Linguistics",
    url = "https://www.aclweb.org/anthology/2021.bsnlp-1.4",
    pages = "26--36",
    abstract = "Not all topics are equally {``}flammable{''} in terms of toxicity: a calm discussion of turtles or fishing less often fuels inappropriate toxic dialogues than a discussion of politics or sexual minorities. We define a set of sensitive topics that can yield inappropriate and toxic messages and describe the methodology of collecting and labelling a dataset for appropriateness. While toxicity in user-generated data is well-studied, we aim at defining a more fine-grained notion of inappropriateness. The core of inappropriateness is that it can harm the reputation of a speaker. This is different from toxicity in two respects: (i) inappropriateness is topic-related, and (ii) inappropriate message is not toxic but still unacceptable. We collect and release two datasets for Russian: a topic-labelled dataset and an appropriateness-labelled dataset. We also release pre-trained classification models trained on this data.",
}
```

## Contacts

If you have any questions, please contact [Nikolay](mailto:bbkhse@gmail.com).
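For convenience, a minimal sketch for loading the extended CSV directly from GitHub. The raw URL below is derived from the blob link above; inspect the parsed columns before relying on them:

```python
import pandas as pd

# Raw-file form of the GitHub blob URL given in the General concept section.
url = ("https://raw.githubusercontent.com/skoltech-nlp/inappropriate-sensitive-topics/"
       "main/Version2/appropriateness/Appropriateness.csv")
df = pd.read_csv(url)
print(df.shape)
print(df.head())
```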
true
## General concept

Sensitive topics are topics that have a high chance of initiating a toxic conversation: homophobia, politics, racism, etc. This dataset uses 18 such topics. More details can be found [in this article](https://www.aclweb.org/anthology/2021.bsnlp-1.4/) presented at the workshop for Balto-Slavic NLP at the EACL-2021 conference. That paper presents the first version of this dataset; the version released here is significantly larger and also properly filtered.

## Licensing Information

[Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License][cc-by-nc-sa].

[![CC BY-NC-SA 4.0][cc-by-nc-sa-image]][cc-by-nc-sa]

[cc-by-nc-sa]: http://creativecommons.org/licenses/by-nc-sa/4.0/
[cc-by-nc-sa-image]: https://i.creativecommons.org/l/by-nc-sa/4.0/88x31.png

## Citation

If you find this repository helpful, feel free to cite our publication:

```
@inproceedings{babakov-etal-2021-detecting,
    title = "Detecting Inappropriate Messages on Sensitive Topics that Could Harm a Company{'}s Reputation",
    author = "Babakov, Nikolay and Logacheva, Varvara and Kozlova, Olga and Semenov, Nikita and Panchenko, Alexander",
    booktitle = "Proceedings of the 8th Workshop on Balto-Slavic Natural Language Processing",
    month = apr,
    year = "2021",
    address = "Kiyv, Ukraine",
    publisher = "Association for Computational Linguistics",
    url = "https://www.aclweb.org/anthology/2021.bsnlp-1.4",
    pages = "26--36",
    abstract = "Not all topics are equally {``}flammable{''} in terms of toxicity: a calm discussion of turtles or fishing less often fuels inappropriate toxic dialogues than a discussion of politics or sexual minorities. We define a set of sensitive topics that can yield inappropriate and toxic messages and describe the methodology of collecting and labelling a dataset for appropriateness. While toxicity in user-generated data is well-studied, we aim at defining a more fine-grained notion of inappropriateness. The core of inappropriateness is that it can harm the reputation of a speaker. This is different from toxicity in two respects: (i) inappropriateness is topic-related, and (ii) inappropriate message is not toxic but still unacceptable. We collect and release two datasets for Russian: a topic-labelled dataset and an appropriateness-labelled dataset. We also release pre-trained classification models trained on this data.",
}
```
false
# MegaInstruct

A large instruction dataset that merges multiple sources into the Alpaca format.

### Note:
Both the gpt4all and vicuna subsets keep usernames appended to them, so username-aware chatbot datasets can hopefully be built on top of this!
false
# Dataset Card for Dataset Name ## Dataset Description - **Homepage:** - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary It's still not finished, I'll adjust it This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1). ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions [More Information Needed]
false
An implementation of the "pairing news and commentary sentences by similarity" step of K-SportsSum: https://github.com/krystalan/k-sportssum. The original authors outlined the idea but did not release a concrete implementation; this dataset is an implementation of that step.

Method: traverse the news sentences, picking up each sentence's time information (when present) in a pointer-like way. The two pointers then bound a time range; the news sentences inside the range are traversed, candidate commentary sentences within the same time window are scored, the highest-scoring one is selected, and it is removed from the pool to prevent duplicates. The final result pairs each news sentence with one commentary sentence.

Scoring uses BERTScore and ROUGE, combined with a 7:3 weighting (see the sketch below).

*Recommendation*: the pairing score is included with each record; consider filtering out low-scoring, bad pairings, e.g. with a mean-based threshold.
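A sketch of the 7:3 weighted score described above. The exact BERTScore/ROUGE variants and libraries are not specified, so `bert-score` and `rouge-score` here are assumptions (and note that ROUGE on Chinese text needs pre-tokenization):

```python
from bert_score import score as bert_score
from rouge_score import rouge_scorer

rouge = rouge_scorer.RougeScorer(["rougeL"], use_stemmer=False)

def pair_score(news: str, comment: str) -> float:
    # BERTScore F1 between the candidate comment and the news sentence.
    _, _, f1 = bert_score([comment], [news], lang="zh")
    # ROUGE-L F-measure; assumes inputs were whitespace-tokenized beforehand.
    rouge_l = rouge.score(news, comment)["rougeL"].fmeasure
    return 0.7 * f1.item() + 0.3 * rouge_l
```

Pairs scoring below, say, the dataset-wide mean can then be dropped, per the recommendation above.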
false
Western Armenian Wikipedia, April 2023 snapshot. Approximately 4M tokens across 10,785 articles.
false
# AutoTrain Dataset for project: pro ## Dataset Description This dataset has been automatically processed by AutoTrain for project pro. ### Languages The BCP-47 code for the dataset's language is unk. ## Dataset Structure ### Data Instances A sample from this dataset looks as follows: ```json [ { "text": "Dietitian", "target": "As a dietitian, I would like to design a vegetarian recipe for 2 people that has approximate 500 calories per serving and has a low glycemic index. Can you please provide a suggestion?" }, { "text": "IT Architect", "target": "I want you to act as an IT Architect. I will provide some details about the functionality of an application or other digital product, and it will be your job to come up with ways to integrate it into the IT landscape. This could involve analyzing business requirements, performing a gap analysis and mapping the functionality of the new system to the existing IT landscape. Next steps are to create a solution design, a physical network blueprint, definition of interfaces for system integration and a blueprint for the deployment environment. My first request is \"I need help to integrate a CMS system.\"" } ] ``` ### Dataset Fields The dataset has the following fields (also called "features"): ```json { "text": "Value(dtype='string', id=None)", "target": "Value(dtype='string', id=None)" } ``` ### Dataset Splits This dataset is split into a train and validation split. The split sizes are as follow: | Split name | Num samples | | ------------ | ------------------- | | train | 122 | | valid | 31 |
false
# Dataset Card for "mfm" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
false
# Opus100 Prompt

- Num examples:
  - 1,000,000 (train)
  - 2,000 (validation)
  - 2,000 (test)
- Languages: English, Vietnamese
false
# AutoTrain Dataset for project: ethnicity-test_v003 ## Dataset Description This dataset has been automatically processed by AutoTrain for project ethnicity-test_v003. ### Languages The BCP-47 code for the dataset's language is unk. ## Dataset Structure ### Data Instances A sample from this dataset looks as follows: ```json [ { "image": "<512x512 RGB PIL image>", "target": 1 }, { "image": "<512x512 RGB PIL image>", "target": 3 } ] ``` ### Dataset Fields The dataset has the following fields (also called "features"): ```json { "image": "Image(decode=True, id=None)", "target": "ClassLabel(names=['african', 'asian', 'caucasian', 'hispanic', 'indian'], id=None)" } ``` ### Dataset Splits This dataset is split into a train and validation split. The split sizes are as follow: | Split name | Num samples | | ------------ | ------------------- | | train | 4531 | | valid | 1135 |
false
## Dataset

This FLAN dataset is built for instruction-following causal language modeling. Improperly encoded/decoded text has been cleaned. The dataset includes the Dialog few-shot-with-options (fsopt) task.

## List of Mixtures

We've broken down the Flan Collection into several sub-mixtures: "flan" (Flan 2021), "t0" (P3 excluding Flan 2021), "niv2" (Super-Natural Instructions), "cot" (several Chain-of-Thought datasets), and "dialog" (a few new dialog datasets). Each of these comes in multiple template varieties: zero-shot prompts with answer options (zsopt), zero-shot prompts with no answer options (zsnoopt), few-shot prompts with answer options (fsopt), and few-shot prompts with no answer options (fsnoopt). For multiple-choice classification tasks, "answer options" indicates whether the set of possible answers is spelled out in the instruction prompt.

## General Information

The Flan Collection of datasets and data augmentation methods for instruction tuning is generated using the code in the original FLAN repository. The Flan Collection compiles datasets from Flan 2021, P3, and Super-Natural Instructions, along with dozens more datasets, into one place, formats them into a mix of zero-shot, few-shot, and chain-of-thought templates, then mixes these in proportions that are found to achieve strong results.

## Citation

```bibtex
@article{longpre2023flan,
  title={The Flan Collection: Designing Data and Methods for Effective Instruction Tuning},
  author={Longpre, Shayne and Hou, Le and Vu, Tu and Webson, Albert and Chung, Hyung Won and Tay, Yi and Zhou, Denny and Le, Quoc V and Zoph, Barret and Wei, Jason and others},
  journal={arXiv preprint arXiv:2301.13688},
  year={2023}
}
```
false
# Dataset Card for Quora Chat Dutch

## Dataset Description

- **Homepage:** N/A
- **Repository:** N/A
- **Paper:** N/A
- **Leaderboard:** N/A
- **Point of Contact:** Bram Vanroy

### Dataset Summary

This dataset contains 54,444 machine-generated conversations between an AI assistant and a (fake) "Human" in Dutch. They are translations of [Baize](https://github.com/project-baize/baize-chatbot/tree/6790946f638d60fcaf397574189124f15792f35a/data)'s machine-generated answers to the [Quora](https://huggingface.co/datasets/quora) dataset.

☕ [**Want to help me out?**](https://www.buymeacoffee.com/bramvanroy) Translating the data with the OpenAI API, and prompt testing, cost me 💸$135.65💸. If you like this dataset, please consider [buying me a coffee](https://www.buymeacoffee.com/bramvanroy) to offset a portion of this cost, I appreciate it a lot! ☕

### Languages

- Dutch

## Dataset Structure

### Data Instances

```python
{
    'id': 10,
    'input': 'Het gesprek tussen de mens en de AI-assistent.\n'
             '[|Human|] Ik vroeg me af wanneer de Moto G3 de Android 7.0-update '
             'zal krijgen?\n'
             '[|AI|] De Moto G3 (Moto G 3rd gen 2015) komt in aanmerking voor de '
             'Android 7.0 (Nougat) update.\n'
             '[|Human|] Geweldig! Weet je wanneer de update beschikbaar zal '
             'zijn?\n'
             '[|AI|] De update werd uitgebracht in december 2016 voor de Moto G4 '
             'en G4 Plus. Er is echter nog geen woord over wanneer de Moto G3 de '
             'update zal ontvangen.\n'
             '[|Human|] Is er nieuws of het binnenkort de update zal ontvangen?\n'
             '[|AI|] Het spijt me, ik heb momenteel geen informatie daarover. '
             'Het is het beste om uit te kijken naar updates van Motorola.\n'
             '[|Human|] ',
    'topic': 'Wanneer krijgt Moto G3 (Moto G 3e gen 2015) de Android 7.0 '
             '(Nougat) update?'
}
```

### Data Fields

- **id**: the ID of the item. The following IDs are not included because they could not be translated: `[1352, 14599, 25985, 27167, 28254, 33731, 41550, 44364, 47293, 49060, 51383, 53740]`
- **input**: the machine-generated conversation between the AI and the "Human". Always starts with `Het gesprek tussen de mens en de AI-assistent.\n` and has at least one occurrence of both `[|AI|]` and `[|Human|]`.
- **topic**: the topic description

## Dataset Creation

Both the conversations and the topics were translated with OpenAI's API for `gpt-3.5-turbo`, with `max_tokens=1024, temperature=0` as parameters.

The prompt template to translate the input is:

```python
CONVERSATION_TRANSLATION_PROMPT = """You are asked to translate a conversation between an AI assistant and a human from {src_lang} into {tgt_lang}.

Here are the requirements that you should adhere to:
1. maintain the format: the conversation consists of the AI (marked as `[|AI|]`) and the human (`[|Human|]`) talking in turns and responding to each other;
2. do not translate the speaker identifiers `[|AI|]` and `[|Human|]` but always copy them into the translation in appropriate places;
3. ensure accurate translation and keep the correctness of the conversation;
4. make sure that text is fluent to read and does not contain grammatical errors. Use standard {tgt_lang} without regional bias;
5. translate the human's text using informal, but standard, language;
6. make sure to avoid biases (such as gender bias, grammatical bias, social bias);
7. if the human asks to correct grammar mistakes or spelling mistakes then you have to generate a similar mistake in {tgt_lang}, and then also generate a corrected output version for the AI in {tgt_lang};
8. if the human asks to translate text from one to another language, then you only translate the human's question to {tgt_lang} but you keep the translation that the AI provides in the language that the human requested;
9. do not translate code fragments but copy them as they are. If there are English examples, variable names or definitions in code fragments, keep them in English.

Now translate the following conversation with the requirements set out above. Do not provide an explanation and do not add anything else.\n\n"""
```

The prompt to translate the topic is:

```python
TOPIC_TRANSLATION_PROMPT = "Translate the following title of a conversation from {src_lang} to {tgt_lang} in a succinct," \
                           " summarizing manner. Translate accurately and formally. Do not provide any explanation" \
                           " about the translation and do not include the original title.\n\n"
```

The system message was:

```
You are a helpful assistant that translates English to Dutch to the requirements that are given to you.
```

Note that 12 items (0.02%) were not successfully translated: the translation was missing the AI identifier `[|AI|]` and/or the human one `[|Human|]`. The IDs for the missing items are `[1352, 14599, 25985, 27167, 28254, 33731, 41550, 44364, 47293, 49060, 51383, 53740]`.

### Source Data

#### Initial Data Collection and Normalization

Initial data generation by [Baize](https://github.com/project-baize/baize-chatbot), which in turn relied on [Quora](https://huggingface.co/datasets/quora).

#### Who are the source language producers?

Quora users and OpenAI's GPT models (via [Baize](https://github.com/project-baize/baize-chatbot)).

## Considerations for Using the Data

Note that neither the translations in this new dataset, nor the machine-generated dialogues in the original [Baize](https://github.com/project-baize/baize-chatbot) dataset, have been verified by humans.

### Discussion of Biases

As with any machine-generated texts, users should be aware of potential biases that are included in this dataset. Although the prompt specifically includes `make sure to avoid biases (such as gender bias, grammatical bias, social bias)`, the impact of such a command is of course not known. It is likely that biases remain in the dataset, so use it with caution.

### Other Known Limitations

The translation quality has not been verified. Use at your own risk!

### Licensing Information

Licensing info for [`quora`](https://huggingface.co/datasets/quora) is listed as unknown. [Baize](https://github.com/project-baize/baize-chatbot) is listed as GPLv3, so this dataset is also listed as such.

A specific restriction is in place: as per OpenAI's terms of use, this dataset cannot be used to build [a commercial system that competes with OpenAI's services](https://openai.com/policies/terms-of-use).

This text was generated (either in part or in full) with GPT-3 (`gpt-3.5-turbo`), OpenAI’s large-scale language-generation model. Upon generating draft language, the author reviewed, edited, and revised the language to their own liking and takes ultimate responsibility for the content of this publication. If you use this dataset, you must also follow the [Sharing](https://openai.com/policies/sharing-publication-policy) and [Usage](https://openai.com/policies/usage-policies) policies. As clearly stated in their [Terms of Use](https://openai.com/policies/terms-of-use), specifically 2c.iii, "[you may not] use output from the Services to develop models that compete with OpenAI". That means that you cannot use this dataset to build models that are intended to commercially compete with OpenAI. [As far as I am aware](https://law.stackexchange.com/questions/93308/licensing-material-generated-with-chatgpt), that is a specific restriction that should serve as an addendum to the current license.

### Citation Information

If you use this data set, please cite:

Bram Vanroy. (2023). Quora Chat Dutch [Data set]. Hugging Face. https://doi.org/10.57967/HF/0524

```bibtex
@misc{https://doi.org/10.57967/hf/0524,
  doi = {10.57967/HF/0524},
  url = {https://huggingface.co/datasets/BramVanroy/quora-chat-dutch},
  author = {{Bram Vanroy}},
  title = {{Q}uora {C}hat {D}utch},
  publisher = {Hugging Face},
  year = {2023}
}
```

### Contributions

Thanks to [Baize](https://github.com/project-baize/baize-chatbot) for the initial machine-generated dialogues, based on [quora](https://huggingface.co/datasets/quora).
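For downstream use, a minimal sketch (not part of the original card) that splits the `input` field into (speaker, utterance) turns, following the marker format documented under Data Fields:

```python
import re

def parse_turns(conversation: str):
    # Drop the fixed preamble line, then split on the [|AI|]/[|Human|] markers.
    body = conversation.split("\n", 1)[1]
    parts = re.split(r"\[\|(AI|Human)\|\]", body)
    # re.split yields [prefix, speaker, text, speaker, text, ...]; the trailing
    # empty "[|Human|] " turn is filtered out by the strip() check.
    return [(spk, txt.strip()) for spk, txt in zip(parts[1::2], parts[2::2]) if txt.strip()]
```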
false
## Dataset

This FLAN dataset is built for instruction-following causal language modeling. Improperly encoded/decoded text has been cleaned. The dataset includes the Dialog submix.

## List of Mixtures

We've broken down the Flan Collection into several sub-mixtures: "flan" (Flan 2021), "t0" (P3 excluding Flan 2021), "niv2" (Super-Natural Instructions), "cot" (several Chain-of-Thought datasets), and "dialog" (a few new dialog datasets). Each of these comes in multiple template varieties: zero-shot prompts with answer options (zsopt), zero-shot prompts with no answer options (zsnoopt), few-shot prompts with answer options (fsopt), and few-shot prompts with no answer options (fsnoopt). For multiple-choice classification tasks, "answer options" indicates whether the set of possible answers is spelled out in the instruction prompt.

## General Information

The Flan Collection of datasets and data augmentation methods for instruction tuning is generated using the code in the original FLAN repository. The Flan Collection compiles datasets from Flan 2021, P3, and Super-Natural Instructions, along with dozens more datasets, into one place, formats them into a mix of zero-shot, few-shot, and chain-of-thought templates, then mixes these in proportions that are found to achieve strong results.

## Citation

```bibtex
@article{longpre2023flan,
  title={The Flan Collection: Designing Data and Methods for Effective Instruction Tuning},
  author={Longpre, Shayne and Hou, Le and Vu, Tu and Webson, Albert and Chung, Hyung Won and Tay, Yi and Zhou, Denny and Le, Quoc V and Zoph, Barret and Wei, Jason and others},
  journal={arXiv preprint arXiv:2301.13688},
  year={2023}
}
```
false
# Persian_ChatBot_dataset_Fine_Tuning_Alpaca_Model

A Persian ChatBot dataset for fine-tuning LLaMA on instruction data (a preprocessed Alpaca dataset). [GitHub](https://github.com/AliEdalat/ChatBot_for_persian_LLaMA_fine_tune.git)

- We use the [preprocessed Alpaca dataset](https://github.com/thisserand/alpaca-lora-finetune-language.git) as the source. We translate the 'no_translate' data to Persian with [mt5](https://huggingface.co/persiannlp/mt5-large-parsinlu-translation_en_fa). (The [train dataset](https://huggingface.co/datasets/AliEdalat/Persian_ChatBot_dataset_Fine_Tuning_Alpaca_Model/tree/main) and [test data](https://huggingface.co/datasets/AliEdalat/Persian_ChatBot_dataset_Fine_Tuning_Alpaca_Model/tree/main) with 2k examples are ready.)
- We use LLaMA as the generative model for creating the chatbot. We fine-tune the model with our Persian dataset and test it.
- To improve chatbot performance, replace "برای اینکه این کار را بکنم" with "" (see the sketch below).
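A one-line sketch of that post-processing step:

```python
def clean_output(text: str) -> str:
    # Strip the filler phrase flagged above from generated chatbot text.
    return text.replace("برای اینکه این کار را بکنم", "")
```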
false
# Dataset Card for "NTU-Stem" ![sample](sample.png) The NTU Tree Dataset is a high-resolution few-shot learning dataset of the stem images of 15 different tree species found in the National Taiwan University (NTU) campus. The dataset was collected using personal cellphones in an effort to increase familiarity with the campus’s natural beauty. The dataset includes images of the stem of the following 15 tree species, along with their Chinese and English names: | 中文名稱 | English Name | |--------------|--------------------------| | 大王椰子 | Royal Palm | | 土肉桂 | Indigenous Cinnamon Tree | | 大葉桃花心木 | Honduras Mahogany | | 小葉南洋杉 | Araucaria Excelsa | | 石栗 | Indian Walnut | | 朴樹 | Chinese Hackberry | | 血桐 | Macaranga | | 垂榕 | White Barkfig | | 肯氏南洋杉 | Hoop Pine | | 美人樹 | Floss-silk Tree | | 烏桕 | Chinese Tallow Tree | | 楓香 | Formosan Sweet Gum | | 榕樹 | Marabutan | | 蒲葵 | Chinese Fan Palm | | 樟樹 | Comphor Tree | The dataset contains a total of 240 images, with each species class containing 8 to 16 training images and the remaining 8 to 10 images serving as test data. The images were captured at a resolution of approximately 3k x 3k pixels, providing high detail for the purpose of few-shot learning. ## Acknowledgements This dataset was collected by the following students of National Taiwan University in Graduate Institute of Networking and Multimedia (GINM) and the Department of Computer Science and Information Engineering (CSIE). Thanks to [@liswei](https://huggingface.co/liswei), [@roger0426](https://huggingface.co/roger0426), [@CYLiao1127](https://github.com/CYLiao1127), and [@j1u2l3i4a5n](https://github.com/j1u2l3i4a5n) for collecting the dataset.
false
# Dataset Card for Dataset Name ## Dataset Description - **Homepage:** - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1). ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions [More Information Needed]
true
# Dataset Card for Dataset Name ## Dataset Description - **Homepage:** - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1). ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions [More Information Needed]
true
# Dataset Card for Dataset Name ## Dataset Description - **Homepage:** - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1). ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions [More Information Needed]
true
# Dataset Card for "Hugging Face GitHub Issues ## Dataset Description - **Point of Contact:** [Ben Chan](benchan79@gmail.com) ### Dataset Summary GitHub Issues is a dataset consisting of GitHub issues and pull requests associated with the 🤗 Datasets [repository](https://github.com/huggingface/datasets). It is intended for educational purposes and can be used for semantic search or multilabel text classification. The contents of each GitHub issue are in English and concern the domain of datasets for NLP, computer vision, and beyond. ### Supported Tasks and Leaderboards ### Languages English ## Dataset Structure ### Data Instances ### Data Fields ### Data Splits ## Dataset Creation ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information ## Considerations for Using the Data ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations ## Additional Information ### Dataset Curators ### Licensing Information ### Citation Information ### Contributions
false
# Dataset Card for Dataset Name ## Dataset Description - **Homepage:** - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1). ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions [More Information Needed]
false
Samples of ~10-15 seconds of Luis Alberto Spinetta singing. Clean vocals, with no instruments and no silences. Songs from Pescado Rabioso, Almendra, Invisible, and his solo work.
true
# Dataset Card for XNLI Code-Mixed Corpus

## Dataset Description

- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**

### Dataset Summary

### Supported Tasks and Leaderboards

Binary mode classification (spoken vs. written)

### Languages

- English
- German
- French
- German-English code-mixed by Equivalence Constraint Theory
- German-English code-mixed by Matrix Language Theory
- French-English code-mixed by Equivalence Constraint Theory
- French-English code-mixed by Matrix Language Theory

## Dataset Structure

### Data Instances

```python
{
  'text': "And he said , Mama , I 'm home",
  'label': 0
}
```

### Data Fields

- text: sentence
- label: binary label of the text (0: spoken, 1: written)

### Data Splits

- de-ec
  - train (English, German, French monolingual):
  - test (German-English code-mixed by Equivalence Constraint Theory):
- de-ml:
  - train (English, German, French monolingual):
  - test (German-English code-mixed by Matrix Language Theory):
- fr-ec
  - train (English, German, French monolingual):
  - test (French-English code-mixed by Equivalence Constraint Theory):
- fr-ml:
  - train (English, German, French monolingual):
  - test (French-English code-mixed by Matrix Language Theory):

### Other Statistics

#### Average Sentence Length

- German
  - train:
  - test:
- French
  - train:
  - test:

#### Label Split

- train:
  - 0:
  - 1:
- test:
  - 0:
  - 1:

## Dataset Creation

### Curation Rationale

Using the XNLI Parallel Corpus, we generated a code-mixed corpus with the CodeMixed Text Generator. The XNLI Parallel Corpus is available here: https://huggingface.co/datasets/nanakonoda/xnli_parallel. It was created from the XNLI corpus; more information is available in the dataset card for the XNLI Parallel Corpus.

Here is the link and citation for the original CodeMixed Text Generator paper: https://github.com/microsoft/CodeMixed-Text-Generator

```
@inproceedings{rizvi-etal-2021-gcm,
    title = "{GCM}: A Toolkit for Generating Synthetic Code-mixed Text",
    author = "Rizvi, Mohd Sanad Zaki and Srinivasan, Anirudh and Ganu, Tanuja and Choudhury, Monojit and Sitaram, Sunayana",
    booktitle = "Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: System Demonstrations",
    month = apr,
    year = "2021",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2021.eacl-demos.24",
    pages = "205--211",
    abstract = "Code-mixing is common in multilingual communities around the world, and processing it is challenging due to the lack of labeled and unlabeled data. We describe a tool that can automatically generate code-mixed data given parallel data in two languages. We implement two linguistic theories of code-mixing, the Equivalence Constraint theory and the Matrix Language theory to generate all possible code-mixed sentences in the language-pair, followed by sampling of the generated data to generate natural code-mixed sentences. The toolkit provides three modes: a batch mode, an interactive library mode and a web-interface to address the needs of researchers, linguists and language experts. The toolkit can be used to generate unlabeled text data for pre-trained models, as well as visualize linguistic theories of code-mixing. We plan to release the toolkit as open source and extend it by adding more implementations of linguistic theories, visualization techniques and better sampling techniques. We expect that the release of this toolkit will help facilitate more research in code-mixing in diverse language pairs.",
}
```

### Source Data

XNLI Parallel Corpus: https://huggingface.co/datasets/nanakonoda/xnli_parallel

#### Original Source Data

The XNLI Parallel Corpus was created using the XNLI Corpus: https://github.com/facebookresearch/XNLI

Here is the citation for the original XNLI paper:

```
@InProceedings{conneau2018xnli,
    author = "Conneau, Alexis and Rinott, Ruty and Lample, Guillaume and Williams, Adina and Bowman, Samuel R. and Schwenk, Holger and Stoyanov, Veselin",
    title = "XNLI: Evaluating Cross-lingual Sentence Representations",
    booktitle = "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
    year = "2018",
    publisher = "Association for Computational Linguistics",
    location = "Brussels, Belgium",
}
```

#### Initial Data Collection and Normalization

We removed all punctuation from the XNLI Parallel Corpus except apostrophes.

#### Who are the source language producers?

N/A

### Annotations

#### Annotation process

N/A

#### Who are the annotators?

N/A

### Personal and Sensitive Information

N/A

## Considerations for Using the Data

### Social Impact of Dataset

N/A

### Discussion of Biases

N/A

### Other Known Limitations

N/A

## Additional Information

### Dataset Curators

N/A

### Licensing Information

N/A

### Citation Information

### Contributions

N/A
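Since the statistics fields above are left blank, they can be computed once the data is loaded. A sketch, where the repository id is a placeholder (this card does not state it) and the config/field names follow the Data Splits and Data Fields sections:

```python
from collections import Counter
from datasets import load_dataset

ds = load_dataset("nanakonoda/xnli_cm", "de-ec")  # placeholder repo id
print(Counter(ds["train"]["label"]))              # label split (0: spoken, 1: written)

avg_len = sum(len(t.split()) for t in ds["test"]["text"]) / len(ds["test"])
print(f"average sentence length (test): {avg_len:.1f} tokens")
```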
false
# Dataset Card for Stack Overflow Chat Dutch

## Dataset Description

- **Homepage:** N/A
- **Repository:** N/A
- **Paper:** N/A
- **Leaderboard:** N/A
- **Point of Contact:** Bram Vanroy

### Dataset Summary

This dataset contains 56,964 machine-generated conversations between an AI assistant and a (fake) "Human" in Dutch, specifically in the domain of programming (Stack Overflow). They are translations of [Baize](https://github.com/project-baize/baize-chatbot/tree/6790946f638d60fcaf397574189124f15792f35a/data)'s machine-generated answers to the [Stack Overflow](https://huggingface.co/datasets/pacovaldez/stackoverflow-questions) dataset.

☕ [**Want to help me out?**](https://www.buymeacoffee.com/bramvanroy) Translating the data with the OpenAI API, and prompt testing, cost me 💸$133.60💸. If you like this dataset, please consider [buying me a coffee](https://www.buymeacoffee.com/bramvanroy) to offset a portion of this cost, I appreciate it a lot! ☕

### Languages

- Dutch

## Dataset Structure

### Data Instances

```python
{
    'id': 9,
    'input': 'Het gesprek tussen de mens en de AI-assistent.\n'
             '[|Human|] Ik heb wat hulp nodig met Spark SQL union en het ordenen '
             'van kolommen.\n'
             '[|AI|] Natuurlijk, ik help je graag. Waar heb je specifiek hulp '
             'bij nodig?\n'
             '[|Human|] Ik wil de output van een UNION-query ordenen op een '
             'kolom die niet aanwezig is in de SELECT-instructie. Is dat '
             'mogelijk?\n'
             '[|AI|] Ja, het is mogelijk met Spark SQL. Je kunt de ordinale '
             'positie van de kolom in de ORDER BY-clausule gebruiken in plaats '
             'van de kolomnaam. Als je bijvoorbeeld wilt ordenen op de derde '
             'kolom in de SELECT-instructie, kun je ORDER BY 3 gebruiken.\n'
             '[|Human|] Dat is nuttig. Kan ik ook een alias gebruiken in de '
             'ORDER BY-clausule?\n'
             '[|AI|] Ja, je kunt een alias gebruiken in de ORDER BY-clausule. De '
             'alias moet worden gedefinieerd in de SELECT-instructie. Als je '
             'bijvoorbeeld de derde kolom als "col3" alias geeft, kun je ordenen '
             'met ORDER BY col3.\n'
             '[|Human|] Oké, ik denk dat ik het begrijp. Kan ik dezelfde alias '
             'gebruiken voor kolommen uit beide tabellen in de UNION?\n'
             '[|AI|] Nee, je kunt dezelfde alias niet gebruiken voor kolommen '
             'uit beide tabellen in de UNION. Elke tabel in de UNION moet een '
             'unieke alias hebben voor de kolommen.\n'
             '[|Human|] ',
    'topic': 'Spark SQL UNION - ORDER BY kolom niet in SELECT'
}
```

### Data Fields

- **id**: the ID of the item. The following 82 IDs are not included because they could not be translated: `[1713, 1937, 1960, 4326, 4356, 8357, 8542, 8827, 9137, 9782, 11560, 11961, 12244, 12362, 12488, 13259, 13621, 14445, 14835, 15006, 17746, 18808, 19285, 19426, 19491, 21270, 21661, 22098, 23352, 23840, 23869, 25148, 25928, 27102, 27856, 28387, 29942, 30041, 30251, 32396, 32742, 32941, 33628, 34116, 34648, 34859, 35977, 35987, 36035, 36456, 37028, 37238, 37640, 38107, 38735, 39015, 40984, 41115, 41567, 42397, 43219, 43783, 44599, 44980, 45239, 47676, 48922, 49534, 50282, 50683, 50804, 50919, 51076, 51211, 52000, 52183, 52489, 52595, 53884, 54726, 55795, 56992]`
- **input**: the machine-generated conversation between the AI and the "Human". Always starts with `Het gesprek tussen de mens en de AI-assistent.\n` and has at least one occurrence of both `[|AI|]` and `[|Human|]`.
- **topic**: the topic description

## Dataset Creation

Both the conversations and the topics were translated with OpenAI's API for `gpt-3.5-turbo`, with `max_tokens=1024, temperature=0` as parameters.

The prompt template to translate the input is (where `src_lang` was English and `tgt_lang` Dutch):

```python
CONVERSATION_TRANSLATION_PROMPT = """You are asked to translate a conversation between an AI assistant and a human from {src_lang} into {tgt_lang}.

Here are the requirements that you should adhere to:
1. maintain the format: the conversation consists of the AI (marked as `[|AI|]`) and the human (`[|Human|]`) talking in turns and responding to each other;
2. do not translate the speaker identifiers `[|AI|]` and `[|Human|]` but always copy them into the translation in appropriate places;
3. ensure accurate translation and keep the correctness of the conversation;
4. make sure that text is fluent to read and does not contain grammatical errors. Use standard {tgt_lang} without regional bias;
5. translate the human's text using informal, but standard, language;
6. make sure to avoid biases (such as gender bias, grammatical bias, social bias);
7. if the human asks to correct grammar mistakes or spelling mistakes then you have to generate a similar mistake in {tgt_lang}, and then also generate a corrected output version for the AI in {tgt_lang};
8. if the human asks to translate text from one to another language, then you only translate the human's question to {tgt_lang} but you keep the translation that the AI provides in the language that the human requested;
9. do not translate code fragments but copy them as they are. If there are English examples, variable names or definitions in code fragments, keep them in English.

Now translate the following conversation with the requirements set out above. Do not provide an explanation and do not add anything else.\n\n"""
```

The prompt to translate the topic is:

```python
TOPIC_TRANSLATION_PROMPT = "Translate the following title of a conversation from {src_lang} to {tgt_lang} in a succinct," \
                           " summarizing manner. Translate accurately and formally. Do not provide any explanation" \
                           " about the translation and do not include the original title.\n\n"
```

The system message was:

```
You are a helpful assistant that translates English to Dutch to the requirements that are given to you.
```

Note that 82 items (0.1%) were not successfully translated: the translation was missing the AI identifier `[|AI|]` and/or the human one `[|Human|]`. The IDs for the missing items are `[1713, 1937, 1960, 4326, 4356, 8357, 8542, 8827, 9137, 9782, 11560, 11961, 12244, 12362, 12488, 13259, 13621, 14445, 14835, 15006, 17746, 18808, 19285, 19426, 19491, 21270, 21661, 22098, 23352, 23840, 23869, 25148, 25928, 27102, 27856, 28387, 29942, 30041, 30251, 32396, 32742, 32941, 33628, 34116, 34648, 34859, 35977, 35987, 36035, 36456, 37028, 37238, 37640, 38107, 38735, 39015, 40984, 41115, 41567, 42397, 43219, 43783, 44599, 44980, 45239, 47676, 48922, 49534, 50282, 50683, 50804, 50919, 51076, 51211, 52000, 52183, 52489, 52595, 53884, 54726, 55795, 56992]`.

### Source Data

#### Initial Data Collection and Normalization

Initial data generation by [Baize](https://github.com/project-baize/baize-chatbot), which in turn relied on [Stack Overflow Questions](https://huggingface.co/datasets/pacovaldez/stackoverflow-questions).

#### Who are the source language producers?

Stack Overflow users and OpenAI's GPT models (via [Baize](https://github.com/project-baize/baize-chatbot)).
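For reference, a reconstruction sketch of the translation call described above, using the 2023-era `openai` 0.x SDK and the `CONVERSATION_TRANSLATION_PROMPT` template defined earlier in this card. The authors' exact client code was not published, so treat this as an assumption:

```python
import openai  # openai<1.0 interface

def translate_conversation(conversation: str, src_lang: str = "English", tgt_lang: str = "Dutch") -> str:
    prompt = CONVERSATION_TRANSLATION_PROMPT.format(src_lang=src_lang, tgt_lang=tgt_lang)
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        temperature=0,
        max_tokens=1024,
        messages=[
            {"role": "system", "content": "You are a helpful assistant that translates English "
                                          "to Dutch to the requirements that are given to you."},
            {"role": "user", "content": prompt + conversation},
        ],
    )
    return response["choices"][0]["message"]["content"]
```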
## Considerations for Using the Data

Note that neither the translations in this new dataset, nor the machine-generated dialogues in the original [Baize](https://github.com/project-baize/baize-chatbot) dataset, have been verified by humans.

### Discussion of Biases

As with any machine-generated texts, users should be aware of potential biases that are included in this dataset. Although the prompt specifically includes `make sure to avoid biases (such as gender bias, grammatical bias, social bias)`, the impact of such a command is of course not known. It is likely that biases remain in the dataset, so use it with caution.

### Other Known Limitations

The translation quality has not been verified. Use at your own risk!

### Licensing Information

Licensing info for [`Stack Overflow Questions`](https://huggingface.co/datasets/pacovaldez/stackoverflow-questions) is listed as Apache 2.0. If you use the current dataset, you should also adhere to the original license.

This text was generated (either in part or in full) with GPT-3 (`gpt-3.5-turbo`), OpenAI’s large-scale language-generation model. Upon generating draft language, the author reviewed, edited, and revised the language to their own liking and takes ultimate responsibility for the content of this publication. If you use this dataset, you must also follow the [Sharing](https://openai.com/policies/sharing-publication-policy) and [Usage](https://openai.com/policies/usage-policies) policies. As clearly stated in their [Terms of Use](https://openai.com/policies/terms-of-use), specifically 2c.iii, "[you may not] use output from the Services to develop models that compete with OpenAI". That means that you cannot use this dataset to build models that are intended to commercially compete with OpenAI. [As far as I am aware](https://law.stackexchange.com/questions/93308/licensing-material-generated-with-chatgpt), that is a specific restriction that should serve as an addendum to the current license.

### Citation Information

If you use this data set, please cite:

Bram Vanroy. (2023). Stack Overflow Chat Dutch [Data set]. Hugging Face. https://doi.org/10.57967/HF/0529

```bibtex
@misc{https://doi.org/10.57967/hf/0529,
  doi = {10.57967/hf/0529},
  url = {https://huggingface.co/datasets/BramVanroy/stackoverflow-chat-dutch},
  author = {{Bram Vanroy}},
  title = {{S}tack {O}verflow {C}hat {D}utch},
  publisher = {Hugging Face},
  year = {2023}
}
```

### Contributions

Thanks to [Baize](https://github.com/project-baize/baize-chatbot) for the initial machine-generated dialogues, based on [Stack Overflow Questions](https://huggingface.co/datasets/pacovaldez/stackoverflow-questions).