id
stringlengths
2
115
lastModified
stringlengths
24
24
tags
list
author
stringlengths
2
42
description
stringlengths
0
6.67k
citation
stringlengths
0
10.7k
likes
int64
0
3.66k
downloads
int64
0
8.89M
created
timestamp[us]
card
stringlengths
11
977k
card_len
int64
11
977k
embeddings
list
jonathan-roberts1/MLRSNet
2023-04-03T16:34:12.000Z
[ "license:cc-by-4.0", "region:us" ]
jonathan-roberts1
null
null
0
3
2023-02-27T18:19:58
--- dataset_info: features: - name: image dtype: image - name: label sequence: class_label: names: '0': airplane '1': airport '2': bare soil '3': baseball diamond '4': basketball court '5': beach '6': bridge '7': buildings '8': cars '9': chaparral '10': cloud '11': containers '12': crosswalk '13': dense residential area '14': desert '15': dock '16': factory '17': field '18': football field '19': forest '20': freeway '21': golf course '22': grass '23': greenhouse '24': gully '25': habor '26': intersection '27': island '28': lake '29': mobile home '30': mountain '31': overpass '32': park '33': parking lot '34': parkway '35': pavement '36': railway '37': railway station '38': river '39': road '40': roundabout '41': runway '42': sand '43': sea '44': ships '45': snow '46': snowberg '47': sparse residential area '48': stadium '49': swimming pool '50': tanks '51': tennis court '52': terrace '53': track '54': trail '55': transmission tower '56': trees '57': water '58': wetland '59': wind turbine splits: - name: train num_bytes: 1327782862.875 num_examples: 109161 download_size: 1304951717 dataset_size: 1327782862.875 license: cc-by-4.0 --- # Dataset Card for "MLRSNet" ## Dataset Description - **Paper:** [MLRSNet: A multi-label high spatial resolution remote sensing dataset for semantic scene understanding](https://www.sciencedirect.com/science/article/pii/S0924271620302677) ### Licensing Information CC BY 4.0 ## Citation Information [MLRSNet: A multi-label high spatial resolution remote sensing dataset for semantic scene understanding](https://www.sciencedirect.com/science/article/pii/S0924271620302677) ``` @article{qi2020mlrsnet, title = {MLRSNet: A multi-label high spatial resolution remote sensing dataset for semantic scene understanding}, author = {Qi, Xiaoman and Zhu, Panpan and Wang, Yuebin and Zhang, Liqiang and Peng, Junhuan and Wu, Mengfan and Chen, Jialong and Zhao, Xudong and Zang, Ning and Mathiopoulos, P Takis}, year = 2020, journal = {ISPRS Journal of Photogrammetry and Remote Sensing}, publisher = {Elsevier}, volume = 169, pages = {337--350} } ```
2,797
[ [ -0.04144287109375, -0.0204925537109375, 0.005420684814453125, 0.00556182861328125, -0.00876617431640625, -0.02783203125, -0.0034389495849609375, -0.033782958984375, 0.00365447998046875, 0.0338134765625, -0.058502197265625, -0.059112548828125, -0.0499267578125, ...
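A minimal usage sketch for the MLRSNet record above (assuming the standard `datasets` API; the repo id and the 60-name multi-label layout are taken from the card):

```python
from datasets import load_dataset

# Load MLRSNet and decode the multi-label annotations of the first example.
dataset = load_dataset("jonathan-roberts1/MLRSNet", split="train")
label_feature = dataset.features["label"].feature  # ClassLabel with the 60 names above
example = dataset[0]
print([label_feature.int2str(i) for i in example["label"]])
```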
jonathan-roberts1/MultiScene
2023-04-03T16:15:59.000Z
[ "task_categories:image-classification", "task_categories:zero-shot-image-classification", "license:mit", "region:us" ]
jonathan-roberts1
null
null
0
3
2023-02-28T16:13:48
--- dataset_info: features: - name: image dtype: image - name: label sequence: class_label: names: '0': apron '1': baseball field '2': basketball field '3': beach '4': bridge '5': cemetery '6': commercial '7': farmland '8': woodland '9': golf course '10': greenhouse '11': helipad '12': lake or pond '13': oil field '14': orchard '15': parking lot '16': park '17': pier '18': port '19': quarry '20': railway '21': residential '22': river '23': roundabout '24': runway '25': soccer '26': solar panel '27': sparse shrub '28': stadium '29': storage tank '30': tennis court '31': train station '32': wastewater plant '33': wind turbine '34': works '35': sea splits: - name: train num_bytes: 867506522 num_examples: 14000 download_size: 867005851 dataset_size: 867506522 license: mit task_categories: - image-classification - zero-shot-image-classification --- # Dataset Card for "MultiScene" ## Dataset Description - **Paper** [MultiScene: A Large-scale Dataset and Benchmark for Multi-scene Recognition in Single Aerial Images](https://ieeexplore.ieee.org/iel7/36/4358825/09537917.pdf) - **Split** Clean ### Split Information This HuggingFace dataset repository contains just the 'Clean' split. ### Licensing Information MIT. ## Citation Information [MultiScene: A Large-scale Dataset and Benchmark for Multi-scene Recognition in Single Aerial Images](https://ieeexplore.ieee.org/iel7/36/4358825/09537917.pdf) ``` @article{hua2021multiscene, title = {MultiScene: A Large-scale Dataset and Benchmark for Multi-scene Recognition in Single Aerial Images}, author = {Hua, Y. and Mou, L. and Jin, P. and Zhu, X. X.}, year = {in press}, journal = {IEEE Transactions on Geoscience and Remote Sensing} } ```
2,145
[ [ -0.05267333984375, -0.00366973876953125, 0.005496978759765625, 0.0110321044921875, -0.0146636962890625, 0.0034275054931640625, -0.00978851318359375, -0.0267181396484375, 0.016876220703125, 0.03692626953125, -0.04888916015625, -0.043731689453125, -0.0251007080078...
Javiai/failures-3D-print
2023-10-06T11:26:23.000Z
[ "task_categories:object-detection", "size_categories:n<1K", "license:unknown", "region:us" ]
Javiai
null
null
0
3
2023-03-02T18:13:10
--- license: unknown dataset_info: features: - name: image_id dtype: int64 - name: image dtype: image - name: width dtype: int64 - name: height dtype: int64 - name: objects struct: - name: bbox sequence: sequence: int64 - name: categories sequence: int64 splits: - name: train num_bytes: 3878997 num_examples: 73 download_size: 3549033 dataset_size: 3878997 configs: - config_name: default data_files: - split: train path: data/train-* task_categories: - object-detection size_categories: - n<1K --- # Failures in 3D Printing Dataset This is a small dataset of images of 3D-printing failures. The idea of this dataset is to train an object-detection model for failure detection in 3D printing. Four categories are annotated in the images: - **Error**: any error in the part other than the known error type called spaghetti - **Extrusor**: the base of the extruder - **Part**: the piece that is being printed - **Spagheti**: a type of error produced when the extruder prints in the air ## Structure The structure of the dataset is - **image_id:** id of the image - **image:** image instance in PIL format - **width:** width of the image in pixels - **height:** height of the image in pixels - **objects:** bounding boxes in the image - **bbox:** coordinates of the bounding box, given as [x_center, y_center, bbox width, bbox height] - **categories:** category of the bounding box. The categories are 0: error, 1: extrusor, 2: part and 3: spaghetti ## Download the dataset ```python from datasets import load_dataset dataset = load_dataset('Javiai/failures-3D-print') ``` ## Show the Bounding Boxes ```python from PIL import ImageDraw image = dataset["train"][0]["image"] annotations = dataset["train"][0]["objects"] draw = ImageDraw.Draw(image) categories = ['error','extrusor','part','spagheti'] id2label = {index: x for index, x in enumerate(categories, start=0)} label2id = {v: k for k, v in id2label.items()} for i in range(len(annotations["categories"])): box = annotations["bbox"][i] class_idx = annotations["categories"][i] x, y, w, h = tuple(box) draw.rectangle((x - w/2, y - h/2, x + w/2, y + h/2), outline="red", width=1) draw.text((x - w/2, y - h/2), id2label[class_idx], fill="white") image ```
2,419
[ [ -0.0418701171875, -0.0218353271484375, 0.0013914108276367188, -0.00475311279296875, -0.0171051025390625, -0.0186004638671875, 0.03497314453125, -0.01456451416015625, 0.010162353515625, 0.0401611328125, -0.04205322265625, -0.0282135009765625, -0.02667236328125, ...
IlyaGusev/yandex_q_full
2023-03-07T20:30:24.000Z
[ "region:us" ]
IlyaGusev
null
null
1
3
2023-03-06T18:17:41
--- dataset_info: features: - name: id dtype: string - name: id2 dtype: int64 - name: title dtype: string - name: text_plain dtype: string - name: text_html dtype: string - name: author dtype: string - name: negative_votes dtype: int32 - name: positive_votes dtype: int32 - name: quality dtype: int8 - name: views dtype: uint64 - name: votes dtype: int32 - name: approved_answer dtype: string - name: timestamp dtype: uint64 - name: tags sequence: string - name: answers sequence: - name: id dtype: string - name: id2 dtype: int64 - name: text_plain dtype: string - name: text_html dtype: string - name: author dtype: string - name: negative_votes dtype: int32 - name: positive_votes dtype: int32 - name: votes dtype: int32 - name: quality dtype: int8 - name: views dtype: uint64 - name: reposts dtype: int32 - name: timestamp dtype: uint64 splits: - name: train num_bytes: 5468460217 num_examples: 1297670 download_size: 1130317937 dataset_size: 5468460217 --- Based on https://huggingface.co/datasets/its5Q/yandex-q, parsed from full.jsonl.gz.
1,269
[ [ -0.025115966796875, -0.024444580078125, 0.04620361328125, 0.02734375, -0.00823974609375, -0.00809478759765625, 0.00023734569549560547, -0.036407470703125, 0.053192138671875, 0.039093017578125, -0.07598876953125, -0.07257080078125, -0.018310546875, 0.00675964...
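A minimal sketch for the yandex_q_full record above (streaming is an assumption, used to avoid the ~1.1 GB download; the `answers` sequence of structs is exposed by `datasets` as a dict of lists):

```python
from datasets import load_dataset

# Stream one row and inspect a question together with its answers.
ds = load_dataset("IlyaGusev/yandex_q_full", split="train", streaming=True)
row = next(iter(ds))
print(row["title"])
print(len(row["answers"]["text_plain"]), "answers")
```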
codesj/vira-intents-live
2023-03-08T23:14:55.000Z
[ "region:us" ]
codesj
null
null
0
3
2023-03-08T23:14:52
--- dataset_info: features: - name: text dtype: string - name: label dtype: int64 splits: - name: train num_bytes: 536982 num_examples: 7434 - name: validation num_bytes: 227106 num_examples: 3140 download_size: 348952 dataset_size: 764088 --- # Dataset Card for "vira-intents-live" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
455
[ [ -0.0167388916015625, -0.0276336669921875, 0.0160064697265625, 0.018463134765625, -0.0187225341796875, -0.012786865234375, 0.01605224609375, -0.006893157958984375, 0.0751953125, 0.039459228515625, -0.065185546875, -0.050750732421875, -0.020233154296875, -0.02...
EJaalborg2022/beer_reviews_label_drift_neg
2023-03-10T20:58:48.000Z
[ "region:us" ]
EJaalborg2022
null
null
0
3
2023-03-09T22:09:39
--- dataset_info: features: - name: prediction_ts dtype: float32 - name: beer_ABV dtype: float32 - name: beer_name dtype: string - name: beer_style dtype: string - name: review_appearance dtype: float32 - name: review_palette dtype: float32 - name: review_taste dtype: float32 - name: review_aroma dtype: float32 - name: text dtype: string - name: label dtype: class_label: names: '0': negative '1': neutral '2': positive splits: - name: training num_bytes: 6908323 num_examples: 9000 - name: validation num_bytes: 970104 num_examples: 1260 - name: production num_bytes: 21305419 num_examples: 27742 download_size: 16954616 dataset_size: 29183846 --- # Dataset Card for "beer_reviews_label_drift_neg" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
975
[ [ -0.04205322265625, -0.029815673828125, 0.013153076171875, 0.0223236083984375, -0.0138702392578125, 0.0009226799011230469, 0.0231475830078125, 0.002460479736328125, 0.062744140625, 0.0240936279296875, -0.07098388671875, -0.068359375, -0.027740478515625, -0.01...
Deysi/spanish-chinese
2023-03-11T18:08:09.000Z
[ "task_categories:translation", "size_categories:10M<n<100M", "language:es", "language:zh", "license:apache-2.0", "language", "translation", "traducción", "idiomas", "chino", "chinese", "español", "spanish", "Universidad de La Rioja", "region:us" ]
Deysi
null
null
2
3
2023-03-11T16:22:23
--- dataset_info: features: - name: spanish dtype: string - name: chinese dtype: string splits: - name: train num_bytes: 3048111118.5537825 num_examples: 9092567 - name: test num_bytes: 762027863.4462174 num_examples: 2273142 download_size: 2473454462 dataset_size: 3810138982 license: apache-2.0 task_categories: - translation language: - es - zh tags: - language - translation - traducción - idiomas - chino - chinese - español - spanish - Universidad de La Rioja pretty_name: Spanish and Chinese aligned sentences size_categories: - 10M<n<100M --- # Dataset Card for "spanish-chinese" All sentences were extracted from the United Nations Parallel Corpus v1.0. The parallel corpus consists of manually translated United Nations documents for the six official UN languages, Arabic, Chinese, English, French, Russian, and Spanish. The corpus is freely available for download at https://conferences.unite.un.org/UNCorpus under the terms of use outlined in the attached DISCLAIMER. The original individual documents are available at the United Nations Official Document System (ODS) at http://ods.un.org. Reference: Ziemski, M., Junczys-Dowmunt, M., and Pouliquen, B., (2016), The United Nations Parallel Corpus, Language Resources and Evaluation (LREC’16), Portorož, Slovenia, May 2016.
1,325
[ [ -0.016143798828125, 0.0037326812744140625, 0.0192718505859375, 0.0406494140625, -0.0218353271484375, 0.01206207275390625, -0.029449462890625, -0.035247802734375, 0.01113128662109375, 0.03240966796875, -0.03460693359375, -0.057647705078125, -0.0300140380859375, ...
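A minimal sketch for the spanish-chinese record above (streaming is an assumption to avoid the ~2.5 GB download; the column names come from the card):

```python
from datasets import load_dataset

# Stream one aligned Spanish/Chinese sentence pair.
ds = load_dataset("Deysi/spanish-chinese", split="train", streaming=True)
pair = next(iter(ds))
print(pair["spanish"])
print(pair["chinese"])
```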
Shamima/jujutsu-kaisen-maki-zenin
2023-03-12T06:09:37.000Z
[ "region:us" ]
Shamima
null
null
0
3
2023-03-12T05:28:28
--- dataset_info: features: - name: image dtype: image splits: - name: train num_bytes: 395471.0 num_examples: 11 download_size: 0 dataset_size: 395471.0 --- # Dataset Card for "jujutsu-kaisen-maki-zenin" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
361
[ [ -0.036102294921875, -0.01326751708984375, 0.0236053466796875, -0.00003147125244140625, -0.034942626953125, -0.0034732818603515625, 0.005321502685546875, -0.0036602020263671875, 0.073486328125, 0.0223846435546875, -0.0660400390625, -0.05523681640625, -0.038970947...
recmeapp/thumbs-up
2023-03-13T08:56:10.000Z
[ "task_categories:text-classification", "size_categories:1M<n<10M", "language:en", "code", "region:us" ]
recmeapp
null
null
0
3
2023-03-13T06:05:49
--- task_categories: - text-classification language: - en tags: - code size_categories: - 1M<n<10M --- # Dataset Card for thumbs-up ## Dataset Description - **Homepage:** - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary This dataset contains more than 2.1 million negative user reviews (reviews with 1 or 2 ratings) from 9775 apps across 48 categories from Google Play. Moreover, the number of votes that each review received within a month is also recorded. Reviews that received more votes can be considered more important. ### Supported Tasks and Leaderboards Detecting app issues proactively by identifying prominent app reviews. ### Languages English ## How to use the dataset? ```python from datasets import load_dataset import pandas as pd # Load the dataset dataset = load_dataset("recmeapp/thumbs-up") # Convert to Pandas dfs = {split: dset.to_pandas() for split, dset in dataset.items()} dataset_df = pd.concat([dfs["train"], dfs["validation"], dfs["test"]]) # How many rows are there in the thumbs-up dataset? print(f'There are {len(dataset_df)} rows in the thumbs-up dataset.') # How many unique apps are there in the thumbs-up dataset? print(f'There are {len(dataset_df["app_name"].unique())} unique apps.') # How many categories are there in the thumbs-up dataset? print(f'There are {len(dataset_df["category"].unique())} unique categories.') # What is the highest vote a review received in the thumbs-up dataset? print(f'The highest vote a review received is {max(dataset_df["votes"])}.') ``` ## Usage This dataset was used for training PPrior, a novel framework proposed in [this paper](https://ieeexplore.ieee.org/abstract/document/10020586). You can find the implementation in this [GitHub repository](https://github.com/MultifacetedNLP/PPrior).
1,833
[ [ -0.036407470703125, -0.0277252197265625, 0.0037860870361328125, 0.0243072509765625, -0.0157318115234375, -0.0022106170654296875, 0.0016698837280273438, 0.01155853271484375, 0.0265655517578125, 0.041717529296875, -0.047515869140625, -0.0657958984375, -0.021881103...
x1101/nsfw
2023-03-13T07:39:53.000Z
[ "region:us" ]
x1101
null
null
0
3
2023-03-13T07:33:58
Entry not found
15
[ [ -0.02142333984375, -0.01495361328125, 0.05718994140625, 0.0288238525390625, -0.035064697265625, 0.046539306640625, 0.052520751953125, 0.005062103271484375, 0.0513916015625, 0.016998291015625, -0.052093505859375, -0.014984130859375, -0.060394287109375, 0.0379...
foldl/torch-forum
2023-03-15T12:52:42.000Z
[ "task_categories:question-answering", "task_categories:text-classification", "task_categories:text-generation", "size_categories:1K<n<10K", "language:en", "license:cc-by-sa-4.0", "code", "region:us" ]
foldl
null
null
1
3
2023-03-14T14:24:22
--- dataset_info: features: - name: title dtype: string - name: category dtype: string - name: posts list: - name: contents dtype: string - name: isAccepted dtype: bool - name: likes dtype: int64 - name: poster dtype: string - name: answered dtype: bool splits: - name: train num_bytes: 1540936 num_examples: 706 download_size: 734399 dataset_size: 1540936 license: cc-by-sa-4.0 task_categories: - question-answering - text-classification - text-generation language: - en tags: - code pretty_name: Pytorch Forums Parsed size_categories: - 1K<n<10K --- # Dataset Card for "torch-forum" Dataset structure ``` { title:str category:str, posts:List[{ poster:str, contents:str, likes:int, isAccepted:bool }] } ```
881
[ [ -0.038848876953125, -0.0195159912109375, 0.001827239990234375, -0.00107574462890625, -0.0706787109375, 0.02752685546875, -0.00061798095703125, 0.0250091552734375, 0.050537109375, 0.031341552734375, -0.0172271728515625, -0.0792236328125, -0.052581787109375, -...
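A minimal sketch for the torch-forum record above, following the structure block in the card (field names are taken from it; the accepted-answer filter is illustrative):

```python
from datasets import load_dataset

# Each thread carries a list of posts; keep the contents of accepted ones.
ds = load_dataset("foldl/torch-forum", split="train")
thread = ds[0]
accepted = [post["contents"] for post in thread["posts"] if post["isAccepted"]]
print(thread["title"], "-", len(accepted), "accepted post(s)")
```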
mxeval/mathqa-x
2023-03-20T19:21:12.000Z
[ "task_categories:text-generation", "size_categories:1K<n<10K", "language:en", "license:apache-2.0", "mathqa-x", "mathqa", "mxeval", "arxiv:2210.14868", "region:us" ]
mxeval
A collection of execution-based multi-lingual benchmarks for code generation.
@article{mbxp_athiwaratkun2022, title = {Multi-lingual Evaluation of Code Generation Models}, author = {Athiwaratkun, Ben and Gouda, Sanjay Krishna and Wang, Zijian and Li, Xiaopeng and Tian, Yuchen and Tan, Ming and Ahmad, Wasi Uddin and Wang, Shiqi and Sun, Qing and Shang, Mingyue and Gonugondla, Sujan Kumar and Ding, Hantian and Kumar, Varun and Fulton, Nathan and Farahani, Arash and Jain, Siddhartha and Giaquinto, Robert and Qian, Haifeng and Ramanathan, Murali Krishna and Nallapati, Ramesh and Ray, Baishakhi and Bhatia, Parminder and Sengupta, Sudipta and Roth, Dan and Xiang, Bing}, doi = {10.48550/ARXIV.2210.14868}, url = {https://arxiv.org/abs/2210.14868}, keywords = {Machine Learning (cs.LG), Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} }
1
3
2023-03-14T21:41:40
--- license: apache-2.0 task_categories: - text-generation language: - en tags: - mathqa-x - mathqa - mxeval pretty_name: mbxp size_categories: - 1K<n<10K --- # MathQA-X ## Table of Contents - [MathQA-X](#MathQA-X) - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#related-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Executional Correctness](#execution) - [Execution Example](#execution-example) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Social Impact of Dataset](#social-impact-of-dataset) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) # MathQA-X ## Dataset Description - **Repository:** [GitHub Repository](https://github.com/amazon-science/mbxp-exec-eval) - **Paper:** [Multi-lingual Evaluation of Code Generation Models](https://openreview.net/forum?id=Bo7eeXm6An8) ### Dataset Summary This repository contains code to perform execution-based multi-lingual evaluation of code generation capabilities and the corresponding data, namely a multi-lingual benchmark MBXP, multi-lingual MathQA and multi-lingual HumanEval. <br>Results and findings can be found in the paper ["Multi-lingual Evaluation of Code Generation Models"](https://arxiv.org/abs/2210.14868). ### Related Tasks and Leaderboards * [Multi-HumanEval](https://huggingface.co/datasets/mxeval/multi-humaneval) * [MBXP](https://huggingface.co/datasets/mxeval/mbxp) * [MathQA-X](https://huggingface.co/datasets/mxeval/mathqa-x) ### Languages The programming problems are written in multiple programming languages and contain English natural text in comments and docstrings. ## Dataset Structure To look up the currently supported datasets: ```python get_dataset_config_names("mxeval/mathqa-x") ['python', 'java', 'javascript'] ``` To load a specific dataset and language: ```python from datasets import load_dataset load_dataset("mxeval/mathqa-x", "python") DatasetDict({ test: Dataset({ features: ['task_id', 'language', 'prompt', 'test', 'entry_point', 'canonical_solution'], num_rows: 1883 }) }) ``` ### Data Instances An example of a dataset instance: ```python { "task_id": "MathQA/0", "language": "python", "prompt": "def problem():\n \"\"\"\n a shopkeeper sold an article offering a discount of 5 % and earned a profit of 31.1 % . what would have been the percentage of profit earned if no discount had been offered ? \nn0 = 5.0 n1 = 31.1\n \"\"\"\n", "test": "import math\ndef compare(x, y):\n return math.fabs(x-y)<1e-8\ncandidate = problem\nassert compare(candidate(), 38.0)\ndef check(x): pass\n", "entry_point": "problem", "canonical_solution": " n0 = 5.0\n n1 = 31.1\n t0 = n1 + 100.0\n t1 = 100.0 - n0\n t2 = t0 * 100.0\n t3 = t2 / t1\n answer = t3 - 100.0\n return answer\n" } ``` ### Data Fields - `task_id`: identifier for the data sample - `prompt`: input for the model containing function header and docstrings - `canonical_solution`: solution for the problem in the `prompt` - `description`: task description - `test`: contains function to test generated code for correctness - `entry_point`: entry point for test - `language`: programming language identifier to call the appropriate subprocess call for program execution ### Data Splits - MathQA-X - Python - Java - Javascript ## Dataset Creation ### Curation Rationale Since code generation models are often trained on dumps of GitHub, a dataset not included in the dump was necessary to properly evaluate the model. However, since this dataset was published on GitHub, it is likely to be included in future dumps. ### Personal and Sensitive Information None. ### Social Impact of Dataset With this dataset, code-generating models can be better evaluated, which leads to fewer issues being introduced when such models are used. ## Execution ### Execution Example Install the repo [mbxp-exec-eval](https://github.com/amazon-science/mbxp-exec-eval) to execute generations or canonical solutions for the prompts from this dataset. ```python >>> from datasets import load_dataset >>> from mxeval.execution import check_correctness >>> mathqa_python = load_dataset("mxeval/mathqa-x", "python", split="test") >>> example_problem = mathqa_python[0] >>> check_correctness(example_problem, example_problem["canonical_solution"], timeout=20.0) {'task_id': 'MathQA/0', 'passed': True, 'result': 'passed', 'completion_id': None, 'time_elapsed': 9.673357009887695} ``` ### Considerations for Using the Data Make sure to sandbox the execution environment. ### Dataset Curators AWS AI Labs ### Licensing Information [LICENSE](https://huggingface.co/datasets/mxeval/mathqa-x/blob/main/mathqa-x-LICENSE) <br> [THIRD PARTY LICENSES](https://huggingface.co/datasets/mxeval/mathqa-x/blob/main/THIRD_PARTY_LICENSES) ### Citation Information ``` @inproceedings{ athiwaratkun2023multilingual, title={Multi-lingual Evaluation of Code Generation Models}, author={Ben Athiwaratkun and Sanjay Krishna Gouda and Zijian Wang and Xiaopeng Li and Yuchen Tian and Ming Tan and Wasi Uddin Ahmad and Shiqi Wang and Qing Sun and Mingyue Shang and Sujan Kumar Gonugondla and Hantian Ding and Varun Kumar and Nathan Fulton and Arash Farahani and Siddhartha Jain and Robert Giaquinto and Haifeng Qian and Murali Krishna Ramanathan and Ramesh Nallapati and Baishakhi Ray and Parminder Bhatia and Sudipta Sengupta and Dan Roth and Bing Xiang}, booktitle={The Eleventh International Conference on Learning Representations }, year={2023}, url={https://openreview.net/forum?id=Bo7eeXm6An8} } ``` ### Contributions [skgouda@](https://github.com/sk-g) [benathi@](https://github.com/benathi)
6,285
[ [ -0.027435302734375, -0.036590576171875, 0.00962066650390625, 0.02545166015625, 0.013092041015625, 0.0091705322265625, -0.0230712890625, -0.0135498046875, -0.0007014274597167969, 0.02764892578125, -0.050079345703125, -0.053466796875, -0.0253448486328125, 0.00...
nbtpj/Movies_and_TV_meta
2023-03-15T01:34:17.000Z
[ "region:us" ]
nbtpj
null
null
0
3
2023-03-15T01:32:39
--- dataset_info: features: - name: category dtype: string - name: tech1 dtype: string - name: description dtype: string - name: fit dtype: string - name: title dtype: string - name: also_buy dtype: string - name: tech2 dtype: string - name: brand dtype: string - name: feature dtype: string - name: rank dtype: string - name: also_view dtype: string - name: main_cat dtype: string - name: similar_item dtype: string - name: date dtype: string - name: price dtype: string - name: asin dtype: string - name: imageURL dtype: string - name: imageURLHighRes dtype: string - name: details dtype: string splits: - name: train num_bytes: 292562315 num_examples: 203766 download_size: 152902943 dataset_size: 292562315 --- # Dataset Card for "Movies_and_TV_meta" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
1,019
[ [ -0.04937744140625, -0.01605224609375, 0.01049041748046875, -0.0029811859130859375, -0.0287322998046875, 0.00445556640625, 0.031280517578125, 0.0187835693359375, 0.0587158203125, 0.0474853515625, -0.06463623046875, -0.040374755859375, -0.055816650390625, -0.0...
Multimodal-Fatima/VQAv2_test
2023-05-13T21:54:43.000Z
[ "region:us" ]
Multimodal-Fatima
null
null
0
3
2023-03-17T21:59:25
--- dataset_info: features: - name: question_type dtype: string - name: multiple_choice_answer dtype: string - name: answers_original list: - name: answer dtype: string - name: answer_confidence dtype: string - name: answer_id dtype: int64 - name: id_image dtype: int64 - name: answer_type dtype: string - name: question_id dtype: int64 - name: question dtype: string - name: image dtype: image - name: id dtype: int64 - name: clip_tags_ViT_L_14 sequence: string - name: blip_caption dtype: string - name: LLM_Description_gpt3_downstream_tasks_visual_genome_ViT_L_14 sequence: string - name: DETA_detections_deta_swin_large_o365_coco_classes list: - name: attribute dtype: string - name: box sequence: float32 - name: label dtype: string - name: location dtype: string - name: ratio dtype: float32 - name: size dtype: string - name: tag dtype: string - name: Attributes_ViT_L_14_descriptors_text_davinci_003_full sequence: string - name: clip_tags_ViT_L_14_wo_openai sequence: string - name: clip_tags_ViT_L_14_with_openai sequence: string - name: clip_tags_LAION_ViT_H_14_2B_wo_openai sequence: string - name: clip_tags_LAION_ViT_H_14_2B_with_openai sequence: string - name: clip_tags_LAION_ViT_bigG_14_2B_wo_openai sequence: string - name: clip_tags_LAION_ViT_bigG_14_2B_with_openai sequence: string - name: Attributes_LAION_ViT_H_14_2B_descriptors_text_davinci_003_full sequence: string - name: Attributes_LAION_ViT_bigG_14_2B_descriptors_text_davinci_003_full sequence: string - name: clip_tags_ViT_B_16_with_openai sequence: string - name: DETA_detections_deta_swin_large_o365_coco_classes_caption_module_random list: - name: attribute dtype: string - name: box sequence: float64 - name: captions_module sequence: string - name: captions_module_filter sequence: string - name: label dtype: string - name: location dtype: string - name: ratio dtype: float64 - name: size dtype: string - name: tag dtype: string - name: answers sequence: string splits: - name: test num_bytes: 92151870512.0 num_examples: 447793 download_size: 18737258554 dataset_size: 92151870512.0 --- # Dataset Card for "VQAv2_test" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
2,596
[ [ -0.036712646484375, -0.01438140869140625, 0.00791168212890625, 0.01030731201171875, -0.0141448974609375, -0.00557708740234375, 0.0391845703125, -0.005680084228515625, 0.03631591796875, 0.0308380126953125, -0.0545654296875, -0.037261962890625, -0.028564453125, ...
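A minimal sketch for the VQAv2_test record above (streaming is an assumption, advisable given the ~18.7 GB download size listed in the card):

```python
from datasets import load_dataset

# Stream a single test sample and show its question/answer fields.
ds = load_dataset("Multimodal-Fatima/VQAv2_test", split="test", streaming=True)
sample = next(iter(ds))
print(sample["question"], "->", sample["multiple_choice_answer"])
```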
Kushtrim/Kosovo-Parliament-Transcriptions
2023-10-25T18:06:17.000Z
[ "size_categories:100K<n<1M", "source_datasets:Kuvendi i Kosovës", "language:sq", "license:cc-by-4.0", "region:us" ]
Kushtrim
null
null
2
3
2023-03-18T11:10:30
--- language: sq license: cc-by-4.0 size_categories: - 100K<n<1M source_datasets: Kuvendi i Kosovës configs: - config_name: default data_files: - split: train path: data/train-* dataset_info: features: - name: text dtype: string - name: speaker dtype: string - name: date dtype: string - name: id dtype: string - name: num_tokens dtype: int64 splits: - name: train num_bytes: 162498331 num_examples: 122694 download_size: 81817214 dataset_size: 162498331 --- # Kosovo-Parliament-Transcriptions [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/Kushtrimvisoka/Kosovo-Parliament-Transcriptions/blob/main/Kosovo_Parliament_Transcriptions.ipynb) The dataset comprises transcripts of speeches delivered by members of the Kosovo Assembly during parliamentary sessions from 2007 onwards. The goal of this repository is to provide a valuable resource for researchers and professionals interested in natural language processing or political discourse analysis. # Data source The dataset was compiled from publicly available transcripts published on the current and former official websites of the Kosovo Assembly (https://kuvendikosoves.org/). # Data Preparation The dataset was compiled by downloading PDF files and converting them to a text format using OCR. The resulting text was then cleaned to fix punctuation and spelling errors. It is important to note that, due to the complexity of the PDF-to-text conversion process, the dataset may still contain typos and other errors. As a result, the dataset is provided "as is". Additionally, it should be noted that the dataset includes speeches given in languages other than Albanian. # To do - [ ] Conduct additional quality assurance checks to identify and correct any remaining errors in the dataset. - [ ] Add a column for the language of the speech. - [ ] Add a column for the party of the speaker. # Dataset structure The dataset contains the following fields: text, speaker, date, id, num_tokens. # Usage ```python from datasets import load_dataset dataset = load_dataset('Kushtrim/Kosovo-Parliament-Transcriptions') ``` # License The dataset is licensed under the Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by/4.0/). # Citation If you use this dataset in your research, please consider citing this repository.
2,437
[ [ -0.0126495361328125, -0.0236663818359375, 0.0195770263671875, 0.0322265625, -0.037841796875, -0.0093231201171875, -0.0203399658203125, -0.0013170242309570312, 0.0273590087890625, 0.057708740234375, -0.0293121337890625, -0.03973388671875, -0.0423583984375, 0....
RyokoAI/Fandom23K
2023-03-20T19:58:46.000Z
[ "task_categories:text-classification", "task_categories:text-generation", "size_categories:10M<n<100M", "language:en", "license:cc-by-sa-3.0", "wiki", "training", "region:us" ]
RyokoAI
null
null
7
3
2023-03-19T02:52:11
--- license: cc-by-sa-3.0 language: - en tags: - wiki - training task_categories: - text-classification - text-generation pretty_name: Fandom23K Wikis size_categories: - 10M<n<100M --- # Dataset Card for Fandom23K *The BigKnow2022 dataset and its subsets are not yet complete. Not all information here may be accurate or accessible.* ## Dataset Description - **Homepage:** (TODO) https://docs.ryokoai.com/docs/training/dataset#Fandom22K - **Repository:** <https://github.com/RyokoAI/BigKnow2022> - **Paper:** N/A - **Leaderboard:** N/A - **Point of Contact:** Ronsor/undeleted <ronsor@ronsor.com> ### Dataset Summary Fandom23K is a dataset composed of 15,616,749 articles scraped from approximately 23,665 Fandom.com wikis between March 14 and March 18, 2023. It is a subset of the upcoming BigKnow2022 dataset. ### Supported Tasks and Leaderboards This dataset is primarily intended for unsupervised training of text generation models; however, it may be useful for other purposes. * text-classification ### Languages * English * Potentially other languages in much smaller quantities. ## Dataset Structure ### Data Instances ```json { "tag": "fandom.wikia2011", "text": "# Add Your Wiki's Highlights\n\nWrite the text of your article here!-_-\n\n", "title": "Add Your Wiki's Highlights" } { "tag": "fandom.wikia2011", "text": "# Add Your Wiki's Highlights!\n\nWikia wants to hear from you! What significant milestones did your wiki experience in 2011? What cool things did the community try out?\nCreate a page for the wiki you're most active on! Be sure to add it to the Entertainment, Gaming, or Lifestyle categories so it shows up in the right place!\n\n", "title": "Add Your Wiki's Highlights!" } { "tag": "fandom.wikia2011", "text": "# Assassins Creed Wiki 2011\n\nIn 2011, Assassin's Creed Wiki tested new Wikia features such as Message Wall, Chat, and New Layouts.\n\n", "title": "Assassins Creed Wiki 2011" } ``` ### Data Fields * **text**: the actual article text * **title**: the article title * **tag**: text source tag, in the following format: `fandom.<wiki name>` ### Data Splits No splitting of the data was performed. ## Dataset Creation ### Curation Rationale Fandom23K provides an up-to-date corpus containing pop culture and media information spanning a variety of interests and hobbies. Previous datasets containing such information are either part of a large and harder-to-handle whole, such as Common Crawl, do not provide enough variety, or are simply outdated. ### Source Data #### Initial Data Collection and Normalization *More information about any referenced scripts, commands, or programs used may be found in the BigKnow2022 GitHub repository.* First, a list of active Fandom wikis was gathered into a text file. Active is defined as "having at least 250 images on the wiki." This list was gathered in early January 2023, despite the actual wiki content being more recent. Second, the `scrape_fandom.py` script was used to generate and download an up-to-date dump for each of the wikis. Third, `wikiextractor` was used to process these dumps into single XML files containing each article stripped of all formatting besides links. Fourth, `dump2jsonl` was used to convert the XML files into JSONL files with an article per line. Light markdown formatting was applied, converting the HTML links to markdown-formatted links, and automatically making the article's title a header. Finally, the JSONL files were concatenated into the Fandom23K dataset. The version uploaded to this repository, however, is split into multiple files, numbered 00 through 04 inclusive. #### Who are the source language producers? The contributors of each wiki. ### Annotations #### Annotation process Wiki names and article titles were collected alongside the article text. Other than that automated process, no annotation was performed. #### Who are the annotators? There were no human annotators. ### Personal and Sensitive Information The dataset was collected from public wiki data. As a result, we do not believe it should contain any PII and did not inspect it further. ## Considerations for Using the Data ### Social Impact of Dataset This dataset is intended to be useful for anyone who wishes to train a model to generate "more entertaining" content requiring knowledge of popular culture or a particular niche. ### Discussion of Biases This dataset contains text from random Internet users and generally should not be used as an authoritative source of information. Additionally, this dataset was not filtered at all. We recommend its usage for research purposes only. ### Other Known Limitations This dataset is based on a list of active wikis from January 2023, even though the actual wiki content may be more recent. Additionally, smaller yet still active wikis may have been excluded. ## Additional Information ### Dataset Curators Ronsor Labs ### Licensing Information CC-BY-SA 3.0, except for any portions which state otherwise. ### Citation Information ``` @misc{ryokoai2023-bigknow2022, title = {BigKnow2022: Bringing Language Models Up to Speed}, author = {Ronsor}, year = {2023}, howpublished = {\url{https://github.com/RyokoAI/BigKnow2022}}, } ``` ### Contributions Thanks to @ronsor for gathering this dataset.
5,349
[ [ -0.053314208984375, -0.04345703125, 0.01239776611328125, 0.0271759033203125, -0.0114898681640625, -0.0009756088256835938, -0.03717041015625, -0.0399169921875, 0.0521240234375, 0.034576416015625, -0.0733642578125, -0.040130615234375, -0.0276031494140625, 0.04...
tbboukhari/Alpaca_french_instruct
2023-09-05T15:52:14.000Z
[ "language:fr", "region:us" ]
tbboukhari
null
null
2
3
2023-03-19T15:06:24
--- language: fr dataset_info: features: - name: instruction dtype: string - name: ' saisir' dtype: string - name: ' sortir' dtype: string - name: __index_level_0__ dtype: int64 splits: - name: train num_bytes: 23260190 num_examples: 52002 download_size: 14152821 dataset_size: 23260190 --- # Dataset Card for "Alpaca_french_instruct" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
509
[ [ -0.050048828125, -0.032440185546875, 0.0108184814453125, 0.027557373046875, -0.0117340087890625, -0.01010894775390625, 0.0177001953125, -0.019287109375, 0.0653076171875, 0.045806884765625, -0.055511474609375, -0.05780029296875, -0.05816650390625, -0.01838684...
pcuenq/face_synthetics_spiga
2023-03-20T08:53:26.000Z
[ "region:us" ]
pcuenq
null
null
8
3
2023-03-20T05:32:12
--- dataset_info: features: - name: image dtype: image - name: image_seg dtype: image - name: landmarks dtype: string - name: spiga sequence: sequence: float64 - name: spiga_seg dtype: image splits: - name: train num_bytes: 31081737215.0 num_examples: 100000 download_size: 31009656222 dataset_size: 31081737215.0 --- # Dataset Card for "face_synthetics_spiga" This is a copy of the [Microsoft FaceSynthetics dataset](https://github.com/microsoft/FaceSynthetics) with [SPIGA](https://github.com/andresprados/SPIGA) landmark annotations. For a copy of the original FaceSynthetics dataset with no extra annotations, please refer to [pcuenq/face_synthetics](https://huggingface.co/pcuenq/face_synthetics). Please refer to the original [license](LICENSE.txt), which we replicate in this repo. The SPIGA annotations were created by Hugging Face Inc. and are distributed under the MIT license. This dataset was prepared using the code below. It iterates through the dataset to perform landmark detection using SPIGA, and then to create visualizations of the features. Visualization is performed using Matplotlib to render to memory buffers. ```Python import numpy as np from datasets import load_dataset from spiga.inference.config import ModelConfig from spiga.inference.framework import SPIGAFramework dataset_name = "pcuenq/face_synthetics" faces = load_dataset(dataset_name) faces = faces["train"] # ## Obtain SPIGA features processor = SPIGAFramework(ModelConfig("300wpublic")) # We obtain the bbox from the existing landmarks in the dataset. # We could use `dlib`, but this should be faster. # Note that the `landmarks` are stored as strings. def parse_landmarks(landmarks_str): landmarks = landmarks_str.strip().split('\n') landmarks = [k.split(' ') for k in landmarks] landmarks = [(float(x), float(y)) for x, y in landmarks] return landmarks def bbox_from_landmarks(landmarks_str): landmarks = parse_landmarks(landmarks_str) landmarks_x, landmarks_y = zip(*landmarks) x_min, x_max = min(landmarks_x), max(landmarks_x) y_min, y_max = min(landmarks_y), max(landmarks_y) width = x_max - x_min height = y_max - y_min # Give it a little room; I think it works anyway x_min -= 5 y_min -= 5 width += 10 height += 10 bbox = (x_min, y_min, width, height) return bbox def spiga_process(example): image = example["image"] image = np.array(image) # BGR image = image[:, :, ::-1] bbox = bbox_from_landmarks(example["landmarks"]) features = processor.inference(image, [bbox]) landmarks = features["landmarks"][0] example["spiga"] = landmarks return example # For some reason this map doesn't work with num_proc > 1 :( # TODO: run inference on GPU faces = faces.map(spiga_process) # ## "Segmentation" # We use bezier paths to draw contours and areas. import matplotlib.pyplot as plt import matplotlib.patches as patches from matplotlib.path import Path import PIL def get_patch(landmarks, color='lime', closed=False): contour = landmarks ops = [Path.MOVETO] + [Path.LINETO]*(len(contour)-1) facecolor = (0, 0, 0, 0) # Transparent fill color, if open if closed: contour.append(contour[0]) ops.append(Path.CLOSEPOLY) facecolor = color path = Path(contour, ops) return patches.PathPatch(path, facecolor=facecolor, edgecolor=color, lw=4) # Draw to a buffer. def conditioning_from_landmarks(landmarks, size=512): # Precisely control output image size dpi = 72 fig, ax = plt.subplots(1, figsize=[size/dpi, size/dpi], tight_layout={'pad':0}) fig.set_dpi(dpi) black = np.zeros((size, size, 3)) ax.imshow(black) face_patch = get_patch(landmarks[0:17]) l_eyebrow = get_patch(landmarks[17:22], color='yellow') r_eyebrow = get_patch(landmarks[22:27], color='yellow') nose_v = get_patch(landmarks[27:31], color='orange') nose_h = get_patch(landmarks[31:36], color='orange') l_eye = get_patch(landmarks[36:42], color='magenta', closed=True) r_eye = get_patch(landmarks[42:48], color='magenta', closed=True) outer_lips = get_patch(landmarks[48:60], color='cyan', closed=True) inner_lips = get_patch(landmarks[60:68], color='blue', closed=True) ax.add_patch(face_patch) ax.add_patch(l_eyebrow) ax.add_patch(r_eyebrow) ax.add_patch(nose_v) ax.add_patch(nose_h) ax.add_patch(l_eye) ax.add_patch(r_eye) ax.add_patch(outer_lips) ax.add_patch(inner_lips) plt.axis('off') fig.canvas.draw() buffer, (width, height) = fig.canvas.print_to_buffer() assert width == height assert width == size buffer = np.frombuffer(buffer, np.uint8).reshape((height, width, 4)) buffer = buffer[:, :, 0:3] plt.close(fig) return PIL.Image.fromarray(buffer) def spiga_segmentation(example): landmarks = example["spiga"] example['spiga_seg'] = conditioning_from_landmarks(landmarks) return example faces = faces.map(spiga_segmentation, num_proc=16) faces.push_to_hub(f"{dataset_name}_spiga") ```
5,134
[ [ -0.046173095703125, -0.0491943359375, 0.034912109375, 0.025482177734375, -0.020111083984375, -0.02838134765625, -0.00930023193359375, -0.026763916015625, 0.037384033203125, 0.04803466796875, -0.05255126953125, -0.040771484375, -0.032684326171875, -0.01305389...
reginaboateng/cleaned_ebmnlp_pico
2023-03-20T14:40:48.000Z
[ "region:us" ]
reginaboateng
null
null
0
3
2023-03-20T14:40:37
--- dataset_info: features: - name: tokens sequence: string - name: chunk_tags sequence: string - name: pos_tags sequence: string - name: ner_tags sequence: class_label: names: '0': O '1': I-INT '2': I-OUT '3': I-PAR splits: - name: train num_bytes: 29122187 num_examples: 26016 - name: validation num_bytes: 1482730 num_examples: 2064 download_size: 3415345 dataset_size: 30604917 --- # Dataset Card for "cleaned_ebmnlp_pico" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
665
[ [ -0.03302001953125, -0.01580810546875, -0.0022735595703125, 0.0027904510498046875, -0.03472900390625, -0.007419586181640625, 0.0211029052734375, -0.025604248046875, 0.07061767578125, 0.042724609375, -0.0504150390625, -0.052001953125, -0.033294677734375, -0.00...
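A minimal sketch for the cleaned_ebmnlp_pico record above (assuming the standard `datasets` API): decode the PICO NER tags with the class-label names declared in the card's feature block.

```python
from datasets import load_dataset

# Map integer ner_tags back to their string names for one example.
ds = load_dataset("reginaboateng/cleaned_ebmnlp_pico", split="train")
tag_names = ds.features["ner_tags"].feature.names  # ['O', 'I-INT', 'I-OUT', 'I-PAR']
example = ds[0]
print(list(zip(example["tokens"], (tag_names[t] for t in example["ner_tags"])))[:10])
```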
LorenzH/juliet_test_suite_c_1_3
2023-03-21T14:38:12.000Z
[ "task_categories:text-classification", "size_categories:10K<n<100K", "license:cc0-1.0", "region:us" ]
LorenzH
null
null
0
3
2023-03-21T13:49:04
--- license: cc0-1.0 task_categories: - text-classification pretty_name: Juliet Test Suite 1.3 size_categories: - 10K<n<100K --- # Dataset Card for the Juliet Test Suite 1.3 ### Dataset Summary This dataset contains all test cases from NIST's [Juliet test suite](https://samate.nist.gov/SARD/test-suites/112) for the C and C++ programming languages. The dataset contains a benign and a defective implementation of each sample, which have been extracted by means of the OMITGOOD and OMITBAD preprocessor macros of the Juliet test suite. ### Supported Tasks and Leaderboards Software defect prediction, code clone detection. ### Languages The C and C++ programming languages. ## Dataset Structure ### Data Instances ### Data Fields | index | name | type | description | | --- | --- | --- | --- | | 0 | index | int | The index of each sample in the dataset. | | 1 | filename | str | The path to the test case including the file name. | | 2 | class | int | The class of the defect, i.e., the collection by CWE number from which the sample was taken. | | 3 | good | str | The code of the benign implementation. | | 4 | bad | str | The code of the defective implementation. | ### Data Splits | type | size | |------|------| | train | 80706 cases | | test | 20177 cases | ## Dataset Creation ### Curation Rationale ### Source Data https://samate.nist.gov/SARD/test-suites/112 #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information ## Considerations for Using the Data ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations The Juliet test suite is a synthetic dataset, meaning that all samples have been manually crafted. Therefore, they are not entirely representative of actual software defects found in the wild. A classifier trained on these samples may suffer from decreased predictive performance, leading to gross misclassifications. Critical software defects may therefore be overlooked when such a model is applied in a realistic environment. ## Additional Information ### Dataset Curators https://github.com/lorenz9314/ ### Licensing Information ### Citation Information ### Contributions
2,300
[ [ -0.025238037109375, -0.026123046875, 0.01514434814453125, 0.0034770965576171875, 0.01300811767578125, 0.0210723876953125, 0.015960693359375, -0.00788116455078125, 0.01226806640625, 0.052032470703125, -0.053619384765625, -0.05755615234375, -0.039642333984375, ...
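A minimal sketch for the Juliet record above (the preprocessing is an assumption, not part of the original card): flatten each (good, bad) pair into two labeled samples for binary defect prediction.

```python
from datasets import load_dataset

ds = load_dataset("LorenzH/juliet_test_suite_c_1_3", split="train")

def to_pairs(batch):
    # 0 = benign implementation, 1 = defective implementation
    return {
        "code": batch["good"] + batch["bad"],
        "label": [0] * len(batch["good"]) + [1] * len(batch["bad"]),
    }

labeled = ds.map(to_pairs, batched=True, remove_columns=ds.column_names)
```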
Deysi/split-imdb
2023-03-21T22:55:45.000Z
[ "task_categories:text-classification", "size_categories:10K<n<100K", "language:en", "sentiment analysis", "region:us" ]
Deysi
null
null
0
3
2023-03-21T18:25:05
--- dataset_info: features: - name: text dtype: string - name: label dtype: int64 - name: __index_level_0__ dtype: int64 splits: - name: train num_bytes: 46538455.6 num_examples: 35000 - name: test num_bytes: 9972526.2 num_examples: 7500 - name: valid num_bytes: 9972526.2 num_examples: 7500 download_size: 0 dataset_size: 66483508 task_categories: - text-classification language: - en tags: - sentiment analysis pretty_name: Split dataset for imdb film reviews size_categories: - 10K<n<100K --- # Dataset Card for "split-imdb"
584
[ [ -0.05670166015625, -0.002017974853515625, -0.023651123046875, 0.007740020751953125, -0.0751953125, 0.038055419921875, 0.0193328857421875, 0.017333984375, 0.058380126953125, 0.0369873046875, -0.06365966796875, -0.02386474609375, -0.04559326171875, 0.001409530...
pythainlp/thaigov-v2-corpus-22032023-oa
2023-03-22T08:35:10.000Z
[ "region:us" ]
pythainlp
null
null
0
3
2023-03-22T08:33:37
--- dataset_info: features: - name: TEXT dtype: string - name: SOURCE dtype: string - name: url struct: - name: url dtype: string splits: - name: train num_bytes: 241455880 num_examples: 30380 download_size: 81088077 dataset_size: 241455880 --- # Dataset Card for "thaigov-v2-corpus-22032023-oa" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
474
[ [ -0.03253173828125, -0.024261474609375, 0.014434814453125, 0.01457977294921875, -0.0235595703125, -0.002101898193359375, 0.006160736083984375, -0.0143890380859375, 0.0618896484375, 0.046905517578125, -0.0305023193359375, -0.046875, -0.04180908203125, -0.02122...
s-nlp/en_non_detoxified
2023-09-08T08:38:22.000Z
[ "task_categories:text-classification", "language:en", "license:openrail++", "region:us" ]
s-nlp
null
null
0
3
2023-03-24T13:06:46
--- license: openrail++ task_categories: - text-classification language: - en --- # ParaDetox: Detoxification with Parallel Data (English). Paraphrase Task Negative Results This repository contains information about the **Paraphrase Task** markup from the [English ParaDetox dataset](https://huggingface.co/datasets/s-nlp/paradetox) collection pipeline. This dataset contains the samples that were marked as *"cannot rewrite"*. The original paper ["ParaDetox: Detoxification with Parallel Data"](https://aclanthology.org/2022.acl-long.469/) was presented at the ACL 2022 main conference. ## ParaDetox Collection Pipeline The ParaDetox dataset collection was done via the [Yandex.Toloka](https://toloka.yandex.com/) crowdsourcing platform. The collection was done in three steps: * *Task 1:* **Generation of Paraphrases**: The first crowdsourcing task asks users to eliminate toxicity in a given sentence while keeping the content. * *Task 2:* **Content Preservation Check**: We show users the generated paraphrases along with their original variants and ask them to indicate if they have close meanings. * *Task 3:* **Toxicity Check**: Finally, we check if the workers succeeded in removing toxicity. Specifically, this repo contains the results of **Task 1: Generation of Paraphrases**. The overall size of the dataset is 12,059 samples. It contains the samples that annotators marked as impossible to detoxify. The reasons for this can be the following: * *non-toxic*: the text is simply non-toxic; it may carry negative sentiment, but contains no obscene or rude lexicon; * *toxic content*: the text is passive-aggressive, sarcastic, or similar, so the insult is deeply incorporated in the message. To detoxify it, the meaning would have to change dramatically. * *unclear*: the text consists only of obscene lexicon, random words, or some other combination of tokens that makes it difficult to understand the main content. Annotators could select several options. ## Citation ``` @inproceedings{logacheva-etal-2022-paradetox, title = "{P}ara{D}etox: Detoxification with Parallel Data", author = "Logacheva, Varvara and Dementieva, Daryna and Ustyantsev, Sergey and Moskovskiy, Daniil and Dale, David and Krotova, Irina and Semenov, Nikita and Panchenko, Alexander", booktitle = "Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = may, year = "2022", address = "Dublin, Ireland", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2022.acl-long.469", pages = "6804--6818", abstract = "We present a novel pipeline for the collection of parallel data for the detoxification task. We collect non-toxic paraphrases for over 10,000 English toxic sentences. We also show that this pipeline can be used to distill a large existing corpus of paraphrases to get toxic-neutral sentence pairs. We release two parallel corpora which can be used for the training of detoxification models. To the best of our knowledge, these are the first parallel datasets for this task.We describe our pipeline in detail to make it fast to set up for a new language or domain, thus contributing to faster and easier development of new parallel resources.We train several detoxification models on the collected data and compare them with several baselines and state-of-the-art unsupervised approaches. We conduct both automatic and manual evaluations. All models trained on parallel data outperform the state-of-the-art unsupervised models by a large margin. This suggests that our novel datasets can boost the performance of detoxification systems.", } ``` ## Contacts For any questions, please contact: Daryna Dementieva (dardem96@gmail.com)
3,823
[ [ -0.00452423095703125, -0.033843994140625, 0.0455322265625, 0.01450347900390625, -0.026885986328125, -0.002529144287109375, 0.0005497932434082031, -0.0074462890625, 0.0254669189453125, 0.058837890625, -0.0265655517578125, -0.06573486328125, -0.042083740234375, ...
s-nlp/ru_paradetox_content
2023-09-08T08:36:21.000Z
[ "task_categories:text-classification", "language:ru", "license:openrail++", "region:us" ]
s-nlp
null
null
0
3
2023-03-24T15:00:38
--- license: openrail++ task_categories: - text-classification language: - ru --- # ParaDetox: Detoxification with Parallel Data (Russian). Content Task Results This repository contains information about the **Content Task** markup from the [Russian ParaDetox dataset](https://huggingface.co/datasets/s-nlp/ru_paradetox) collection pipeline. ## ParaDetox Collection Pipeline The ParaDetox dataset collection was done via the [Yandex.Toloka](https://toloka.yandex.com/) crowdsourcing platform. The collection was done in three steps: * *Task 1:* **Generation of Paraphrases**: The first crowdsourcing task asks users to eliminate toxicity in a given sentence while keeping the content. * *Task 2:* **Content Preservation Check**: We show users the generated paraphrases along with their original variants and ask them to indicate if they have close meanings. * *Task 3:* **Toxicity Check**: Finally, we check if the workers succeeded in removing toxicity. Specifically, this repo contains the results of **Task 2: Content Preservation Check**. Only the samples with markup confidence >= 90 are present. One text in each pair is toxic; the other is (supposed to be) its non-toxic paraphrase. In total, the dataset contains 10,975 pairs, of which a minority (2,812 pairs) are negative examples. ## Citation ``` @inproceedings{logacheva-etal-2022-study, title = "A Study on Manual and Automatic Evaluation for Text Style Transfer: The Case of Detoxification", author = "Logacheva, Varvara and Dementieva, Daryna and Krotova, Irina and Fenogenova, Alena and Nikishina, Irina and Shavrina, Tatiana and Panchenko, Alexander", booktitle = "Proceedings of the 2nd Workshop on Human Evaluation of NLP Systems (HumEval)", month = may, year = "2022", address = "Dublin, Ireland", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2022.humeval-1.8", doi = "10.18653/v1/2022.humeval-1.8", pages = "90--101", abstract = "It is often difficult to reliably evaluate models which generate text. Among them, text style transfer is a particularly difficult to evaluate, because its success depends on a number of parameters.We conduct an evaluation of a large number of models on a detoxification task. We explore the relations between the manual and automatic metrics and find that there is only weak correlation between them, which is dependent on the type of model which generated text. Automatic metrics tend to be less reliable for better-performing models. However, our findings suggest that, ChrF and BertScore metrics can be used as a proxy for human evaluation of text detoxification to some extent.", } ``` ## Contacts For any questions, please contact: Daryna Dementieva (dardem96@gmail.com)
2,800
[ [ -0.001552581787109375, -0.038604736328125, 0.0501708984375, 0.0289306640625, -0.0226898193359375, 0.0005712509155273438, -0.0169525146484375, -0.0203704833984375, 0.006366729736328125, 0.048431396484375, -0.036712646484375, -0.05706787109375, -0.043731689453125,...
niv-al/instruct
2023-03-24T19:12:36.000Z
[ "task_categories:question-answering", "task_categories:text-generation", "task_categories:text2text-generation", "task_categories:table-question-answering", "size_categories:10M<n<100M", "language:en", "license:openrail", "region:us" ]
niv-al
null
null
9
3
2023-03-24T18:50:18
--- license: openrail task_categories: - question-answering - text-generation - text2text-generation - table-question-answering language: - en pretty_name: Instruct size_categories: - 10M<n<100M --- # Dataset Card for Instruct Based on Alpaca's instruction fine-tuning prompt format: ``` "Below is an instruction that describes a task, paired with an input that provides further context.\n" "Write a response that appropriately completes the request\n" "### Instruction:\n{instruction}\n\n### Input:\n{input}\n\n### Response:" ```
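For illustration, a minimal Python sketch that fills the template above; the column names `instruction` and `input` are assumptions, since the card does not document the schema.

```python
# Minimal sketch. The field names passed in are assumptions based on the
# template placeholders; the card does not document the actual schema.
TEMPLATE = (
    "Below is an instruction that describes a task, paired with an input that provides further context.\n"
    "Write a response that appropriately completes the request\n"
    "### Instruction:\n{instruction}\n\n### Input:\n{input}\n\n### Response:"
)

def build_prompt(instruction: str, input_text: str) -> str:
    # Fill the Alpaca-style template with one example's fields.
    return TEMPLATE.format(instruction=instruction, input=input_text)

print(build_prompt("Name three primary colors.", "N/A"))
```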
516
[ [ -0.04071044921875, -0.064208984375, -0.00540924072265625, 0.02020263671875, -0.03875732421875, -0.0251922607421875, 0.01259613037109375, -0.0028820037841796875, 0.038055419921875, 0.059295654296875, -0.07562255859375, -0.047454833984375, -0.04925537109375, -...
taesiri/gta-myths
2023-03-25T04:46:58.000Z
[ "task_categories:text-classification", "size_categories:1K<n<10K", "language:en", "license:mit", "game", "region:us" ]
taesiri
null
null
4
3
2023-03-25T01:32:44
--- dataset_info: features: - name: Myth dtype: string - name: Outcome dtype: string - name: Extra dtype: string splits: - name: validation num_bytes: 28122 num_examples: 453 download_size: 15572 dataset_size: 28122 license: mit task_categories: - text-classification language: - en tags: - game pretty_name: GTA V Myths size_categories: - 1K<n<10K --- # Dataset Card for "GTA V Myths" List of Myths in GTA V, extracted from [Caylus's Channel](https://www.youtube.com/watch?v=bKKOBbWy2sQ&ab_channel=Caylus) [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
677
[ [ -0.04876708984375, -0.03375244140625, 0.0224609375, 0.00038552284240722656, -0.015472412109375, 0.0350341796875, 0.004550933837890625, -0.0333251953125, 0.05645751953125, 0.033966064453125, -0.061737060546875, -0.03082275390625, -0.031524658203125, -0.028808...
rcds/swiss_law_area_prediction
2023-07-20T07:38:52.000Z
[ "task_categories:text-classification", "annotations_creators:machine-generated", "language_creators:expert-generated", "multilinguality:multilingual", "size_categories:100K<n<1M", "source_datasets:original", "language:de", "language:fr", "language:it", "license:cc-by-sa-4.0", "arxiv:2306.09237",...
rcds
This dataset contains court decisions for the law area prediction task.
@InProceedings{huggingface:dataset, title = {A great new dataset}, author={huggingface, Inc. }, year={2020} }
2
3
2023-03-25T10:51:36
--- license: cc-by-sa-4.0 annotations_creators: - machine-generated language: - de - fr - it language_creators: - expert-generated multilinguality: - multilingual pretty_name: Law Area Prediction size_categories: - 100K<n<1M source_datasets: - original task_categories: - text-classification --- # Dataset Card for Law Area Prediction ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary The dataset contains cases to be classified into the four main areas of law: Public, Civil, Criminal and Social. These can be further classified into sub-areas: ``` "public": ['Tax', 'Urban Planning and Environmental', 'Expropriation', 'Public Administration', 'Other Fiscal'], "civil": ['Rental and Lease', 'Employment Contract', 'Bankruptcy', 'Family', 'Competition and Antitrust', 'Intellectual Property'], "criminal": ['Substantive Criminal', 'Criminal Procedure'] ``` ### Supported Tasks and Leaderboards Law Area Prediction can be used as a text classification task. ### Languages Switzerland has four official languages, of which three (German, French and Italian) are represented in the dataset. The decisions are written by the judges and clerks in the language of the proceedings. | Language | Subset | Number of Documents | |------------|------------|--------------------| | German | **de** | 127K | | French | **fr** | 156K | | Italian | **it** | 46K | ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields - decision_id: unique identifier for the decision - facts: facts section of the decision - considerations: considerations section of the decision - law_area: label of the decision (main area of law) - law_sub_area: sub area of law of the decision - language: language of the decision - year: year of the decision - court: court of the decision - chamber: chamber of the decision - canton: canton of the decision - region: region of the decision ### Data Splits The dataset was split date-stratified: - Train: 2002-2015 - Validation: 2016-2017 - Test: 2018-2022 ## Dataset Creation ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization The original data are published by the Swiss Federal Supreme Court (https://www.bger.ch) in unprocessed formats (HTML). The documents were downloaded from the Entscheidsuche portal (https://entscheidsuche.ch) in HTML. 
#### Who are the source language producers? The decisions are written by the judges and clerks in the language of the proceedings. ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information The dataset contains publicly available court decisions from the Swiss Federal Supreme Court. Personal or sensitive information has been anonymized by the court before publication according to the following guidelines: https://www.bger.ch/home/juridiction/anonymisierungsregeln.html. ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information We release the data under CC-BY-4.0 which complies with the court licensing (https://www.bger.ch/files/live/sites/bger/files/pdf/de/urteilsveroeffentlichung_d.pdf) © Swiss Federal Supreme Court, 2002-2022 The copyright for the editorial content of this website and the consolidated texts, which is owned by the Swiss Federal Supreme Court, is licensed under the Creative Commons Attribution 4.0 International licence. This means that you can re-use the content provided you acknowledge the source and indicate any changes you have made. Source: https://www.bger.ch/files/live/sites/bger/files/pdf/de/urteilsveroeffentlichung_d.pdf ### Citation Information Please cite our [ArXiv-Preprint](https://arxiv.org/abs/2306.09237) ``` @misc{rasiah2023scale, title={SCALE: Scaling up the Complexity for Advanced Language Model Evaluation}, author={Vishvaksenan Rasiah and Ronja Stern and Veton Matoshi and Matthias Stürmer and Ilias Chalkidis and Daniel E. Ho and Joel Niklaus}, year={2023}, eprint={2306.09237}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` ### Contributions
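A minimal loading sketch, assuming the dataset exposes a default configuration with the documented `law_area` field; check the repository for the actual configuration names.

```python
from collections import Counter
from datasets import load_dataset

# Assumption: a default config with a "train" split and the documented fields.
ds = load_dataset("rcds/swiss_law_area_prediction", split="train")

# Distribution over the four main areas of law.
print(Counter(ds["law_area"]).most_common())
```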
5,584
[ [ -0.02349853515625, -0.046875, 0.03448486328125, 0.0243072509765625, -0.032440185546875, -0.021270751953125, -0.01898193359375, -0.0177459716796875, 0.0211944580078125, 0.043670654296875, -0.0411376953125, -0.06573486328125, -0.05096435546875, -0.002582550048...
JoelIzDaBest66/Talk
2023-03-26T13:16:44.000Z
[ "region:us" ]
JoelIzDaBest66
null
null
0
3
2023-03-26T13:07:52
"you", "ai" "Hello!", "Hi there! What's your name?" "My name is Carl.", "Wow! That's a pretty cool name! I don't have a name, but you can call me AI." "How are you?", "I'm doing just well!" "Thank you!", "You're welcome." "What is 1 + 1?", "1 + 1 makes 2." "I have a cat!", "I don't have one, since I'm a robot." "Kitten fight!", "No wait! I'm allergic to adorableness!" "Who parked their car on my sandwich?", "I did!"
419
[ [ -0.04498291015625, -0.07391357421875, 0.0416259765625, 0.032440185546875, -0.0214996337890625, 0.0146331787109375, 0.00901031494140625, -0.0251007080078125, 0.03192138671875, 0.01149749755859375, -0.0443115234375, -0.03729248046875, -0.0258941650390625, 0.02...
arattinger/noto-emoji-captions
2023-03-26T14:21:59.000Z
[ "annotations_creators:machine-generated", "multilinguality:monolingual", "language:en", "region:us" ]
arattinger
null
null
0
3
2023-03-26T13:25:46
--- dataset_info: features: - name: image dtype: image - name: text dtype: string splits: - name: train num_bytes: 77868555.5 num_examples: 3468 download_size: 77424588 dataset_size: 77868555.5 annotations_creators: - machine-generated language: - en multilinguality: - monolingual pretty_name: 'Noto Emoji Captions' --- # Dataset Card for Noto Emoji Captions BLIP-generated captions for Noto emojis. The dataset was captioned with the [pre-trained BLIP model](https://github.com/salesforce/BLIP). Each entry has an `image` and a `text` key, with images sized 512x512.
609
[ [ -0.0210113525390625, -0.0208587646484375, 0.004302978515625, 0.0216827392578125, -0.044158935546875, 0.022369384765625, -0.0026702880859375, -0.02569580078125, 0.047637939453125, 0.0621337890625, -0.06597900390625, -0.0301666259765625, -0.0401611328125, 0.02...
cyanic-selkie/wikianc-hr
2023-06-01T13:58:07.000Z
[ "task_categories:token-classification", "size_categories:1M<n<10M", "language:hr", "license:cc-by-sa-3.0", "wikidata", "wikipedia", "wikification", "region:us" ]
cyanic-selkie
null
null
1
3
2023-03-27T08:30:50
--- license: cc-by-sa-3.0 task_categories: - token-classification language: - hr tags: - wikidata - wikipedia - wikification pretty_name: WikiAnc HR size_categories: - 1M<n<10M --- # Dataset Card for WikiAnc HR ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks](#supported-tasks) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Additional Information](#additional-information) - [Licensing Information](#licensing-information) ## Dataset Description - **Repository:** [WikiAnc repository](https://github.com/cyanic-selkie/wikianc) ### Dataset Summary The WikiAnc HR dataset is automatically generated from the Wikipedia (hr) and Wikidata dumps (March 1, 2023). The code for generating the dataset can be found [here](https://github.com/cyanic-selkie/wikianc). ### Supported Tasks - `wikification`: The dataset can be used to train a model for Wikification. ### Languages The text in the dataset is in Croatian. The associated BCP-47 code is `hr`. You can find the English version [here](https://huggingface.co/datasets/cyanic-selkie/wikianc-en). ## Dataset Structure ### Data Instances A typical data point represents a paragraph in a Wikipedia article. The `paragraph_text` field contains the original text in an NFC normalized, UTF-8 encoded string. The `paragraph_anchors` field contains a list of anchors, each represented by a struct with the inclusive starting UTF-8 code point `start` field, exclusive ending UTF-8 code point `end` field, a nullable `qid` field, a nullable `pageid` field, and an NFC normalized, UTF-8 encoded `title` (Wikipedia) field. Additionally, each paragraph has `article_title`, `article_pageid`, and (nullable) `article_qid` fields referring to the article the paragraph came from. There is also a nullable, NFC normalized, UTF-8 encoded `section_heading` field, and an integer `section_level` field referring to the heading (if it exists) of the article section, and the level in the section hierarchy that the paragraph came from. The `qid` field refers to Wikidata's QID identifiers, while the `pageid` and `title` fields refer to Wikipedia's pageID and title identifiers (there is a one-to-one mapping between pageIDs and titles). **NOTE:** An anchor will always have a `title`, but that doesn't mean it has to have a `pageid`. This is because Wikipedia allows defining anchors to nonexistent articles. An example from the WikiAnc HR test set looks as follows: ``` { "uuid": "8a9569ea-a398-4d14-8bce-76c263a8c0ac", "article_title": "Špiro_Dmitrović", "article_pageid": 70957, "article_qid": 16116278, "section_heading": null, "section_level": 0, "paragraph_text": "Špiro Dmitrović (Benkovac, 1803. – Zagreb, 6. veljače 1868.) hrvatski časnik i politički borac u doba ilirizma.", "paragraph_anchors": [ { "start": 17, "end": 25, "qid": 397443, "pageid": 14426, "title": "Benkovac" }, { "start": 27, "end": 32, "qid": 6887, "pageid": 1876, "title": "1803." }, { "start": 35, "end": 41, "qid": 1435, "pageid": 5903, "title": "Zagreb" }, { "start": 43, "end": 53, "qid": 2320, "pageid": 496, "title": "6._veljače" }, { "start": 54, "end": 59, "qid": 7717, "pageid": 1811, "title": "1868." 
}, { "start": 102, "end": 110, "qid": 680821, "pageid": 54622, "title": "Ilirizam" } ] } ``` ### Data Fields - `uuid`: a UTF-8 encoded string representing a v4 UUID that uniquely identifies the example - `article_title`: an NFC normalized, UTF-8 encoded Wikipedia title of the article; spaces are replaced with underscores - `article_pageid`: an integer representing the Wikipedia pageID of the article - `article_qid`: an integer representing the Wikidata QID this article refers to; it can be null if the entity didn't exist in Wikidata at the time of the creation of the original dataset - `section_heading`: a nullable, NFC normalized, UTF-8 encoded string representing the section heading - `section_level`: an integer representing the level of the section in the section hierarchy - `paragraph_text`: an NFC normalized, UTF-8 encoded string representing the paragraph - `paragraph_anchors`: a list of structs representing anchors, each anchor has: - `start`: an integer representing the inclusive starting UTF-8 code point of the anchors - `end`: an integer representing the exclusive ending UTF-8 code point of the anchor - `qid`: a nullable integer representing the Wikidata QID this anchor refers to; it can be null if the entity didn't exist in Wikidata at the time of the creation of the original dataset - `pageid`: a nullable integer representing the Wikipedia pageID of the anchor; it can be null if the article didn't exist in Wikipedia at the time of the creation of the original dataset - `title`: an NFC normalized, UTF-8 encoded string representing the Wikipedia title of the anchor; spaces are replaced with underscores; can refer to a nonexistent Wikipedia article ### Data Splits The data is split into training, validation and test sets; paragraphs belonging to the same article aren't necessarily in the same split. The final split sizes are as follows: | | Train | Validation | Test | | :----- | :------: | :-----: | :----: | | WikiAnc HR - articles | 192,653 | 116,375 | 116,638 | | WikiAnc HR - paragraphs | 2,346,651 | 292,590 | 293,557 | | WikiAnc HR - anchors | 8,368,928 | 1,039,851 | 1,044,828 | | WikiAnc HR - anchors with QIDs | 7,160,367 | 891,959 | 896,414 | | WikiAnc HR - anchors with pageIDs | 7,179,116 | 894,313 | 898,692 | **NOTE:** The number of articles in the table above refers to the number of articles that have at least one paragraph belonging to the article appear in the split. ## Additional Information ### Licensing Information The WikiAnc HR dataset is given under the [Creative Commons Attribution 4.0 International](https://creativecommons.org/licenses/by/4.0/) license.
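To illustrate how the documented offsets behave, a short sketch that recovers anchor mentions from the example instance above; Python string slicing operates on code points, which matches the offsets (a sketch, not part of the dataset tooling).

```python
paragraph = ("Špiro Dmitrović (Benkovac, 1803. – Zagreb, 6. veljače 1868.) "
             "hrvatski časnik i politički borac u doba ilirizma.")
anchors = [
    {"start": 17, "end": 25, "title": "Benkovac"},
    {"start": 35, "end": 41, "title": "Zagreb"},
]

for a in anchors:
    # Inclusive start, exclusive end, counted in code points.
    print(paragraph[a["start"]:a["end"]], "->", a["title"])
```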
6,415
[ [ -0.03912353515625, -0.038116455078125, 0.0117950439453125, 0.01151275634765625, -0.017822265625, -0.0239105224609375, -0.01119232177734375, -0.01522064208984375, 0.0237579345703125, 0.01450347900390625, -0.056915283203125, -0.07513427734375, -0.0190277099609375,...
lhoestq/test-image
2023-03-27T16:34:28.000Z
[ "region:us" ]
lhoestq
null
null
0
3
2023-03-27T16:34:05
--- dataset_info: features: - name: image dtype: image splits: - name: train num_bytes: 173136.0 num_examples: 1 download_size: 174237 dataset_size: 173136.0 --- # Dataset Card for "test-image" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
350
[ [ -0.05108642578125, -0.0224609375, 0.009765625, 0.01026153564453125, -0.0238800048828125, -0.00457763671875, 0.020843505859375, -0.0159759521484375, 0.055572509765625, 0.0242156982421875, -0.05023193359375, -0.050018310546875, -0.042388916015625, -0.016235351...
lhoestq/test-image-list
2023-03-27T16:35:34.000Z
[ "region:us" ]
lhoestq
null
null
0
3
2023-03-27T16:34:58
--- dataset_info: features: - name: image list: image splits: - name: train num_bytes: 346275.0 num_examples: 1 download_size: 174383 dataset_size: 346275.0 --- # Dataset Card for "test-image-list" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
354
[ [ -0.05792236328125, -0.0167694091796875, -0.0018110275268554688, 0.001956939697265625, -0.02227783203125, -0.00698089599609375, 0.01593017578125, -0.01407623291015625, 0.050567626953125, 0.028533935546875, -0.049468994140625, -0.04705810546875, -0.043121337890625...
koutch/stackoverflow_question_types
2023-04-10T14:45:23.000Z
[ "task_categories:text-classification", "size_categories:1K<n<10K", "language:en", "license:cc", "code", "arxiv:1803.09371", "region:us" ]
koutch
null
null
0
3
2023-03-30T08:44:05
--- dataset_info: features: - name: question_id dtype: int64 - name: title dtype: string - name: question_body dtype: string - name: question_type dtype: string - name: question_date dtype: string splits: - name: train num_bytes: 3433758 num_examples: 3449 - name: test num_bytes: 12055 num_examples: 14 download_size: 0 dataset_size: 3445813 license: cc task_categories: - text-classification language: - en tags: - code pretty_name: staqt size_categories: - 1K<n<10K --- # Dataset Card for "stackoverflow_question_types" ## NOTE: the dataset is still currently under annotation ## Dataset Description Recent research has looked into leveraging data available from Stack Overflow (SO) to train large language models for programming-related tasks. However, users can ask a wide range of questions on Stack Overflow. The "stackoverflow question types" dataset consists of manually annotated questions posted on SO, each with an associated type. Following a previous [study](https://ieeexplore.ieee.org/document/6405249), each question was annotated with a type capturing the main concern of the user who posted the question. The questions were annotated with the following types: * *Need to know*: Questions regarding the possibility or availability of (doing) something. These questions normally show a lack of knowledge or uncertainty about some aspect of the technology (e.g. the presence of a feature in an API or a language). * *How to do it*: Providing a scenario and asking how to implement it (sometimes with a given technology or API). * *Debug/corrective*: Dealing with problems in the code under development, such as runtime errors and unexpected behaviour. * *Seeking different solutions*: The questioner has working code yet seeks a different approach to doing the job. * *Conceptual*: The question seeks to understand some aspect of programming (with or without using code examples). * *Other*: a question related to another aspect of programming, or not related to programming at all. ### Remarks For this dataset, we are mainly interested in questions related to *programming*. For instance, in [this question](https://stackoverflow.com/questions/51142399/no-acceptable-c-compiler-found-in-path-installing-python-and-gcc), the user is "trying to install Python-3.6.5 on a machine that does not have any package manager installed" and is facing issues. Because it is not related to programming itself, we would classify it as "other" and not "debugging". Moreover, we note the following conceptual distinctions between the categories: - Need to know: the user asks "is it possible to do x" - How to do it: the user wants to do "x", knows it's possible, but has no clear idea or solution/doesn't know how to do it -> wants any solution for solving "x". - Debug: the user wants to do "x", and has a clear idea/solution "y", but it is not working, and is seeking a correction to "y". - Seeking-different-solution: the user wants to do "x", and has already found a working solution "y", but is seeking an alternative "z". Sometimes it is hard to discern a user's true intention; the line separating the categories can be thin and subject to interpretation. Naturally, some questions may have multiple concerns (i.e. could correspond to multiple categories). However, this dataset mainly contains questions to which we could assign a single clear category. Currently, all annotated questions are a subset of the [stackoverflow_python](https://huggingface.co/datasets/koutch/stackoverflow_python) dataset. 
### Languages The currently annotated questions concern posts with the *python* tag. The questions are written in *English*. ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields - question_id: the unique id of the post - title: the title of the question - question_body: the (HTML) content of the question - question_type: the assigned category/type/label, one of "needtoknow", "howto", "debug", "seeking", "conceptual", or "other" - question_date: the date the question was posted ### Data Splits [More Information Needed] ## Dataset Creation ### Annotations #### Annotation process Previous research looked into mining natural language-code pairs from Stack Overflow. Two notable works yielded the [StaQC](https://arxiv.org/abs/1803.09371) and [CoNaLa](https://arxiv.org/abs/1805.08949) datasets. Part of this dataset reuses a subset of the manual annotations provided by the authors of those papers (available at [staqc](https://huggingface.co/datasets/koutch/staqc) and [conala](https://huggingface.co/datasets/neulab/conala)); those questions were annotated as belonging to the "how to do it" category. To ease the annotation procedure, we used the [Argilla platform](https://docs.argilla.io/en/latest/index.html) and multiple iterations of [few-shot training with a SetFit model](https://docs.argilla.io/en/latest/tutorials/notebooks/labelling-textclassification-setfit-zeroshot.html#%F0%9F%A6%BE-Train-a-few-shot-SetFit-model). ## Considerations for Using the Data ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed]
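A minimal sketch for inspecting the label distribution, assuming the dataset loads with its default configuration and the documented `question_type` field.

```python
from collections import Counter
from datasets import load_dataset

ds = load_dataset("koutch/stackoverflow_question_types", split="train")
print(Counter(ds["question_type"]))  # e.g. how many "howto" vs "debug" questions
```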
5,171
[ [ -0.054046630859375, -0.076416015625, 0.01174163818359375, 0.01343536376953125, -0.01125335693359375, -0.0006389617919921875, -0.01416778564453125, -0.040374755859375, 0.0301513671875, 0.0455322265625, -0.04730224609375, -0.038665771484375, -0.034942626953125, ...
Francesco/aquarium-qlnqy
2023-03-30T09:16:41.000Z
[ "task_categories:object-detection", "annotations_creators:crowdsourced", "language_creators:found", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:original", "language:en", "license:cc", "rf100", "region:us" ]
Francesco
null
null
1
3
2023-03-30T09:16:16
--- dataset_info: features: - name: image_id dtype: int64 - name: image dtype: image - name: width dtype: int32 - name: height dtype: int32 - name: objects sequence: - name: id dtype: int64 - name: area dtype: int64 - name: bbox sequence: float32 length: 4 - name: category dtype: class_label: names: '0': aquarium '1': fish '2': jellyfish '3': penguin '4': puffin '5': shark '6': starfish '7': stingray annotations_creators: - crowdsourced language_creators: - found language: - en license: - cc multilinguality: - monolingual size_categories: - 1K<n<10K source_datasets: - original task_categories: - object-detection task_ids: [] pretty_name: aquarium-qlnqy tags: - rf100 --- # Dataset Card for aquarium-qlnqy **The original COCO dataset is stored at `dataset.tar.gz`** ## Dataset Description - **Homepage:** https://universe.roboflow.com/object-detection/aquarium-qlnqy - **Point of Contact:** francesco.zuppichini@gmail.com ### Dataset Summary aquarium-qlnqy ### Supported Tasks and Leaderboards - `object-detection`: The dataset can be used to train a model for Object Detection. ### Languages English ## Dataset Structure ### Data Instances A data point comprises an image and its object annotations. ``` { 'image_id': 15, 'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=640x640 at 0x2373B065C18>, 'width': 964043, 'height': 640, 'objects': { 'id': [114, 115, 116, 117], 'area': [3796, 1596, 152768, 81002], 'bbox': [ [302.0, 109.0, 73.0, 52.0], [810.0, 100.0, 57.0, 28.0], [160.0, 31.0, 248.0, 616.0], [741.0, 68.0, 202.0, 401.0] ], 'category': [4, 4, 0, 0] } } ``` ### Data Fields - `image_id`: the image id - `image`: `PIL.Image.Image` object containing the image. Note that when accessing the image column: `dataset[0]["image"]` the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. Thus it is important to first query the sample index before the `"image"` column, *i.e.* `dataset[0]["image"]` should **always** be preferred over `dataset["image"][0]` - `width`: the image width - `height`: the image height - `objects`: a dictionary containing bounding box metadata for the objects present on the image - `id`: the annotation id - `area`: the area of the bounding box - `bbox`: the object's bounding box (in the [coco](https://albumentations.ai/docs/getting_started/bounding_boxes_augmentation/#coco) format) - `category`: the object's category. #### Who are the annotators? Annotators are Roboflow users ## Additional Information ### Licensing Information See original homepage https://universe.roboflow.com/object-detection/aquarium-qlnqy ### Citation Information ``` @misc{ aquarium-qlnqy, title = { aquarium qlnqy Dataset }, type = { Open Source Dataset }, author = { Roboflow 100 }, howpublished = { \url{ https://universe.roboflow.com/object-detection/aquarium-qlnqy } }, url = { https://universe.roboflow.com/object-detection/aquarium-qlnqy }, journal = { Roboflow Universe }, publisher = { Roboflow }, year = { 2022 }, month = { nov }, note = { visited on 2023-03-29 }, } ``` ### Contributions Thanks to [@mariosasko](https://github.com/mariosasko) for adding this dataset.
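Since `bbox` uses the COCO `[x_min, y_min, width, height]` convention, here is a small sketch for converting to corner coordinates, a common preprocessing step (not part of the dataset tooling).

```python
def coco_to_corners(bbox):
    # COCO [x_min, y_min, width, height] -> [x_min, y_min, x_max, y_max]
    x, y, w, h = bbox
    return [x, y, x + w, y + h]

# First box from the example instance above:
print(coco_to_corners([302.0, 109.0, 73.0, 52.0]))  # [302.0, 109.0, 375.0, 161.0]
```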
3,502
[ [ -0.047821044921875, -0.03240966796875, 0.0141143798828125, -0.01523590087890625, -0.033447265625, -0.01519012451171875, 0.0020084381103515625, -0.0343017578125, 0.01739501953125, 0.031402587890625, -0.048736572265625, -0.0694580078125, -0.03009033203125, 0.0...
AlekseyKorshuk/soda_input_output-clean
2023-03-31T20:31:56.000Z
[ "region:us" ]
AlekseyKorshuk
null
null
0
3
2023-03-31T20:05:40
--- dataset_info: features: - name: input_text dtype: string - name: output_text dtype: string splits: - name: train num_bytes: 842581512.2664871 num_examples: 940754 download_size: 495782858 dataset_size: 842581512.2664871 --- # Dataset Card for "soda_input_output-clean" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
435
[ [ -0.0225830078125, -0.0217742919921875, 0.0080108642578125, 0.004840850830078125, -0.00713348388671875, 0.00469970703125, 0.00237274169921875, 0.0093231201171875, 0.0511474609375, 0.031982421875, -0.05767822265625, -0.044830322265625, -0.032012939453125, -0.0...
rcds/swiss_criticality_prediction
2023-07-20T07:39:07.000Z
[ "task_categories:text-classification", "annotations_creators:machine-generated", "language_creators:expert-generated", "multilinguality:multilingual", "size_categories:100K<n<1M", "source_datasets:original", "language:de", "language:fr", "language:it", "license:cc-by-sa-4.0", "arxiv:2306.09237",...
rcds
This dataset contains Swiss federal court decisions for the legal criticality prediction task.
@InProceedings{huggingface:dataset, title = {A great new dataset}, author={huggingface, Inc. }, year={2020} }
0
3
2023-03-31T21:21:30
--- annotations_creators: - machine-generated language: - de - fr - it language_creators: - expert-generated license: - cc-by-sa-4.0 multilinguality: - multilingual pretty_name: Legal Criticality Prediction size_categories: - 100K<n<1M source_datasets: - original tags: [] task_categories: - text-classification --- # Dataset Card for Criticality Prediction ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary Legal Criticality Prediction (LCP) is a multilingual, diachronic dataset of 139K Swiss Federal Supreme Court (FSCS) cases annotated with two criticality labels. The bge_label is a binary label (critical, non-critical), while the citation_label has 5 classes (critical-1, critical-2, critical-3, critical-4, non-critical). The critical classes of the citation_label are distinct subsets of the critical class of the bge_label. This dataset creates a challenging text classification task. We also provide additional metadata, such as the publication year, the law area and the canton of origin per case, to promote robustness and fairness studies on the critical area of legal NLP. ### Supported Tasks and Leaderboards LCP can be used as a text classification task. ### Languages Switzerland has four official languages, of which three (German, French and Italian) are represented in the dataset. The decisions are written by the judges and clerks in the language of the proceedings. German (91k), French (33k), Italian (15k) ## Dataset Structure ``` { "decision_id": "008d8a52-f0ea-4820-a18c-d06066dbb407", "language": "fr", "year": "2018", "chamber": "CH_BGer_004", "region": "Federation", "origin_chamber": "338.0", "origin_court": "127.0", "origin_canton": "24.0", "law_area": "civil_law", "law_sub_area": null, "bge_label": "critical", "citation_label": "critical-1", "facts": "Faits : A. A.a. Le 17 août 2007, C.X._, née le 14 février 1944 et domiciliée...", "considerations": "Considérant en droit : 1. Interjeté en temps utile (art. 100 al. 1 LTF) par les défendeurs qui ont succombé dans leurs conclusions (art. 76 LTF) contre une décision...", "rulings": "Par ces motifs, le Tribunal fédéral prononce : 1. Le recours est rejeté. 2. 
Les frais judiciaires, arrêtés à 10'000 fr., sont mis solidairement à la charge des recourants...", } ``` ### Data Fields ``` decision_id: (str) a unique identifier for the document language: (str) one of (de, fr, it) year: (int) the publication year chamber: (str) the chamber of the case region: (str) the region of the case origin_chamber: (str) the chamber of the origin case origin_court: (str) the court of the origin case origin_canton: (str) the canton of the origin case law_area: (str) the law area of the case law_sub_area: (str) the law sub area of the case bge_label: (str) critical or non-critical citation_label: (str) critical-1, critical-2, critical-3, critical-4, non-critical facts: (str) the facts of the case considerations: (str) the considerations of the case rulings: (str) the rulings of the case ``` ### Data Instances [More Information Needed] ### Data Splits The dataset was split date-stratified: - Train: 2002-2015 - Validation: 2016-2017 - Test: 2018-2022 | Language | Subset | Number of Documents (Training/Validation/Test) | |------------|------------|--------------------------------------------| | German | **de** | 81'264 (56592 / 19601 / 5071) | | French | **fr** | 49'354 (29263 / 11117 / 8974) | | Italian | **it** | 7913 (5220 / 1901 / 792) | ## Dataset Creation ### Curation Rationale The dataset was created by Stern (2023). ### Source Data #### Initial Data Collection and Normalization The original data are published by the Swiss Federal Supreme Court (https://www.bger.ch) in unprocessed formats (HTML). The documents were downloaded from the Entscheidsuche portal (https://entscheidsuche.ch) in HTML. #### Who are the source language producers? The decisions are written by the judges and clerks in the language of the proceedings. ### Annotations #### Annotation process bge_label: 1. All bger_references in the bge header were extracted (for bge, see rcds/swiss_rulings). 2. bger file names are compared with the extracted references. citation_label: 1. Count all citations for all bger cases and weight the citations. 2. Divide the cited cases into four different classes, depending on the number of citations. #### Who are the annotators? Stern processed the data and introduced the bge and citation labels. Metadata is published by the Swiss Federal Supreme Court (https://www.bger.ch). ### Personal and Sensitive Information The dataset contains publicly available court decisions from the Swiss Federal Supreme Court. Personal or sensitive information has been anonymized by the court before publication according to the following guidelines: https://www.bger.ch/home/juridiction/anonymisierungsregeln.html. ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information We release the data under CC-BY-4.0, which complies with the court licensing (https://www.bger.ch/files/live/sites/bger/files/pdf/de/urteilsveroeffentlichung_d.pdf) © Swiss Federal Supreme Court, 2002-2022 The copyright for the editorial content of this website and the consolidated texts, which is owned by the Swiss Federal Supreme Court, is licensed under the Creative Commons Attribution 4.0 International licence. This means that you can re-use the content provided you acknowledge the source and indicate any changes you have made. 
Source: https://www.bger.ch/files/live/sites/bger/files/pdf/de/urteilsveroeffentlichung_d.pdf ### Citation Information Please cite our [ArXiv-Preprint](https://arxiv.org/abs/2306.09237) ``` @misc{rasiah2023scale, title={SCALE: Scaling up the Complexity for Advanced Language Model Evaluation}, author={Vishvaksenan Rasiah and Ronja Stern and Veton Matoshi and Matthias Stürmer and Ilias Chalkidis and Daniel E. Ho and Joel Niklaus}, year={2023}, eprint={2306.09237}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` ### Contributions Thanks to [@Stern5497](https://github.com/stern5497) for adding this dataset.
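A minimal sketch mapping the documented `citation_label` classes to integer ids for a classifier head; the class names come from the card, while the ordering is an arbitrary choice.

```python
# Class names as documented in the card; the ordinal order is our own choice.
CITATION_CLASSES = ["non-critical", "critical-1", "critical-2", "critical-3", "critical-4"]
LABEL2ID = {name: i for i, name in enumerate(CITATION_CLASSES)}

def encode(example):
    # Assumes the documented `citation_label` string field.
    example["citation_label_id"] = LABEL2ID[example["citation_label"]]
    return example
```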
7,603
[ [ -0.02264404296875, -0.043243408203125, 0.03448486328125, 0.019012451171875, -0.024017333984375, -0.01320648193359375, -0.0234375, -0.0244903564453125, 0.0090179443359375, 0.039459228515625, -0.033905029296875, -0.06793212890625, -0.05584716796875, 0.00762557...
azcorpus/azcorpus_v0
2023-09-20T10:24:11.000Z
[ "license:openrail", "region:us" ]
azcorpus
null
null
14
3
2023-04-01T13:37:10
--- extra_gated_prompt: "You agree to not use the dataset to conduct experiments that cause harm to human subjects." extra_gated_fields: Name and Surname: text Email: text Company: text Purpose of Use: text I agree to use this dataset for non-commercial use ONLY: checkbox license: openrail --- ![](https://user-images.githubusercontent.com/31247506/229346998-1e08344b-26fc-4978-89f7-0ecba076fe25.png) # azcorpus - The largest open-source NLP corpus for Azerbaijani (1.9M documents, ~ 18M sentences) __Due to ongoing maintenance activities, only a portion of our corpus is currently available for access.__ In recent years, deep learning models have been widely used in NLP, yielding excellent results. However, most NLP research has focused on high-resource languages such as English. There is a significant gap in NLP research for low-resource languages, Azerbaijani being no exception. The availability of adequate corpora is still limited for most languages, especially less-resourced ones such as Azerbaijani. Therefore, this study aims to contribute to the NLP research community by building the largest NLP corpus for the Azerbaijani language. ## Corpus Summary “azcorpus”, built for text generation purposes, contains a total of 1.9 million documents drawn from a variety of sources. The corpus is designed to provide a broad range of linguistic data for natural language processing and is organized by genre and topic, with texts covering a range of subjects including politics, economics, science, culture, sport, history, society and more. Texts were selected from a variety of sources including newspapers, magazines, academic journals, Wikipedia articles and books. The corpus includes both contemporary and historical texts, providing a rich linguistic and cultural context for natural language processing applications. ___ ## Corpus structure ### Data fields - id: Document id - text: Newline-separated content - source: Document source - reliability: Subjective cleaning evaluation rate - license: Document license ### Data Splits This corpus has 3 sources (az_books, az_wiki, and az_news) and 1,876,492 cleaned documents. | Source name | Number of Instances | Size (GB) | | ------------- | --------------------|:----------------------| | az_books | 1,540,732 | 19.5 | | az_wiki | 98,882 | 0.9 | | az_news | 236,878 | 3.8 | ___ ## Methodology The first step in building "azcorpus" was to collect text data from various sources. The news websites were selected based on their popularity and the diversity of topics covered. Additionally, a collection of ebooks in Azerbaijani was obtained from various online sources. We have expanded our collection to encompass not only fictional literature, but also scholarly works in fields such as physics and chemistry. Source-specific cleaning techniques were applied separately to ensure consistency and accuracy in the corpus. Further details on the methodology will be provided in our forthcoming academic paper. To ensure the ethical use of the corpus, we only collected publicly available data, and we did not collect any personal or sensitive information. We also ensured that the corpus was used for research purposes only and not for commercial gain. In accordance with legal considerations, it is not within our current plans to divulge sources at this time. 
___ ## Corpus Usage To obtain comprehensive guidance on how to use "azcorpus", please refer to the detailed usage instructions provided in this [notebook](https://github.com/azcorpus/azcorpus_v0/blob/main/azcorpus_v0.ipynb). ```python corpus = AzCorpus(access_token = "your_token") # To obtain a corpus in the raw JSON format corpus.generate_samples() ``` Downloading the entire corpus takes approximately 25 minutes to 2 hours, depending on the speed of your internet connection. We are currently refining the download script to improve its efficiency. ___ ## Considerations for Using the Corpus #### Social Impact Our work has the potential to contribute to the community by providing a valuable resource for the development of new text generation tools in Azerbaijani. "azcorpus" demonstrates the importance of building large NLP corpora for under-resourced languages, and highlights the social impact of such resources. By making this corpus available to the wider community, we hope to stimulate further research and development in the field of Azerbaijani text generation, and contribute to the broader goal of promoting linguistic diversity and cultural heritage. Future studies could explore the potential community impact of our work. #### Biases and Limitations Addressing potential bias in machine learning corpora is a common concern in research. In this study, we acknowledge that our dataset may be subject to bias, and to mitigate this issue we employed several techniques. However, we recognize that our approach may still have limitations. So, it is important to exercise caution with models trained on a version of "azcorpus" that has not been adequately filtered, as this may have an impact on the resulting models. In particular, it is crucial to be mindful of any biases that may be present in "azcorpus_v0". Future work could further investigate these issues and explore additional methods to address bias in the corpus. ___ ## Additional Information #### Corpus authors The corpus was put together by [Huseyn Kishiyev](https://www.linkedin.com/in/huseynkishiyev/), [Jafar Isbarov](https://www.linkedin.com/in/jafar-isbarov/), [Kanan Suleymanli](https://www.linkedin.com/in/kanan-suleyman/), [Khazar Heydarli](https://www.linkedin.com/in/xezer-heyderli/), [Leyla Eminova](https://www.linkedin.com/in/leyla-eminova/) and [Nijat Zeynalov](https://www.linkedin.com/in/nijat-zeynalov-064163142/). The authors' names have been arranged in alphabetical order. All authors have equal rights and contributed equally to this work. The authors declare no conflict of interest. There are no funding sponsors, and no one other than the authors had a role in the design of the work; in the collection, analysis, or interpretation of data; in the writing of the manuscript; or in the decision to publish the corpus. ___
6,454
[ [ -0.0290679931640625, -0.03350830078125, 0.01021575927734375, 0.0163116455078125, -0.017913818359375, 0.006183624267578125, -0.0259246826171875, -0.039886474609375, 0.01113128662109375, 0.049407958984375, -0.043609619140625, -0.06488037109375, -0.044891357421875,...
RyokoAI/ScribbleHub17K
2023-04-03T23:21:16.000Z
[ "task_categories:text-classification", "task_categories:text-generation", "size_categories:100K<n<1M", "language:en", "license:apache-2.0", "novel", "training", "story", "region:us" ]
RyokoAI
null
null
2
3
2023-04-01T23:34:11
--- license: apache-2.0 language: - en tags: - novel - training - story task_categories: - text-classification - text-generation pretty_name: ScribbleHub17K size_categories: - 100K<n<1M --- # Dataset Card for ScribbleHub17K *The BigKnow2022 dataset and its subsets are not yet complete. Not all information here may be accurate or accessible.* ## Dataset Description - **Homepage:** (TODO) - **Repository:** <https://github.com/RyokoAI/BigKnow2022> - **Paper:** N/A - **Leaderboard:** N/A - **Point of Contact:** Ronsor/undeleted <ronsor@ronsor.com> ### Dataset Summary ScribbleHub17K is a dataset consisting of text from over 373,000 chapters across approximately 17,500 series posted on the original-fiction sharing site [Scribble Hub](https://scribblehub.com). ### Supported Tasks and Leaderboards This dataset is primarily intended for unsupervised training of text generation models; however, it may be useful for other purposes. * text-classification * text-generation ### Languages * English ## Dataset Structure ### Data Instances ```json { "text": " \n2082 Planet Earth the Fracture War, after a sudden fracture in our dimension unidentified beings with advance technology and u...", "meta": { "subset": "scribblehub", "series": "3811", "id": "3812", "q": 0.91, "title": "The First - Prologue- The Fracture War", "author": "RobotLove", "chapters": 1, "rating": 5, "rating_ct": 1, "genre": [ "Action", "Martial Arts", "Romance" ], "tags": [ "Kingdom Building", "Loyal Subordinates", "Male Protagonist", "Organized Crime", "Scheming" ] } } { "text": " For anyone that may see this, thanks for reading. I'm just here to see if a story can spill out of my mind if just start writin...", "meta": { "subset": "scribblehub", "series": "586090", "id": "586099", "q": 0.82, "title": "Just writing to write…i guess? - I’m here now", "author": "BigOofStudios", "chapters": 1, "rating": 4.5, "rating_ct": 2, "genre": [ "Action", "Comedy" ], "tags": [] } } ``` ### Data Fields * `text`: the actual chapter text * `meta`: metadata for chapter and series * `subset`: data source tag: `scribblehub` * `series`: series ID * `id`: chapter ID * `lang`: always `en` (English) * `q`: quality score (q-score) between 0.0 (terrible) and 1.0 (perfect); anything with a score `> 0.5` is generally good enough * `title`: chapter and series title in the format `<chapter title> - <series title>` * `chapters`: total number of chapters in the series * `rating`: Scribble Hub rating between 0 and 5 stars * `rating_ct`: number of ratings * `author`: author name * `genre`: array of Scribble Hub genres for the series * `tags`: array of tags for the series #### Q-Score Distribution ``` 0.00: 0 0.10: 0 0.20: 0 0.30: 84 0.40: 718 0.50: 3775 0.60: 22300 0.70: 72581 0.80: 137982 0.90: 135800 1.00: 59 ``` ### Data Splits No splitting of the data was performed. ## Dataset Creation ### Curation Rationale Scribble Hub is a home for original web stories, effectively a smaller, English-language version of Japan's Syosetuka ni Narou. As a result, it is a good source of reasonably well-written creative content. ### Source Data #### Initial Data Collection and Normalization TODO #### Who are the source language producers? The authors of each novel. ### Annotations #### Annotation process Title, ratings, and other metadata were parsed out using scripts that will be provided in the BigKnow2022 GitHub repository. #### Who are the annotators? No human annotators. 
### Personal and Sensitive Information The dataset contains only works of fiction, and we do not believe it contains any PII. ## Considerations for Using the Data ### Social Impact of Dataset This dataset is intended to be useful for anyone who wishes to train a model to generate "more entertaining" content. It may also be useful for other languages depending on your language model. ### Discussion of Biases This dataset is composed of fictional works by various authors. Because of this fact, the contents of this dataset will reflect the biases of those authors. **Additionally, this dataset contains NSFW material and was not filtered. Beware of stereotypes.** ### Other Known Limitations N/A ## Additional Information ### Dataset Curators Ronsor Labs ### Licensing Information Apache 2.0, for all parts of which Ronsor Labs or the Ryoko AI Production Committee may be considered authors. All other material is distributed under fair use principles. ### Citation Information ``` @misc{ryokoai2023-bigknow2022, title = {BigKnow2022: Bringing Language Models Up to Speed}, author = {Ronsor}, year = {2023}, howpublished = {\url{https://github.com/RyokoAI/BigKnow2022}}, } ``` ### Contributions Thanks to @ronsor (GH) for gathering this dataset.
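A minimal filtering sketch based on the card's guidance that a q-score above 0.5 is generally good enough; it assumes the dataset loads via `datasets` with a single default split and the nested `meta.q` field.

```python
from datasets import load_dataset

# Assumptions: the repo loads directly with load_dataset and exposes a "train" split.
ds = load_dataset("RyokoAI/ScribbleHub17K", split="train")
good = ds.filter(lambda ex: ex["meta"]["q"] > 0.5)  # card: q > 0.5 is "generally good enough"
print(len(good), "chapters kept")
```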
4,962
[ [ -0.0249176025390625, -0.048614501953125, 0.033599853515625, 0.0310821533203125, -0.0178070068359375, -0.0008668899536132812, -0.0233154296875, -0.04229736328125, 0.061309814453125, 0.038818359375, -0.057464599609375, -0.049468994140625, -0.05413818359375, 0....
COMP0087-GROUP8-22-23/PERC
2023-04-02T15:14:52.000Z
[ "task_categories:text-classification", "task_categories:text-generation", "size_categories:1K<n<10K", "language:en", "art", "region:us" ]
COMP0087-GROUP8-22-23
null
null
0
3
2023-04-02T15:11:58
--- task_categories: - text-classification - text-generation language: - en tags: - art pretty_name: PERC size_categories: - 1K<n<10K --- Reference: Ponnarassery, Sreeja (2017), “Poem Emotion Recognition Corpus (PERC)”, Mendeley Data, V1, doi: 10.17632/n9vbc8g9cx.1
266
[ [ 0.006866455078125, -0.0078582763671875, 0.010894775390625, 0.0308380126953125, -0.0380859375, -0.01457977294921875, 0.0161590576171875, -0.037384033203125, 0.0418701171875, 0.0142974853515625, -0.052581787109375, -0.04486083984375, -0.051605224609375, 0.0301...
madebyollin/pokemon-512
2023-04-02T22:25:32.000Z
[ "region:us" ]
madebyollin
null
null
2
3
2023-04-02T21:58:27
--- dataset_info: features: - name: image dtype: image splits: - name: train num_bytes: 1136778591.55 num_examples: 6930 download_size: 1115147878 dataset_size: 1136778591.55 --- # Dataset Card for "pokemon-512" A cleaned + upsampled-to-512px-square version of https://www.kaggle.com/datasets/djilax/pkmn-image-dataset, suitable for training high-resolution unconditional image generators. ![](comparison_screenshot.png)
447
[ [ -0.0347900390625, -0.001468658447265625, 0.0001462697982788086, 0.0138702392578125, -0.053955078125, 0.0031490325927734375, -0.006359100341796875, 0.0031375885009765625, 0.055267333984375, 0.05352783203125, -0.043182373046875, -0.035552978515625, -0.040954589843...
hakatiki/hungarian-books-corpus
2023-05-20T10:10:03.000Z
[ "region:us" ]
hakatiki
null
null
0
3
2023-04-03T14:58:09
Entry not found
15
[ [ -0.02142333984375, -0.01495361328125, 0.05718994140625, 0.0288238525390625, -0.035064697265625, 0.046539306640625, 0.052520751953125, 0.005062103271484375, 0.0513916015625, 0.016998291015625, -0.052093505859375, -0.014984130859375, -0.060394287109375, 0.0379...
cannin/biostars_qa
2023-04-06T14:18:09.000Z
[ "task_categories:text-classification", "task_categories:question-answering", "task_categories:text-generation", "size_categories:1K<n<10K", "language:en", "license:cc-by-4.0", "biology", "region:us" ]
cannin
null
null
2
3
2023-04-03T22:10:57
--- license: cc-by-4.0 task_categories: - text-classification - question-answering - text-generation language: - en tags: - biology size_categories: - 1K<n<10K --- ## Dataset Description - **BioStars Homepage:** https://www.biostars.org/ - **BioStars Paper:** https://doi.org/10.1371/journal.pcbi.1002216 - **Code Repository (This Dataset):** https://github.com/cannin/biostars_qa ### Dataset Summary This dataset contains 4803 question/answer pairs extracted from the [BioStars](https://www.biostars.org/) website. The site focuses on bioinformatics, computational genomics, and biological data analysis. ## Dataset Structure ### Data Fields The data contains INSTRUCTION, RESPONSE, SOURCE, and METADATA fields. The format follows the one described for [LAION-AI/Open-Assistant](https://github.com/LAION-AI/Open-Assistant/blob/main/data/datasets/README.md). ## Dataset Creation ### Curation Rationale Questions were included if they had an accepted answer and at least 1 vote. ### Source Data Data was collected using the [Biostars API](https://www.biostars.org/info/api/). ## Additional Information ### Dataset Curators [@cannin](https://github.com/cannin). @cannin has no affiliation with the BioStars project. ### Licensing Information Apache-2.0 ### Citation Information #### BioStars Project Cite the original project: https://doi.org/10.1371/journal.pcbi.1002216 #### This Dataset Citation for this dataset: ``` @misc{Luna2023a, author = {Augustin Luna}, title = {biostars_qa Dataset}, year = {2023}, howpublished = {\url{https://huggingface.co/datasets/cannin/biostars_qa}} } ``` #### This Dataset Code Citation for the code to generate this dataset: ``` @misc{Luna2023b, author = {Augustin Luna}, title = {biostars_qa Code}, year = {2023}, howpublished = {\url{https://github.com/cannin/biostars_qa}} } ```
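A minimal sketch turning one record into a plain Q/A prompt using the documented `INSTRUCTION` and `RESPONSE` fields; the split name is an assumption.

```python
from datasets import load_dataset

ds = load_dataset("cannin/biostars_qa", split="train")  # split name is an assumption
ex = ds[0]
prompt = f"### Question:\n{ex['INSTRUCTION']}\n\n### Answer:\n{ex['RESPONSE']}"
print(prompt[:200])
```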
1,857
[ [ -0.016571044921875, -0.035552978515625, 0.0300750732421875, 0.017486572265625, -0.016326904296875, -0.0110321044921875, 0.0020542144775390625, -0.0153656005859375, 0.03900146484375, 0.033905029296875, -0.048095703125, -0.059814453125, -0.027374267578125, 0.0...
RyokoAI/CNNovel125K
2023-04-04T11:38:03.000Z
[ "task_categories:text-classification", "task_categories:text-generation", "size_categories:100K<n<1M", "language:zh", "license:apache-2.0", "novel", "training", "region:us" ]
RyokoAI
null
null
14
3
2023-04-03T22:17:25
--- license: apache-2.0 language: - zh tags: - novel - training task_categories: - text-classification - text-generation pretty_name: CNNovel125K size_categories: - 100K<n<1M --- # Dataset Card for CNNovel125K *The BigKnow2022 dataset and its subsets are not yet complete. Not all information here may be accurate or accessible.* ## Dataset Description - **Homepage:** (TODO) - **Repository:** <https://github.com/RyokoAI/BigKnow2022> - **Paper:** N/A - **Leaderboard:** N/A - **Point of Contact:** Ronsor/undeleted <ronsor@ronsor.com> ### Dataset Summary CNNovel125K is a dataset composed of approximately 125,000 novels downloaded from the Chinese novel hosting site <http://ibiquw.com>. ### Supported Tasks and Leaderboards This dataset is primarily intended for unsupervised training of text generation models; however, it may be useful for other purposes. * text-classification * text-generation ### Languages * Simplified Chinese ## Dataset Structure ### Data Instances ```json { "text": "\n------------\n\n全部章节\n\n\n------------\n\n第一章 她肯定做梦呢!\n\n HT国际大酒店总统套房。\n\n 清晨的第一缕阳光照射进圣地亚哥地板上,洒落在凌乱的床单上,突然地,床上睡的正熟的人睁开眼睛, 猛然惊醒!\n\n ...", "meta": { "subset": "cnnovel.ibiquw", "id": "100067", "q": 0.9, "lang": "zh_cn", "title": "为爱入局:嫁给秦先生", "author": "奥德萨" } } { "text": "\n------------\n\n全部章节\n\n\n------------\n\n第1章:出狱就大婚\n\n 凉城第一监狱,大门缓缓打开,秦峰仰起头,贪婪的呼吸了一口空气。\n\n 三年了,终于又闻到了自由的味道。\n\n 他回过头,看着目 送他出来的那群人道:...", "meta": { "subset": "cnnovel.ibiquw", "id": "100059", "q": 0.9, "lang": "zh_cn", "title": "绝世弃婿", "author": "绷带怪" } } ``` ### Data Fields * `text`: the actual novel text, all chapters * `meta`: entry metadata * `subset`: dataset tag: `cnnovel.ibiquw` * `id`: novel ID * `q`: quality score, fixed at 0.9 * `lang`: always `zh_cn` (Simplified Chinese) * `title`: novel title * `author`: novel author ### Data Splits No splitting of the data was performed. ## Dataset Creation ### Curation Rationale TODO ### Source Data #### Initial Data Collection and Normalization TODO #### Who are the source language producers? The authors of each novel. ### Annotations #### Annotation process Titles were collected alongside the novel text and IDs. #### Who are the annotators? There were no human annotators. ### Personal and Sensitive Information The dataset contains only works of fiction, and we do not believe it contains any PII. ## Considerations for Using the Data ### Social Impact of Dataset This dataset is intended to be useful for anyone who wishes to train a model to generate "more entertaining" content in Chinese. It may also be useful for other languages depending on your language model. ### Discussion of Biases This dataset is composed of fictional works by various authors. Because of this fact, the contents of this dataset will reflect the biases of those authors. Beware of stereotypes. ### Other Known Limitations N/A ## Additional Information ### Dataset Curators Ronsor Labs ### Licensing Information Apache 2.0, for all parts of which Ronsor Labs or the Ryoko AI Production Committee may be considered authors. All other material is distributed under fair use principles. ### Citation Information ``` @misc{ryokoai2023-bigknow2022, title = {BigKnow2022: Bringing Language Models Up to Speed}, author = {Ronsor}, year = {2023}, howpublished = {\url{https://github.com/RyokoAI/BigKnow2022}}, } ``` ### Contributions Thanks to @ronsor (GH) for gathering this dataset.
3,563
[ [ -0.0223846435546875, -0.042144775390625, 0.0200958251953125, 0.0263671875, -0.02349853515625, -0.0278778076171875, -0.042388916015625, -0.03863525390625, 0.0182647705078125, 0.032989501953125, -0.03778076171875, -0.062744140625, -0.040008544921875, 0.0124053...
d2mw/thepiratebay-categorized-titles-2023-04
2023-04-04T17:44:48.000Z
[ "task_categories:text-classification", "region:us" ]
d2mw
null
null
0
3
2023-04-04T17:25:09
--- task_categories: - text-classification --- # Dataset Card for Dataset Name ## Dataset Description - **Homepage:** - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary This is a set of (title, integer category) pairs taken from The Pirate Bay via [123dw's](https://thepiratebay.org/search.php?q=user:123dw) regular TPB backups. This set represents the titles in release 2023-04. ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances Major category, count: * 1, 733604 (audio) * 2, 3557282 (video) * 3, 211288 (applications) * 4, 245684 (games) * 5, 2500830 (porn) * 6, 515778 (other) Is porn?, count: * 0, 5263636 * 1, 2500830 ### Data Fields * id - original torrent ID * title - Torrent title * category - Integer ThePirateBay category (see below) * mcat - Integer category / 100 (major category) * is_porn - 1 if porn, 0 otherwise (derivation sketched after this card) ### Categories ``` id,name 100,Audio 101,"Audio: Music" 102,"Audio: Audio books" 103,"Audio: Sound clips" 104,"Audio: FLAC" 199,"Audio: Other" 200,Video 201,"Video: Movies" 202,"Video: Movies DVDR" 203,"Video: Music videos" 204,"Video: Movie clips" 205,"Video: TV shows" 206,"Video: Handheld" 207,"Video: HD - Movies" 208,"Video: HD - TV shows" 209,"Video: 3D" 299,"Video: Other" 300,Applications 301,"Applications: Windows" 302,"Applications: Mac" 303,"Applications: UNIX" 304,"Applications: Handheld" 305,"Applications: IOS (iPad/iPhone)" 306,"Applications: Android" 399,"Applications: Other OS" 400,Games 401,"Games: PC" 402,"Games: Mac" 403,"Games: PSx" 404,"Games: XBOX360" 405,"Games: Wii" 406,"Games: Handheld" 407,"Games: IOS (iPad/iPhone)" 408,"Games: Android" 499,"Games: Other" 500,Porn 501,"Porn: Movies" 502,"Porn: Movies DVDR" 503,"Porn: Pictures" 504,"Porn: Games" 505,"Porn: HD - Movies" 506,"Porn: Movie clips" 599,"Porn: Other" 600,Other 601,"Other: E-books" 602,"Other: Comics" 603,"Other: Pictures" 604,"Other: Covers" 605,"Other: Physibles" 699,"Other: Other" ``` ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions [More Information Needed]
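The derived fields follow directly from the category table; a small sketch mirroring the documented `mcat = category / 100` and porn-flag rules.

```python
def derive_fields(category: int):
    mcat = category // 100            # major category: 1=Audio ... 5=Porn, 6=Other
    is_porn = 1 if mcat == 5 else 0   # per the card: 1 if porn, 0 otherwise
    return mcat, is_porn

print(derive_fields(505))  # (5, 1) -- "Porn: HD - Movies"
print(derive_fields(601))  # (6, 0) -- "Other: E-books"
```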
3,047
[ [ -0.033111572265625, -0.0218505859375, 0.00533294677734375, 0.031280517578125, -0.0218353271484375, 0.014862060546875, 0.00399017333984375, -0.00711822509765625, 0.04217529296875, 0.04595947265625, -0.05859375, -0.06781005859375, -0.04669189453125, 0.01078796...
P1ayer-1/chatgpt-conversations-chatlogs.net
2023-05-04T03:14:33.000Z
[ "doi:10.57967/hf/0643", "region:us" ]
P1ayer-1
null
null
12
3
2023-04-05T00:24:23
--- license: cc-by-4.0 --- ## ChatGPT Conversations from Chatlogs.net This dataset contains 89,288 conversations between users and ChatGPT. Version 1 contains all conversations available up to the cutoff date of April 4, 2023. Version 2 contains all conversations available up to the cutoff date of April 20, 2023. ## Source Data The conversations were scraped from the website Chatlogs.net. The data was generated using a custom scraper that can be found here: https://github.com/P1ayer-1/chatlogs.net-scraper
532
[ [ -0.02410888671875, -0.048004150390625, 0.01174163818359375, 0.0175323486328125, -0.0235443115234375, -0.006999969482421875, 0.0023174285888671875, -0.046722412109375, 0.0233001708984375, 0.04913330078125, -0.07086181640625, -0.014923095703125, -0.027236938476562...
ashwathjadhav23/Spanish_MLM_3
2023-04-05T06:34:21.000Z
[ "region:us" ]
ashwathjadhav23
null
null
0
3
2023-04-05T06:34:18
--- dataset_info: features: - name: text dtype: string splits: - name: train num_bytes: 3451474 num_examples: 25000 download_size: 1919406 dataset_size: 3451474 --- # Dataset Card for "Spanish_MLM_3" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
356
[ [ -0.0269317626953125, -0.02447509765625, 0.006916046142578125, 0.048858642578125, 0.006267547607421875, -0.0038776397705078125, 0.013458251953125, -0.0189361572265625, 0.054901123046875, 0.048431396484375, -0.066162109375, -0.07098388671875, -0.038665771484375, ...
learningmachineaz/translate_enaz_10m
2023-04-07T15:57:38.000Z
[ "task_categories:translation", "task_categories:text-generation", "task_categories:text2text-generation", "size_categories:1M<n<10M", "language:en", "language:az", "license:openrail", "azerbaijani books", "azerbaijani news", "azerbaijani poems", "azerbaijani articles", "azerbaijani dataset", ...
learningmachineaz
Machine translation EN-AZ dataset based on Google Translate and the National Library of Azerbaijan.
@InProceedings{ huggingface:dataset, title={Machine translation EN-AZ dataset}, author={Learning Machine LLC}, year={2022} }
2
3
2023-04-06T12:12:45
--- license: openrail task_categories: - translation - text-generation - text2text-generation language: - en - az tags: - azerbaijani books - azerbaijani news - azerbaijani poems - azerbaijani articles - azerbaijani dataset pretty_name: English-Azerbaijani Dataset size_categories: - 1M<n<10M --- # Description Dataset used to train our mT5-based model for machine translation, extracted from various text sources of the National Library of Azerbaijan: [mT5-translation-enaz](https://huggingface.co/learningmachineaz/mt5-enaz-10m) \ It contains only clean text. Wiki articles weren't used, as they contain a lot of irrelevant data. | Key point | Info | |-------------------------|---------| | Rows | ~10mil. EN-AZ sentence pairs | | Size | 975M (zipped) / 2.8G (unzipped) | | Format | TSV (tab separated pairs) | | English | Google Translate | | Azerbaijani | Original cleaned text | ## Author Collected and prepared by [Renat Kalimulin](https://www.linkedin.com/in/rinat-kalimulin-16853358/)
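Since the pairs ship as a tab-separated file, reading them can be as simple as the sketch below. The file name and the EN-then-AZ column order are assumptions; verify both against the actual download.

```python
import csv

# Sketch only: the file name and column order (EN first, AZ second) are assumptions.
with open("translate_enaz_10m.tsv", encoding="utf-8", newline="") as f:
    for i, (en, az) in enumerate(csv.reader(f, delimiter="\t")):
        print(f"EN: {en}")
        print(f"AZ: {az}")
        if i == 2:  # preview the first three pairs
            break
```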
1,015
[ [ -0.0052337646484375, -0.0298614501953125, 0.0145263671875, -0.006565093994140625, -0.04241943359375, -0.016326904296875, -0.0083770751953125, -0.009033203125, -0.005702972412109375, 0.0699462890625, -0.052581787109375, -0.056488037109375, -0.05133056640625, ...
PrathameshPawar/summary_2k
2023-04-16T19:19:50.000Z
[ "region:us" ]
PrathameshPawar
null
null
0
3
2023-04-08T01:08:15
Entry not found
15
[ [ -0.02142333984375, -0.01495361328125, 0.05718994140625, 0.0288238525390625, -0.035064697265625, 0.046539306640625, 0.052520751953125, 0.005062103271484375, 0.0513916015625, 0.016998291015625, -0.052093505859375, -0.014984130859375, -0.060394287109375, 0.0379...
larryvrh/CCMatrix-v1-Ja_Zh-filtered
2023-04-08T05:13:43.000Z
[ "task_categories:translation", "language:zh", "language:ja", "region:us" ]
larryvrh
null
null
4
3
2023-04-08T05:05:55
--- dataset_info: features: - name: ja dtype: string - name: zh dtype: string splits: - name: train num_bytes: 847526347 num_examples: 5686275 download_size: 651183008 dataset_size: 847526347 task_categories: - translation language: - zh - ja pretty_name: cc --- # Dataset Card for "CCMatrix-v1-Ja_Zh-filtered" ------ Filtered and modified version of Japanese/Chinese language pair data from [CCMatrix v1](https://opus.nlpl.eu/CCMatrix.php). Process steps: 1. Basic regex-based filtering / length checking to remove abnormal pairs. 2. Semantic similarity filtering with a threshold value of 0.6, based on [sentence-transformers/LaBSE](https://huggingface.co/sentence-transformers/LaBSE). 3. Conversion of all Traditional Chinese sentences into Simplified Chinese with [zhconv](https://github.com/gumblex/zhconv). ------ 经过过滤和修改的日语/中文语言对数据,来自[CCMatrix v1](https://opus.nlpl.eu/CCMatrix.php)。 处理步骤: 1. 基本的基于正则表达式的过滤/长度检查,以删除异常对。 2. 基于[sentence-transformers/LaBSE](https://huggingface.co/sentence-transformers/LaBSE)的语义相似性过滤,阈值为0.6。 3. 使用[zhconv](https://github.com/gumblex/zhconv)将所有繁体中文句子转换为简体中文。 ------ 以下はフィルタリングされ修正された日本語/中国語のペアデータです。データ元は[CCMatrix v1](https://opus.nlpl.eu/CCMatrix.php)です。 処理手順: 1. 正規表現に基づくフィルタリング/長さのチェックを行い、異常なペアを削除します。 2. [sentence-transformers/LaBSE](https://huggingface.co/sentence-transformers/LaBSE)に基づくセマンティック類似性フィルタリングを行い、閾値は0.6です。 3. [zhconv](https://github.com/gumblex/zhconv)を使って、すべての繁体字中国語の文を簡体字中国語に変換します。
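The card does not publish the filtering script, but steps 2 and 3 can be approximated as in the sketch below using the public `sentence-transformers` and `zhconv` APIs; the original pipeline's batching and exact thresholding details are unknown, so treat this as illustrative.

```python
from sentence_transformers import SentenceTransformer, util
from zhconv import convert

model = SentenceTransformer("sentence-transformers/LaBSE")

def keep_pair(ja: str, zh: str, threshold: float = 0.6) -> bool:
    # Step 2: keep the pair only if LaBSE cosine similarity >= threshold.
    emb = model.encode([ja, zh], convert_to_tensor=True)
    return util.cos_sim(emb[0], emb[1]).item() >= threshold

ja, zh = "今日は天気がいいですね。", "今天天氣很好。"
if keep_pair(ja, zh):
    zh = convert(zh, "zh-cn")  # Step 3: Traditional -> Simplified
    print(ja, zh)
```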
1,465
[ [ -0.037994384765625, -0.06256103515625, 0.032135009765625, 0.012603759765625, -0.050994873046875, -0.0156402587890625, -0.0294036865234375, -0.01212310791015625, 0.037139892578125, 0.056549072265625, -0.07244873046875, -0.07183837890625, -0.01715087890625, 0....
Djacon/ru_goemotions
2023-04-08T16:51:52.000Z
[ "task_categories:text-classification", "task_ids:multi-class-classification", "task_ids:multi-label-classification", "multilinguality:monolingual", "language:ru", "license:mit", "emotion", "arxiv:2005.00547", "region:us" ]
Djacon
null
null
1
3
2023-04-08T16:27:02
--- language: - ru license: - mit multilinguality: - monolingual task_categories: - text-classification task_ids: - multi-class-classification - multi-label-classification pretty_name: RuGoEmotions tags: - emotion --- # Dataset Card for RuGoEmotions ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ### Dataset Summary The RuGoEmotions dataset contains 34k Reddit comments labeled for 9 emotion categories (joy, interest, surprise, sadness, anger, disgust, fear, guilt, and neutral). The dataset already comes with predefined train/val/test splits. ### Supported Tasks and Leaderboards This dataset is intended for multi-class, multi-label emotion classification. ### Languages The data is in Russian. ## Dataset Structure ### Data Instances Each instance is a Reddit comment with one or more emotion annotations (or neutral). ### Data Fields The configuration includes: - `text`: the Reddit comment - `labels`: the emotion annotations ### Data Splits The simplified data includes a set of train/val/test splits with 26.9k, 3.29k, and 3.37k examples respectively. ## Dataset Creation ### Curation Rationale From the paper abstract: > Understanding emotion expressed in language has a wide range of applications, from building empathetic chatbots to detecting harmful online behavior. Advancement in this area can be improved using large-scale datasets with a fine-grained typology, adaptable to multiple downstream tasks. ### Source Data #### Initial Data Collection and Normalization Data was collected from Reddit comments via a variety of automated methods discussed in 3.1 of the paper. #### Who are the source language producers? English-speaking Reddit users. ### Annotations #### Who are the annotators? Annotations were produced by 3 English-speaking crowdworkers in India. ### Personal and Sensitive Information This dataset includes the original usernames of the Reddit users who posted each comment. Although Reddit usernames are typically disassociated from personal real-world identities, this is not always the case. It may therefore be possible to discover the identities of the individuals who created this content in some cases. ## Considerations for Using the Data ### Social Impact of Dataset Emotion detection is a worthwhile problem which can potentially lead to improvements such as better human/computer interaction.
However, emotion detection algorithms (particularly in computer vision) have been abused in some cases to make erroneous inferences in human monitoring and assessment applications such as hiring decisions, insurance pricing, and student attentiveness (see [this article](https://www.unite.ai/ai-now-institute-warns-about-misuse-of-emotion-detection-software-and-other-ethical-issues/)). ### Discussion of Biases From the authors' github page: > Potential biases in the data include: Inherent biases in Reddit and user base biases, the offensive/vulgar word lists used for data filtering, inherent or unconscious bias in assessment of offensive identity labels, annotators were all native English speakers from India. All these likely affect labelling, precision, and recall for a trained model. Anyone using this dataset should be aware of these limitations of the dataset. ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators Researchers at Amazon Alexa, Google Research, and Stanford. See the [author list](https://arxiv.org/abs/2005.00547). ### Licensing Information The GitHub repository which houses this dataset has an [Apache License 2.0](https://github.com/google-research/google-research/blob/master/LICENSE). ### Citation Information @inproceedings{demszky2020goemotions, author = {Demszky, Dorottya and Movshovitz-Attias, Dana and Ko, Jeongwoo and Cowen, Alan and Nemade, Gaurav and Ravi, Sujith}, booktitle = {58th Annual Meeting of the Association for Computational Linguistics (ACL)}, title = {{GoEmotions: A Dataset of Fine-Grained Emotions}}, year = {2020} } ### Contributions Thanks to [@joeddav](https://github.com/joeddav) for adding this dataset.
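For the multi-label setup above, a preparation sketch might look like the following; how `labels` is encoded (indices vs. emotion names) is not documented here, so inspect a row before relying on it.

```python
from datasets import load_dataset
from sklearn.preprocessing import MultiLabelBinarizer

ds = load_dataset("Djacon/ru_goemotions", split="train")
print(ds[0])  # check how `labels` is actually encoded before training

# One binary indicator column per emotion, as expected by most
# multi-label classifiers; assumes `labels` is a list per example.
mlb = MultiLabelBinarizer()
y = mlb.fit_transform(ds["labels"])
print(mlb.classes_, y.shape)
```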
5,096
[ [ -0.037200927734375, -0.04644775390625, 0.007350921630859375, 0.01800537109375, -0.024688720703125, -0.0129547119140625, -0.0307769775390625, -0.046875, 0.032012939453125, 0.0184478759765625, -0.04998779296875, -0.0687255859375, -0.053466796875, 0.01161193847...
hackathon-somos-nlp-2023/Habilidades_Agente_v1
2023-04-18T23:45:27.000Z
[ "task_categories:text-generation", "size_categories:10K<n<100K", "language:es", "license:apache-2.0", "region:us" ]
hackathon-somos-nlp-2023
null
null
21
3
2023-04-09T04:05:52
--- task_categories: - text-generation language: - es size_categories: - 10K<n<100K pretty_name: Habilidades - Agente license: apache-2.0 --- ## Description ``` Español: Presentamos un conjunto de datos que presenta tres partes principales: 1. Dataset sobre habilidades blandas. 2. Dataset de conversaciones empresariales entre agentes y clientes. 3. Dataset curado de Alpaca en español: Este dataset toma como base el dataset https://huggingface.co/datasets/somosnlp/somos-alpaca-es, y fue curado con la herramienta Argilla, alcanzando 9400 registros curados. Los datos están estructurados en torno a un método que se describe mediante tres elementos principales: instrucción, entrada y salida. Cada ejemplo incluye una instrucción que describe la tarea o el problema a resolver, la entrada que proporciona el contexto o la información necesaria para resolver la tarea, y la salida que es la respuesta esperada a la tarea. Además, hay dos tokens especiales incluidos en el dataset: "<SN>" que indica el inicio del ejemplo, y "<EN>" que indica el final del ejemplo. Este dataset ha sido creado para su uso en tareas de procesamiento del lenguaje natural, como la generación de texto o el modelado del lenguaje. English: We present a dataset that consists of three main parts: 1. Soft skills dataset. 2. Business conversations dataset between agents and clients. 3. Curated Alpaca dataset in Spanish: This dataset is based on the https://huggingface.co/datasets/somosnlp/somos-alpaca-es dataset, and was curated with the Argilla tool, resulting in 9400 curated records. The data is structured around a method described by three main elements: instruction, input, and output. Each example includes an instruction that describes the task or problem to be solved, the input that provides the context or information needed to solve the task, and the output that is the expected response to the task. Additionally, there are two special tokens included in the dataset: "<SN>" which indicates the start of the example, and "<EN>" which indicates the end of the example. This dataset has been created for use in natural language processing tasks such as text generation or language modeling. ``` Dataset contents: - soft skills: 1,000 conversations. - enterprise agent-client conversations: +14,000. - curated Alpaca: +9,400. Method: - instruction. - input. - output. ## Example dataset ``` <SN> instruction: Describa cómo se puede utilizar el marketing digital para mejorar la participación del cliente. , input: "", output:El marketing digital puede utilizarse para establecer relaciones con los clientes, proporcionándoles contenido personalizado y involucrándolos con promociones y ofertas adaptadas a sus intereses. Además, el marketing digital puede usarse para mejorar la satisfacción del cliente respondiendo a las consultas rápidamente, conectándose con los clientes a través de las redes sociales y proporcionando la información correcta en el momento adecuado. Además, el marketing digital puede ayudar a generar lealtad a través de incentivos y recompensas, así como a proporcionar un servicio al cliente excepcional. <EN> ``` ## Tokens ``` <SN> : start (inicio) <EN> : stop (parada) ``` ## Citation ``` @misc {hackathon-somos-nlp-2023, author = { {Alberto Ceballos, Edison Bejarano, Leonardo Bolaños, Santiago Pineda, Nicolay Potes} }, title = { Habilidades agente V1 }, year = { 2023 }, publisher = { Hugging Face } } ```
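As shown in the example above, each record is wrapped in the `<SN>`/`<EN>` markers. A small helper like the sketch below (the function name and exact whitespace are illustrative, not taken from the original files) can assemble such strings for training.

```python
# Illustrative helper; the precise spacing around the fields may differ
# from the original dataset files.
def format_example(instruction: str, input_text: str, output: str) -> str:
    return (
        "<SN>\n"
        f"instruction: {instruction},\n"
        f'input: "{input_text}",\n'
        f"output: {output}\n"
        "<EN>"
    )

print(format_example(
    "Describa cómo se puede utilizar el marketing digital...",
    "",
    "El marketing digital puede utilizarse para establecer relaciones...",
))
```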
3,448
[ [ -0.03619384765625, -0.076171875, 0.00470733642578125, 0.04473876953125, -0.016265869140625, -0.005222320556640625, -0.01091766357421875, -0.042633056640625, 0.048004150390625, 0.0382080078125, -0.060821533203125, -0.0645751953125, -0.038482666015625, 0.03323...
Isotonic/human_assistant_conversation_deduped
2023-07-05T12:35:56.000Z
[ "task_categories:text-generation", "task_categories:conversational", "size_categories:100K<n<1M", "language:en", "language:es", "language:zh", "license:afl-3.0", "region:us" ]
Isotonic
null
null
3
3
2023-04-11T06:16:00
--- license: afl-3.0 dataset_info: features: - name: prompt dtype: string - name: response dtype: string - name: text dtype: string splits: - name: train num_bytes: 1069951715.5157907 num_examples: 586784 - name: test num_bytes: 133745787.85612378 num_examples: 73349 - name: validation num_bytes: 133743964.43947384 num_examples: 73348 download_size: 701202899 dataset_size: 1337441467.8113883 task_categories: - text-generation - conversational language: - en - es - zh size_categories: - 100K<n<1M --- # Deduplicated version of Isotonic/human_assistant_conversation - Deduplicated with a maximum Jaccard similarity of 0.75
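The card does not include the deduplication script. A Jaccard threshold of 0.75 is commonly enforced with MinHash LSH, so the sketch below (using the `datasketch` library) shows one plausible way to do it; it is not necessarily the author's exact method.

```python
from datasketch import MinHash, MinHashLSH

def minhash(text: str, num_perm: int = 128) -> MinHash:
    m = MinHash(num_perm=num_perm)
    for token in set(text.lower().split()):
        m.update(token.encode("utf-8"))
    return m

# LSH index that surfaces previously seen entries whose estimated
# Jaccard similarity exceeds 0.75.
lsh = MinHashLSH(threshold=0.75, num_perm=128)
kept = []
docs = [
    "the quick brown fox jumps over the lazy dog",
    "the quick brown fox jumps over a lazy dog",  # Jaccard ~0.89 vs. the first
]
for i, doc in enumerate(docs):
    m = minhash(doc)
    if not lsh.query(m):  # no near-duplicate indexed so far
        lsh.insert(f"doc-{i}", m)
        kept.append(doc)
print(kept)  # the second, near-identical string is likely dropped
```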
670
[ [ -0.0265045166015625, -0.050750732421875, 0.01324462890625, 0.0233306884765625, -0.034698486328125, -0.022216796875, -0.0263824462890625, -0.04010009765625, 0.051422119140625, 0.0506591796875, -0.033050537109375, -0.0411376953125, -0.0157470703125, 0.02539062...
argilla/alpaca_bangla
2023-04-11T08:07:37.000Z
[ "region:us" ]
argilla
null
null
0
3
2023-04-11T08:07:33
--- dataset_info: features: - name: text dtype: 'null' - name: inputs struct: - name: _instruction dtype: string - name: input dtype: string - name: output dtype: string - name: prediction list: - name: label dtype: string - name: score dtype: float64 - name: prediction_agent dtype: 'null' - name: annotation dtype: 'null' - name: annotation_agent dtype: 'null' - name: vectors dtype: 'null' - name: multi_label dtype: bool - name: explanation dtype: 'null' - name: id dtype: string - name: metadata dtype: 'null' - name: status dtype: string - name: event_timestamp dtype: timestamp[us] - name: metrics dtype: 'null' splits: - name: train num_bytes: 1919536 num_examples: 1000 download_size: 717463 dataset_size: 1919536 --- # Dataset Card for "alpaca_bangla" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
1,046
[ [ -0.052032470703125, -0.03466796875, -0.004852294921875, 0.034271240234375, -0.030426025390625, -0.0081329345703125, 0.023773193359375, -0.028778076171875, 0.07366943359375, 0.03265380859375, -0.053375244140625, -0.056060791015625, -0.053955078125, -0.0125503...
benchan79/github-issues
2023-04-11T11:15:33.000Z
[ "task_categories:text-classification", "task_categories:text-retrieval", "task_ids:multi-class-classification", "task_ids:multi-label-classification", "task_ids:document-retrieval", "annotations_creators:no-annotation", "language_creators:found", "multilinguality:monolingual", "size_categories:unkno...
benchan79
null
null
0
3
2023-04-11T10:39:29
--- dataset_info: features: - name: url dtype: string - name: repository_url dtype: string - name: labels_url dtype: string - name: comments_url dtype: string - name: events_url dtype: string - name: html_url dtype: string - name: id dtype: int64 - name: node_id dtype: string - name: number dtype: int64 - name: title dtype: string - name: user struct: - name: login dtype: string - name: id dtype: int64 - name: node_id dtype: string - name: avatar_url dtype: string - name: gravatar_id dtype: string - name: url dtype: string - name: html_url dtype: string - name: followers_url dtype: string - name: following_url dtype: string - name: gists_url dtype: string - name: starred_url dtype: string - name: subscriptions_url dtype: string - name: organizations_url dtype: string - name: repos_url dtype: string - name: events_url dtype: string - name: received_events_url dtype: string - name: type dtype: string - name: site_admin dtype: bool - name: labels list: - name: id dtype: int64 - name: node_id dtype: string - name: url dtype: string - name: name dtype: string - name: color dtype: string - name: default dtype: bool - name: description dtype: string - name: state dtype: string - name: locked dtype: bool - name: assignee struct: - name: login dtype: string - name: id dtype: int64 - name: node_id dtype: string - name: avatar_url dtype: string - name: gravatar_id dtype: string - name: url dtype: string - name: html_url dtype: string - name: followers_url dtype: string - name: following_url dtype: string - name: gists_url dtype: string - name: starred_url dtype: string - name: subscriptions_url dtype: string - name: organizations_url dtype: string - name: repos_url dtype: string - name: events_url dtype: string - name: received_events_url dtype: string - name: type dtype: string - name: site_admin dtype: bool - name: assignees list: - name: login dtype: string - name: id dtype: int64 - name: node_id dtype: string - name: avatar_url dtype: string - name: gravatar_id dtype: string - name: url dtype: string - name: html_url dtype: string - name: followers_url dtype: string - name: following_url dtype: string - name: gists_url dtype: string - name: starred_url dtype: string - name: subscriptions_url dtype: string - name: organizations_url dtype: string - name: repos_url dtype: string - name: events_url dtype: string - name: received_events_url dtype: string - name: type dtype: string - name: site_admin dtype: bool - name: milestone struct: - name: url dtype: string - name: html_url dtype: string - name: labels_url dtype: string - name: id dtype: int64 - name: node_id dtype: string - name: number dtype: int64 - name: title dtype: string - name: description dtype: string - name: creator struct: - name: login dtype: string - name: id dtype: int64 - name: node_id dtype: string - name: avatar_url dtype: string - name: gravatar_id dtype: string - name: url dtype: string - name: html_url dtype: string - name: followers_url dtype: string - name: following_url dtype: string - name: gists_url dtype: string - name: starred_url dtype: string - name: subscriptions_url dtype: string - name: organizations_url dtype: string - name: repos_url dtype: string - name: events_url dtype: string - name: received_events_url dtype: string - name: type dtype: string - name: site_admin dtype: bool - name: open_issues dtype: int64 - name: closed_issues dtype: int64 - name: state dtype: string - name: created_at dtype: timestamp[s] - name: updated_at dtype: timestamp[s] - name: due_on dtype: 'null' - name: closed_at dtype: 'null' - 
name: comments sequence: string - name: created_at dtype: timestamp[s] - name: updated_at dtype: timestamp[s] - name: closed_at dtype: timestamp[s] - name: author_association dtype: string - name: active_lock_reason dtype: 'null' - name: body dtype: string - name: reactions struct: - name: url dtype: string - name: total_count dtype: int64 - name: '+1' dtype: int64 - name: '-1' dtype: int64 - name: laugh dtype: int64 - name: hooray dtype: int64 - name: confused dtype: int64 - name: heart dtype: int64 - name: rocket dtype: int64 - name: eyes dtype: int64 - name: timeline_url dtype: string - name: performed_via_github_app dtype: 'null' - name: state_reason dtype: string - name: draft dtype: bool - name: pull_request struct: - name: url dtype: string - name: html_url dtype: string - name: diff_url dtype: string - name: patch_url dtype: string - name: merged_at dtype: timestamp[s] - name: is_pull_request dtype: bool splits: - name: train num_bytes: 15437002 num_examples: 3100 download_size: 4434085 dataset_size: 15437002 annotations_creators: - no-annotation language: - en language_creators: - found license: - unknown multilinguality: - monolingual pretty_name: Hugging Face GitHub Issues size_categories: - unknown source_datasets: - original tags: [] task_categories: - text-classification - text-retrieval task_ids: - multi-class-classification - multi-label-classification - document-retrieval --- # Dataset Card for "Hugging Face GitHub Issues" ## Dataset Description - **Point of Contact:** [Ben Chan](mailto:benchan79@gmail.com) ### Dataset Summary GitHub Issues is a dataset consisting of GitHub issues and pull requests associated with the 🤗 Datasets [repository](https://github.com/huggingface/datasets). It is intended for educational purposes and can be used for semantic search or multilabel text classification. The contents of each GitHub issue are in English and concern the domain of datasets for NLP, computer vision, and beyond. ### Supported Tasks and Leaderboards ### Languages English ## Dataset Structure ### Data Instances ### Data Fields ### Data Splits ## Dataset Creation ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information ## Considerations for Using the Data ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations ## Additional Information ### Dataset Curators ### Licensing Information ### Citation Information ### Contributions
7,536
[ [ -0.03411865234375, -0.03961181640625, 0.006580352783203125, 0.03057861328125, -0.0182342529296875, 0.007633209228515625, -0.0220794677734375, -0.043426513671875, 0.05413818359375, 0.0306549072265625, -0.06414794921875, -0.057647705078125, -0.061065673828125, ...
mfromm/AMSR
2023-04-12T15:58:08.000Z
[ "task_categories:text-classification", "size_categories:1K<n<10K", "language:en", "license:openrail", "argument-mining", "argument-identification", "region:us" ]
mfromm
null
null
1
3
2023-04-12T10:21:14
--- license: openrail task_categories: - text-classification language: - en tags: - argument-mining - argument-identification pretty_name: AMSR size_categories: - 1K<n<10K --- Argument Mining in Scientific Reviews (AMSR) We release a new dataset of peer-reviews from different computer science conferences with annotated arguments, called AMSR (**A**rgument **M**ining in **S**cientific **R**eviews). 1. Raw Data conferences_raw/ contains directories for each conference we scraped (e.g., [iclr20](./data/iclr20)). The respective directory of each conference comprises multiple `*.json` files, where every file contains the information belonging to a single paper, such as the title, the abstract, the submission date and the reviews. The reviews are stored in a list called `"review_content"`. 2. Cleaned Data conferences_cleaned/ contains reviews and papers where we removed all unwanted character sequences from the reviews. For details of the preprocessing steps, please refer to our paper "Argument Mining Driven Analysis of Peer-Reviews". 3. Annotated Data conferences_annotated/ contains sentence_level and token_level data of 77 reviews, each annotated by 3 annotators. We have three labels: PRO - Arguments supporting the acceptance of the paper. CON - Arguments opposing the acceptance of the paper. NON - Non-argumentative sentences/tokens which have no influence on the acceptance of the paper. From these labels we derive three tasks: Argumentation Detection: A binary classification of whether a text span is an argument. The classes are denoted by ARG and NON, where ARG is the union of the PRO and CON classes. Stance Detection: A binary classification of whether an argumentative text span is supporting or opposing the paper acceptance. The model is trained and evaluated only on argumentative PRO and CON text spans. Joint Detection: A multi-class classification between the classes PRO, CON and NON, i.e. the combination of argumentation and stance detection. 4. Generalization across Conferences conferences_annotated_generalization/ contains token_level data of 77 reviews split differently than in 3. We studied the model’s generalization to peer-reviews for papers from other (sub)domains. To this end, we reduce the test set to only contain reviews from the GI’20 conference. The focus of the GI’20 conference is Computer Graphics and Human-Computer Interaction, while the other conferences are focused on Representation Learning, AI and Medical Imaging. We consider GI’20 a subdomain since all conferences are from the domain of computer science. NO-GI: The original training dataset with all sentences from reviews of GI’20 removed. ALL: A resampling of the original training dataset of the same size as NO-GI, with sentences from all conferences. 5. Jupyter Notebook ReviewStat is a Jupyter notebook which shows interesting statistics of the raw dataset.
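To make the task definitions concrete, here is a minimal sketch deriving targets for the three tasks from the PRO/CON/NON annotations; the on-disk format of the annotated files is not shown here, so the input shape is an assumption.

```python
# Illustrative mapping from PRO/CON/NON annotations to the three tasks.

def argumentation_label(label: str) -> str:
    # ARG is the union of PRO and CON.
    return "NON" if label == "NON" else "ARG"

def stance_label(label: str):
    # Stance detection is only defined on argumentative spans.
    return label if label in ("PRO", "CON") else None

sentence_labels = ["PRO", "NON", "CON"]
print([argumentation_label(l) for l in sentence_labels])  # ['ARG', 'NON', 'ARG']
print([stance_label(l) for l in sentence_labels])         # ['PRO', None, 'CON']
```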
2,908
[ [ -0.044097900390625, -0.045135498046875, 0.0281219482421875, 0.0024566650390625, -0.0400390625, 0.0064239501953125, -0.00705718994140625, -0.0361328125, 0.0237274169921875, 0.013275146484375, -0.0294647216796875, -0.05499267578125, -0.0460205078125, 0.0272064...
pythainlp/final_training_set_v1
2023-04-29T07:06:04.000Z
[ "task_categories:conversational", "task_categories:text-generation", "language:en", "region:us" ]
pythainlp
null
null
1
3
2023-04-13T16:52:49
--- dataset_info: features: - name: text dtype: string - name: metadata struct: - name: source dtype: string - name: nb_token dtype: int64 splits: - name: train num_bytes: 337155434.9768474 num_examples: 405760 - name: test num_bytes: 1277960.0231525812 num_examples: 1538 download_size: 191404581 dataset_size: 338433395 task_categories: - conversational - text-generation language: - en --- # Dataset Card for "final_training_set_v1" Finetuning datasets for [WangChanGLM](https://github.com/pythainlp/wangchanglm) sourced from [LAION OIG chip2 and infill_dbpedia](https://huggingface.co/datasets/laion/OIG) ([Apache-2.0](https://github.com/pythainlp/wangchanglm/blob/main/LICENSE)), [DataBricks Dolly v2](https://github.com/databrickslabs/dolly) ([Apache-2.0](https://github.com/pythainlp/wangchanglm/blob/main/LICENSE)), [OpenAI TL;DR](https://github.com/openai/summarize-from-feedback) ([MIT](https://opensource.org/license/mit/)), and [Hello-SimpleAI HC3](https://huggingface.co/datasets/Hello-SimpleAI/HC3) ([CC-BY SA](https://creativecommons.org/licenses/by-sa/4.0/))
1,131
[ [ -0.0210723876953125, -0.0108489990234375, -0.0036945343017578125, 0.01372528076171875, -0.01971435546875, -0.0179290771484375, 0.006458282470703125, -0.0151824951171875, -0.00269317626953125, 0.0343017578125, -0.046234130859375, -0.038482666015625, -0.0156707763...
prashanthpillai/docvqa_test
2023-04-13T17:30:48.000Z
[ "region:us" ]
prashanthpillai
null
null
0
3
2023-04-13T17:29:28
--- dataset_info: features: - name: questionId dtype: int64 - name: question dtype: string - name: image sequence: sequence: sequence: uint8 - name: docId dtype: int64 - name: ucsf_document_id dtype: string - name: ucsf_document_page_no dtype: string - name: data_split dtype: string - name: words sequence: string - name: boxes sequence: sequence: int64 splits: - name: test num_bytes: 843083964 num_examples: 5188 download_size: 296859136 dataset_size: 843083964 --- # Dataset Card for "docvqa_test" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
726
[ [ -0.0433349609375, -0.025238037109375, 0.01042938232421875, 0.0008559226989746094, -0.00980377197265625, -0.005672454833984375, 0.032928466796875, 0.0096435546875, 0.032440185546875, 0.0261688232421875, -0.053955078125, -0.049774169921875, -0.034454345703125, ...
snipaid/instruct-snippet-mlsum
2023-04-19T18:21:38.000Z
[ "task_categories:summarization", "task_categories:text2text-generation", "size_categories:1K<n<10K", "language:de", "license:mit", "news", "headline generation", "teaser generation", "keyword generation", "tweet generation", "serp title-tag generation", "serp meta-description generation", "n...
snipaid
null
null
0
3
2023-04-13T20:00:40
--- license: mit language: de tags: - news - headline generation - teaser generation - keyword generation - tweet generation - serp title-tag generation - serp meta-description generation - news snippet generation size_categories: - 1K<n<10K task_categories: - summarization - text2text-generation pretty_name: Instruct-Snippet-MLSUM-500 --- # Dataset Card for Instruct-Snippet-MLSUM-500 ### Dataset Summary This is a multitask instruction-finetuning dataset for the task of news snippet generation. It is built from a sample of ~500 news articles from the [MLSUM](https://huggingface.co/datasets/mlsum) dataset, augmented with machine-generated news snippets. ### Supported Tasks This dataset was created to support the task of generating news snippets such as title, teaser, keywords, SERP and tweet for news articles in German. ### Languages de - German ## Dataset Structure label: a string feature. instruction: a string feature. input: a string feature. output: a string feature. ## Dataset Creation This dataset was created from Snippet-MLSUM-500. See [Snippet-MLSUM-500](https://huggingface.co/datasets/snipaid/snippet-mlsum-500) for the dataset without instructions. Instructions were generated with GPT-3.5 from a human-curated seed set of instructions. ## Considerations for Using the Data ### Known Limitations Part of the snippet data is machine-generated. Be aware that these features (specifically: output) may exhibit signs of model hallucination, toxicity and stereotypes. ## Additional Information See [Instruct-Snippet-MLSUM-500-V2](https://huggingface.co/datasets/snipaid/instruct-snippet-mlsum-500-v2) if you are interested in an improved successor with further support for summaries. ### Licensing Information This dataset is licensed under the MIT license.
1,822
[ [ -0.022064208984375, -0.041015625, 0.0102081298828125, 0.0142364501953125, -0.016754150390625, -0.002391815185546875, -0.0184326171875, 0.001613616943359375, 0.01593017578125, 0.050506591796875, -0.07513427734375, -0.06890869140625, -0.034698486328125, 0.0001...
qbao775/PARARULE-Plus-Depth-3
2023-06-05T03:57:53.000Z
[ "task_categories:text-classification", "task_categories:question-answering", "size_categories:100K<n<1M", "language:en", "license:mit", "Reasoning", "Multi-Step-Deductive-Reasoning", "Logical-Reasoning", "region:us" ]
qbao775
null
null
1
3
2023-04-16T05:25:47
--- license: mit task_categories: - text-classification - question-answering language: - en tags: - Reasoning - Multi-Step-Deductive-Reasoning - Logical-Reasoning size_categories: - 100K<n<1M --- # PARARULE-Plus-Depth-3 This is a branch which includes the Depth=3 portion of PARARULE-Plus. PARARULE Plus is a deep multi-step reasoning dataset over natural language. It can be seen as an improvement on the PARARULE dataset (Peter Clark et al., 2020). Both PARARULE and PARARULE-Plus follow the closed-world assumption and negation as failure. The motivation is to generate deeper PARARULE training samples. We add more training samples for the case where the depth is greater than or equal to two to explore whether Transformers have reasoning ability. PARARULE Plus is a combination of two types of entities, animals and people, and corresponding relationships and attributes. From depth 2 to depth 5, we have around 100,000 samples at each depth, and there are nearly 400,000 samples in total. Here are the original links for PARARULE-Plus, including the paper, project and data. Paper: https://www.cs.ox.ac.uk/isg/conferences/tmp-proceedings/NeSy2022/paper15.pdf Project: https://github.com/Strong-AI-Lab/Multi-Step-Deductive-Reasoning-Over-Natural-Language Data: https://github.com/Strong-AI-Lab/PARARULE-Plus PARARULE-Plus has been collected and merged by [LogiTorch.ai](https://www.logitorch.ai/), [ReasoningNLP](https://github.com/FreedomIntelligence/ReasoningNLP), [Prompt4ReasoningPapers](https://github.com/zjunlp/Prompt4ReasoningPapers) and [OpenAI/Evals](https://github.com/openai/evals/pull/651). In this Hugging Face version, we pre-processed the dataset and use `1` to represent `true` and `0` to represent `false` to make it easier to train models. ## How to load the dataset? ``` from datasets import load_dataset dataset = load_dataset("qbao775/PARARULE-Plus-Depth-3") ``` ## How to train a model using the dataset? We provide an [example](https://github.com/Strong-AI-Lab/PARARULE-Plus/blob/main/README.md#an-example-script-to-load-pararule-plus-and-fine-tune-bert) showing how you can `git clone` the project and fine-tune a model on the dataset locally. ## Citation ``` @inproceedings{bao2022multi, title={Multi-Step Deductive Reasoning Over Natural Language: An Empirical Study on Out-of-Distribution Generalisation}, author={Qiming Bao and Alex Yuxuan Peng and Tim Hartill and Neset Tan and Zhenyun Deng and Michael Witbrock and Jiamou Liu}, year={2022}, publisher={The 2nd International Joint Conference on Learning and Reasoning and 16th International Workshop on Neural-Symbolic Learning and Reasoning (IJCLR-NeSy 2022)} } ```
2,673
[ [ -0.040863037109375, -0.049224853515625, 0.0357666015625, 0.017913818359375, -0.0016632080078125, -0.00722503662109375, -0.006366729736328125, -0.034393310546875, 0.0006103515625, 0.043304443359375, -0.035186767578125, -0.03973388671875, -0.03607177734375, 0....
dariolopez/gpt-j-oasst1-es
2023-04-21T19:03:26.000Z
[ "size_categories:1K<n<10K", "language:es", "license:apache-2.0", "region:us" ]
dariolopez
null
null
1
3
2023-04-17T18:30:42
--- dataset_info: features: - name: instruction dtype: string - name: output dtype: string splits: - name: train num_bytes: 4445880 num_examples: 3909 download_size: 2580076 dataset_size: 4445880 license: apache-2.0 language: - es size_categories: - 1K<n<10K --- # OpenAssistant Conversations Spanish Dataset (OASST1-es) for GPT-j ## Dataset Summary Subset of the original [OpenAssistant Conversations Dataset (OASST)](https://huggingface.co/datasets/OpenAssistant/oasst1). * Filtered by `lang=es`. * Formatted according to the "instruction - output" pattern. * Selected the best-ranked output (some instructions have multiple outputs ranked by humans). * Selected only the first level of the conversation tree. ## Dataset Structure The dataset has 3,909 (instruction, output) tuples.
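The selection described in the bullets above can be approximated against the original OASST1 dump roughly as follows. The field names come from the public oasst1 schema, and treating `rank == 0` as "best ranked" is an assumption; the published dataset may have applied additional cleaning.

```python
from datasets import load_dataset

oasst = load_dataset("OpenAssistant/oasst1", split="train")
es = oasst.filter(lambda r: r["lang"] == "es")

# First-level pairs: root prompts joined with their best-ranked assistant reply.
prompts = {r["message_id"]: r["text"] for r in es if r["parent_id"] is None}
pairs = [
    {"instruction": prompts[r["parent_id"]], "output": r["text"]}
    for r in es
    if r["role"] == "assistant" and r["parent_id"] in prompts and r["rank"] == 0
]
print(len(pairs))
```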
830
[ [ -0.0252532958984375, -0.06573486328125, 0.0167083740234375, 0.01360321044921875, -0.009796142578125, 0.01152801513671875, 0.003421783447265625, -0.0157318115234375, 0.02398681640625, 0.046142578125, -0.064208984375, -0.06341552734375, -0.046173095703125, -0....
albertvillanova/tmp-imagefolder-metadata
2023-04-19T11:41:46.000Z
[ "region:us" ]
albertvillanova
null
null
0
3
2023-04-19T10:57:14
Entry not found
15
[ [ -0.02142333984375, -0.01495361328125, 0.05718994140625, 0.0288238525390625, -0.035064697265625, 0.046539306640625, 0.052520751953125, 0.005062103271484375, 0.0513916015625, 0.016998291015625, -0.052093505859375, -0.014984130859375, -0.060394287109375, 0.0379...
ioclab/animesfw
2023-04-24T14:10:44.000Z
[ "region:us" ]
ioclab
null
null
1
3
2023-04-19T15:24:32
--- dataset_info: features: - name: image dtype: image - name: tags dtype: string splits: - name: train num_bytes: 968422627084.875 num_examples: 3969879 download_size: 4471804726 dataset_size: 968422627084.875 --- # Dataset Card for "animesfw" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
407
[ [ -0.0450439453125, -0.01458740234375, 0.0153350830078125, 0.034820556640625, -0.009765625, -0.0095672607421875, 0.031341552734375, -0.01313018798828125, 0.06915283203125, 0.03619384765625, -0.07769775390625, -0.03790283203125, -0.041656494140625, -0.011161804...
justram/COCO2014-Captions
2023-04-19T20:33:40.000Z
[ "region:us" ]
justram
null
null
1
3
2023-04-19T20:33:17
--- dataset_info: features: - name: text_id dtype: int64 - name: caption dtype: string splits: - name: train num_bytes: 36551702 num_examples: 566747 - name: val num_bytes: 1610843 num_examples: 25010 - name: test num_bytes: 1610345 num_examples: 25010 download_size: 21814166 dataset_size: 39772890 --- # Dataset Card for "COCO2014-Captions" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
525
[ [ -0.04522705078125, -0.00801849365234375, 0.011138916015625, 0.041961669921875, -0.021759033203125, 0.020050048828125, 0.010467529296875, -0.0203399658203125, 0.05322265625, 0.04754638671875, -0.061981201171875, -0.053009033203125, -0.04296875, 0.006519317626...
cestwc/SG-subzone-poi-sentiment
2023-04-20T07:44:54.000Z
[ "region:us" ]
cestwc
null
null
0
3
2023-04-20T07:13:14
--- dataset_info: features: - name: local_created_at dtype: string - name: id dtype: int64 - name: text dtype: string - name: source dtype: string - name: truncated dtype: bool - name: in_reply_to_status_id dtype: float64 - name: in_reply_to_user_id dtype: float64 - name: user_id dtype: int64 - name: user_name dtype: string - name: user_screen_name dtype: string - name: user_location dtype: string - name: user_url dtype: string - name: user_verified dtype: bool - name: user_default_profile dtype: bool - name: user_description dtype: string - name: user_followers_count dtype: int64 - name: user_friends_count dtype: int64 - name: user_listed_count dtype: int64 - name: user_favourites_count dtype: int64 - name: user_statuses_count dtype: int64 - name: local_user_created_at dtype: string - name: place_id dtype: string - name: place_url dtype: string - name: place_place_type dtype: string - name: place_name dtype: string - name: place_country_code dtype: string - name: place_bounding_box_type dtype: string - name: place_bounding_box_coordinates dtype: string - name: is_quote_status dtype: bool - name: retweet_count dtype: int64 - name: favorite_count dtype: int64 - name: entities_hashtags dtype: string - name: entities_urls dtype: string - name: entities_symbols dtype: string - name: entities_user_mentions dtype: string - name: favorited dtype: bool - name: retweeted dtype: bool - name: possibly_sensitive dtype: bool - name: lang dtype: string - name: latitude dtype: float64 - name: longitude dtype: float64 - name: year_created_at dtype: int64 - name: month_created_at dtype: int64 - name: day_created_at dtype: int64 - name: weekday_created_at dtype: int64 - name: hour_created_at dtype: int64 - name: minute_created_at dtype: int64 - name: year_user_created_at dtype: int64 - name: month_user_created_at dtype: int64 - name: day_user_created_at dtype: int64 - name: weekday_user_created_at dtype: int64 - name: hour_user_created_at dtype: int64 - name: minute_user_created_at dtype: int64 - name: subzone dtype: string - name: planning_area dtype: string - name: poi_flag dtype: float64 - name: poi_id dtype: string - name: poi_dist dtype: float64 - name: poi_latitude dtype: float64 - name: poi_longitude dtype: float64 - name: poi_name dtype: string - name: poi_type dtype: string - name: poi_cate2 dtype: string - name: poi_cate3 dtype: string - name: clean_text dtype: string - name: joy_score dtype: float64 - name: trust_score dtype: float64 - name: positive_score dtype: float64 - name: sadness_score dtype: float64 - name: disgust_score dtype: float64 - name: anger_score dtype: float64 - name: anticipation_score dtype: float64 - name: negative_score dtype: float64 - name: fear_score dtype: float64 - name: surprise_score dtype: float64 - name: words dtype: string - name: polarity_score dtype: float64 - name: labels dtype: int64 splits: - name: '0203' num_bytes: 1519418943 num_examples: 1025135 download_size: 415295950 dataset_size: 1519418943 --- # Dataset Card for "SG-subzone-poi-sentiment" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
3,649
[ [ -0.05609130859375, -0.01479339599609375, 0.0130767822265625, 0.022186279296875, -0.024871826171875, -0.0125274658203125, 0.0108642578125, 0.007083892822265625, 0.0758056640625, 0.0137481689453125, -0.06781005859375, -0.06939697265625, -0.033050537109375, -0....
biglab/webui-7k
2023-05-05T02:25:39.000Z
[ "license:other", "region:us" ]
biglab
null
null
0
3
2023-04-20T08:01:27
--- license: other --- This data accompanies the WebUI project (https://dl.acm.org/doi/abs/10.1145/3544548.3581158). For more information, check out the project website: https://uimodeling.github.io/ To download this dataset, you need to install the huggingface-hub package: ``` pip install huggingface-hub ``` Use `snapshot_download`: ``` from huggingface_hub import snapshot_download snapshot_download(repo_id="biglab/webui-7k", repo_type="dataset") ``` IMPORTANT * Before downloading and using, please review the copyright info here: https://github.com/js0nwu/webui/blob/main/COPYRIGHT.txt * Not all data samples have the same number of files (e.g., the same number of device screenshots) because the crawler used a timeout during collection * The dataset released on Hugging Face was filtered using a list of explicit words and therefore contains fewer samples than the experiments originally used in the paper. The raw dataset is currently available (https://drive.google.com/drive/folders/1hcO75W2FjsZoibsj2TIbKz67hy9JkOBz?usp=share_link) but may be removed in the future.
1,090
[ [ -0.0335693359375, -0.0491943359375, 0.0073089599609375, 0.0171661376953125, -0.012176513671875, -0.0137176513671875, -0.0003085136413574219, -0.0197906494140625, 0.033782958984375, 0.0268707275390625, -0.055419921875, -0.038421630859375, -0.031402587890625, ...
lang-uk/every_prompt
2023-04-20T16:49:02.000Z
[ "task_categories:question-answering", "multilinguality:multilingual", "size_categories:1M<n<10M", "license:mit", "region:us" ]
lang-uk
Every Prompt dataset. Every Prompt is a data-driven approach to mining instructions from the web. It contains over a million FAQs and HowTos from around the world in a structured format. It also has basic pre-processing to calculate the length of the useful text and identify the language of that text with the help of GCLD3.
null
12
3
2023-04-20T11:08:51
--- license: mit task_categories: - question-answering pretty_name: Every Prompt size_categories: - 1M<n<10M multilinguality: - multilingual --- ## Every Prompt Every Prompt is a data-driven approach to mining instructions from the web. It contains over a million FAQs and HowTos from around the world in a structured format. It also has basic pre-processing to calculate the length of the useful text and identify the language of that text with the help of [GCLD3](https://github.com/google/cld3). It relies on the [Web Data Commons](http://webdatacommons.org) dataset (from October 2022) to find the seed list of sites with [**HowTo**](https://schema.org/HowTo) and [**FAQPage**](https://schema.org/FAQPage) items. The general pipeline looks like this: * Download 1.6TB of structured data from webdatacommons to identify the pages with the structured data we need (wget/parallel). That gives us 1,985,925 seed pages. * Crawl the seed pages and try to extract structured data using the [extruct](https://pypi.org/project/extruct/#description) package. That leaves around 1,358,638 pages that are alive and well-formed. * Extract only the relevant structured data of the HowTo/FAQPage type with the help of jmespath. That boils down to 1,266,926 JSON documents. * Extract the textual information out of the structure to identify the text's language, the textual data's length, and the text/data ratio. You can use the resulting dataset by filtering for the language and the amount of text (see the sketch below). You need to convert the structured data into instructions yourself. You'll need to apply extra cleansing/evaluation of the instructions you've got because, you know, the internet is still full of crap. **Caveat emptor**: the format of the FAQs and HowTos in the dataset might vary greatly. Account for that. To understand potential pitfalls, look at the jmespath expressions in `export_structured_data.py`.
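For example, a first filtering pass might look like the sketch below; the metadata field names (`language`, `text_length`) are guesses rather than a documented schema, so inspect one row before filtering.

```python
from datasets import load_dataset

# Sketch only: the field names are assumptions; check the actual schema first.
ds = load_dataset("lang-uk/every_prompt", split="train", streaming=True)
english_longish = (
    row for row in ds
    if row.get("language") == "en" and row.get("text_length", 0) > 500
)
print(next(english_longish))
```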
## Detailed stats (with breakdown by language and data type) | language | FAQPage count | FAQPage text length | HowTo count | HowTo text length | items count | text length | | --- | --- | --- | --- | --- | --- | --- | | en | 592730 | 1186748927 | 29017 | 77135350 | 621747 | 1263884277 | | de | 83184 | 213931486 | 3370 | 13905977 | 86554 | 227837463 | | es | 63237 | 113906536 | 6466 | 30517773 | 69703 | 144424309 | | fr | 65081 | 141638675 | 3672 | 21632272 | 68753 | 163270947 | | ja | 55439 | 46231152 | 1402 | 1678468 | 56841 | 47909620 | | ru | 41271 | 70947161 | 2403 | 12805308 | 43674 | 83752469 | | nl | 34066 | 102719276 | 2007 | 11078079 | 36073 | 113797355 | | it | 23076 | 43968063 | 2465 | 13696136 | 25541 | 57664199 | | vi | 23115 | 38603954 | 720 | 3224051 | 23835 | 41828005 | | zh | 22496 | 21111729 | 1112 | 1513344 | 23608 | 22625073 | | pl | 19424 | 41446645 | 306 | 419787 | 19730 | 41866432 | | fa | 17263 | 31294557 | 1819 | 1915117 | 19082 | 33209674 | | tr | 13619 | 20040069 | 722 | 418695 | 14341 | 20458764 | | und | 12256 | 1032156 | 322 | 8941 | 12578 | 1041097 | | pt | 10784 | 26163387 | 1775 | 8295306 | 12559 | 34458693 | | ro | 10536 | 16405628 | 75 | 89946 | 10611 | 16495574 | | id | 8256 | 14353165 | 1871 | 13055561 | 10127 | 27408726 | | ko | 8348 | 7624222 | 616 | 1533830 | 8964 | 9158052 | | sv | 8007 | 15926376 | 390 | 638054 | 8397 | 16564430 | | ar | 6950 | 10240266 | 1241 | 7517175 | 8191 | 17757441 | | da | 7691 | 15277244 | 408 | 450176 | 8099 | 15727420 | | cs | 7546 | 13201121 | 480 | 2471544 | 8026 | 15672665 | | fi | 7767 | 14468764 | 199 | 170138 | 7966 | 14638902 | | hi | 4517 | 4307716 | 683 | 4294129 | 5200 | 8601845 | | hu | 4866 | 10639836 | 125 | 61118 | 4991 | 10700954 | | el | 4600 | 10555382 | 103 | 55576 | 4703 | 10610958 | | no | 4357 | 8426887 | 179 | 354796 | 4536 | 8781683 | | uk | 4401 | 6925331 | 90 | 37285 | 4491 | 6962616 | | iw | 4056 | 7723904 | 36 | 35305 | 4092 | 7759209 | | bg | 3620 | 10154727 | 41 | 31268 | 3661 | 10185995 | | sk | 2639 | 4394140 | 65 | 32527 | 2704 | 4426667 | | th | 1877 | 3823867 | 613 | 3171583 | 2490 | 6995450 | | mr | 2002 | 2274197 | 57 | 75906 | 2059 | 2350103 | | mt | 1886 | 3761332 | 14 | 5443 | 1900 | 3766775 | | cy | 1524 | 3171667 | 25 | 11641 | 1549 | 3183308 | | bs | 1366 | 2031881 | 34 | 23298 | 1400 | 2055179 | | et | 1299 | 1694117 | 5 | 2005 | 1304 | 1696122 | | ms | 989 | 1927545 | 174 | 720492 | 1163 | 2648037 | | ca | 1068 | 1614073 | 62 | 34072 | 1130 | 1648145 | | lt | 1056 | 2272916 | 44 | 57169 | 1100 | 2330085 | | ne | 966 | 771410 | 29 | 28569 | 995 | 799979 | | hr | 796 | 1394174 | 15 | 10191 | 811 | 1404365 | | fy | 743 | 633705 | 24 | 5823 | 767 | 639528 | | lb | 703 | 1133527 | 18 | 3985 | 721 | 1137512 | | gl | 628 | 1159618 | 34 | 9049 | 662 | 1168667 | | mn | 644 | 1174921 | 11 | 3592 | 655 | 1178513 | | la | 635 | 363380 | 13 | 2009 | 648 | 365389 | | af | 577 | 444351 | 38 | 14403 | 615 | 458754 | | sl | 451 | 1708497 | 50 | 50361 | 501 | 1758858 | | ht | 455 | 223768 | 13 | 4406 | 468 | 228174 | | lv | 317 | 1017694 | 32 | 31983 | 349 | 1049677 | | gd | 273 | 295170 | 52 | 20374 | 325 | 315544 | | sr | 287 | 367782 | 23 | 5177 | 310 | 372959 | | co | 288 | 284629 | 12 | 3530 | 300 | 288159 | | az | 268 | 273548 | 9 | 13011 | 277 | 286559 | | fil | 210 | 165520 | 63 | 77100 | 273 | 242620 | | jv | 244 | 153411 | 14 | 75932 | 258 | 229343 | | sn | 239 | 175459 | 10 | 8890 | 249 | 184349 | | bn | 190 | 301199 | 42 | 23451 | 232 | 324650 | | ga | 198 | 263174 | 30 | 12905 | 228 
| 276079 | | mg | 201 | 53082 | 18 | 6141 | 219 | 59223 | | hi-Latn | 194 | 250495 | 4 | 33091 | 198 | 283586 | | hmn | 173 | 793850 | 16 | 5902 | 189 | 799752 | | ka | 162 | 262305 | 8 | 3427 | 170 | 265732 | | ig | 136 | 129243 | 10 | 2941 | 146 | 132184 | | is | 139 | 236415 | 4 | 1277 | 143 | 237692 | | ta | 129 | 155042 | 12 | 4079 | 141 | 159121 | | kk | 102 | 152629 | 28 | 11885 | 130 | 164514 | | eu | 118 | 130847 | 10 | 3522 | 128 | 134369 | | eo | 121 | 69071 | 6 | 1885 | 127 | 70956 | | ur | 93 | 259680 | 33 | 20499 | 126 | 280179 | | so | 112 | 203877 | 6 | 2151 | 118 | 206028 | | tg | 99 | 73437 | 16 | 5539 | 115 | 78976 | | mk | 29 | 62730 | 84 | 391780 | 113 | 454510 | | be | 100 | 88386 | 8 | 2193 | 108 | 90579 | | sm | 100 | 1309239 | 8 | 2778 | 108 | 1312017 | | uz | 93 | 116820 | 7 | 2987 | 100 | 119807 | | zu | 84 | 136023 | 9 | 2744 | 93 | 138767 | | haw | 81 | 59685 | 6 | 822 | 87 | 60507 | | sq | 74 | 120593 | 12 | 6205 | 86 | 126798 | | ny | 78 | 19403 | 6 | 2046 | 84 | 21449 | | hy | 66 | 81675 | 10 | 3613 | 76 | 85288 | | ha | 44 | 84457 | 19 | 68032 | 63 | 152489 | | ru-Latn | 60 | 40266 | 1 | 61 | 61 | 40327 | | el-Latn | 57 | 55657 | 4 | 342 | 61 | 55999 | | zh-Latn | 58 | 27522 | 1 | 66 | 59 | 27588 | | sd | 52 | 51341 | 7 | 2044 | 59 | 53385 | | su | 50 | 17291 | 7 | 2358 | 57 | 19649 | | ku | 47 | 23147 | 6 | 1998 | 53 | 25145 | | bg-Latn | 48 | 15419 | 1 | 414 | 49 | 15833 | | st | 25 | 65162 | 19 | 6346 | 44 | 71508 | | yo | 37 | 103685 | 6 | 1790 | 43 | 105475 | | ceb | 41 | 72950 | 1 | 107 | 42 | 73057 | | ky | 30 | 23062 | 10 | 3679 | 40 | 26741 | | te | 32 | 42803 | 7 | 2558 | 39 | 45361 | | yi | 32 | 227267 | 7 | 2443 | 39 | 229710 | | mi | 26 | 10132 | 11 | 2915 | 37 | 13047 | | gu | 25 | 37857 | 10 | 4608 | 35 | 42465 | | ja-Latn | 33 | 17560 | 2 | 88 | 35 | 17648 | | sw | 26 | 17579 | 8 | 2726 | 34 | 20305 | | xh | 28 | 46466 | 4 | 1409 | 32 | 47875 | | ml | 16 | 33198 | 6 | 2721 | 22 | 35919 | | ps | 10 | 7671 | 12 | 2642 | 22 | 10313 | | am | 6 | 8017 | 8 | 1987 | 14 | 10004 | | kn | 5 | 22197 | 9 | 3523 | 14 | 25720 | | km | 7 | 8936 | 6 | 1879 | 13 | 10815 | | pa | 10 | 26617 | 3 | 1100 | 13 | 27717 | | si | 5 | 24000 | 5 | 1722 | 10 | 25722 | | lo | 1 | 6204 | 7 | 2115 | 8 | 8319 | | my | 3 | 14663 | 3 | 1179 | 6 | 15842 | ## Recreating the results 1. Clone the repo without the LFS files. 2. Install requirements from `requirements.txt`. 3. Install `pv` and `parallel`. 4. Run `bin/get_seed_urls.sh` to filter URLs of interest out of 1.6TB of compressed data. Don't worry about disk space. Worry about the traffic. That will take around 5h on a decent connection. 5. Run the scrapy spider like this: `scrapy crawl webdatacommons_org -s WEB_DATA_COMMONS=web_data_commons_urls_sample.txt -L INFO -o webdatacommons.jsonlines` with `WEB_DATA_COMMONS` pointing to the list of seed URLs from step 4. That might take up to a few weeks. 6. Run `python bin/extract_relevant_structured_data.py --num-threads 12 webdatacommons.jsonlines relevant.jsonlines.bz2`. That's fast, probably around 30 minutes. 7. Run `python bin/export_structured_data.py relevant.jsonlines.bz2 extruct_out.jsonlines.bz2` to obtain the final version of the dataset. 8. Optionally, you can calculate the resulting stats like this: `python bin/get_stats.py extruct_out.jsonlines.bz2 every_prompt_stats.csv` ## Advice If you want to recreate the results: * Get yourself a server or VPS with enough space (80GB should be enough). * Look at the code. You'd probably want to make changes here and there.
* All the python scripts have extra parameters to control the number of threads and the chunk size. Both accept compressed input and output files with the help of smart_open lib. ## License **Code** of the project has an MIT license. Copyright: [Dmytro Chaplynskyi](https://twitter.com/dchaplinsky), [lang-uk project](https://lang.org.ua), 2023
9,650
[ [ -0.06085205078125, -0.03509521484375, -0.00029659271240234375, 0.01210784912109375, 0.0004837512969970703, -0.00588226318359375, 0.010284423828125, 0.00872802734375, 0.05413818359375, 0.033660888671875, -0.04840087890625, -0.024658203125, -0.03765869140625, ...
alvations/esci-data-task2
2023-04-22T02:40:09.000Z
[ "region:us" ]
alvations
null
null
0
3
2023-04-22T01:31:08
--- dataset_info: features: - name: example_id dtype: int64 - name: query dtype: string - name: query_id dtype: int64 - name: product_id dtype: string - name: product_locale dtype: string - name: esci_label dtype: string - name: small_version dtype: int64 - name: large_version dtype: int64 - name: split dtype: string - name: product_title dtype: string - name: product_description dtype: string - name: product_bullet_point dtype: string - name: product_brand dtype: string - name: product_color dtype: string - name: gain dtype: float64 - name: __index_level_0__ dtype: int64 splits: - name: train num_bytes: 2603008323 num_examples: 1977767 - name: dev num_bytes: 7386427 num_examples: 5505 - name: test num_bytes: 843102586 num_examples: 638016 download_size: 2214316591 dataset_size: 3453497336 --- # Dataset Card for "esci-data-task2" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
1,108
[ [ -0.02447509765625, -0.017913818359375, 0.023529052734375, 0.0149383544921875, -0.00801849365234375, -0.005687713623046875, 0.01690673828125, -0.0201416015625, 0.052459716796875, 0.0304107666015625, -0.0701904296875, -0.046051025390625, -0.0523681640625, -0.0...
khondoker/EmoNoBa
2023-04-24T01:06:31.000Z
[ "task_categories:text-classification", "task_ids:multi-class-classification", "task_ids:multi-label-classification", "multilinguality:monolingual", "language:bn", "license:other", "emotion", "region:us" ]
khondoker
null
null
0
3
2023-04-23T09:05:28
--- license: other task_categories: - text-classification multilinguality: - monolingual language: - bn pretty_name: EmoNoBa task_ids: - multi-class-classification - multi-label-classification tags: - emotion paperswithcode_id: emonoba --- # Dataset Card for "EmoNoBa" ### Dataset Summary Multi-label emotion detection for 6 emotion categories, namely Love, Joy, Surprise, Anger, Sadness, and Fear. ### Citation Information ``` @inproceedings{islam2022emonoba, title={EmoNoBa: A Dataset for Analyzing Fine-Grained Emotions on Noisy Bangla Texts}, author={Islam, Khondoker Ittehadul and Yuvraz, Tanvir and Islam, Md Saiful and Hassan, Enamul}, booktitle={Proceedings of the 2nd Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 12th International Joint Conference on Natural Language Processing}, pages={128--134}, year={2022} } ```
891
[ [ -0.045013427734375, -0.06732177734375, -0.00440216064453125, 0.0300445556640625, -0.03558349609375, -0.01275634765625, -0.02490234375, -0.031707763671875, 0.0283660888671875, 0.009124755859375, -0.0457763671875, -0.06597900390625, -0.048309326171875, 0.03814...
roszcz/maestro-v1
2023-04-23T12:18:27.000Z
[ "region:us" ]
roszcz
null
null
0
3
2023-04-23T11:09:27
--- dataset_info: features: - name: notes struct: - name: duration sequence: float64 - name: end sequence: float64 - name: pitch sequence: int64 - name: start sequence: float64 - name: velocity sequence: int64 - name: control_changes struct: - name: number sequence: int64 - name: time sequence: float64 - name: value sequence: int64 - name: composer dtype: string - name: title dtype: string - name: year dtype: int64 - name: midi_filename dtype: string splits: - name: validation num_bytes: 59070511.71238244 num_examples: 137 - name: test num_bytes: 76317376.44592476 num_examples: 177 - name: train num_bytes: 414787096.8416928 num_examples: 962 download_size: 155533838 dataset_size: 550174985.0 --- # Dataset Card for "maestro-v1" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
1,023
[ [ -0.055816650390625, -0.021697998046875, 0.00374603271484375, 0.0159149169921875, -0.01445770263671875, -0.0073089599609375, 0.0304107666015625, -0.0014963150024414062, 0.066162109375, 0.041412353515625, -0.076171875, -0.050628662109375, -0.042388916015625, -...
asandovala/socialmedia-abuse
2023-04-24T16:03:53.000Z
[ "region:us" ]
asandovala
null
null
0
3
2023-04-24T15:55:11
--- dataset_info: features: - name: text dtype: string - name: label dtype: int64 splits: - name: train num_bytes: 948 num_examples: 22 - name: validation num_bytes: 129.27272727272728 num_examples: 3 download_size: 0 dataset_size: 1077.2727272727273 --- # Dataset Card for "socialmedia-abuse" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
466
[ [ -0.0209808349609375, -0.03643798828125, 0.00933837890625, 0.04217529296875, -0.00765228271484375, 0.01480865478515625, 0.023284912109375, -0.0238189697265625, 0.055938720703125, 0.038848876953125, -0.050384521484375, -0.047607421875, -0.06658935546875, -0.02...
amitness/wikipedia_mt
2023-08-14T09:44:46.000Z
[ "language:mt", "region:us" ]
amitness
null
null
0
3
2023-04-25T06:53:59
--- language: mt dataset_info: features: - name: id dtype: string - name: url dtype: string - name: title dtype: string - name: text dtype: string splits: - name: train num_bytes: 26154083 num_examples: 5326 download_size: 15314612 dataset_size: 26154083 --- # Dataset Card for "wikipedia_mt" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
467
[ [ -0.056365966796875, -0.0343017578125, 0.0183258056640625, 0.0030975341796875, -0.0237579345703125, -0.01324462890625, 0.00757598876953125, -0.00446319580078125, 0.06304931640625, 0.0289154052734375, -0.061492919921875, -0.054412841796875, -0.045196533203125, ...
sustcsenlp/bn_emotion_noisy_dataset
2023-04-25T16:25:59.000Z
[ "task_categories:text-classification", "task_ids:multi-class-classification", "task_ids:multi-label-classification", "multilinguality:monolingual", "language:bn", "license:other", "emotion", "region:us" ]
sustcsenlp
null
null
0
3
2023-04-25T16:03:34
--- license: other task_categories: - text-classification multilinguality: - monolingual language: - bn pretty_name: EmoNoBa task_ids: - multi-class-classification - multi-label-classification tags: - emotion paperswithcode_id: emonoba --- # Dataset Card for "EmoNoBa" ### Dataset Summary Detecting multi-labeled emotions across 6 emotion categories, namely Love, Joy, Surprise, Anger, Sadness, and Fear. ### Citation Information ``` @inproceedings{islam2022emonoba, title={EmoNoBa: A Dataset for Analyzing Fine-Grained Emotions on Noisy Bangla Texts}, author={Islam, Khondoker Ittehadul and Yuvraz, Tanvir and Islam, Md Saiful and Hassan, Enamul}, booktitle={Proceedings of the 2nd Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 12th International Joint Conference on Natural Language Processing}, pages={128--134}, year={2022} } ```
891
[ [ -0.045013427734375, -0.06732177734375, -0.00440216064453125, 0.0300445556640625, -0.03558349609375, -0.01275634765625, -0.02490234375, -0.031707763671875, 0.0283660888671875, 0.009124755859375, -0.0457763671875, -0.06597900390625, -0.048309326171875, 0.03814...
saldigioia/Car0GPT
2023-04-26T10:48:09.000Z
[ "task_categories:text-classification", "language:en", "chat", "persona", "doi:10.57967/hf/0576", "region:us" ]
saldigioia
null
null
0
3
2023-04-26T10:28:45
--- language: - en task_categories: - text-classification tags: - chat - persona pretty_name: Persona based on Caroline Filips --- # AutoTrain Dataset for project: car0fil-001 ## Dataset Description This dataset has been automatically processed by AutoTrain for project car0fil-001. ### Languages The BCP-47 code for the dataset's language is en. ## Dataset Structure ### Data Instances A sample from this dataset looks as follows: ```json [ { "target": 0, "text": "And I remember", "feat_DATE": "2022-09-12T12:29:04", "feat_PLATFORM": null, "feat_Unnamed: 4": null, "feat_Unnamed: 3": null, "feat_Unnamed: 5": null }, { "target": 1, "text": "Throw a lil \u201cKurt filips is my dad\u201d", "feat_DATE": "2023-03-27T15:36:21", "feat_PLATFORM": null, "feat_Unnamed: 4": null, "feat_Unnamed: 3": null, "feat_Unnamed: 5": null } ] ``` ### Dataset Fields The dataset has the following fields (also called "features"): ```json { "target": "ClassLabel(names=['CAROLINE FILIPS', 'NOT CAROLINE'], id=None)", "text": "Value(dtype='string', id=None)", "feat_DATE": "Value(dtype='string', id=None)", "feat_PLATFORM": "Value(dtype='string', id=None)", "feat_Unnamed: 4": "Value(dtype='float64', id=None)", "feat_Unnamed: 3": "Value(dtype='float64', id=None)", "feat_Unnamed: 5": "Value(dtype='float64', id=None)" } ``` ### Dataset Splits This dataset is split into a train and validation split. The split sizes are as follow: | Split name | Num samples | | ------------ | ------------------- | | train | 689784 | | valid | 172447 |
1,635
[ [ -0.03668212890625, 0.0090484619140625, 0.014190673828125, 0.0181732177734375, -0.01328277587890625, 0.0146942138671875, -0.005748748779296875, -0.02178955078125, 0.0084228515625, 0.01474761962890625, -0.060546875, -0.048126220703125, -0.03936767578125, -0.00...
PanoEvJ/job_postings_GPT
2023-05-06T13:17:13.000Z
[ "region:us" ]
PanoEvJ
null
null
3
3
2023-04-26T19:06:22
--- dataset_info: features: - name: job_postings dtype: string - name: cover_letters dtype: string splits: - name: train num_bytes: 1242482 num_examples: 297 download_size: 517424 dataset_size: 1242482 --- # Dataset Card for "job_postings_GPT" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
406
[ [ -0.022705078125, -0.0171356201171875, 0.029815673828125, 0.01160430908203125, -0.024322509765625, -0.01447296142578125, 0.027313232421875, -0.007648468017578125, 0.058258056640625, 0.035614013671875, -0.05438232421875, -0.05694580078125, -0.0577392578125, -0...
joey234/mmlu-human_sexuality-verbal-neg-prepend
2023-04-27T03:20:32.000Z
[ "region:us" ]
joey234
null
null
0
3
2023-04-27T02:02:38
--- dataset_info: features: - name: question dtype: string - name: choices sequence: string - name: answer dtype: class_label: names: '0': A '1': B '2': C '3': D - name: neg_prompt dtype: string splits: - name: test num_bytes: 49813 num_examples: 131 download_size: 34784 dataset_size: 49813 --- # Dataset Card for "mmlu-human_sexuality-verbal-neg-prepend" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
585
[ [ -0.041473388671875, -0.032562255859375, -0.0017290115356445312, 0.013824462890625, -0.0091094970703125, -0.0112152099609375, 0.009490966796875, -0.0007376670837402344, 0.0496826171875, 0.017791748046875, -0.0799560546875, -0.053192138671875, -0.0361328125, 0...
cluneau/github-issues
2023-04-29T15:36:11.000Z
[ "task_categories:text-classification", "task_ids:multi-label-classification", "multilinguality:monolingual", "size_categories:1K<n<10K", "language:en", "region:us" ]
cluneau
null
null
0
3
2023-04-29T15:08:18
--- annotations_creators: [] language: - en language_creators: [] license: [] multilinguality: - monolingual pretty_name: HF Datasets GitHub Issues size_categories: - 1K<n<10K source_datasets: [] tags: [] task_categories: - text-classification task_ids: - multi-label-classification dataset_info: features: - name: url dtype: string - name: repository_url dtype: string - name: labels_url dtype: string - name: comments_url dtype: string - name: events_url dtype: string - name: html_url dtype: string - name: id dtype: int64 - name: node_id dtype: string - name: number dtype: int64 - name: title dtype: string - name: user struct: - name: login dtype: string - name: id dtype: int64 - name: node_id dtype: string - name: avatar_url dtype: string - name: gravatar_id dtype: string - name: url dtype: string - name: html_url dtype: string - name: followers_url dtype: string - name: following_url dtype: string - name: gists_url dtype: string - name: starred_url dtype: string - name: subscriptions_url dtype: string - name: organizations_url dtype: string - name: repos_url dtype: string - name: events_url dtype: string - name: received_events_url dtype: string - name: type dtype: string - name: site_admin dtype: bool - name: labels list: - name: id dtype: int64 - name: node_id dtype: string - name: url dtype: string - name: name dtype: string - name: color dtype: string - name: default dtype: bool - name: description dtype: string - name: state dtype: string - name: locked dtype: bool - name: assignee struct: - name: login dtype: string - name: id dtype: int64 - name: node_id dtype: string - name: avatar_url dtype: string - name: gravatar_id dtype: string - name: url dtype: string - name: html_url dtype: string - name: followers_url dtype: string - name: following_url dtype: string - name: gists_url dtype: string - name: starred_url dtype: string - name: subscriptions_url dtype: string - name: organizations_url dtype: string - name: repos_url dtype: string - name: events_url dtype: string - name: received_events_url dtype: string - name: type dtype: string - name: site_admin dtype: bool - name: assignees list: - name: login dtype: string - name: id dtype: int64 - name: node_id dtype: string - name: avatar_url dtype: string - name: gravatar_id dtype: string - name: url dtype: string - name: html_url dtype: string - name: followers_url dtype: string - name: following_url dtype: string - name: gists_url dtype: string - name: starred_url dtype: string - name: subscriptions_url dtype: string - name: organizations_url dtype: string - name: repos_url dtype: string - name: events_url dtype: string - name: received_events_url dtype: string - name: type dtype: string - name: site_admin dtype: bool - name: comments sequence: string - name: created_at dtype: int64 - name: updated_at dtype: int64 - name: closed_at dtype: int64 - name: author_association dtype: string - name: draft dtype: float64 - name: pull_request struct: - name: url dtype: string - name: html_url dtype: string - name: diff_url dtype: string - name: patch_url dtype: string - name: merged_at dtype: timestamp[s] - name: body dtype: string - name: reactions struct: - name: url dtype: string - name: total_count dtype: int64 - name: '+1' dtype: int64 - name: '-1' dtype: int64 - name: laugh dtype: int64 - name: hooray dtype: int64 - name: confused dtype: int64 - name: heart dtype: int64 - name: rocket dtype: int64 - name: eyes dtype: int64 - name: timeline_url dtype: string - name: state_reason dtype: string - name: is_pull_request dtype: bool splits: - 
name: train num_bytes: 12013382 num_examples: 2242 download_size: 3940692 dataset_size: 12013382 --- # Dataset Card for "github-issues" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
4,768
[ [ -0.03204345703125, -0.0209503173828125, 0.01276397705078125, 0.0157928466796875, -0.007198333740234375, 0.016143798828125, 0.0095367431640625, -0.008697509765625, 0.0706787109375, 0.0272064208984375, -0.05743408203125, -0.046966552734375, -0.035736083984375, ...
ghoskno/landmark-en-hed
2023-04-30T07:39:54.000Z
[ "region:us" ]
ghoskno
null
null
0
3
2023-04-30T07:08:19
--- dataset_info: features: - name: image dtype: image - name: conditioning_image dtype: image - name: text dtype: string splits: - name: train num_bytes: 11259483268.91 num_examples: 33045 download_size: 0 dataset_size: 11259483268.91 --- # Dataset Card for "landmark-en-hed" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
445
[ [ -0.044219970703125, -0.0171356201171875, 0.01399993896484375, 0.0083770751953125, -0.0203094482421875, -0.014373779296875, 0.0111083984375, -0.0192718505859375, 0.06005859375, 0.03900146484375, -0.0445556640625, -0.07708740234375, -0.059539794921875, -0.0147...
BrozJoko/cagliostro-colab-ui
2023-04-30T10:09:24.000Z
[ "region:us" ]
BrozJoko
null
null
0
3
2023-04-30T09:24:21
Entry not found
15
[ [ -0.02142333984375, -0.01495361328125, 0.05718994140625, 0.0288238525390625, -0.035064697265625, 0.046539306640625, 0.052520751953125, 0.005062103271484375, 0.0513916015625, 0.016998291015625, -0.052093505859375, -0.014984130859375, -0.060394287109375, 0.0379...
reaganjlee/truthful_qa_mc_ar
2023-05-04T17:20:18.000Z
[ "region:us" ]
reaganjlee
null
null
0
3
2023-05-02T08:09:53
--- dataset_info: features: - name: question dtype: string - name: choices sequence: string - name: label dtype: class_label: names: '0': A '1': B '2': C '3': D splits: - name: train num_bytes: 149793.5 num_examples: 342 - name: validation num_bytes: 149793.5 num_examples: 342 download_size: 135659 dataset_size: 299587.0 --- # Dataset Card for "truthful_qa_mc_ar" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
598
[ [ -0.035125732421875, -0.0178375244140625, 0.0290679931640625, 0.0036144256591796875, -0.01080322265625, 0.0101318359375, 0.0404052734375, -0.0027446746826171875, 0.050537109375, 0.034759521484375, -0.0430908203125, -0.057891845703125, -0.0301055908203125, -0....
christykoh/imdb_fr
2023-05-02T16:46:08.000Z
[ "region:us" ]
christykoh
null
null
0
3
2023-05-02T13:01:14
--- dataset_info: features: - name: text dtype: string - name: label dtype: class_label: names: '0': neg '1': pos splits: - name: train num_bytes: 9570725 num_examples: 25000 - name: test num_bytes: 9575451 num_examples: 25000 download_size: 11576687 dataset_size: 19146176 --- # Dataset Card for "imdb_fr" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
513
[ [ -0.058258056640625, -0.006519317626953125, 0.0034008026123046875, 0.01470947265625, -0.0258636474609375, 0.00870513916015625, 0.0203704833984375, -0.01328277587890625, 0.06292724609375, 0.037139892578125, -0.07012939453125, -0.044097900390625, -0.053741455078125...
sinword/autotrain-data-face_de-identification
2023-05-02T13:26:46.000Z
[ "task_categories:image-classification", "region:us" ]
sinword
null
null
0
3
2023-05-02T13:06:05
--- task_categories: - image-classification --- # AutoTrain Dataset for project: face_de-identification ## Dataset Description This dataset has been automatically processed by AutoTrain for project face_de-identification. ### Languages The BCP-47 code for the dataset's language is unk. ## Dataset Structure ### Data Instances A sample from this dataset looks as follows: ```json [ { "image": "<250x250 RGB PIL image>", "target": 6 }, { "image": "<256x256 RGB PIL image>", "target": 3 } ] ``` ### Dataset Fields The dataset has the following fields (also called "features"): ```json { "image": "Image(decode=True, id=None)", "target": "ClassLabel(names=['Abdullah_Gul', 'Alejandro_Toledo', 'Alvaro_Uribe', 'Amelie_Mauresmo', 'Andre_Agassi', 'Angelina_Jolie', 'Ariel_Sharon', 'Arnold_Schwarzenegger', 'Atal_Bihari_Vajpayee'], id=None)" } ``` ### Dataset Splits This dataset is split into a train and validation split. The split sizes are as follow: | Split name | Num samples | | ------------ | ------------------- | | train | 2250 | | valid | 567 |
1,116
[ [ -0.036865234375, 0.002529144287109375, 0.0019969940185546875, 0.0185394287109375, -0.022125244140625, 0.0233001708984375, 0.001964569091796875, -0.0295867919921875, -0.00540924072265625, 0.03363037109375, -0.051177978515625, -0.04803466796875, -0.03448486328125,...
akumoth/peewee-issues
2023-05-03T15:53:06.000Z
[ "task_categories:text-classification", "task_categories:feature-extraction", "task_ids:topic-classification", "task_ids:multi-label-classification", "annotations_creators:found", "language_creators:found", "multilinguality:monolingual", "size_categories:n<1K", "source_datasets:original", "language...
akumoth
null
null
0
3
2023-05-02T21:35:17
--- dataset_info: features: - name: url dtype: string - name: repository_url dtype: string - name: labels_url dtype: string - name: comments_url dtype: string - name: events_url dtype: string - name: html_url dtype: string - name: id dtype: int64 - name: node_id dtype: string - name: number dtype: int64 - name: title dtype: string - name: user struct: - name: login dtype: string - name: id dtype: int64 - name: node_id dtype: string - name: avatar_url dtype: string - name: gravatar_id dtype: string - name: url dtype: string - name: html_url dtype: string - name: followers_url dtype: string - name: following_url dtype: string - name: gists_url dtype: string - name: starred_url dtype: string - name: subscriptions_url dtype: string - name: organizations_url dtype: string - name: repos_url dtype: string - name: events_url dtype: string - name: received_events_url dtype: string - name: type dtype: string - name: site_admin dtype: bool - name: labels list: - name: id dtype: int64 - name: node_id dtype: string - name: url dtype: string - name: name dtype: string - name: color dtype: string - name: default dtype: bool - name: description dtype: 'null' - name: state dtype: string - name: locked dtype: bool - name: assignee dtype: 'null' - name: assignees sequence: 'null' - name: milestone dtype: 'null' - name: comments sequence: string - name: created_at dtype: timestamp[s] - name: updated_at dtype: timestamp[s] - name: closed_at dtype: timestamp[s] - name: author_association dtype: string - name: active_lock_reason dtype: string - name: body dtype: string - name: reactions struct: - name: url dtype: string - name: total_count dtype: int64 - name: '+1' dtype: int64 - name: '-1' dtype: int64 - name: laugh dtype: int64 - name: hooray dtype: int64 - name: confused dtype: int64 - name: heart dtype: int64 - name: rocket dtype: int64 - name: eyes dtype: int64 - name: timeline_url dtype: string - name: performed_via_github_app dtype: 'null' - name: state_reason dtype: string - name: draft dtype: bool - name: pull_request struct: - name: url dtype: string - name: html_url dtype: string - name: diff_url dtype: string - name: patch_url dtype: string - name: merged_at dtype: timestamp[s] splits: - name: train num_bytes: 9990717 num_examples: 2814 download_size: 3607838 dataset_size: 9990717 annotations_creators: - found language: - en language_creators: - found license: - mit multilinguality: - monolingual pretty_name: Peewee Github Issues size_categories: - n<1K source_datasets: - original tags: - peewee - python - github - issues task_categories: - text-classification - feature-extraction task_ids: - topic-classification - multi-label-classification --- # Dataset Card for Peewee Issues ## Dataset Summary Peewee Issues is a dataset containing all the issues in the [Peewee github repository](https://github.com/coleifer/peewee) up to the last date of extraction (5/3/2023). It has been made with educational purposes in mind (specifically, to get me used to using Hugging Face's datasets), but it can be used for multi-label classification or semantic search. The contents are all in English and concern SQL databases and ORM libraries.
3,763
[ [ -0.025360107421875, -0.03662109375, 0.00998687744140625, 0.0457763671875, -0.00257110595703125, 0.0038509368896484375, -0.00021922588348388672, -0.025665283203125, 0.0305328369140625, 0.031524658203125, -0.05816650390625, -0.050384521484375, -0.043121337890625, ...
feradauto/NLP4SGPapers
2023-05-03T17:37:12.000Z
[ "task_categories:text-classification", "license:cc-by-nc-sa-4.0", "region:us" ]
feradauto
NLP4SGPAPERS dataset: a scientific dataset with three associated tasks that can help identify NLP4SG papers
2
3
2023-05-03T07:32:10
--- license: cc-by-nc-sa-4.0 pretty_name: NLP4SGPapers task_categories: - text-classification --- # Dataset Card for NLP4SGPapers ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Repository:** [NLP4SG](https://github.com/feradauto/nlp4sg) - **Paper:** - **Point of Contact:** [Zhijing Jin](mailto:zjin@tue.mpg.de), [Fernando Gonzalez](mailto:fgonzalez@ethz.ch) ### Dataset Summary Scientific dataset with three associated tasks that can help identify NLP4SG papers. ### Languages The language in the dataset is English. ## Dataset Structure ### Data Instances Each instance is an annotated paper with title, abstract, and year. ### Data Fields - `ID`: Paper ID in ACL Anthology - `url`: URL where the paper is available - `title`: Title of the paper - `abstract`: Abstract - `label_nlp4sg`: Whether it is an NLP4SG paper or not. For more info on the criteria, check our paper - `task`: List of tasks (Only available for the test set and for SG papers) - `method`: List of methods (Only available for the test set and for SG papers) - `goal1`: goal in string format - `goal2`: goal in string format - `goal3`: goal in string format - `acknowledgments`: acknowledgments - `year`: Year of publication - `sdg1` to `sdg17`: Boolean value that indicates whether the paper addresses the corresponding United Nations Sustainable Development Goal. ### Data Splits NLP4SGPapers contains train, test and validation splits (a minimal loading sketch is shown below). ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data Information about the data collection can be found in the appendix of [our paper]. ### Personal and Sensitive Information The NLP4SGPapers dataset does not have privacy concerns. ## Considerations for Using the Data ### Social Impact of Dataset The intended use of this work is to help the creation of an overview of the NLP4SG research landscape. ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information The NLP4SGPapers dataset is licensed under a [Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License](https://creativecommons.org/licenses/by-nc-sa/4.0/). ### Citation Information ``` ```
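A minimal loading sketch, assuming the dataset id mirrors this repository and the split/field names listed in the card:

```python
# Minimal sketch: load the papers and keep only those labelled as NLP4SG.
# Assumptions: hub id "feradauto/NLP4SGPapers"; a "train" split exists;
# `label_nlp4sg` is truthy for NLP4SG papers.
from datasets import load_dataset

ds = load_dataset("feradauto/NLP4SGPapers", split="train")
nlp4sg = ds.filter(lambda row: row["label_nlp4sg"])
print(len(nlp4sg), "NLP4SG papers")
print(nlp4sg[0]["title"])
```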
3,067
[ [ -0.02362060546875, -0.0122833251953125, 0.02142333984375, 0.0240020751953125, -0.0150604248046875, 0.006687164306640625, -0.018310546875, -0.0306243896484375, 0.031463623046875, 0.0309600830078125, -0.040252685546875, -0.061981201171875, -0.046417236328125, ...
glombardo/misogynistic-statements-and-their-potential-restructuring
2023-05-28T17:56:43.000Z
[ "task_categories:text2text-generation", "task_categories:text-classification", "size_categories:n<1K", "language:es", "license:cc-by-nc-4.0", "region:us" ]
glombardo
null
null
0
3
2023-05-03T09:00:22
--- license: cc-by-nc-4.0 task_categories: - text2text-generation - text-classification language: - es pretty_name: Misogynistic statements and their potential restructuring size_categories: - n<1K dataset_info: features: - name: misogynistic dtype: string - name: reformulation dtype: string splits: - name: train num_bytes: 24000 num_examples: 121 - name: validation num_bytes: 8253 num_examples: 41 - name: test num_bytes: 8346 num_examples: 41 download_size: 28877 dataset_size: 40599 --- ## Misogynistic statements and their potential restructuring Beta dataset Generated by GPT3.5 Language: Spanish
658
[ [ -0.00885009765625, -0.031890869140625, 0.0006823539733886719, 0.0338134765625, -0.01351165771484375, -0.0009927749633789062, 0.01271820068359375, -0.017669677734375, -0.01345062255859375, 0.03497314453125, -0.05474853515625, -0.0318603515625, -0.04412841796875, ...
NicholasSynovic/Modified-VEAA
2023-05-03T18:04:48.000Z
[ "task_categories:text-classification", "size_categories:10K<n<100K", "language:en", "license:agpl-3.0", "region:us" ]
NicholasSynovic
null
null
0
3
2023-05-03T17:47:32
--- license: agpl-3.0 task_categories: - text-classification language: - en size_categories: - 10K<n<100K --- # Modified Victorian Era Authorship Attribution Dataset ## About This data set is a modified version of the one that can be found [here](https://archive.ics.uci.edu/ml/datasets/Victorian+Era+Authorship+Attribution). The difference is that the training dataset was split into two parts: 80% training and 20% testing with labels. Splitting was done with a random stratified sample approach (see the sketch below). This is different from the source dataset, which did not have any labels for the testing data. Additionally, all text has been converted to UTF-8 format and any errors were ignored. The original testing data is not included with this release. ## Citation > GUNGOR, ABDULMECIT, Benchmarking Authorship Attribution Techniques Using Over A Thousand Books by Fifty Victorian Era Novelists, Purdue Master of Thesis, 2018-04
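A minimal sketch of the 80/20 random stratified split described above, using scikit-learn. The file name, column names, and random seed are assumptions, not taken from the original release.

```python
# Minimal sketch: 80/20 stratified split of an authorship-attribution table.
# Assumptions (hypothetical): "train.csv" with "text" and "author" columns;
# the seed used in the actual release is unknown.
import pandas as pd
from sklearn.model_selection import train_test_split

df = pd.read_csv("train.csv")
train_df, test_df = train_test_split(
    df,
    test_size=0.20,          # 20% held out as labelled test data
    stratify=df["author"],   # preserve per-author class proportions
    random_state=0,          # seed is an assumption
)
```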
925
[ [ 0.0037517547607421875, -0.0272064208984375, 0.006786346435546875, -0.002498626708984375, -0.013458251953125, -0.017181396484375, -0.0036792755126953125, -0.018280029296875, 0.025238037109375, 0.06854248046875, -0.043975830078125, -0.035675048828125, -0.037109375...
genta-tech/snli_indo
2023-05-04T19:46:23.000Z
[ "task_categories:text-classification", "size_categories:100K<n<1M", "language:id", "license:cc-by-4.0", "region:us" ]
genta-tech
null
null
0
3
2023-05-04T19:45:09
--- license: cc-by-4.0 task_categories: - text-classification language: - id size_categories: - 100K<n<1M dataset_info: features: - name: premise dtype: string - name: hyphothesis dtype: string - name: label dtype: int64 splits: - name: test num_bytes: 1373665 num_examples: 10000 - name: train num_bytes: 71884965 num_examples: 550152 - name: validation num_bytes: 1378057 num_examples: 10000 download_size: 20413774 dataset_size: 74636687 --- This is an Indonesian-translated version of the [snli](https://huggingface.co/datasets/snli) dataset, translated using [Helsinki-NLP/EN-ID](https://huggingface.co/Helsinki-NLP/opus-mt-en-id).
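For illustration, a minimal sketch of translating a single English sentence with the Helsinki-NLP model linked above; the model id comes from the card, while the example sentence is purely illustrative and may not match how the authors batched their translation.

```python
# Minimal sketch: English -> Indonesian translation with the Marian model
# named in the card. The input sentence is an illustrative example only.
from transformers import pipeline

translator = pipeline("translation", model="Helsinki-NLP/opus-mt-en-id")
result = translator("A soccer game with multiple males playing.")
print(result[0]["translation_text"])
```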
687
[ [ 0.0029773712158203125, -0.041168212890625, 0.00797271728515625, 0.041778564453125, -0.03802490234375, -0.0280303955078125, -0.007526397705078125, -0.043487548828125, 0.07745361328125, 0.055633544921875, -0.06427001953125, -0.0316162109375, -0.032623291015625, ...
OdiaGenAI/dolly-odia-15k
2023-06-05T19:21:34.000Z
[ "task_categories:text-generation", "size_categories:10K<n<100K", "language:or", "license:cc-by-nc-sa-4.0", "region:us" ]
OdiaGenAI
null
null
0
3
2023-05-05T19:40:08
--- license: cc-by-nc-sa-4.0 task_categories: - text-generation language: - or pretty_name: Dolly-Odia-15K size_categories: - 10K<n<100K --- # Dataset Card for Dolly-Odia-15K ## Dataset Description - **Homepage: https://www.odiagenai.org/** - **Repository: https://github.com/shantipriyap/OdiaGenAI** - **Point of Contact: Shantipriya Parida and Sambit Sekhar** ### Dataset Summary This dataset is the Odia-translated version of the Dolly 15K instruction set. In this dataset, both English and Odia instruction, input, and output strings are available (see the loading sketch below). ### Supported Tasks and Leaderboards Large Language Model (LLM) ### Languages Odia ## Dataset Structure JSON ### Data Fields instruction (string) english_instruction (string) input (string) english_input (string) output (string) english_output (string) ### Licensing Information This work is licensed under a [Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License][cc-by-nc-sa]. [![CC BY-NC-SA 4.0][cc-by-nc-sa-image]][cc-by-nc-sa] [cc-by-nc-sa]: http://creativecommons.org/licenses/by-nc-sa/4.0/ [cc-by-nc-sa-image]: https://licensebuttons.net/l/by-nc-sa/4.0/88x31.png [cc-by-nc-sa-shield]: https://img.shields.io/badge/License-CC%20BY--NC--SA%204.0-lightgrey.svg ### Citation Information If you find this repository useful, please consider giving 👏 and citing: ``` @misc{OdiaGenAI, author = {Shantipriya Parida and Sambit Sekhar and Subhadarshi Panda and Soumendra Kumar Sahoo and Swateek Jena and Abhijeet Parida and Arghyadeep Sen and Satya Ranjan Dash and Deepak Kumar Pradhan}, title = {OdiaGenAI: Generative AI and LLM Initiative for the Odia Language}, year = {2023}, publisher = {Hugging Face}, journal = {Hugging Face repository}, howpublished = {\url{https://huggingface.co/OdiaGenAI}}, } ``` ### Contributions - Shantipriya Parida - Sambit Sekhar
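A minimal loading sketch showing the parallel English/Odia fields described above; the hub id is assumed to mirror this repository, and the split name is an assumption.

```python
# Minimal sketch: inspect the parallel English/Odia fields of one record.
# Assumptions: hub id "OdiaGenAI/dolly-odia-15k"; a single "train" split.
from datasets import load_dataset

ds = load_dataset("OdiaGenAI/dolly-odia-15k", split="train")
row = ds[0]
print(row["english_instruction"])  # English instruction
print(row["instruction"])          # Odia counterpart
print(row["english_output"])       # English output
print(row["output"])               # Odia counterpart
```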
1,883
[ [ -0.020721435546875, -0.07073974609375, 0.00408172607421875, 0.04302978515625, -0.0264434814453125, -0.0033931732177734375, -0.0052947998046875, -0.0245361328125, 0.0236663818359375, 0.050323486328125, -0.040863037109375, -0.057403564453125, -0.045257568359375, ...
turkish-nlp-suite/beyazperde-top-300-movie-reviews
2023-09-20T16:41:11.000Z
[ "task_categories:text-classification", "task_ids:sentiment-classification", "multilinguality:monolingual", "size_categories:10K<n<100K", "language:tr", "license:cc-by-sa-4.0", "region:us" ]
turkish-nlp-suite
Movie sentiment analysis dataset for Turkish. Includes reviews for the Top 300 movies of all time, crawled from the popular Turkish movie website Beyazperde.com. All reviews are in Turkish. [BeyazPerde Top 300 Movie Reviews Dataset](https://github.com/turkish-nlp-suite/BeyazPerde-Movie-Reviews/)
@inproceedings{altinok-2023-diverse, title = "A Diverse Set of Freely Available Linguistic Resources for {T}urkish", author = "Altinok, Duygu", booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.acl-long.768", pages = "13739--13750", abstract = "This study presents a diverse set of freely available linguistic resources for Turkish natural language processing, including corpora, pretrained models and education material. Although Turkish is spoken by a sizeable population of over 80 million people, Turkish linguistic resources for natural language processing remain scarce. In this study, we provide corpora to allow practitioners to build their own applications and pretrained models that would assist industry researchers in creating quick prototypes. The provided corpora include named entity recognition datasets of diverse genres, including Wikipedia articles and supplement products customer reviews. In addition, crawling e-commerce and movie reviews websites, we compiled several sentiment analysis datasets of different genres. Our linguistic resources for Turkish also include pretrained spaCy language models. To the best of our knowledge, our models are the first spaCy models trained for the Turkish language. Finally, we provide various types of education material, such as video tutorials and code examples, that can support the interested audience on practicing Turkish NLP. The advantages of our linguistic resources are three-fold: they are freely available, they are first of their kind, and they are easy to use in a broad range of implementations. Along with a thorough description of the resource creation process, we also explain the position of our resources in the Turkish NLP world.", }
0
3
2023-05-08T08:45:08
--- language: - tr license: - cc-by-sa-4.0 multilinguality: - monolingual size_categories: - 10K<n<100K task_categories: - text-classification task_ids: - sentiment-classification pretty_name: BeyazPerde Top 300 Movie Reviews --- # Dataset Card for turkish-nlp-suite/beyazperde-top-300-movie-reviews <img src="https://raw.githubusercontent.com/turkish-nlp-suite/.github/main/profile/beyazPerde.png" width="20%" height="20%"> ## Dataset Description - **Repository:** [BeyazPerde Top 300 Movie Reviews](https://github.com/turkish-nlp-suite/BeyazPerde-Movie-Reviews/) - **Paper:** [ACL link](https://aclanthology.org/2023.acl-long.768/) - **Dataset:** BeyazPerde Top 300 Movie Reviews - **Domain:** Social Media ### Dataset Summary Beyazperde Movie Reviews offers Turkish sentiment analysis datasets scraped from the popular movie review website Beyazperde.com. Top 300 Movies includes audience reviews of the best 300 movies of all time. Here's the star rating distribution: | star rating | count | |---|---| | 0.5 | 1.657 | | 1.0 | 535 | | 1.5 | 273 | | 2.0 | 608 | | 2.5 | 2.439 | | 3.0 | 2.277 | | 3.5 | 5.550 | | 4.0 | 13.248 | | 4.5 | 10.077 | | 5.0 | 17.351 | | total | 54.015 | As one can see, this dataset is highly unbalanced: the numbers of 4- and 5-star ratings are much higher than those of 0-, 1-, 2- and 3-star reviews. This dataset offers the challenge of understanding the sentiment in a refined way, dissecting the positive sentiment into "very positive" or "okayish positive". ### Dataset Instances An instance of this dataset looks as follows: ``` { "movie": "Bay Evet", "text": "Tam kıvamında çok keyifli bir film", "rating": 4 } ``` ### Data Split | name |train|validation|test| |---------|----:|---:|---:| |BeyazPerde Top 300 Movie Reviews|44015|5000|5000| ### Citation This work is supported by Google Developer Experts Program. Part of Duygu 2022 Fall-Winter collection, "Turkish NLP with Duygu"/ "Duygu'yla Türkçe NLP". All rights reserved. If you'd like to use this dataset in your own work, please kindly cite [A Diverse Set of Freely Available Linguistic Resources for Turkish](https://aclanthology.org/2023.acl-long.768/): ``` @inproceedings{altinok-2023-diverse, title = "A Diverse Set of Freely Available Linguistic Resources for {T}urkish", author = "Altinok, Duygu", booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.acl-long.768", pages = "13739--13750", abstract = "This study presents a diverse set of freely available linguistic resources for Turkish natural language processing, including corpora, pretrained models and education material. Although Turkish is spoken by a sizeable population of over 80 million people, Turkish linguistic resources for natural language processing remain scarce. In this study, we provide corpora to allow practitioners to build their own applications and pretrained models that would assist industry researchers in creating quick prototypes. The provided corpora include named entity recognition datasets of diverse genres, including Wikipedia articles and supplement products customer reviews. In addition, crawling e-commerce and movie reviews websites, we compiled several sentiment analysis datasets of different genres. Our linguistic resources for Turkish also include pretrained spaCy language models. 
To the best of our knowledge, our models are the first spaCy models trained for the Turkish language. Finally, we provide various types of education material, such as video tutorials and code examples, that can support the interested audience on practicing Turkish NLP. The advantages of our linguistic resources are three-fold: they are freely available, they are first of their kind, and they are easy to use in a broad range of implementations. Along with a thorough description of the resource creation process, we also explain the position of our resources in the Turkish NLP world.", } ```
4,163
[ [ -0.06292724609375, -0.037322998046875, -0.002918243408203125, 0.023651123046875, -0.04608154296875, -0.0101470947265625, -0.036163330078125, -0.0192718505859375, 0.020782470703125, 0.036956787109375, -0.047607421875, -0.0657958984375, -0.053375244140625, 0.0...
0x22almostEvil/tatoeba-mt-llama-only
2023-05-10T09:14:37.000Z
[ "task_categories:translation", "size_categories:1M<n<10M", "language:en", "language:ru", "language:de", "language:uk", "language:sv", "language:sr", "language:sl", "language:ro", "language:pt", "language:pl", "language:nl", "language:it", "language:hu", "language:hr", "language:fr", ...
0x22almostEvil
null
null
0
3
2023-05-08T15:42:22
--- license: cc-by-2.0 task_categories: - translation language: - en - ru - de - uk - sv - sr - sl - ro - pt - pl - nl - it - hu - hr - fr - es - da - cs - ca - bg tags: - tatoeba - Translation pretty_name: tatoeba-mt-llama-only size_categories: - 1M<n<10M --- # Dataset Card for multilingual tatoeba translations with ~3M entries (llama-supported languages only). ### Dataset Summary ~3M entries. Just a more user-friendly version that combines all of the entries of the original dataset in a single file (llama-supported languages only): https://huggingface.co/datasets/Helsinki-NLP/tatoeba_mt
593
[ [ -0.032806396484375, -0.0169219970703125, 0.0166778564453125, 0.05230712890625, -0.06689453125, 0.00969696044921875, -0.018157958984375, -0.05072021484375, 0.06390380859375, 0.05426025390625, -0.0302734375, -0.06524658203125, -0.050994873046875, 0.03842163085...
abatilo/myanimelist-embeddings
2023-05-09T20:51:17.000Z
[ "task_categories:text-classification", "task_categories:summarization", "size_categories:10K<n<100K", "language:en", "license:mit", "region:us" ]
abatilo
null
null
1
3
2023-05-09T19:28:09
--- license: mit task_categories: - text-classification - summarization language: - en pretty_name: MyAnimeList Embeddings size_categories: - 10K<n<100K --- # myanimelist-embeddings This dataset is every non-empty anime synopsis from [MyAnimeList.net](https://myanimelist.net) run through the `embed-multilingual-v2.0` embedding model from [Cohere AI](https://cohere.com). ## Sample code for searching for anime Install some dependencies: ``` pip install cohere==4.4.1 datasets==2.12.0 torch==2.0.1 ``` Code heavily inspired by the [Cohere Wikipedia embeddings sample](https://huggingface.co/datasets/Cohere/wikipedia-22-12-simple-embeddings#search) ```python import os import cohere import torch from datasets import load_dataset co = cohere.Client( os.environ["COHERE_API_KEY"] ) # Add your cohere API key from www.cohere.com docs_stream = load_dataset( "abatilo/myanimelist-embeddings", split="train", streaming=True ) docs = [] doc_embeddings = [] for doc in docs_stream: docs.append(doc) doc_embeddings.append(doc["embedding"]) doc_embeddings = torch.tensor(doc_embeddings) while True: query = input("What do you want to see?: ") response = co.embed(texts=[query], model="embed-multilingual-v2.0") query_embedding = response.embeddings query_embedding = torch.tensor(query_embedding) # Compute dot score between query embedding and document embeddings dot_scores = torch.mm(query_embedding, doc_embeddings.transpose(0, 1)) top_k = torch.topk(dot_scores, k=3) for doc_id in top_k.indices[0].tolist(): print(docs[doc_id]["title"]) print(docs[doc_id]["synopsis"], "\n") ``` ## Sample search queries ### high schoolers with super powers fight evil ``` What do you want to see?: high schoolers with super powers fight evil Kigurumi Sentai Quiltian Twin schoolgirls transform into their superhero aspects to save the world from an evil cabal of would-be dictators, but they can only fight for justice by having a lot of sex. (Source: ANN) Kekkaishi Yoshimura Sumimura comes from a long line of "Kekkaishi," individuals who have supernatural abilities and are able to destroy evil creatures called Ayakashi that venture into the human realm from time to time. The Ayakashi are demons that look to feast on the power emanating from the land of Karasumori, which also happens to be where Yoshimura's high school is located. Now, Yoshimura must fight to protect his beloved school and hometown. Although, if it were up to him, he would rather be baking cakes than fighting off the ugly characters that show up at night. Thankfully, Yoshimura isn't the only one helping to keep the baddies at bay. His childhood friend and neighbor, Tokine Yukimura, joins him in this righteous battle. Despite the fact that they are from rival clans, these two make a fantastic team. And teamwork is something vital to fighting the evil that is closing in, as the Ayakashi attack in waves, looking to claim the land as their own, and a shadowy organization looks on, ready to pounce when the time is right... Shiritsu Araiso Koutougakkou Seitokai Shikkoubu Kubota Makoto and Tokitoh Minoru (characters from Kazuya Minekura's manga Wild Adaptor—though no reference is made to the darker storyline of WA in this light-hearted anime)—are the muscle of their high school's all-powerful student council. They defend the student body from disorder—generated by both humans and demons—while avoiding their classes. 
(Source: ANN) ``` ### a pokemon trainer wants to be the very best ``` What do you want to see?: a pokemon trainer wants to be the very best Pokemon Pokémon are peculiar creatures with a vast array of different abilities and appearances; many people, known as Pokémon trainers, capture and train them, often with the intent of battling others. Young Satoshi has not only dreamed of becoming a Pokémon trainer but also a "Pokémon Master," and on the arrival of his 10th birthday, he finally has a chance to make that dream a reality. Unfortunately for him, all three Pokémon available to beginning trainers have already been claimed and only Pikachu, a rebellious Electric-type Pokémon, remains. However, this chance encounter would mark the start of a lifelong friendship and an epic adventure! Setting off on a journey to become the very best, Satoshi and Pikachu travel across beautiful, sprawling regions with their friends Kasumi, a Water-type trainer, and Takeshi, a Rock-type trainer. But danger lurks around every corner. The infamous Team Rocket is always nearby, seeking to steal powerful Pokémon through nefarious schemes. It'll be up to Satoshi and his friends to thwart their efforts as he also strives to earn the eight Pokémon Gym Badges he'll need to challenge the Pokémon League, and eventually claim the title of Pokémon Master. [Written by MAL Rewrite] Pokemon Best Wishes! As with both the Advanced Generation and Diamond & Pearl series before it, the Best Wishes! series begins with only Satoshi, headed off to the Isshu region, located far away from Kanto, Johto, Houen, and Sinnoh, with his Pikachu. After he meets up with the new trainer and rival Shooty and the region's Professor Araragi, he gains traveling companions in Iris, a girl from a town known for its Dragon Pokémon, and Dent, Pokémon Connoisseur and the Grass Pokémon specialist of the three Sanyou City Gym Leaders. Pokemon Sun & Moon After his mother wins a free trip to the islands, Pokémon trainer Satoshi and his partner Pikachu head for Melemele Island of the beautiful Alola region, which is filled with lots of new Pokémon and even variations of familiar faces. Eager to explore the island, Satoshi and Pikachu run wild with excitement, quickly losing their way while chasing after a Pokémon. The pair eventually stumbles upon the Pokémon School, an institution where students come to learn more about these fascinating creatures. At the school, when he and one of the students—the no-nonsense Kaki—have a run-in with the nefarious thugs of Team Skull, Satoshi discovers the overwhelming might of the Z-Moves, powerful attacks originating from the Alola region that require the trainer and Pokémon to be in sync. Later that night, he and Pikachu have an encounter with the guardian deity Pokémon of Melemele Island, the mysterious Kapu Kokeko. The Pokémon of legend bestows upon them a Z-Ring, a necessary tool in using the Z-Moves. Dazzled by their earlier battle and now in possession of a Z-Ring, Satoshi and Pikachu decide to stay behind in the Alola Region to learn and master the strength of these powerful new attacks. Enrolling in the Pokémon School, Satoshi is joined by classmates such as Lillie, who loves Pokémon but cannot bring herself to touch them, Kaki, and many others. 
Between attending classes, fending off the pesky Team Rocket—who themselves have arrived in Alola to pave the way for their organization's future plans—and taking on the Island Challenge that is necessary to master the Z-Moves, Satoshi and Pikachu are in for an exciting new adventure. [Written by MAL Rewrite] ``` ### hunting demons with swords ``` What do you want to see?: hunting demons with swords Grandeek This is a tale of swords and sorcery as the young warrior-woman Tia Allbright and her hapless assistant, Luke, battle demon assassins in a fantasy land. Tia arrives on the island of Marcleida with her trusted sword 'Grandeek,' which holds a spirit within that helps her on her quests. She is soon turned away however. Determined to get on the island, Tia searches for a way past the fences that guard the entrance, as another stranger arrives on the island to take on a mysterious job. Someone has been killing the inhabitants of the island and has the ability to appear and disappear at will. Seems the sword 'Aihorn' has been stolen and the spirit that resides within it seeks vengenance on those who killed its master 50 years before. As Tia makes her way inside the island, it becomes clear that both she, and the stranger, are after the sword Aihorn, hoping to bring to an end its bloody goal. But the sword has the ability to possess the person who wields it - putting Tia and the stranger at a great disadvantage. Based on the manga by Kohime Ohse, Tia and Grandeek will have to face their most difficult challenge yet... (Source: AnimeNfo) Bemubemu Hunter Kotengu Tenmaru Adventures of a demon slayer Tenmaru. Karasu Tengu Kabuto 500 years ago in the Tensho Era of Japan, a man was born who defied the will of a demon; a man who had gods of good on his side; a man destined to battle evil....his name was Kabuto. Somehow, Kuroyasya Douki, the vile Black Night Demon, escaped his prison in hell and returned to the earthly plane to wreak vengeance on the family-line of Kabuto. None can escape his deadly magic and masterful skills with the blade; however, the gods of the North, West, East, and South band together to help Kabuto stand for Justice. With the questionable help of a diabolical talking sword that his own father forged, Kabuto may live another day to see his own sons born.... ```
9,082
[ [ -0.045257568359375, -0.040374755859375, 0.031646728515625, -0.01415252685546875, -0.01442718505859375, 0.00923919677734375, 0.00234222412109375, -0.0215301513671875, 0.08074951171875, 0.031280517578125, -0.056854248046875, -0.0233001708984375, -0.05859375, 0...
PORTULAN/parlamento-pt
2023-05-12T06:34:53.000Z
[ "task_categories:text-generation", "task_categories:fill-mask", "task_ids:language-modeling", "task_ids:masked-language-modeling", "annotations_creators:no-annotation", "multilinguality:monolingual", "size_categories:1M<n<10M", "source_datasets:original", "language:pt", "license:other", "parlame...
PORTULAN
null
null
2
3
2023-05-10T08:15:20
--- annotations_creators: - no-annotation language: - pt license: - other multilinguality: - monolingual pretty_name: ParlamentoPT size_categories: - 1M<n<10M source_datasets: - original task_categories: - text-generation - fill-mask task_ids: - language-modeling - masked-language-modeling tags: - parlamentopt - parlamento - parlamento-pt - albertina-pt* - albertina-ptpt - albertina-ptbr - fill-mask - bert - deberta - portuguese - encoder - foundation model --- # Dataset Card for ParlamentoPT ### Dataset Summary The ParlamentoPT is a **Portuguese** language data set obtained by collecting publicly available documents containing transcriptions of debates in the Portuguese Parliament. The data was collected from the Portuguese Parliament portal in accordance with its [open data policy](https://www.parlamento.pt/Cidadania/Paginas/DadosAbertos.aspx). This dataset was collected with the purpose of creating the [Albertina-PT*](https://huggingface.co/PORTULAN/albertina-ptpt) language model, and it serves as training data for model development. The development of the model is a collaborative effort between the University of Lisbon and the University of Porto in Portugal. <br> # Citation When using or citing this data set, kindly cite the following [publication](https://arxiv.org/abs/2305.06721): ``` latex @misc{albertina-pt, title={Advancing Neural Encoding of Portuguese with Transformer Albertina PT-*}, author={João Rodrigues and Luís Gomes and João Silva and António Branco and Rodrigo Santos and Henrique Lopes Cardoso and Tomás Osório}, year={2023}, eprint={2305.06721}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` <br> # Acknowledgments The research reported here was partially supported by: PORTULAN CLARIN—Research Infrastructure for the Science and Technology of Language, funded by Lisboa 2020, Alentejo 2020 and FCT—Fundação para a Ciência e Tecnologia under the grant PINFRA/22117/2016; research project ALBERTINA - Foundation Encoder Model for Portuguese and AI, funded by FCT—Fundação para a Ciência e Tecnologia under the grant CPCA-IAC/AV/478394/2022; innovation project ACCELERAT.AI - Multilingual Intelligent Contact Centers, funded by IAPMEI, I.P. - Agência para a Competitividade e Inovação under the grant C625734525-00462629, of Plano de Recuperação e Resiliência, call RE-C05-i01.01 – Agendas/Alianças Mobilizadoras para a Reindustrialização; and LIACC - Laboratory for AI and Computer Science, funded by FCT—Fundação para a Ciência e Tecnologia under the grant FCT/UID/CEC/0027/2020.
2,619
[ [ -0.033477783203125, -0.05218505859375, 0.0033626556396484375, 0.036773681640625, -0.03021240234375, -0.030487060546875, -0.0445556640625, -0.024078369140625, 0.0109100341796875, 0.0335693359375, -0.014312744140625, -0.041961669921875, -0.03759765625, 0.02520...
pietrolesci/dbpedia_14_indexed
2023-05-11T13:34:45.000Z
[ "task_categories:text-classification", "task_ids:topic-classification", "annotations_creators:machine-generated", "language_creators:crowdsourced", "multilinguality:monolingual", "size_categories:100K<n<1M", "source_datasets:original", "language:en", "license:cc-by-sa-3.0", "region:us" ]
pietrolesci
null
null
0
3
2023-05-10T22:11:57
--- annotations_creators: - machine-generated language_creators: - crowdsourced language: - en license: - cc-by-sa-3.0 multilinguality: - monolingual size_categories: - 100K<n<1M source_datasets: - original task_categories: - text-classification task_ids: - topic-classification paperswithcode_id: dbpedia pretty_name: DBpedia dataset_info: features: - name: labels dtype: class_label: names: '0': Company '1': EducationalInstitution '2': Artist '3': Athlete '4': OfficeHolder '5': MeanOfTransportation '6': Building '7': NaturalPlace '8': Village '9': Animal '10': Plant '11': Album '12': Film '13': WrittenWork - name: title dtype: string - name: content dtype: string - name: uid dtype: int64 - name: embedding_all-mpnet-base-v2 sequence: float32 - name: embedding_multi-qa-mpnet-base-dot-v1 sequence: float32 - name: embedding_all-MiniLM-L12-v2 sequence: float32 splits: - name: train num_bytes: 4490428970 num_examples: 560000 - name: test num_bytes: 561310285 num_examples: 70000 download_size: 0 dataset_size: 5051739255 --- This is the same dataset as [`dbpedia_14`](https://huggingface.co/datasets/dbpedia_14). The only differences are 1. Addition of a unique identifier, `uid` 1. Addition of the indices, that is, 3 columns with the embeddings of 3 different sentence-transformers (a search sketch is shown below) - `all-mpnet-base-v2` - `multi-qa-mpnet-base-dot-v1` - `all-MiniLM-L12-v2` 1. Renaming of the `label` column to `labels` for easier compatibility with the transformers library
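A minimal sketch of nearest-neighbour search over one of the precomputed embedding columns, using the FAISS integration of the datasets library; the hub id is assumed to mirror this repository, and faiss must be installed separately.

```python
# Minimal sketch: similarity search over a precomputed embedding column.
# Assumptions: hub id "pietrolesci/dbpedia_14_indexed"; faiss installed;
# the first test row is used as an arbitrary query.
import numpy as np
from datasets import load_dataset

col = "embedding_all-MiniLM-L12-v2"
ds = load_dataset("pietrolesci/dbpedia_14_indexed", split="test")
ds.add_faiss_index(column=col)  # build an in-memory FAISS index

query = np.array(ds[0][col], dtype=np.float32)
scores, neighbours = ds.get_nearest_examples(col, query, k=5)
print(neighbours["title"])  # titles of the 5 closest articles
```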
1,705
[ [ -0.03179931640625, -0.033050537109375, 0.018585205078125, 0.0262908935546875, 0.0089569091796875, -0.01129150390625, -0.00931549072265625, -0.0141143798828125, 0.035369873046875, 0.03509521484375, -0.06304931640625, -0.042327880859375, -0.0276336669921875, 0...
dspoka/sdg-single
2023-05-15T05:14:42.000Z
[ "region:us" ]
dspoka
null
null
0
3
2023-05-11T03:55:36
--- dataset_info: features: - name: iso3 dtype: string - name: country dtype: string - name: goal dtype: string - name: target dtype: string - name: text dtype: string - name: status dtype: string - name: sector dtype: string - name: response dtype: string - name: infotype dtype: string - name: start dtype: float64 - name: end dtype: float64 - name: filename dtype: string - name: __index_level_0__ dtype: int64 splits: - name: full num_bytes: 4297968 num_examples: 14219 download_size: 0 dataset_size: 4297968 --- # Dataset Card for "sdg-single" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
777
[ [ -0.055419921875, -0.0211944580078125, 0.0117034912109375, 0.020263671875, -0.034210205078125, -0.00833892822265625, 0.0189361572265625, 0.003765106201171875, 0.071533203125, 0.045440673828125, -0.07952880859375, -0.07098388671875, -0.0341796875, -0.018356323...
helenlu/ade20k
2023-05-12T03:51:47.000Z
[ "region:us" ]
helenlu
null
null
1
3
2023-05-11T06:05:44
Entry not found
15
[ [ -0.02142333984375, -0.01495361328125, 0.05718994140625, 0.0288238525390625, -0.035064697265625, 0.046539306640625, 0.052520751953125, 0.005062103271484375, 0.0513916015625, 0.016998291015625, -0.052093505859375, -0.014984130859375, -0.060394287109375, 0.0379...
gshbao/DocNMT
2023-05-12T07:52:30.000Z
[ "task_categories:translation", "size_categories:100K<n<1M", "language:en", "language:de", "license:afl-3.0", "region:us" ]
gshbao
null
null
1
3
2023-05-12T07:00:08
--- license: afl-3.0 task_categories: - translation language: - en - de pretty_name: Doc-Level NMT size_categories: - 100K<n<1M --- # Dataset Card for Dataset Name ### Dataset Summary The benchmark datasets for document-level machine translation. ### Supported Tasks Document-level Machine Translation Tasks. ### Languages English-German ## Dataset Structure ### Data Instances TED: iwslt17, News: nc2016, Europarl: europarl7 ### Data Fields Plain text in which each line represents a sentence; consecutive sentence lines, delimited by a '\<d\>' line, form a document (see the sketch below). ### Data Splits train, dev, test ### Data Usage This dataset was created for convenient use with https://github.com/baoguangsheng/g-transformer
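A minimal parsing sketch for the file layout described under Data Fields, assuming exactly one sentence per line with "&lt;d&gt;" lines delimiting documents; the function name and file handling are illustrative, not part of the original release.

```python
# Minimal sketch: group sentence lines into documents at "<d>" delimiters.
# Assumption: the layout described in the card (one sentence per line,
# a literal "<d>" line between documents).
def read_documents(path):
    docs, current = [], []
    with open(path, encoding="utf-8") as fh:
        for line in fh:
            line = line.strip()
            if line == "<d>":
                if current:          # close the current document
                    docs.append(current)
                    current = []
            elif line:
                current.append(line)  # one sentence per line
    if current:                       # flush the trailing document
        docs.append(current)
    return docs
```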
716
[ [ -0.0294036865234375, -0.039398193359375, 0.0210723876953125, 0.0142059326171875, -0.0225830078125, 0.0092926025390625, -0.00426483154296875, -0.006298065185546875, -0.01276397705078125, 0.03582763671875, -0.05615234375, -0.0611572265625, -0.04681396484375, 0...
Abrumu/Fashion_controlnet_dataset
2023-05-16T00:45:16.000Z
[ "region:us" ]
Abrumu
null
null
4
3
2023-05-12T11:48:44
--- dataset_info: features: - name: target dtype: image - name: prompt dtype: string - name: control dtype: image - name: CLIP_captions dtype: string splits: - name: train num_bytes: 9533440093.0 num_examples: 11647 download_size: 9530317166 dataset_size: 9533440093.0 --- # Dataset Card for "Fashion_controlnet_dataset" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
495
[ [ -0.028961181640625, -0.00839996337890625, -0.01190948486328125, 0.0257720947265625, -0.0178070068359375, 0.004558563232421875, 0.02923583984375, -0.0132598876953125, 0.0675048828125, 0.02764892578125, -0.07562255859375, -0.051971435546875, -0.032623291015625, ...
senyukhin/ru-ego-literature
2023-06-25T09:42:11.000Z
[ "task_categories:summarization", "size_categories:n<1K", "language:ru", "license:openrail", "art", "region:us" ]
senyukhin
null
null
0
3
2023-05-12T21:05:41
--- license: openrail task_categories: - summarization language: - ru size_categories: - n<1K tags: - art viewer: true --- # Dataset Card for Dataset Name ## Dataset Description - **Homepage:** - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary For this dataset, we selected literary texts in Russian that are closest in style and subject matter to real diary entries, giving priority to texts written in the first person and paying considerable attention to the inner state of the characters. By parsing popular Internet resources containing retellings of literary works, we obtained summaries for each of the works selected in the previous step and added them to the dataset. ### Supported Tasks and Leaderboards [Summarization] ### Languages [Russian] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions [More Information Needed]
1,823
[ [ 0.00518035888671875, -0.0199127197265625, 0.01262664794921875, 0.0191650390625, -0.032012939453125, 0.005817413330078125, -0.02593994140625, -0.023162841796875, 0.03948974609375, 0.04864501953125, -0.06689453125, -0.0755615234375, -0.045257568359375, 0.01359...
a6kme/minds14-mirror
2023-05-13T11:42:15.000Z
[ "task_categories:automatic-speech-recognition", "task_ids:keyword-spotting", "annotations_creators:expert-generated", "annotations_creators:crowdsourced", "annotations_creators:machine-generated", "language_creators:crowdsourced", "language_creators:expert-generated", "multilinguality:multilingual", ...
a6kme
MINDS-14 is a training and evaluation resource for the intent detection task with spoken data. It covers 14 intents extracted from a commercial system in the e-banking domain, associated with spoken examples in 14 diverse language varieties.
@article{gerz2021multilingual, title={Multilingual and cross-lingual intent detection from spoken data}, author={Gerz, Daniela and Su, Pei-Hao and Kusztos, Razvan and Mondal, Avishek and Lis, Michal and Singhal, Eshan and Mrk{\v{s}}i{\'c}, Nikola and Wen, Tsung-Hsien and Vuli{\'c}, Ivan}, journal={arXiv preprint arXiv:2104.08524}, year={2021} }
0
3
2023-05-13T07:56:01
--- annotations_creators: - expert-generated - crowdsourced - machine-generated language_creators: - crowdsourced - expert-generated language: - en - fr - it - es - pt - de - nl - ru - pl - cs - ko - zh language_bcp47: - en - en-GB - en-US - en-AU - fr - it - es - pt - de - nl - ru - pl - cs - ko - zh license: - cc-by-4.0 multilinguality: - multilingual pretty_name: 'MInDS-14' size_categories: - 10K<n<100K task_categories: - automatic-speech-recognition - speech-processing task_ids: - speech-recognition - keyword-spotting --- # MInDS-14 ## Dataset Description - **Fine-Tuning script:** [pytorch/audio-classification](https://github.com/huggingface/transformers/tree/main/examples/pytorch/audio-classification) - **Paper:** [Multilingual and Cross-Lingual Intent Detection from Spoken Data](https://arxiv.org/abs/2104.08524) - **Total amount of disk used:** ca. 500 MB MINDS-14 is a training and evaluation resource for the intent detection task with spoken data. It covers 14 intents extracted from a commercial system in the e-banking domain, associated with spoken examples in 14 diverse language varieties. ## Example MInDS-14 can be downloaded and used as follows: ```py from datasets import load_dataset minds_14 = load_dataset("PolyAI/minds14", "fr-FR") # for French # to download all data for multi-lingual fine-tuning uncomment following line # minds_14 = load_dataset("PolyAI/minds14", "all") # see structure print(minds_14) # load audio sample on the fly audio_input = minds_14["train"][0]["audio"] # first decoded audio sample intent_class = minds_14["train"][0]["intent_class"] # first intent class id intent = minds_14["train"].features["intent_class"].names[intent_class] # map the class id to its intent name # use audio_input and intent_class to fine-tune your model for audio classification ``` ## Dataset Structure We show detailed information for the example configuration `fr-FR` of the dataset. All other configurations have the same structure. ### Data Instances **fr-FR** - Size of downloaded dataset files: 471 MB - Size of the generated dataset: 300 KB - Total amount of disk used: 471 MB An example of a data instance of the config `fr-FR` looks as follows: ``` { "path": "/home/patrick/.cache/huggingface/datasets/downloads/extracted/3ebe2265b2f102203be5e64fa8e533e0c6742e72268772c8ac1834c5a1a921e3/fr-FR~ADDRESS/response_4.wav", "audio": { "path": "/home/patrick/.cache/huggingface/datasets/downloads/extracted/3ebe2265b2f102203be5e64fa8e533e0c6742e72268772c8ac1834c5a1a921e3/fr-FR~ADDRESS/response_4.wav", "array": array( [0.0, 0.0, 0.0, ..., 0.0, 0.00048828, -0.00024414], dtype=float32 ), "sampling_rate": 8000, }, "transcription": "je souhaite changer mon adresse", "english_transcription": "I want to change my address", "intent_class": 1, "lang_id": 6, } ``` ### Data Fields The data fields are the same among all splits. - **path** (str): Path to the audio file - **audio** (dict): Audio object including the loaded audio array, sampling rate and path to the audio file - **transcription** (str): Transcription of the audio file - **english_transcription** (str): English transcription of the audio file - **intent_class** (int): Class id of intent - **lang_id** (int): Id of language ### Data Splits Every config has only the `"train"` split, containing *ca.* 600 examples.
## Dataset Creation [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Discussion of Biases [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Other Known Limitations [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Additional Information ### Dataset Curators [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Licensing Information All datasets are licensed under the [Creative Commons license (CC-BY)](https://creativecommons.org/licenses/). ### Citation Information ``` @article{DBLP:journals/corr/abs-2104-08524, author = {Daniela Gerz and Pei{-}Hao Su and Razvan Kusztos and Avishek Mondal and Michal Lis and Eshan Singhal and Nikola Mrksic and Tsung{-}Hsien Wen and Ivan Vulic}, title = {Multilingual and Cross-Lingual Intent Detection from Spoken Data}, journal = {CoRR}, volume = {abs/2104.08524}, year = {2021}, url = {https://arxiv.org/abs/2104.08524}, eprinttype = {arXiv}, eprint = {2104.08524}, timestamp = {Mon, 26 Apr 2021 17:25:10 +0200}, biburl = {https://dblp.org/rec/journals/corr/abs-2104-08524.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ``` ### Contributions Thanks to [@patrickvonplaten](https://github.com/patrickvonplaten) for adding this dataset
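One practical note the card leaves implicit: the audio is sampled at 8 kHz, while most pretrained speech models expect 16 kHz. A minimal sketch of on-the-fly resampling with the standard `datasets.Audio` feature (nothing here is specific to this mirror):

```py
# Resample MInDS-14 audio from 8 kHz to 16 kHz on the fly.
from datasets import load_dataset, Audio

minds_14 = load_dataset("PolyAI/minds14", "fr-FR")
minds_14 = minds_14.cast_column("audio", Audio(sampling_rate=16_000))

sample = minds_14["train"][0]["audio"]
print(sample["sampling_rate"])  # 16000 — decoded and resampled lazily on access
```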
5,292
[ [ -0.034912109375, -0.044708251953125, 0.0276031494140625, 0.01995849609375, -0.00913238525390625, -0.025543212890625, -0.04058837890625, -0.0292205810546875, 0.0241241455078125, 0.033905029296875, -0.05853271484375, -0.0714111328125, -0.041168212890625, 0.007...
0x22almostEvil/russe-semantics-sim
2023-05-17T15:43:59.000Z
[ "task_categories:text-classification", "size_categories:100K<n<1M", "language:ru", "license:mit", "semantics", "region:us" ]
0x22almostEvil
null
null
0
3
2023-05-14T13:45:30
--- license: mit task_categories: - text-classification language: - ru tags: - semantics size_categories: - 100K<n<1M --- # Dataset Card for russe-semantics-sim A Russian-language dataset with ~200K entries. ### Dataset Summary License: MIT. Contains a CSV of word pairs (word1, word2), their `connection score` (whether they are synonyms or associations), and the type of connection. ### Original Datasets The original datasets are available here: - https://github.com/nlpub/russe-evaluation
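A minimal pandas sketch for the CSV layout the card describes; the filename and the exact column names are assumptions, since the card only states the general layout (word1, word2, connection score, connection type):

```py
# Sketch under assumptions: "russe-sim.csv" and the column names below are
# hypothetical — the card only names the general fields, not the headers.
import pandas as pd

df = pd.read_csv("russe-sim.csv")
print(df.columns.tolist())  # inspect the actual headers first

# e.g. keep only strongly related pairs, assuming a numeric "sim" column
strong = df[df["sim"] > 0.5]
print(strong.head())
```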
448
[ [ -0.025726318359375, -0.0134735107421875, 0.01398468017578125, 0.033538818359375, -0.030731201171875, -0.0103607177734375, -0.005870819091796875, -0.0107269287109375, 0.00835418701171875, 0.006458282470703125, -0.0440673828125, -0.064208984375, -0.043731689453125...
omniquad/BC5CDR-IOB
2023-05-16T11:17:02.000Z
[ "region:us" ]
omniquad
The automatic extraction of chemical information from text requires the recognition of chemical entity mentions as one of its key steps. When developing supervised named entity recognition (NER) systems, the availability of a large, manually annotated text corpus is desirable. Furthermore, large corpora permit the robust evaluation and comparison of different approaches that detect chemicals in documents. We present the CHEMDNER corpus, a collection of 10,000 PubMed abstracts that contain a total of 84,355 chemical entity mentions labeled manually by expert chemistry literature curators, following annotation guidelines specifically defined for this task. The abstracts of the CHEMDNER corpus were selected to be representative of all major chemical disciplines. Each of the chemical entity mentions was manually labeled according to its structure-associated chemical entity mention (SACEM) class: abbreviation, family, formula, identifier, multiple, systematic and trivial. The difficulty and consistency of tagging chemicals in text were measured using an agreement study between annotators, obtaining a percentage agreement of 91. For a subset of the CHEMDNER corpus (the test set of 3,000 abstracts) we provide not only the Gold Standard manual annotations, but also mentions automatically detected by the 26 teams that participated in the BioCreative IV CHEMDNER chemical mention recognition task. In addition, we release the CHEMDNER silver standard corpus of automatically extracted mentions from 17,000 randomly selected PubMed abstracts. A version of the CHEMDNER corpus in the BioC format has been generated as well. We propose a standard for required minimum information about entity annotations for the construction of domain-specific corpora on chemical and drug entities. The CHEMDNER corpus and annotation guidelines are available at: http://www.biocreative.org/resources/biocreative-iv/chemdner-corpus/
@article{Krallinger2015TheCC, title={The CHEMDNER corpus of chemicals and drugs and its annotation principles}, author={Martin Krallinger and Obdulia Rabal and Florian Leitner and Miguel Vazquez and David Salgado and Zhiyong Lu and Robert Leaman and Yanan Lu and Dong-Hong Ji and Daniel M. Lowe and Roger A. Sayle and Riza Theresa Batista-Navarro and Rafal Rak and Torsten Huber and Tim Rockt{\"a}schel and S{\'e}rgio Matos and David Campos and Buzhou Tang and Hua Xu and Tsendsuren Munkhdalai and Keun Ho Ryu and S. V. Ramanan and P. Senthil Nathan and Slavko Zitnik and Marko Bajec and Lutz Weber and Matthias Irmer and Saber Ahmad Akhondi and Jan A. Kors and Shuo Xu and Xin An and Utpal Kumar Sikdar and Asif Ekbal and Masaharu Yoshioka and Thaer M. Dieb and Miji Choi and Karin M. Verspoor and Madian Khabsa and C. Lee Giles and Hongfang Liu and K. E. Ravikumar and Andre Lamurias and Francisco M. Couto and Hong-Jie Dai and Richard Tzong-Han Tsai and C Ata and Tolga Can and Anabel Usie and Rui Alves and Isabel Segura-Bedmar and Paloma Mart{\'i}nez and Julen Oyarz{\'a}bal and Alfonso Valencia}, journal={Journal of Cheminformatics}, year={2015}, volume={7}, pages={S2 - S2} }
1
3
2023-05-16T10:31:48
Entry not found
15
[ [ -0.02142333984375, -0.01495361328125, 0.05718994140625, 0.0288238525390625, -0.035064697265625, 0.046539306640625, 0.052520751953125, 0.005062103271484375, 0.0513916015625, 0.016998291015625, -0.052093505859375, -0.014984130859375, -0.060394287109375, 0.0379...