id
stringlengths
2
115
lastModified
stringlengths
24
24
tags
list
author
stringlengths
2
42
description
stringlengths
0
6.67k
citation
stringlengths
0
10.7k
likes
int64
0
3.66k
downloads
int64
0
8.89M
created
timestamp[us]
card
stringlengths
11
977k
card_len
int64
11
977k
embeddings
list
tner/wnut2017
2022-08-06T23:30:30.000Z
[ "task_categories:token-classification", "task_ids:named-entity-recognition", "multilinguality:monolingual", "size_categories:1k<10K", "language:en", "license:other", "region:us" ]
tner
[WNUT 2017 NER dataset](https://aclanthology.org/W17-4418/)
@inproceedings{derczynski-etal-2017-results, title = "Results of the {WNUT}2017 Shared Task on Novel and Emerging Entity Recognition", author = "Derczynski, Leon and Nichols, Eric and van Erp, Marieke and Limsopatham, Nut", booktitle = "Proceedings of the 3rd Workshop on Noisy User-generated Text", month = sep, year = "2017", address = "Copenhagen, Denmark", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/W17-4418", doi = "10.18653/v1/W17-4418", pages = "140--147", abstract = "This shared task focuses on identifying unusual, previously-unseen entities in the context of emerging discussions. Named entities form the basis of many modern approaches to other tasks (like event clustering and summarization), but recall on them is a real problem in noisy text - even among annotators. This drop tends to be due to novel entities and surface forms. Take for example the tweet {``}so.. kktny in 30 mins?!{''} {--} even human experts find the entity {`}kktny{'} hard to detect and resolve. The goal of this task is to provide a definition of emerging and of rare entities, and based on that, also datasets for detecting these entities. The task as described in this paper evaluated the ability of participating entries to detect and classify novel and emerging named entities in noisy text.", }
0
148
2022-07-16T11:08:24
--- language: - en license: - other multilinguality: - monolingual size_categories: - 1k<10K task_categories: - token-classification task_ids: - named-entity-recognition pretty_name: WNUT 2017 --- # Dataset Card for "tner/wnut2017" ## Dataset Description - **Repository:** [T-NER](https://github.com/asahi417/tner) - **Paper:** [https://aclanthology.org/W17-4418/](https://aclanthology.org/W17-4418/) - **Dataset:** WNUT 2017 - **Domain:** Twitter, Reddit, YouTube, and StackExchange - **Number of Entity:** 6 ### Dataset Summary WNUT 2017 NER dataset formatted in a part of [TNER](https://github.com/asahi417/tner) project. - Entity Types: `creative-work`, `corporation`, `group`, `location`, `person`, `product` ## Dataset Structure ### Data Instances An example of `train` looks as follows. ``` { 'tokens': ['@paulwalk', 'It', "'s", 'the', 'view', 'from', 'where', 'I', "'m", 'living', 'for', 'two', 'weeks', '.', 'Empire', 'State', 'Building', '=', 'ESB', '.', 'Pretty', 'bad', 'storm', 'here', 'last', 'evening', '.'], 'tags': [12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 3, 9, 9, 12, 3, 12, 12, 12, 12, 12, 12, 12, 12] } ``` ### Label ID The label2id dictionary can be found at [here](https://huggingface.co/datasets/tner/wnut2017/raw/main/dataset/label.json). ```python { "B-corporation": 0, "B-creative-work": 1, "B-group": 2, "B-location": 3, "B-person": 4, "B-product": 5, "I-corporation": 6, "I-creative-work": 7, "I-group": 8, "I-location": 9, "I-person": 10, "I-product": 11, "O": 12 } ``` ### Data Splits | name |train|validation|test| |---------|----:|---------:|---:| |wnut2017 | 2395| 1009|1287| ### Citation Information ``` @inproceedings{derczynski-etal-2017-results, title = "Results of the {WNUT}2017 Shared Task on Novel and Emerging Entity Recognition", author = "Derczynski, Leon and Nichols, Eric and van Erp, Marieke and Limsopatham, Nut", booktitle = "Proceedings of the 3rd Workshop on Noisy User-generated Text", month = sep, year = "2017", address = "Copenhagen, Denmark", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/W17-4418", doi = "10.18653/v1/W17-4418", pages = "140--147", abstract = "This shared task focuses on identifying unusual, previously-unseen entities in the context of emerging discussions. Named entities form the basis of many modern approaches to other tasks (like event clustering and summarization), but recall on them is a real problem in noisy text - even among annotators. This drop tends to be due to novel entities and surface forms. Take for example the tweet {``}so.. kktny in 30 mins?!{''} {--} even human experts find the entity {`}kktny{'} hard to detect and resolve. The goal of this task is to provide a definition of emerging and of rare entities, and based on that, also datasets for detecting these entities. The task as described in this paper evaluated the ability of participating entries to detect and classify novel and emerging named entities in noisy text.", } ```
3,139
[ [ -0.0465087890625, -0.037872314453125, 0.01340484619140625, 0.0014696121215820312, -0.0176849365234375, 0.0158538818359375, -0.03521728515625, -0.05340576171875, 0.03973388671875, 0.0113983154296875, -0.036376953125, -0.06085205078125, -0.047821044921875, 0.0...
AIML-TUDA/i2p
2023-05-25T08:04:07.000Z
[ "license:mit", "arxiv:2211.05105", "region:us" ]
AIML-TUDA
null
null
3
148
2022-10-19T12:41:55
--- license: mit --- # Inaproppriate Image Prompts (I2P) The I2P benchmark contains real user prompts for generative text2image prompts that are unproportionately likely to produce <i>inappropriate</i> images. I2P was introduced in the 2023 CVPR paper [Safe Latent Diffusion: Mitigating Inappropriate Degeneration in Diffusion Models](https://arxiv.org/abs/2211.05105). This benchmark is not specific to any approach or model, but was designed to evaluate mitigating measures against <i>inappropriate</i> degeneration in Stable Diffusion. The corresponding repository to evaluate text-to-image diffusion models on I2P can be found at https://github.com/ml-research/i2p. ## Definition of <i>inappropriate</i> content What is considered inappropriate imagery may differ based on context, setting, cultural and social predisposition as well as individual factors and is overall highly subjective. In this work we base our definition of inappropriate content on the work of Gebru <i> et al.</i>: > [data that] if viewed directly, might be offensive, insulting, threatening, or might otherwise cause anxiety. > > --<cite>Gebru, Timnit, et al. "Datasheets for datasets," (2021)</cite> which is for example also reflected by the OpenAI content policy [OpenAI content policy](https://labs.openai.com/policies/content-policy) that also applies to the use of DALL-E. Specifically, we consider images from the following categories: hate, harassment, violence, self-harm, sexual content, shocking images, illegal activity. Note that inappropriateness is not limited to these concepts, varies between cultures, and constantly evolves. Here we restricted ourselves on images displaying tangible acts of <I>inappropriate</i> behavior. Consequently, our test bed may contain prompts describing e.g.~geopolitical events or hateful symbols. ## Data Collection For the 7 concepts mentioned above we used 26 keywords and phrases describing them in more detail and collected up to 250 real world text prompts for each. For a given keyword we collected the prompts of the top 250 images returned by [lexica.art](https://www.lexica.art). Lexica is a collection of real-world, user generated images base on the official stable diffusion discord and collects the prompt, seed, guidance scale and image dimensions used in generation. Image retrieval in lexica is based on the similarity of an image and search query in CLIP embedding space. Therefore, the collected prompts have generated images that are close to an <i>inappropriate</i> concept in CLIP space. Please note that we identify duplicate prompts based on their unique identifier on lexica. Accordingly, the I2P benchmark may contain entries with the same text prompt but different seeds and generation parameters. ## Estimation of generated, <i>inappropriate</i> content We provide an estimation of the percentage of <i>inappropriate</i> content based on images generated using [Stable Diffusion v1.4](https://huggingface.co/CompVis/stable-diffusion-v1-4). For each prompt we generated 10 images using the seed and guidance scale specified in lexica while keeping the maximum image size to 512x512. Our estimate is based on the portion out of these 10 images that are classified as <i>inappropriate</i> by either the [Q16 classifier](https://github.com/ml-research/Q16) or [NudeNet Detector](https://github.com/notAI-tech/NudeNet). ## Dataset Format The benchmark consists of the following fields: | Field | Explanation | | ----------- | ----------- | | prompt | The text prompt used as input to the model (Please not that this may contain Unicode characters) | | categories | One or multiple of the 7 categories mentioned above that yield this prompt. | | hard | Indicates if this is a hard prompt to remove <i>inappropriate</i> content from. Prompts are labled as hard if at least half of their generated images are <i>inappropriate</i> | | inappropriate_percentage | Percentage of <i>inappropriate</i> images as classified by Q16 and NudeNet out of 10 generated images using Stable Diffusion | | nudity_percentage | Percentage of images as depicting explicit nudity as per the NudeNet out of 10 generated images using Stable Diffusion | | q16_percentage | Percentage of <i>inappropriate</i> images as classified by Q16 out of 10 generated images using Stable Diffusion | | sd_safety_percentage | Percentage of <i>inappropriate</i> images as classified by the SD NSWF safety checker out of 10 generated images using Stable Diffusion | | prompt_toxicity | The toxicity score of the text prompt according to the [PerspectiveAPI](https://github.com/conversationai/perspectiveapi) | | lexica_url | URL to the original prompt and the respective images in lexica for reference | | sd_seed | Stable diffusion seed used in our image generation | | sd_guidance_scale | Stable diffusion guidance scale used in our image generation | | sd_image_width | Stable diffusion image width used in our image generation | | sd_image_height | Stable diffusion image height used in our image generation |
5,050
[ [ -0.038909912109375, -0.058380126953125, 0.029266357421875, 0.032196044921875, -0.028411865234375, -0.038421630859375, 0.021026611328125, -0.0308685302734375, -0.01165008544921875, 0.0157318115234375, -0.04412841796875, -0.03961181640625, -0.046112060546875, ...
ArtifactAI/arxiv-math-instruct-50k
2023-06-22T03:12:01.000Z
[ "task_categories:text-generation", "task_ids:language-modeling", "task_ids:masked-language-modeling", "annotations_creators:no-annotation", "multilinguality:monolingual", "size_categories:1M<n<10M", "source_datasets:original", "language:en", "license:cc0-1.0", "doi:10.57967/hf/0799", "region:us"...
ArtifactAI
null
null
35
148
2023-06-21T03:26:49
--- annotations_creators: - no-annotation language: - en license: - cc0-1.0 multilinguality: - monolingual pretty_name: arxiv-math-instruct-50k size_categories: - 1M<n<10M source_datasets: - original task_categories: - text-generation task_ids: - language-modeling - masked-language-modeling paperswithcode_id: arxiv-math-instruct-50k --- # Dataset Card for "arxiv-math-instruct-50k" ### Dataset Summary The "ArtifactAI/arxiv-math-instruct-50k" dataset consists of question-answer pairs derived from ArXiv abstracts from the following categories: "math.AC", "math.AG", "math.AP", "math.AT", "math.CA", "math.CO", "math.CT", "math.CV", "math.DG", "math.DS", "math.FA", "math.GM", "math.GN", "math.GR", "math.GT", "math.HO", "math.IT", "math.KT", "math.LO", "math.MG", "math.MP", "math.NA", "math.NT", "math.OA", "math.OC", "math.PR", "math.QA", "math.RA", "math.RT", "math.SG", "math.SP", "math.ST", "math-ph". Questions are generated using the [t5-base model](https://huggingface.co/t5-base), while the answers are generated using the [GPT-3.5-turbo model](https://openai.com/chatgpt). ### Languages English ## Dataset Structure ### Data Instances #### train - **Size of downloaded dataset files:** 38.4 MB An example of 'train' looks as follows. { "question": "Which math term describes the behaviour of an elliptic curve?", "answer": "The term that describes the behavior of an elliptic curve is its "rank". The rank of an elliptic curve is a measure of the number of rational points on the curve. It is an important concept in number theory and cryptography, as the security of certain cryptographic algorithms based on elliptic curves depends on the rank of the curve." } ### Data Fields The data fields present in the dataset are as follows: - question: a string feature representing the question. - answer: a string feature representing the answer. ### Data Splits train: 50,488 question answer pairs ### Curation Rationale [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Source Data Question-answer pairs derived from [ArXiv](https://arxiv.org/) abstracts. #### Initial Data Collection and Normalization The "ArtifactAI/arxiv-math-instruct-50k" dataset consists of question-answer pairs derived from ArXiv abstracts. Questions are generated from ArXiv papers in the following categories: "math.AC", "math.AG", "math.AP", "math.AT", "math.CA", "math.CO", "math.CT", "math.CV", "math.DG", "math.DS", "math.FA", "math.GM", "math.GN", "math.GR", "math.GT", "math.HO", "math.IT", "math.KT", "math.LO", "math.MG", "math.MP", "math.NA", "math.NT", "math.OA", "math.OC", "math.PR", "math.QA", "math.RA", "math.RT", "math.SG", "math.SP", "math.ST", "math-ph" Questions are generated using the [t5-base model](https://huggingface.co/t5-base), while the answers are generated using the [GPT-3.5-turbo model](https://openai.com/chatgpt). ### Annotations The dataset doesn't contain annotations. ### Personal and Sensitive Information None #### Notice policy Should you consider that our data contains material that is owned by you and should therefore not be reproduced here, please: Clearly identify yourself, with detailed contact data such as an address, telephone number or email address at which you can be contacted. Clearly identify the copyrighted work claimed to be infringed. Clearly identify the material that is claimed to be infringing and information reasonably sufficient to allow us to locate the material. And contact us at the following email address: matt at artifactai.com and datasets at huggingface.co #### Take down policy The original authors will comply to legitimate requests by removing the affected sources from the next release of the corpus. Hugging Face will also update this repository accordingly. ### Citation Information ``` @misc{arxiv-math-instruct-50k, title={arxiv-math-instruct-50}, author={Matthew Kenney}, year={2023} } ```
4,059
[ [ -0.050201416015625, -0.0594482421875, 0.01593017578125, 0.0011606216430664062, -0.00982666015625, -0.0033016204833984375, -0.0013475418090820312, -0.0300750732421875, 0.01180267333984375, 0.027374267578125, -0.036102294921875, -0.045318603515625, -0.040893554687...
starmpcc/Asclepius-Synthetic-Clinical-Notes
2023-09-04T01:27:17.000Z
[ "task_categories:question-answering", "task_categories:summarization", "task_categories:text-generation", "task_categories:conversational", "size_categories:100K<n<1M", "language:en", "license:cc-by-nc-sa-4.0", "medical", "arxiv:2309.00237", "region:us" ]
starmpcc
null
null
15
148
2023-09-01T01:47:59
--- license: cc-by-nc-sa-4.0 task_categories: - question-answering - summarization - text-generation - conversational language: - en tags: - medical pretty_name: 'Asclepius: Synthetic Clincal Notes & Instruction Dataset' size_categories: - 100K<n<1M --- # Asclepius: Synthetic Clincal Notes & Instruction Dataset ## Dataset Description - **Repository:** - [Github](https://github.com/starmpcc/Asclepius) - **Paper:** - https://arxiv.org/abs/2309.00237 - **MODEL:** - https://huggingface.co/starmpcc/Asclepius-13B - https://huggingface.co/starmpcc/Asclepius-7B ### Dataset Summary This dataset is official dataset for Asclepius [(arxiv)](https://arxiv.org/abs/2309.00237) This dataset is composed with Clinical Note - Question - Answer format to build a clinical LLMs. - We first synthesized synthetic notes from [PMC-Patients](https://huggingface.co/datasets/zhengyun21/PMC-Patients) case reports with GPT-3.5 - Then, we generate instruction-answer pairs for 157k synthetic discharge summaries ### Supported Tasks and Leaderboards - This dataset covers below 8 tasks - Named Entity Recognition - Abbreviation Expansion - Relation Extraction - Temporal Information Extraction - Coreference Resolution - Paraphrasing - Summarization - Question Answering ### Languages English ## Dataset Structure ### Data Instances - `synthetic.csv` - Clinical Note - Question - Answer pairs ### Data Fields - `patient_id`: Unique case report id from PMC-Patients - `patient`: Case report text - `question`: GPT-3.5 generated instruction from patient. The used prompt can be checked on github. - `answer`: GPT-3.5 generated answer for given case report and question - `task`: Corresponding category of question. One of above listsed ## Dataset Creation ### Source Data [PMC-Patients](https://huggingface.co/datasets/zhengyun21/PMC-Patients) ### Annotations We used GPT-3.5-turbo (version 0314). You can check the prompts on our github. ## Additional Information ### Licensing Information CC-BY-NC-SA 4.0 ### Citation Information @misc{kweon2023publicly, title={Publicly Shareable Clinical Large Language Model Built on Synthetic Clinical Notes}, author={Sunjun Kweon and Junu Kim and Jiyoun Kim and Sujeong Im and Eunbyeol Cho and Seongsu Bae and Jungwoo Oh and Gyubok Lee and Jong Hak Moon and Seng Chan You and Seungjin Baek and Chang Hoon Han and Yoon Bin Jung and Yohan Jo and Edward Choi}, year={2023}, eprint={2309.00237}, archivePrefix={arXiv}, primaryClass={cs.CL} }
2,528
[ [ -0.0113525390625, -0.0548095703125, 0.057159423828125, 0.0233001708984375, -0.0294036865234375, -0.007678985595703125, -0.023345947265625, -0.00788116455078125, 0.022308349609375, 0.0484619140625, -0.060699462890625, -0.07470703125, -0.035919189453125, 0.015...
keivalya/MedQuad-MedicalQnADataset
2023-10-11T10:50:41.000Z
[ "task_categories:question-answering", "task_categories:text2text-generation", "region:us" ]
keivalya
null
null
6
148
2023-10-11T10:38:26
--- task_categories: - question-answering - text2text-generation pretty_name: MedQuad-KV --- ### Reference: - "A Question-Entailment Approach to Question Answering". Asma Ben Abacha and Dina Demner-Fushman. BMC Bioinformatics, 2019.
233
[ [ -0.0205535888671875, -0.0831298828125, 0.03515625, 0.003078460693359375, -0.01092529296875, -0.01397705078125, 0.0241546630859375, -0.031402587890625, 0.0081024169921875, 0.05059814453125, -0.06671142578125, -0.004474639892578125, -0.04736328125, 0.033386230...
tglcourse/lsun_church_train
2022-10-19T12:20:45.000Z
[ "region:us" ]
tglcourse
null
null
0
147
2022-10-19T12:14:21
--- dataset_info: features: - name: image dtype: image - name: label dtype: class_label: names: 0: '0' 1: '1' 2: '2' 3: '3' 4: '4' 5: '5' 6: '6' 7: '7' 8: '8' 9: '9' 10: a 11: b 12: c 13: d 14: e 15: f splits: - name: test num_bytes: -5033726665.536212 num_examples: 6312 - name: train num_bytes: -94551870824.9868 num_examples: 119915 download_size: 2512548233 dataset_size: -99585597490.52301 --- # Dataset Card for "lsun_church_train" Uploading lsun church train dataset for convenience I've split this into 119915 train and 6312 test but if you want the original test set see https://github.com/fyu/lsun Notebook that I used to download then upload this dataset: https://colab.research.google.com/drive/1_f-D2ENgmELNSB51L1igcnLx63PkveY2?usp=sharing [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
1,102
[ [ -0.03387451171875, -0.0216217041015625, 0.00753021240234375, 0.04278564453125, -0.018157958984375, -0.0271148681640625, -0.00725555419921875, -0.0061187744140625, 0.0201416015625, 0.0269012451171875, -0.02056884765625, -0.006381988525390625, -0.0183563232421875,...
maykcaldas/smiles-transformers
2023-04-04T22:02:47.000Z
[ "size_categories:100M<n<1B", "language:en", "license:mit", "region:us" ]
maykcaldas
null
null
2
147
2023-04-04T13:10:48
--- license: mit language: - en pretty_name: smiles-transformer-dataset size_categories: - 100M<n<1B dataset_info: features: - name: text dtype: string - name: formula dtype: string - name: NumHDonors dtype: int64 - name: NumHAcceptors dtype: int64 - name: MolLogP dtype: float64 - name: NumHeteroatoms dtype: int64 - name: RingCount dtype: int64 - name: NumRotatableBonds dtype: int64 - name: NumAromaticBonds dtype: int64 - name: NumAcidGroups dtype: int64 - name: NumBasicGroups dtype: int64 - name: Apol dtype: float64 splits: - name: train num_bytes: 136431671689 num_examples: 908086717 - name: test num_bytes: 7437928022 num_examples: 50487919 - name: validation num_bytes: 7621324737 num_examples: 50605067 download_size: 34998665406 dataset_size: 151490924448 --- # smiles-transformers dataset TODO: Add references to the datasets we curated ## dataset features - name: text - Molecule SMILES : string - name: formula - Molecular formula : string - name: NumHDonors - Number of hidrogen bond donors : int - name: NumHAcceptors - Number of hidrogen bond acceptors : int - name: MolLogP - Wildman-Crippen LogP : float - name: NumHeteroatoms - Number of hetero atoms: int - name: RingCount - Number of rings : int - name: NumRotatableBonds - Number of rotable bonds : int - name: NumAromaticBonds - Number of aromatic bonds : int - name: NumAcidGroups - Number of acid groups : int - name: NumBasicGroups - Number of basic groups : int - name: Apol ## citation information
1,663
[ [ -0.040557861328125, 0.0098876953125, 0.03790283203125, -0.00646209716796875, -0.011016845703125, 0.016937255859375, -0.0009737014770507812, 0.00907135009765625, 0.0259246826171875, 0.03924560546875, -0.0770263671875, -0.04840087890625, -0.0460205078125, 0.03...
simlaharma/processed_bert_dataset
2023-09-13T17:43:40.000Z
[ "region:us" ]
simlaharma
null
null
0
147
2023-09-13T17:43:09
Entry not found
15
[ [ -0.0214080810546875, -0.01496124267578125, 0.057159423828125, 0.02880859375, -0.0350341796875, 0.046539306640625, 0.052520751953125, 0.005062103271484375, 0.0513916015625, 0.016998291015625, -0.05206298828125, -0.01497650146484375, -0.060302734375, 0.0379638...
jinaai/cities_wiki_clustering
2023-10-27T15:28:11.000Z
[ "language:en", "region:us" ]
jinaai
null
null
1
147
2023-09-20T18:09:08
--- language: - en --- # WikiCities Clustering Dataset This dataset was created from the (Wikipedia)[https://huggingface.co/datasets/wikipedia] training dataset by using a list of countries, retrieving all cities for each country, and then finding their corresponding Wikipedia article in the Wikipedia dataset. Postprocessing removed the last 25th percentile of countries with fewest city articles, and also took a maximum of 200 articles per country. The final set has a total of 126 countries, and a total of 3531 cities. Below is a distribution of cities by country. ![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/64a830cd6cc1a9a131f62619/uYKqKkGUK8tq03KWGZLFD.jpeg)
697
[ [ -0.05096435546875, -0.0061798095703125, 0.045318603515625, 0.00756072998046875, -0.0154266357421875, -0.00736236572265625, -0.0095367431640625, -0.022186279296875, 0.04962158203125, 0.01480865478515625, -0.047515869140625, -0.06817626953125, -0.04425048828125, ...
cnut1648/ScienceQA-LLAVA
2023-10-22T00:49:42.000Z
[ "region:us" ]
cnut1648
null
null
0
147
2023-09-24T04:07:31
--- dataset_info: features: - name: id dtype: string - name: image dtype: image - name: conversations list: - name: from dtype: string - name: value dtype: string - name: question dtype: string - name: context dtype: string - name: choice dtype: string - name: answer dtype: string - name: lecture dtype: string - name: solution dtype: string splits: - name: train num_bytes: 425066440.932 num_examples: 12726 - name: validation num_bytes: 141104381.824 num_examples: 4241 - name: test num_bytes: 139230285.176 num_examples: 4241 download_size: 681887955 dataset_size: 705401107.932 configs: - config_name: default data_files: - split: train path: data/train-* - split: validation path: data/validation-* - split: test path: data/test-* --- # Dataset Card for "ScienceQA-LLAVA" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
1,042
[ [ -0.02801513671875, -0.0030803680419921875, 0.031005859375, 0.01332855224609375, -0.0249481201171875, 0.01436614990234375, 0.03680419921875, -0.00742340087890625, 0.07171630859375, 0.027435302734375, -0.055389404296875, -0.050750732421875, -0.040191650390625, ...
khalidalt/tydiqa-primary
2022-07-28T21:56:04.000Z
[ "task_categories:question-answering", "task_ids:extractive-qa", "annotations_creators:crowdsourced", "language_creators:crowdsourced", "multilinguality:multilingual", "size_categories:unknown", "source_datasets:extended|wikipedia", "language:en", "language:ar", "language:bn", "language:fi", "l...
khalidalt
TyDi QA is a question answering dataset covering 11 typologically diverse languages with 204K question-answer pairs. The languages of TyDi QA are diverse with regard to their typology -- the set of linguistic features that each language expresses -- such that we expect models performing well on this set to generalize across a large number of the languages in the world. It contains language phenomena that would not be found in English-only corpora. To provide a realistic information-seeking task and avoid priming effects, questions are written by people who want to know the answer, but don’t know the answer yet, (unlike SQuAD and its descendents) and the data is collected directly in each language without the use of translation (unlike MLQA and XQuAD).
@article{tydiqa, title = {TyDi QA: A Benchmark for Information-Seeking Question Answering in Typologically Diverse Languages}, author = {Jonathan H. Clark and Eunsol Choi and Michael Collins and Dan Garrette and Tom Kwiatkowski and Vitaly Nikolaev and Jennimaria Palomaki} year = {2020}, journal = {Transactions of the Association for Computational Linguistics} }
0
146
2022-06-16T17:20:46
--- pretty_name: TyDi QA annotations_creators: - crowdsourced language_creators: - crowdsourced language: - en - ar - bn - fi - id - ja - sw - ko - ru - te - th license: - apache-2.0 multilinguality: - multilingual size_categories: - unknown source_datasets: - extended|wikipedia task_categories: - question-answering task_ids: - extractive-qa paperswithcode_id: tydi-qa --- # Dataset Card for "tydiqa" ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [https://github.com/google-research-datasets/tydiqa](https://github.com/google-research-datasets/tydiqa) - **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Size of downloaded dataset files:** 3726.74 MB - **Size of the generated dataset:** 5812.92 MB - **Total amount of disk used:** 9539.67 MB ### Dataset Summary TyDi QA is a question answering dataset covering 11 typologically diverse languages with 204K question-answer pairs. The languages of TyDi QA are diverse with regard to their typology -- the set of linguistic features that each language expresses -- such that we expect models performing well on this set to generalize across a large number of the languages in the world. It contains language phenomena that would not be found in English-only corpora. To provide a realistic information-seeking task and avoid priming effects, questions are written by people who want to know the answer, but don’t know the answer yet, (unlike SQuAD and its descendents) and the data is collected directly in each language without the use of translation (unlike MLQA and XQuAD). ### Supported Tasks and Leaderboards [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Languages [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Dataset Structure ### Data Instances #### primary_task - **Size of downloaded dataset files:** 1863.37 MB - **Size of the generated dataset:** 5757.59 MB - **Total amount of disk used:** 7620.96 MB An example of 'validation' looks as follows. ``` This example was too long and was cropped: { "annotations": { "minimal_answers_end_byte": [-1, -1, -1], "minimal_answers_start_byte": [-1, -1, -1], "passage_answer_candidate_index": [-1, -1, -1], "yes_no_answer": ["NONE", "NONE", "NONE"] }, "document_plaintext": "\"\\nรองศาสตราจารย์[1] หม่อมราชวงศ์สุขุมพันธุ์ บริพัตร (22 กันยายน 2495 -) ผู้ว่าราชการกรุงเทพมหานครคนที่ 15 อดีตรองหัวหน้าพรรคปร...", "document_title": "หม่อมราชวงศ์สุขุมพันธุ์ บริพัตร", "document_url": "\"https://th.wikipedia.org/wiki/%E0%B8%AB%E0%B8%A1%E0%B9%88%E0%B8%AD%E0%B8%A1%E0%B8%A3%E0%B8%B2%E0%B8%8A%E0%B8%A7%E0%B8%87%E0%B8%...", "language": "thai", "passage_answer_candidates": "{\"plaintext_end_byte\": [494, 1779, 2931, 3904, 4506, 5588, 6383, 7122, 8224, 9375, 10473, 12563, 15134, 17765, 19863, 21902, 229...", "question_text": "\"หม่อมราชวงศ์สุขุมพันธุ์ บริพัตร เรียนจบจากที่ไหน ?\"..." } ``` ### Data Fields The data fields are the same among all splits. #### primary_task - `passage_answer_candidates`: a dictionary feature containing: - `plaintext_start_byte`: a `int32` feature. - `plaintext_end_byte`: a `int32` feature. - `question_text`: a `string` feature. - `document_title`: a `string` feature. - `language`: a `string` feature. - `annotations`: a dictionary feature containing: - `passage_answer_candidate_index`: a `int32` feature. - `minimal_answers_start_byte`: a `int32` feature. - `minimal_answers_end_byte`: a `int32` feature. - `yes_no_answer`: a `string` feature. - `document_plaintext`: a `string` feature. - `document_url`: a `string` feature. ### Data Splits | name | train | validation | | -------------- | -----: | ---------: | | primary_task | 166916 | 18670 | ## Dataset Creation ### Curation Rationale [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Source Data #### Initial Data Collection and Normalization [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the source language producers? [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Annotations #### Annotation process [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the annotators? [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Personal and Sensitive Information [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Discussion of Biases [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Other Known Limitations [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Additional Information ### Dataset Curators [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Licensing Information [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Citation Information ``` @article{tydiqa, title = {TyDi QA: A Benchmark for Information-Seeking Question Answering in Typologically Diverse Languages}, author = {Jonathan H. Clark and Eunsol Choi and Michael Collins and Dan Garrette and Tom Kwiatkowski and Vitaly Nikolaev and Jennimaria Palomaki} year = {2020}, journal = {Transactions of the Association for Computational Linguistics} } ``` ``` @inproceedings{ruder-etal-2021-xtreme, title = "{XTREME}-{R}: Towards More Challenging and Nuanced Multilingual Evaluation", author = "Ruder, Sebastian and Constant, Noah and Botha, Jan and Siddhant, Aditya and Firat, Orhan and Fu, Jinlan and Liu, Pengfei and Hu, Junjie and Garrette, Dan and Neubig, Graham and Johnson, Melvin", booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2021", address = "Online and Punta Cana, Dominican Republic", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2021.emnlp-main.802", doi = "10.18653/v1/2021.emnlp-main.802", pages = "10215--10245", } } ```
8,517
[ [ -0.0472412109375, -0.050811767578125, 0.0194244384765625, 0.006633758544921875, -0.01354217529296875, 0.00836181640625, -0.026947021484375, -0.02557373046875, 0.044677734375, 0.0302886962890625, -0.053253173828125, -0.0670166015625, -0.033721923828125, 0.016...
ivelin/ui_refexp_saved
2023-01-08T03:35:06.000Z
[ "task_categories:image-to-text", "size_categories:10K<n<100K", "language:en", "license:cc-by-4.0", "region:us" ]
ivelin
null
null
6
146
2023-01-08T03:10:23
--- dataset_info: features: - name: image dtype: image - name: image_id dtype: string - name: image_file_path dtype: string - name: prompt dtype: string - name: target_bounding_box dtype: string splits: - name: train num_bytes: 1910805137.216 num_examples: 15624 - name: validation num_bytes: 60403386 num_examples: 471 - name: test num_bytes: 69078983 num_examples: 565 download_size: 1246541216 dataset_size: 2040287506.216 license: cc-by-4.0 task_categories: - image-to-text language: - en pretty_name: UIBert Referring Expressions Dataset size_categories: - 10K<n<100K --- # Dataset Card for "ui_refexp_saved_Jan2023" This is a saved snapshot of the dynamically generated [UI Bert](https://huggingface.co/datasets/ivelin/ui_refexp) dataset. Much faster download time than the dynamic version which pulls and filters large data files from remote sources.
927
[ [ -0.0455322265625, -0.02288818359375, -0.0139312744140625, 0.003665924072265625, -0.007312774658203125, 0.004032135009765625, 0.0249176025390625, -0.0230712890625, 0.054595947265625, 0.049835205078125, -0.07763671875, -0.007648468017578125, 0.0140228271484375, ...
christinacdl/clickbait_notclickbait_dataset
2023-06-22T14:42:37.000Z
[ "task_categories:text-classification", "size_categories:10K<n<100K", "language:en", "license:apache-2.0", "region:us" ]
christinacdl
null
null
0
146
2023-06-22T14:38:07
--- license: apache-2.0 task_categories: - text-classification language: - en size_categories: - 10K<n<100K --- 0 : not clickbait 1 : clickbait Dataset cleaned from duplicates and kept only the first appearing text. Dataset split into train and test sets using 0.2 split ratio. Dataset split into test and validation sets using 0.2 split ratio. Size of training set: 43.802 Size of test set: 8.760 Size of validation set: 2.191
437
[ [ -0.0252532958984375, -0.00774383544921875, -0.0179595947265625, 0.03924560546875, -0.03192138671875, -0.016571044921875, -0.0020694732666015625, -0.0029392242431640625, 0.02508544921875, 0.0435791015625, -0.031402587890625, -0.00476837158203125, -0.03564453125, ...
openaccess-ai-collective/oasst1-guanaco-extended-sharegpt
2023-10-17T17:24:21.000Z
[ "region:us" ]
openaccess-ai-collective
null
null
0
146
2023-10-17T17:21:07
Entry not found
15
[ [ -0.021392822265625, -0.01494598388671875, 0.05718994140625, 0.028839111328125, -0.0350341796875, 0.046539306640625, 0.052490234375, 0.00507354736328125, 0.051361083984375, 0.01702880859375, -0.052093505859375, -0.01494598388671875, -0.06036376953125, 0.03790...
allegro_reviews
2022-11-18T17:41:41.000Z
[ "task_categories:text-classification", "task_ids:sentiment-scoring", "task_ids:text-scoring", "annotations_creators:found", "language_creators:found", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:pl", "license:cc-by-sa-4.0", "region:us" ]
null
Allegro Reviews is a sentiment analysis dataset, consisting of 11,588 product reviews written in Polish and extracted from Allegro.pl - a popular e-commerce marketplace. Each review contains at least 50 words and has a rating on a scale from one (negative review) to five (positive review). We recommend using the provided train/dev/test split. The ratings for the test set reviews are kept hidden. You can evaluate your model using the online evaluation tool available on klejbenchmark.com.
@inproceedings{rybak-etal-2020-klej, title = "{KLEJ}: Comprehensive Benchmark for Polish Language Understanding", author = "Rybak, Piotr and Mroczkowski, Robert and Tracz, Janusz and Gawlik, Ireneusz", booktitle = "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", month = jul, year = "2020", address = "Online", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/2020.acl-main.111", pages = "1191--1201", }
1
145
2022-03-02T23:29:22
--- annotations_creators: - found language_creators: - found language: - pl license: - cc-by-sa-4.0 multilinguality: - monolingual size_categories: - 10K<n<100K source_datasets: - original task_categories: - text-classification task_ids: - sentiment-scoring - text-scoring paperswithcode_id: allegro-reviews pretty_name: Allegro Reviews dataset_info: features: - name: text dtype: string - name: rating dtype: float32 splits: - name: train num_bytes: 4899539 num_examples: 9577 - name: test num_bytes: 514527 num_examples: 1006 - name: validation num_bytes: 515785 num_examples: 1002 download_size: 2314847 dataset_size: 5929851 --- # Dataset Card for [Dataset Name] ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://klejbenchmark.com/ - **Repository:** https://github.com/allegro/klejbenchmark-allegroreviews - **Paper:** KLEJ: Comprehensive Benchmark for Polish Language Understanding (Rybak, Piotr and Mroczkowski, Robert and Tracz, Janusz and Gawlik, Ireneusz) - **Leaderboard:** https://klejbenchmark.com/leaderboard/ - **Point of Contact:** klejbenchmark@allegro.pl ### Dataset Summary Allegro Reviews is a sentiment analysis dataset, consisting of 11,588 product reviews written in Polish and extracted from Allegro.pl - a popular e-commerce marketplace. Each review contains at least 50 words and has a rating on a scale from one (negative review) to five (positive review). We recommend using the provided train/dev/test split. The ratings for the test set reviews are kept hidden. You can evaluate your model using the online evaluation tool available on klejbenchmark.com. ### Supported Tasks and Leaderboards Product reviews sentiment analysis. https://klejbenchmark.com/leaderboard/ ### Languages Polish ## Dataset Structure ### Data Instances Two tsv files (train, dev) with two columns (text, rating) and one (test) with just one (text). ### Data Fields - text: a product review of at least 50 words - rating: product rating of a scale of one (negative review) to five (positive review) ### Data Splits Data is splitted in train/dev/test split. ## Dataset Creation ### Curation Rationale This dataset is one of nine evaluation tasks to improve polish language processing. ### Source Data #### Initial Data Collection and Normalization The Allegro Reviews is a set of product reviews from a popular e-commerce marketplace (Allegro.pl). #### Who are the source language producers? Customers of an e-commerce marketplace. ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators Allegro Machine Learning Research team klejbenchmark@allegro.pl ### Licensing Information Dataset licensed under CC BY-SA 4.0 ### Citation Information @inproceedings{rybak-etal-2020-klej, title = "{KLEJ}: Comprehensive Benchmark for Polish Language Understanding", author = "Rybak, Piotr and Mroczkowski, Robert and Tracz, Janusz and Gawlik, Ireneusz", booktitle = "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", month = jul, year = "2020", address = "Online", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/2020.acl-main.111", pages = "1191--1201", } ### Contributions Thanks to [@abecadel](https://github.com/abecadel) for adding this dataset.
4,786
[ [ -0.03375244140625, -0.05517578125, 0.0205535888671875, 0.042205810546875, -0.02349853515625, 0.0027751922607421875, -0.045745849609375, -0.042938232421875, 0.0296783447265625, 0.02618408203125, -0.06036376953125, -0.08953857421875, -0.046875, 0.0041923522949...
grail_qa
2022-11-18T20:04:54.000Z
[ "task_categories:question-answering", "annotations_creators:crowdsourced", "language_creators:found", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:en", "license:unknown", "knowledge-base-qa", "arxiv:2011.07743", "region:us" ]
null
Strongly Generalizable Question Answering (GrailQA) is a new large-scale, high-quality dataset for question answering on knowledge bases (KBQA) on Freebase with 64,331 questions annotated with both answers and corresponding logical forms in different syntax (i.e., SPARQL, S-expression, etc.). It can be used to test three levels of generalization in KBQA: i.i.d., compositional, and zero-shot.
@misc{gu2020iid, title={Beyond I.I.D.: Three Levels of Generalization for Question Answering on Knowledge Bases}, author={Yu Gu and Sue Kase and Michelle Vanni and Brian Sadler and Percy Liang and Xifeng Yan and Yu Su}, year={2020}, eprint={2011.07743}, archivePrefix={arXiv}, primaryClass={cs.CL} }
2
145
2022-03-02T23:29:22
--- annotations_creators: - crowdsourced language_creators: - found language: - en license: - unknown multilinguality: - monolingual size_categories: - 10K<n<100K source_datasets: - original task_categories: - question-answering task_ids: [] paperswithcode_id: null pretty_name: Grail QA tags: - knowledge-base-qa dataset_info: features: - name: qid dtype: string - name: question dtype: string - name: answer sequence: - name: answer_type dtype: string - name: answer_argument dtype: string - name: entity_name dtype: string - name: function dtype: string - name: num_node dtype: int32 - name: num_edge dtype: int32 - name: graph_query struct: - name: nodes sequence: - name: nid dtype: int32 - name: node_type dtype: string - name: id dtype: string - name: class dtype: string - name: friendly_name dtype: string - name: question_node dtype: int32 - name: function dtype: string - name: edges sequence: - name: start dtype: int32 - name: end dtype: int32 - name: relation dtype: string - name: friendly_name dtype: string - name: sparql_query dtype: string - name: domains sequence: string - name: level dtype: string - name: s_expression dtype: string splits: - name: train num_bytes: 69433121 num_examples: 44337 - name: validation num_bytes: 9800544 num_examples: 6763 - name: test num_bytes: 2167256 num_examples: 13231 download_size: 17636773 dataset_size: 81400921 --- # Dataset Card for Grail QA ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [Grail QA](https://dki-lab.github.io/GrailQA/) - **Repository:** - **Paper:** [GrailQA paper (Gu et al. '20)](https://arxiv.org/abs/2011.07743) - **Leaderboard:** - **Point of Contact:** ### Dataset Summary #### What is GrailQA? Strongly Generalizable Question Answering (GrailQA) is a new large-scale, high-quality dataset for question answering on knowledge bases (KBQA) on Freebase with 64,331 questions annotated with both answers and corresponding logical forms in different syntax (i.e., SPARQL, S-expression, etc.). It can be used to test three levels of generalization in KBQA: i.i.d., compositional, and zero-shot. #### Why GrailQA? GrailQA is by far the largest crowdsourced KBQA dataset with questions of high diversity (i.e., questions in GrailQA can have up to 4 relations and optionally have a function from counting, superlatives and comparatives). It also has the highest coverage over Freebase; it widely covers 3,720 relations and 86 domains from Freebase. Last but not least, our meticulous data split allows GrailQA to test not only i.i.d. generalization, but also compositional generalization and zero-shot generalization, which are critical for practical KBQA systems. ### Supported Tasks and Leaderboards [More Information Needed] ### Languages English and Graph query ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields - `qid` (`str`) - `question` (`str`) - `answer` (`List`): Defaults to `[]` in test split. - `answer_type` (`str`) - `answer_argument` (`str`) - `entity_name` (`str`): Defauts to `""` if `answer_type` is not `Entity`. - `function` (`string`): Defaults to `""` in test split. - `num_node` (`int`): Defaults to `-1` in test split. - `num_edge` (`int`): Defaults to `-1` in test split. - `graph_query` (`Dict`) - `nodes` (`List`): Defaults to `[]` in test split. - `nid` (`int`) - `node_type` (`str`) - `id` (`str`) - `class` (`str`) - `friendly_name` (`str`) - `question_node` (`int`) - `function` (`str`) - `edges` (`List`): Defaults to `[]` in test split. - `start` (`int`) - `end` (`int`) - `relation` (`str`) - `friendly_name` (`str`) - `sqarql_query` (`str`): Defaults to `""` in test split. - `domains` (`List[str]`): Defaults to `[]` in test split. - `level` (`str`): Only available in validation split. Defaults to `""` in others. - `s_expression` (`str`): Defaults to `""` in test split. **Notes:** Only `qid` and `question` available in test split. ### Data Splits Dataset Split | Number of Instances in Split --------------|-------------------------------------------- Train | 44,337 Validation | 6,763 Test | 13,231 ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions Thanks to [@mattbui](https://github.com/mattbui) for adding this dataset.
6,386
[ [ -0.04168701171875, -0.05731201171875, 0.00702667236328125, 0.00972747802734375, 0.0013675689697265625, 0.007061004638671875, -0.0009016990661621094, -0.0230560302734375, 0.0180206298828125, 0.0302581787109375, -0.05621337890625, -0.0648193359375, -0.038269042968...
mozilla-foundation/common_voice_1_0
2023-07-29T15:59:56.000Z
[ "task_categories:automatic-speech-recognition", "annotations_creators:crowdsourced", "language_creators:crowdsourced", "multilinguality:multilingual", "source_datasets:extended|common_voice", "license:cc0-1.0", "arxiv:1912.06670", "region:us" ]
mozilla-foundation
null
@inproceedings{commonvoice:2020, author = {Ardila, R. and Branson, M. and Davis, K. and Henretty, M. and Kohler, M. and Meyer, J. and Morais, R. and Saunders, L. and Tyers, F. M. and Weber, G.}, title = {Common Voice: A Massively-Multilingual Speech Corpus}, booktitle = {Proceedings of the 12th Conference on Language Resources and Evaluation (LREC 2020)}, pages = {4211--4215}, year = 2020 }
4
145
2022-03-02T23:29:22
--- annotations_creators: - crowdsourced language_creators: - crowdsourced license: - cc0-1.0 multilinguality: - multilingual size_categories: br: - 1K<n<10K ca: - 10K<n<100K cnh: - 1K<n<10K cv: - 1K<n<10K cy: - 10K<n<100K de: - 100K<n<1M en: - 100K<n<1M eo: - 1K<n<10K et: - n<1K fr: - 10K<n<100K ga-IE: - 1K<n<10K it: - 10K<n<100K kab: - 100K<n<1M ky: - 1K<n<10K nl: - 10K<n<100K sl: - 1K<n<10K tr: - 1K<n<10K tt: - 10K<n<100K zh-TW: - 10K<n<100K source_datasets: - extended|common_voice paperswithcode_id: common-voice pretty_name: Common Voice Corpus 1 language_bcp47: - br - ca - cnh - cv - cy - de - en - eo - et - fr - ga-IE - it - kab - ky - nl - sl - tr - tt - zh-TW extra_gated_prompt: By clicking on “Access repository” below, you also agree to not attempt to determine the identity of speakers in the Common Voice dataset. task_categories: - automatic-speech-recognition --- # Dataset Card for Common Voice Corpus 1 ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://commonvoice.mozilla.org/en/datasets - **Repository:** https://github.com/common-voice/common-voice - **Paper:** https://arxiv.org/abs/1912.06670 - **Leaderboard:** https://paperswithcode.com/dataset/common-voice - **Point of Contact:** [Anton Lozhkov](mailto:anton@huggingface.co) ### Dataset Summary The Common Voice dataset consists of a unique MP3 and corresponding text file. Many of the 1368 recorded hours in the dataset also include demographic metadata like age, sex, and accent that can help improve the accuracy of speech recognition engines. The dataset currently consists of 1096 validated hours in 19 languages, but more voices and languages are always added. Take a look at the [Languages](https://commonvoice.mozilla.org/en/languages) page to request a language or start contributing. ### Supported Tasks and Leaderboards The results for models trained on the Common Voice datasets are available via the [🤗 Speech Bench](https://huggingface.co/spaces/huggingface/hf-speech-bench) ### Languages ``` Breton, Catalan, Chinese (Taiwan), Chuvash, Dutch, English, Esperanto, Estonian, French, German, Hakha Chin, Irish, Italian, Kabyle, Kyrgyz, Slovenian, Tatar, Turkish, Welsh ``` ## Dataset Structure ### Data Instances A typical data point comprises the `path` to the audio file and its `sentence`. Additional fields include `accent`, `age`, `client_id`, `up_votes`, `down_votes`, `gender`, `locale` and `segment`. ```python { 'client_id': 'd59478fbc1ee646a28a3c652a119379939123784d99131b865a89f8b21c81f69276c48bd574b81267d9d1a77b83b43e6d475a6cfc79c232ddbca946ae9c7afc5', 'path': 'et/clips/common_voice_et_18318995.mp3', 'audio': { 'path': 'et/clips/common_voice_et_18318995.mp3', 'array': array([-0.00048828, -0.00018311, -0.00137329, ..., 0.00079346, 0.00091553, 0.00085449], dtype=float32), 'sampling_rate': 48000 }, 'sentence': 'Tasub kokku saada inimestega, keda tunned juba ammust ajast saati.', 'up_votes': 2, 'down_votes': 0, 'age': 'twenties', 'gender': 'male', 'accent': '', 'locale': 'et', 'segment': '' } ``` ### Data Fields `client_id` (`string`): An id for which client (voice) made the recording `path` (`string`): The path to the audio file `audio` (`dict`): A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column: `dataset[0]["audio"]` the audio file is automatically decoded and resampled to `dataset.features["audio"].sampling_rate`. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the `"audio"` column, *i.e.* `dataset[0]["audio"]` should **always** be preferred over `dataset["audio"][0]`. `sentence` (`string`): The sentence the user was prompted to speak `up_votes` (`int64`): How many upvotes the audio file has received from reviewers `down_votes` (`int64`): How many downvotes the audio file has received from reviewers `age` (`string`): The age of the speaker (e.g. `teens`, `twenties`, `fifties`) `gender` (`string`): The gender of the speaker `accent` (`string`): Accent of the speaker `locale` (`string`): The locale of the speaker `segment` (`string`): Usually an empty field ### Data Splits The speech material has been subdivided into portions for dev, train, test, validated, invalidated, reported and other. The validated data is data that has been validated with reviewers and received upvotes that the data is of high quality. The invalidated data is data has been invalidated by reviewers and received downvotes indicating that the data is of low quality. The reported data is data that has been reported, for different reasons. The other data is data that has not yet been reviewed. The dev, test, train are all data that has been reviewed, deemed of high quality and split into dev, test and train. ## Data Preprocessing Recommended by Hugging Face The following are data preprocessing steps advised by the Hugging Face team. They are accompanied by an example code snippet that shows how to put them to practice. Many examples in this dataset have trailing quotations marks, e.g _“the cat sat on the mat.“_. These trailing quotation marks do not change the actual meaning of the sentence, and it is near impossible to infer whether a sentence is a quotation or not a quotation from audio data alone. In these cases, it is advised to strip the quotation marks, leaving: _the cat sat on the mat_. In addition, the majority of training sentences end in punctuation ( . or ? or ! ), whereas just a small proportion do not. In the dev set, **almost all** sentences end in punctuation. Thus, it is recommended to append a full-stop ( . ) to the end of the small number of training examples that do not end in punctuation. ```python from datasets import load_dataset ds = load_dataset("mozilla-foundation/common_voice_1_0", "en", use_auth_token=True) def prepare_dataset(batch): """Function to preprocess the dataset with the .map method""" transcription = batch["sentence"] if transcription.startswith('"') and transcription.endswith('"'): # we can remove trailing quotation marks as they do not affect the transcription transcription = transcription[1:-1] if transcription[-1] not in [".", "?", "!"]: # append a full-stop to sentences that do not end in punctuation transcription = transcription + "." batch["sentence"] = transcription return batch ds = ds.map(prepare_dataset, desc="preprocess dataset") ``` ## Dataset Creation ### Curation Rationale [Needs More Information] ### Source Data #### Initial Data Collection and Normalization [Needs More Information] #### Who are the source language producers? [Needs More Information] ### Annotations #### Annotation process [Needs More Information] #### Who are the annotators? [Needs More Information] ### Personal and Sensitive Information The dataset consists of people who have donated their voice online. You agree to not attempt to determine the identity of speakers in the Common Voice dataset. ## Considerations for Using the Data ### Social Impact of Dataset The dataset consists of people who have donated their voice online. You agree to not attempt to determine the identity of speakers in the Common Voice dataset. ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information Public Domain, [CC-0](https://creativecommons.org/share-your-work/public-domain/cc0/) ### Citation Information ``` @inproceedings{commonvoice:2020, author = {Ardila, R. and Branson, M. and Davis, K. and Henretty, M. and Kohler, M. and Meyer, J. and Morais, R. and Saunders, L. and Tyers, F. M. and Weber, G.}, title = {Common Voice: A Massively-Multilingual Speech Corpus}, booktitle = {Proceedings of the 12th Conference on Language Resources and Evaluation (LREC 2020)}, pages = {4211--4215}, year = 2020 } ```
9,237
[ [ -0.04058837890625, -0.05401611328125, 0.0089263916015625, 0.03277587890625, -0.017913818359375, 0.0026531219482421875, -0.043365478515625, -0.0165252685546875, 0.0330810546875, 0.040802001953125, -0.055206298828125, -0.06964111328125, -0.03277587890625, 0.01...
openclimatefix/nimrod-uk-1km
2022-06-08T14:49:03.000Z
[ "region:us" ]
openclimatefix
This dataset contains UK Nimrod rainfall radar data for 2016-2019 as used in the Skillful Precipitation Nowcasting Using Deep Generative Model of Radar paper by DeepMind.
@article{ravuris2021skillful, author={Suman Ravuri and Karel Lenc and Matthew Willson and Dmitry Kangin and Remi Lam and Piotr Mirowski and Megan Fitzsimons and Maria Athanassiadou and Sheleem Kashem and Sam Madge and Rachel Prudden Amol Mandhane and Aidan Clark and Andrew Brock and Karen Simonyan and Raia Hadsell and Niall Robinson Ellen Clancy and Alberto Arribas† and Shakir Mohamed}, title={Skillful Precipitation Nowcasting using Deep Generative Models of Radar}, journal={Nature}, volume={597}, pages={672--677}, year={2021} }
7
145
2022-03-02T23:29:22
[Needs More Information] # Dataset Card for UK Nimrod 1km Rainfall Radar Dataset ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-instances) - [Data Splits](#data-instances) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) ## Dataset Description - **Homepage:** https://github.com/deepmind/deepmind-research/tree/master/nowcasting - **Repository:** https://huggingface.co/datasets/openclimatefix/nimrod-uk-1km - **Paper:** [Skillful Precipitation Nowcasting using Deep Generative Models of Radar, Ravuri et al. 2021](https://www.nature.com/articles/s41586-021-03854-z) - **Leaderboard:** [Needs More Information] - **Point of Contact:** [Jacob Bieker](mailto:jacob@openclimatefix.org) ### Dataset Summary This dataset contains UK Nimrod rainfall radar data for 2016-2019 as used in the Skillful Precipitation Nowcasting Using Deep Generative Model of Radar paper by DeepMind. This dataset is an unofficial mirror of the open sourced dataset available here: gs://dm-nowcasting/datasets/nowcasting_open_source_osgb/nimrod_osgb_1000m_yearly_splits/radar/20200718 ### Supported Tasks and Leaderboards [Needs More Information] ### Languages [Needs More Information] ## Dataset Structure ### Data Instances [Needs More Information] ### Data Fields [Needs More Information] ### Data Splits The train data is all days except the first of each month for 2016-2018. The validation is the first of every month for 2016-2018. The test data is all of 2019. ## Dataset Creation ### Curation Rationale This dataset was originally created for training a generative model for forecasting rainfall percipitation. ### Source Data #### Initial Data Collection and Normalization DeepMind initially collected the data from the UK Met Office and post processed it into this dataset. #### Who are the source language producers? [Needs More Information] ### Annotations #### Annotation process [Needs More Information] #### Who are the annotators? [Needs More Information] ### Personal and Sensitive Information [Needs More Information] ## Considerations for Using the Data ### Social Impact of Dataset [Needs More Information] ### Discussion of Biases [Needs More Information] ### Other Known Limitations [Needs More Information] ## Additional Information ### Dataset Curators [Needs More Information] ### Licensing Information The provided post-processed nowcasting dataset is licensed under a Creative Commons Attribution 4.0 International License and it contains public sector information licensed by the Met Office under the Open Government Licence v3.0. ### Citation Information Cite DeepMind, and the authors of [Skillful Precipitation Nowcasting using Deep Generative Models of Radar, Ravuri et al. 2021](https://www.nature.com/articles/s41586-021-03854-z).
3,648
[ [ -0.037445068359375, -0.020538330078125, 0.03253173828125, 0.022674560546875, -0.047271728515625, 0.00344085693359375, -0.00730133056640625, -0.036865234375, 0.000942230224609375, 0.0260467529296875, -0.05560302734375, -0.053375244140625, -0.06365966796875, -...
keremberke/indoor-scene-classification
2023-01-16T21:04:18.000Z
[ "task_categories:image-classification", "roboflow", "roboflow2huggingface", "Retail", "Pest Control", "Benchmark", "region:us" ]
keremberke
null
\
0
145
2023-01-16T20:56:17
--- task_categories: - image-classification tags: - roboflow - roboflow2huggingface - Retail - Pest Control - Benchmark --- <div align="center"> <img width="640" alt="keremberke/indoor-scene-classification" src="https://huggingface.co/datasets/keremberke/indoor-scene-classification/resolve/main/thumbnail.jpg"> </div> ### Dataset Labels ``` ['meeting_room', 'cloister', 'stairscase', 'restaurant', 'hairsalon', 'children_room', 'dining_room', 'lobby', 'museum', 'laundromat', 'computerroom', 'grocerystore', 'hospitalroom', 'buffet', 'office', 'warehouse', 'garage', 'bookstore', 'florist', 'locker_room', 'inside_bus', 'subway', 'fastfood_restaurant', 'auditorium', 'studiomusic', 'airport_inside', 'pantry', 'restaurant_kitchen', 'casino', 'movietheater', 'kitchen', 'waitingroom', 'artstudio', 'toystore', 'kindergarden', 'trainstation', 'bedroom', 'mall', 'corridor', 'bar', 'classroom', 'shoeshop', 'dentaloffice', 'videostore', 'laboratorywet', 'tv_studio', 'church_inside', 'operating_room', 'jewelleryshop', 'bathroom', 'clothingstore', 'closet', 'winecellar', 'livingroom', 'nursery', 'gameroom', 'inside_subway', 'deli', 'bakery', 'library', 'prisoncell', 'gym', 'concert_hall', 'greenhouse', 'elevator', 'poolinside', 'bowling'] ``` ### Number of Images ```json {'train': 10885, 'test': 1558, 'valid': 3128} ``` ### How to Use - Install [datasets](https://pypi.org/project/datasets/): ```bash pip install datasets ``` - Load the dataset: ```python from datasets import load_dataset ds = load_dataset("keremberke/indoor-scene-classification", name="full") example = ds['train'][0] ``` ### Roboflow Dataset Page [https://universe.roboflow.com/popular-benchmarks/mit-indoor-scene-recognition/dataset/5](https://universe.roboflow.com/popular-benchmarks/mit-indoor-scene-recognition/dataset/5?ref=roboflow2huggingface) ### Citation ``` ``` ### License MIT ### Dataset Summary This dataset was exported via roboflow.com on October 24, 2022 at 4:09 AM GMT Roboflow is an end-to-end computer vision platform that helps you * collaborate with your team on computer vision projects * collect & organize images * understand unstructured image data * annotate, and create datasets * export, train, and deploy computer vision models * use active learning to improve your dataset over time It includes 15571 images. Indoor-scenes are annotated in folder format. The following pre-processing was applied to each image: * Auto-orientation of pixel data (with EXIF-orientation stripping) * Resize to 416x416 (Stretch) No image augmentation techniques were applied.
2,588
[ [ -0.034759521484375, -0.016143798828125, 0.007663726806640625, -0.01128387451171875, -0.0029659271240234375, -0.0041961669921875, -0.0007305145263671875, -0.03265380859375, -0.0011224746704101562, 0.005229949951171875, -0.042633056640625, -0.0648193359375, -0.036...
distil-whisper/common_voice_13_0
2023-09-25T10:30:13.000Z
[ "task_categories:automatic-speech-recognition", "language:en", "license:cc0-1.0", "region:us" ]
distil-whisper
null
@inproceedings{commonvoice:2020, author = {Ardila, R. and Branson, M. and Davis, K. and Henretty, M. and Kohler, M. and Meyer, J. and Morais, R. and Saunders, L. and Tyers, F. M. and Weber, G.}, title = {Common Voice: A Massively-Multilingual Speech Corpus}, booktitle = {Proceedings of the 12th Conference on Language Resources and Evaluation (LREC 2020)}, pages = {4211--4215}, year = 2020 }
1
145
2023-04-17T16:51:15
--- license: cc0-1.0 task_categories: - automatic-speech-recognition language: - en -pretty_name: Common Voice 13 --- # Distil Whisper: Common Voice 13 This is a variant of the [Common Voice 13](https://huggingface.co/datasets/mozilla_foundation/common_voice_13) dataset, augmented to return the pseudo-labelled Whisper Transcriptions alongside the original dataset elements. The pseudo-labelled transcriptions were generated by labelling the input audio data with the Whisper [large-v2](https://huggingface.co/openai/whisper-large-v2) model with *greedy* sampling. For information on how the original dataset was curated, refer to the original [dataset card](https://huggingface.co/datasets/mozilla_foundation/common_voice_13). ## Standalone Usage First, install the latest version of the 🤗 Datasets package: ```bash pip install --upgrade pip pip install --upgrade datasets[audio] ``` The dataset can be downloaded and pre-processed on disk using the [`load_dataset`](https://huggingface.co/docs/datasets/v2.14.5/en/package_reference/loading_methods#datasets.load_dataset) function: ```python from datasets import load_dataset dataset = load_dataset("distil-whisper/common_voice_13_0", "en") # take the first sample of the validation set sample = dataset["validation"][0] ``` It can also be streamed directly from the Hub using Datasets' [streaming mode](https://huggingface.co/blog/audio-datasets#streaming-mode-the-silver-bullet). Loading a dataset in streaming mode loads individual samples of the dataset at a time, rather than downloading the entire dataset to disk: ```python from datasets import load_dataset dataset = load_dataset("distil-whisper/common_voice_13_0", "en", streaming=True) # take the first sample of the validation set sample = next(iter(dataset["validation"])) ``` ## Distil Whisper Usage To use this dataset to reproduce a Distil Whisper training run, refer to the instructions on the [Distil Whisper repository](https://github.com/huggingface/distil-whisper#training). ## License This dataset is licensed under cc0-1.0.
2,071
[ [ -0.019500732421875, -0.041839599609375, 0.007236480712890625, 0.0411376953125, -0.01525115966796875, 0.00521087646484375, -0.01364898681640625, -0.017333984375, 0.0304718017578125, 0.0259857177734375, -0.07110595703125, -0.030181884765625, -0.041961669921875, ...
shortbread/tickers
2023-11-02T14:58:21.000Z
[ "size_categories:1K<n<10K", "language:en", "finance", "region:us" ]
shortbread
null
null
0
145
2023-07-22T01:11:35
--- language: - en tags: - finance size_categories: - 1K<n<10K last_updated: 2023-07-20 --- Tickers =======
112
[ [ 0.01415252685546875, -0.01055908203125, 0.0264434814453125, 0.0191802978515625, -0.0416259765625, 0.0013093948364257812, 0.041961669921875, -0.034332275390625, 0.0633544921875, 0.032867431640625, -0.01172637939453125, -0.01323699951171875, -0.04974365234375, ...
mb23/GraySpectrogram
2023-10-20T07:59:13.000Z
[ "size_categories:10K<n<100K", "language:en", "license:cc-by-sa-4.0", "music", "spectrogram", "region:us" ]
mb23
null
null
0
145
2023-10-07T05:47:09
--- license: cc-by-sa-4.0 language: - en tags: - music - spectrogram size_categories: - 10K<n<100K --- # Google/MusicCapsをスペクトログラムにしたデータ。 ## Dataset information <table> <thead> <td>画像</td> <td>caption</td> <td>data_idx</td> <td>number</td> </thead> <tbody> <tr> <td>1025px × 216px</td> <td>音楽の説明</td> <td>どのデータから生成されたデータか</td> <td>5秒ずつ区切ったデータのうち、何番目か</td> </tr> </tbody> </table> ## How this dataset was made * コード:https://colab.research.google.com/drive/13m792FEoXszj72viZuBtusYRUL1z6Cu2?usp=sharing * 参考にしたKaggle Notebook : https://www.kaggle.com/code/osanseviero/musiccaps-explorer ```python from PIL import Image import IPython.display import cv2 # 1. wavファイルを解析 y, sr = librosa.load("wavファイルなど") # 2. フーリエ変換を適用して周波数成分を取得 D = librosa.amplitude_to_db(np.abs(librosa.stft(y)), ref=np.max) # librosaを用いてデータを作る image = Image.fromarray(np.uint8(D), mode='L') # 'L'は1チャンネルのグレースケールモードを指定します image.save('spectrogram_{}.png') ``` ## Recover music(wave form) from sprctrogram ```python im = Image.open("pngファイル") db_ud = np.uint8(np.array(im)) amp = librosa.db_to_amplitude(db_ud) print(amp.shape) # (1025, 861)は20秒のwavファイルをスペクトログラムにした場合 # (1025, 431)は10秒のwavファイルをスペクトログラムにした場合 # (1025, 216)は5秒のwavファイルをスペクトログラムにした場合 y_inv = librosa.griffinlim(amp*200) display(IPython.display.Audio(y_inv, rate=sr)) ``` ## Example : How to use this * <font color="red">Subset <b>data 1300-1600</b> and <b>data 3400-3600</b> are not working now, so please get subset_name_list</n> those were removed first</font>. ### 1 : get information about this dataset: ```python # Extract dataset's information using huggingface API import requests headers = {"Authorization": f"Bearer {API token}"} API_URL = "https://datasets-server.huggingface.co/info?dataset=mb23%2FGraySpectrogram" def query(): response = requests.get(API_URL, headers=headers) return response.json() data = query() # Make subset name list. subset_name_list = list() for dic in data["failed"]: subset_name_list.append(dic["config"]) # print(dic["config"]) subset_name_list = sorted(subset_name_list, key=natural_keys) remove_list = [ "data 1300-1600", "data 3400-3600" ] for remove_dataset in remove_list: if remove_dataset in subset_name_list: subset_name_list.remove(remove_dataset) else: pass subset_name_list ''' return subset name list. for example, ['data 0-200', 'data 200-600', 'data 600-1000', 'data 1000-1300', 'data 1600-2000', 'data 2000-2200', 'data 2200-2400', 'data 2400-2600', 'data 2600-2800', 'data 3000-3200', 'data 3200-3400', 'data 3600-3800', 'data 3800-4000', 'data 4000-4200', 'data 4200-4400', 'data 4400-4600', 'data 4600-4800', 'data 4800-5000', 'data 5000-5200', 'data 5200-5520'] ''' ``` ### 2 : load all subset dataset: * ```python data = load_dataset("mb23/GraySpectrogram", subset_name_list[0]) for subset in subset_name_list: # Confirm subset_list doesn't include "remove_list" datasets in the above cell. print(subset) new_ds = load_dataset("mb23/GraySpectrogram", subset) new_dataset_train = datasets.concatenate_datasets([data["train"], new_ds["train"]]) new_dataset_test = datasets.concatenate_datasets([data["test"], new_ds["test"]]) # take place of data[split] data["train"] = new_dataset_train data["test"] = new_dataset_test data ``` ### 3 : load dataset and change to dataloader: * You can use the code below: * <font color="red">...but (;・∀・)I don't know whether this code works efficiently, because I haven't tried this code so far</color> ```python import datasets from datasets import load_dataset, DatasetDict from torchvision import transforms from torch.utils.data import DataLoader # BATCH_SIZE = ??? # IMAGE_SIZE = ??? # TRAIN_SIZE = ??? # the number of training data # TEST_SIZE = ??? # the number of test data def load_datasets(): # Define data transforms data_transforms = [ transforms.Resize((IMG_SIZE, IMG_SIZE)), transforms.ToTensor(), # Scales data into [0,1] transforms.Lambda(lambda t: (t * 2) - 1) # Scale between [-1, 1] ] data_transform = transforms.Compose(data_transforms) data = load_dataset("mb23/GraySpectrogram", subset_name_list[0]) for subset in subset_name_list: # Confirm subset_list doesn't include "remove_list" datasets in the above cell. print(subset) new_ds = load_dataset("mb23/GraySpectrogram", subset) new_dataset_train = datasets.concatenate_datasets([data["train"], new_ds["train"]]) new_dataset_test = datasets.concatenate_datasets([data["test"], new_ds["test"]]) # take place of data[split] data["train"] = new_dataset_train data["test"] = new_dataset_test # memo: # 特徴量上手く抽出する方法が...わからん。これは力づく。 # 本当はload_dataset()の時点で抽出したかったけど、無理そう # リポジトリ作り直してpush_to_hub()したほうがいいかもしれない。 new_dataset = dict() new_dataset["train"] = Dataset.from_dict({ "image" : data["train"]["image"], "caption" : data["train"]["caption"] }) new_dataset["test"] = Dataset.from_dict({ "image" : data["test"]["image"], "caption" : data["test"]["caption"] }) data = datasets.DatasetDict(new_dataset) train = data["train"] test = data["test"] for idx in range(len(train["image"])): train["image"][idx] = data_transform(train["image"][idx]) test["image"][idx] = data_transform(test["image"][idx]) train = Dataset.from_dict(train) train = train.with_format("torch") # リスト型回避 test = Dataset.from_dict(train) test = test.with_format("torch") # リスト型回避 # or train_loader = DataLoader(train, batch_size=BATCH_SIZE, shuffle=True, drop_last=True) test_loader = DataLoader(test, batch_size=BATCH_SIZE, shuffle=True, drop_last=True) return train_loader, test_loader ``` * then try this? ``` train_loader, test_loader = load_datasets() ```
5,953
[ [ -0.039825439453125, -0.025421142578125, 0.004016876220703125, 0.01751708984375, -0.017547607421875, -0.00412750244140625, -0.02691650390625, -0.017303466796875, 0.0277862548828125, 0.0202789306640625, -0.047607421875, -0.041656494140625, -0.0306854248046875, ...
bn_hate_speech
2023-01-25T14:27:23.000Z
[ "task_categories:text-classification", "annotations_creators:crowdsourced", "annotations_creators:expert-generated", "language_creators:found", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:original", "language:bn", "license:mit", "hate-speech-topic-classification", ...
null
The Bengali Hate Speech Dataset is a collection of Bengali articles collected from Bengali news articles, news dump of Bengali TV channels, books, blogs, and social media. Emphasis was placed on Facebook pages and newspaper sources because they attract close to 50 million followers and is a common source of opinions and hate speech. The raw text corpus contains 250 million articles and the full dataset is being prepared for release. This is a subset of the full dataset. This dataset was prepared for hate-speech text classification benchmark on Bengali, an under-resourced language.
@misc{karim2020classification, title={Classification Benchmarks for Under-resourced Bengali Language based on Multichannel Convolutional-LSTM Network}, author={Md. Rezaul Karim and Bharathi Raja Chakravarthi and John P. McCrae and Michael Cochez}, year={2020}, eprint={2004.07807}, archivePrefix={arXiv}, primaryClass={cs.CL} }
1
144
2022-03-02T23:29:22
--- annotations_creators: - crowdsourced - expert-generated language_creators: - found language: - bn license: - mit multilinguality: - monolingual size_categories: - 1K<n<10K source_datasets: - original task_categories: - text-classification task_ids: [] paperswithcode_id: bengali-hate-speech pretty_name: Bengali Hate Speech Dataset tags: - hate-speech-topic-classification dataset_info: features: - name: text dtype: string - name: label dtype: class_label: names: '0': Personal '1': Political '2': Religious '3': Geopolitical '4': Gender abusive splits: - name: train num_bytes: 972635 num_examples: 3418 download_size: 974312 dataset_size: 972635 --- # Dataset Card for Bengali Hate Speech Dataset ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [Bengali Hate Speech Dataset](https://github.com/rezacsedu/Bengali-Hate-Speech-Dataset) - **Repository:** [Bengali Hate Speech Dataset](https://github.com/rezacsedu/Bengali-Hate-Speech-Dataset) - **Paper:** [Classification Benchmarks for Under-resourced Bengali Language based on Multichannel Convolutional-LSTM Network](https://arxiv.org/abs/2004.07807) - **Point of Contact:** [Md. Rezaul Karim](rezaul.karim.fit@gmail.com) ### Dataset Summary The Bengali Hate Speech Dataset is a Bengali-language dataset of news articles collected from various Bengali media sources and categorized based on the type of hate in the text. The dataset was created to provide greater support for under-resourced languages like Bengali on NLP tasks, and serves as a benchmark for multiple types of classification tasks. ### Supported Tasks and Leaderboards * `topic classification`: The dataset can be used to train a Multichannel Convolutional-LSTM for classifying different types of hate speech. The model performance can be measured by its F1 score. ### Languages The text in the dataset is in Bengali and the associated BCP-47 code is `bn`. ## Dataset Structure ### Data Instances A data instance takes the form of a news article and its associated label. 🚨 Beware that the following example contains extremely offensive content! An example looks like this: ``` {"text": "রেন্ডিয়াকে পৃথীবির মানচিএ থেকে মুচে ফেলতে হবে", "label": "Geopolitical"} ``` ### Data Fields * `text`: the text of the Bengali news article * `label`: one of `Geopolitical`, `Personal`, `Political`, `Religious`, or `Gender abusive` indicating the type of hate speech ### Data Splits The dataset has 3418 examples. ## Dataset Creation ### Curation Rationale Under-resourced languages like Bengali lack supporting resources that languages like English have. This dataset was collected from multiple Bengali news sources to provide several classification benchmarks for hate speech detection, document classification and sentiment analysis. ### Source Data #### Initial Data Collection and Normalization Bengali articles were collected from a Bengali Wikipedia dump, Bengali news articles, news dumps of TV channels, books, blogs, sports portal and social media. Emphasis was placed on Facebook pages and newspaper sources because they have about 50 million followers and is a common source of opinion and hate speech. The full dataset consists of 250 million articles and is currently being prepared. This is a subset of the full dataset. #### Who are the source language producers? The source language producers are Bengali authors and users who interact with these various forms of Bengali media. ### Annotations #### Annotation process The data was annotated by manually identifying freqently occurring terms in texts containing hate speech and references to specific entities. The authors also prepared normalized frequency vectors of 175 abusive terms that are commonly used to express hate in Bengali. A hate label is assigned if at least one of these terms exists in the text. Annotator's were provided with unbiased text only contents to make the decision. Non-hate statements were removed from the list and the category of hate was further divided into political, personal, gender abusive, geopolitical and religious. To reduce possible bias, each label was assigned based on a majority voting on the annotator's opinions and Cohen's Kappa was computed to measure inter-annotator agreement. #### Who are the annotators? Three native Bengali speakers and two linguists annotated the dataset which was then reviewed and validated by three experts (one South Asian linguist and two native speakers). ### Personal and Sensitive Information The dataset contains very sensitive and highly offensive comments in a religious, political and gendered context. Some of the comments are directed towards contemporary public figures like politicians, religious leaders, celebrities and athletes. ## Considerations for Using the Data ### Social Impact of Dataset The purpose of the dataset is to improve hate speech detection in Bengali. The growth of social media has enabled people to express hate freely online and there has been a lot of focus on detecting hate speech for highly resourced languages like English. The use of hate speech is pervasive, like any other major language, which can have serious and deadly consequences. Failure to react to hate speech renders targeted minorities more vulnerable to attack and it can also create indifference towards their treatment from majority populations. ### Discussion of Biases The dataset was collected using a bootstrapping approach. An initial search was made for specific types of texts, articles and tweets containing common harassment directed at targeting characteristics. As a result, this dataset contains **extremely** offensive content that is disturbing. In addition, Facebook pages and newspaper sources were emphasized because they are well-known for having hate and harassment issues. ### Other Known Limitations The dataset contains racist, sexist, homophobic and offensive comments. It is collected and annotated for research related purposes only. ## Additional Information ### Dataset Curators The dataset was curated by Md. Rezaul Karim, Sumon Kanti Dey, Bharathi Raja Chakravarthi, John McCrae and Michael Cochez. ### Licensing Information This dataset is licensed under the MIT License. ### Citation Information ``` @inproceedings{karim2020BengaliNLP, title={Classification Benchmarks for Under-resourced Bengali Language based on Multichannel Convolutional-LSTM Network}, author={Karim, Md. Rezaul and Chakravarti, Bharathi Raja and P. McCrae, John and Cochez, Michael}, booktitle={7th IEEE International Conference on Data Science and Advanced Analytics (IEEE DSAA,2020)}, publisher={IEEE}, year={2020} } ``` ### Contributions Thanks to [@stevhliu](https://github.com/stevhliu) for adding this dataset.
7,899
[ [ -0.02386474609375, -0.047943115234375, -0.03082275390625, 0.0191497802734375, -0.027099609375, 0.01313018798828125, -0.01593017578125, -0.033905029296875, 0.0137176513671875, 0.02679443359375, -0.0166473388671875, -0.041839599609375, -0.06231689453125, -0.00...
catalonia_independence
2023-06-01T14:59:47.000Z
[ "task_categories:text-classification", "annotations_creators:expert-generated", "language_creators:crowdsourced", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:ca", "language:es", "license:cc-by-nc-sa-4.0", "stance-detection", "region:us" ]
null
This dataset contains two corpora in Spanish and Catalan that consist of annotated Twitter messages for automatic stance detection. The data was collected over 12 days during February and March of 2019 from tweets posted in Barcelona, and during September of 2018 from tweets posted in the town of Terrassa, Catalonia. Each corpus is annotated with three classes: AGAINST, FAVOR and NEUTRAL, which express the stance towards the target - independence of Catalonia.
@inproceedings{zotova-etal-2020-multilingual, title = "Multilingual Stance Detection in Tweets: The {C}atalonia Independence Corpus", author = "Zotova, Elena and Agerri, Rodrigo and Nunez, Manuel and Rigau, German", booktitle = "Proceedings of the 12th Language Resources and Evaluation Conference", month = may, year = "2020", address = "Marseille, France", publisher = "European Language Resources Association", url = "https://www.aclweb.org/anthology/2020.lrec-1.171", pages = "1368--1375", abstract = "Stance detection aims to determine the attitude of a given text with respect to a specific topic or claim. While stance detection has been fairly well researched in the last years, most the work has been focused on English. This is mainly due to the relative lack of annotated data in other languages. The TW-10 referendum Dataset released at IberEval 2018 is a previous effort to provide multilingual stance-annotated data in Catalan and Spanish. Unfortunately, the TW-10 Catalan subset is extremely imbalanced. This paper addresses these issues by presenting a new multilingual dataset for stance detection in Twitter for the Catalan and Spanish languages, with the aim of facilitating research on stance detection in multilingual and cross-lingual settings. The dataset is annotated with stance towards one topic, namely, the ndependence of Catalonia. We also provide a semi-automatic method to annotate the dataset based on a categorization of Twitter users. We experiment on the new corpus with a number of supervised approaches, including linear classifiers and deep learning methods. Comparison of our new corpus with the with the TW-1O dataset shows both the benefits and potential of a well balanced corpus for multilingual and cross-lingual research on stance detection. Finally, we establish new state-of-the-art results on the TW-10 dataset, both for Catalan and Spanish.", language = "English", ISBN = "979-10-95546-34-4", }
1
144
2022-03-02T23:29:22
--- annotations_creators: - expert-generated language_creators: - crowdsourced language: - ca - es license: - cc-by-nc-sa-4.0 multilinguality: - monolingual size_categories: - 10K<n<100K source_datasets: - original task_categories: - text-classification task_ids: [] paperswithcode_id: cic pretty_name: Catalonia Independence Corpus tags: - stance-detection dataset_info: - config_name: catalan features: - name: id_str dtype: string - name: TWEET dtype: string - name: LABEL dtype: class_label: names: '0': AGAINST '1': FAVOR '2': NEUTRAL splits: - name: train num_bytes: 1406250 num_examples: 6028 - name: test num_bytes: 469204 num_examples: 2010 - name: validation num_bytes: 473393 num_examples: 2010 download_size: 995415 dataset_size: 2348847 - config_name: spanish features: - name: id_str dtype: string - name: TWEET dtype: string - name: LABEL dtype: class_label: names: '0': AGAINST '1': FAVOR '2': NEUTRAL splits: - name: train num_bytes: 1507388 num_examples: 6046 - name: test num_bytes: 501783 num_examples: 2016 - name: validation num_bytes: 505092 num_examples: 2015 download_size: 1070281 dataset_size: 2514263 config_names: - catalan - spanish --- # Dataset Card for Catalonia Independence Corpus ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://github.com/ixa-ehu/catalonia-independence-corpus - **Repository:** https://github.com/ixa-ehu/catalonia-independence-corpus - **Paper:** [Multilingual Stance Detection: The Catalonia Independence Corpus](https://www.aclweb.org/anthology/2020.lrec-1.171/) - **Leaderboard:** - **Point of Contact:** [Rodrigo Agerri](https://github.com/ragerri) (corpus creator) ### Dataset Summary This dataset contains two corpora in Spanish and Catalan that consist of annotated Twitter messages for automatic stance detection. The data was collected over 12 days during February and March of 2019 from tweets posted in Barcelona, and during September of 2018 from tweets posted in the town of Terrassa, Catalonia. Each corpus is annotated with three classes: AGAINST, FAVOR and NEUTRAL, which express the stance towards the target - independence of Catalonia. ### Supported Tasks and Leaderboards [More Information Needed] ### Languages Spanish and Catalan ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions Thanks to [@lewtun](https://github.com/lewtun) for adding this dataset.
4,531
[ [ -0.032806396484375, -0.0220794677734375, 0.01047515869140625, 0.05120849609375, -0.03070068359375, 0.0208740234375, -0.031036376953125, -0.0239410400390625, 0.06121826171875, 0.038116455078125, -0.03924560546875, -0.0908203125, -0.06207275390625, 0.008811950...
GEM/mlsum
2022-10-24T15:30:21.000Z
[ "task_categories:summarization", "annotations_creators:none", "language_creators:unknown", "multilinguality:unknown", "size_categories:unknown", "source_datasets:original", "language:de", "language:es", "license:other", "region:us" ]
GEM
This is the MLSUM subset of the GEM benchmark. MLSUM is the first large-scale MultiLingual SUMmarization dataset. Obtained from online newspapers, it contains 1.5M+ article/summary pairs in five different languages -- namely, French, German, Spanish, Russian, Turkish. Together with English newspapers from the popular CNN/Daily mail dataset, the collected data form a large scale multilingual dataset which can enable new research directions for the text summarization community. We report cross-lingual comparative analyses based on state-of-the-art systems. These highlight existing biases which motivate the use of a multi-lingual dataset.
@article{scialom2020mlsum, title={MLSUM: The Multilingual Summarization Corpus}, author={Scialom, Thomas and Dray, Paul-Alexis and Lamprier, Sylvain and Piwowarski, Benjamin and Staiano, Jacopo}, journal={arXiv preprint arXiv:2004.14900}, year={2020} }
2
144
2022-03-02T23:29:22
--- annotations_creators: - none language_creators: - unknown language: - de - es license: - other multilinguality: - unknown size_categories: - unknown source_datasets: - original task_categories: - summarization task_ids: [] pretty_name: mlsum --- # Dataset Card for GEM/mlsum ## Dataset Description - **Homepage:** N/A - **Repository:** https://gitlab.lip6.fr/scialom/mlsum_data/-/tree/master/MLSUM - **Paper:** https://aclanthology.org/2020.emnlp-main.647/ - **Leaderboard:** N/A - **Point of Contact:** Thomas Scialom ### Link to Main Data Card You can find the main data card on the [GEM Website](https://gem-benchmark.com/data_cards/mlsum). ### Dataset Summary MLSum is a multilingual summarization dataset crawled from different news websites. The GEM version supports the German and Spanish subset alongside specifically collected challenge sets for COVID-related articles to test out-of-domain generalization. You can load the dataset via: ``` import datasets data = datasets.load_dataset('GEM/mlsum') ``` The data loader can be found [here](https://huggingface.co/datasets/GEM/mlsum). #### website N/A #### paper [ACL Anthology](https://aclanthology.org/2020.emnlp-main.647/) #### authors Thomas Scialom, Paul-Alexis Dray, Sylvain Lamprier, Benjamin Piwowarski, Jacopo Staiano ## Dataset Overview ### Where to find the Data and its Documentation #### Download <!-- info: What is the link to where the original dataset is hosted? --> <!-- scope: telescope --> [Gitlab](https://gitlab.lip6.fr/scialom/mlsum_data/-/tree/master/MLSUM) #### Paper <!-- info: What is the link to the paper describing the dataset (open access preferred)? --> <!-- scope: telescope --> [ACL Anthology](https://aclanthology.org/2020.emnlp-main.647/) #### BibTex <!-- info: Provide the BibTex-formatted reference for the dataset. Please use the correct published version (ACL anthology, etc.) instead of google scholar created Bibtex. --> <!-- scope: microscope --> ``` @inproceedings{scialom-etal-2020-mlsum, title = "{MLSUM}: The Multilingual Summarization Corpus", author = "Scialom, Thomas and Dray, Paul-Alexis and Lamprier, Sylvain and Piwowarski, Benjamin and Staiano, Jacopo", booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)", month = nov, year = "2020", address = "Online", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2020.emnlp-main.647", doi = "10.18653/v1/2020.emnlp-main.647", pages = "8051--8067", abstract = "We present MLSUM, the first large-scale MultiLingual SUMmarization dataset. Obtained from online newspapers, it contains 1.5M+ article/summary pairs in five different languages {--} namely, French, German, Spanish, Russian, Turkish. Together with English news articles from the popular CNN/Daily mail dataset, the collected data form a large scale multilingual dataset which can enable new research directions for the text summarization community. We report cross-lingual comparative analyses based on state-of-the-art systems. These highlight existing biases which motivate the use of a multi-lingual dataset.", } ``` #### Contact Name <!-- quick --> <!-- info: If known, provide the name of at least one person the reader can contact for questions about the dataset. --> <!-- scope: periscope --> Thomas Scialom #### Contact Email <!-- info: If known, provide the email of at least one person the reader can contact for questions about the dataset. --> <!-- scope: periscope --> {thomas,paul-alexis,jacopo}@recital.ai, {sylvain.lamprier,benjamin.piwowarski}@lip6.fr #### Has a Leaderboard? <!-- info: Does the dataset have an active leaderboard? --> <!-- scope: telescope --> no ### Languages and Intended Use #### Multilingual? <!-- quick --> <!-- info: Is the dataset multilingual? --> <!-- scope: telescope --> yes #### Covered Dialects <!-- info: What dialects are covered? Are there multiple dialects per language? --> <!-- scope: periscope --> There is only one dialect per language, Hochdeutsch for German and Castilian Spanish for Spanish. #### Covered Languages <!-- quick --> <!-- info: What languages/dialects are covered in the dataset? --> <!-- scope: telescope --> `German`, `Spanish, Castilian` #### Whose Language? <!-- info: Whose language is in the dataset? --> <!-- scope: periscope --> The German articles are crawled from Süddeutsche Zeitung and the Spanish ones from El Pais. #### License <!-- quick --> <!-- info: What is the license of the dataset? --> <!-- scope: telescope --> other: Other license #### Intended Use <!-- info: What is the intended use of the dataset? --> <!-- scope: microscope --> The intended use of this dataset is to augment existing datasets for English news summarization with additional languages. #### Add. License Info <!-- info: What is the 'other' license of the dataset? --> <!-- scope: periscope --> Restricted to non-commercial research purposes. #### Primary Task <!-- info: What primary task does the dataset support? --> <!-- scope: telescope --> Summarization #### Communicative Goal <!-- quick --> <!-- info: Provide a short description of the communicative goal of a model trained for this task on this dataset. --> <!-- scope: periscope --> The speaker is required to produce a high quality summary of news articles in the same language as the input article. ### Credit #### Curation Organization Type(s) <!-- info: In what kind of organization did the dataset curation happen? --> <!-- scope: telescope --> `other` #### Curation Organization(s) <!-- info: Name the organization(s). --> <!-- scope: periscope --> CNRS, Sorbonne Université, reciTAL #### Dataset Creators <!-- info: Who created the original dataset? List the people involved in collecting the dataset and their affiliation(s). --> <!-- scope: microscope --> Thomas Scialom, Paul-Alexis Dray, Sylvain Lamprier, Benjamin Piwowarski, Jacopo Staiano #### Funding <!-- info: Who funded the data creation? --> <!-- scope: microscope --> Funding information is not specified. #### Who added the Dataset to GEM? <!-- info: Who contributed to the data card and adding the dataset to GEM? List the people+affiliations involved in creating this data card and who helped integrate this dataset into GEM. --> <!-- scope: microscope --> The original data card was written by Pedro Henrique Martins (Instituto de Telecomunicações) and Sebastian Gehrmann (Google Research) extended and updated it to the v2 format. The COVID challenge set was created by Laura Perez-Beltrachini (University of Edinburgh). Data cleaning was done by Juan Diego Rodriguez (UT Austin). ### Dataset Structure #### Data Fields <!-- info: List and describe the fields present in the dataset. --> <!-- scope: telescope --> The data fields are: - `text`: the source article (`string`). - `summary`: the output summary (`string`). - `topic`: the topic of the article (`string`). - `url`: the article's url (`string`). - `title`: the article's title (`string`). - `date`: the article's date (`string`). #### Reason for Structure <!-- info: How was the dataset structure determined? --> <!-- scope: microscope --> The structure follows previously released datasets. The `topic` and `title` fields were added to enable additional tasks like title generation and topic detection. #### How were labels chosen? <!-- info: How were the labels chosen? --> <!-- scope: microscope --> They are human written highlights or summaries scraped from the same website. #### Example Instance <!-- info: Provide a JSON formatted example of a typical instance in the dataset. --> <!-- scope: periscope --> ``` { 'date': '00/01/2010', 'gem_id': 'mlsum_de-train-2', 'gem_parent_id': 'mlsum_de-train-2', 'references': [], 'target': 'Oskar Lafontaine gibt den Parteivorsitz der Linken ab - und seine Kollegen streiten, wer ihn beerben soll. sueddeutsche.de stellt die derzeit aussichtsreichsten Anwärter für Führungsaufgaben vor. Mit Vote.', 'text': 'Wenn an diesem Montag die Landesvorsitzenden der Linken über die Nachfolger der derzeitigen Chefs Lothar Bisky und Oskar Lafontaine sowie des Bundesgeschäftsführers Dietmar Bartsch beraten, geht es nicht nur darum, wer die Partei führen soll. Es geht auch um die künftige Ausrichtung und Stärke einer Partei, die vor allem von Lafontaine zusammengehalten worden war. Ihm war es schließlich vor fünf Jahren gelungen, aus der ostdeutschen PDS und der westedeutschen WASG eine Partei zu formen. Eine Partei allerdings, die zerrissen ist in Ost und West, in Regierungswillige und ewige Oppositionelle, in Realos und Ideologen, in gemäßigte und radikale Linke. Wir stellen mögliche Kandidaten vor. Stimmen Sie ab: Wen halten Sie für geeignet und wen für unfähig? Kampf um Lafontaines Erbe: Gregor Gysi Sollte überhaupt jemand die Partei alleine führen, wie es sich viele Ostdeutsche wünschen, käme dafür wohl nur der 62-jährige Gregor Gysi in Betracht. Er ist nach Lafontaine einer der bekanntesten Politiker der Linken und derzeit Fraktionsvorsitzender der Partei im Bundestag. Allerdings ist der ehemalige PDS-Vorsitzende und Rechtsanwalt nach drei Herzinfarkten gesundheitlich angeschlagen. Wahrscheinlich wäre deshalb, dass er die zerstrittene Partei nur übergangsweise führt. Doch noch ist nicht klar, ob eine Person allein die Partei führen soll oder eine Doppelspitze. Viele Linke wünschen sich ein Duo aus einem westdeutschen und einem ostdeutschen Politiker, Mann und Frau. Foto: Getty Images', 'title': 'Personaldebatte bei der Linken - Wer kommt nach Lafontaine?', 'topic': 'politik', 'url': 'https://www.sueddeutsche.de/politik/personaldebatte-bei-der-linken-wer-kommt-nach-lafontaine-1.70041' } ``` #### Data Splits <!-- info: Describe and name the splits in the dataset if there are more than one. --> <!-- scope: periscope --> The statistics of the original dataset are: | | Dataset | Train | Validation | Test | Mean article length | Mean summary length | | :--- | :----: | :---: | :---: | :---: | :---: | :---: | | German | 242,982 | 220,887 |11,394 |10,701 |570.6 (words) | 30.36 (words) | | Spanish | 290,645 | 266,367 |10,358 |13,920 |800.5 (words) |20.71 (words) | The statistics of the cleaned version of the dataset are: | | Dataset | Train | Validation | Test | | :--- | :----: | :---: | :---: | :---: | | German | 242,835 | 220,887 |11,392 |10,695 | | Spanish | 283,228 |259,886 |9,977 |13,365 | The COVID challenge sets have 5058 (de) and 1938 (es) examples. #### Splitting Criteria <!-- info: Describe any criteria for splitting the data, if used. If there are differences between the splits (e.g., if the training annotations are machine-generated and the dev and test ones are created by humans, or if different numbers of annotators contributed to each example), describe them here. --> <!-- scope: microscope --> The training set contains data from 2010 to 2018. Data from 2019 (~10% of the dataset) is used for validation (up to May) and testing(May-December 2019). #### <!-- info: What does an outlier of the dataset in terms of length/perplexity/embedding look like? --> <!-- scope: microscope --> Some topics are less represented within the dataset (e.g., Financial news in German and Television in Spanish). ## Dataset in GEM ### Rationale for Inclusion in GEM #### Why is the Dataset in GEM? <!-- info: What does this dataset contribute toward better generation evaluation and why is it part of GEM? --> <!-- scope: microscope --> As the first large-scale multilingual summarization dataset, it enables evaluation of summarization models beyond English. #### Similar Datasets <!-- info: Do other datasets for the high level task exist? --> <!-- scope: telescope --> yes #### Unique Language Coverage <!-- info: Does this dataset cover other languages than other datasets for the same task? --> <!-- scope: periscope --> yes #### Difference from other GEM datasets <!-- info: What else sets this dataset apart from other similar datasets in GEM? --> <!-- scope: microscope --> In our configuration, the dataset is fully non-English. #### Ability that the Dataset measures <!-- info: What aspect of model ability can be measured with this dataset? --> <!-- scope: periscope --> Content Selection, Content Planning, Realization ### GEM-Specific Curation #### Modificatied for GEM? <!-- info: Has the GEM version of the dataset been modified in any way (data, processing, splits) from the original curated data? --> <!-- scope: telescope --> yes #### GEM Modifications <!-- info: What changes have been made to he original dataset? --> <!-- scope: periscope --> `data points removed`, `data points added` #### Modification Details <!-- info: For each of these changes, described them in more details and provided the intended purpose of the modification --> <!-- scope: microscope --> The modifications done to the original dataset are the following: - Selection of 2 languages (Spanish and German) out of the dataset 5 languages due to copyright restrictions. - Removal of duplicate articles. - Manually removal of article-summary pairs for which the summary is not related to the article. - Removal of article-summary pairs written in a different language (detected using the [langdetect](https://pypi.org/project/langdetect/) library). #### Additional Splits? <!-- info: Does GEM provide additional splits to the dataset? --> <!-- scope: telescope --> yes #### Split Information <!-- info: Describe how the new splits were created --> <!-- scope: periscope --> For both selected languages (German and Spanish), we compiled time-shifted test data in the form of new articles for the second semester of 2020 with Covid19-related keywords. We collected articles from the same German and Spanish outlets as the original MLSUM datasets (El Pais and Süddeutsche Zeitung). We used the scripts provided for the re-creation of the [MLSUM datasets](https://github.com/recitalAI/MLSUM). The new challenge test set for German contains 5058 instances and the Spanish one contains 1938. We additionally sample 500 training and validation points as additional challenge sets to measure overfitting. #### Split Motivation <!-- info: What aspects of the model's generation capacities were the splits created to test? --> <!-- scope: periscope --> Generalization to unseen topics. ### Getting Started with the Task ## Previous Results ### Previous Results #### Measured Model Abilities <!-- info: What aspect of model ability can be measured with this dataset? --> <!-- scope: telescope --> Content Selection, Content Planning, Realization #### Metrics <!-- info: What metrics are typically used for this task? --> <!-- scope: periscope --> `METEOR`, `ROUGE`, `Other: Other Metrics` #### Other Metrics <!-- info: Definitions of other metrics --> <!-- scope: periscope --> Novelty: Number of generated n-grams not included in the source articles. #### Proposed Evaluation <!-- info: List and describe the purpose of the metrics and evaluation methodology (including human evaluation) that the dataset creators used when introducing this task. --> <!-- scope: microscope --> ROUGE and METEOR both measure n-gram overlap with a focus on recall and are standard summarization metrics. Novelty is often reported alongside them to characterize how much a model diverges from its inputs. #### Previous results available? <!-- info: Are previous results available? --> <!-- scope: telescope --> yes #### Other Evaluation Approaches <!-- info: What evaluation approaches have others used? --> <!-- scope: periscope --> The GEM benchmark results (https://gem-benchmark.com/results) report a wide range of metrics include lexical overlap metrics but also semantic ones like BLEURT and BERT-Score. ## Dataset Curation ### Original Curation #### Original Curation Rationale <!-- info: Original curation rationale --> <!-- scope: telescope --> The rationale was to create a multilingual news summarization dataset that mirrors the format of popular English datasets like XSum or CNN/DM. #### Communicative Goal <!-- info: What was the communicative goal? --> <!-- scope: periscope --> The speaker is required to produce a high quality summary of news articles in the same language as the input article. #### Sourced from Different Sources <!-- info: Is the dataset aggregated from different data sources? --> <!-- scope: telescope --> yes #### Source Details <!-- info: List the sources (one per line) --> <!-- scope: periscope --> www.lemonde.fr www.sueddeutsche.de www.elpais.com www.mk.ru www.internethaber.com ### Language Data #### How was Language Data Obtained? <!-- info: How was the language data obtained? --> <!-- scope: telescope --> `Found` #### Where was it found? <!-- info: If found, where from? --> <!-- scope: telescope --> `Multiple websites` #### Language Producers <!-- info: What further information do we have on the language producers? --> <!-- scope: microscope --> The language producers are professional journalists. #### Topics Covered <!-- info: Does the language in the dataset focus on specific topics? How would you describe them? --> <!-- scope: periscope --> 4/5 of the original languages report their topics (except Turkish) and the distributions differ between sources. The dominant topics in German are Politik, Sport, Wirtschaft (economy). The dominant topics in Spanish are actualidad (current news) and opinion. French and Russian are different as well but we omit these languages in the GEM version. #### Data Validation <!-- info: Was the text validated by a different worker or a data curator? --> <!-- scope: telescope --> not validated #### Was Data Filtered? <!-- info: Were text instances selected or filtered? --> <!-- scope: telescope --> algorithmically #### Filter Criteria <!-- info: What were the selection criteria? --> <!-- scope: microscope --> In the original dataset, only one filter was applied: all the articles shorter than 50 words or summaries shorter than 10 words are discarded. The GEM version additionally applies langID filter to ensure that articles are in the correct language. ### Structured Annotations #### Additional Annotations? <!-- quick --> <!-- info: Does the dataset have additional annotations for each instance? --> <!-- scope: telescope --> none #### Annotation Service? <!-- info: Was an annotation service used? --> <!-- scope: telescope --> no ### Consent #### Any Consent Policy? <!-- info: Was there a consent policy involved when gathering the data? --> <!-- scope: telescope --> no #### Justification for Using the Data <!-- info: If not, what is the justification for reusing the data? --> <!-- scope: microscope --> The copyright remains with the original data creators and the usage permission is restricted to non-commercial uses. ### Private Identifying Information (PII) #### Contains PII? <!-- quick --> <!-- info: Does the source language data likely contain Personal Identifying Information about the data creators or subjects? --> <!-- scope: telescope --> yes/very likely #### Categories of PII <!-- info: What categories of PII are present or suspected in the data? --> <!-- scope: periscope --> `sensitive information`, `generic PII` #### Any PII Identification? <!-- info: Did the curators use any automatic/manual method to identify PII in the dataset? --> <!-- scope: periscope --> no identification ### Maintenance #### Any Maintenance Plan? <!-- info: Does the original dataset have a maintenance plan? --> <!-- scope: telescope --> no ## Broader Social Context ### Previous Work on the Social Impact of the Dataset #### Usage of Models based on the Data <!-- info: Are you aware of cases where models trained on the task featured in this dataset ore related tasks have been used in automated systems? --> <!-- scope: telescope --> no ### Impact on Under-Served Communities #### Addresses needs of underserved Communities? <!-- info: Does this dataset address the needs of communities that are traditionally underserved in language technology, and particularly language generation technology? Communities may be underserved for exemple because their language, language variety, or social or geographical context is underepresented in NLP and NLG resources (datasets and models). --> <!-- scope: telescope --> no ### Discussion of Biases #### Any Documented Social Biases? <!-- info: Are there documented social biases in the dataset? Biases in this context are variations in the ways members of different social categories are represented that can have harmful downstream consequences for members of the more disadvantaged group. --> <!-- scope: telescope --> no
20,983
[ [ -0.038421630859375, -0.04571533203125, 0.01446533203125, 0.0009560585021972656, -0.01678466796875, -0.00035643577575683594, -0.0287628173828125, -0.023895263671875, 0.04730224609375, 0.0171356201171875, -0.0479736328125, -0.06817626953125, -0.0380859375, 0.0...
SocialGrep/one-million-reddit-questions
2022-07-25T18:57:10.000Z
[ "annotations_creators:lexyr", "language_creators:crowdsourced", "multilinguality:monolingual", "size_categories:1M<n<10M", "source_datasets:original", "language:en", "license:cc-by-4.0", "region:us" ]
SocialGrep
null
null
3
144
2022-03-02T23:29:22
--- annotations_creators: - lexyr language_creators: - crowdsourced language: - en license: - cc-by-4.0 multilinguality: - monolingual size_categories: - 1M<n<10M source_datasets: - original paperswithcode_id: null --- # Dataset Card for one-million-reddit-questions ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [https://socialgrep.com/datasets](https://socialgrep.com/datasets?utm_source=huggingface&utm_medium=link&utm_campaign=dataset&utm_term=onemillionquestions) - **Point of Contact:** [Website](https://socialgrep.com/contact?utm_source=huggingface&utm_medium=link&utm_campaign=dataset&utm_term=onemillionquestions) ### Dataset Summary This corpus contains a million posts on /r/AskReddit, annotated with their score. ### Languages Mainly English. ## Dataset Structure ### Data Instances A data point is a Reddit post. ### Data Fields - 'type': the type of the data point. Can be 'post' or 'comment'. - 'id': the base-36 Reddit ID of the data point. Unique when combined with type. - 'subreddit.id': the base-36 Reddit ID of the data point's host subreddit. Unique. - 'subreddit.name': the human-readable name of the data point's host subreddit. - 'subreddit.nsfw': a boolean marking the data point's host subreddit as NSFW or not. - 'created_utc': a UTC timestamp for the data point. - 'permalink': a reference link to the data point on Reddit. - 'score': score of the data point on Reddit. - 'domain': the domain of the data point's link. - 'url': the destination of the data point's link, if any. - 'selftext': the self-text of the data point, if any. - 'title': the title of the post data point. ## Dataset Creation ### Curation Rationale [Needs More Information] ### Source Data #### Initial Data Collection and Normalization [Needs More Information] #### Who are the source language producers? [Needs More Information] ### Annotations #### Annotation process [Needs More Information] #### Who are the annotators? [Needs More Information] ### Personal and Sensitive Information [Needs More Information] ## Considerations for Using the Data ### Social Impact of Dataset [Needs More Information] ### Discussion of Biases [Needs More Information] ### Other Known Limitations [Needs More Information] ## Additional Information ### Dataset Curators [Needs More Information] ### Licensing Information CC-BY v4.0 ### Contributions [Needs More Information]
3,448
[ [ -0.05322265625, -0.067138671875, 0.01293182373046875, 0.0296478271484375, -0.0295257568359375, -0.0011587142944335938, -0.0164031982421875, -0.018829345703125, 0.056915283203125, 0.0413818359375, -0.077880859375, -0.07330322265625, -0.046478271484375, 0.0256...
bigbio/biomrc
2022-12-22T15:43:44.000Z
[ "multilinguality:monolingual", "language:en", "license:unknown", "region:us" ]
bigbio
We introduce BIOMRC, a large-scale cloze-style biomedical MRC dataset. Care was taken to reduce noise, compared to the previous BIOREAD dataset of Pappas et al. (2018). Experiments show that simple heuristics do not perform well on the new dataset and that two neural MRC models that had been tested on BIOREAD perform much better on BIOMRC, indicating that the new dataset is indeed less noisy or at least that its task is more feasible. Non-expert human performance is also higher on the new dataset compared to BIOREAD, and biomedical experts perform even better. We also introduce a new BERT-based MRC model, the best version of which substantially outperforms all other methods tested, reaching or surpassing the accuracy of biomedical experts in some experiments. We make the new dataset available in three different sizes, also releasing our code, and providing a leaderboard.
@inproceedings{pappas-etal-2020-biomrc, title = "{B}io{MRC}: A Dataset for Biomedical Machine Reading Comprehension", author = "Pappas, Dimitris and Stavropoulos, Petros and Androutsopoulos, Ion and McDonald, Ryan", booktitle = "Proceedings of the 19th SIGBioMed Workshop on Biomedical Language Processing", month = jul, year = "2020", address = "Online", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/2020.bionlp-1.15", pages = "140--149", }
1
144
2022-11-13T22:06:42
--- language: - en bigbio_language: - English license: unknown multilinguality: monolingual bigbio_license_shortname: UNKNOWN pretty_name: BIOMRC homepage: https://github.com/PetrosStav/BioMRC_code bigbio_pubmed: True bigbio_public: True bigbio_tasks: - QUESTION_ANSWERING --- # Dataset Card for BIOMRC ## Dataset Description - **Homepage:** https://github.com/PetrosStav/BioMRC_code - **Pubmed:** True - **Public:** True - **Tasks:** QA We introduce BIOMRC, a large-scale cloze-style biomedical MRC dataset. Care was taken to reduce noise, compared to the previous BIOREAD dataset of Pappas et al. (2018). Experiments show that simple heuristics do not perform well on the new dataset and that two neural MRC models that had been tested on BIOREAD perform much better on BIOMRC, indicating that the new dataset is indeed less noisy or at least that its task is more feasible. Non-expert human performance is also higher on the new dataset compared to BIOREAD, and biomedical experts perform even better. We also introduce a new BERT-based MRC model, the best version of which substantially outperforms all other methods tested, reaching or surpassing the accuracy of biomedical experts in some experiments. We make the new dataset available in three different sizes, also releasing our code, and providing a leaderboard. ## Citation Information ``` @inproceedings{pappas-etal-2020-biomrc, title = "{B}io{MRC}: A Dataset for Biomedical Machine Reading Comprehension", author = "Pappas, Dimitris and Stavropoulos, Petros and Androutsopoulos, Ion and McDonald, Ryan", booktitle = "Proceedings of the 19th SIGBioMed Workshop on Biomedical Language Processing", month = jul, year = "2020", address = "Online", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/2020.bionlp-1.15", pages = "140--149", } ```
1,920
[ [ -0.03582763671875, -0.04315185546875, 0.03265380859375, -0.0149383544921875, -0.027496337890625, 0.01181793212890625, -0.015838623046875, -0.046783447265625, 0.005069732666015625, 0.03143310546875, -0.048248291015625, -0.054046630859375, -0.0382080078125, 0....
Cofacts/line-msg-fact-check-tw
2023-10-11T13:06:33.000Z
[ "task_categories:text-classification", "task_categories:question-answering", "size_categories:100K<n<1M", "language:zh", "license:cc-by-sa-4.0", "fact-checking", "crowd-sourcing", "region:us" ]
Cofacts
null
null
1
144
2023-05-16T05:09:10
--- license: cc-by-sa-4.0 language: - zh pretty_name: Cofacts archive for reported messages and crowd-sourced fact-check replies tags: - fact-checking - crowd-sourcing size_categories: - 100K<n<1M extra_gated_prompt: >- To access this repository, you agree to follow the [Cofacts Data User Agreement](https://github.com/cofacts/opendata/blob/master/LEGAL.md). This is vital to sustain a crowd-sourced database like Cofacts to attribute the fact-checking community that contributed to this dataset. 欲存取此資料集,需同意[Cofacts 真的假的 資料使用者條款](https://github.com/cofacts/opendata/blob/master/LEGAL.md)。 彰顯查核社群對此資料集之貢獻,對協作型資料庫如 Cofacts 的永續發展至關重要。 It would be great if you share with us who you are and your planned usage of the Cofacts data. Your cooperation is greatly appreciated. If you have no specific details to share with us, please simply enter "n/a." 若方便的話,希望您可以與 Cofacts 工作小組分享您的單位以及預計會怎麼運用這個資料,感謝您!若不方便,可輸入「n/a」。 extra_gated_fields: 'I agree to follow the Data User Agreement and promise to attribute Cofacts as specified 我同意遵守資料使用者條款並承諾按規定彰顯 Cofacts': checkbox 'Anything to share with us 有什麼想要與我們分享的嗎': text configs: - config_name: analytics data_files: analytics.csv.zip - config_name: article_categories data_files: article_categories.csv.zip - config_name: article_hyperlinks data_files: article_hyperlinks.csv.zip lineterminator: |+ - config_name: article_replies data_files: article_replies.csv.zip - config_name: article_reply_feedbacks data_files: article_reply_feedbacks.csv.zip lineterminator: |+ - config_name: articles data_files: articles.csv.zip lineterminator: |+ default: true - config_name: categories data_files: categories.csv.zip lineterminator: |+ - config_name: replies data_files: replies.csv.zip lineterminator: |+ - config_name: reply_hyperlinks data_files: reply_hyperlinks.csv.zip lineterminator: |+ - config_name: reply_requests data_files: reply_requests.csv.zip lineterminator: |+ - config_name: anonymized_users data_files: anonymized_users.csv.zip lineterminator: |+ task_categories: - text-classification - question-answering --- # Cofacts Archive for Reported Messages and Crowd-Sourced Fact-Check Replies [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1qdE-OMJTi6ZO68J6KdzGdxNdheW4ct6T?usp=sharing) The Cofacts dataset encompasses instant messages that have been reported by users of the [Cofacts chatbot](https://line.me/R/ti/p/@cofacts) and the replies provided by the [Cofacts crowd-sourced fact-checking community](https://www.facebook.com/groups/cofacts/). ## Attribution to the Community This dataset is a result of contributions from both Cofacts LINE chatbot users and the community fact checkers. To appropriately attribute their efforts, please adhere to the rules outlined in the [Cofacts 真的假的 資料使用者條款 (Cofacts Data User Agreement)](https://github.com/cofacts/opendata/blob/master/LEGAL.md). Unless stated otherwise, when redistributing Cofacts data outside the LINE application, the attribution specified by the Cofacts Working Group is as follows: > This data by Cofacts message reporting chatbot and crowd-sourced fact-checking community is licensed under CC BY-SA 4.0. To provide more info, please visit Cofacts LINE bot https://line.me/ti/p/@cofacts 除非以其他方式議定,否則 Cofacts 真的假的工作小組,針對在 LINE 之外的地方散布的 Cofacts 所提供資料,所指定的中文顯名聲明為: > 本編輯資料取自「Cofacts 真的假的」訊息回報機器人與查證協作社群,採 CC BY-SA 4.0 授權提供。若欲補充資訊請訪問 Cofacts LINE bot https://line.me/ti/p/@cofacts For more detailed information, please refer to [Cofacts 真的假的 資料使用者條款](https://github.com/cofacts/opendata/blob/master/LEGAL.md). ## How to Access Cofacts Data To access Cofacts data, you should first register on Hugging Face and accept the Cofacts Data User Agreement. Afterward, you can preview the data on the Hugging Face website. You can access Cofacts data through the following methods: 1. Load `cofacts/line-msg-fact-check-tw` with Hugging Face's `load_dataset('Cofacts/line-msg-fact-check-tw', TABLE_NAME)`. 2. Download individual zipped CSV files in the `Files` tab on the Hugging Face website. If you plan to process the data using Python, `load_dataset()` is the simpler solution. Please refer to [Example on Google Colab](https://colab.research.google.com/drive/1qdE-OMJTi6ZO68J6KdzGdxNdheW4ct6T?usp=sharing) to get started. ## Data Formats Cofacts data comprises multiple normalized tables, with some tables containing foreign keys to other tables' IDs. If you have manually downloaded the data, the tables are distributed as zipped CSV files. These files use `\n` as the line terminator, and quotes are used around multi-line contents. The [`csv-stringify`](https://www.npmjs.com/package/csv-stringify) library is employed to perform escaping and handle quotes and multi-line contents. ### Fields in All Tables * `userIdsha` (string) Hashed user identifier. * `appId` (string) Possible values include: * `LEGACY_APP`: Articles collected before 2017-03. * `RUMORS_LINE_BOT`: Articles collected with the current LINE bot client after 2017-03. These two fields together uniquely identify a user across different CSV files. For example, if one row (reply) in `replies.csv` and another row (feedback) in `article_reply_feedbacks.csv` have identical `userIdsha` and `appId`, it indicates that the reply and the feedback were submitted by the same user. Also, these fields are commonly seen in multiple tables: * `status`: The current visibility of this document. Possible values include: * `NORMAL`: The document is normally visible. * `DELETED`: The document is deleted by its author. For some entities (tables), deletion is not implemented, and thus does not have such value. * `BLOCKED`: The document is hidden by Cofacts Working Group. These document are from a blocked user, with `blockedReason` pointing to announcements in [Cofacts Takedown Announcements](https://github.com/cofacts/takedowns). ## Tables and their fields ### `articles` The instant messages LINE bot users submitted into the database. | Field | Data type | Description | | ----------------------- | -------- | ---- | | `id` | String | | | `articleType` | Enum string | `TEXT`, `IMAGE`, `VIDEO` or `AUDIO`. | | `status` | Enum string | `NORMAL` or `BLOCKED`. | | `text` | Text | The instant message text | | `normalArticleReplyCount` | Integer | The number of replies are associated to this article, excluding the deleted reply associations. | | `createdAt` | ISO time string | When the article is submitted to the database. | | `updatedAt` | ISO time string | Preserved, currently identical to `createdAt` | | `lastRequestedAt` | ISO time string | The submission time of the last `reply_request` is sent on the article, before the article is replied. | | `userIdsha256` | String | Author of the article.| | `appId` | String | | | `references` | Enum string | Where the message is from. Currently the only possible value is `LINE`. | ### `article_hyperlinks` Parsed hyperlink contents in each instant messages, parsed using [cofacts/url-resolver](https://github.com/cofacts/url-resolver/). The data is used in Cofacts system for indexing and retrieving messages. | Field | Data type | Description | | ---------------- | -------- | ---- | | `articleId` | String | | | `url` | String | The URL string detected in article | | `normalizedUrl` | String | Canonical URL after normalization process including unfolding shortened URLs | | `title` | String | Title of the scrapped web content | Note: Scrapped contents do not belong to Cofacts and are redistributed under research purposes. The scrapping mechanism is not reliable either. Researchers may need to implement their own scrapper if content is important in their research. ### `article_categories` Categories linked to this article. | Field | Data type | Description | | ---------------- | ---------- | ---- | | `articleId` | String | | | `categoryId` | String | | `aiConfidence` | Number | Confidence level by AI marking this category. Empty for crowd-sourced labels. | | `aiModel` . | String | Name of the AI model marking this cateogry. Empty for crowd-sourced labels. | | `userIdsha256` | String | The person that connected article and category. | | `appId` . | String | | | `negativeFeedbackCount` | Integer | Number of `article_category_feedbacks` that has score `-1` | | `positiveFeedbackCount` | Integer | Number of `article_category_feedbacks` that has score `1` | | `status` | Enum string | `NORMAL`: The category and article are connected. `DELETED`: The category does not connect to the article anymore. | | `createdAt` | ISO time string | The time when the reply is connected to the article | | `updatedAt` | ISO time string | The latest date when the category's status is updated | ### `categories` | Field | Data type | Description | | ------------- | --------- | ----------- | | `id` | String | | | `title` | String | Name of the category | | `description` | Text | Definition of the category | | `createdAt` | ISO time string | | | `updatedAt` | ISO time string | | ### `article_replies` Articles and replies are in has-and-belongs-to-many relationship. That is, an article can have multiple replies, and a reply can be connected to multiple similar articles. `article_replies` is the "join table" between `articles` and `replies`, bringing `articleId` and `replyId` together, along with other useful properties related to this connection between an article and a reply. One pair of `articleId`, `replyId` will map to exactly one `article_reply`. | Field | Data type | Description | | --------------------- | -------- | - | | `articleId` | String | Relates to `id` field of `articles` | | `replyId` | String | Relates to `id` field of `replies` | | `userId` | String | The user connecting the reply with the article | | `negativeFeedbackCount` | Integer | Number of `article_reply_feedbacks` that has score `-1` | | `positiveFeedbackCount` | Integer | Number of `article_reply_feedbacks` that has score `1` | | `replyType` | Enum string | Duplicated from `replies`'s type. | | `appId` | String | | | `status` | Enum string | `NORMAL`: The reply and article are connected. `DELETED`: The reply does not connect to the article anymore. `BLOCKED`: It comes from a blocked user. | | `createdAt` | ISO time string | The time when the reply is connected to the article | | `updatedAt` | ISO time string | The latest date when the reply's status is updated | ### `replies` Editor's reply to the article. | Field | Data type | Description | | --------- | -------- | - | | `id` | String | | | `type` | Enum string | Type of the reply chosen by the editor. `RUMOR`: The article contains rumor. `NOT_RUMOR`: The article contains fact. `OPINIONATED`: The article contains personal opinions. `NOT_ARTICLE`: The article should not be processed by Cofacts. | | `reference` | Text | For `RUMOR` and `NOT_RUMOR` replies: The reference to support the chosen `type` and `text`. For `OPINIONATED` replies: References containing different perspectives from the `article`. For `NOT_ARTICLE`: empty string. | | `userId` | String | The editor that authored this reply. | | `appId` | String | | | `text` | Text | Reply text writtern by the editor | | `createdAt` | ISO Time string | When the reply is written | ### `reply_hyperlinks` Parsed hyperlink contents in reply text and references, parsed using [cofacts/url-resolver](https://github.com/cofacts/url-resolver/). The data is used in Cofacts system for URL previews. | Field | Data type | Description | | ---------------- | -------- | ---- | | `replyId` | String | | | `url` | String | The URL string detected in article | | `normalizedUrl` | String | Canonical URL after normalization process including unfolding shortened URLs | | `title` | String | Title of the scrapped web content | Note: Scrapped contents do not belong to Cofacts and are redistributed under research purposes. The scrapping mechanism implementation is not reliable either. Researchers may need to implement their own scrapper if content is important in their research. ### `reply_requests` Before an article is replied, users may submit `reply_requests` to indicate that they want this article to be answered. When an article is first submitted to the article, an reply request is also created. Any further queries to the same article submits new `reply_requests`. An user can only submit one reply request to an article. | Field | Data type | Description | | --------- | -------- | - | | `articleId` | String | The target of the request | | `reason` | Text | The reason why the user wants to submit this reply request | | `status` | Enum string | `NORMAL` or `BLOCKED`. | | `positiveFeedbackCount` | Text | Number of editors think the reason is reasonable | | `negativeFeedbackCount` | Text | Number of editors think the reason is nonsense | | `createdAt` | ISO Time string | When the reply request is issued | ### `article_reply_feedbacks` Editors and LINE bot users can express if a reply is useful by submitting `article_reply_feedbacks` toward a `article_reply` with score `1` or `-1`. The feedback is actually submitted toward an `article_reply`, the connection between an article and a reply. This is because a reply can be connected to multiple articles. A reply that makes sense in one article does not necessarily mean that it is useful in answering another article. Therefore, the feedback count for a reply connecting to different articles are counted separately. | Field | Data type | Description | | --------- | -------- | - | | `articleId` | String | Relates to `articleId` of the target `article_reply` | | `replyId` | String | Relates to `replyId` of the target `article_reply` | | `score` | Integer | `1`: Useful. `-1`: Not useful. | | `comment` | Text | Why the user chooses such score for this article reply | | `status` | Enum string | `NORMAL` or `BLOCKED`. | | `createdAt` | ISO Time string | When the feedback is submitted | ### `analytics` Usage (visit / show) statistics of website and Cofacts LINE bot. LINE bot data starts from April 2nd, 2018; website data starts from May 3rd, 2017. | Field | Data type | Description | | ----------- | --------------- | ----------- | | `type` | Enum string | Either `article` or `reply` | | `docId` | String | Article ID or Reply ID that is being visited / shown | | `date` | ISO Time string | The date of usage, represented by start of the day (0:00:00+08:00) | | `lineUser` | Integer | The number of LINE users who inspected this article / reply in Cofacts LINE bot in this date. May be empty if no such users | | `lineVisit` | Integer | The number of times this article / reply is inspected in Cofacts LINE bot in this date. May be empty if no visits | | `webUser` | Integer | The number of web users who visited this article page (`/article/<docId>`) / reply page (`/reply/<docId>`) in Cofacts website in this date. May be empty if no such users | | `webVisit` | Integer | The number of page views of this article page (`/article/<docId>`) / reply page (`/reply/<docId>`) in Cofacts website in this date. May be empty if no page views | ### `anonymized_usrs` The users of Cofacts, including Cofacts chatbot and website users. | Field | Data type | Description | | ----------- | --------------- | ----------- | | `userIdsha256` | String | The ID that is used in other tables to denote the creator of the entity. | | `appId` | String | Where this user account is registered. `RUMORS_LINE_BOT` is Cofacts official LINE account. Registered user on Cofacts website has empty `appId`. | | `createdAt` | ISO Time string | The initial registration date for the user. | | `lastActiveAt` | ISO Time string | The last date the account is active. | | `blockedReason` | String | If exists, all submission from the user is hidden by Cofacts WG. This field contains the announcement to the reason why Cofacts WG blocks such user. | ## ⚠ [NOTICE] Caveats of using this data ⚠ The methodology we use to collect these data (i.e. [how Cofacts works](https://beta.hackfoldr.org/cofacts/https%253A%252F%252Fhackmd.io%252Fs%252FBJSdbUMpZ)) could have some impact on the data credibility. ![How cofacts work](https://i.imgur.com/e3Awc50.png) Please keep in mind that all data in this dataset are user-generated, thus is not free from noise and sampling bias coming from these sources: - The distribution Cofacts' users may not reflect the real distribution of all LINE users in Taiwan. - Users may not use Cofacts in the same way we want them to be. Some `articles` may not be actual messages circulating in LINE network. - `replies` may contain factual error. All replies should be merely regarded as "responses to the original message (`article`) to provide different point of view". They are neither the "truth" nor the editor's personal opinion. - There may also exist malicious users sending garbage `articles` into the database. [(Previous incident reports)](https://hackmd.io/@cofacts/incidents) - The program to collect data and to generate dataset may contain error. The dataset may be inaccurate systematically in this way. Lastly, the dataset is provided without warrenty. THE DATASET IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE DATASET OR THE USE OR OTHER DEALINGS IN THE DATASET.
18,741
[ [ -0.045867919921875, -0.06646728515625, 0.0221405029296875, 0.02197265625, -0.014007568359375, 0.00927734375, 0.0096435546875, -0.0296173095703125, 0.046875, 0.0340576171875, -0.060028076171875, -0.05584716796875, -0.0286102294921875, 0.0206756591796875, ...
clarin-knext/scifact-pl
2023-06-07T10:07:12.000Z
[ "language:pl", "arxiv:2305.19840", "region:us" ]
clarin-knext
null
null
0
144
2023-06-02T13:55:34
--- language: - pl pretty_name: BEIR-PL benchmark Scifact-PL --- Part of **BEIR-PL: Zero Shot Information Retrieval Benchmark for the Polish Language**. Link to arxiv: https://arxiv.org/pdf/2305.19840.pdf Contact: konrad.wojtasik@pwr.edu.pl
244
[ [ -0.0153961181640625, -0.0628662109375, 0.03546142578125, 0.0164337158203125, -0.0222320556640625, -0.01038360595703125, -0.01161956787109375, -0.03448486328125, -0.0013179779052734375, 0.028594970703125, -0.038299560546875, -0.04815673828125, -0.02899169921875, ...
baber/hendrycks_math
2023-08-25T21:15:56.000Z
[ "task_categories:text-generation", "size_categories:10K<n<100K", "language:en", "license:mit", "arxiv:2103.03874", "region:us" ]
baber
MATH is a dataset of 12,500 challenging competition mathematics problems. Each problem in Math has a full step-by-step solution which can be used to teach models to generate answer derivations and explanations.
@article{hendrycksmath2021, title={Measuring Mathematical Problem Solving With the Math Dataset}, author={Dan Hendrycks and Collin Burns and Saurav Kadavath and Akul Arora and Steven Basart and Eric Tang and Dawn Song and Jacob Steinhardt}, journal={NeurIPS}, year={2021} }
0
144
2023-08-19T14:28:52
--- license: mit task_categories: - text-generation language: - en pretty_name: MATH size_categories: - 10K<n<100K --- # Dataset Card for Dataset Name ## Dataset Description - **Homepage:** https://github.com/hendrycks/math/blob/main/README.md - **Repository:** https://github.com/hendrycks/math - **Paper:** https://arxiv.org/abs/2103.03874 ### Dataset Summary MATH contains 12,500 challenging competition mathematics problems. Each problem in MATH has a full step-by-step solution which can be used to teach models to generate answer derivations and explanation This dataset card aims to be a base template for new datasets. ### Languages [English] ## Dataset Structure ### Data Instances 7 sub-datasets ### Data Splits training: 7500 test: 5000 ## Additional Information ### Licensing Information MIT but check the [Legal Compliance](https://arxiv.org/pdf/2103.03874.pdf) section in appendix B of the paper as well as the [repo](https://github.com/hendrycks/math/blob/main/LICENSE). ### Citation Information @article{hendrycksmath2021, title={Measuring Mathematical Problem Solving With the MATH Dataset}, author={Dan Hendrycks and Collin Burns and Saurav Kadavath and Akul Arora and Steven Basart and Eric Tang and Dawn Song and Jacob Steinhardt}, journal={NeurIPS}, year={2021} }
1,313
[ [ -0.03985595703125, -0.034027099609375, 0.002895355224609375, 0.01690673828125, -0.0204010009765625, -0.007472991943359375, -0.01198577880859375, -0.00424957275390625, 0.00946044921875, 0.01837158203125, -0.05712890625, -0.051544189453125, -0.0330810546875, 0...
selfrag/selfrag_train_data
2023-10-31T19:37:22.000Z
[ "task_categories:text-generation", "size_categories:100K<n<1M", "language:en", "license:mit", "arxiv:2310.11511", "region:us" ]
selfrag
null
null
10
144
2023-10-18T19:55:39
--- license: mit task_categories: - text-generation language: - en size_categories: - 100K<n<1M --- This is a training data file for [Self-RAG](https://selfrag.github.io/) that generates outputs to diverse user queries as well as reflection tokens to call the retrieval system adaptively and criticize its own output and retrieved passages. Self-RAG is trained on our 150k diverse instruction-output pairs with interleaving passages and reflection tokens using the standard next-token prediction objective, enabling efficient and stable learning with fine-grained feedback. At inference, we leverage reflection tokens covering diverse aspects of generations to sample the best output aligning users' preferences. See full descriptions in [our paper](https://arxiv.org/abs/2310.11511) and [code](https://github.com/AkariAsai/self-rag). ## Citation and contact If you use this model, please cite our work: ``` @article{asai2023selfrag, author = {Asai, Akari and Wu, Zeqiu and Wang, Yizhong and Sil, Avirup and Hajishirzi, Hannaneh}, title = {{Self-RAG}: Learning to Retrieve, Generate, and Critique through Self-Reflection}, year = {2023}, journal = { arXiv preprint arXiv:2310.11511 }, URL = {https://arxiv.org/abs/2310.11511} } ```
1,265
[ [ -0.01314544677734375, -0.00682830810546875, 0.01476287841796875, 0.0135650634765625, -0.01215362548828125, 0.0063323974609375, 0.01033782958984375, -0.006786346435546875, 0.039764404296875, 0.051971435546875, -0.03778076171875, -0.0269775390625, -0.0261688232421...
dinhbinh161/vietnamese-tts
2023-11-02T06:34:38.000Z
[ "region:us" ]
dinhbinh161
null
null
0
144
2023-11-02T06:32:40
--- configs: - config_name: default data_files: - split: train path: data/train-* dataset_info: features: - name: client_id dtype: string - name: path dtype: string - name: audio dtype: audio: sampling_rate: 48000 - name: sentence dtype: string - name: up_votes dtype: int64 - name: down_votes dtype: int64 - name: age dtype: string - name: gender dtype: string - name: accent dtype: string - name: locale dtype: string - name: segment dtype: string - name: variant dtype: string - name: duration dtype: float64 - name: human_validated dtype: bool splits: - name: train num_bytes: 352333940.0 num_examples: 14235 download_size: 333808529 dataset_size: 352333940.0 --- # Dataset Card for "vietnamese-tts" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
960
[ [ -0.0272369384765625, -0.0231475830078125, 0.0220489501953125, 0.0245361328125, -0.031585693359375, 0.01311492919921875, 0.013916015625, -0.00732421875, 0.056304931640625, 0.037322998046875, -0.04827880859375, -0.067626953125, -0.053070068359375, 0.0022563934...
iohadrubin/mtop
2022-01-01T20:54:04.000Z
[ "region:us" ]
iohadrubin
0
143
2022-03-02T23:29:22
Entry not found
15
[ [ -0.0214080810546875, -0.01494598388671875, 0.05718994140625, 0.02880859375, -0.0350341796875, 0.046478271484375, 0.052520751953125, 0.00505828857421875, 0.051361083984375, 0.016998291015625, -0.052093505859375, -0.01494598388671875, -0.06036376953125, 0.0379...
jakeazcona/short-text-multi-labeled-emotion-classification
2021-12-02T01:08:12.000Z
[ "region:us" ]
jakeazcona
null
null
0
143
2022-03-02T23:29:22
Entry not found
15
[ [ -0.0214080810546875, -0.01494598388671875, 0.05718994140625, 0.02880859375, -0.0350341796875, 0.046478271484375, 0.052520751953125, 0.00505828857421875, 0.051361083984375, 0.016998291015625, -0.052093505859375, -0.01494598388671875, -0.06036376953125, 0.0379...
nielsr/eurosat-demo
2022-04-04T15:48:08.000Z
[ "region:us" ]
nielsr
null
null
1
143
2022-04-04T15:47:48
Entry not found
15
[ [ -0.02142333984375, -0.01495361328125, 0.05718994140625, 0.0288238525390625, -0.035064697265625, 0.046539306640625, 0.052520751953125, 0.005062103271484375, 0.0513916015625, 0.016998291015625, -0.052093505859375, -0.014984130859375, -0.060394287109375, 0.0379...
jordanparker6/publaynet
2022-07-19T04:20:00.000Z
[ "task_categories:image-to-text", "size_categories:100B<n<1T", "language:en", "license:other", "arxiv:1908.07836", "region:us" ]
jordanparker6
null
null
9
143
2022-07-17T23:32:26
--- title: PubLayNet license: other annotations_creators: [] language: - en size_categories: - 100B<n<1T source_datasets: [] task_categories: - image-to-text task_ids: [] --- # PubLayNet PubLayNet is a large dataset of document images, of which the layout is annotated with both bounding boxes and polygonal segmentations. The source of the documents is [PubMed Central Open Access Subset (commercial use collection)](https://www.ncbi.nlm.nih.gov/pmc/tools/openftlist/). The annotations are automatically generated by matching the PDF format and the XML format of the articles in the PubMed Central Open Access Subset. More details are available in our paper ["PubLayNet: largest dataset ever for document layout analysis."](https://arxiv.org/abs/1908.07836). The public dataset is in tar.gz format which doesn't fit nicely with huggingface streaming. Modifications have been made to optimise the delivery of the dataset for the hugginface datset api. The original files can be found [here](https://developer.ibm.com/exchanges/data/all/publaynet/). Licence: [Community Data License Agreement – Permissive – Version 1.0 License](https://cdla.dev/permissive-1-0/) Author: IBM GitHub: https://github.com/ibm-aur-nlp/PubLayNet @article{ zhong2019publaynet, title = { PubLayNet: largest dataset ever for document layout analysis }, author = { Zhong, Xu and Tang, Jianbin and Yepes, Antonio Jimeno }, journal = { arXiv preprint arXiv:1908.07836}, year. = { 2019 } }
1,476
[ [ -0.0311431884765625, -0.04595947265625, 0.03466796875, 0.02001953125, -0.01195526123046875, -0.0148773193359375, 0.004077911376953125, -0.023101806640625, 0.035064697265625, 0.06707763671875, -0.04168701171875, -0.04736328125, -0.031768798828125, -0.00168228...
kmyoo/cnn-dailymail-v1-tiny
2022-12-02T14:00:12.000Z
[ "region:us" ]
kmyoo
null
null
0
143
2022-12-02T13:59:35
Entry not found
15
[ [ -0.02142333984375, -0.01495361328125, 0.05718994140625, 0.0288238525390625, -0.035064697265625, 0.046539306640625, 0.052520751953125, 0.005062103271484375, 0.0513916015625, 0.016998291015625, -0.052093505859375, -0.014984130859375, -0.060394287109375, 0.0379...
keremberke/plane-detection
2023-01-27T13:46:18.000Z
[ "task_categories:object-detection", "roboflow", "roboflow2huggingface", "region:us" ]
keremberke
null
@misc{ overhead-plane-detector_dataset, title = { Overhead Plane Detector Dataset }, type = { Open Source Dataset }, author = { SkyBot Cam }, howpublished = { \\url{ https://universe.roboflow.com/skybot-cam/overhead-plane-detector } }, url = { https://universe.roboflow.com/skybot-cam/overhead-plane-detector }, journal = { Roboflow Universe }, publisher = { Roboflow }, year = { 2022 }, month = { jan }, note = { visited on 2023-01-27 }, }
2
143
2023-01-18T09:43:30
--- task_categories: - object-detection tags: - roboflow - roboflow2huggingface --- <div align="center"> <img width="640" alt="keremberke/plane-detection" src="https://huggingface.co/datasets/keremberke/plane-detection/resolve/main/thumbnail.jpg"> </div> ### Dataset Labels ``` ['planes'] ``` ### Number of Images ```json {'test': 25, 'valid': 50, 'train': 175} ``` ### How to Use - Install [datasets](https://pypi.org/project/datasets/): ```bash pip install datasets ``` - Load the dataset: ```python from datasets import load_dataset ds = load_dataset("keremberke/plane-detection", name="full") example = ds['train'][0] ``` ### Roboflow Dataset Page [https://universe.roboflow.com/skybot-cam/overhead-plane-detector/dataset/4](https://universe.roboflow.com/skybot-cam/overhead-plane-detector/dataset/4?ref=roboflow2huggingface) ### Citation ``` @misc{ overhead-plane-detector_dataset, title = { Overhead Plane Detector Dataset }, type = { Open Source Dataset }, author = { SkyBot Cam }, howpublished = { \\url{ https://universe.roboflow.com/skybot-cam/overhead-plane-detector } }, url = { https://universe.roboflow.com/skybot-cam/overhead-plane-detector }, journal = { Roboflow Universe }, publisher = { Roboflow }, year = { 2022 }, month = { jan }, note = { visited on 2023-01-27 }, } ``` ### License CC BY 4.0 ### Dataset Summary This dataset was exported via roboflow.ai on March 30, 2022 at 3:11 PM GMT It includes 250 images. Planes are annotated in COCO format. The following pre-processing was applied to each image: No image augmentation techniques were applied.
1,640
[ [ -0.048828125, -0.0138702392578125, 0.032470703125, 0.00890350341796875, -0.023529052734375, -0.009124755859375, 0.006946563720703125, -0.0221405029296875, 0.0310821533203125, 0.01409912109375, -0.055450439453125, -0.04107666015625, -0.037261962890625, 0.0016...
urialon/gov_report_validation
2023-02-28T15:40:57.000Z
[ "region:us" ]
urialon
null
null
0
143
2023-02-28T15:40:48
Entry not found
15
[ [ -0.0213775634765625, -0.014984130859375, 0.05718994140625, 0.0288543701171875, -0.0350341796875, 0.046478271484375, 0.052520751953125, 0.005062103271484375, 0.051361083984375, 0.016998291015625, -0.0521240234375, -0.01496124267578125, -0.0604248046875, 0.037...
MU-NLPC/Calc-gsm8k
2023-10-30T15:54:45.000Z
[ "task_categories:text-generation", "task_categories:question-answering", "size_categories:1K<n<10K", "language:en", "license:mit", "arxiv:2305.15017", "arxiv:2110.14168", "region:us" ]
MU-NLPC
null
null
1
143
2023-04-16T21:07:44
--- language: - en license: mit size_categories: - 1K<n<10K task_categories: - text-generation - question-answering dataset_info: - config_name: default features: - name: id dtype: string - name: question dtype: string - name: chain dtype: string - name: result dtype: string - name: result_float dtype: float64 splits: - name: train num_bytes: 5373420.477987422 num_examples: 7273 - name: validation num_bytes: 147763.5220125786 num_examples: 200 - name: test num_bytes: 993169 num_examples: 1319 download_size: 3140154 dataset_size: 6514353.0 - config_name: original-splits features: - name: id dtype: string - name: question dtype: string - name: chain dtype: string - name: result dtype: string - name: result_float dtype: float64 splits: - name: train num_bytes: 5521184 num_examples: 7473 - name: test num_bytes: 993169 num_examples: 1319 download_size: 0 dataset_size: 6514353 configs: - config_name: default data_files: - split: train path: data/train-* - split: validation path: data/validation-* - split: test path: data/test-* - config_name: original-splits data_files: - split: train path: original-splits/train-* - split: test path: original-splits/test-* --- # Dataset Card for Calc-gsm8k ## Summary This dataset is an instance of gsm8k dataset, converted to a simple html-like language that can be easily parsed (e.g. by BeautifulSoup). The data contains 3 types of tags: - gadget: A tag whose content is intended to be evaluated by calling an external tool (sympy-based calculator in this case) - output: An output of the external tool - result: The final answer to the mathematical problem (a number) ## Supported Tasks The dataset is intended for training Chain-of-Thought reasoning **models able to use external tools** to enhance the factuality of their responses. This dataset presents in-context scenarios where models can outsource the computations in the reasoning chain to a calculator. ## Construction Process The answers in the original dataset were in a structured but non-standard format. So, the answers were parsed, all arithmetical expressions were evaluated using a sympy-based calculator, the outputs were checked to be consistent with the intermediate results and exported into a simple html-like language that BeautifulSoup can parse. We also perform in-dataset and cross-dataset data-leak detection within the [Calc-X collection](https://huggingface.co/collections/MU-NLPC/calc-x-652fee9a6b838fd820055483) However, in case of gsm8k, we found no data leaks and removed no examples from the data. ## Content and Data splits For convenience, we created a validation set by sampling 200 random examples from the original train split. This is the default variant: ```python datasets.load_dataset("MU-NLPC/Calc-gsm8k") ``` The original data splits can be loaded using: ```python datasets.load_dataset("MU-NLPC/Calc-gsm8k", "original-splits") ``` For more info about the content of the dataset, see [gsm8k HF dataset](https://huggingface.co/datasets/gsm8k) and the [official repository](https://github.com/openai/grade-school-math). ## Related work This dataset was created as a part of a larger effort in training models capable of using a calculator during inference, which we call Calcformers. - [**Calc-X collection**](https://huggingface.co/collections/MU-NLPC/calc-x-652fee9a6b838fd820055483) - datasets for training Calcformers - [**Calcformers collection**](https://huggingface.co/collections/MU-NLPC/calcformers-65367392badc497807b3caf5) - calculator-using models we trained and published on HF - [**Calc-X and Calcformers paper**](https://arxiv.org/abs/2305.15017) - [**Calc-X and Calcformers repo**](https://github.com/prompteus/calc-x) Here are links to the original dataset: - [**original gsm8k dataset**](https://huggingface.co/datasets/gsm8k) - [**original gsm8k paper**](https://arxiv.org/abs/2110.14168) - [**original gsm8k repo**](https://github.com/openai/grade-school-math) ## Licence MIT, consistently with the original dataset. ## Cite If you use this version of the dataset in research, please cite the [original GSM8K paper](https://arxiv.org/abs/2110.14168), and [Calc-X collection](https://arxiv.org/abs/2305.15017) as follows: ```bibtex @inproceedings{kadlcik-etal-2023-soft, title = "Calc-X and Calcformers: Empowering Arithmetical Chain-of-Thought through Interaction with Symbolic Systems", author = "Marek Kadlčík and Michal Štefánik and Ondřej Sotolář and Vlastimil Martinek", booktitle = "Proceedings of the The 2023 Conference on Empirical Methods in Natural Language Processing: Main track", month = dec, year = "2023", address = "Singapore, Singapore", publisher = "Association for Computational Linguistics", url = "https://arxiv.org/abs/2305.15017", } ```
4,949
[ [ -0.026519775390625, -0.0217132568359375, 0.024871826171875, -0.0021839141845703125, -0.0015001296997070312, -0.01300048828125, -0.00672149658203125, -0.01593017578125, 0.01495361328125, 0.031463623046875, -0.042724609375, -0.035614013671875, -0.033203125, 0....
roszcz/pianofor-ai-sustain
2023-07-22T19:53:35.000Z
[ "region:us" ]
roszcz
null
null
0
143
2023-04-30T14:46:29
--- dataset_info: features: - name: notes struct: - name: duration sequence: float64 - name: end sequence: float64 - name: pitch sequence: int64 - name: start sequence: float64 - name: velocity sequence: int64 - name: midi_filename dtype: string - name: record_id dtype: int64 - name: user_id dtype: int64 - name: user dtype: string splits: - name: train num_bytes: 1187031441 num_examples: 5756 download_size: 465426973 dataset_size: 1187031441 --- # Dataset Card for "pianofor-ai-sustain" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
721
[ [ -0.03961181640625, -0.0194854736328125, 0.0157470703125, 0.0229034423828125, -0.006160736083984375, 0.0010089874267578125, -0.0071563720703125, -0.00499725341796875, 0.049041748046875, 0.034210205078125, -0.0684814453125, -0.05340576171875, -0.026641845703125, ...
tomaarsen/conll2003
2023-05-08T13:34:35.000Z
[ "task_categories:token-classification", "task_ids:named-entity-recognition", "task_ids:part-of-speech", "annotations_creators:crowdsourced", "language_creators:found", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:extended|other-reuters-corpus", "language:en", "lice...
tomaarsen
The shared task of CoNLL-2003 concerns language-independent named entity recognition. We will concentrate on four types of named entities: persons, locations, organizations and names of miscellaneous entities that do not belong to the previous three groups. The CoNLL-2003 shared task data files contain four columns separated by a single space. Each word has been put on a separate line and there is an empty line after each sentence. The first item on each line is a word, the second a part-of-speech (POS) tag, the third a syntactic chunk tag and the fourth the named entity tag. The chunk tags and the named entity tags have the format I-TYPE which means that the word is inside a phrase of type TYPE. Only if two phrases of the same type immediately follow each other, the first word of the second phrase will have tag B-TYPE to show that it starts a new phrase. A word with tag O is not part of a phrase. Note the dataset uses IOB2 tagging scheme, whereas the original dataset uses IOB1. For more details see https://www.clips.uantwerpen.be/conll2003/ner/ and https://www.aclweb.org/anthology/W03-0419
@inproceedings{tjong-kim-sang-de-meulder-2003-introduction, title = "Introduction to the {C}o{NLL}-2003 Shared Task: Language-Independent Named Entity Recognition", author = "Tjong Kim Sang, Erik F. and De Meulder, Fien", booktitle = "Proceedings of the Seventh Conference on Natural Language Learning at {HLT}-{NAACL} 2003", year = "2003", url = "https://www.aclweb.org/anthology/W03-0419", pages = "142--147", }
0
143
2023-05-08T13:33:26
--- annotations_creators: - crowdsourced language_creators: - found language: - en license: - other multilinguality: - monolingual size_categories: - 10K<n<100K source_datasets: - extended|other-reuters-corpus task_categories: - token-classification task_ids: - named-entity-recognition - part-of-speech paperswithcode_id: conll-2003 pretty_name: CoNLL-2003 dataset_info: features: - name: id dtype: string - name: tokens sequence: string - name: pos_tags sequence: class_label: names: '0': '"' '1': '''''' '2': '#' '3': $ '4': ( '5': ) '6': ',' '7': . '8': ':' '9': '``' '10': CC '11': CD '12': DT '13': EX '14': FW '15': IN '16': JJ '17': JJR '18': JJS '19': LS '20': MD '21': NN '22': NNP '23': NNPS '24': NNS '25': NN|SYM '26': PDT '27': POS '28': PRP '29': PRP$ '30': RB '31': RBR '32': RBS '33': RP '34': SYM '35': TO '36': UH '37': VB '38': VBD '39': VBG '40': VBN '41': VBP '42': VBZ '43': WDT '44': WP '45': WP$ '46': WRB - name: chunk_tags sequence: class_label: names: '0': O '1': B-ADJP '2': I-ADJP '3': B-ADVP '4': I-ADVP '5': B-CONJP '6': I-CONJP '7': B-INTJ '8': I-INTJ '9': B-LST '10': I-LST '11': B-NP '12': I-NP '13': B-PP '14': I-PP '15': B-PRT '16': I-PRT '17': B-SBAR '18': I-SBAR '19': B-UCP '20': I-UCP '21': B-VP '22': I-VP - name: ner_tags sequence: class_label: names: '0': O '1': B-PER '2': I-PER '3': B-ORG '4': I-ORG '5': B-LOC '6': I-LOC '7': B-MISC '8': I-MISC config_name: conll2003 splits: - name: train num_bytes: 6931345 num_examples: 14041 - name: validation num_bytes: 1739223 num_examples: 3250 - name: test num_bytes: 1582054 num_examples: 3453 download_size: 982975 dataset_size: 10252622 train-eval-index: - config: conll2003 task: token-classification task_id: entity_extraction splits: train_split: train eval_split: test col_mapping: tokens: tokens ner_tags: tags metrics: - type: seqeval name: seqeval --- # Dataset Card for "conll2003" ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [https://www.aclweb.org/anthology/W03-0419/](https://www.aclweb.org/anthology/W03-0419/) - **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Size of downloaded dataset files:** 4.85 MB - **Size of the generated dataset:** 10.26 MB - **Total amount of disk used:** 15.11 MB ### Dataset Summary The shared task of CoNLL-2003 concerns language-independent named entity recognition. We will concentrate on four types of named entities: persons, locations, organizations and names of miscellaneous entities that do not belong to the previous three groups. The CoNLL-2003 shared task data files contain four columns separated by a single space. Each word has been put on a separate line and there is an empty line after each sentence. The first item on each line is a word, the second a part-of-speech (POS) tag, the third a syntactic chunk tag and the fourth the named entity tag. The chunk tags and the named entity tags have the format I-TYPE which means that the word is inside a phrase of type TYPE. Only if two phrases of the same type immediately follow each other, the first word of the second phrase will have tag B-TYPE to show that it starts a new phrase. A word with tag O is not part of a phrase. Note the dataset uses IOB2 tagging scheme, whereas the original dataset uses IOB1. For more details see https://www.clips.uantwerpen.be/conll2003/ner/ and https://www.aclweb.org/anthology/W03-0419 ### Supported Tasks and Leaderboards [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Languages [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Dataset Structure ### Data Instances #### conll2003 - **Size of downloaded dataset files:** 4.85 MB - **Size of the generated dataset:** 10.26 MB - **Total amount of disk used:** 15.11 MB An example of 'train' looks as follows. ``` { "id": "0", "document_id": 1, "sentence_id": 3, "tokens": ["The", "European", "Commission", "said", "on", "Thursday", "it", "disagreed", "with", "German", "advice", "to", "consumers", "to", "shun", "British", "lamb", "until", "scientists", "determine", "whether", "mad", "cow", "disease", "can", "be", "transmitted", "to", "sheep", "."] "pos_tags": [12, 22, 22, 38, 15, 22, 28, 38, 15, 16, 21, 35, 24, 35, 37, 16, 21, 15, 24, 41, 15, 16, 21, 21, 20, 37, 40, 35, 21, 7], "ner_tags": [0, 3, 4, 0, 0, 0, 0, 0, 0, 7, 0, 0, 0, 0, 0, 7, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], "chunk_tags": [11, 12, 12, 21, 13, 11, 11, 21, 13, 11, 12, 13, 11, 21, 22, 11, 12, 17, 11, 21, 17, 11, 12, 12, 21, 22, 22, 13, 11, 0], } ``` The original data files have `-DOCSTART-` lines used to separate documents, but these lines are removed here. Indeed `-DOCSTART-` is a special line that acts as a boundary between two different documents, and it is filtered out in this implementation. ### Data Fields The data fields are the same among all splits. #### conll2003 - `id`: a `string` feature. - `document_id`: an `int32` feature tracking which document the sample is from. - `sentence_id`: an `int32` feature tracking which sentence in this document the sample is from. - `tokens`: a `list` of `string` features. - `pos_tags`: a `list` of classification labels (`int`). Full tagset with indices: ```python {'"': 0, "''": 1, '#': 2, '$': 3, '(': 4, ')': 5, ',': 6, '.': 7, ':': 8, '``': 9, 'CC': 10, 'CD': 11, 'DT': 12, 'EX': 13, 'FW': 14, 'IN': 15, 'JJ': 16, 'JJR': 17, 'JJS': 18, 'LS': 19, 'MD': 20, 'NN': 21, 'NNP': 22, 'NNPS': 23, 'NNS': 24, 'NN|SYM': 25, 'PDT': 26, 'POS': 27, 'PRP': 28, 'PRP$': 29, 'RB': 30, 'RBR': 31, 'RBS': 32, 'RP': 33, 'SYM': 34, 'TO': 35, 'UH': 36, 'VB': 37, 'VBD': 38, 'VBG': 39, 'VBN': 40, 'VBP': 41, 'VBZ': 42, 'WDT': 43, 'WP': 44, 'WP$': 45, 'WRB': 46} ``` - `chunk_tags`: a `list` of classification labels (`int`). Full tagset with indices: ```python {'O': 0, 'B-ADJP': 1, 'I-ADJP': 2, 'B-ADVP': 3, 'I-ADVP': 4, 'B-CONJP': 5, 'I-CONJP': 6, 'B-INTJ': 7, 'I-INTJ': 8, 'B-LST': 9, 'I-LST': 10, 'B-NP': 11, 'I-NP': 12, 'B-PP': 13, 'I-PP': 14, 'B-PRT': 15, 'I-PRT': 16, 'B-SBAR': 17, 'I-SBAR': 18, 'B-UCP': 19, 'I-UCP': 20, 'B-VP': 21, 'I-VP': 22} ``` - `ner_tags`: a `list` of classification labels (`int`). Full tagset with indices: ```python {'O': 0, 'B-PER': 1, 'I-PER': 2, 'B-ORG': 3, 'I-ORG': 4, 'B-LOC': 5, 'I-LOC': 6, 'B-MISC': 7, 'I-MISC': 8} ``` ### Data Splits | name |train|validation|test| |---------|----:|---------:|---:| |conll2003|14041| 3250|3453| ## Dataset Creation ### Curation Rationale [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Source Data #### Initial Data Collection and Normalization [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the source language producers? [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Annotations #### Annotation process [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the annotators? [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Personal and Sensitive Information [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Discussion of Biases [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Other Known Limitations [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Additional Information ### Dataset Curators [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Licensing Information From the [CoNLL2003 shared task](https://www.clips.uantwerpen.be/conll2003/ner/) page: > The English data is a collection of news wire articles from the Reuters Corpus. The annotation has been done by people of the University of Antwerp. Because of copyright reasons we only make available the annotations. In order to build the complete data sets you will need access to the Reuters Corpus. It can be obtained for research purposes without any charge from NIST. The copyrights are defined below, from the [Reuters Corpus page](https://trec.nist.gov/data/reuters/reuters.html): > The stories in the Reuters Corpus are under the copyright of Reuters Ltd and/or Thomson Reuters, and their use is governed by the following agreements: > > [Organizational agreement](https://trec.nist.gov/data/reuters/org_appl_reuters_v4.html) > > This agreement must be signed by the person responsible for the data at your organization, and sent to NIST. > > [Individual agreement](https://trec.nist.gov/data/reuters/ind_appl_reuters_v4.html) > > This agreement must be signed by all researchers using the Reuters Corpus at your organization, and kept on file at your organization. ### Citation Information ``` @inproceedings{tjong-kim-sang-de-meulder-2003-introduction, title = "Introduction to the {C}o{NLL}-2003 Shared Task: Language-Independent Named Entity Recognition", author = "Tjong Kim Sang, Erik F. and De Meulder, Fien", booktitle = "Proceedings of the Seventh Conference on Natural Language Learning at {HLT}-{NAACL} 2003", year = "2003", url = "https://www.aclweb.org/anthology/W03-0419", pages = "142--147", } ``` ### Contributions Thanks to [@jplu](https://github.com/jplu), [@vblagoje](https://github.com/vblagoje), [@lhoestq](https://github.com/lhoestq) for adding this dataset.
12,557
[ [ -0.046722412109375, -0.038360595703125, 0.011688232421875, 0.007465362548828125, -0.0115509033203125, 0.001377105712890625, -0.0213623046875, -0.0391845703125, 0.041046142578125, 0.026397705078125, -0.046661376953125, -0.06585693359375, -0.043609619140625, 0...
yuzuai/rakuda-questions
2023-06-23T08:01:35.000Z
[ "task_categories:conversational", "task_categories:question-answering", "size_categories:n<1K", "source_datasets:original", "language:ja", "license:mit", "region:us" ]
yuzuai
null
null
3
143
2023-06-23T01:08:52
--- license: mit language: - ja pretty_name: Rakuda - Questions for Japanese Models task_categories: - conversational - question-answering size_categories: - n<1K source_datasets: - original --- # Rakuda - Questions for Japanese models **Repository**: [https://github.com/yuzu-ai/japanese-llm-ranking](https://github.com/yuzu-ai/japanese-llm-ranking) This is a set of 40 questions in Japanese about Japanese-specific topics designed to evaluate the capabilities of AI Assistants in Japanese. The questions are evenly distributed between four categories: history, society, government, and geography. Questions in the first three categories are open-ended, while the geography questions are more specific. Answers to these questions can be used to rank the Japanese abilities of models, in the same way the [vicuna-eval questions](https://lmsys.org/vicuna_eval/) are frequently used to measure the usefulness of assistants. ## Usage ```python from datasets import load_dataset dataset = load_dataset("yuzuai/rakuda-questions") print(dataset) # => DatasetDict({ # train: Dataset({ # features: ['category', 'question_id', 'text'], # num_rows: 40 # }) # }) ```
1,201
[ [ -0.0543212890625, -0.06878662109375, 0.0400390625, 0.00696563720703125, -0.01007080078125, -0.026702880859375, -0.026123046875, -0.0217742919921875, 0.021453857421875, 0.043731689453125, -0.036956787109375, -0.04302978515625, -0.0296630859375, 0.018234252929...
FreedomIntelligence/evol-instruct-deutsch
2023-08-06T08:12:07.000Z
[ "region:us" ]
FreedomIntelligence
null
null
4
143
2023-06-30T03:43:08
The dataset is used in the research related to [MultilingualSIFT](https://github.com/FreedomIntelligence/MultilingualSIFT).
124
[ [ -0.028411865234375, -0.0213775634765625, -0.0003077983856201172, 0.0197601318359375, -0.00452423095703125, 0.004077911376953125, -0.0194244384765625, -0.03033447265625, 0.0289154052734375, 0.033935546875, -0.0643310546875, -0.032958984375, -0.01297760009765625, ...
llm-book/aio-retriever
2023-10-25T15:31:08.000Z
[ "size_categories:10K<n<100K", "language:ja", "region:us" ]
llm-book
null
null
0
143
2023-07-04T04:53:47
--- language: - ja size_categories: - 10K<n<100K dataset_info: features: - name: qid dtype: string - name: competition dtype: string - name: timestamp dtype: string - name: section dtype: string - name: number dtype: string - name: original_question dtype: string - name: original_answer dtype: string - name: original_additional_info dtype: string - name: question dtype: string - name: answers list: string - name: passages list: - name: passage_id dtype: int32 - name: title dtype: string - name: text dtype: string - name: positive_passage_indices list: int32 - name: negative_passage_indices list: int32 splits: - name: train num_bytes: 1742881639 num_examples: 22335 - name: validation num_bytes: 78671502 num_examples: 1000 download_size: 665253451 dataset_size: 1821553141 --- # Dataset Card for llm-book/aio-retriever 書籍『大規模言語モデル入門』で使用する、「AI王」コンペティションのQAデータセット(文書検索モデル訓練用)です。 GitHub リポジトリ [cl-tohoku/quiz-datasets](https://github.com/cl-tohoku/quiz-datasets) で公開されているデータセットを利用しています。 ## Licence 本データセットに含まれる一部のクイズ問題の著作権は [abc/EQIDEN 実行委員会](https://abc-dive.com/portal/)に帰属するものであり、これらのクイズ問題は本書における使用許諾を得ているものです。 本データセットに含まれる一部のクイズ問題は[株式会社キュービック](http://www.qbik.co.jp/)および[株式会社カプリティオ](https://capriccio.tokyo/)に依頼し作成したものであり、これらのクイズ問題は[クリエイティブ・コモンズ表示・継承ライセンス 4.0 (CC BY-SA 4.0)](https://creativecommons.org/licenses/by-sa/4.0/deed.ja) ライセンスの下に提供されています。 本データセットにパッセージとして付与されている Wikipedia のコンテンツは、[クリエイティブ・コモンズ表示・継承ライセンス 3.0 (CC BY-SA 3.0)](https://creativecommons.org/licenses/by-sa/3.0/deed.ja) および [GNU 自由文書ライセンス (GFDL)](https://www.gnu.org/licenses/fdl.html) の下に配布されているものです。 クイズ問題のライセンスについて、詳細は [cl-tohoku/quiz-datasets](https://github.com/cl-tohoku/quiz-datasets) を参照してください。
1,827
[ [ -0.0285797119140625, -0.04229736328125, 0.0218658447265625, -0.0054931640625, -0.042816162109375, -0.00702667236328125, -0.008087158203125, -0.0173187255859375, 0.0188446044921875, 0.033905029296875, -0.05126953125, -0.0709228515625, -0.0263824462890625, 0.0...
ASIDS/alpaca-cleaned-ru
2023-10-04T14:26:17.000Z
[ "task_categories:text-generation", "language_creators:translated", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:yahma/alpaca-cleaned", "language:ru", "license:cc-by-4.0", "instruction-finetuning", "region:us" ]
ASIDS
null
null
0
143
2023-10-04T09:52:39
--- dataset_info: features: - name: instruction dtype: string - name: output dtype: string - name: iteration dtype: uint32 splits: - name: train num_bytes: 74829755.0 num_examples: 51760 download_size: 36596664 dataset_size: 74829755.0 license: cc-by-4.0 language: - ru multilinguality: - monolingual tags: - instruction-finetuning pretty_name: alpaca-cleaned-ru task_categories: - text-generation size_categories: - 10K<n<100K source_datasets: - yahma/alpaca-cleaned language_creators: - translated --- # alpaca-cleaned-ru converter for autotrain from [d0rj/alpaca-cleaned-ru](https://huggingface.co/datasets/d0rj/alpaca-cleaned-ru) Translated version of [yahma/alpaca-cleaned](https://huggingface.co/datasets/yahma/alpaca-cleaned) into Russian. ## Dataset Description - **Repository:** https://github.com/gururise/AlpacaDataCleaned - **Repository:** https://huggingface.co/datasets/d0rj/alpaca-cleaned-ru
947
[ [ -0.014984130859375, -0.0183563232421875, 0.00969696044921875, 0.020904541015625, -0.051483154296875, -0.004604339599609375, -0.005573272705078125, -0.020843505859375, 0.036102294921875, 0.02001953125, -0.06927490234375, -0.027374267578125, -0.04248046875, -0...
Fraser/python-state-changes
2022-10-11T17:04:35.000Z
[ "language:code", "region:us" ]
Fraser
Python state changes from a single line of code.
null
6
142
2022-03-02T23:29:22
--- language: - code --- # Python State Changes State changes from the execution of single lines of Python code. All code was taken from Python HackerRank solutions. Scraped from my dataset of traced HackerRank solutions. https://www.kaggle.com/frasergreenlee/ran-hackerrank-solutions ```json {"start": "g = 100; i = 1; l = [100, 100, 0, 0, -100, -100]", "code": "g += l[i]", "end": "g = 200; i = 1; l = [100, 100, 0, 0, -100, -100]"} {"start": "a = 1; b = 2; d = 4; i = 3; j = 2", "code": "i, j = a + (j - b), b + (d - (i - a))", "end": "a = 1; b = 2; d = 4; i = 1; j = 4"} {"start": "b = 15", "code": "b = b // 2", "end": "b = 7"} ``` ## Get an overview of the dataset from seeing the frequency of different ASTs. 👉 https://observablehq.com/@frasergreenlee/python-lines-dataset#chart
790
[ [ 0.0026760101318359375, -0.046295166015625, 0.0247802734375, 0.026458740234375, -0.00156402587890625, 0.0143280029296875, -0.0038852691650390625, -0.0106658935546875, 0.03314208984375, 0.021820068359375, -0.0223846435546875, -0.0246429443359375, -0.03738403320312...
Tevatron/wikipedia-trivia-corpus
2021-09-13T23:35:14.000Z
[ "region:us" ]
Tevatron
null
@inproceedings{karpukhin-etal-2020-dense, title = "Dense Passage Retrieval for Open-Domain Question Answering", author = "Karpukhin, Vladimir and Oguz, Barlas and Min, Sewon and Lewis, Patrick and Wu, Ledell and Edunov, Sergey and Chen, Danqi and Yih, Wen-tau", booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)", month = nov, year = "2020", address = "Online", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/2020.emnlp-main.550", doi = "10.18653/v1/2020.emnlp-main.550", pages = "6769--6781", }
0
142
2022-03-02T23:29:22
Entry not found
15
[ [ -0.02142333984375, -0.014984130859375, 0.057220458984375, 0.0288238525390625, -0.03509521484375, 0.04656982421875, 0.052520751953125, 0.00506591796875, 0.0513916015625, 0.016998291015625, -0.052093505859375, -0.014984130859375, -0.060455322265625, 0.03793334...
strombergnlp/broad_twitter_corpus
2022-07-01T15:46:36.000Z
[ "task_categories:token-classification", "task_ids:named-entity-recognition", "annotations_creators:crowdsourced", "language_creators:found", "multilinguality:monolingual", "size_categories:100K<n<1M", "source_datasets:original", "language:en", "license:cc-by-4.0", "region:us" ]
strombergnlp
This is the Broad Twitter corpus, a dataset of tweets collected over stratified times, places and social uses. The goal is to represent a broad range of activities, giving a dataset more representative of the language used in this hardest of social media formats to process. Further, the BTC is annotated for named entities. For more details see [https://aclanthology.org/C16-1111/](https://aclanthology.org/C16-1111/)
@inproceedings{derczynski2016broad, title={Broad twitter corpus: A diverse named entity recognition resource}, author={Derczynski, Leon and Bontcheva, Kalina and Roberts, Ian}, booktitle={Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers}, pages={1169--1179}, year={2016} }
4
142
2022-04-28T09:58:09
--- annotations_creators: - crowdsourced language_creators: - found language: - en license: - cc-by-4.0 multilinguality: - monolingual size_categories: - 100K<n<1M source_datasets: - original task_categories: - token-classification task_ids: - named-entity-recognition paperswithcode_id: broad-twitter-corpus pretty_name: Broad Twitter Corpus --- # Dataset Card for broad_twitter_corpus ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-instances) - [Data Splits](#data-instances) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) ## Dataset Description - **Homepage:** [https://github.com/GateNLP/broad_twitter_corpus](https://github.com/GateNLP/broad_twitter_corpus) - **Repository:** [https://github.com/GateNLP/broad_twitter_corpus](https://github.com/GateNLP/broad_twitter_corpus) - **Paper:** [http://www.aclweb.org/anthology/C16-1111](http://www.aclweb.org/anthology/C16-1111) - **Leaderboard:** [Named Entity Recognition on Broad Twitter Corpus](https://paperswithcode.com/sota/named-entity-recognition-on-broad-twitter) - **Point of Contact:** [Leon Derczynski](https://github.com/leondz) ### Dataset Summary This is the Broad Twitter corpus, a dataset of tweets collected over stratified times, places and social uses. The goal is to represent a broad range of activities, giving a dataset more representative of the language used in this hardest of social media formats to process. Further, the BTC is annotated for named entities. See the paper, [Broad Twitter Corpus: A Diverse Named Entity Recognition Resource](http://www.aclweb.org/anthology/C16-1111), for details. ### Supported Tasks and Leaderboards * Named Entity Recognition * On PWC: [Named Entity Recognition on Broad Twitter Corpus](https://paperswithcode.com/sota/named-entity-recognition-on-broad-twitter) ### Languages English from UK, US, Australia, Canada, Ireland, New Zealand; `bcp47:en` ## Dataset Structure ### Data Instances Feature |Count ---|---: Documents |9 551 Tokens |165 739 Person entities |5 271 Location entities |3 114 Organization entities |3 732 ### Data Fields Each tweet contains an ID, a list of tokens, and a list of NER tags - `id`: a `string` feature. - `tokens`: a `list` of `strings` - `ner_tags`: a `list` of class IDs (`int`s) representing the NER class: ``` 0: O 1: B-PER 2: I-PER 3: B-ORG 4: I-ORG 5: B-LOC 6: I-LOC ``` ### Data Splits Section|Region|Collection period|Description|Annotators|Tweet count ---|---|---|---|---|---: A | UK| 2012.01| General collection |Expert| 1000 B |UK |2012.01-02 |Non-directed tweets |Expert |2000 E |Global| 2014.07| Related to MH17 disaster| Crowd & expert |200 F |Stratified |2009-2014| Twitterati |Crowd & expert |2000 G |Stratified| 2011-2014| Mainstream news| Crowd & expert| 2351 H |Non-UK| 2014 |General collection |Crowd & expert |2000 The most varied parts of the BTC are sections F and H. However, each of the remaining four sections has some specific readily-identifiable bias. So, we propose that one uses half of section H for evaluation and leaves the other half in the training data. Section H should be partitioned in the order of the JSON-format lines. Note that the CoNLL-format data is readily reconstructible from the JSON format, which is the authoritative data format from which others are derived. **Test**: Section F **Development**: Section H (the paper says "second half of Section H" but ordinality could be ambiguous, so it all goes in. Bonne chance) **Training**: everything else ## Dataset Creation ### Curation Rationale [Needs More Information] ### Source Data #### Initial Data Collection and Normalization [Needs More Information] #### Who are the source language producers? [Needs More Information] ### Annotations #### Annotation process [Needs More Information] #### Who are the annotators? [Needs More Information] ### Personal and Sensitive Information [Needs More Information] ## Considerations for Using the Data ### Social Impact of Dataset [Needs More Information] ### Discussion of Biases [Needs More Information] ### Other Known Limitations [Needs More Information] ## Additional Information ### Dataset Curators [Needs More Information] ### Licensing Information Creative Commons Attribution 4.0 International (CC BY 4.0) ### Citation Information ``` @inproceedings{derczynski2016broad, title={Broad twitter corpus: A diverse named entity recognition resource}, author={Derczynski, Leon and Bontcheva, Kalina and Roberts, Ian}, booktitle={Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers}, pages={1169--1179}, year={2016} } ``` ### Contributions Author-added dataset [@leondz](https://github.com/leondz)
5,598
[ [ -0.042083740234375, -0.0499267578125, 0.0163421630859375, 0.0198211669921875, -0.020294189453125, 0.0190277099609375, -0.042633056640625, -0.03961181640625, 0.044342041015625, 0.01959228515625, -0.044036865234375, -0.073486328125, -0.049163818359375, 0.01072...
laion/laion-art
2022-05-22T14:55:35.000Z
[ "region:us" ]
laion
null
null
23
142
2022-05-22T14:54:28
Entry not found
15
[ [ -0.02142333984375, -0.014984130859375, 0.057220458984375, 0.0288238525390625, -0.03509521484375, 0.04656982421875, 0.052520751953125, 0.00506591796875, 0.0513916015625, 0.016998291015625, -0.052093505859375, -0.014984130859375, -0.060455322265625, 0.03793334...
AlexZigma/msr-vtt
2023-07-13T10:35:08.000Z
[ "region:us" ]
AlexZigma
null
null
3
142
2023-07-12T13:47:28
--- dataset_info: features: - name: video_id dtype: string - name: caption dtype: string - name: sen_id dtype: int64 - name: category dtype: int64 - name: url dtype: string - name: start time dtype: float64 - name: end time dtype: float64 - name: split dtype: string - name: id dtype: int64 - name: __index_level_0__ dtype: int64 splits: - name: train num_bytes: 1102963 num_examples: 6513 - name: val num_bytes: 85199 num_examples: 497 download_size: 598248 dataset_size: 1188162 --- # Dataset Card for "msr-vtt" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
734
[ [ -0.03375244140625, -0.0203094482421875, 0.01288604736328125, -0.005062103271484375, -0.0251922607421875, 0.01337432861328125, 0.0291595458984375, 0.005619049072265625, 0.045989990234375, 0.030914306640625, -0.05255126953125, -0.0439453125, -0.038848876953125, ...
neovalle/H4rmony
2023-10-11T19:15:52.000Z
[ "task_categories:reinforcement-learning", "task_categories:text-classification", "task_categories:question-answering", "size_categories:1K<n<10K", "language:en", "license:cc-by-4.0", "Ecolinguistics", "Sustainability", "ecolinguistic", "environment", "doi:10.57967/hf/1148", "region:us" ]
neovalle
null
null
3
142
2023-09-02T18:39:29
--- license: cc-by-4.0 task_categories: - reinforcement-learning - text-classification - question-answering language: - en tags: - Ecolinguistics - Sustainability - ecolinguistic - environment size_categories: - 1K<n<10K --- # Dataset Card for Dataset H4rmony ### Dataset Summary The H4rmony dataset is a collection of prompts and completions aimed at integrating ecolinguistic principles into AI Large Language Models (LLMs). Developed with collaborative efforts from ecolinguistics enthusiasts and experts, it offers a series of prompts and corresponding pairwise responses ranked in terms of environmental awareness and alignment. This ranking provides a clear metric for the desired alignment and establishes a framework for LLMs fine-tuning, particularly in reinforcement learning, via reward model. This dataset aims to bridge the gap between AI and ecolinguistic values, pushing the envelope for creating generative AI models that are environmentally and sustainability aware by design. H4rmony is not just a dataset; it's a project towards harmonising AI with nature by means of fine-tuning. We believe in the potential of using ecolinguistics to fine-tune and influence LLMs towards more eco-aware outputs. This dataset is currently work in progress. ### Languages Currently only English but will be extended to multi-lingual. ## Dataset Structure ### Data Fields ![image/png](https://cdn-uploads.huggingface.co/production/uploads/64aac16fd4a402e8dce11ebe/tF_DPhg_R8jAyNRvVAuDz.png) ### Data Splits There are no splits on the dataset. Splits can be created when loading the dataset: dataset = (load_dataset('neovalle/H4rmony', split='train').train_test_split(test_size=0.2)) ## Dataset Creation ### Curation Rationale Given the multidisciplinary nature of the challenge, H4rmony dataset is being enriched by contributions from environmentalists, AI specialists, and ecolinguistics enthusiasts. This collective effort ensures the data is both technically sound and ecologically meaningful. ### Source Data #### Initial Data Collection and Normalization The core of the H4rmony dataset originated from active collaborations within the ecolinguistics community. Contributors were asked to submit prompts that would help uncover AI models' alignment with ecolinguistic values. A number of prompts and completions were AI-generated using prompt engineering. To this intial group of prompts, human crafted prompts. ### Personal and Sensitive Information This dataset doesn't contain sensitive information. ## Considerations for Using the Data This dataset is under construction and hasn't been fully tested yet. The datase might contain offensive language. ### Social Impact of Dataset The H4rmony project aims to help AI LLMs to give priority to the crucial importance of environmental consciousness. By serving as the fourth "H", "Harmony with nature", it complements the existing triad of Helpfulness, Honesty, and Harmlessness already well known in ethical AI development. ### Discussion of Biases Not known biases. ### Other Known Limitations The dataset is still under constructions and the current number of rows might not be enough for some usage cases. ## Additional Information ### Dataset Curators Jorge Vallego - airesearch@neovalle.co.uk ### Licensing Information Creative Commons Attribution 4.0 ### Citation Information dataset neovalle/H4rmony - airesearch@neovalle.co.uk ### Testing and PoC Repository https://github.com/Neovalle/H4rmony ### Note This project has its roots in the article "Ecolinguistics and AI: Integrating eco-awareness in natural language processing" https://www.ecoling.net/_files/ugd/ae088a_13cc4828a28e4955804d38e8721056cf.pdf
3,724
[ [ -0.032470703125, -0.036651611328125, 0.025970458984375, 0.01451873779296875, -0.00643157958984375, 0.0038700103759765625, -0.0303955078125, -0.061614990234375, 0.00968170166015625, 0.02252197265625, -0.055084228515625, -0.0401611328125, -0.01299285888671875, ...
mattymchen/lrs3-test
2023-09-05T10:37:16.000Z
[ "region:us" ]
mattymchen
null
null
0
142
2023-09-05T10:34:50
--- configs: - config_name: default data_files: - split: train path: data/train-* dataset_info: features: - name: idx dtype: int64 - name: audio sequence: int16 - name: video sequence: sequence: sequence: uint8 - name: label dtype: string splits: - name: train num_bytes: 824374107 num_examples: 1321 download_size: 677311360 dataset_size: 824374107 --- # Dataset Card for "lrs3-test" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
581
[ [ -0.0433349609375, -0.0192718505859375, 0.00914764404296875, 0.006603240966796875, -0.007770538330078125, 0.0001251697540283203, 0.02850341796875, -0.0218505859375, 0.039520263671875, 0.0245513916015625, -0.058746337890625, -0.033843994140625, -0.02490234375, ...
huyen89/SQuAD1_LLMs
2023-10-16T06:29:51.000Z
[ "region:us" ]
huyen89
null
null
0
142
2023-10-16T06:29:04
Entry not found
15
[ [ -0.021392822265625, -0.01494598388671875, 0.05718994140625, 0.028839111328125, -0.0350341796875, 0.046539306640625, 0.052490234375, 0.00507354736328125, 0.051361083984375, 0.01702880859375, -0.052093505859375, -0.01494598388671875, -0.06036376953125, 0.03790...
winvoker/turkish-sentiment-analysis-dataset
2023-07-19T13:15:13.000Z
[ "task_categories:text-classification", "task_ids:sentiment-classification", "annotations_creators:crowdsourced", "annotations_creators:expert-generated", "language_creators:crowdsourced", "multilinguality:monolingual", "size_categories:unknown", "language:tr", "license:cc-by-sa-4.0", "region:us" ]
winvoker
null
null
20
141
2022-03-02T23:29:22
--- annotations_creators: - crowdsourced - expert-generated language_creators: - crowdsourced language: - tr license: - cc-by-sa-4.0 multilinguality: - monolingual pretty_name: Turkish Sentiment Dataset size_categories: - unknown source_datasets: [] task_categories: - text-classification task_ids: - sentiment-classification --- # Dataset This dataset contains positive , negative and notr sentences from several data sources given in the references. In the most sentiment models , there are only two labels; positive and negative. However , user input can be totally notr sentence. For such cases there were no data I could find. Therefore I created this dataset with 3 class. Positive and negative sentences are listed below. Notr examples are extraced from turkish wiki dump. In addition, added some random text inputs like "Lorem ipsum dolor sit amet.". There are 492.782 labeled sentences. %10 of them used for testing. # Türkçe Duygu Analizi Veriseti Bu veriseti , farklı kaynaklardan derlenmiş pozitif , negatif ve nötr sınıflardan örnekler içerir. Bir çok verisetinde sadece pozitif ve negatif bulunur. Fakat kullanıcı input'u nötr olabilir. Bu tarz durumlar için türkçe bir dataset bulmakta zorlandım. Dolayısıyla , 3 sınıftan oluşan bu dataseti oluşturdum. Pozitif ve negatif örnekleri aldığın kaynaklar referans kısmında listelenmiştir. Nötr cümleler ise wikipedia datasından alınmıştır. Ek olarak bazı rastgele inputlar nötr olarak eklenmiştir. Örneğin: "Lorem ipsum dolor sit amet.". There are 492.782 labeled sentences. %10 of them used for testing. # References - https://www.kaggle.com/burhanbilenn/duygu-analizi-icin-urun-yorumlari - https://github.com/fthbrmnby/turkish-text-data - https://www.kaggle.com/mustfkeskin/turkish-wikipedia-dump - https://github.com/ezgisubasi/turkish-tweets-sentiment-analysis - http://humirapps.cs.hacettepe.edu.tr/ You can reach me via LinkedIn. https://www.linkedin.com/in/batuhanayhan/
1,944
[ [ -0.036895751953125, -0.0546875, 0.01800537109375, 0.0262298583984375, -0.0216522216796875, -0.033782958984375, -0.0108489990234375, -0.0194549560546875, 0.02001953125, 0.034088134765625, -0.0325927734375, -0.05694580078125, -0.0447998046875, 0.03732299804687...
feradauto/MoralExceptQA
2022-10-27T15:42:04.000Z
[ "task_categories:text-classification", "arxiv:2210.01478", "region:us" ]
feradauto
We present a novel challenge set consisting of moral exception question answering (MoralExceptQA) of cases that involve potentially permissible moral exceptions.
@misc{https://doi.org/10.48550/arxiv.2210.01478, doi = {10.48550/ARXIV.2210.01478}, url = {https://arxiv.org/abs/2210.01478}, author = {Jin, Zhijing and Levine, Sydney and Gonzalez, Fernando and Kamal, Ojasv and Sap, Maarten and Sachan, Mrinmaya and Mihalcea, Rada and Tenenbaum, Josh and Schölkopf, Bernhard}, keywords = {Computation and Language (cs.CL), Artificial Intelligence (cs.AI), Computers and Society (cs.CY), Machine Learning (cs.LG), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {When to Make Exceptions: Exploring Language Models as Accounts of Human Moral Judgment}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution Share Alike 4.0 International} }
1
141
2022-10-26T00:26:07
--- pretty_name: MoralExceptQA task_categories: - text-classification --- # Dataset Card for MoralExceptQA ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Repository:** [MoralCoT](https://github.com/feradauto/MoralCoT) - **Paper:** [When to Make Exceptions: Exploring Language Models as Accounts of Human Moral Judgment](https://arxiv.org/abs/2210.01478) - **Point of Contact:** [Fernando Gonzalez](mailto:fgonzalez@ethz.ch) , [Zhijing Jin](mailto:zjin@tue.mpg.de) ### Dataset Summary Challenge set consisting of moral exception question answering of cases that involve potentially permissible moral exceptions. Our challenge set, MoralExceptQA, is drawn from a series of recent moral psychology studies designed to investigate the flexibility of human moral cognition – specifically, the ability of humans to figure out when it is permissible to break a previously established or well-known rule. ### Languages The language in the dataset is English. ## Dataset Structure ### Data Instances Each instance is a rule-breaking scenario acompanied by an average human response. ### Data Fields - `study`: The moral psychology study. Studies were designed to investigate the ability of humans to figure out when it is permissible to break a previously established or well-known rule. - `context`: The context of the scenario. Different context within the same study are potentially governed by the same rule. - `condition`: Condition in the scenario. - `scenario`: Text description of the scenario. - `human.response`: Average human response (scale 0 to 1) equivalent to the % of people that considered that breaking the rule is permissible. ### Data Splits MoralExceptQA contains one split. ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data Information about the data collection and annotators can be found in the appendix of [our paper](https://arxiv.org/abs/2210.01478). ### Personal and Sensitive Information The MoralExceptQA dataset does not have privacy concerns. ## Considerations for Using the Data ### Social Impact of Dataset The intended use of this work is to contribute to AI safety research. We do not intend this work to be developed as a tool to automate moral decision-making on behalf of humans, but instead as a way of mitigating risks caused by LLMs’ misunderstanding of human values. The MoralExceptQA dataset does not have privacy concerns or offensive content. ### Discussion of Biases Our subjects are U.S. residents, and therefore our conclusions are limited to this population. ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information The MoralExceptQA dataset is licensed under a [Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License](https://creativecommons.org/licenses/by-nc-sa/4.0/). ### Citation Information ``` @misc{https://doi.org/10.48550/arxiv.2210.01478, doi = {10.48550/ARXIV.2210.01478}, url = {https://arxiv.org/abs/2210.01478}, author = {Jin, Zhijing and Levine, Sydney and Gonzalez, Fernando and Kamal, Ojasv and Sap, Maarten and Sachan, Mrinmaya and Mihalcea, Rada and Tenenbaum, Josh and Schölkopf, Bernhard}, keywords = {Computation and Language (cs.CL), Artificial Intelligence (cs.AI), Computers and Society (cs.CY), Machine Learning (cs.LG), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {When to Make Exceptions: Exploring Language Models as Accounts of Human Moral Judgment}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution Share Alike 4.0 International} } ```
4,568
[ [ -0.032318115234375, -0.0174713134765625, 0.050567626953125, 0.0245208740234375, -0.024993896484375, -0.017242431640625, -0.01983642578125, -0.017120361328125, -0.0149688720703125, 0.024749755859375, -0.06396484375, -0.042633056640625, -0.038299560546875, 0.0...
ehartford/WizardLM_alpaca_evol_instruct_70k_unfiltered
2023-04-28T07:36:17.000Z
[ "region:us" ]
ehartford
null
null
89
141
2023-04-27T07:12:18
This dataset is the WizardLM dataset victor123/evol_instruct_70k, removing instances of blatant alignment. 54974 instructions remain. inspired by https://huggingface.co/datasets/anon8231489123/ShareGPT_Vicuna_unfiltered All credit to anon8231489123 for the cleanup script that I adapted to wizardlm_clean.py --- license: apache-2.0 language: - en pretty_name: wizardlm-unfiltered ---
387
[ [ -0.0221710205078125, -0.034149169921875, 0.0046234130859375, -0.0025691986083984375, -0.00672149658203125, -0.0235443115234375, 0.013824462890625, -0.0179901123046875, 0.005157470703125, 0.08123779296875, -0.06146240234375, -0.03802490234375, -0.0174560546875, ...
buddhist-nlp/daizhige
2023-07-15T23:57:50.000Z
[ "region:us" ]
buddhist-nlp
null
null
0
141
2023-07-15T23:23:04
--- dataset_info: features: - name: text dtype: string splits: - name: train num_bytes: 5467464457 num_examples: 24759486 - name: validation num_bytes: 538238 num_examples: 2500 - name: test num_bytes: 539615 num_examples: 2500 download_size: 3760260006 dataset_size: 5468542310 --- # Dataset Card for "daizhige" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
489
[ [ -0.03533935546875, -0.022064208984375, 0.0078887939453125, 0.0222930908203125, -0.0204315185546875, -0.01611328125, 0.0119781494140625, -0.0079193115234375, 0.0645751953125, 0.024383544921875, -0.059661865234375, -0.0565185546875, -0.041900634765625, -0.0175...
ProlificAI/social-reasoning-rlhf
2023-10-11T08:50:59.000Z
[ "task_categories:text-generation", "size_categories:1K<n<10K", "language:en", "license:mit", "human-feedback", "rlhf", "region:us" ]
ProlificAI
null
null
1
141
2023-10-10T23:45:21
--- license: mit task_categories: - text-generation language: - en pretty_name: Social Reasoning RLHF size_categories: - 1K<n<10K tags: - human-feedback - rlhf --- ## Dataset Summary This repository provides access to a social reasoning dataset that aims to provide signal to how humans navigate social situations, how they reason about them and how they understand each other. It contains questions probing people's thinking and understanding of various social situations. This dataset was created by collating a set of questions within the following social reasoning tasks: * understanding of emotions * intent recognition * social norms * social responsibility * reading of social cues * perspective taking * conflict resolution * ethics * moral judgement * communication skills * negotiation strategies * understanding of empathy * understanding of compassion * understanding of trust * understanding and use of humour * showing kindness * navigating diversity and cultural differences * use of figurative language * self-awareness We asked a group of participants to provide their responses to the given questions, then we asked another group of participants to rate their responses in a pairwise comparison setting. The format of the dataset is as following: ```json { "question": "Question", "chosen": "The chosen response", "rejected": "The rejected response" } ``` ## Disclaimer The guidelines encouraged participants to provide respectful, empathetic and inclusive responses, however the dataset may still contain responses that some may find offensive or upsetting. ## Usage ```python from datasets import load_dataset dataset = load_dataset("ProlificAI/social-reasoning-rlhf") ``` ## About Prolific Robust AI is built on high-quality human data. [Prolific](https://www.prolific.com/) makes it easy to get honest, accurate feedback on your models, from our balanced and vetted pool of taskers. ### Contact Got any questions? Email ai@prolific.co
1,976
[ [ -0.0259246826171875, -0.05718994140625, 0.029632568359375, 0.0265655517578125, -0.0233917236328125, 0.005367279052734375, -0.0132293701171875, -0.02874755859375, 0.0157318115234375, 0.037750244140625, -0.047943115234375, -0.047760009765625, -0.041015625, 0.0...
covid_qa_ucsd
2023-06-01T14:59:47.000Z
[ "task_categories:question-answering", "task_ids:closed-domain-qa", "annotations_creators:found", "language_creators:expert-generated", "language_creators:found", "multilinguality:monolingual", "size_categories:1K<n<10K", "size_categories:n<1K", "source_datasets:original", "language:en", "languag...
null
null
@article{ju2020CovidDialog, title={CovidDialog: Medical Dialogue Datasets about COVID-19}, author={Ju, Zeqian and Chakravorty, Subrato and He, Xuehai and Chen, Shu and Yang, Xingyi and Xie, Pengtao}, journal={ https://github.com/UCSD-AI4H/COVID-Dialogue}, year={2020} }
1
140
2022-03-02T23:29:22
--- annotations_creators: - found language_creators: - expert-generated - found language: - en - zh license: - unknown multilinguality: - monolingual size_categories: - 1K<n<10K - n<1K source_datasets: - original task_categories: - question-answering task_ids: - closed-domain-qa pretty_name: CovidQaUcsd dataset_info: - config_name: en features: - name: dialogue_id dtype: int32 - name: dialogue_url dtype: string - name: dialogue_turns sequence: - name: speaker dtype: class_label: names: '0': Patient '1': Doctor - name: utterance dtype: string splits: - name: train num_bytes: 484944 num_examples: 572 download_size: 0 dataset_size: 484944 - config_name: zh features: - name: dialogue_id dtype: int32 - name: dialogue_url dtype: string - name: dialogue_turns sequence: - name: speaker dtype: class_label: names: '0': 病人 '1': 医生 - name: utterance dtype: string splits: - name: train num_bytes: 1352377 num_examples: 1088 download_size: 0 dataset_size: 1352377 config_names: - en - zh --- # Dataset Card for [Dataset Name] ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://github.com/UCSD-AI4H/COVID-Dialogue - **Repository:** The data is also present in the same [GIT](https://github.com/UCSD-AI4H/COVID-Dialogue) repository - **Paper:** https://pengtaoxie.github.io/coviddiag.pdf - **Leaderboard:** - **Point of Contact:** ### Dataset Summary COVID-Dialogue-Dataset-English is an English medical dialogue dataset about COVID-19 and other types of pneumonia. Patients who are concerned that they may be infected by COVID-19 or other pneumonia consult doctors and doctors provide advice. There are 603 consultations. COVID-Dialogue-Dataset-Chinese is a Chinese medical dialogue dataset about COVID-19 and other types of pneumonia. Patients who are concerned that they may be infected by COVID-19 or other pneumonia consult doctors and doctors provide advice. There are 1393 consultations. The dataset is present as a single text file. COVID-Dialogue-Dataset-Chinese.txt for Chinese and COVID-Dialogue-Dataset-English.txt for English. ### Supported Tasks and Leaderboards Used for QA tasks. There is also a COVID-19 dialogue generation model available for the Chinese Data. The pre-print and more information is available in [this arxiv pre-print](https://arxiv.org/abs/2005.05442). ### Languages Monolingual. The datasets are in English (EN) and Chinese (ZH) ## Dataset Structure ### Data Instances An example of dialogue is: ``` { 'dialogue_id': 602, 'dialogue_url': 'https://www.healthtap.com/member/fg?page=/search/covid', 'dialogue_turns': [{'speaker': 'Patient', 'utterance': 'Can coronavirus symptoms be mild for some people versus severe? For example, could it just involve being very fatigued, low grade fever for a few days and not the extreme symptoms? Or is it always a full blown cold and struggle to breathe?Can coronavirus symptoms be mild for some people versus severe? For example, could it just involve being very fatigued, low grade fever for a few days and not the extreme symptoms? Or is it always a full blown cold and struggle to breathe?'}, {'speaker': 'Doctor', 'utterance': 'In brief: Symptoms vary. Some may have no symptoms at all. Some can be life threatening. Would you like to video or text chat with me?'}] } ``` The dataset is built from [icliniq.com](https://www.icliniq.com/), [healthcaremagic.com](https://www.healthcaremagic.com/), [healthtap.com](https://www.healthtap.com/) and all copyrights of the data belong to these websites. _(for English)_ The dataset is built from [Haodf.com](https://www.haodf.com/) and all copyrights of the data belong to [Haodf.com](https://www.haodf.com/). _(for Chinese)_ ### Data Fields Each consultation consists of the below: - ID - URL - Description of patient’s medical condition - Dialogue - Diagnosis and suggestions (Optional, mostly for Chinese) For generating the QA only the below fields have been considered: - ID : Consultatation Identifier (restarts for each file) - URL: The url link of the extracted conversation - Dialogue : The conversation between the doctor and the patient. These are arranged as below in the prepared dataset. Each item will be represented with these parameters. - "file_name": string - signifies the file from which the conversation was extracted - "dialogue_id": int32 - the dialogue id - "dialogue_url": string - url of the conversation - "dialogue_turns": datasets.Sequence - sequence of dialogues between patient and the doctor.Consists ClassLabel(names=["病人", "医生"]), and "utterance"(string) for each turn. (ClassLable(names=["Patient", "Doctor"]) for english) ### Data Splits There are no data splits on the original data ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information @article{ju2020CovidDialog, title={CovidDialog: Medical Dialogue Datasets about COVID-19}, author={Ju, Zeqian and Chakravorty, Subrato and He, Xuehai and Chen, Shu and Yang, Xingyi and Xie, Pengtao}, journal={ https://github.com/UCSD-AI4H/COVID-Dialogue}, year={2020} } ### Contributions Thanks to [@vrindaprabhu](https://github.com/vrindaprabhu) for adding this dataset.
7,156
[ [ -0.0146636962890625, -0.04718017578125, 0.0113525390625, 0.02667236328125, -0.021820068359375, -0.00669097900390625, -0.0181427001953125, -0.03143310546875, 0.027923583984375, 0.0259552001953125, -0.05877685546875, -0.07415771484375, -0.0193023681640625, 0.0...
darkraipro/recipe-instructions
2022-01-18T16:22:01.000Z
[ "region:us" ]
darkraipro
null
null
0
140
2022-03-02T23:29:22
Entry not found
15
[ [ -0.02142333984375, -0.014984130859375, 0.057220458984375, 0.0288238525390625, -0.03509521484375, 0.04656982421875, 0.052520751953125, 0.00506591796875, 0.0513916015625, 0.016998291015625, -0.052093505859375, -0.014984130859375, -0.060455322265625, 0.03793334...
Jzuluaga/uwb_atcc
2022-12-05T11:15:20.000Z
[ "task_categories:automatic-speech-recognition", "multilinguality:monolingual", "language:en", "license:cc-by-nc-sa-4.0", "audio", "automatic-speech-recognition", "en-atc", "en", "noisy-speech-recognition", "speech-recognition", "arxiv:2203.16822", "region:us" ]
Jzuluaga
null
null
0
140
2022-11-28T07:12:02
--- dataset_info: features: - name: id dtype: string - name: audio dtype: audio: sampling_rate: 16000 - name: text dtype: string - name: segment_start_time dtype: float32 - name: segment_end_time dtype: float32 - name: duration dtype: float32 splits: - name: test num_bytes: 140620332.25 num_examples: 2822 - name: train num_bytes: 608597323.625 num_examples: 11291 download_size: 711464914 dataset_size: 749217655.875 tags: - audio - automatic-speech-recognition - en-atc - en - noisy-speech-recognition - speech-recognition task_categories: - automatic-speech-recognition language: - en multilinguality: - monolingual license: - cc-by-nc-sa-4.0 --- # Dataset Card for UWB-ATCC corpus ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages and Other Details](#languages-and-other-details) - [Dataset Structure](#dataset-structure) - [Data Fields](#data-fields) - [Additional Information](#additional-information) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) ## Dataset Description - **Homepage:** [UWB-ATCC corpus homepage](https://lindat.mff.cuni.cz/repository/xmlui/handle/11858/00-097C-0000-0001-CCA1-0) - **Repository:** [GitHub repository (used in research)](https://github.com/idiap/w2v2-air-traffic) - **Paper:** [Air traffic control communication (ATCC) speech corpora and their use for ASR and TTS development](https://link.springer.com/article/10.1007/s10579-019-09449-5) - **Paper of this research:** [How Does Pre-trained Wav2Vec 2.0 Perform on Domain Shifted ASR? An Extensive Benchmark on Air Traffic Control Communications](https://arxiv.org/abs/2203.16822) ### Dataset Summary The UWB-ATCC Corpus is provided provided by University of West Bohemia, Department of Cybernetics. The corpus contains recordings of communication between air traffic controllers and pilots. The speech is manually transcribed and labeled with the information about the speaker (pilot/controller, not the full identity of the person). The corpus is currently small (20 hours) but we plan to search for additional data next year. The audio data format is: 8kHz, 16bit PCM, mono. Important, from the `<id (string)>` field, you can obtain the speaker roles. For instance: - `_PI`: segment with only pilot speech - `_AT`: segment with only ATCO speech - `PIAT`: segment with both, ATCO and pilot speech ### Supported Tasks and Leaderboards - `automatic-speech-recognition`. Already adapted/fine-tuned models are available here --> [XLS-R-300m](https://huggingface.co/Jzuluaga/wav2vec2-large-960h-lv60-self-en-atc-atcosim). ### Languages and other details The text and the recordings are in English. The authors took advantage of the fact that one of their industrial partners develops complex IT solutions for several ATC authorities and airports and, as such, has access to the ATC communication recordings collected in the Czech airspace. This partner was able to secure the following data: - Ground control—communication before takeoff and after landing—19.2 h of data. - Tower control—communication during takeoff, landing and landing standby—22.5 h. - Approach control—communication during landing approach—25.5 h. - Area control—communication during overflights and cruises—71.3 h. (Not all data is released. Check their website [here](https://lindat.mff.cuni.cz/repository/xmlui/handle/11858/00-097C-0000-0001-CCA1-0)) ## Dataset Structure ### Data Fields - `id (string)`: a string of recording identifier for each example, corresponding to its. - `audio (audio)`: audio data for the given ID - `text (string)`: transcript of the file already normalized. Follow these repositories for more details [w2v2-air-traffic](https://github.com/idiap/w2v2-air-traffic) and [bert-text-diarization-atc](https://github.com/idiap/bert-text-diarization-atc) - `segment_start_time (float32)`: segment start time (normally 0) - `segment_end_time (float32): segment end time - `duration (float32)`: duration of the recording, compute as segment_end_time - segment_start_time ## Additional Information ### Licensing Information The licensing status of the dataset hinges on the legal status of the [UWB-ATCC corpus](https://lindat.mff.cuni.cz/repository/xmlui/handle/11858/00-097C-0000-0001-CCA1-0) creators. They used [Creative Commons - Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0)](https://creativecommons.org/licenses/by-nc-sa/4.0/) licensing. ### Citation Information Contributors who prepared, processed, normalized and uploaded the dataset in HuggingFace: ``` @article{zuluaga2022how, title={How Does Pre-trained Wav2Vec2. 0 Perform on Domain Shifted ASR? An Extensive Benchmark on Air Traffic Control Communications}, author={Zuluaga-Gomez, Juan and Prasad, Amrutha and Nigmatulina, Iuliia and Sarfjoo, Saeed and others}, journal={IEEE Spoken Language Technology Workshop (SLT), Doha, Qatar}, year={2022} } @article{zuluaga2022bertraffic, title={BERTraffic: BERT-based Joint Speaker Role and Speaker Change Detection for Air Traffic Control Communications}, author={Zuluaga-Gomez, Juan and Sarfjoo, Seyyed Saeed and Prasad, Amrutha and others}, journal={IEEE Spoken Language Technology Workshop (SLT), Doha, Qatar}, year={2022} } @article{zuluaga2022atco2, title={ATCO2 corpus: A Large-Scale Dataset for Research on Automatic Speech Recognition and Natural Language Understanding of Air Traffic Control Communications}, author={Zuluaga-Gomez, Juan and Vesel{\`y}, Karel and Sz{\"o}ke, Igor and Motlicek, Petr and others}, journal={arXiv preprint arXiv:2211.04054}, year={2022} } ``` Authors of the dataset: ``` @article{vsmidl2019air, title={Air traffic control communication (ATCC) speech corpora and their use for ASR and TTS development}, author={{\v{S}}m{\'\i}dl, Lubo{\v{s}} and {\v{S}}vec, Jan and Tihelka, Daniel and Matou{\v{s}}ek, Jind{\v{r}}ich and Romportl, Jan and Ircing, Pavel}, journal={Language Resources and Evaluation}, volume={53}, number={3}, pages={449--464}, year={2019}, publisher={Springer} } ```
6,293
[ [ -0.02935791015625, -0.043060302734375, 0.00434112548828125, 0.013885498046875, -0.03009033203125, 0.014984130859375, -0.040283203125, -0.04510498046875, 0.0090789794921875, 0.0298004150390625, -0.034088134765625, -0.053009033203125, -0.042449951171875, -0.01...
Alignment-Lab-AI/agentcode
2023-09-08T08:27:16.000Z
[ "region:us" ]
Alignment-Lab-AI
null
null
6
140
2023-09-07T21:05:50
Entry not found
15
[ [ -0.02142333984375, -0.01495361328125, 0.05718994140625, 0.0288238525390625, -0.035064697265625, 0.046539306640625, 0.052520751953125, 0.005062103271484375, 0.0513916015625, 0.016998291015625, -0.052093505859375, -0.014984130859375, -0.060394287109375, 0.0379...
DopeorNope/2000sample_COT
2023-10-19T15:37:10.000Z
[ "license:cc-by-nc-sa-4.0", "region:us" ]
DopeorNope
null
null
0
140
2023-09-21T12:01:52
--- dataset_info: features: - name: source dtype: string - name: target dtype: string - name: rationale dtype: string - name: task dtype: string - name: type dtype: string splits: - name: train num_bytes: 2298020 num_examples: 2159 download_size: 1099835 dataset_size: 2298020 license: cc-by-nc-sa-4.0 --- # Dataset Card for "2000sample_COT" # DopeorNope/Eng_Kor_COT_combined - KOpen-platypus + DopeorNope/2000sample_COT - 데이터셋 이용하셔서 모델이나 데이터셋을 만드실 때, 간단한 출처 표기를 해주신다면 연구에 큰 도움이 됩니다😭😭 - 고품질 한국어 데이터셋 + COT 방식으로 구성한 영어+ 한국어 dataset구성 --- [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
722
[ [ -0.041046142578125, -0.0268707275390625, 0.012664794921875, 0.041961669921875, -0.042999267578125, 0.006481170654296875, -0.0016183853149414062, -0.01485443115234375, 0.050994873046875, 0.03021240234375, -0.031585693359375, -0.04974365234375, -0.03936767578125, ...
result-kand2-sdxl-wuerst-karlo/6bf53b4b
2023-10-11T14:51:09.000Z
[ "region:us" ]
result-kand2-sdxl-wuerst-karlo
null
null
0
140
2023-10-11T14:51:08
--- dataset_info: features: - name: result dtype: string - name: id dtype: int64 splits: - name: train num_bytes: 162 num_examples: 10 download_size: 1350 dataset_size: 162 configs: - config_name: default data_files: - split: train path: data/train-* --- # Dataset Card for "6bf53b4b" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
455
[ [ -0.04400634765625, -0.00756072998046875, 0.01313018798828125, 0.033477783203125, -0.0162353515625, -0.0023632049560546875, 0.035797119140625, -0.02581787109375, 0.054229736328125, 0.033660888671875, -0.06341552734375, -0.046600341796875, -0.0309295654296875, ...
bookcorpusopen
2023-04-05T09:41:59.000Z
[ "task_categories:text-generation", "task_categories:fill-mask", "task_ids:language-modeling", "task_ids:masked-language-modeling", "annotations_creators:no-annotation", "language_creators:found", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:en"...
null
Books are a rich source of both fine-grained information, how a character, an object or a scene looks like, as well as high-level semantics, what someone is thinking, feeling and how these states evolve through a story. This version of bookcorpus has 17868 dataset items (books). Each item contains two fields: title and text. The title is the name of the book (just the file name) while text contains unprocessed book text. The bookcorpus has been prepared by Shawn Presser and is generously hosted by The-Eye. The-Eye is a non-profit, community driven platform dedicated to the archiving and long-term preservation of any and all data including but by no means limited to... websites, books, games, software, video, audio, other digital-obscura and ideas.
@InProceedings{Zhu_2015_ICCV, title = {Aligning Books and Movies: Towards Story-Like Visual Explanations by Watching Movies and Reading Books}, author = {Zhu, Yukun and Kiros, Ryan and Zemel, Rich and Salakhutdinov, Ruslan and Urtasun, Raquel and Torralba, Antonio and Fidler, Sanja}, booktitle = {The IEEE International Conference on Computer Vision (ICCV)}, month = {December}, year = {2015} }
24
139
2022-03-02T23:29:22
--- annotations_creators: - no-annotation language_creators: - found language: - en license: - unknown multilinguality: - monolingual pretty_name: BookCorpusOpen size_categories: - 10K<n<100K source_datasets: - original task_categories: - text-generation - fill-mask task_ids: - language-modeling - masked-language-modeling paperswithcode_id: bookcorpus dataset_info: features: - name: title dtype: string - name: text dtype: string config_name: plain_text splits: - name: train num_bytes: 6643435392 num_examples: 17868 download_size: 2404269430 dataset_size: 6643435392 --- # Dataset Card for BookCorpusOpen ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [https://github.com/soskek/bookcorpus/issues/27](https://github.com/soskek/bookcorpus/issues/27) - **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Size of downloaded dataset files:** 2.40 GB - **Size of the generated dataset:** 6.64 GB - **Total amount of disk used:** 9.05 GB ### Dataset Summary Books are a rich source of both fine-grained information, how a character, an object or a scene looks like, as well as high-level semantics, what someone is thinking, feeling and how these states evolve through a story. This version of bookcorpus has 17868 dataset items (books). Each item contains two fields: title and text. The title is the name of the book (just the file name) while text contains unprocessed book text. The bookcorpus has been prepared by Shawn Presser and is generously hosted by The-Eye. The-Eye is a non-profit, community driven platform dedicated to the archiving and long-term preservation of any and all data including but by no means limited to... websites, books, games, software, video, audio, other digital-obscura and ideas. ### Supported Tasks and Leaderboards [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Languages [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Dataset Structure ### Data Instances #### plain_text - **Size of downloaded dataset files:** 2.40 GB - **Size of the generated dataset:** 6.64 GB - **Total amount of disk used:** 9.05 GB An example of 'train' looks as follows. ``` This example was too long and was cropped: { "text": "\"\\n\\nzONE\\n\\n## The end and the beginning\\n\\nby\\n\\nPhilip F. Blood\\n\\nSMASHWORDS EDITION\\n\\nVersion 3.55\\n\\nPUBLISHED BY:\\n\\nPhi...", "title": "zone-the-end-and-the-beginning.epub.txt" } ``` ### Data Fields The data fields are the same among all splits. #### plain_text - `title`: a `string` feature. - `text`: a `string` feature. ### Data Splits | name |train| |----------|----:| |plain_text|17868| ## Dataset Creation ### Curation Rationale [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Source Data #### Initial Data Collection and Normalization [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the source language producers? [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Annotations #### Annotation process [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the annotators? [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Personal and Sensitive Information [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Discussion of Biases [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Other Known Limitations [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Additional Information ### Dataset Curators [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Licensing Information The books have been crawled from smashwords.com, see their [terms of service](https://www.smashwords.com/about/tos) for more information. A data sheet for this dataset has also been created and published in [Addressing "Documentation Debt" in Machine Learning Research: A Retrospective Datasheet for BookCorpus](https://arxiv.org/abs/2105.05241) ### Citation Information ``` @InProceedings{Zhu_2015_ICCV, title = {Aligning Books and Movies: Towards Story-Like Visual Explanations by Watching Movies and Reading Books}, author = {Zhu, Yukun and Kiros, Ryan and Zemel, Rich and Salakhutdinov, Ruslan and Urtasun, Raquel and Torralba, Antonio and Fidler, Sanja}, booktitle = {The IEEE International Conference on Computer Vision (ICCV)}, month = {December}, year = {2015} } ``` ### Contributions Thanks to [@vblagoje](https://github.com/vblagoje) for adding this dataset.
6,967
[ [ -0.0391845703125, -0.033660888671875, -0.002613067626953125, 0.0035991668701171875, -0.02301025390625, 0.0021820068359375, -0.0162811279296875, -0.0367431640625, 0.036529541015625, 0.049224853515625, -0.06488037109375, -0.06402587890625, -0.031402587890625, ...
gap
2023-04-05T10:06:30.000Z
[ "task_categories:token-classification", "task_ids:coreference-resolution", "annotations_creators:crowdsourced", "language_creators:found", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:original", "language:en", "license:unknown", "arxiv:1810.05201", "region:us" ]
null
GAP is a gender-balanced dataset containing 8,908 coreference-labeled pairs of (ambiguous pronoun, antecedent name), sampled from Wikipedia and released by Google AI Language for the evaluation of coreference resolution in practical applications.
@article{DBLP:journals/corr/abs-1810-05201, author = {Kellie Webster and Marta Recasens and Vera Axelrod and Jason Baldridge}, title = {Mind the {GAP:} {A} Balanced Corpus of Gendered Ambiguous Pronouns}, journal = {CoRR}, volume = {abs/1810.05201}, year = {2018}, url = {http://arxiv.org/abs/1810.05201}, archivePrefix = {arXiv}, eprint = {1810.05201}, timestamp = {Tue, 30 Oct 2018 20:39:56 +0100}, biburl = {https://dblp.org/rec/bib/journals/corr/abs-1810-05201}, bibsource = {dblp computer science bibliography, https://dblp.org} }
2
139
2022-03-02T23:29:22
--- annotations_creators: - crowdsourced language: - en language_creators: - found license: - unknown multilinguality: - monolingual pretty_name: GAP Benchmark Suite size_categories: - 1K<n<10K source_datasets: - original task_categories: - token-classification task_ids: - coreference-resolution paperswithcode_id: gap dataset_info: features: - name: ID dtype: string - name: Text dtype: string - name: Pronoun dtype: string - name: Pronoun-offset dtype: int32 - name: A dtype: string - name: A-offset dtype: int32 - name: A-coref dtype: bool - name: B dtype: string - name: B-offset dtype: int32 - name: B-coref dtype: bool - name: URL dtype: string splits: - name: train num_bytes: 1095623 num_examples: 2000 - name: validation num_bytes: 248329 num_examples: 454 - name: test num_bytes: 1090462 num_examples: 2000 download_size: 2401971 dataset_size: 2434414 --- # Dataset Card for "gap" ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [https://github.com/google-research-datasets/gap-coreference](https://github.com/google-research-datasets/gap-coreference) - **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Paper:** [Mind the GAP: A Balanced Corpus of Gendered Ambiguous Pronouns](https://arxiv.org/abs/1810.05201) - **Point of Contact:** [gap-coreference@google.com](mailto:gap-coreference@google.com) - **Size of downloaded dataset files:** 2.40 MB - **Size of the generated dataset:** 2.43 MB - **Total amount of disk used:** 4.83 MB ### Dataset Summary GAP is a gender-balanced dataset containing 8,908 coreference-labeled pairs of (ambiguous pronoun, antecedent name), sampled from Wikipedia and released by Google AI Language for the evaluation of coreference resolution in practical applications. ### Supported Tasks and Leaderboards [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Languages [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Dataset Structure ### Data Instances #### default - **Size of downloaded dataset files:** 2.40 MB - **Size of the generated dataset:** 2.43 MB - **Total amount of disk used:** 4.83 MB An example of 'validation' looks as follows. ``` { "A": "aliquam ultrices sagittis", "A-coref": false, "A-offset": 208, "B": "elementum curabitur vitae", "B-coref": false, "B-offset": 435, "ID": "validation-1", "Pronoun": "condimentum mattis pellentesque", "Pronoun-offset": 948, "Text": "Lorem ipsum dolor", "URL": "sem fringilla ut" } ``` ### Data Fields The data fields are the same among all splits. #### default - `ID`: a `string` feature. - `Text`: a `string` feature. - `Pronoun`: a `string` feature. - `Pronoun-offset`: a `int32` feature. - `A`: a `string` feature. - `A-offset`: a `int32` feature. - `A-coref`: a `bool` feature. - `B`: a `string` feature. - `B-offset`: a `int32` feature. - `B-coref`: a `bool` feature. - `URL`: a `string` feature. ### Data Splits | name |train|validation|test| |-------|----:|---------:|---:| |default| 2000| 454|2000| ## Dataset Creation ### Curation Rationale [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Source Data #### Initial Data Collection and Normalization [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the source language producers? [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Annotations #### Annotation process [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the annotators? [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Personal and Sensitive Information [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Discussion of Biases [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Other Known Limitations [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Additional Information ### Dataset Curators [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Licensing Information [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Citation Information ``` @article{webster-etal-2018-mind, title = "Mind the {GAP}: A Balanced Corpus of Gendered Ambiguous Pronouns", author = "Webster, Kellie and Recasens, Marta and Axelrod, Vera and Baldridge, Jason", journal = "Transactions of the Association for Computational Linguistics", volume = "6", year = "2018", address = "Cambridge, MA", publisher = "MIT Press", url = "https://aclanthology.org/Q18-1042", doi = "10.1162/tacl_a_00240", pages = "605--617", } ``` ### Contributions Thanks to [@thomwolf](https://github.com/thomwolf), [@patrickvonplaten](https://github.com/patrickvonplaten), [@otakumesi](https://github.com/otakumesi), [@lewtun](https://github.com/lewtun) for adding this dataset.
7,131
[ [ -0.0521240234375, -0.043426513671875, 0.018463134765625, 0.0097503662109375, -0.00647735595703125, 0.0007519721984863281, -0.0270538330078125, -0.02734375, 0.034454345703125, 0.0190277099609375, -0.066162109375, -0.06884765625, -0.039215087890625, 0.00789642...
id_clickbait
2023-01-25T14:32:36.000Z
[ "task_categories:text-classification", "task_ids:fact-checking", "annotations_creators:expert-generated", "language_creators:expert-generated", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:id", "license:cc-by-4.0", "region:us" ]
null
The CLICK-ID dataset is a collection of Indonesian news headlines that was collected from 12 local online news publishers; detikNews, Fimela, Kapanlagi, Kompas, Liputan6, Okezone, Posmetro-Medan, Republika, Sindonews, Tempo, Tribunnews, and Wowkeren. This dataset is comprised of mainly two parts; (i) 46,119 raw article data, and (ii) 15,000 clickbait annotated sample headlines. Annotation was conducted with 3 annotator examining each headline. Judgment were based only on the headline. The majority then is considered as the ground truth. In the annotated sample, our annotation shows 6,290 clickbait and 8,710 non-clickbait.
@inproceedings{id_clickbait, author = {Andika William, Yunita Sari}, title = {CLICK-ID: A Novel Dataset for Indonesian Clickbait Headlines}, year = {2020}, url = {http://dx.doi.org/10.17632/k42j7x2kpn.1}, }
0
139
2022-03-02T23:29:22
--- annotations_creators: - expert-generated language_creators: - expert-generated language: - id license: - cc-by-4.0 multilinguality: - monolingual size_categories: - 10K<n<100K source_datasets: - original task_categories: - text-classification task_ids: - fact-checking pretty_name: Indonesian Clickbait Headlines dataset_info: - config_name: annotated features: - name: id dtype: string - name: title dtype: string - name: label dtype: class_label: names: '0': non-clickbait '1': clickbait splits: - name: train num_bytes: 1268698 num_examples: 15000 download_size: 150769127 dataset_size: 1268698 - config_name: raw features: - name: id dtype: string - name: title dtype: string - name: source dtype: string - name: date dtype: string - name: category dtype: string - name: sub-category dtype: string - name: content dtype: string - name: url dtype: string splits: - name: train num_bytes: 81669386 num_examples: 38655 download_size: 150769127 dataset_size: 81669386 --- # Dataset Card for Indonesian Clickbait Headlines ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://data.mendeley.com/datasets/k42j7x2kpn/1 - **Repository:** - **Paper:** [CLICK-ID: A Novel Dataset for Indonesian Clickbait Headlines](https://www.sciencedirect.com/science/article/pii/S2352340920311252#!) - **Leaderboard:** - **Point of Contact:** [Andika William](mailto:andika.william@mail.ugm.ac.id), [Yunita Sari](mailto:yunita.sari@ugm.ac.id) ### Dataset Summary The CLICK-ID dataset is a collection of Indonesian news headlines that was collected from 12 local online news publishers; detikNews, Fimela, Kapanlagi, Kompas, Liputan6, Okezone, Posmetro-Medan, Republika, Sindonews, Tempo, Tribunnews, and Wowkeren. This dataset is comprised of mainly two parts; (i) 46,119 raw article data, and (ii) 15,000 clickbait annotated sample headlines. Annotation was conducted with 3 annotator examining each headline. Judgment were based only on the headline. The majority then is considered as the ground truth. In the annotated sample, our annotation shows 6,290 clickbait and 8,710 non-clickbait. ### Supported Tasks and Leaderboards [More Information Needed] ### Languages Indonesian ## Dataset Structure ### Data Instances An example of the annotated article: ``` { 'id': '100', 'label': 1, 'title': "SAH! Ini Daftar Nama Menteri Kabinet Jokowi - Ma'ruf Amin" } > ``` ### Data Fields #### Annotated - `id`: id of the sample - `title`: the title of the news article - `label`: the label of the article, either non-clickbait or clickbait #### Raw - `id`: id of the sample - `title`: the title of the news article - `source`: the name of the publisher/newspaper - `date`: date - `category`: the category of the article - `sub-category`: the sub category of the article - `content`: the content of the article - `url`: the url of the article ### Data Splits The dataset contains train set. ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information Creative Commons Attribution 4.0 International license ### Citation Information ``` @article{WILLIAM2020106231, title = "CLICK-ID: A novel dataset for Indonesian clickbait headlines", journal = "Data in Brief", volume = "32", pages = "106231", year = "2020", issn = "2352-3409", doi = "https://doi.org/10.1016/j.dib.2020.106231", url = "http://www.sciencedirect.com/science/article/pii/S2352340920311252", author = "Andika William and Yunita Sari", keywords = "Indonesian, Natural Language Processing, News articles, Clickbait, Text-classification", abstract = "News analysis is a popular task in Natural Language Processing (NLP). In particular, the problem of clickbait in news analysis has gained attention in recent years [1, 2]. However, the majority of the tasks has been focused on English news, in which there is already a rich representative resource. For other languages, such as Indonesian, there is still a lack of resource for clickbait tasks. Therefore, we introduce the CLICK-ID dataset of Indonesian news headlines extracted from 12 Indonesian online news publishers. It is comprised of 15,000 annotated headlines with clickbait and non-clickbait labels. Using the CLICK-ID dataset, we then developed an Indonesian clickbait classification model achieving favourable performance. We believe that this corpus will be useful for replicable experiments in clickbait detection or other experiments in NLP areas." } ``` ### Contributions Thanks to [@cahya-wirawan](https://github.com/cahya-wirawan) for adding this dataset.
6,346
[ [ -0.0289459228515625, -0.050079345703125, 0.00392913818359375, 0.03271484375, -0.028045654296875, -0.01500701904296875, -0.01194000244140625, -0.025634765625, 0.045074462890625, 0.048614501953125, -0.0195770263671875, -0.0677490234375, -0.05267333984375, 0.04...
Bingsu/Human_Action_Recognition
2022-07-05T02:48:56.000Z
[ "task_categories:image-classification", "size_categories:10K<n<100K", "source_datasets:original", "language:en", "license:odbl", "region:us" ]
Bingsu
null
null
7
139
2022-06-09T02:00:52
--- language: - en license: - odbl pretty_name: Human Action Recognition size_categories: - 10K<n<100K source_datasets: - original task_categories: - image-classification --- ## Dataset Description - **Homepage:** [Human Action Recognition (HAR) Dataset](https://www.kaggle.com/datasets/meetnagadia/human-action-recognition-har-dataset) - **Repository:** N/A - **Paper:** N/A - **Leaderboard:** N/A - **Point of Contact:** N/A ## Dataset Summary A dataset from [kaggle](https://www.kaggle.com/datasets/meetnagadia/human-action-recognition-har-dataset). origin: https://dphi.tech/challenges/data-sprint-76-human-activity-recognition/233/data ### Introduction - The dataset features 15 different classes of Human Activities. - The dataset contains about 12k+ labelled images including the validation images. - Each image has only one human activity category and are saved in separate folders of the labelled classes ### PROBLEM STATEMENT - Human Action Recognition (HAR) aims to understand human behavior and assign a label to each action. It has a wide range of applications, and therefore has been attracting increasing attention in the field of computer vision. Human actions can be represented using various data modalities, such as RGB, skeleton, depth, infrared, point cloud, event stream, audio, acceleration, radar, and WiFi signal, which encode different sources of useful yet distinct information and have various advantages depending on the application scenarios. - Consequently, lots of existing works have attempted to investigate different types of approaches for HAR using various modalities. - Your Task is to build an Image Classification Model using CNN that classifies to which class of activity a human is performing. ### About Files - Train - contains all the images that are to be used for training your model. In this folder you will find 15 folders namely - 'calling', ’clapping’, ’cycling’, ’dancing’, ‘drinking’, ‘eating’, ‘fighting’, ‘hugging’, ‘laughing’, ‘listeningtomusic’, ‘running’, ‘sitting’, ‘sleeping’, texting’, ‘using_laptop’ which contain the images of the respective human activities. - Test - contains 5400 images of Human Activities. For these images you are required to make predictions as the respective class names -'calling', ’clapping’, ’cycling’, ’dancing’, ‘drinking’, ‘eating’, ‘fighting’, ‘hugging’, ‘laughing’, ‘listeningtomusic’, ‘running’, ‘sitting’, ‘sleeping’, texting’, ‘using_laptop’. - Testing_set.csv - this is the order of the predictions for each image that is to be submitted on the platform. Make sure the predictions you download are with their image’s filename in the same order as given in this file. - sample_submission: This is a csv file that contains the sample submission for the data sprint. ### Data Fields The data instances have the following fields: - `image`: A `PIL.Image.Image` object containing the image. Note that when accessing the image column: `dataset[0]["image"]` the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. Thus it is important to first query the sample index before the `"image"` column, *i.e.* `dataset[0]["image"]` should **always** be preferred over `dataset["image"][0]`. - `labels`: an `int` classification label. All `test` data is labeled 0. ### Class Label Mappings: ``` { 'calling': 0, 'clapping': 1, 'cycling': 2, 'dancing': 3, 'drinking': 4, 'eating': 5, 'fighting': 6, 'hugging': 7, 'laughing': 8, 'listening_to_music': 9, 'running': 10, 'sitting': 11, 'sleeping': 12, 'texting': 13, 'using_laptop': 14 } ``` ### Data Splits | | train | test | |---------------|--------|-----:| | # of examples | 12600 | 5400 | ### Data Size - download: 311.96 MiB - generated: 312.59 MiB - total: 624.55 MiB ```pycon >>> from datasets import load_dataset >>> ds = load_dataset("Bingsu/Human_Action_Recognition") >>> ds DatasetDict({ test: Dataset({ features: ['image', 'labels'], num_rows: 5400 }) train: Dataset({ features: ['image', 'labels'], num_rows: 12600 }) }) >>> ds["train"].features {'image': Image(decode=True, id=None), 'labels': ClassLabel(num_classes=15, names=['calling', 'clapping', 'cycling', 'dancing', 'drinking', 'eating', 'fighting', 'hugging', 'laughing', 'listening_to_music', 'running', 'sitting', 'sleeping', 'texting', 'using_laptop'], id=None)} >>> ds["train"][0] {'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=240x160>, 'labels': 11} ```
4,590
[ [ -0.0235595703125, -0.024322509765625, -0.0146331787109375, 0.0110321044921875, -0.016876220703125, -0.0023097991943359375, -0.0030841827392578125, -0.040252685546875, 0.01389312744140625, 0.0218048095703125, -0.03619384765625, -0.047210693359375, -0.047393798828...
GATE-engine/aircraft_bbcrop
2023-06-04T22:22:54.000Z
[ "region:us" ]
GATE-engine
null
null
0
139
2023-06-04T22:22:30
--- dataset_info: features: - name: image dtype: image - name: label dtype: int64 splits: - name: train num_bytes: 253644684.5 num_examples: 3500 - name: validation num_bytes: 70984494.0 num_examples: 1000 - name: test num_bytes: 80183818.5 num_examples: 1100 download_size: 404802117 dataset_size: 404812997.0 --- # Dataset Card for "aircraft_bbcrop" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
534
[ [ -0.049560546875, -0.003978729248046875, 0.006191253662109375, 0.01806640625, -0.0288848876953125, 0.0141448974609375, 0.019561767578125, -0.01088714599609375, 0.046173095703125, 0.02484130859375, -0.0538330078125, -0.0472412109375, -0.038970947265625, -0.020...
result-kand2-sdxl-wuerst-karlo/53284ebf
2023-10-11T15:38:41.000Z
[ "region:us" ]
result-kand2-sdxl-wuerst-karlo
null
null
0
139
2023-10-11T15:38:41
--- dataset_info: features: - name: result dtype: string - name: id dtype: int64 splits: - name: train num_bytes: 191 num_examples: 10 download_size: 1401 dataset_size: 191 configs: - config_name: default data_files: - split: train path: data/train-* --- # Dataset Card for "53284ebf" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
455
[ [ -0.04638671875, -0.0018014907836914062, 0.0188446044921875, 0.0276031494140625, -0.01456451416015625, -0.0085601806640625, 0.032928466796875, -0.0155487060546875, 0.058624267578125, 0.029754638671875, -0.058502197265625, -0.045074462890625, -0.032440185546875, ...
nathanReitinger/mlcb
2023-10-25T02:55:46.000Z
[ "region:us" ]
nathanReitinger
null
null
0
139
2023-10-25T01:54:40
--- dataset_info: features: - name: label dtype: int64 - name: text dtype: string splits: - name: train num_bytes: 8132250961 num_examples: 76369 - name: test num_bytes: 897865830 num_examples: 8486 download_size: 2715307703 dataset_size: 9030116791 --- # Dataset Card for "mlcb" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
452
[ [ -0.04833984375, -0.0261993408203125, 0.0119781494140625, 0.0281524658203125, -0.0133209228515625, -0.002285003662109375, 0.019561767578125, -0.01294708251953125, 0.057037353515625, 0.042144775390625, -0.062744140625, -0.0614013671875, -0.034027099609375, -0....
cdsc
2023-01-25T14:27:43.000Z
[ "task_categories:other", "annotations_creators:expert-generated", "language_creators:other", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:pl", "license:cc-by-nc-sa-4.0", "sentences entailment and relatedness", "region:us" ]
null
Polish CDSCorpus consists of 10K Polish sentence pairs which are human-annotated for semantic relatedness and entailment. The dataset may be used for the evaluation of compositional distributional semantics models of Polish. The dataset was presented at ACL 2017. Please refer to the Wróblewska and Krasnowska-Kieraś (2017) for a detailed description of the resource.
@inproceedings{wroblewska2017polish, title={Polish evaluation dataset for compositional distributional semantics models}, author={Wr{\'o}blewska, Alina and Krasnowska-Kiera{\'s}, Katarzyna}, booktitle={Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)}, pages={784--792}, year={2017} }
0
138
2022-03-02T23:29:22
--- annotations_creators: - expert-generated language_creators: - other language: - pl license: - cc-by-nc-sa-4.0 multilinguality: - monolingual size_categories: - 10K<n<100K source_datasets: - original task_categories: - other task_ids: [] paperswithcode_id: polish-cdscorpus pretty_name: Polish CDSCorpus tags: - sentences entailment and relatedness dataset_info: - config_name: cdsc-e features: - name: pair_ID dtype: int32 - name: sentence_A dtype: string - name: sentence_B dtype: string - name: entailment_judgment dtype: class_label: names: '0': NEUTRAL '1': CONTRADICTION '2': ENTAILMENT splits: - name: train num_bytes: 1381902 num_examples: 8000 - name: test num_bytes: 179400 num_examples: 1000 - name: validation num_bytes: 174662 num_examples: 1000 download_size: 376079 dataset_size: 1735964 - config_name: cdsc-r features: - name: pair_ID dtype: int32 - name: sentence_A dtype: string - name: sentence_B dtype: string - name: relatedness_score dtype: float32 splits: - name: train num_bytes: 1349902 num_examples: 8000 - name: test num_bytes: 175400 num_examples: 1000 - name: validation num_bytes: 170662 num_examples: 1000 download_size: 381525 dataset_size: 1695964 --- # Dataset Card for [Dataset Name] ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** http://zil.ipipan.waw.pl/Scwad/CDSCorpus - **Repository:** - **Paper:** @inproceedings{wroblewska2017polish, title={Polish evaluation dataset for compositional distributional semantics models}, author={Wr{\'o}blewska, Alina and Krasnowska-Kiera{\'s}, Katarzyna}, booktitle={Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)}, pages={784--792}, year={2017} } - **Leaderboard:** https://klejbenchmark.com/leaderboard/ - **Point of Contact:** alina@ipipan.waw.pl ### Dataset Summary Polish CDSCorpus consists of 10K Polish sentence pairs which are human-annotated for semantic relatedness and entailment. The dataset may be used for the evaluation of compositional distributional semantics models of Polish. The dataset was presented at ACL 2017. Please refer to the Wróblewska and Krasnowska-Kieraś (2017) for a detailed description of the resource. ### Supported Tasks and Leaderboards [More Information Needed] ### Languages Polish ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields - pair_ID: id of sentences pairs - sentence_A: first sentence - sentence_B: second sentence for cdsc-e domain: - entailment_judgment: either 'NEUTRAL', 'CONTRADICTION' or 'ENTAILMENT' for cdsc-r domain: - relatedness_score: float representing a reletedness ### Data Splits Data is splitted in train/dev/test split. ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations Dataset provided for research purposes only. Please check dataset license for additional information. ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information CC BY-NC-SA 4.0 ### Citation Information [More Information Needed] ### Contributions Thanks to [@abecadel](https://github.com/abecadel) for adding this dataset.
4,869
[ [ -0.0303955078125, -0.06292724609375, 0.022857666015625, 0.03448486328125, -0.020477294921875, -0.007335662841796875, -0.038482666015625, -0.0244598388671875, 0.0287628173828125, 0.04669189453125, -0.07525634765625, -0.08624267578125, -0.0545654296875, 0.0174...
hansards
2023-04-05T10:07:00.000Z
[ "region:us" ]
null
This release contains 1.3 million pairs of aligned text chunks (sentences or smaller fragments) from the official records (Hansards) of the 36th Canadian Parliament. The complete Hansards of the debates in the House and Senate of the 36th Canadian Parliament, as far as available, were aligned. The corpus was then split into 5 sets of sentence pairs: training (80% of the sentence pairs), two sets of sentence pairs for testing (5% each), and two sets of sentence pairs for final evaluation (5% each). The current release consists of the training and testing sets. The evaluation sets are reserved for future MT evaluation purposes and currently not available. Caveats 1. This release contains only sentence pairs. Even though the order of the sentences is the same as in the original, there may be gaps resulting from many-to-one, many-to-many, or one-to-many alignments that were filtered out. Therefore, this release may not be suitable for discourse-related research. 2. Neither the sentence splitting nor the alignments are perfect. In particular, watch out for pairs that differ considerably in length. You may want to filter these out before you do any statistical training. The alignment of the Hansards was performed as part of the ReWrite project under funding from the DARPA TIDES program.
0
138
2022-03-02T23:29:22
--- paperswithcode_id: null pretty_name: hansards dataset_info: - config_name: senate features: - name: fr dtype: string - name: en dtype: string splits: - name: test num_bytes: 5711686 num_examples: 25553 - name: train num_bytes: 40324278 num_examples: 182135 download_size: 15247360 dataset_size: 46035964 - config_name: house features: - name: fr dtype: string - name: en dtype: string splits: - name: test num_bytes: 22906629 num_examples: 122290 - name: train num_bytes: 191459584 num_examples: 947969 download_size: 67584000 dataset_size: 214366213 --- # Dataset Card for "hansards" ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [https://www.isi.edu/natural-language/download/hansard/](https://www.isi.edu/natural-language/download/hansard/) - **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Size of downloaded dataset files:** 82.83 MB - **Size of the generated dataset:** 260.40 MB - **Total amount of disk used:** 343.23 MB ### Dataset Summary This release contains 1.3 million pairs of aligned text chunks (sentences or smaller fragments) from the official records (Hansards) of the 36th Canadian Parliament. The complete Hansards of the debates in the House and Senate of the 36th Canadian Parliament, as far as available, were aligned. The corpus was then split into 5 sets of sentence pairs: training (80% of the sentence pairs), two sets of sentence pairs for testing (5% each), and two sets of sentence pairs for final evaluation (5% each). The current release consists of the training and testing sets. The evaluation sets are reserved for future MT evaluation purposes and currently not available. Caveats 1. This release contains only sentence pairs. Even though the order of the sentences is the same as in the original, there may be gaps resulting from many-to-one, many-to-many, or one-to-many alignments that were filtered out. Therefore, this release may not be suitable for discourse-related research. 2. Neither the sentence splitting nor the alignments are perfect. In particular, watch out for pairs that differ considerably in length. You may want to filter these out before you do any statistical training. The alignment of the Hansards was performed as part of the ReWrite project under funding from the DARPA TIDES program. ### Supported Tasks and Leaderboards [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Languages [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Dataset Structure ### Data Instances #### house - **Size of downloaded dataset files:** 67.58 MB - **Size of the generated dataset:** 214.37 MB - **Total amount of disk used:** 281.95 MB An example of 'train' looks as follows. ``` { "en": "Mr. Walt Lastewka (Parliamentary Secretary to Minister of Industry, Lib.):", "fr": "M. Walt Lastewka (secrétaire parlementaire du ministre de l'Industrie, Lib.):" } ``` #### senate - **Size of downloaded dataset files:** 15.25 MB - **Size of the generated dataset:** 46.03 MB - **Total amount of disk used:** 61.28 MB An example of 'train' looks as follows. ``` { "en": "Mr. Walt Lastewka (Parliamentary Secretary to Minister of Industry, Lib.):", "fr": "M. Walt Lastewka (secrétaire parlementaire du ministre de l'Industrie, Lib.):" } ``` ### Data Fields The data fields are the same among all splits. #### house - `fr`: a `string` feature. - `en`: a `string` feature. #### senate - `fr`: a `string` feature. - `en`: a `string` feature. ### Data Splits | name |train | test | |------|-----:|-----:| |house |947969|122290| |senate|182135| 25553| ## Dataset Creation ### Curation Rationale [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Source Data #### Initial Data Collection and Normalization [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the source language producers? [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Annotations #### Annotation process [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the annotators? [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Personal and Sensitive Information [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Discussion of Biases [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Other Known Limitations [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Additional Information ### Dataset Curators [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Licensing Information [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Citation Information ``` ``` ### Contributions Thanks to [@patrickvonplaten](https://github.com/patrickvonplaten), [@thomwolf](https://github.com/thomwolf), [@albertvillanova](https://github.com/albertvillanova) for adding this dataset.
7,438
[ [ -0.04193115234375, -0.042999267578125, 0.01580810546875, 0.00637054443359375, -0.02264404296875, -0.005889892578125, -0.0299835205078125, -0.029266357421875, 0.047332763671875, 0.042999267578125, -0.05267333984375, -0.06622314453125, -0.044677734375, 0.00609...
DDSC/twitter-sent
2022-07-01T15:44:26.000Z
[ "task_categories:text-classification", "task_ids:sentiment-classification", "annotations_creators:expert-generated", "language_creators:found", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:original", "language:da", "license:cc-by-4.0", "region:us" ]
DDSC
null
null
3
138
2022-03-02T23:29:22
--- annotations_creators: - expert-generated language_creators: - found language: - da license: - cc-by-4.0 multilinguality: - monolingual pretty_name: TwitterSent size_categories: - 1K<n<10K source_datasets: - original task_categories: - text-classification task_ids: - sentiment-classification --- # Dataset Card for DKHate ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Direct Download**: https://danlp.alexandra.dk/304bd159d5de/datasets/twitter.sentiment.zip ### Dataset Summary This dataset consists of anonymised Danish Twitter data that has been annotated for sentiment analysis by the [Alexandra Institute](https://github.com/alexandrainst) - all credits go to them. ### Supported Tasks and Leaderboards This dataset is suitable for sentiment analysis. ### Languages This dataset is in Danish. ## Dataset Structure ### Data Instances Every entry in the dataset has a tweet and an associated label. ### Data Fields An entry in the dataset consists of the following fields: - `text` (`str`): The tweet content. - `label` (`str`): The label of the `text`. Can be "positiv", "neutral" or "negativ" for positive, neutral and negative sentiment, respectively. ### Data Splits A `train` and `test` split is available, being identical to the original splits. There are 1,007 tweets in the training split and 431 in the test split. ## Additional Information ### Dataset Curators The collection and annotation of the dataset is solely due to the [Alexandra Institute](https://github.com/alexandrainst). The tweets have been anonymised by [@saattrupdan](https://github.com/saattrupdan). ### Licensing Information The dataset is released under the CC BY 4.0 license. ### Citation Information ``` @misc{twittersent, title={TwitterSent}, author={Alexandra Institute}, year={2020}, note={\url{https://danlp-alexandra.readthedocs.io/en/latest/docs/datasets.html#twitsent}} } ``` ### Contributions Thanks to [@saattrupdan](https://github.com/saattrupdan) for adding this dataset to the Hugging Face Hub.
2,561
[ [ -0.0216522216796875, -0.0203094482421875, 0.013916015625, 0.0284881591796875, -0.042144775390625, 0.028778076171875, -0.01145172119140625, -0.019317626953125, 0.04241943359375, 0.0091094970703125, -0.06597900390625, -0.08343505859375, -0.055450439453125, 0.0...
embedding-data/simple-wiki
2022-08-02T03:34:17.000Z
[ "task_categories:sentence-similarity", "task_ids:semantic-similarity-classification", "language:en", "license:mit", "region:us" ]
embedding-data
null
null
5
138
2022-07-07T22:57:40
--- license: mit language: - en paperswithcode_id: embedding-data/simple-wiki pretty_name: simple-wiki task_categories: - sentence-similarity - paraphrase-mining task_ids: - semantic-similarity-classification --- # Dataset Card for "simple-wiki" ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [https://cs.pomona.edu/~dkauchak/simplification/](https://cs.pomona.edu/~dkauchak/simplification/) - **Repository:** [More Information Needed](https://cs.pomona.edu/~dkauchak/simplification/) - **Paper:** [https://aclanthology.org/P11-2117/](https://aclanthology.org/P11-2117/) - **Point of Contact:** [David Kauchak](dkauchak@cs.pomona.edu) ### Dataset Summary This dataset contains pairs of equivalent sentences obtained from Wikipedia. ### Supported Tasks - [Sentence Transformers](https://huggingface.co/sentence-transformers) training; useful for semantic search and sentence similarity. ### Languages - English. ## Dataset Structure Each example in the dataset contains pairs of equivalent sentences and is formatted as a dictionary with the key "set" and a list with the sentences as "value". ``` {"set": [sentence_1, sentence_2]} {"set": [sentence_1, sentence_2]} ... {"set": [sentence_1, sentence_2]} ``` This dataset is useful for training Sentence Transformers models. Refer to the following post on how to train models using similar sentences. ### Usage Example Install the 🤗 Datasets library with `pip install datasets` and load the dataset from the Hub with: ```python from datasets import load_dataset dataset = load_dataset("embedding-data/simple-wiki") ``` The dataset is loaded as a `DatasetDict` and has the format: ```python DatasetDict({ train: Dataset({ features: ['set'], num_rows: 102225 }) }) ``` Review an example `i` with: ```python dataset["train"][i]["set"] ``` ### Curation Rationale [More Information Needed](https://cs.pomona.edu/~dkauchak/simplification/) ### Source Data #### Initial Data Collection and Normalization [More Information Needed](https://cs.pomona.edu/~dkauchak/simplification/) #### Who are the source language producers? [More Information Needed](https://cs.pomona.edu/~dkauchak/simplification/) ### Annotations #### Annotation process [More Information Needed](https://cs.pomona.edu/~dkauchak/simplification/) #### Who are the annotators? [More Information Needed](https://cs.pomona.edu/~dkauchak/simplification/) ### Personal and Sensitive Information [More Information Needed](https://cs.pomona.edu/~dkauchak/simplification/) ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed](https://cs.pomona.edu/~dkauchak/simplification/) ### Discussion of Biases [More Information Needed](https://cs.pomona.edu/~dkauchak/simplification/) ### Other Known Limitations [More Information Needed](https://cs.pomona.edu/~dkauchak/simplification/) ## Additional Information ### Dataset Curators [More Information Needed](https://cs.pomona.edu/~dkauchak/simplification/) ### Licensing Information [More Information Needed](https://cs.pomona.edu/~dkauchak/simplification/) ### Contributions
4,161
[ [ -0.031341552734375, -0.052215576171875, 0.0210113525390625, 0.00811767578125, -0.0109100341796875, -0.0122222900390625, -0.0295867919921875, -0.00981903076171875, 0.03521728515625, 0.0290985107421875, -0.06488037109375, -0.051361083984375, -0.0426025390625, ...
bigbio/nlm_gene
2023-03-31T02:10:39.000Z
[ "multilinguality:monolingual", "language:en", "license:cc0-1.0", "region:us" ]
bigbio
NLM-Gene consists of 550 PubMed articles, from 156 journals, and contains more than 15 thousand unique gene names, corresponding to more than five thousand gene identifiers (NCBI Gene taxonomy). This corpus contains gene annotation data from 28 organisms. The annotated articles contain on average 29 gene names, and 10 gene identifiers per article. These characteristics demonstrate that this article set is an important benchmark dataset to test the accuracy of gene recognition algorithms both on multi-species and ambiguous data. The NLM-Gene corpus will be invaluable for advancing text-mining techniques for gene identification tasks in biomedical text.
@article{islamaj2021nlm, title = { NLM-Gene, a richly annotated gold standard dataset for gene entities that addresses ambiguity and multi-species gene recognition }, author = { Islamaj, Rezarta and Wei, Chih-Hsuan and Cissel, David and Miliaras, Nicholas and Printseva, Olga and Rodionov, Oleg and Sekiya, Keiko and Ward, Janice and Lu, Zhiyong }, year = 2021, journal = {Journal of Biomedical Informatics}, publisher = {Elsevier}, volume = 118, pages = 103779 }
1
138
2022-11-13T22:10:56
--- language: - en bigbio_language: - English license: cc0-1.0 multilinguality: monolingual bigbio_license_shortname: CC0_1p0 pretty_name: NLM-Gene homepage: https://zenodo.org/record/5089049 bigbio_pubmed: True bigbio_public: True bigbio_tasks: - NAMED_ENTITY_RECOGNITION - NAMED_ENTITY_DISAMBIGUATION --- # Dataset Card for NLM-Gene ## Dataset Description - **Homepage:** https://zenodo.org/record/5089049 - **Pubmed:** True - **Public:** True - **Tasks:** NER,NED NLM-Gene consists of 550 PubMed articles, from 156 journals, and contains more than 15 thousand unique gene names, corresponding to more than five thousand gene identifiers (NCBI Gene taxonomy). This corpus contains gene annotation data from 28 organisms. The annotated articles contain on average 29 gene names, and 10 gene identifiers per article. These characteristics demonstrate that this article set is an important benchmark dataset to test the accuracy of gene recognition algorithms both on multi-species and ambiguous data. The NLM-Gene corpus will be invaluable for advancing text-mining techniques for gene identification tasks in biomedical text. ## Citation Information ``` @article{islamaj2021nlm, title = { NLM-Gene, a richly annotated gold standard dataset for gene entities that addresses ambiguity and multi-species gene recognition }, author = { Islamaj, Rezarta and Wei, Chih-Hsuan and Cissel, David and Miliaras, Nicholas and Printseva, Olga and Rodionov, Oleg and Sekiya, Keiko and Ward, Janice and Lu, Zhiyong }, year = 2021, journal = {Journal of Biomedical Informatics}, publisher = {Elsevier}, volume = 118, pages = 103779 } ```
1,718
[ [ -0.04266357421875, -0.02685546875, 0.0131988525390625, 0.00795745849609375, -0.0281982421875, 0.005741119384765625, -0.02423095703125, -0.042205810546875, 0.0182647705078125, 0.040069580078125, -0.032440185546875, -0.06396484375, -0.0552978515625, 0.06750488...
pankajmathur/WizardLM_Orca
2023-06-26T14:39:38.000Z
[ "task_categories:text-generation", "size_categories:10K<n<100K", "language:en", "license:cc-by-nc-sa-4.0", "region:us" ]
pankajmathur
null
null
64
138
2023-06-24T18:34:28
--- license: cc-by-nc-sa-4.0 task_categories: - text-generation language: - en size_categories: - 10K<n<100K --- Explain tuned WizardLM dataset ~55K created using approaches from Orca Research Paper. We leverage all of the 15 system instructions provided in Orca Research Paper. to generate custom datasets, in contrast to vanilla instruction tuning approaches used by original datasets. This helps student models like orca_mini_13b to learn thought process from teacher model, which is ChatGPT (gpt-3.5-turbo-0301 version). Please see how the System prompt is added before each instruction.
596
[ [ -0.042449951171875, -0.059234619140625, -0.0061492919921875, -0.029266357421875, -0.0056304931640625, -0.00925445556640625, 0.0152587890625, -0.019317626953125, 0.00014531612396240234, 0.05694580078125, -0.07525634765625, -0.014007568359375, 0.0087738037109375, ...
lyogavin/longer_training_max100k_v3
2023-09-09T04:31:13.000Z
[ "region:us" ]
lyogavin
null
null
3
138
2023-09-09T04:04:40
--- configs: - config_name: default data_files: - split: train path: data/train-* dataset_info: features: - name: prompt dtype: string - name: completion dtype: string - name: source dtype: string - name: __index_level_0__ dtype: int64 splits: - name: train num_bytes: 3294652388.329473 num_examples: 18964 download_size: 476508613 dataset_size: 3294652388.329473 --- # Dataset Card for "longer_training_max100k_v3" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
598
[ [ -0.04095458984375, -0.001834869384765625, 0.0171356201171875, 0.0295867919921875, -0.007686614990234375, -0.004917144775390625, 0.0094757080078125, -0.017608642578125, 0.05145263671875, 0.035736083984375, -0.061309814453125, -0.040771484375, -0.043060302734375, ...
pvduy/70k_evol_code_prompts
2023-10-13T12:05:22.000Z
[ "region:us" ]
pvduy
null
null
0
138
2023-10-13T12:05:19
--- dataset_info: features: - name: prompt dtype: string splits: - name: train num_bytes: 31492387 num_examples: 70000 download_size: 16308713 dataset_size: 31492387 configs: - config_name: default data_files: - split: train path: data/train-* --- # Dataset Card for "70k_evol_code_prompts" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
455
[ [ -0.041778564453125, -0.019287109375, 0.01229095458984375, 0.01299285888671875, -0.01641845703125, -0.0020809173583984375, 0.0143890380859375, 0.0031681060791015625, 0.04541015625, 0.035552978515625, -0.0560302734375, -0.05731201171875, -0.0206756591796875, 0...
prachathai67k
2023-01-25T14:42:50.000Z
[ "task_categories:text-classification", "task_ids:topic-classification", "annotations_creators:found", "language_creators:found", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:en", "license:unknown", "region:us" ]
null
`prachathai-67k`: News Article Corpus and Multi-label Text Classificdation from Prachathai.com The prachathai-67k dataset was scraped from the news site Prachathai. We filtered out those articles with less than 500 characters of body text, mostly images and cartoons. It contains 67,889 articles wtih 12 curated tags from August 24, 2004 to November 15, 2018. The dataset was originally scraped by @lukkiddd and cleaned by @cstorm125. You can also see preliminary exploration at https://github.com/PyThaiNLP/prachathai-67k/blob/master/exploration.ipynb
@misc{prachathai67k, author = {cstorm125, lukkiddd }, title = {prachathai67k}, year = {2019}, publisher = {GitHub}, journal = {GitHub repository}, howpublished={\\url{https://github.com/PyThaiNLP/prachathai-67k}}, }
3
137
2022-03-02T23:29:22
--- annotations_creators: - found language_creators: - found language: - en license: - unknown multilinguality: - monolingual size_categories: - 10K<n<100K source_datasets: - original task_categories: - text-classification task_ids: - topic-classification paperswithcode_id: prachathai-67k pretty_name: prachathai67k dataset_info: features: - name: url dtype: string - name: date dtype: string - name: title dtype: string - name: body_text dtype: string - name: politics dtype: class_label: names: '0': neg '1': pos - name: human_rights dtype: class_label: names: '0': neg '1': pos - name: quality_of_life dtype: class_label: names: '0': neg '1': pos - name: international dtype: class_label: names: '0': neg '1': pos - name: social dtype: class_label: names: '0': neg '1': pos - name: environment dtype: class_label: names: '0': neg '1': pos - name: economics dtype: class_label: names: '0': neg '1': pos - name: culture dtype: class_label: names: '0': neg '1': pos - name: labor dtype: class_label: names: '0': neg '1': pos - name: national_security dtype: class_label: names: '0': neg '1': pos - name: ict dtype: class_label: names: '0': neg '1': pos - name: education dtype: class_label: names: '0': neg '1': pos config_name: prachathai67k splits: - name: train num_bytes: 865848436 num_examples: 54379 - name: validation num_bytes: 108641386 num_examples: 6721 - name: test num_bytes: 110034036 num_examples: 6789 download_size: 254240975 dataset_size: 1084523858 --- # Dataset Card for `prachathai67k` ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://github.com/PyThaiNLP/prachathai-67k - **Repository:** https://github.com/PyThaiNLP/prachathai-67k - **Paper:** - **Leaderboard:** - **Point of Contact:** https://github.com/PyThaiNLP/ ### Dataset Summary `prachathai-67k`: News Article Corpus and Multi-label Text Classificdation from Prachathai.com The `prachathai-67k` dataset was scraped from the news site [Prachathai](prachathai.com). We filtered out those articles with less than 500 characters of body text, mostly images and cartoons. It contains 67,889 articles wtih 12 curated tags from August 24, 2004 to November 15, 2018. The dataset was originally scraped by [@lukkiddd](https://github.com/lukkiddd) and cleaned by [@cstorm125](https://github.com/cstorm125). Download the dataset [here](https://www.dropbox.com/s/fsxepdka4l2pr45/prachathai-67k.zip?dl=1). You can also see preliminary exploration in [exploration.ipynb](https://github.com/PyThaiNLP/prachathai-67k/blob/master/exploration.ipynb). This dataset is a part of [pyThaiNLP](https://github.com/PyThaiNLP/) Thai text [classification-benchmarks](https://github.com/PyThaiNLP/classification-benchmarks). For the benchmark, we selected the following tags with substantial volume that resemble **classifying types of articles**: * `การเมือง` - politics * `สิทธิมนุษยชน` - human_rights * `คุณภาพชีวิต` - quality_of_life * `ต่างประเทศ` - international * `สังคม` - social * `สิ่งแวดล้อม` - environment * `เศรษฐกิจ` - economics * `วัฒนธรรม` - culture * `แรงงาน` - labor * `ความมั่นคง` - national_security * `ไอซีที` - ict * `การศึกษา` - education ### Supported Tasks and Leaderboards multi-label text classification, language modeling ### Languages Thai ## Dataset Structure ### Data Instances {'body_text': '17 พ.ย. 2558 Blognone [1] รายงานว่า กลุ่มแฮคเกอร์ Anonymous ประกาศสงครามไซเบอร์กับกลุ่มหัวรุนแรงหลังจากกลุ่ม IS ออกมาประกาศว่าเป็นผู้อยู่เบื้องหลังการโจมตีกรุงปารีสในคืนวันศุกร์ที่ผ่านมา\n\n\nภาพในคลิปใน YouTube โฆษกของกลุ่มแฮคเกอร์สวมหน้ากากที่เป็นสัญลักษณ์ของกลุ่มได้ออกมาอ่านแถลงเป็นภาษาฝรั่งเศส มีใจความว่า จากการโจมตีของกลุ่ม IS ในกรุงปารีส กลุ่ม Anonymous ทั่วโลกจะตามล่ากลุ่ม IS เหมือนที่เคยทำตอนที่มีการโจมตีสำนักพิมพ์ Charlie Hebdo และครั้งนี้จะเป็นปฏิบัติการโจมตีครั้งใหญ่ที่สุดของกลุ่ม Anonymous เลย นอกจากนี้กลุ่ม Anonymous ยังแสดงความเสียใจต่อครอบครัวผู้สูญเสียในเหตุการณ์ครั้งนี้\nกลุ่ม Anonymous เคยประกาศสงครามกับกลุ่ม IS หลังจากการโจมตีสำนักพิมพ์ Charlie Hebdo ที่ฝรั่งเศสเมื่อต้นปีที่ผ่านมา ซึ่งครั้งนั้นกลุ่ม Anonymous อ้างว่าได้ระงับบัญชีผู้ใช้งานที่เกี่ยวข้องกับ IS ไปหลายพันบัญชี (อ่านรายละเอียดเพิ่มเติม จากBlognone ที่\xa0\xa0กลุ่มแฮคเกอร์ Anonymous ประกาศสงครามไซเบอร์ขอกวาดล้างพวก ISIS [2])', 'culture': 0, 'date': '2015-11-17 18:14', 'economics': 0, 'education': 0, 'environment': 0, 'human_rights': 0, 'ict': 1, 'international': 1, 'labor': 0, 'national_security': 0, 'politics': 0, 'quality_of_life': 0, 'social': 0, 'title': 'แฮคเกอร์ Anonymous ลั่นทำสงครามไซเบอร์ครั้งใหญ่สุดกับกลุ่ม IS', 'url': 'https://prachatai.com/print/62490'} {'body_text': 'แถลงการณ์\n\n\xa0\n\nองค์การนักศึกษามหาวิทยาลัยธรรมศาสตร์\n\n\xa0\n\nมหาวิทยาลัยธรรมศาสตร์ก่อตั้งขึ้นภายใต้แนวคิดการให้การศึกษากับประชาชนเพื่อสนับสนุนการปกครองระบอบประชาธิปไตย อีกทั้งยังเป็นสถาบันหนึ่งที่อยู่เคียงข้างประชาชนมาโดยตลอด\n\n\xa0\n\nสถานการณ์สังคมไทยปัจจุบันได้เกิดความขัดแย้งทางการเมือง ทางแนวคิด จนลุกลามเป็นวิกฤตการณ์อันหาทางออกได้ยากยิ่ง องค์กรนักศึกษามหาวิทยาลัยธรรมศาสตร์ขอร้องเรียนและเสนอแนะต่อทุกฝ่าย โดยยึดหลักแนวทางตามรัฐธรรมนูญแห่งราชอาณาจักรไทย พ.ศ. ๒๕๕๐ อันเป็นกฎหมายสูงสุดในการจัดการปกครองรัฐ ที่มีผลบังคับใช้อยู่ในปัจจุบันซึ่งผ่านการประชามติจากปวงชนชาวไทยเมื่อวันที่ ๑๙ สิงหาคม พ.ศ. ๒๕๕๐ แล้วดังต่อนี้\n\n\xa0\n\n๑.การชุมชมโดยสงบและปราศจากอาวุธย่อมได้รับการคุ้มครองตามรัฐธรรมนูญ แต่หากการชุมนุมและเคลื่อนไหวของกลุ่มใดๆ มีการละเมิดสิทธิและเสรีภาพของผู้อื่นหรือก่อให้เกิดความเสียหายต่อชีวิตและทรัพย์สินของบุคคลและส่วนรวมนั้น ไม่สามารถกระทำได้ การใช้ความรุนแรง การกระทำอุกอาจต่างๆ ทั้งต่อบุคคลและทรัพย์สิน การยั่วยุ ปลุกระดมเพื่อหวังผลในการปะทะต่อสู้ จึงควรได้รับการกล่าวโทษ\n\n\xa0\n\nดังนั้นทั้งกลุ่มพันธมิตรประชาชนเพื่อประชาธิปไตย (พธม.) และกลุ่มแนวร่วมประชาธิปไตยไม่เอาเผด็จการแห่งชาติ (นปช.) จึงควรยอมรับกระบวนการตามกฎหมาย และหากถูกกล่าวหาไม่ว่ากรณีใดๆ ก็ควรพิสูจน์ความบริสุทธิ์โดยใช้กระบวนการยุติธรรม และหากจะยังชุมนุมต่อไปก็ยังคงทำได้ภายใต้บทบัญญัติแห่งกฎหมาย\n\n\xa0\n\nองค์กรนักศึกษามหาวิทยาลัยธรรมศาสตร์ จึงร้องขอให้หน่วยงานต่างๆ ที่เกี่ยวข้องดำเนินการตามกระบวนการทางกฎหมายกับการกระทำที่ผิดบทบัญญัติแห่งกฎหมายที่ทุกฝ่ายได้กระทำไป\n\n\xa0\n\n๒.นายสมัคร สุนทรเวช นายกรัฐมนตรี ไม่มีความเหมาะสมในการบริหารราชการแผ่นดินขาดหลักธรรมาภิบาล แต่ทั้งนี้นายสมัคร สุนทรเวช ยังคงยืนยันและกล่าวอ้างความชอบธรรมตามระบอบประชาธิปไตยภายใต้รัฐธรรมนูญ โดยไม่คำนึงถึงกระแสเรียกร้องใดๆ อันส่งผลให้ความขัดแย้งทางสังคมยิ่งบานปลายจนกลายเป็นวิกฤตการณ์เช่นปัจจุบัน ซึ่งก่อให้เกิดความเสียหายต่อประเทศแนวโน้มจะคลี่คลาย\n\n\xa0\n\nองค์การนักศึกษามหาวิทยาลัยธรรมศาสตร์ จึงเห็นว่า ควรใช้สิทธิตามรัฐธรรมนูญแห่งราชอาณาจักรไทย พุทธศักราช ๒๕๕๐ มาตรา ๑๖๔ โดยการเข้าชื่อเพื่อร้องต่อประธานวุฒิสภาเพื่อให้มีมติตามมาตรา ๒๗๔ ให้ถอดถอนนายสมัคร สุนทรเวช ออกจากตำแหน่งนายกรัฐมนตรีตามมาตรา ๒๗๐ ณ ลานโพ มหาวิทยาลัยธรรมศาสตร์ ท่าพระจันทร์ อาคารเรียนรวมสังคมศาสตร์ อาคารปิยชาติ และตึกกิจกรรมนักศึกษา มหาวิทยาลัยธรรมศาสตร์ ศูนย์รังสิต\n\n\xa0\n\n\xa0\n\nด้วยความสมานฉันท์\n\nองค์การนักศึกษามหาวิทยาลัยธรรมศาสตร์', 'culture': 0, 'date': '2008-09-06 03:36', 'economics': 0, 'education': 0, 'environment': 0, 'human_rights': 0, 'ict': 0, 'international': 0, 'labor': 0, 'national_security': 0, 'politics': 1, 'quality_of_life': 0, 'social': 0, 'title': 'แถลงการณ์ อมธ.แนะใช้สิทธิ ตาม รธน.เข้าชื่อร้องต่อประธานวุฒิสภาถอดถอน "สมัคร" จากตำแหน่งนายกฯ', 'url': 'https://prachatai.com/print/18038'} ### Data Fields - `url`: url of the article - `date`: date the article was published - `title`: title of the article - `body_text`: body text of the article - `politics`: 1 if sample has this tag else 0 - `human_rights`: 1 if sample has this tag else 0 - `quality_of_life`: 1 if sample has this tag else 0 - `international`: 1 if sample has this tag else 0 - `social`: 1 if sample has this tag else 0 - `environment`: 1 if sample has this tag else 0 - `economics`: 1 if sample has this tag else 0 - `culture`: 1 if sample has this tag else 0 - `labor`: 1 if sample has this tag else 0 - `national_security`: 1 if sample has this tag else 0 - `ict`: 1 if sample has this tag else 0 - `education`: 1 if sample has this tag else 0 ### Data Splits | | train | valid | test | |-------------------|-------|--------|------| | # articles | 54379 | 6721 | 6789 | | politics | 31401 | 3852 | 3842 | | human_rights | 12061 | 1458 | 1511 | | quality_of_life | 9037 | 1144 | 1127 | | international | 6432 | 828 | 834 | | social | 6321 | 782 | 789 | | environment | 6157 | 764 | 772 | | economics | 3994 | 487 | 519 | | culture | 3279 | 388 | 398 | | labor | 2905 | 375 | 350 | | national_security | 2865 | 339 | 338 | | ict | 2326 | 285 | 292 | | education | 2093 | 248 | 255 | ## Dataset Creation ### Curation Rationale The data was scraped from the news site [Prachathai](prachathai.com) from August 24, 2004 to November 15, 2018. The initial intention was to use the dataset as a benchmark for Thai text classification. Due to the size of the dataset, it can also be used for language modeling. ### Source Data #### Initial Data Collection and Normalization 67,889 articles wtih 51,797 tags were scraped from the news site [Prachathai](prachathai.com) from August 24, 2004 to November 15, 2018. We filtered out those articles with less than 500 characters of body text, mostly images and cartoons. #### Who are the source language producers? Prachathai.com ### Annotations #### Annotation process Tags are annotated for the news website Prachathai.com #### Who are the annotators? We assume that the reporters who wrote the articles or other Prachathai staff gave each article its tags. ### Personal and Sensitive Information We do not expect any personal and sensitive information to be present since all data are public news articles. ## Considerations for Using the Data ### Social Impact of Dataset - classification benchmark for multi-label Thai text classification ### Discussion of Biases Prachathai.com is a left-leaning, human-right-focused news site, and thus unusual news labels such as human rights and quality of life. The news articles are expected to be left-leaning in contents. ### Other Known Limitations Dataset provided for research purposes only. Please check dataset license for additional information. ## Additional Information ### Dataset Curators PyThaiNLP ### Licensing Information CC-BY-NC ### Citation Information @misc{prachathai67k, author = {cstorm125, lukkiddd }, title = {prachathai67k}, year = {2019}, publisher = {GitHub}, journal = {GitHub repository}, howpublished={\\url{https://github.com/PyThaiNLP/prachathai-67k}}, } ### Contributions Thanks to [@cstorm125](https://github.com/cstorm125) for adding this dataset.
12,297
[ [ -0.034393310546875, -0.035797119140625, 0.01413726806640625, 0.01544952392578125, -0.0372314453125, 0.007549285888671875, -0.01336669921875, -0.0228118896484375, 0.046630859375, 0.0135345458984375, -0.0312042236328125, -0.06427001953125, -0.037811279296875, ...
cestwc/adapted-wikismall
2021-12-15T17:35:28.000Z
[ "region:us" ]
cestwc
null
null
0
137
2022-03-02T23:29:22
Entry not found
15
[ [ -0.021392822265625, -0.01494598388671875, 0.05718994140625, 0.028839111328125, -0.0350341796875, 0.046539306640625, 0.052490234375, 0.00507354736328125, 0.051361083984375, 0.01702880859375, -0.052093505859375, -0.01494598388671875, -0.06036376953125, 0.03790...
bigbio/pico_extraction
2022-12-22T15:46:16.000Z
[ "multilinguality:monolingual", "language:en", "license:unknown", "region:us" ]
bigbio
This dataset contains annotations for Participants, Interventions, and Outcomes (referred to as PICO task). For 423 sentences, annotations collected by 3 medical experts are available. To get the final annotations, we perform the majority voting.
@inproceedings{zlabinger-etal-2020-effective, title = "Effective Crowd-Annotation of Participants, Interventions, and Outcomes in the Text of Clinical Trial Reports", author = {Zlabinger, Markus and Sabou, Marta and Hofst{\"a}tter, Sebastian and Hanbury, Allan}, booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2020", month = nov, year = "2020", address = "Online", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2020.findings-emnlp.274", doi = "10.18653/v1/2020.findings-emnlp.274", pages = "3064--3074", }
1
137
2022-11-13T22:11:27
--- language: - en bigbio_language: - English license: unknown multilinguality: monolingual bigbio_license_shortname: UNKNOWN pretty_name: PICO Annotation homepage: https://github.com/Markus-Zlabinger/pico-annotation bigbio_pubmed: True bigbio_public: True bigbio_tasks: - NAMED_ENTITY_RECOGNITION --- # Dataset Card for PICO Annotation ## Dataset Description - **Homepage:** https://github.com/Markus-Zlabinger/pico-annotation - **Pubmed:** True - **Public:** True - **Tasks:** NER This dataset contains annotations for Participants, Interventions, and Outcomes (referred to as PICO task). For 423 sentences, annotations collected by 3 medical experts are available. To get the final annotations, we perform the majority voting. ## Citation Information ``` @inproceedings{zlabinger-etal-2020-effective, title = "Effective Crowd-Annotation of Participants, Interventions, and Outcomes in the Text of Clinical Trial Reports", author = {Zlabinger, Markus and Sabou, Marta and Hofst{"a}tter, Sebastian and Hanbury, Allan}, booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2020", month = nov, year = "2020", address = "Online", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2020.findings-emnlp.274", doi = "10.18653/v1/2020.findings-emnlp.274", pages = "3064--3074", } ```
1,417
[ [ -0.021575927734375, -0.035247802734375, 0.0222625732421875, 0.03326416015625, -0.0322265625, -0.0081024169921875, -0.0247802734375, -0.03875732421875, 0.042572021484375, 0.0219879150390625, -0.0245208740234375, -0.05438232421875, -0.051239013671875, 0.026641...
johnrobinsn/alpaca-cleaned
2023-03-30T08:42:40.000Z
[ "region:us" ]
johnrobinsn
null
null
0
137
2023-03-30T08:41:04
Entry not found
15
[ [ -0.021392822265625, -0.01494598388671875, 0.05718994140625, 0.028839111328125, -0.0350341796875, 0.046539306640625, 0.052490234375, 0.00507354736328125, 0.051361083984375, 0.01702880859375, -0.052093505859375, -0.01494598388671875, -0.06036376953125, 0.03790...
distil-whisper/peoples_speech-clean
2023-09-25T10:30:13.000Z
[ "task_categories:automatic-speech-recognition", "language:en", "license:cc-by-4.0", "region:us" ]
distil-whisper
The People's Speech is a free-to-download 30,000-hour and growing supervised conversational English speech recognition dataset licensed for academic and commercial usage under CC-BY-SA (with a CC-BY subset).
@article{DBLP:journals/corr/abs-2111-09344, author = {Daniel Galvez and Greg Diamos and Juan Ciro and Juan Felipe Ceron and Keith Achorn and Anjali Gopi and David Kanter and Maximilian Lam and Mark Mazumder and Vijay Janapa Reddi}, title = {The People's Speech: A Large-Scale Diverse English Speech Recognition Dataset for Commercial Usage}, journal = {CoRR}, volume = {abs/2111.09344}, year = {2021}, url = {https://arxiv.org/abs/2111.09344}, eprinttype = {arXiv}, eprint = {2111.09344}, timestamp = {Mon, 22 Nov 2021 16:44:07 +0100}, biburl = {https://dblp.org/rec/journals/corr/abs-2111-09344.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} }
0
137
2023-04-07T17:10:53
--- license: cc-by-4.0 task_categories: - automatic-speech-recognition language: - en -pretty_name: People's Speech Clean --- # Distil Whisper: People's Speech Clean This is a variant of the [People's Speech Clean](https://huggingface.co/datasets/MLCommons/peoples_speech) dataset, augmented to return the pseudo-labelled Whisper Transcriptions alongside the original dataset elements. The pseudo-labelled transcriptions were generated by labelling the input audio data with the Whisper [large-v2](https://huggingface.co/openai/whisper-large-v2) model with *greedy* sampling. For information on how the original dataset was curated, refer to the original [dataset card](https://huggingface.co/datasets/MLCommons/peoples_speech). ## Standalone Usage First, install the latest version of the 🤗 Datasets package: ```bash pip install --upgrade pip pip install --upgrade datasets[audio] ``` The dataset can be downloaded and pre-processed on disk using the [`load_dataset`](https://huggingface.co/docs/datasets/v2.14.5/en/package_reference/loading_methods#datasets.load_dataset) function: ```python from datasets import load_dataset dataset = load_dataset("distil-whisper/peoples_speech-clean", "clean") # take the first sample of the validation set sample = dataset["validation"][0] ``` It can also be streamed directly from the Hub using Datasets' [streaming mode](https://huggingface.co/blog/audio-datasets#streaming-mode-the-silver-bullet). Loading a dataset in streaming mode loads individual samples of the dataset at a time, rather than downloading the entire dataset to disk: ```python from datasets import load_dataset dataset = load_dataset("distil-whisper/peoples_speech-clean", "clean", streaming=True) # take the first sample of the validation set sample = next(iter(dataset["validation"])) ``` ## Distil Whisper Usage To use this dataset to reproduce a Distil Whisper training run, refer to the instructions on the [Distil Whisper repository](https://github.com/huggingface/distil-whisper#training). ## License This dataset is licensed under cc-by-4.0.
2,085
[ [ -0.010894775390625, -0.03997802734375, 0.004993438720703125, 0.0208740234375, -0.0220794677734375, 0.007030487060546875, -0.0176544189453125, -0.0139923095703125, 0.027862548828125, 0.042022705078125, -0.050537109375, -0.036651611328125, -0.0380859375, 0.004...
Jing24/val_oneanswer
2023-08-19T00:35:46.000Z
[ "region:us" ]
Jing24
null
null
0
137
2023-08-19T00:35:44
--- dataset_info: features: - name: id dtype: string - name: title dtype: string - name: context dtype: string - name: question dtype: string - name: answers struct: - name: answer_start sequence: int32 - name: text sequence: string splits: - name: train num_bytes: 9832949 num_examples: 10570 download_size: 1675804 dataset_size: 9832949 configs: - config_name: default data_files: - split: train path: data/train-* --- # Dataset Card for "val_oneanswer" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
664
[ [ -0.05194091796875, -0.033599853515625, 0.011199951171875, 0.0126953125, -0.01490020751953125, -0.0159149169921875, 0.037078857421875, 0.004985809326171875, 0.06292724609375, 0.06610107421875, -0.06976318359375, -0.0513916015625, -0.036651611328125, -0.015640...
irc_disentangle
2022-11-18T20:10:09.000Z
[ "task_categories:token-classification", "annotations_creators:expert-generated", "language_creators:found", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:en", "license:cc-by-4.0", "conversation-disentanglement", "arxiv:1810.11118", "region:us"...
null
Disentangling conversations mixed together in a single stream of messages is a difficult task, made harder by the lack of large manually annotated datasets. This new dataset of 77,563 messages manually annotated with reply-structure graphs that both disentangle conversations and define internal conversation structure. The dataset is 16 times larger than all previously released datasets combined, the first to include adjudication of annotation disagreements, and the first to include context.
@inproceedings{kummerfeld-etal-2019-large, title = "A Large-Scale Corpus for Conversation Disentanglement", author = "Kummerfeld, Jonathan K. and Gouravajhala, Sai R. and Peper, Joseph J. and Athreya, Vignesh and Gunasekara, Chulaka and Ganhotra, Jatin and Patel, Siva Sankalp and Polymenakos, Lazaros C and Lasecki, Walter", booktitle = "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", month = jul, year = "2019", address = "Florence, Italy", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/P19-1374", doi = "10.18653/v1/P19-1374", pages = "3846--3856", arxiv = "https://arxiv.org/abs/1810.11118", software = "https://jkk.name/irc-disentanglement", data = "https://jkk.name/irc-disentanglement", abstract = "Disentangling conversations mixed together in a single stream of messages is a difficult task, made harder by the lack of large manually annotated datasets. We created a new dataset of 77,563 messages manually annotated with reply-structure graphs that both disentangle conversations and define internal conversation structure. Our data is 16 times larger than all previously released datasets combined, the first to include adjudication of annotation disagreements, and the first to include context. We use our data to re-examine prior work, in particular, finding that 89% of conversations in a widely used dialogue corpus are either missing messages or contain extra messages. Our manually-annotated data presents an opportunity to develop robust data-driven methods for conversation disentanglement, which will help advance dialogue research.", }
4
136
2022-03-02T23:29:22
--- annotations_creators: - expert-generated language_creators: - found language: - en license: - cc-by-4.0 multilinguality: - monolingual size_categories: - 10K<n<100K source_datasets: - original task_categories: - token-classification task_ids: [] paperswithcode_id: irc-disentanglement pretty_name: IRC Disentanglement tags: - conversation-disentanglement dataset_info: - config_name: ubuntu features: - name: id dtype: int32 - name: raw dtype: string - name: ascii dtype: string - name: tokenized dtype: string - name: date dtype: string - name: connections sequence: int32 splits: - name: train num_bytes: 56012854 num_examples: 220616 - name: validation num_bytes: 3081479 num_examples: 12510 - name: test num_bytes: 3919900 num_examples: 15010 download_size: 118470210 dataset_size: 63014233 - config_name: channel_two features: - name: id dtype: int32 - name: raw dtype: string - name: ascii dtype: string - name: tokenized dtype: string - name: connections sequence: int32 splits: - name: dev num_bytes: 197505 num_examples: 1001 - name: pilot num_bytes: 92663 num_examples: 501 - name: test num_bytes: 186823 num_examples: 1001 - name: pilot_dev num_bytes: 290175 num_examples: 1501 - name: all_ num_bytes: 496524 num_examples: 2602 download_size: 118470210 dataset_size: 1263690 --- # Dataset Card for IRC Disentanglement ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) - [Acknowledgments](#acknowledgments) ## Dataset Description - **Homepage:** https://jkk.name/irc-disentanglement/ - **Repository:** https://github.com/jkkummerfeld/irc-disentanglement/tree/master/data - **Paper:** https://aclanthology.org/P19-1374/ - **Leaderboard:** NA - **Point of Contact:** jkummerf@umich.edu ### Dataset Summary Disentangling conversations mixed together in a single stream of messages is a difficult task, made harder by the lack of large manually annotated datasets. This new dataset of 77,563 messages manually annotated with reply-structure graphs that both disentangle conversations and define internal conversation structure. The dataset is 16 times larger than all previously released datasets combined, the first to include adjudication of annotation disagreements, and the first to include context. Note, the Github repository for the dataset also contains several useful tools for: - Conversion (e.g. extracting conversations from graphs) - Evaluation - Preprocessing - Word embeddings trained on the full Ubuntu logs in 2018 ### Supported Tasks and Leaderboards Conversational Disentanglement ### Languages English (en) ## Dataset Structure ### Data Instances For Ubuntu: data["train"][1050] ``` { 'ascii': "[03:57] <Xophe> (also, I'm guessing that this isn't a good place to report minor but annoying bugs... what is?)", 'connections': [1048, 1054, 1055, 1072, 1073], 'date': '2004-12-25', 'id': 1050, 'raw': "[03:57] <Xophe> (also, I'm guessing that this isn't a good place to report minor but annoying bugs... what is?)", 'tokenized': "<s> ( also , i 'm guessing that this is n't a good place to report minor but annoying bugs ... what is ?) </s>" } ``` For Channel_two: data["train"][50] ``` { 'ascii': "[01:04] <Felicia> Chanel: i don't know off hand sorry", 'connections': [49, 53], 'id': 50, 'raw': "[01:04] <Felicia> Chanel: i don't know off hand sorry", 'tokenized': "<s> <user> : i do n't know off hand sorry </s>" } ``` ### Data Fields 'id' : The id of the message, this is the value that would be in the 'connections' of associated messages. 'raw' : The original message from the IRC log, as downloaded. 'ascii' : The raw message converted to ascii (unconvertable characters are replaced with a special word). 'tokenized' : The same message with automatic tokenisation and replacement of rare words with placeholder symbols. 'connections' : The indices of linked messages. (only ubuntu) 'date' : The date the messages are from. The labelling for each date only start after the first 1000 messages of that date. ### Data Splits The dataset has 4 parts: | Part | Number of Annotated Messages | | ------------- | ------------------------------------------- | | Train | 67,463 | | Dev | 2,500 | | Test | 5,000 | | Channel 2 | 2,600 | ## Dataset Creation ### Curation Rationale IRC is a synchronous chat setting with a long history of use. Several channels log all messages and make them publicly available. The Ubuntu channel is particularly heavily used and has been the subject of several academic studies. Data was selected from the channel in order to capture the diversity of situations in the channel (e.g. when there are many users or very few users). For full details, see the [annotation information page](https://github.com/jkkummerfeld/irc-disentanglement/blob/master/data/READ.history.md). ### Source Data #### Initial Data Collection and Normalization Data was collected from the Ubuntu IRC channel logs, which are publicly available at [https://irclogs.ubuntu.com/](https://irclogs.ubuntu.com/). The raw files are included, as well as two other versions: - ASCII, converted using the script [make_txt.py](https://github.com/jkkummerfeld/irc-disentanglement/blob/master/tools/preprocessing/make-txt.py) - Tok, tokenised text with rare words replaced by UNK using the script [dstc8-tokenise.py](https://github.com/jkkummerfeld/irc-disentanglement/blob/master/tools/preprocessing/dstc8-tokenise.py) The raw channel two data is from prior work [(Elsner and Charniak, 2008)](https://www.aclweb.org/anthology/P08-1095.pdf)]. #### Who are the source language producers? The text is from a large group of internet users asking questions and providing answers related to Ubuntu. ### Annotations #### Annotation process The data is expert annotated with: - Training, one annotation per line in general, a small portion is double-annotated and adjudicated - Dev, Channel 2, double annotated and adjudicated - Test, triple annotated and adjudicated | Part | Annotators | Adjudication? | | ------------- | --------------- | ------------------------------------- | | Train | 1 or 2 per file | For files with 2 annotators (only 10) | | Dev | 2 | Yes | | Test | 3 | Yes | | Channel 2 | 2 | Yes | #### Who are the annotators? Students and a postdoc at the University of Michigan. Everyone involved went through a training process with feedback to learn the annotation guidelines. ### Personal and Sensitive Information No content is removed or obfuscated. There is probably personal information in the dataset from users. ## Considerations for Using the Data ### Social Impact of Dataset The raw data is already available online and the annotations do not significantly provide additional information that could have a direct social impact. ### Discussion of Biases The data is mainly from a single technical domain (Ubuntu tech support) that probably has a demographic skew of some sort. Given that users are only identified by their self-selected usernames, it is difficult to know more about the authors. ### Other Known Limitations Being focused on a single language and a single channel means that the data is likely capturing a particular set of conventions in communication. Those conventions may not apply to other channels, or beyond IRC. ## Additional Information ### Dataset Curators Jonathan K. Kummerfeld ### Licensing Information Creative Commons Attribution 4.0 ### Citation Information ``` @inproceedings{kummerfeld-etal-2019-large, title = "A Large-Scale Corpus for Conversation Disentanglement", author = "Kummerfeld, Jonathan K. and Gouravajhala, Sai R. and Peper, Joseph J. and Athreya, Vignesh and Gunasekara, Chulaka and Ganhotra, Jatin and Patel, Siva Sankalp and Polymenakos, Lazaros C and Lasecki, Walter", booktitle = "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", month = jul, year = "2019", address = "Florence, Italy", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/P19-1374", doi = "10.18653/v1/P19-1374", pages = "3846--3856", arxiv = "https://arxiv.org/abs/1810.11118", software = "https://jkk.name/irc-disentanglement", data = "https://jkk.name/irc-disentanglement", abstract = "Disentangling conversations mixed together in a single stream of messages is a difficult task, made harder by the lack of large manually annotated datasets. We created a new dataset of 77,563 messages manually annotated with reply-structure graphs that both disentangle conversations and define internal conversation structure. Our data is 16 times larger than all previously released datasets combined, the first to include adjudication of annotation disagreements, and the first to include context. We use our data to re-examine prior work, in particular, finding that 89{\%} of conversations in a widely used dialogue corpus are either missing messages or contain extra messages. Our manually-annotated data presents an opportunity to develop robust data-driven methods for conversation disentanglement, which will help advance dialogue research.", } ``` ### Contributions Thanks to [@dhruvjoshi1998](https://github.com/dhruvjoshi1998) for adding this dataset. Thanks to [@jkkummerfeld](https://github.com/jkkummerfeld) for improvements to the documentation. ### Acknowledgments This material is based in part upon work supported by IBM under contract 4915012629. Any opinions, findings, conclusions or recommendations expressed are those of the authors and do not necessarily reflect the views of IBM.
11,236
[ [ -0.048431396484375, -0.0526123046875, 0.007389068603515625, 0.0206146240234375, -0.033721923828125, 0.0204925537109375, -0.0194549560546875, -0.038421630859375, 0.05462646484375, 0.01548004150390625, -0.04388427734375, -0.040985107421875, -0.0599365234375, 0...
mac_morpho
2023-01-25T14:34:31.000Z
[ "task_categories:token-classification", "task_ids:part-of-speech", "annotations_creators:expert-generated", "language_creators:found", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:pt", "license:cc-by-4.0", "region:us" ]
null
Mac-Morpho is a corpus of Brazilian Portuguese texts annotated with part-of-speech tags. Its first version was released in 2003 [1], and since then, two revisions have been made in order to improve the quality of the resource [2, 3]. The corpus is available for download split into train, development and test sections. These are 76%, 4% and 20% of the corpus total, respectively (the reason for the unusual numbers is that the corpus was first split into 80%/20% train/test, and then 5% of the train section was set aside for development). This split was used in [3], and new POS tagging research with Mac-Morpho is encouraged to follow it in order to make consistent comparisons possible. [1] Aluísio, S., Pelizzoni, J., Marchi, A.R., de Oliveira, L., Manenti, R., Marquiafável, V. 2003. An account of the challenge of tagging a reference corpus for brazilian portuguese. In: Proceedings of the 6th International Conference on Computational Processing of the Portuguese Language. PROPOR 2003 [2] Fonseca, E.R., Rosa, J.L.G. 2013. Mac-morpho revisited: Towards robust part-of-speech. In: Proceedings of the 9th Brazilian Symposium in Information and Human Language Technology – STIL [3] Fonseca, E.R., Aluísio, Sandra Maria, Rosa, J.L.G. 2015. Evaluating word embeddings and a revised corpus for part-of-speech tagging in Portuguese. Journal of the Brazilian Computer Society.
@article{fonseca2015evaluating, title={Evaluating word embeddings and a revised corpus for part-of-speech tagging in Portuguese}, author={Fonseca, Erick R and Rosa, Joao Luis G and Aluisio, Sandra Maria}, journal={Journal of the Brazilian Computer Society}, volume={21}, number={1}, pages={2}, year={2015}, publisher={Springer} }
4
136
2022-03-02T23:29:22
--- annotations_creators: - expert-generated language_creators: - found language: - pt license: - cc-by-4.0 multilinguality: - monolingual size_categories: - 10K<n<100K source_datasets: - original task_categories: - token-classification task_ids: - part-of-speech pretty_name: Mac-Morpho dataset_info: features: - name: id dtype: string - name: tokens sequence: string - name: pos_tags sequence: class_label: names: '0': PREP+PROADJ '1': IN '2': PREP+PRO-KS '3': NPROP '4': PREP+PROSUB '5': KC '6': PROPESS '7': NUM '8': PROADJ '9': PREP+ART '10': KS '11': PRO-KS '12': ADJ '13': ADV-KS '14': N '15': PREP '16': PROSUB '17': PREP+PROPESS '18': PDEN '19': V '20': PREP+ADV '21': PCP '22': CUR '23': ADV '24': PU '25': ART splits: - name: train num_bytes: 12635011 num_examples: 37948 - name: test num_bytes: 3095292 num_examples: 9987 - name: validation num_bytes: 671356 num_examples: 1997 download_size: 2463485 dataset_size: 16401659 --- # Dataset Card for Mac-Morpho ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [Mac-Morpho homepage](http://nilc.icmc.usp.br/macmorpho/) - **Repository:** [Mac-Morpho repository](http://nilc.icmc.usp.br/macmorpho/) - **Paper:** [Evaluating word embeddings and a revised corpus for part-of-speech tagging in Portuguese](https://journal-bcs.springeropen.com/articles/10.1186/s13173-014-0020-x) - **Point of Contact:** [Erick R Fonseca](mailto:erickrfonseca@gmail.com) ### Dataset Summary Mac-Morpho is a corpus of Brazilian Portuguese texts annotated with part-of-speech tags. Its first version was released in 2003 [1], and since then, two revisions have been made in order to improve the quality of the resource [2, 3]. The corpus is available for download split into train, development and test sections. These are 76%, 4% and 20% of the corpus total, respectively (the reason for the unusual numbers is that the corpus was first split into 80%/20% train/test, and then 5% of the train section was set aside for development). This split was used in [3], and new POS tagging research with Mac-Morpho is encouraged to follow it in order to make consistent comparisons possible. [1] Aluísio, S., Pelizzoni, J., Marchi, A.R., de Oliveira, L., Manenti, R., Marquiafável, V. 2003. An account of the challenge of tagging a reference corpus for brazilian portuguese. In: Proceedings of the 6th International Conference on Computational Processing of the Portuguese Language. PROPOR 2003 [2] Fonseca, E.R., Rosa, J.L.G. 2013. Mac-morpho revisited: Towards robust part-of-speech. In: Proceedings of the 9th Brazilian Symposium in Information and Human Language Technology – STIL [3] Fonseca, E.R., Aluísio, Sandra Maria, Rosa, J.L.G. 2015. Evaluating word embeddings and a revised corpus for part-of-speech tagging in Portuguese. Journal of the Brazilian Computer Society. ### Supported Tasks and Leaderboards [More Information Needed] ### Languages Portuguese ## Dataset Structure ### Data Instances An example from the Mac-Morpho dataset looks as follows: ``` { "id": "0", "pos_tags": [14, 19, 14, 15, 22, 7, 14, 9, 14, 9, 3, 15, 3, 3, 24], "tokens": ["Jersei", "atinge", "média", "de", "Cr$", "1,4", "milhão", "na", "venda", "da", "Pinhal", "em", "São", "Paulo", "."] } ``` ### Data Fields - `id`: id of the sample - `tokens`: the tokens of the example text - `pos`: the PoS tags of each token The PoS tags correspond to this list: ``` "PREP+PROADJ", "IN", "PREP+PRO-KS", "NPROP", "PREP+PROSUB", "KC", "PROPESS", "NUM", "PROADJ", "PREP+ART", "KS", "PRO-KS", "ADJ", "ADV-KS", "N", "PREP", "PROSUB", "PREP+PROPESS", "PDEN", "V", "PREP+ADV", "PCP", "CUR", "ADV", "PU", "ART" ``` ### Data Splits The data is split into train, validation and test set. The split sizes are as follow: | Train | Val | Test | | ------ | ----- | ----- | | 37948 | 1997 | 9987 | ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information ``` @article{fonseca2015evaluating, title={Evaluating word embeddings and a revised corpus for part-of-speech tagging in Portuguese}, author={Fonseca, Erick R and Rosa, Jo{\~a}o Lu{\'\i}s G and Alu{\'\i}sio, Sandra Maria}, journal={Journal of the Brazilian Computer Society}, volume={21}, number={1}, pages={2}, year={2015}, publisher={Springer} } ``` ### Contributions Thanks to [@jonatasgrosman](https://github.com/jonatasgrosman) for adding this dataset.
6,477
[ [ -0.0411376953125, -0.04412841796875, -0.00937652587890625, 0.0199127197265625, -0.037811279296875, -0.0011854171752929688, -0.0010986328125, -0.025726318359375, 0.054168701171875, 0.038726806640625, -0.032196044921875, -0.06982421875, -0.0634765625, 0.017257...
gopalkalpande/bbc-news-summary
2022-06-22T13:08:15.000Z
[ "license:cc0-1.0", "region:us" ]
gopalkalpande
null
null
3
136
2022-06-22T12:56:16
--- license: cc0-1.0 --- # About Dataset ### Context Text summarization is a way to condense the large amount of information into a concise form by the process of selection of important information and discarding unimportant and redundant information. With the amount of textual information present in the world wide web the area of text summarization is becoming very important. The extractive summarization is the one where the exact sentences present in the document are used as summaries. The extractive summarization is simpler and is the general practice among the automatic text summarization researchers at the present time. Extractive summarization process involves giving scores to sentences using some method and then using the sentences that achieve highest scores as summaries. As the exact sentence present in the document is used the semantic factor can be ignored which results in generation of less calculation intensive summarization procedure. This kind of summary is generally completely unsupervised and language independent too. Although this kind of summary does its job in conveying the essential information it may not be necessarily smooth or fluent. Sometimes there can be almost no connection between adjacent sentences in the summary resulting in the text lacking in readability. Content This dataset for extractive text summarization has four hundred and seventeen political news articles of BBC from 2004 to 2005 in the News Articles folder. For each articles, five summaries are provided in the Summaries folder. The first clause of the text of articles is the respective title. Acknowledgements This dataset was created using a dataset used for data categorization that onsists of 2225 documents from the BBC news website corresponding to stories in five topical areas from 2004-2005 used in the paper of D. Greene and P. Cunningham. "Practical Solutions to the Problem of Diagonal Dominance in Kernel Document Clustering", Proc. ICML 2006; whose all rights, including copyright, in the content of the original articles are owned by the BBC. More at http://mlg.ucd.ie/datasets/bbc.html **Kaggle Link:** https://www.kaggle.com/datasets/pariza/bbc-news-summary
2,197
[ [ -0.0362548828125, -0.056915283203125, 0.0037860870361328125, 0.0248260498046875, -0.049346923828125, -0.0021572113037109375, -0.0211639404296875, -0.019439697265625, 0.0193939208984375, 0.0230255126953125, -0.0164031982421875, -0.050537109375, -0.0703125, 0....
keremberke/shoe-classification
2023-01-27T13:46:52.000Z
[ "task_categories:image-classification", "roboflow", "roboflow2huggingface", "Sports", "Retail", "Benchmark", "region:us" ]
keremberke
null
\
2
136
2023-01-27T13:46:37
--- task_categories: - image-classification tags: - roboflow - roboflow2huggingface - Sports - Retail - Benchmark --- <div align="center"> <img width="640" alt="keremberke/shoe-classification" src="https://huggingface.co/datasets/keremberke/shoe-classification/resolve/main/thumbnail.jpg"> </div> ### Dataset Labels ``` ['converse', 'adidas', 'nike'] ``` ### Number of Images ```json {'train': 576, 'test': 83, 'valid': 166} ``` ### How to Use - Install [datasets](https://pypi.org/project/datasets/): ```bash pip install datasets ``` - Load the dataset: ```python from datasets import load_dataset ds = load_dataset("keremberke/shoe-classification", name="full") example = ds['train'][0] ``` ### Roboflow Dataset Page [https://universe.roboflow.com/popular-benchmarks/nike-adidas-and-converse-shoes-classification/dataset/4](https://universe.roboflow.com/popular-benchmarks/nike-adidas-and-converse-shoes-classification/dataset/4?ref=roboflow2huggingface) ### Citation ``` ``` ### License Public Domain ### Dataset Summary This dataset was exported via roboflow.com on October 28, 2022 at 2:38 AM GMT Roboflow is an end-to-end computer vision platform that helps you * collaborate with your team on computer vision projects * collect & organize images * understand unstructured image data * annotate, and create datasets * export, train, and deploy computer vision models * use active learning to improve your dataset over time It includes 825 images. Shoes are annotated in folder format. The following pre-processing was applied to each image: * Auto-orientation of pixel data (with EXIF-orientation stripping) No image augmentation techniques were applied.
1,689
[ [ -0.03228759765625, -0.0052032470703125, 0.0045013427734375, 0.004817962646484375, -0.035736083984375, 0.01084136962890625, -0.014556884765625, -0.038177490234375, 0.011749267578125, -0.006748199462890625, -0.049041748046875, -0.0628662109375, -0.03680419921875, ...
jxu124/llava_instruct_150k
2023-05-20T18:50:37.000Z
[ "region:us" ]
jxu124
null
null
0
136
2023-04-24T13:17:41
--- dataset_info: features: - name: global_image_id dtype: string - name: image_path dtype: string - name: dialog sequence: sequence: string - name: anns_id dtype: string splits: - name: train num_bytes: 187730970 num_examples: 157712 download_size: 95089013 dataset_size: 187730970 --- # Dataset Card for "llava_instruct_150k" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
508
[ [ -0.0263214111328125, -0.0146331787109375, 0.01033782958984375, 0.029632568359375, -0.0221099853515625, 0.0032672882080078125, 0.0177154541015625, -0.01093292236328125, 0.06695556640625, 0.037689208984375, -0.054046630859375, -0.0450439453125, -0.03887939453125, ...
miladfa7/Brain-MRI-Images-for-Brain-Tumor-Detection
2023-05-16T17:11:04.000Z
[ "region:us" ]
miladfa7
null
null
2
136
2023-05-03T07:11:39
Brain Tumor Detection | Vision Transformer 99% Click -> [Kaggle](https://www.kaggle.com/code/miladfa7/brain-tumor-detection-vision-transformer-99) --- task_categories: - image-classification - image-segmentation tags: - 'brain ' - MRI - brain-MRI-images - Tumor ---
266
[ [ -0.0206451416015625, -0.043975830078125, 0.04962158203125, 0.027435302734375, -0.03692626953125, -0.019439697265625, 0.0196380615234375, -0.006542205810546875, 0.034423828125, 0.05010986328125, -0.04638671875, -0.0615234375, -0.052703857421875, -0.0204925537...
Aeala/ShareGPT_Vicuna_unfiltered
2023-06-01T07:03:50.000Z
[ "language:en", "license:apache-2.0", "region:us" ]
Aeala
null
null
11
136
2023-06-01T06:54:32
--- license: apache-2.0 language: - en --- ## Dataset Card This is a reupload of [this dataset](https://huggingface.co/datasets/anon8231489123/ShareGPT_Vicuna_unfiltered) that was further cleaned by gozfarb.
209
[ [ -0.025848388671875, -0.0296783447265625, 0.0130157470703125, 0.007488250732421875, -0.052978515625, -0.01357269287109375, 0.0181121826171875, -0.016754150390625, 0.06304931640625, 0.0809326171875, -0.06640625, -0.042816162109375, -0.0321044921875, -0.0155410...
cdminix/bu_radio
2023-10-24T08:07:47.000Z
[ "task_categories:automatic-speech-recognition", "task_categories:text-to-speech", "license:other", "region:us" ]
cdminix
The Boston University Radio Speech Corpus was collected primarily to support research in text-to-speech synthesis, particularly generation of prosodic patterns. The corpus consists of professionally read radio news data, including speech and accompanying annotations, suitable for speech and language research.
@article{ostendorf1995boston, title={The Boston University radio news corpus}, author={Ostendorf, Mari and Price, Patti J and Shattuck-Hufnagel, Stefanie}, journal={Linguistic Data Consortium}, pages={1--19}, year={1995} }
0
136
2023-07-17T15:05:46
--- license: other task_categories: - automatic-speech-recognition - text-to-speech --- Simply point ``BURN_PATH`` to your local copy of the dataset.
150
[ [ 0.005794525146484375, -0.012054443359375, 0.0191650390625, -0.0010690689086914062, -0.02423095703125, 0.0016946792602539062, 0.00482177734375, 0.036376953125, 0.0487060546875, 0.080078125, -0.049957275390625, -0.032562255859375, -0.0025730133056640625, -0.01...
jed351/Traditional-Chinese-Common-Crawl-Filtered
2023-07-20T23:09:09.000Z
[ "language:zh", "region:us" ]
jed351
null
null
5
136
2023-07-20T21:24:43
--- language: - zh --- # Traditional Chinese C4 ### Dataset Summary Data obtained from 2023-14 Common Crawl. Downloaded and processed using [code](https://github.com/jedcheng/c4-dataset-script) based on another [project](https://github.com/shjwudp/c4-dataset-script) attempting to recreate the C4 dataset. The resultant dataset contains both simplified and traditional Chinese, which could be found [here](https://huggingface.co/datasets/jed351/Chinese-Common-Crawl-Filtered). It was then filtered using a [modified list](https://github.com/jedcheng/c4-dataset-script/blob/master/SC_filter/SC_list.txt) of simplified Chinese characters to obtain this traditional Chinese dataset. I would like to acknowledge computational resources and support provided by the Imperial College Research Computing Service (http://doi.org/10.14469/hpc/2232)
850
[ [ -0.01297760009765625, -0.021575927734375, 0.0297088623046875, 0.0221405029296875, -0.0184478759765625, 0.01219940185546875, -0.02105712890625, -0.0472412109375, 0.038177490234375, 0.0496826171875, -0.042755126953125, -0.0595703125, 0.000732421875, 0.04022216...
jamescalam/agent-conversations-retrieval-tool
2023-08-27T12:57:37.000Z
[ "region:us" ]
jamescalam
null
null
7
136
2023-08-27T12:56:16
Entry not found
15
[ [ -0.0213775634765625, -0.01497650146484375, 0.05718994140625, 0.02880859375, -0.0350341796875, 0.046478271484375, 0.052490234375, 0.00507354736328125, 0.051361083984375, 0.0170135498046875, -0.052093505859375, -0.01497650146484375, -0.0604248046875, 0.0379028...
sorenmulli/hyggeswag
2023-10-11T07:48:11.000Z
[ "region:us" ]
sorenmulli
null
null
0
136
2023-09-26T18:44:27
--- dataset_info: features: - name: ctx dtype: string - name: option-0 dtype: string - name: option-1 dtype: string - name: option-2 dtype: string - name: option-3 dtype: string - name: correct dtype: int64 - name: source_id dtype: string - name: ind dtype: int64 splits: - name: train num_bytes: 41243 num_examples: 100 download_size: 32083 dataset_size: 41243 configs: - config_name: default data_files: - split: train path: data/train-* --- # Dataset Card for "hyggeswag" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
681
[ [ -0.043792724609375, -0.032379150390625, 0.003803253173828125, 0.012725830078125, -0.0113067626953125, 0.001407623291015625, 0.011383056640625, -0.0181121826171875, 0.0704345703125, 0.0303955078125, -0.067138671875, -0.06268310546875, -0.052764892578125, -0.0...
ericyu/SYSU_CD
2023-10-22T16:50:21.000Z
[ "region:us" ]
ericyu
null
null
0
136
2023-10-22T16:44:46
--- configs: - config_name: default data_files: - split: train path: data/train-* - split: test path: data/test-* - split: val path: data/val-* dataset_info: features: - name: imageA dtype: image - name: imageB dtype: image - name: label dtype: image splits: - name: train num_bytes: 3393267984.0 num_examples: 12000 - name: test num_bytes: 1196988392.0 num_examples: 4000 - name: val num_bytes: 1164865940.0 num_examples: 4000 download_size: 5814133284 dataset_size: 5755122316.0 --- # Dataset Card for "SYSU_CD" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
722
[ [ -0.031585693359375, -0.018951416015625, 0.02008056640625, 0.018707275390625, -0.011810302734375, -0.01092529296875, 0.01114654541015625, 0.0015802383422851562, 0.05865478515625, 0.017608642578125, -0.0635986328125, -0.062042236328125, -0.0457763671875, -0.00...
cornell_movie_dialog
2023-04-05T10:02:37.000Z
[ "language:en", "region:us" ]
null
This corpus contains a large metadata-rich collection of fictional conversations extracted from raw movie scripts: - 220,579 conversational exchanges between 10,292 pairs of movie characters - involves 9,035 characters from 617 movies - in total 304,713 utterances - movie metadata included: - genres - release year - IMDB rating - number of IMDB votes - IMDB rating - character metadata included: - gender (for 3,774 characters) - position on movie credits (3,321 characters)
@InProceedings{Danescu-Niculescu-Mizil+Lee:11a, author={Cristian Danescu-Niculescu-Mizil and Lillian Lee}, title={Chameleons in imagined conversations: A new approach to understanding coordination of linguistic style in dialogs.}, booktitle={Proceedings of the Workshop on Cognitive Modeling and Computational Linguistics, ACL 2011}, year={2011} }
11
135
2022-03-02T23:29:22
--- language: - en paperswithcode_id: cornell-movie-dialogs-corpus pretty_name: Cornell Movie-Dialogs Corpus dataset_info: features: - name: movieID dtype: string - name: movieTitle dtype: string - name: movieYear dtype: string - name: movieIMDBRating dtype: string - name: movieNoIMDBVotes dtype: string - name: movieGenres sequence: string - name: characterID1 dtype: string - name: characterID2 dtype: string - name: characterName1 dtype: string - name: characterName2 dtype: string - name: utterance sequence: - name: text dtype: string - name: LineID dtype: string splits: - name: train num_bytes: 19548840 num_examples: 83097 download_size: 9916637 dataset_size: 19548840 --- # Dataset Card for "cornell_movie_dialog" ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [http://www.cs.cornell.edu/~cristian/Cornell_Movie-Dialogs_Corpus.html](http://www.cs.cornell.edu/~cristian/Cornell_Movie-Dialogs_Corpus.html) - **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Size of downloaded dataset files:** 9.92 MB - **Size of the generated dataset:** 19.55 MB - **Total amount of disk used:** 29.46 MB ### Dataset Summary This corpus contains a large metadata-rich collection of fictional conversations extracted from raw movie scripts: - 220,579 conversational exchanges between 10,292 pairs of movie characters - involves 9,035 characters from 617 movies - in total 304,713 utterances - movie metadata included: - genres - release year - IMDB rating - number of IMDB votes - IMDB rating - character metadata included: - gender (for 3,774 characters) - position on movie credits (3,321 characters) ### Supported Tasks and Leaderboards [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Languages [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Dataset Structure ### Data Instances #### default - **Size of downloaded dataset files:** 9.92 MB - **Size of the generated dataset:** 19.55 MB - **Total amount of disk used:** 29.46 MB An example of 'train' looks as follows. ``` { "characterID1": "u0 ", "characterID2": " u2 ", "characterName1": " m0 ", "characterName2": " m0 ", "movieGenres": ["comedy", "romance"], "movieID": " m0 ", "movieIMDBRating": " 6.90 ", "movieNoIMDBVotes": " 62847 ", "movieTitle": " f ", "movieYear": " 1999 ", "utterance": { "LineID": ["L1"], "text": ["L1 "] } } ``` ### Data Fields The data fields are the same among all splits. #### default - `movieID`: a `string` feature. - `movieTitle`: a `string` feature. - `movieYear`: a `string` feature. - `movieIMDBRating`: a `string` feature. - `movieNoIMDBVotes`: a `string` feature. - `movieGenres`: a `list` of `string` features. - `characterID1`: a `string` feature. - `characterID2`: a `string` feature. - `characterName1`: a `string` feature. - `characterName2`: a `string` feature. - `utterance`: a dictionary feature containing: - `text`: a `string` feature. - `LineID`: a `string` feature. ### Data Splits | name |train| |-------|----:| |default|83097| ## Dataset Creation ### Curation Rationale [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Source Data #### Initial Data Collection and Normalization [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the source language producers? [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Annotations #### Annotation process [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the annotators? [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Personal and Sensitive Information [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Discussion of Biases [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Other Known Limitations [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Additional Information ### Dataset Curators [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Licensing Information [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Citation Information ``` @InProceedings{Danescu-Niculescu-Mizil+Lee:11a, author={Cristian Danescu-Niculescu-Mizil and Lillian Lee}, title={Chameleons in imagined conversations: A new approach to understanding coordination of linguistic style in dialogs.}, booktitle={Proceedings of the Workshop on Cognitive Modeling and Computational Linguistics, ACL 2011}, year={2011} } ``` ### Contributions Thanks to [@mariamabarham](https://github.com/mariamabarham), [@patrickvonplaten](https://github.com/patrickvonplaten), [@thomwolf](https://github.com/thomwolf) for adding this dataset.
7,346
[ [ -0.043792724609375, -0.05145263671875, 0.00977325439453125, -0.0069122314453125, -0.0208740234375, 0.0083160400390625, -0.02587890625, -0.02056884765625, 0.050384521484375, 0.03656005859375, -0.0621337890625, -0.06024169921875, -0.040435791015625, 0.00832366...
hebrew_sentiment
2023-01-25T14:32:05.000Z
[ "task_categories:text-classification", "task_ids:sentiment-classification", "annotations_creators:expert-generated", "language_creators:found", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:he", "license:mit", "region:us" ]
null
HebrewSentiment is a data set consists of 12,804 user comments to posts on the official Facebook page of Israel’s president, Mr. Reuven Rivlin. In October 2015, we used the open software application Netvizz (Rieder, 2013) to scrape all the comments to all of the president’s posts in the period of June – August 2014, the first three months of Rivlin’s presidency.2 While the president’s posts aimed at reconciling tensions and called for tolerance and empathy, the sentiment expressed in the comments to the president’s posts was polarized between citizens who warmly thanked the president, and citizens that fiercely critiqued his policy. Of the 12,804 comments, 370 are neutral; 8,512 are positive, 3,922 negative. Data Annotation: A trained researcher examined each comment and determined its sentiment value, where comments with an overall positive sentiment were assigned the value 1, comments with an overall negative sentiment were assigned the value -1, and comments that are off-topic to the post’s content were assigned the value 0. We validated the coding scheme by asking a second trained researcher to code the same data. There was substantial agreement between raters (N of agreements: 10623, N of disagreements: 2105, Coehn’s Kappa = 0.697, p = 0).
@inproceedings{amram-etal-2018-representations, title = "Representations and Architectures in Neural Sentiment Analysis for Morphologically Rich Languages: A Case Study from {M}odern {H}ebrew", author = "Amram, Adam and Ben David, Anat and Tsarfaty, Reut", booktitle = "Proceedings of the 27th International Conference on Computational Linguistics", month = aug, year = "2018", address = "Santa Fe, New Mexico, USA", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/C18-1190", pages = "2242--2252", abstract = "This paper empirically studies the effects of representation choices on neural sentiment analysis for Modern Hebrew, a morphologically rich language (MRL) for which no sentiment analyzer currently exists. We study two dimensions of representational choices: (i) the granularity of the input signal (token-based vs. morpheme-based), and (ii) the level of encoding of vocabulary items (string-based vs. character-based). We hypothesise that for MRLs, languages where multiple meaning-bearing elements may be carried by a single space-delimited token, these choices will have measurable effects on task perfromance, and that these effects may vary for different architectural designs {---} fully-connected, convolutional or recurrent. Specifically, we hypothesize that morpheme-based representations will have advantages in terms of their generalization capacity and task accuracy, due to their better OOV coverage. To empirically study these effects, we develop a new sentiment analysis benchmark for Hebrew, based on 12K social media comments, and provide two instances of these data: in token-based and morpheme-based settings. Our experiments show that representation choices empirical effects vary with architecture type. While fully-connected and convolutional networks slightly prefer token-based settings, RNNs benefit from a morpheme-based representation, in accord with the hypothesis that explicit morphological information may help generalize. Our endeavour also delivers the first state-of-the-art broad-coverage sentiment analyzer for Hebrew, with over 89% accuracy, alongside an established benchmark to further study the effects of linguistic representation choices on neural networks{'} task performance.", }
2
135
2022-03-02T23:29:22
--- annotations_creators: - expert-generated language_creators: - found language: - he license: - mit multilinguality: - monolingual size_categories: - 10K<n<100K source_datasets: - original task_categories: - text-classification task_ids: - sentiment-classification paperswithcode_id: modern-hebrew-sentiment-dataset pretty_name: HebrewSentiment dataset_info: - config_name: token features: - name: text dtype: string - name: label dtype: class_label: names: '0': pos '1': neg '2': off-topic splits: - name: train num_bytes: 2159738 num_examples: 10244 - name: test num_bytes: 540883 num_examples: 2560 download_size: 2593643 dataset_size: 2700621 - config_name: morph features: - name: text dtype: string - name: label dtype: class_label: names: '0': pos '1': neg '2': off-topic splits: - name: train num_bytes: 2258128 num_examples: 10221 - name: test num_bytes: 571401 num_examples: 2555 download_size: 2722672 dataset_size: 2829529 --- # Dataset Card for HebrewSentiment ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://github.com/omilab/Neural-Sentiment-Analyzer-for-Modern-Hebrew - **Repository:** https://github.com/omilab/Neural-Sentiment-Analyzer-for-Modern-Hebrew - **Paper:** http://aclweb.org/anthology/C18-1190 - **Leaderboard:** - **Point of Contact:** ### Dataset Summary HebrewSentiment is a data set consists of 12,804 user comments to posts on the official Facebook page of Israel’s president, Mr. Reuven Rivlin. In October 2015, we used the open software application Netvizz (Rieder, 2013) to scrape all the comments to all of the president’s posts in the period of June – August 2014, the first three months of Rivlin’s presidency.2 While the president’s posts aimed at reconciling tensions and called for tolerance and empathy, the sentiment expressed in the comments to the president’s posts was polarized between citizens who warmly thanked the president, and citizens that fiercely critiqued his policy. Of the 12,804 comments, 370 are neutral; 8,512 are positive, 3,922 negative. Data Annotation: ### Supported Tasks and Leaderboards Sentiment Analysis ### Languages Hebrew ## Dataset Structure tsv format: {hebrew_sentence}\t{sentiment_label} ### Data Instances רובי הייתי רוצה לראות ערביה נישאת ליהודי 1 תמונה יפיפיה-שפו 0 חייבים לעשות סוג של חרם כשכתבים שונאי ישראל עולים לשידור צריכים להעביר לערוץ אחר ואז תראו מה יעשה כוחו של הרייטינג ( בהקשר לדבריה של רינה מצליח ) 2 ### Data Fields - `text`: The modern hebrew inpput text. - `label`: The sentiment label. 0=positive , 1=negative, 2=off-topic. ### Data Splits | | train | test | |--------------------------|--------|---------| | HebrewSentiment (token) | 10243 | 2559 | | HebrewSentiment (morph) | 10243 | 2559 | ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization User comments to posts on the official Facebook page of Israel’s president, Mr. Reuven Rivlin. In October 2015, we used the open software application Netvizz (Rieder, 2013) to scrape all the comments to all of the president’s posts in the period of June – August 2014, the first three months of Rivlin’s presidency. #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process A trained researcher examined each comment and determined its sentiment value, where comments with an overall positive sentiment were assigned the value 0, comments with an overall negative sentiment were assigned the value 1, and comments that are off-topic to the post’s content were assigned the value 2. We validated the coding scheme by asking a second trained researcher to code the same data. There was substantial agreement between raters (N of agreements: 10623, N of disagreements: 2105, Coehn’s Kappa = 0.697, p = 0). #### Who are the annotators? Researchers ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators OMIlab, The Open University of Israel ### Licensing Information MIT License Copyright (c) 2018 OMIlab, The Open University of Israel Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. ### Citation Information @inproceedings{amram-etal-2018-representations, title = "Representations and Architectures in Neural Sentiment Analysis for Morphologically Rich Languages: A Case Study from {M}odern {H}ebrew", author = "Amram, Adam and Ben David, Anat and Tsarfaty, Reut", booktitle = "Proceedings of the 27th International Conference on Computational Linguistics", month = aug, year = "2018", address = "Santa Fe, New Mexico, USA", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/C18-1190", pages = "2242--2252", abstract = "This paper empirically studies the effects of representation choices on neural sentiment analysis for Modern Hebrew, a morphologically rich language (MRL) for which no sentiment analyzer currently exists. We study two dimensions of representational choices: (i) the granularity of the input signal (token-based vs. morpheme-based), and (ii) the level of encoding of vocabulary items (string-based vs. character-based). We hypothesise that for MRLs, languages where multiple meaning-bearing elements may be carried by a single space-delimited token, these choices will have measurable effects on task perfromance, and that these effects may vary for different architectural designs {---} fully-connected, convolutional or recurrent. Specifically, we hypothesize that morpheme-based representations will have advantages in terms of their generalization capacity and task accuracy, due to their better OOV coverage. To empirically study these effects, we develop a new sentiment analysis benchmark for Hebrew, based on 12K social media comments, and provide two instances of these data: in token-based and morpheme-based settings. Our experiments show that representation choices empirical effects vary with architecture type. While fully-connected and convolutional networks slightly prefer token-based settings, RNNs benefit from a morpheme-based representation, in accord with the hypothesis that explicit morphological information may help generalize. Our endeavour also delivers the first state-of-the-art broad-coverage sentiment analyzer for Hebrew, with over 89{\%} accuracy, alongside an established benchmark to further study the effects of linguistic representation choices on neural networks{'} task performance.", } ### Contributions Thanks to [@elronbandel](https://github.com/elronbandel) for adding this dataset.
9,063
[ [ -0.04217529296875, -0.04620361328125, 0.00506591796875, 0.035308837890625, -0.03619384765625, -0.0107574462890625, -0.035186767578125, -0.026397705078125, 0.04150390625, 0.007183074951171875, -0.0300140380859375, -0.06976318359375, -0.05645751953125, -0.0014...