| Column | Type | Min | Max |
| --- | --- | --- | --- |
| author | string (lengths) | 2 | 29 |
| cardData | null | | |
| citation | string (lengths) | 0 | 9.58k |
| description | string (lengths) | 0 | 5.93k |
| disabled | bool (1 class) | | |
| downloads | float64 | 1 | 1M |
| gated | bool (2 classes) | | |
| id | string (lengths) | 2 | 108 |
| lastModified | string (lengths) | 24 | 24 |
| paperswithcode_id | string (lengths) | 2 | 45 |
| private | bool (2 classes) | | |
| sha | string (lengths) | 40 | 40 |
| siblings | list | | |
| tags | list | | |
| readme_url | string (lengths) | 57 | 163 |
| readme | string (lengths) | 0 | 977k |
jglaser
null
@InProceedings{huggingface:dataset, title = {jglaser/protein_ligand_contacts}, author={Jens Glaser, ORNL }, year={2022} }
A dataset to fine-tune language models on protein-ligand binding affinity and contact prediction.
false
2
false
jglaser/protein_ligand_contacts
2022-03-15T21:17:32.000Z
null
false
67f9dbf9e17ada0dcdc47e05ad9b37ed01f8e82f
[]
[ "tags:molecules", "tags:chemistry", "tags:SMILES" ]
https://huggingface.co/datasets/jglaser/protein_ligand_contacts/resolve/main/README.md
---
tags:
- molecules
- chemistry
- SMILES
---

## How to use the data sets

This dataset contains more than 16,000 unique pairs of protein sequences and ligand SMILES with experimentally determined binding affinities and protein-ligand contacts (ligand atom/SMILES token vs. C-alpha within 5 Å). These are represented by a list that contains the positions of non-zero elements of the flattened, sparse sequence x SMILES tokens (2048x512) matrix. The first and last entries in both dimensions are padded to zero; they correspond to [CLS] and [SEP].

It can be used for fine-tuning a language model.

The data comes solely from PDBbind-CN. Contacts are calculated at four cut-off distances: 5, 8, 11, and 15 Å.

### Use the already preprocessed data

Load a test/train split using

```
from datasets import load_dataset
train = load_dataset("jglaser/protein_ligand_contacts", split='train[:90%]')
validation = load_dataset("jglaser/protein_ligand_contacts", split='train[90%:]')
```

### Pre-process yourself

To perform the preprocessing manually, download the data sets from PDBbind-CN. Register for an account at <https://www.pdbbind.org.cn/>, confirm the validation email, then log in and download

- the Index files (1)
- the general protein-ligand complexes (2)
- the refined protein-ligand complexes (3)

Extract those files in `pdbbind/data`.

Run the script `pdbbind.py` in a compute job on an MPI-enabled cluster (e.g., `mpirun -n 64 pdbbind.py`).

Perform the steps in the notebook `pdbbind.ipynb`.
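The flattened contact indices described in the card can be turned back into a dense map with a short sketch. The 2048x512 shape comes from the card; the function name and the example index below are illustrative assumptions, not part of the dataset's API.

```python
import numpy as np

# Rebuild the dense (sequence x SMILES-token) contact map from the flat
# indices of non-zero elements described in the card. SEQ_LEN and
# SMILES_LEN come from the 2048x512 matrix shape; the rest is a sketch.
SEQ_LEN, SMILES_LEN = 2048, 512

def contacts_to_matrix(flat_positions):
    """Map flat indices of non-zero entries back to a 2048x512 0/1 matrix."""
    mat = np.zeros((SEQ_LEN, SMILES_LEN), dtype=np.int8)
    rows, cols = np.divmod(np.asarray(flat_positions, dtype=np.int64), SMILES_LEN)
    mat[rows, cols] = 1
    return mat

# Flat index 513 corresponds to residue 1, SMILES token 1 (row-major order).
contact_map = contacts_to_matrix([513])
```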
ctheodoris
null
null
null
false
1
false
ctheodoris/Genecorpus-30M
2022-09-28T00:14:01.000Z
null
false
440e8b1455d5d1268dfd2f7ec9fd2bfa4148e73c
[]
[]
https://huggingface.co/datasets/ctheodoris/Genecorpus-30M/resolve/main/README.md
# Dataset Card for Genecorpus-30M

## Table of Contents

- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks](#supported-tasks)
  - [Species](#species)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)

<!---
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
--->

## Dataset Description

<!---
**Paper:**
--->

- **Point of Contact:** christina.theodoris@gladstone.ucsf.edu

### Dataset Summary

We assembled a large-scale pretraining corpus, Genecorpus-30M, comprised of ~30 million human single cell transcriptomes from a broad range of tissues from publicly available data. This corpus was used for pretraining [Geneformer](https://huggingface.co/ctheodoris/Geneformer), a pretrained transformer model that enables context-aware predictions in settings with limited data in network biology.

### Supported Tasks

This corpus was used for pretraining [Geneformer](https://huggingface.co/ctheodoris/Geneformer) and is compatible with pretraining or fine-tuning Geneformer or similar models.

### Species

Homo sapiens

## Dataset Structure

### Data Instances

Genecorpus-30M is provided as tokenized data in the Huggingface Datasets structure, which is based on the Apache Arrow format.
Each example within the dataset is composed of the rank value encoding for a single cell within the corpus. Rank value encodings provide a nonparametric representation of each single cell’s transcriptome, ranking genes by their expression within that cell normalized by their expression across the entire Genecorpus-30M. This method takes advantage of the many observations of each gene’s expression across Genecorpus-30M to prioritize genes that distinguish cell state. Specifically, this method will deprioritize ubiquitously highly-expressed housekeeping genes by normalizing them to a lower rank. Conversely, genes such as transcription factors that may be lowly expressed when they are expressed but highly distinguish cell state will move to a higher rank within the encoding. Furthermore, this rank-based approach may be more robust against technical artifacts that may systematically bias the absolute transcript counts value while the overall relative ranking of genes within each cell remains more stable. To accomplish this, we first calculated the nonzero median value of expression of each detected gene across all cells from the entire Genecorpus-30M. We aggregated the transcript count distribution for each gene, normalizing the gene transcript counts in each cell by the total transcript count of that cell to account for varying sequencing depth. We then normalized the genes in each single cell transcriptome by that gene’s non-zero median value of expression across Genecorpus-30M and ordered the genes by the rank of their normalized expression in that specific cell. Of note, we opted to use the nonzero median value of expression rather than include zeros in the distribution so as not to weight the value by tissue representation within Genecorpus-30M, assuming that a representative range of transcript values would be observed within the cells in which each gene was detected. 
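The normalization and ranking procedure described above can be sketched on toy data. The array names and toy counts below are illustrative assumptions, not the authors' code.

```python
import numpy as np

# Toy cells x genes raw transcript counts (illustrative only).
counts = np.array([[9.0, 1.0, 0.0],
                   [8.0, 0.0, 2.0]])

# 1) Normalize each cell by its total transcript count (sequencing depth).
depth_norm = counts / counts.sum(axis=1, keepdims=True)

# 2) Nonzero median expression of each gene across the whole corpus.
nonzero_median = np.array([np.median(col[col > 0]) for col in depth_norm.T])

# 3) Normalize by the per-gene nonzero medians, then rank each cell's
#    detected genes from highest to lowest normalized expression.
normed = depth_norm / nonzero_median

def rank_value_encoding(row):
    expressed = np.flatnonzero(row > 0)            # only detected genes
    return expressed[np.argsort(-row[expressed])].tolist()

encodings = [rank_value_encoding(row) for row in normed]
# In cell 1, the ubiquitously high gene 0 is deprioritized relative to the
# cell-distinguishing gene 2, as described above.
```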
The rank value encodings for each single cell transcriptome were then tokenized based on a total vocabulary of 25,424 protein-coding or miRNA genes detected within Genecorpus-30M. The token dictionary mapping each token ID to special tokens (`<pad>` and `<mask>`) or Ensembl IDs for each gene is included within the repository as a pickle file (token_dictionary.pickle).

### Data Fields

- `input_ids`: rank value encoding for an example cell
- `lengths`: length of rank value encoding for that example cell

### Data Splits

The dataset does not contain any predefined splits.

## Dataset Creation

### Curation Rationale

Mapping the gene regulatory networks that drive disease progression enables screening for molecules that correct the network by normalizing core regulatory elements, rather than targeting peripheral downstream effectors that may not be disease modifying. However, mapping the gene network architecture requires large amounts of transcriptomic data to learn the connections between genes, which impedes network-correcting drug discovery in settings with limited data, including rare diseases and diseases affecting clinically inaccessible tissues. Although data remains limited in these settings, recent advances in sequencing technologies have driven a rapid expansion in the amount of transcriptomic data available from human tissues more broadly. Furthermore, single cell technologies have facilitated the observation of transcriptomic states without averaging genes’ expression across multiple cells, potentially providing more precise data for inference of network interactions, especially in diseases driven by dysregulation of multiple cell types.
Recently, the concept of transfer learning has revolutionized fields such as natural language understanding and computer vision by leveraging deep learning models pretrained on large-scale general datasets that can then be fine-tuned towards a vast array of downstream tasks with limited task-specific data that would be insufficient to yield meaningful predictions when used in isolation. We therefore assembled Genecorpus-30M to allow the large-scale pretraining of [Geneformer](https://huggingface.co/ctheodoris/Geneformer), a pretrained transformer model that enables context-aware predictions in settings with limited data in network biology.

### Source Data

#### Initial Data Collection and Normalization

Source data included 29.9 million (29,900,531) human single cell transcriptomes from a broad range of tissues from 561 publicly available datasets from original studies cited in the Extended Methods of Theodoris et al. 2022. Datasets were filtered to retain cells with total read counts within three standard deviations of the mean within that dataset and mitochondrial reads within three standard deviations of the mean within that dataset. Ensembl-annotated protein-coding and miRNA genes were used for downstream analysis. Cells with fewer than seven detected Ensembl-annotated protein-coding or miRNA genes were excluded as the 15% masking used for the pretraining learning objective would not reliably mask a gene in cells with fewer detected genes. Ultimately, 27.4 million (27,406,217) cells passed the defined quality filters. Cells were then represented as rank value encodings as discussed above in [Data Instances](#data-instances).

#### Who are the source data producers?
Publicly available datasets containing raw counts were collected from National Center for Biotechnology Information (NCBI) Gene Expression Omnibus (GEO), NCBI Sequence Read Archive (SRA), Human Cell Atlas, European Molecular Biology Laboratory-European Bioinformatics Institute (EMBL-EBI) Single Cell Expression Atlas, Broad Institute Single Cell Portal, Brotman Baty Institute (BBI)-Allen Single Cell Atlases, Tumor Immune Single-cell Hub (TISCH) (excluding malignant cells), Panglao Database, 10x Genomics, University of California, Santa Cruz Cell Browser, European Genome-phenome Archive, Synapse, Riken, Zenodo, National Institutes of Health (NIH) Figshare Archive, NCBI dbGap, Refine.bio, China National GeneBank Sequence Archive, Mendeley Data, and individual communication with authors of the original studies as cited in the Extended Methods of Theodoris et al. 2022.

### Annotations

#### Annotation process

Genecorpus-30M does not contain annotations.

#### Who are the annotators?

N/A

### Personal and Sensitive Information

There is no personal or sensitive information included in the dataset. The dataset is composed of rank value encodings, so there are no traceable sequencing reads included.

## Considerations for Using the Data

### Social Impact of Dataset

Genecorpus-30M enabled the large-scale pretraining of [Geneformer](https://huggingface.co/ctheodoris/Geneformer), a pretrained transformer model that enables context-aware predictions in settings with limited data in network biology. Within our publication, we demonstrated that during pretraining, Geneformer gained a fundamental understanding of network dynamics, encoding network hierarchy in the model’s attention weights in a completely self-supervised manner. Fine-tuning Geneformer towards a diverse panel of downstream tasks relevant to chromatin and network dynamics using limited task-specific data demonstrated that Geneformer consistently boosted predictive accuracy.
Applied to disease modeling with limited patient data, Geneformer identified candidate therapeutic targets for cardiomyopathy. Overall, Geneformer represents an invaluable pretrained model from which fine-tuning towards a broad range of downstream applications can be pursued to accelerate discovery of key network regulators and candidate therapeutic targets.

### Discussion of Biases

We excluded cells with high mutational burdens (e.g. malignant cells and immortalized cell lines) that could lead to substantial network rewiring without companion genome sequencing to facilitate interpretation. We only included droplet-based sequencing platforms to assure expression value unit comparability. Although we assembled the dataset to represent as diverse a set of human tissues and cell types as possible, particular tissues and cell types are not represented due to unavailability of public data at the time of dataset assembly. In our manuscript, we demonstrated that pretraining with larger and more diverse corpuses consistently improved Geneformer’s predictive power, consistent with observations that large-scale pretraining allows training of deeper models that ultimately have greater predictive potential in fields including NLU, computer vision, and mathematical problem-solving. Additionally, exposure to hundreds of experimental datasets during pretraining also appeared to promote robustness to batch-dependent technical artifacts and individual variability that commonly impact single cell analyses in biology. These findings suggest that as the amount of publicly available transcriptomic data continues to expand, future models pretrained on even larger-scale corpuses may open opportunities to achieve meaningful predictions in even more elusive tasks with increasingly limited task-specific data.

### Other Known Limitations

Genecorpus-30M was intended to be used for self-supervised pretraining.
To achieve the best possible predictions in downstream tasks, Geneformer should be fine-tuned with labeled datasets relevant to the task at hand.

## Additional Information

### Dataset Curators

Christina Theodoris, MD, PhD

<!---
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset.
--->
SetFit
null
null
null
false
1
false
SetFit/catalonia_independence_ca
2022-03-13T09:10:29.000Z
null
false
9d24e08b068f24f80d9b3679e3806fe1c1be8fb3
[]
[]
https://huggingface.co/datasets/SetFit/catalonia_independence_ca/resolve/main/README.md
# Catalonian independence tweet dataset

This dataset is a port of the official [`catalonia_independence` dataset](https://huggingface.co/datasets/catalonia_independence) on the Hub. It contains just the Catalan-language version.
SetFit
null
null
null
false
1
false
SetFit/catalonia_independence_es
2022-03-13T09:11:31.000Z
null
false
4d0ae2a3df2769cd4eff981ae8184b9fd72b0798
[]
[]
https://huggingface.co/datasets/SetFit/catalonia_independence_es/resolve/main/README.md
# Catalonian independence tweet dataset

This dataset is a port of the official [`catalonia_independence` dataset](https://huggingface.co/datasets/catalonia_independence) on the Hub. It contains just the Spanish-language version.
SetFit
null
null
null
false
1
false
SetFit/xglue_nc
2022-03-14T03:27:58.000Z
null
false
7fa32cf76b45dceb224903152c34dfa13718dfb2
[]
[]
https://huggingface.co/datasets/SetFit/xglue_nc/resolve/main/README.md
# XGLUE NC

This dataset is a port of the official [`xglue` dataset](https://huggingface.co/datasets/xglue) on the Hub. It contains just the news category classification section, reduced to the 3 columns (plus a text label) that are relevant to the SetFit task. Validation and test sets are in English, Spanish, French, Russian, and German.
SetFit
null
null
null
false
9
false
SetFit/amazon_reviews_multi_de
2022-03-23T15:34:53.000Z
null
false
bb25d49f17c86f7affb193c18e0511afcd51b933
[]
[]
https://huggingface.co/datasets/SetFit/amazon_reviews_multi_de/resolve/main/README.md
# Amazon reviews multi (German)

This dataset is a port of the official [`amazon_reviews_multi` dataset](https://huggingface.co/datasets/amazon_reviews_multi) on the Hub. It contains just the German-language version, reduced to the 3 columns (plus a fourth, `label_text`) that are relevant to the SetFit task.
SetFit
null
null
null
false
7
false
SetFit/amazon_reviews_multi_es
2022-03-23T15:43:09.000Z
null
false
16015418b488c9186fce74b058877ea939ca934d
[]
[]
https://huggingface.co/datasets/SetFit/amazon_reviews_multi_es/resolve/main/README.md
# Amazon reviews multi (Spanish)

This dataset is a port of the official [`amazon_reviews_multi` dataset](https://huggingface.co/datasets/amazon_reviews_multi) on the Hub. It contains just the Spanish-language version, reduced to the 3 columns (plus a fourth, `label_text`) that are relevant to the SetFit task.
SetFit
null
null
null
false
7
false
SetFit/amazon_reviews_multi_ja
2022-03-23T15:40:06.000Z
null
false
77676678b2e9e03265aae02823ba2f77b531d11a
[]
[]
https://huggingface.co/datasets/SetFit/amazon_reviews_multi_ja/resolve/main/README.md
# Amazon reviews multi (Japanese)

This dataset is a port of the official [`amazon_reviews_multi` dataset](https://huggingface.co/datasets/amazon_reviews_multi) on the Hub. It contains just the Japanese-language version, reduced to the 3 columns (plus a fourth, `label_text`) that are relevant to the SetFit task.
SetFit
null
null
null
false
13
false
SetFit/amazon_reviews_multi_zh
2022-03-23T15:30:49.000Z
null
false
184ac90d5511a7f6801cba99688892f440ece660
[]
[]
https://huggingface.co/datasets/SetFit/amazon_reviews_multi_zh/resolve/main/README.md
# Amazon reviews multi (Chinese)

This dataset is a port of the official [`amazon_reviews_multi` dataset](https://huggingface.co/datasets/amazon_reviews_multi) on the Hub. It contains just the Chinese-language version, reduced to the 3 columns (plus a fourth, `label_text`) that are relevant to the SetFit task.
SetFit
null
null
null
false
7
false
SetFit/amazon_reviews_multi_fr
2022-03-23T15:45:44.000Z
null
false
3a43b31171a667fb0bb7a298e143fd022266f78b
[]
[]
https://huggingface.co/datasets/SetFit/amazon_reviews_multi_fr/resolve/main/README.md
# Amazon reviews multi (French)

This dataset is a port of the official [`amazon_reviews_multi` dataset](https://huggingface.co/datasets/amazon_reviews_multi) on the Hub. It contains just the French-language version, reduced to the 3 columns (plus a fourth, `label_text`) that are relevant to the SetFit task.
multiIR
null
null
null
false
1
false
multiIR/toy_data
2022-03-14T10:33:27.000Z
null
false
7c9a79666d13e6d27ee74279fccdca11decbfb5d
[]
[]
https://huggingface.co/datasets/multiIR/toy_data/resolve/main/README.md
# Toy dataset

This is a small portion of the full dataset, used for testing and formatting purposes.
rocca
null
null
null
false
1
false
rocca/top-reddit-posts
2022-03-23T05:16:33.000Z
null
false
f9dd0d78228c6840ae9d97ffb7b8d6dfbbbc8634
[]
[ "license:mit" ]
https://huggingface.co/datasets/rocca/top-reddit-posts/resolve/main/README.md
---
license: mit
---

The `post-data-by-subreddit.tar` file contains 5000 gzipped JSON files, one for each of the top 5000 subreddits (as roughly measured by subscriber count and comment activity). Each of those JSON files (e.g. `askreddit.json`) contains an array of the data for the top 1000 posts of all time.

Notes:

* I stopped crawling a subreddit's top-posts list if I reached a batch that had a post with a score less than 5, so some subreddits won't have the full 1000 posts.
* No post comments are included, only the posts themselves.
* See the example file `askreddit.json` in this repo if you want to see what you're getting before downloading all the data.
* The subreddits included are listed in `top-5k-subreddits.json`.
* NSFW subreddits have been included in the crawl, so you might have to filter them out depending on your use case.
* The Deno scraping/crawling script is included as `crawl.js`, and can be started with `deno run --allow-net --allow-read=. --allow-write=. crawl.js` once you've [installed Deno](https://deno.land/manual/getting_started/installation) and have downloaded `top-5k-subreddits.json` into the same folder as `crawl.js`.
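Reading the archive might look like the following sketch; the member names inside the tar and the exact JSON layout per file are assumptions based on the description above.

```python
import gzip
import json
import tarfile

def iter_subreddit_posts(tar_path):
    """Yield (member_name, posts) for each gzipped JSON file in the tar.

    Assumes each member is a gzip-compressed JSON array of post objects,
    as described in the card (member naming inside the tar is assumed).
    """
    with tarfile.open(tar_path) as tar:
        for member in tar:
            if not member.isfile():
                continue
            raw = tar.extractfile(member).read()
            yield member.name, json.loads(gzip.decompress(raw))

# Usage (hypothetical path and field names):
# for name, posts in iter_subreddit_posts("post-data-by-subreddit.tar"):
#     top = max(posts, key=lambda p: p.get("score", 0))
```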
carbon12
null
false
1
false
carbon12/evaluating_student_writing
2022-03-13T13:03:06.000Z
null
false
dd5650eb094112f8913c5c9f907e43008aeb52cf
[]
[]
https://huggingface.co/datasets/carbon12/evaluating_student_writing/resolve/main/README.md
From the Evaluating Student Writing Kaggle competition.
gj1997
null
null
null
false
1
false
gj1997/trial2
2022-03-13T09:03:58.000Z
null
false
c54df84f9a7566184d83c75d208a97e5aa5a77d3
[]
[]
https://huggingface.co/datasets/gj1997/trial2/resolve/main/README.md
Parmann
null
null
null
false
1
false
Parmann/speech_classification
2022-03-13T08:32:04.000Z
null
false
749b7eac6d013c77d95ba1b744bb88ac436ca48b
[]
[]
https://huggingface.co/datasets/Parmann/speech_classification/resolve/main/README.md
This dataset contains MFCC features extracted from 646 short speech audio clips.
stjokerli
null
null
null
false
1
false
stjokerli/TextToText_axg_seqio
2022-04-04T10:24:18.000Z
null
false
088baa7f2aa235290fb8a35850cee1e70bd5ce25
[]
[]
https://huggingface.co/datasets/stjokerli/TextToText_axg_seqio/resolve/main/README.md
# Text-to-text format from SuperGLUE AXg

Note that the RTE train and validation sets have been added.

```
axg: DatasetDict({
    test: Dataset({
        features: ['idx', 'inputs', 'targets'],
        num_rows: 356
    })
    train: Dataset({
        features: ['idx', 'inputs', 'targets'],
        num_rows: 2490
    })
    validation: Dataset({
        features: ['idx', 'inputs', 'targets'],
        num_rows: 277
    })
})
```
stjokerli
null
null
null
false
2
false
stjokerli/TextToText_axb_seqio
2022-04-04T10:25:39.000Z
null
false
aa9340e5512f9d1c196b34645346db83107a0cd3
[]
[]
https://huggingface.co/datasets/stjokerli/TextToText_axb_seqio/resolve/main/README.md
```
axb: DatasetDict({
    test: Dataset({
        features: ['idx', 'inputs', 'targets'],
        num_rows: 1104
    })
    train: Dataset({
        features: ['idx', 'inputs', 'targets'],
        num_rows: 2490
    })
    validation: Dataset({
        features: ['idx', 'inputs', 'targets'],
        num_rows: 277
    })
})
```

Text-to-text implementation in the style of T5. Note that the RTE train and validation sets have been added.
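As an illustration of the T5-style text-to-text conversion this card refers to, here is a minimal sketch; the exact prompt template used by this dataset is an assumption, not taken from its code.

```python
# Hypothetical T5-style formatting of a SuperGLUE RTE example into the
# 'inputs'/'targets' fields shown in the DatasetDict above.
def rte_to_text2text(premise, hypothesis, label):
    inputs = f"rte sentence1: {premise} sentence2: {hypothesis}"
    targets = "entailment" if label == 0 else "not_entailment"
    return {"inputs": inputs, "targets": targets}

example = rte_to_text2text("A cat sat on the mat.", "An animal sat on the mat.", 0)
```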
joypersicanon
null
null
null
false
1
false
joypersicanon/ph-en-text
2022-03-17T13:30:52.000Z
null
false
4dc1c8da193d078c788bccf7eebbc301c754b121
[]
[]
https://huggingface.co/datasets/joypersicanon/ph-en-text/resolve/main/README.md
# Dataset Card for ph-en-text

## Table of Contents

- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-instances)
  - [Data Splits](#data-instances)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)

## Dataset Description

- **Homepage:** [Needs More Information]
- **Repository:** https://huggingface.co/datasets/joypersicanon/ph-en-text/tree/main
- **Paper:** [Needs More Information]
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** Mary Joy P. Canon

### Dataset Summary

PhEnText is a large-scale, multi-domain lexical dataset written in Philippine English. It is composed of 20,562,265 lines from news articles, religious articles, and court decisions.

### Supported Tasks and Leaderboards

[Needs More Information]

### Languages

ph-en

## Dataset Structure

### Data Instances

[Needs More Information]

### Data Fields

- `id`: e.g. "3128940"
- `text`: e.g. "Why this happened should be the focus of inquiry."

### Data Splits

80:20 split for train and test data

## Dataset Creation

### Curation Rationale

[Needs More Information]

### Source Data

#### Initial Data Collection and Normalization

[Needs More Information]

#### Who are the source language producers?

[Needs More Information]

### Annotations

#### Annotation process

[Needs More Information]

#### Who are the annotators?

[Needs More Information]

### Personal and Sensitive Information

[Needs More Information]

## Considerations for Using the Data

### Social Impact of Dataset

[Needs More Information]

### Discussion of Biases

[Needs More Information]

### Other Known Limitations

[Needs More Information]

## Additional Information

### Dataset Curators

[Needs More Information]

### Licensing Information

[Needs More Information]

### Citation Information

[Needs More Information]
lewtun
null
@dataset{kasieczka_gregor_2019_2603256, author = {Kasieczka, Gregor and Plehn, Tilman and Thompson, Jennifer and Russel, Michael}, title = {Top Quark Tagging Reference Dataset}, month = mar, year = 2019, publisher = {Zenodo}, version = {v0 (2018\_03\_27)}, doi = {10.5281/zenodo.2603256}, url = {https://doi.org/10.5281/zenodo.2603256} }
Top Quark Tagging is a dataset of Monte Carlo simulated hadronic top and QCD dijet events for the evaluation of top quark tagging architectures. The dataset consists of 1.2M training events, 400k validation events and 400k test events.
false
1
false
lewtun/top_quark_tagging
2022-04-03T14:26:05.000Z
null
false
cc60812b3dc5abb00043962616195c023c7c27a2
[]
[ "license:cc-by-4.0" ]
https://huggingface.co/datasets/lewtun/top_quark_tagging/resolve/main/README.md
---
license: cc-by-4.0
---

# Top Quark Tagging Reference Dataset

A set of MC-simulated training/testing events for the evaluation of top quark tagging architectures: in total 1.2M training events, 400k validation events and 400k test events. Use "train" for training, "val" for validation during the training, and "test" for final testing and reporting results.

## Description

* 14 TeV; hadronic tops for signal, QCD dijets for background; Delphes ATLAS detector card with Pythia8
* No MPI/pile-up included
* Clustering of particle-flow entries (produced by Delphes E-flow) into anti-kT 0.8 jets in the pT range [550, 650] GeV
* All top jets are matched to a parton-level top within ∆R = 0.8, and to all top decay partons within 0.8
* Jets are required to have |eta| < 2
* The leading 200 jet constituent four-momenta are stored, with zero-padding for jets with fewer than 200
* Constituents are sorted by pT, with the highest-pT one first
* The truth top four-momentum is stored as truth_px etc.
* A flag (1 for top, 0 for QCD) is kept for each jet; it is called is_signal_new
* The variable "ttv" (= test/train/validation) is kept for each jet and indicates which dataset the jet belongs to. It is redundant, as the different sets are already distributed as separate files.
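The zero-padded constituent layout described above can be unpacked with a short sketch, assuming the 200 four-momenta for one jet arrive as a flat length-800 array; the actual column naming and ordering in the files may differ.

```python
import numpy as np

def constituents(flat_p4, n_max=200):
    """Reshape a flat per-jet array into (n_max, 4) four-momenta and
    drop the all-zero padding rows described in the card."""
    p4 = np.asarray(flat_p4, dtype=float).reshape(n_max, 4)
    return p4[np.any(p4 != 0.0, axis=1)]

# Toy jet with two real constituents and 198 zero-padded slots
# (component order (E, px, py, pz) is an assumption for illustration).
jet = np.zeros(800)
jet[:8] = [100.0, 30.0, 40.0, 80.0, 50.0, 10.0, 20.0, 40.0]
real = constituents(jet)
```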
wanyu
null
null
null
false
181
false
wanyu/IteraTeR_full_sent
2022-10-24T18:58:37.000Z
null
false
845aaad797f618d1f8c9b42c3cb5919f0becdb2a
[]
[ "arxiv:2203.03802", "annotations_creators:crowdsourced", "language_creators:found", "language:en", "license:apache-2.0", "multilinguality:monolingual", "source_datasets:original", "task_categories:text2text-generation", "language_bcp47:en-US", "tags:conditional-text-generation", "tags:text-editi...
https://huggingface.co/datasets/wanyu/IteraTeR_full_sent/resolve/main/README.md
---
annotations_creators:
- crowdsourced
language_creators:
- found
language:
- en
license:
- apache-2.0
multilinguality:
- monolingual
source_datasets:
- original
task_categories:
- text2text-generation
task_ids: []
pretty_name: IteraTeR_full_sent
language_bcp47:
- en-US
tags:
- conditional-text-generation
- text-editing
---

Paper: [Understanding Iterative Revision from Human-Written Text](https://arxiv.org/abs/2203.03802)

Authors: Wanyu Du, Vipul Raheja, Dhruv Kumar, Zae Myung Kim, Melissa Lopez, Dongyeop Kang

Github repo: https://github.com/vipulraheja/IteraTeR
wanyu
null
null
null
false
6
false
wanyu/IteraTeR_full_doc
2022-10-24T18:58:30.000Z
null
false
792d5310cc82446cccfd3cd8953893b831538976
[]
[ "arxiv:2203.03802", "annotations_creators:crowdsourced", "language_creators:found", "language:en", "license:apache-2.0", "multilinguality:monolingual", "source_datasets:original", "task_categories:text2text-generation", "language_bcp47:en-US", "tags:conditional-text-generation", "tags:text-editi...
https://huggingface.co/datasets/wanyu/IteraTeR_full_doc/resolve/main/README.md
---
annotations_creators:
- crowdsourced
language_creators:
- found
language:
- en
license:
- apache-2.0
multilinguality:
- monolingual
source_datasets:
- original
task_categories:
- text2text-generation
task_ids: []
pretty_name: IteraTeR_full_doc
language_bcp47:
- en-US
tags:
- conditional-text-generation
- text-editing
---

Paper: [Understanding Iterative Revision from Human-Written Text](https://arxiv.org/abs/2203.03802)

Authors: Wanyu Du, Vipul Raheja, Dhruv Kumar, Zae Myung Kim, Melissa Lopez, Dongyeop Kang

Github repo: https://github.com/vipulraheja/IteraTeR
wanyu
null
null
null
false
235
false
wanyu/IteraTeR_human_sent
2022-10-24T18:58:22.000Z
null
false
e22e0371dac444239b944f9293f5b491d62b73f0
[]
[ "arxiv:2203.03802", "annotations_creators:crowdsourced", "language_creators:found", "language:en", "license:apache-2.0", "multilinguality:monolingual", "source_datasets:original", "task_categories:text2text-generation", "language_bcp47:en-US", "tags:conditional-text-generation", "tags:text-editi...
https://huggingface.co/datasets/wanyu/IteraTeR_human_sent/resolve/main/README.md
---
annotations_creators:
- crowdsourced
language_creators:
- found
language:
- en
license:
- apache-2.0
multilinguality:
- monolingual
source_datasets:
- original
task_categories:
- text2text-generation
task_ids: []
pretty_name: IteraTeR_human_sent
language_bcp47:
- en-US
tags:
- conditional-text-generation
- text-editing
---

Paper: [Understanding Iterative Revision from Human-Written Text](https://arxiv.org/abs/2203.03802)

Authors: Wanyu Du, Vipul Raheja, Dhruv Kumar, Zae Myung Kim, Melissa Lopez, Dongyeop Kang

Github repo: https://github.com/vipulraheja/IteraTeR
wanyu
null
null
null
false
3
false
wanyu/IteraTeR_human_doc
2022-10-24T18:58:15.000Z
null
false
3b0bdabb090d04062ebc17e54ac889a64f5cb791
[]
[ "arxiv:2203.03802", "annotations_creators:crowdsourced", "language_creators:found", "language:en", "license:apache-2.0", "multilinguality:monolingual", "source_datasets:original", "task_categories:text2text-generation", "language_bcp47:en-US", "tags:conditional-text-generation", "tags:text-editi...
https://huggingface.co/datasets/wanyu/IteraTeR_human_doc/resolve/main/README.md
---
annotations_creators:
- crowdsourced
language_creators:
- found
language:
- en
license:
- apache-2.0
multilinguality:
- monolingual
source_datasets:
- original
task_categories:
- text2text-generation
task_ids: []
pretty_name: IteraTeR-human-doc
language_bcp47:
- en-US
tags:
- conditional-text-generation
- text-editing
---

Paper: [Understanding Iterative Revision from Human-Written Text](https://arxiv.org/abs/2203.03802)

Authors: Wanyu Du, Vipul Raheja, Dhruv Kumar, Zae Myung Kim, Melissa Lopez, Dongyeop Kang

Github repo: https://github.com/vipulraheja/IteraTeR
Aclairs
null
null
null
false
1
false
Aclairs/ALBERTFINALYEAR
2022-03-14T05:56:07.000Z
null
false
1a2b7bc94feea59665740ea295e504c41b8f9c39
[]
[]
https://huggingface.co/datasets/Aclairs/ALBERTFINALYEAR/resolve/main/README.md
---
{}
---

# AutoNLP Dataset for project: ALBERTFINALYEAR

## Table of Contents

- [Dataset Description](#dataset-description)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)

## Dataset Description

This dataset has been automatically processed by AutoNLP for project ALBERTFINALYEAR.

### Languages

The BCP-47 code for the dataset's language is unk.

## Dataset Structure

### Data Instances

A sample from this dataset looks as follows:

```json
[
  {
    "context": "Hasidic or Chasidic Judaism overlaps significantly with Haredi Judaism in its engagement with the se[...]",
    "question": "What overlaps significantly with Haredi Judiasm?",
    "answers.text": ["Chasidic Judaism"],
    "answers.answer_start": [11]
  },
  {
    "context": "Data compression can be viewed as a special case of data differencing: Data differencing consists of[...]",
    "question": "What can classified as data differencing with empty source data?",
    "answers.text": ["Data compression", "data compression"],
    "answers.answer_start": [0, 400]
  }
]
```

### Dataset Fields

The dataset has the following fields (also called "features"):

```json
{
  "context": "Value(dtype='string', id=None)",
  "question": "Value(dtype='string', id=None)",
  "answers.text": "Sequence(feature=Value(dtype='string', id=None), length=-1, id=None)",
  "answers.answer_start": "Sequence(feature=Value(dtype='int32', id=None), length=-1, id=None)"
}
```

### Dataset Splits

This dataset is split into a train and validation split. The split sizes are as follows:

| Split name | Num samples |
| ---------- | ----------- |
| train      | 87433       |
| valid      | 10544       |
alkzzz
null
null
null
false
1
false
alkzzz/palui
2022-03-14T07:32:35.000Z
null
false
bb60660d157a96f5beae964140c7f52c11c5c3f5
[]
[ "license:cc-by-4.0" ]
https://huggingface.co/datasets/alkzzz/palui/resolve/main/README.md
--- license: cc-by-4.0 ---
GEM-submissions
null
null
null
false
1
false
GEM-submissions/lewtun__this-is-a-test__1647246406
2022-03-14T08:26:51.000Z
null
false
e0536f5bfc7c35bb62f104bb2400c2b36b6029ef
[]
[ "benchmark:gem", "type:prediction", "submission_name:This is a test", "tags:evaluation", "tags:benchmark" ]
https://huggingface.co/datasets/GEM-submissions/lewtun__this-is-a-test__1647246406/resolve/main/README.md
--- benchmark: gem type: prediction submission_name: This is a test tags: - evaluation - benchmark --- # GEM Submission Submission name: This is a test
GEM-submissions
null
null
null
false
1
false
GEM-submissions/lewtun__mt5_xl__1647246454
2022-03-14T08:27:39.000Z
null
false
1d84bb9af6e19a7cd6860f4e3149f951e7c1c018
[]
[ "benchmark:gem", "type:prediction", "submission_name:mT5_xl", "tags:evaluation", "tags:benchmark" ]
https://huggingface.co/datasets/GEM-submissions/lewtun__mt5_xl__1647246454/resolve/main/README.md
--- benchmark: gem type: prediction submission_name: mT5_xl tags: - evaluation - benchmark --- # GEM Submission Submission name: mT5_xl
ianomunga
null
null
null
false
1
false
ianomunga/MIAS
2022-03-14T08:42:09.000Z
null
false
ba89cb938c0fe227ae23b6f7ef704a190f71e7de
[]
[ "license:cc-by-4.0" ]
https://huggingface.co/datasets/ianomunga/MIAS/resolve/main/README.md
--- license: cc-by-4.0 ---
GEM-submissions
null
null
null
false
1
false
GEM-submissions/lewtun__this-is-a-test__1647247409
2022-03-14T08:43:34.000Z
null
false
2bd261e242dd6801c5bf27ed6dfbe28309ba0387
[]
[ "benchmark:gem", "type:prediction", "submission_name:This is a test", "tags:evaluation", "tags:benchmark" ]
https://huggingface.co/datasets/GEM-submissions/lewtun__this-is-a-test__1647247409/resolve/main/README.md
--- benchmark: gem type: prediction submission_name: This is a test tags: - evaluation - benchmark --- # GEM Submission Submission name: This is a test
EMBO
null
null
null
false
292
false
EMBO/BLURB
2022-10-20T19:09:53.000Z
null
false
efabbf522ab41dc053bfe2c17d1f0ac77d599307
[]
[ "arxiv:2007.15779", "arxiv:1909.06146", "license:apache-2.0", "annotations_creators:expert-generated", "language_creators:expert-generated", "language:en", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "task_categories:question-answering", "task_categor...
https://huggingface.co/datasets/EMBO/BLURB/resolve/main/README.md
--- license: apache-2.0 annotations_creators: - expert-generated language_creators: - expert-generated language: - en multilinguality: - monolingual paperswithcode_id: null pretty_name: BLURB (Biomedical Language Understanding and Reasoning Benchmark.) size_categories: - 10K<n<100K source_datasets: - original task_categories: - structure-prediction - question-answering - text-scoring - text-classification task_ids: - named-entity-recognition - parsing - closed-domain-qa - semantic-similarity-scoring - text-scoring-other-sentence-similarity - topic-classification --- # Dataset Card for BLURB ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://microsoft.github.io/BLURB/index.html - **Paper:** [Domain-Specific Language Model Pretraining for Biomedical Natural Language Processing](https://arxiv.org/pdf/2007.15779.pdf) - **Leaderboard:** https://microsoft.github.io/BLURB/leaderboard.html - **Point of Contact:** ### Dataset Summary BLURB 
is a collection of resources for biomedical natural language processing. In general domains, such as newswire and the Web, comprehensive benchmarks and leaderboards such as GLUE have greatly accelerated progress in open-domain NLP. In biomedicine, however, such resources are ostensibly scarce. In the past, there have been a plethora of shared tasks in biomedical NLP, such as BioCreative, BioNLP Shared Tasks, SemEval, and BioASQ, to name just a few. These efforts have played a significant role in fueling interest and progress by the research community, but they typically focus on individual tasks. The advent of neural language models, such as BERT, provides a unifying foundation to leverage transfer learning from unlabeled text to support a wide range of NLP applications. To accelerate progress in biomedical pretraining strategies and task-specific methods, it is thus imperative to create a broad-coverage benchmark encompassing diverse biomedical tasks. Inspired by prior efforts toward this direction (e.g., BLUE), we have created BLURB (short for Biomedical Language Understanding and Reasoning Benchmark). BLURB comprises a comprehensive benchmark for PubMed-based biomedical NLP applications, as well as a leaderboard for tracking progress by the community. BLURB includes thirteen publicly available datasets in six diverse tasks. To avoid placing undue emphasis on tasks with many available datasets, such as named entity recognition (NER), BLURB reports the macro average across all tasks as the main score. The BLURB leaderboard is model-agnostic. Any system capable of producing the test predictions using the same training and development data can participate. The main goal of BLURB is to lower the entry barrier in biomedical NLP and help accelerate progress in this vitally important field for positive societal and human impact. #### BC5-chem The corpus consists of three separate sets of articles with diseases, chemicals and their relations annotated. 
The training (500 articles) and development (500 articles) sets were released to task participants in advance to support text-mining method development. The test set (500 articles) was used for final system performance evaluation. - **Homepage:** https://biocreative.bioinformatics.udel.edu/resources/corpora/biocreative-v-cdr-corpus - **Repository:** [NER GitHub repo by @GamalC](https://github.com/cambridgeltl/MTL-Bioinformatics-2016/raw/master/data/) - **Paper:** [BioCreative V CDR task corpus: a resource for chemical disease relation extraction](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4860626/) #### BC5-disease The corpus consists of three separate sets of articles with diseases, chemicals and their relations annotated. The training (500 articles) and development (500 articles) sets were released to task participants in advance to support text-mining method development. The test set (500 articles) was used for final system performance evaluation. - **Homepage:** https://biocreative.bioinformatics.udel.edu/resources/corpora/biocreative-v-cdr-corpus - **Repository:** [NER GitHub repo by @GamalC](https://github.com/cambridgeltl/MTL-Bioinformatics-2016/raw/master/data/) - **Paper:** [BioCreative V CDR task corpus: a resource for chemical disease relation extraction](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4860626/) #### BC2GM The BioCreative II Gene Mention task. The training corpus for the current task consists mainly of the training and testing corpora (text collections) from the BCI task, and the testing corpus for the current task consists of an additional 5,000 sentences that were held 'in reserve' from the previous task. In the current corpus, tokenization is not provided; instead participants are asked to identify a gene mention in a sentence by giving its start and end characters. As before, the training set consists of a set of sentences, and for each sentence a set of gene mentions (GENE annotations). 
- **Homepage:** https://biocreative.bioinformatics.udel.edu/tasks/biocreative-ii/task-1a-gene-mention-tagging/ - **Repository:** [NER GitHub repo by @GamalC](https://github.com/cambridgeltl/MTL-Bioinformatics-2016/raw/master/data/) - **Paper:** [Overview of BioCreative II gene mention recognition](https://link.springer.com/article/10.1186/gb-2008-9-s2-s2) #### NCBI Disease The NCBI disease corpus is fully annotated at the mention and concept level to serve as a research resource for the biomedical natural language processing community. Corpus Characteristics ---------------------- * 793 PubMed abstracts * 6,892 disease mentions * 790 unique disease concepts * Medical Subject Headings (MeSH®) * Online Mendelian Inheritance in Man (OMIM®) * 91% of the mentions map to a single disease concept * Divided into training, developing and testing sets. Corpus Annotation * Fourteen annotators * Two annotators per document (randomly paired) * Three annotation phases * Checked for corpus-wide consistency of annotations - **Homepage:** https://www.ncbi.nlm.nih.gov/CBBresearch/Dogan/DISEASE/ - **Repository:** [NER GitHub repo by @GamalC](https://github.com/cambridgeltl/MTL-Bioinformatics-2016/raw/master/data/) - **Paper:** [NCBI disease corpus: a resource for disease name recognition and concept normalization](https://pubmed.ncbi.nlm.nih.gov/24393765/) #### JNLPBA The BioNLP / JNLPBA Shared Task 2004 involves the identification and classification of technical terms referring to concepts of interest to biologists in the domain of molecular biology. The task was organized by GENIA Project based on the annotations of the GENIA Term corpus (version 3.02). Corpus format: The JNLPBA corpus is distributed in IOB format, with each line containing a single token and its tag, separated by a tab character. Sentences are separated by blank lines. 
- **Homepage: ** http://www.geniaproject.org/shared-tasks/bionlp-jnlpba-shared-task-2004 - **Repository:** [NER GitHub repo by @GamalC](https://github.com/cambridgeltl/MTL-Bioinformatics-2016/raw/master/data/) - **Paper: ** [Introduction to the Bio-entity Recognition Task at JNLPBA](https://aclanthology.org/W04-1213) #### EBM PICO - **Homepage:** - **Repository:** - **Paper:** - **Leaderboard:** #### ChemProt - **Homepage:** - **Repository:** - **Paper:** #### DDI - **Homepage:** - **Repository:** - **Paper:** #### GAD - **Homepage:** - **Repository:** - **Paper:** #### BIOSSES BIOSSES is a benchmark dataset for biomedical sentence similarity estimation. The dataset comprises 100 sentence pairs, in which each sentence was selected from the [TAC (Text Analysis Conference) Biomedical Summarization Track Training Dataset](https://tac.nist.gov/2014/BiomedSumm/) containing articles from the biomedical domain. The sentence pairs in BIOSSES were selected from citing sentences, i.e. sentences that have a citation to a reference article. The sentence pairs were evaluated by five different human experts that judged their similarity and gave scores ranging from 0 (no relation) to 4 (equivalent). In the original paper the mean of the scores assigned by the five human annotators was taken as the gold standard. The Pearson correlation between the gold standard scores and the scores estimated by the models was used as the evaluation metric. 
The strength of correlation can be assessed by the general guideline proposed by Evans (1996) as follows: - very strong: 0.80–1.00 - strong: 0.60–0.79 - moderate: 0.40–0.59 - weak: 0.20–0.39 - very weak: 0.00–0.19 - **Homepage:** https://tabilab.cmpe.boun.edu.tr/BIOSSES/DataSet.html - **Repository:** https://github.com/gizemsogancioglu/biosses - **Paper:** [BIOSSES: a semantic sentence similarity estimation system for the biomedical domain](https://academic.oup.com/bioinformatics/article/33/14/i49/3953954) - **Point of Contact:** [Gizem Soğancıoğlu](gizemsogancioglu@gmail.com) and [Arzucan Özgür](gizemsogancioglu@gmail.com) #### HoC - **Homepage:** - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** #### PubMedQA We introduce PubMedQA, a novel biomedical question answering (QA) dataset collected from PubMed abstracts. The task of PubMedQA is to answer research questions with yes/no/maybe (e.g.: Do preoperative statins reduce atrial fibrillation after coronary artery bypass grafting?) using the corresponding abstracts. PubMedQA has 1k expert-annotated, 61.2k unlabeled and 211.3k artificially generated QA instances. Each PubMedQA instance is composed of (1) a question which is either an existing research article title or derived from one, (2) a context which is the corresponding abstract without its conclusion, (3) a long answer, which is the conclusion of the abstract and, presumably, answers the research question, and (4) a yes/no/maybe answer which summarizes the conclusion. PubMedQA is the first QA dataset where reasoning over biomedical research texts, especially their quantitative contents, is required to answer the questions. Our best performing model, multi-phase fine-tuning of BioBERT with long answer bag-of-word statistics as additional supervision, achieves 68.1% accuracy, compared to single human performance of 78.0% accuracy and majority-baseline of 55.2% accuracy, leaving much room for improvement. 
PubMedQA is publicly available at this https URL. - **Homepage:** https://pubmedqa.github.io/ - **Repository:** https://github.com/pubmedqa/pubmedqa - **Paper:** [PubMedQA: A Dataset for Biomedical Research Question Answering](https://arxiv.org/pdf/1909.06146.pdf) - **Leaderboard:** [Question answering](https://pubmedqa.github.io/) - **Point of Contact:** #### BioASQ Task 7b will use benchmark datasets containing training and test biomedical questions, in English, along with gold standard (reference) answers. The participants will have to respond to each test question with relevant concepts (from designated terminologies and ontologies), relevant articles (in English, from designated article repositories), relevant snippets (from the relevant articles), relevant RDF triples (from designated ontologies), exact answers (e.g., named entities in the case of factoid questions) and 'ideal' answers (English paragraph-sized summaries). 2747 training questions (that were used as dry-run or test questions in previous year) are already available, along with their gold standard answers (relevant concepts, articles, snippets, exact answers, summaries). 
- **Homepage:** http://bioasq.org/ - **Repository:** http://participants-area.bioasq.org/datasets/ - **Paper:** [Automatic semantic classification of scientific literature according to the hallmarks of cancer](https://academic.oup.com/bioinformatics/article/32/3/432/1743783?login=false) ### Supported Tasks and Leaderboards | **Dataset** | **Task** | **Train** | **Dev** | **Test** | **Evaluation Metrics** | **Added** | |:------------:|:-----------------------:|:---------:|:-------:|:--------:|:----------------------:|-----------| | BC5-chem | NER | 5203 | 5347 | 5385 | F1 entity-level | **Yes** | | BC5-disease | NER | 4182 | 4244 | 4424 | F1 entity-level | **Yes** | | NCBI-disease | NER | 5134 | 787 | 960 | F1 entity-level | **Yes** | | BC2GM | NER | 15197 | 3061 | 6325 | F1 entity-level | **Yes** | | JNLPBA | NER | 46750 | 4551 | 8662 | F1 entity-level | **Yes** | | EBM PICO | PICO | 339167 | 85321 | 16364 | Macro F1 word-level | No | | ChemProt | Relation Extraction | 18035 | 11268 | 15745 | Micro F1 | No | | DDI | Relation Extraction | 25296 | 2496 | 5716 | Micro F1 | No | | GAD | Relation Extraction | 4261 | 535 | 534 | Micro F1 | No | | BIOSSES | Sentence Similarity | 64 | 16 | 20 | Pearson | **Yes** | | HoC | Document Classification | 1295 | 186 | 371 | Average Micro F1 | No | | PubMedQA | Question Answering | 450 | 50 | 500 | Accuracy | **Yes** | | BioASQ | Question Answering | 670 | 75 | 140 | Accuracy | No | Datasets used in the BLURB biomedical NLP benchmark. The Train, Dev, and test splits might not be exactly identical to those proposed in BLURB. This is something to be checked. ### Languages English from biomedical texts ## Dataset Structure ### Data Instances * **NER** ```json { 'id': 0, 'tokens': [ "DPP6", "as", "a", "candidate", "gene", "for", "neuroleptic", "-", "induced", "tardive", "dyskinesia", "." 
] 'ner_tags': [ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ] } ``` * **PICO** ```json { 'TBD' } ``` * **Relation Extraction** ```json { 'TBD' } ``` * **Sentence Similarity** ```json {'sentence 1': 'Here, looking for agents that could specifically kill KRAS mutant cells, they found that knockdown of GATA2 was synthetically lethal with KRAS mutation' 'sentence 2': 'Not surprisingly, GATA2 knockdown in KRAS mutant cells resulted in a striking reduction of active GTP-bound RHO proteins, including the downstream ROCK kinase' 'score': 2.2} ``` * **Document Classification** ```json { 'TBD' } ``` * **Question Answering** * PubMedQA ```json {'context': {'contexts': ['Programmed cell death (PCD) is the regulated death of cells within an organism. The lace plant (Aponogeton madagascariensis) produces perforations in its leaves through PCD. The leaves of the plant consist of a latticework of longitudinal and transverse veins enclosing areoles. PCD occurs in the cells at the center of these areoles and progresses outwards, stopping approximately five cells from the vasculature. The role of mitochondria during PCD has been recognized in animals; however, it has been less studied during PCD in plants.', 'The following paper elucidates the role of mitochondrial dynamics during developmentally regulated PCD in vivo in A. madagascariensis. A single areole within a window stage leaf (PCD is occurring) was divided into three areas based on the progression of PCD; cells that will not undergo PCD (NPCD), cells in early stages of PCD (EPCD), and cells in late stages of PCD (LPCD). Window stage leaves were stained with the mitochondrial dye MitoTracker Red CMXRos and examined. Mitochondrial dynamics were delineated into four categories (M1-M4) based on characteristics including distribution, motility, and membrane potential (ΔΨm). A TUNEL assay showed fragmented nDNA in a gradient over these mitochondrial stages. Chloroplasts and transvacuolar strands were also examined using live cell imaging. 
The possible importance of mitochondrial permeability transition pore (PTP) formation during PCD was indirectly examined via in vivo cyclosporine A (CsA) treatment. This treatment resulted in lace plant leaves with a significantly lower number of perforations compared to controls, and that displayed mitochondrial dynamics similar to that of non-PCD cells.'], 'labels': ['BACKGROUND', 'RESULTS'], 'meshes': ['Alismataceae', 'Apoptosis', 'Cell Differentiation', 'Mitochondria', 'Plant Leaves'], 'reasoning_free_pred': ['y', 'e', 's'], 'reasoning_required_pred': ['y', 'e', 's']}, 'final_decision': 'yes', 'long_answer': 'Results depicted mitochondrial dynamics in vivo as PCD progresses within the lace plant, and highlight the correlation of this organelle with other organelles during developmental PCD. To the best of our knowledge, this is the first report of mitochondria and chloroplasts moving on transvacuolar strands to form a ring structure surrounding the nucleus during developmental PCD. Also, for the first time, we have shown the feasibility for the use of CsA in a whole plant system. 
Overall, our findings implicate the mitochondria as playing a critical and early role in developmentally regulated PCD in the lace plant.', 'pubid': 21645374, 'question': 'Do mitochondria play a role in remodelling lace plant leaves during programmed cell death?'} ``` ### Data Fields * **NER** * `id`: string * `ner_tags`: Sequence[ClassLabel] * `tokens`: Sequence[String] * **PICO** * To be added * **Relation Extraction** * To be added * **Sentence Similarity** * `sentence 1`: string * `sentence 2`: string * `score`: float ranging from 0 (no relation) to 4 (equivalent) * **Document Classification** * To be added * **Question Answering** * PubMedQA * `pubid`: integer * `question`: string * `context`: sequence of strings [`contexts`, `labels`, `meshes`, `reasoning_required_pred`, `reasoning_free_pred`] * `long_answer`: string * `final_decision`: string ### Data Splits Shown in the table of supported tasks. ## Dataset Creation ### Curation Rationale * BC5-chem * BC5-disease * BC2GM * JNLPBA * EBM PICO * ChemProt * DDI * GAD * BIOSSES * HoC * PubMedQA * BioASQ ### Source Data [More Information Needed] ### Annotations All the datasets have been obtained and annotated by experts in the biomedical domain. Check the different citations for further details. #### Annotation process * BC5-chem * BC5-disease * BC2GM * JNLPBA * EBM PICO * ChemProt * DDI * GAD * BIOSSES - The sentence pairs were evaluated by five different human experts that judged their similarity and gave scores ranging from 0 (no relation) to 4 (equivalent). The score range was described based on the guidelines of SemEval 2012 Task 6 on STS (Agirre et al., 2012). Besides the annotation instructions, example sentences from the biomedical literature were provided to the annotators for each of the similarity degrees. * HoC * PubMedQA * BioASQ ### Dataset Curators All the datasets have been obtained and annotated by experts in the biomedical domain. Check the different citations for further details. 
### Licensing Information * BC5-chem * BC5-disease * BC2GM * JNLPBA * EBM PICO * ChemProt * DDI * GAD * BIOSSES - BIOSSES is made available under the terms of [The GNU Common Public License v.3.0](https://www.gnu.org/licenses/gpl-3.0.en.html). * HoC * PubMedQA - MIT License Copyright (c) 2019 pubmedqa * BioASQ ### Citation Information * BC5-chem & BC5-disease ```latex @article{article, author = {Li, Jiao and Sun, Yueping and Johnson, Robin and Sciaky, Daniela and Wei, Chih-Hsuan and Leaman, Robert and Davis, Allan Peter and Mattingly, Carolyn and Wiegers, Thomas and lu, Zhiyong}, year = {2016}, month = {05}, pages = {baw068}, title = {BioCreative V CDR task corpus: a resource for chemical disease relation extraction}, volume = {2016}, journal = {Database}, doi = {10.1093/database/baw068} } ``` * BC2GM ```latex @article{article, author = {Smith, Larry and Tanabe, Lorraine and Ando, Rie and Kuo, Cheng-Ju and Chung, I-Fang and Hsu, Chun-Nan and Lin, Yu-Shi and Klinger, Roman and Friedrich, Christoph and Ganchev, Kuzman and Torii, Manabu and Liu, Hongfang and Haddow, Barry and Struble, Craig and Povinelli, Richard and Vlachos, Andreas and Baumgartner Jr, William and Hunter, Lawrence and Carpenter, Bob and Wilbur, W.}, year = {2008}, month = {09}, pages = {S2}, title = {Overview of BioCreative II gene mention recognition}, volume = {9 Suppl 2}, journal = {Genome biology}, doi = {10.1186/gb-2008-9-s2-s2} } ``` * JNLPBA ```latex @inproceedings{collier-kim-2004-introduction, title = "Introduction to the Bio-entity Recognition Task at {JNLPBA}", author = "Collier, Nigel and Kim, Jin-Dong", booktitle = "Proceedings of the International Joint Workshop on Natural Language Processing in Biomedicine and its Applications ({NLPBA}/{B}io{NLP})", month = aug # " 28th and 29th", year = "2004", address = "Geneva, Switzerland", publisher = "COLING", url = "https://aclanthology.org/W04-1213", pages = "73--78", } ``` * NCBI Disiease ```latex @article{10.5555/2772763.2772800, author = 
{Dogan, Rezarta Islamaj and Leaman, Robert and Lu, Zhiyong}, title = {NCBI Disease Corpus}, year = {2014}, issue_date = {February 2014}, publisher = {Elsevier Science}, address = {San Diego, CA, USA}, volume = {47}, number = {C}, issn = {1532-0464}, abstract = {Graphical abstractDisplay Omitted NCBI disease corpus is built as a gold-standard resource for disease recognition.793 PubMed abstracts are annotated with disease mentions and concepts (MeSH/OMIM).14 Annotators produced high consistency level and inter-annotator agreement.Normalization benchmark results demonstrate the utility of the corpus.The corpus is publicly available to the community. Information encoded in natural language in biomedical literature publications is only useful if efficient and reliable ways of accessing and analyzing that information are available. Natural language processing and text mining tools are therefore essential for extracting valuable information, however, the development of powerful, highly effective tools to automatically detect central biomedical concepts such as diseases is conditional on the availability of annotated corpora.This paper presents the disease name and concept annotations of the NCBI disease corpus, a collection of 793 PubMed abstracts fully annotated at the mention and concept level to serve as a research resource for the biomedical natural language processing community. Each PubMed abstract was manually annotated by two annotators with disease mentions and their corresponding concepts in Medical Subject Headings (MeSH ) or Online Mendelian Inheritance in Man (OMIM ). Manual curation was performed using PubTator, which allowed the use of pre-annotations as a pre-step to manual annotations. Fourteen annotators were randomly paired and differing annotations were discussed for reaching a consensus in two annotation phases. In this setting, a high inter-annotator agreement was observed. 
Finally, all results were checked against annotations of the rest of the corpus to assure corpus-wide consistency.The public release of the NCBI disease corpus contains 6892 disease mentions, which are mapped to 790 unique disease concepts. Of these, 88% link to a MeSH identifier, while the rest contain an OMIM identifier. We were able to link 91% of the mentions to a single disease concept, while the rest are described as a combination of concepts. In order to help researchers use the corpus to design and test disease identification methods, we have prepared the corpus as training, testing and development sets. To demonstrate its utility, we conducted a benchmarking experiment where we compared three different knowledge-based disease normalization methods with a best performance in F-measure of 63.7%. These results show that the NCBI disease corpus has the potential to significantly improve the state-of-the-art in disease name recognition and normalization research, by providing a high-quality gold standard thus enabling the development of machine-learning based approaches for such tasks.The NCBI disease corpus, guidelines and other associated resources are available at: http://www.ncbi.nlm.nih.gov/CBBresearch/Dogan/DISEASE/.}, journal = {J. 
of Biomedical Informatics}, month = {feb}, pages = {1–10}, numpages = {10}} ``` * EBM PICO * ChemProt * DDI * GAD * BIOSSES ```latex @article{souganciouglu2017biosses, title={BIOSSES: a semantic sentence similarity estimation system for the biomedical domain}, author={So{\u{g}}anc{\i}o{\u{g}}lu, Gizem and {\"O}zt{\"u}rk, Hakime and {\"O}zg{\"u}r, Arzucan}, journal={Bioinformatics}, volume={33}, number={14}, pages={i49--i58}, year={2017}, publisher={Oxford University Press} } ``` * HoC * PubMedQA ```latex @inproceedings{jin2019pubmedqa, title={PubMedQA: A Dataset for Biomedical Research Question Answering}, author={Jin, Qiao and Dhingra, Bhuwan and Liu, Zhengping and Cohen, William and Lu, Xinghua}, booktitle={Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)}, pages={2567--2577}, year={2019} } ``` * BioASQ ```latex @article{10.1093/bioinformatics/btv585, author = {Baker, Simon and Silins, Ilona and Guo, Yufan and Ali, Imran and Högberg, Johan and Stenius, Ulla and Korhonen, Anna}, title = "{Automatic semantic classification of scientific literature according to the hallmarks of cancer}", journal = {Bioinformatics}, volume = {32}, number = {3}, pages = {432-440}, year = {2015}, month = {10}, abstract = "{Motivation: The hallmarks of cancer have become highly influential in cancer research. They reduce the complexity of cancer into 10 principles (e.g. resisting cell death and sustaining proliferative signaling) that explain the biological capabilities acquired during the development of human tumors. Since new research depends crucially on existing knowledge, technology for semantic classification of scientific literature according to the hallmarks of cancer could greatly support literature review, knowledge discovery and applications in cancer research.Results: We present the first step toward the development of such technology. 
We introduce a corpus of 1499 PubMed abstracts annotated according to the scientific evidence they provide for the 10 currently known hallmarks of cancer. We use this corpus to train a system that classifies PubMed literature according to the hallmarks. The system uses supervised machine learning and rich features largely based on biomedical text mining. We report good performance in both intrinsic and extrinsic evaluations, demonstrating both the accuracy of the methodology and its potential in supporting practical cancer research. We discuss how this approach could be developed and applied further in the future.Availability and implementation: The corpus of hallmark-annotated PubMed abstracts and the software for classification are available at: http://www.cl.cam.ac.uk/∼sb895/HoC.html .Contact:simon.baker@cl.cam.ac.uk}", issn = {1367-4803}, doi = {10.1093/bioinformatics/btv585}, url = {https://doi.org/10.1093/bioinformatics/btv585}, eprint = {https://academic.oup.com/bioinformatics/article-pdf/32/3/432/19568147/btv585.pdf}, } ``` ### Contributions * This dataset has been uploaded and generated by Dr. Jorge Abreu Vicente. * Thanks to [@GamalC](https://github.com/GamalC) for uploading the NER datasets to GitHub, from where I got them. * I am not part of the team that generated BLURB. This dataset is intended to help researchers to use the BLURB benchmark for biomedical NLP. * Thanks to [@bwang482](https://github.com/bwang482) for uploading the [BIOSSES dataset](https://github.com/bwang482/datasets/tree/master/datasets/biosses). We forked the [BIOSSES 🤗 dataset](https://huggingface.co/datasets/biosses) to add it to this BLURB benchmark. * Thank you to [@tuner007](https://github.com/tuner007) for adding this dataset to the 🤗 hub
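The BLURB card above states that BIOSSES sentence similarity is scored with the Pearson correlation between gold and predicted scores. A dependency-free sketch of that metric (the gold/predicted score vectors below are invented for illustration, on the BIOSSES 0-4 scale):

```python
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length score lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sqrt(sum((x - mx) ** 2 for x in xs))
    vy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (vx * vy)

# Invented gold vs. predicted similarity scores (0 = no relation, 4 = equivalent).
gold = [2.2, 4.0, 0.5, 3.1, 1.0]
pred = [2.0, 3.8, 0.9, 2.7, 1.4]
print(round(pearson(gold, pred), 3))
```

In practice one would use `scipy.stats.pearsonr` rather than hand-rolling the formula; the sketch just makes the leaderboard metric concrete.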
GEM-submissions
null
null
null
false
1
false
GEM-submissions/lewtun__this-is-a-test__1647256250
2022-03-14T11:10:55.000Z
null
false
2e7a18495a4a6b869d49c68c6def0bffc7e1135e
[]
[ "benchmark:gem", "type:prediction", "submission_name:This is a test", "tags:evaluation", "tags:benchmark" ]
https://huggingface.co/datasets/GEM-submissions/lewtun__this-is-a-test__1647256250/resolve/main/README.md
--- benchmark: gem type: prediction submission_name: This is a test tags: - evaluation - benchmark --- # GEM Submission Submission name: This is a test
null
null
@inproceedings{pasupat-liang-2015-compositional, title = "Compositional Semantic Parsing on Semi-Structured Tables", author = "Pasupat, Panupong and Liang, Percy", booktitle = "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)", month = jul, year = "2015", address = "Beijing, China", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/P15-1142", doi = "10.3115/v1/P15-1142", pages = "1470--1480", }
This WikiTableQuestions dataset is a large-scale dataset for the task of question answering on semi-structured tables.
false
326
false
wikitablequestions
2022-11-03T16:08:16.000Z
null
false
871be3ea310c48a48f76afe0227ce1e76d36c4b2
[]
[ "arxiv:1508.00305", "annotations_creators:crowdsourced", "language_creators:found", "language:en", "license:cc-by-4.0", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "task_categories:question-answering", "tags:table-question-answering" ]
https://huggingface.co/datasets/wikitablequestions/resolve/main/README.md
--- annotations_creators: - crowdsourced language_creators: - found language: - en license: - cc-by-4.0 multilinguality: - monolingual paperswithcode_id: null pretty_name: WikiTableQuestions size_categories: - 10K<n<100K source_datasets: - original task_categories: - question-answering task_ids: [] tags: - table-question-answering dataset_info: - config_name: random-split-1 features: - name: id dtype: string - name: question dtype: string - name: answers sequence: string - name: table struct: - name: header sequence: string - name: rows sequence: sequence: string - name: name dtype: string splits: - name: test num_bytes: 11423506 num_examples: 4344 - name: train num_bytes: 30364389 num_examples: 11321 - name: validation num_bytes: 7145768 num_examples: 2831 download_size: 29267445 dataset_size: 48933663 - config_name: random-split-2 features: - name: id dtype: string - name: question dtype: string - name: answers sequence: string - name: table struct: - name: header sequence: string - name: rows sequence: sequence: string - name: name dtype: string splits: - name: test num_bytes: 11423506 num_examples: 4344 - name: train num_bytes: 30098954 num_examples: 11314 - name: validation num_bytes: 7411203 num_examples: 2838 download_size: 29267445 dataset_size: 48933663 - config_name: random-split-3 features: - name: id dtype: string - name: question dtype: string - name: answers sequence: string - name: table struct: - name: header sequence: string - name: rows sequence: sequence: string - name: name dtype: string splits: - name: test num_bytes: 11423506 num_examples: 4344 - name: train num_bytes: 28778697 num_examples: 11314 - name: validation num_bytes: 8731460 num_examples: 2838 download_size: 29267445 dataset_size: 48933663 - config_name: random-split-4 features: - name: id dtype: string - name: question dtype: string - name: answers sequence: string - name: table struct: - name: header sequence: string - name: rows sequence: sequence: string - name: name dtype: 
string splits: - name: test num_bytes: 11423506 num_examples: 4344 - name: train num_bytes: 30166421 num_examples: 11321 - name: validation num_bytes: 7343736 num_examples: 2831 download_size: 29267445 dataset_size: 48933663 - config_name: random-split-5 features: - name: id dtype: string - name: question dtype: string - name: answers sequence: string - name: table struct: - name: header sequence: string - name: rows sequence: sequence: string - name: name dtype: string splits: - name: test num_bytes: 11423506 num_examples: 4344 - name: train num_bytes: 30333964 num_examples: 11316 - name: validation num_bytes: 7176193 num_examples: 2836 download_size: 29267445 dataset_size: 48933663 --- # Dataset Card for WikiTableQuestions ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-instances) - [Data Splits](#data-instances) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) ## Dataset Description - **Homepage:** [WikiTableQuestions homepage](https://nlp.stanford.edu/software/sempre/wikitable) - **Repository:** [WikiTableQuestions repository](https://github.com/ppasupat/WikiTableQuestions) - **Paper:** [Compositional Semantic Parsing on Semi-Structured 
Tables](https://arxiv.org/abs/1508.00305) - **Leaderboard:** [WikiTableQuestions leaderboard on PaperWithCode](https://paperswithcode.com/dataset/wikitablequestions) - **Point of Contact:** [Needs More Information] ### Dataset Summary The WikiTableQuestions dataset is a large-scale dataset for the task of question answering on semi-structured tables. ### Supported Tasks and Leaderboards question-answering, table-question-answering ### Languages en ## Dataset Structure ### Data Instances #### default - **Size of downloaded dataset files:** 27.91 MB - **Size of the generated dataset:** 45.68 MB - **Total amount of disk used:** 73.60 MB An example of 'validation' looks as follows: ``` { "id": "nt-0", "question": "what was the last year where this team was a part of the usl a-league?", "answers": ["2004"], "table": { "header": ["Year", "Division", "League", ...], "name": "csv/204-csv/590.csv", "rows": [ ["2001", "2", "USL A-League", ...], ["2002", "2", "USL A-League", ...], ... ] } } ``` ### Data Fields The data fields are the same among all splits. #### default - `id`: a `string` feature. - `question`: a `string` feature. - `answers`: a `list` of `string` feature. - `table`: a dictionary feature containing: - `header`: a `list` of `string` features. - `rows`: a `list` of `list` of `string` features: - `name`: a `string` feature. ### Data Splits | name |train|validation|test | |-------|----:|---------:|----:| |default|11321| 2831|4344| ## Dataset Creation ### Curation Rationale [Needs More Information] ### Source Data #### Initial Data Collection and Normalization [Needs More Information] #### Who are the source language producers? [Needs More Information] ### Annotations #### Annotation process [Needs More Information] #### Who are the annotators? 
[Needs More Information] ### Personal and Sensitive Information [Needs More Information] ## Considerations for Using the Data ### Social Impact of Dataset [Needs More Information] ### Discussion of Biases [Needs More Information] ### Other Known Limitations [Needs More Information] ## Additional Information ### Dataset Curators Panupong Pasupat and Percy Liang ### Licensing Information Creative Commons Attribution Share Alike 4.0 International ### Citation Information ``` @inproceedings{pasupat-liang-2015-compositional, title = "Compositional Semantic Parsing on Semi-Structured Tables", author = "Pasupat, Panupong and Liang, Percy", booktitle = "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)", month = jul, year = "2015", address = "Beijing, China", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/P15-1142", doi = "10.3115/v1/P15-1142", pages = "1470--1480", } ``` ### Contributions Thanks to [@SivilTaram](https://github.com/SivilTaram) for adding this dataset.
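To make the documented instance layout concrete, here is a minimal sketch in plain Python (no external dependencies). The instance dict mirrors the `nt-0` example from the card above, with a few toy rows filled in for illustration; the lookup helper is a hypothetical baseline, not part of the dataset itself:

```python
# A WikiTableQuestions instance pairs a question with a semi-structured
# table; the answer is a denotation such as a cell value. The toy rows
# below extend the `nt-0` example from the dataset card.
instance = {
    "id": "nt-0",
    "question": "what was the last year where this team was a part of the usl a-league?",
    "answers": ["2004"],
    "table": {
        "header": ["Year", "Division", "League"],
        "rows": [
            ["2001", "2", "USL A-League"],
            ["2002", "2", "USL A-League"],
            ["2003", "2", "USL A-League"],
            ["2004", "2", "USL A-League"],
            ["2005", "2", "USL First Division"],
        ],
        "name": "csv/204-csv/590.csv",
    },
}

def last_year_in_league(table, league):
    """Return the latest Year whose League column matches `league`."""
    year_col = table["header"].index("Year")
    league_col = table["header"].index("League")
    years = [row[year_col] for row in table["rows"] if row[league_col] == league]
    return max(years) if years else None

prediction = last_year_in_league(instance["table"], "USL A-League")
print(prediction)  # "2004"
```

The point of the sketch is only the field access pattern (`header` gives column names, `rows` is a list of string lists); a real system must of course parse the question rather than hard-code the column logic.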
gimmaru
null
null
null
false
1
false
gimmaru/github-issues
2022-03-14T12:34:35.000Z
null
false
c7e25e94998cec32978b36ab5530ab74784d37dd
[]
[ "arxiv:2005.00614" ]
https://huggingface.co/datasets/gimmaru/github-issues/resolve/main/README.md
<!DOCTYPE html> <html class=""> <head> <meta charset="utf-8" /> <meta name="viewport" content="width=device-width, initial-scale=1.0, user-scalable=no" /> <meta name="description" content="We’re on a journey to advance and democratize artificial intelligence through open source and open science." /> <meta property="fb:app_id" content="1321688464574422" /> <meta name="twitter:card" content="summary_large_image" /> <meta name="twitter:site" content="@huggingface" /> <meta property="og:title" content="README.md · lewtun/github-issues at main" /> <meta property="og:type" content="website" /> <meta property="og:url" content="https://huggingface.co/datasets/lewtun/github-issues/blob/main/README.md" /> <meta property="og:image" content="https://huggingface.co/front/thumbnails/v2-2.png" /> <link rel="stylesheet" href="/front/build/style.6509d170.css" /> <link rel="preconnect" href="https://fonts.gstatic.com" /> <link href="https://fonts.googleapis.com/css2?family=Source+Sans+Pro:ital,wght@0,200;0,300;0,400;0,600;0,700;0,900;1,200;1,300;1,400;1,600;1,700;1,900&display=swap" rel="stylesheet" /> <link href="https://fonts.googleapis.com/css2?family=IBM+Plex+Mono:wght@400;600;700&display=swap" rel="stylesheet" /> <link rel="stylesheet" href="https://cdn.jsdelivr.net/npm/katex@0.12.0/dist/katex.min.css" /> <title>README.md · lewtun/github-issues at main</title> </head> <body class="flex flex-col min-h-screen bg-white dark:bg-gray-950 text-black ViewerBlobPage" > <div class="flex flex-col min-h-screen "><header class="border-b border-gray-100"><div class="w-full px-4 lg:px-6 xl:container flex items-center h-16"><div class="flex flex-1 items-center"><a class="flex flex-none items-center mr-5 lg:mr-6" href="/"><img alt="Hugging Face's logo" class="md:mr-2 w-7" src="/front/assets/huggingface_logo-noborder.svg"> <span class="hidden text-lg font-bold whitespace-nowrap md:block">Hugging Face</span></a> <div class="SVELTE_HYDRATER contents" 
data-props="{&quot;classNames&quot;:&quot;flex-1 lg:max-w-sm mr-2 sm:mr-4 lg:mr-6&quot;,&quot;header&quot;:true,&quot;placeholder&quot;:&quot;Search models, datasets, users...&quot;,&quot;url&quot;:&quot;/api/quicksearch&quot;,&quot;searchParams&quot;:{&quot;withLinks&quot;:true}}" data-target="QuickSearch"><div class="relative flex-1 lg:max-w-sm mr-2 sm:mr-4 lg:mr-6"><input autocomplete="off" class="w-full dark:bg-gray-950 form-input-alt h-9 pl-8 pr-3 focus:shadow-xl" name="" placeholder="Search models, datasets, users..." spellcheck="false" type="text"> <svg class="absolute left-2.5 top-2.5 text-gray-400" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M30 28.59L22.45 21A11 11 0 1 0 21 22.45L28.59 30zM5 14a9 9 0 1 1 9 9a9 9 0 0 1-9-9z" fill="currentColor"></path></svg> </div></div> <div class="SVELTE_HYDRATER contents" data-props="{&quot;apiInferenceUrl&quot;:&quot;https://api-inference.huggingface.co&quot;}" data-target="NavigationMenuPhone"><button class="lg:hidden relative flex-none place-self-stretch flex items-center justify-center w-8" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="22" height="22" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32" fill="currentColor"><path d="M4 24h24v2H4z"></path><path d="M4 12h24v2H4z"></path><path d="M4 18h24v2H4z"></path><path d="M4 6h24v2H4z"></path></svg></button> </div></div> <div class="SVELTE_HYDRATER contents" data-props="{&quot;apiInferenceUrl&quot;:&quot;https://api-inference.huggingface.co&quot;,&quot;hfCloudName&quot;:&quot;private&quot;,&quot;isAuth&quot;:false,&quot;isHfCloud&quot;:false}" data-target="NavigationMenuDesktop"><nav aria-label="Main" class="ml-auto hidden lg:block"><ul class="flex items-center space-x-2"><li><a 
class="flex items-center group px-2 py-0.5 dark:hover:text-gray-400 hover:text-indigo-700" href="/models"><svg class="mr-1.5 text-gray-400 group-hover:text-indigo-500" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path class="uim-quaternary" d="M20.23 7.24L12 12L3.77 7.24a1.98 1.98 0 0 1 .7-.71L11 2.76c.62-.35 1.38-.35 2 0l6.53 3.77c.29.173.531.418.7.71z" opacity=".25" fill="currentColor"></path><path class="uim-tertiary" d="M12 12v9.5a2.09 2.09 0 0 1-.91-.21L4.5 17.48a2.003 2.003 0 0 1-1-1.73v-7.5a2.06 2.06 0 0 1 .27-1.01L12 12z" opacity=".5" fill="currentColor"></path><path class="uim-primary" d="M20.5 8.25v7.5a2.003 2.003 0 0 1-1 1.73l-6.62 3.82c-.275.13-.576.198-.88.2V12l8.23-4.76c.175.308.268.656.27 1.01z" fill="currentColor"></path></svg> Models</a> </li><li><a class="flex items-center group px-2 py-0.5 dark:hover:text-gray-400 hover:text-red-700" href="/datasets"><svg class="mr-1.5 text-gray-400 group-hover:text-red-500" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 25 25"><ellipse cx="12.5" cy="5" fill="currentColor" fill-opacity="0.25" rx="7.5" ry="2"></ellipse><path d="M12.5 15C16.6421 15 20 14.1046 20 13V20C20 21.1046 16.6421 22 12.5 22C8.35786 22 5 21.1046 5 20V13C5 14.1046 8.35786 15 12.5 15Z" fill="currentColor" opacity="0.5"></path><path d="M12.5 7C16.6421 7 20 6.10457 20 5V11.5C20 12.6046 16.6421 13.5 12.5 13.5C8.35786 13.5 5 12.6046 5 11.5V5C5 6.10457 8.35786 7 12.5 7Z" fill="currentColor" opacity="0.5"></path><path d="M5.23628 12C5.08204 12.1598 5 12.8273 5 13C5 14.1046 8.35786 15 12.5 15C16.6421 15 20 14.1046 20 13C20 12.8273 19.918 12.1598 19.7637 12C18.9311 12.8626 15.9947 13.5 12.5 13.5C9.0053 13.5 6.06886 12.8626 5.23628 12Z" 
fill="currentColor"></path></svg> Datasets</a> </li><li><a class="flex items-center group px-2 py-0.5 dark:hover:text-gray-400 hover:text-blue-700" href="/spaces"><svg class="mr-1.5 text-gray-400 group-hover:text-blue-500" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" viewBox="0 0 25 25"><path opacity=".5" d="M6.016 14.674v4.31h4.31v-4.31h-4.31ZM14.674 14.674v4.31h4.31v-4.31h-4.31ZM6.016 6.016v4.31h4.31v-4.31h-4.31Z" fill="currentColor"></path><path opacity=".75" fill-rule="evenodd" clip-rule="evenodd" d="M3 4.914C3 3.857 3.857 3 4.914 3h6.514c.884 0 1.628.6 1.848 1.414a5.171 5.171 0 0 1 7.31 7.31c.815.22 1.414.964 1.414 1.848v6.514A1.914 1.914 0 0 1 20.086 22H4.914A1.914 1.914 0 0 1 3 20.086V4.914Zm3.016 1.102v4.31h4.31v-4.31h-4.31Zm0 12.968v-4.31h4.31v4.31h-4.31Zm8.658 0v-4.31h4.31v4.31h-4.31Zm0-10.813a2.155 2.155 0 1 1 4.31 0 2.155 2.155 0 0 1-4.31 0Z" fill="currentColor"></path><path opacity=".25" d="M16.829 6.016a2.155 2.155 0 1 0 0 4.31 2.155 2.155 0 0 0 0-4.31Z" fill="currentColor"></path></svg> Spaces</a> </li><li><a class="flex items-center group px-2 py-0.5 dark:hover:text-gray-400 hover:text-yellow-700" href="/docs"><svg xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" class="mr-1.5 text-gray-400 group-hover:text-yellow-500" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path opacity="0.5" d="M20.9022 5.10334L10.8012 10.8791L7.76318 9.11193C8.07741 8.56791 8.5256 8.11332 9.06512 7.7914L15.9336 3.73907C17.0868 3.08811 18.5002 3.26422 19.6534 3.91519L19.3859 3.73911C19.9253 4.06087 20.5879 4.56025 20.9022 5.10334Z" fill="currentColor"></path><path d="M10.7999 10.8792V28.5483C10.2136 28.5475 9.63494 28.4139 9.10745 28.1578C8.5429 27.8312 8.074 27.3621 7.74761 26.7975C7.42122 26.2327 7.24878 25.5923 7.24756 24.9402V10.9908C7.25062 10.3319 7.42358 9.68487 7.74973 
9.1123L10.7999 10.8792Z" fill="currentColor" fill-opacity="0.75"></path><path fill-rule="evenodd" clip-rule="evenodd" d="M21.3368 10.8499V6.918C21.3331 6.25959 21.16 5.61234 20.8346 5.03949L10.7971 10.8727L10.8046 10.874L21.3368 10.8499Z" fill="currentColor"></path><path opacity="0.5" d="M21.7937 10.8488L10.7825 10.8741V28.5486L21.7937 28.5234C23.3344 28.5234 24.5835 27.2743 24.5835 25.7335V13.6387C24.5835 12.0979 23.4365 11.1233 21.7937 10.8488Z" fill="currentColor"></path></svg> Docs</a> </li> <li><div class="relative "> <button class="px-2 py-0.5 group hover:text-green-700 dark:hover:text-gray-400 flex items-center " type="button"> <svg class="mr-1.5 text-gray-400 group-hover:text-green-500" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path class="uim-tertiary" d="M19 6H5a3 3 0 0 0-3 3v2.72L8.837 14h6.326L22 11.72V9a3 3 0 0 0-3-3z" opacity=".5" fill="currentColor"></path><path class="uim-primary" d="M10 6V5h4v1h2V5a2.002 2.002 0 0 0-2-2h-4a2.002 2.002 0 0 0-2 2v1h2zm-1.163 8L2 11.72V18a3.003 3.003 0 0 0 3 3h14a3.003 3.003 0 0 0 3-3v-6.28L15.163 14H8.837z" fill="currentColor"></path></svg> Solutions </button> </div></li> <li><a class="flex items-center group px-2 py-0.5 hover:text-gray-500 dark:hover:text-gray-400" href="/pricing" data-ga-category="header-menu" data-ga-action="clicked pricing" data-ga-label="pricing">Pricing </a></li> <li><div class="relative group"> <button class="px-2 py-0.5 hover:text-gray-500 dark:hover:text-gray-600 flex items-center " type="button"> <svg class="mr-1.5 text-gray-500 w-5 group-hover:text-gray-400 dark:text-gray-300 dark:group-hover:text-gray-400" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" viewBox="0 0 32 18" preserveAspectRatio="xMidYMid meet"><path 
fill-rule="evenodd" clip-rule="evenodd" d="M14.4504 3.30221C14.4504 2.836 14.8284 2.45807 15.2946 2.45807H28.4933C28.9595 2.45807 29.3374 2.836 29.3374 3.30221C29.3374 3.76842 28.9595 4.14635 28.4933 4.14635H15.2946C14.8284 4.14635 14.4504 3.76842 14.4504 3.30221Z" fill="currentColor"></path><path fill-rule="evenodd" clip-rule="evenodd" d="M14.4504 9.00002C14.4504 8.53382 14.8284 8.15588 15.2946 8.15588H28.4933C28.9595 8.15588 29.3374 8.53382 29.3374 9.00002C29.3374 9.46623 28.9595 9.84417 28.4933 9.84417H15.2946C14.8284 9.84417 14.4504 9.46623 14.4504 9.00002Z" fill="currentColor"></path><path fill-rule="evenodd" clip-rule="evenodd" d="M14.4504 14.6978C14.4504 14.2316 14.8284 13.8537 15.2946 13.8537H28.4933C28.9595 13.8537 29.3374 14.2316 29.3374 14.6978C29.3374 15.164 28.9595 15.542 28.4933 15.542H15.2946C14.8284 15.542 14.4504 15.164 14.4504 14.6978Z" fill="currentColor"></path><path fill-rule="evenodd" clip-rule="evenodd" d="M1.94549 6.87377C2.27514 6.54411 2.80962 6.54411 3.13928 6.87377L6.23458 9.96907L9.32988 6.87377C9.65954 6.54411 10.194 6.54411 10.5237 6.87377C10.8533 7.20343 10.8533 7.73791 10.5237 8.06756L6.23458 12.3567L1.94549 8.06756C1.61583 7.73791 1.61583 7.20343 1.94549 6.87377Z" fill="currentColor"></path></svg> </button> </div></li> <li><hr class="w-0.5 h-5 border-none bg-gray-100 dark:bg-gray-800"></li> <li><a class="px-2 py-0.5 block cursor-pointer hover:text-gray-500 dark:hover:text-gray-400" href="/login">Log In </a></li> <li><a class="ml-2 btn" href="/join">Sign Up </a></li></ul></nav></div></div></header> <main class="flex flex-col flex-1 "><header class="bg-gradient-to-t from-gray-50-to-white via-white dark:via-gray-950 pt-10 "><div class="container relative"><h1 class="flex items-center flex-wrap text-lg leading-tight mb-2 md:text-xl "><a href="/datasets" class="group flex items-center mb-1"><svg class="mr-1.5 text-gray-400" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" 
role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 25 25"><ellipse cx="12.5" cy="5" fill="currentColor" fill-opacity="0.25" rx="7.5" ry="2"></ellipse><path d="M12.5 15C16.6421 15 20 14.1046 20 13V20C20 21.1046 16.6421 22 12.5 22C8.35786 22 5 21.1046 5 20V13C5 14.1046 8.35786 15 12.5 15Z" fill="currentColor" opacity="0.5"></path><path d="M12.5 7C16.6421 7 20 6.10457 20 5V11.5C20 12.6046 16.6421 13.5 12.5 13.5C8.35786 13.5 5 12.6046 5 11.5V5C5 6.10457 8.35786 7 12.5 7Z" fill="currentColor" opacity="0.5"></path><path d="M5.23628 12C5.08204 12.1598 5 12.8273 5 13C5 14.1046 8.35786 15 12.5 15C16.6421 15 20 14.1046 20 13C20 12.8273 19.918 12.1598 19.7637 12C18.9311 12.8626 15.9947 13.5 12.5 13.5C9.0053 13.5 6.06886 12.8626 5.23628 12Z" fill="currentColor"></path></svg> <span class="text-gray-400 group-hover:text-gray-500 mr-3 font-semibold">Datasets:</span></a> <div class="flex items-center mb-1"><img alt="Lewis Tunstall's avatar" class="w-4 h-4 rounded-full mr-1.5" src="https://aeiljuispo.cloudimg.io/v7/https://s3.amazonaws.com/moonup/production/uploads/1594651707950-noauth.jpeg?w=200&amp;h=200&amp;f=face"> <a href="/lewtun" class="font-sans text-gray-400 hover:text-blue-600">lewtun</a> <div class="text-gray-300 mx-0.5">/</div></div> <div class="max-w-full mb-1"><a class="font-mono font-semibold break-words" href="/datasets/lewtun/github-issues">github-issues</a> <div class="SVELTE_HYDRATER contents" data-props="{&quot;classNames&quot;:&quot;mr-4&quot;,&quot;title&quot;:&quot;Copy dataset name to clipboard&quot;,&quot;value&quot;:&quot;lewtun/github-issues&quot;}" data-target="CopyButton"><button class="inline-flex items-center relative bg-white text-sm focus:text-green-500 cursor-pointer focus:outline-none mr-4 mx-0.5 text-gray-600 " title="Copy dataset name to clipboard" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" 
preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class=" absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0 "><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style=" border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div></div> <div class="SVELTE_HYDRATER contents" data-props="{&quot;classNames&quot;:&quot;mr-3 mb-1&quot;,&quot;isLikedByUser&quot;:false,&quot;likes&quot;:1,&quot;repoId&quot;:&quot;lewtun/github-issues&quot;,&quot;repoType&quot;:&quot;dataset&quot;}" data-target="LikeButton"><div class="inline-flex items-center border leading-none whitespace-nowrap text-sm rounded-md text-gray-500 overflow-hidden bg-white mr-3 mb-1"><button class="relative flex items-center px-1.5 py-1 hover:bg-gradient-to-t focus:outline-none from-red-50 to-transparent dark:from-red-900 dark:to-red-800 overflow-hidden" title="Like"><svg class="mr-1" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32" fill="currentColor"><path d="M22.45,6a5.47,5.47,0,0,1,3.91,1.64,5.7,5.7,0,0,1,0,8L16,26.13,5.64,15.64a5.7,5.7,0,0,1,0-8,5.48,5.48,0,0,1,7.82,0L16,10.24l2.53-2.58A5.44,5.44,0,0,1,22.45,6m0-2a7.47,7.47,0,0,0-5.34,2.24L16,7.36,14.89,6.24a7.49,7.49,0,0,0-10.68,0,7.72,7.72,0,0,0,0,10.82L16,29,27.79,17.06a7.72,7.72,0,0,0,0-10.82A7.49,7.49,0,0,0,22.45,4Z"></path></svg> <svg class="mr-1 absolute text-red-500 origin-center 
transform transition ease-in\n\t\t\t\ttranslate-y-10 scale-0" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32" fill="currentColor"><path d="M22.5,4c-2,0-3.9,0.8-5.3,2.2L16,7.4l-1.1-1.1C12,3.3,7.2,3.3,4.3,6.2c0,0-0.1,0.1-0.1,0.1c-3,3-3,7.8,0,10.8L16,29l11.8-11.9c3-3,3-7.8,0-10.8C26.4,4.8,24.5,4,22.5,4z"></path></svg> like </button> <button class="flex items-center px-1.5 py-1 border-l text-gray-400 focus:outline-none hover:bg-gray-50 dark:hover:bg-gray-700 focus:bg-gray-100 " title="See users who liked this repository">1</button></div> </div> </h1> <div class="flex flex-wrap mb-3 lg:mb-5"></div> <div class="border-b border-gray-100"><div class="flex flex-col-reverse lg:flex-row lg:items-center lg:justify-between"><div class="flex items-center h-12 -mb-px overflow-x-auto overflow-y-hidden"><a class="tab-alternate " href="/datasets/lewtun/github-issues"><svg class="mr-1.5 text-gray-400" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path class="uim-quaternary" d="M20.23 7.24L12 12L3.77 7.24a1.98 1.98 0 0 1 .7-.71L11 2.76c.62-.35 1.38-.35 2 0l6.53 3.77c.29.173.531.418.7.71z" opacity=".25" fill="currentColor"></path><path class="uim-tertiary" d="M12 12v9.5a2.09 2.09 0 0 1-.91-.21L4.5 17.48a2.003 2.003 0 0 1-1-1.73v-7.5a2.06 2.06 0 0 1 .27-1.01L12 12z" opacity=".5" fill="currentColor"></path><path class="uim-primary" d="M20.5 8.25v7.5a2.003 2.003 0 0 1-1 1.73l-6.62 3.82c-.275.13-.576.198-.88.2V12l8.23-4.76c.175.308.268.656.27 1.01z" fill="currentColor"></path></svg> Dataset card </a> <a class="tab-alternate active" href="/datasets/lewtun/github-issues/tree/main"><svg class="mr-1.5 text-gray-400" xmlns="http://www.w3.org/2000/svg" 
xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path class="uim-tertiary" d="M21 19h-8a1 1 0 0 1 0-2h8a1 1 0 0 1 0 2zm0-4h-8a1 1 0 0 1 0-2h8a1 1 0 0 1 0 2zm0-8h-8a1 1 0 0 1 0-2h8a1 1 0 0 1 0 2zm0 4h-8a1 1 0 0 1 0-2h8a1 1 0 0 1 0 2z" opacity=".5" fill="currentColor"></path><path class="uim-primary" d="M9 19a1 1 0 0 1-1-1V6a1 1 0 0 1 2 0v12a1 1 0 0 1-1 1zm-6-4.333a1 1 0 0 1-.64-1.769L3.438 12l-1.078-.898a1 1 0 0 1 1.28-1.538l2 1.667a1 1 0 0 1 0 1.538l-2 1.667a.999.999 0 0 1-.64.231z" fill="currentColor"></path></svg> Files and versions </a> </div> </div></div></div></header> <div class="container relative flex flex-col md:grid md:space-y-0 w-full md:grid-cols-12 space-y-4 md:gap-6 mb-16 "><section class="pt-8 border-gray-100 col-span-full"><header class="pb-2 flex items-center justify-between flex-wrap"><div class="flex flex-wrap items-center"><div class="SVELTE_HYDRATER contents" data-props="{&quot;path&quot;:&quot;README.md&quot;,&quot;repoName&quot;:&quot;lewtun/github-issues&quot;,&quot;repoType&quot;:&quot;dataset&quot;,&quot;rev&quot;:&quot;main&quot;,&quot;refs&quot;:{&quot;branches&quot;:[&quot;main&quot;],&quot;tags&quot;:[]},&quot;view&quot;:&quot;blob&quot;}" data-target="BranchSelector"><div class="relative mr-4 mb-2"> <button class="text-base cursor-pointer w-full btn text-sm" type="button"> <svg class="mr-1.5 text-gray-700 dark:text-gray-400" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24" style="transform: rotate(360deg);"><path d="M13 14c-3.36 0-4.46 1.35-4.82 2.24C9.25 16.7 10 17.76 10 19a3 3 0 0 1-3 3a3 3 0 0 1-3-3c0-1.31.83-2.42 2-2.83V7.83A2.99 2.99 0 0 1 4 5a3 3 0 0 1 3-3a3 3 0 0 1 3 3c0 1.31-.83 2.42-2 2.83v5.29c.88-.65 2.16-1.12 4-1.12c2.67 0 3.56-1.34 
3.85-2.23A3.006 3.006 0 0 1 14 7a3 3 0 0 1 3-3a3 3 0 0 1 3 3c0 1.34-.88 2.5-2.09 2.86C17.65 11.29 16.68 14 13 14m-6 4a1 1 0 0 0-1 1a1 1 0 0 0 1 1a1 1 0 0 0 1-1a1 1 0 0 0-1-1M7 4a1 1 0 0 0-1 1a1 1 0 0 0 1 1a1 1 0 0 0 1-1a1 1 0 0 0-1-1m10 2a1 1 0 0 0-1 1a1 1 0 0 0 1 1a1 1 0 0 0 1-1a1 1 0 0 0-1-1z" fill="currentColor"></path></svg> main <svg class="-mr-1 text-gray-500" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24" style="transform: rotate(360deg);"><path d="M7 10l5 5l5-5z" fill="currentColor"></path></svg></button> </div></div> <div class="flex items-center flex-wrap mb-2"><a class="hover:underline text-gray-800" href="/datasets/lewtun/github-issues/tree/main">github-issues</a> <span class="text-gray-300 mx-1 font-light">/</span> <span class="font-light dark:text-gray-300">README.md</span> </div></div> <div class="flex flex-row items-center mb-2"> </div></header> <div class="border border-b-0 dark:border-gray-800 px-3 py-2 flex items-baseline rounded-t-lg bg-gradient-to-t from-gray-100-to-white"><img class="w-4 h-4 rounded-full mt-0.5 mr-2.5 self-center" alt="lewtun's picture" src="https://aeiljuispo.cloudimg.io/v7/https://s3.amazonaws.com/moonup/production/uploads/1594651707950-noauth.jpeg?w=200&amp;h=200&amp;f=face"> <div class="mr-5 truncate flex items-center flex-none"><a class="hover:underline" href="/lewtun">lewtun </a> <div class="mt-0.5 ml-1.5 bg-yellow-50 dark:bg-yellow-800 px-1 uppercase text-xs font-semibold text-yellow-500 dark:text-yellow-400 border border-yellow-200 rounded" title="member of the Hugging Face team">HF staff </div> </div> <a class="mr-4 font-mono text-sm text-gray-500 truncate hover:underline" href="/datasets/lewtun/github-issues/commit/3bb24dcad2b45b45e20fc0accc93058dcbe8087d">Create README.md</a> <a class="text-sm border dark:border-gray-800 px-1.5 rounded bg-gray-50 
dark:bg-gray-900 hover:underline" href="/datasets/lewtun/github-issues/commit/3bb24dcad2b45b45e20fc0accc93058dcbe8087d">3bb24dc</a> <time class="ml-auto hidden lg:block text-gray-500 dark:text-gray-400 truncate flex-none pl-2" datetime="2021-10-04T15:49:55" title="Mon, 04 Oct 2021 15:49:55 GMT">5 months ago</time></div> <div class="flex flex-wrap items-center justify-between px-3 py-1.5 border dark:border-gray-800 text-sm text-gray-800 dark:bg-gray-900"><div class="flex flex-wrap items-center"><a class="flex items-center hover:underline my-1 mr-4" href="/datasets/lewtun/github-issues/raw/main/README.md"><svg class="mr-1.5" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32" style="transform: rotate(360deg);"><path d="M31 16l-7 7l-1.41-1.41L28.17 16l-5.58-5.59L24 9l7 7z" fill="currentColor"></path><path d="M1 16l7-7l1.41 1.41L3.83 16l5.58 5.59L8 23l-7-7z" fill="currentColor"></path><path d="M12.419 25.484L17.639 6l1.932.518L14.35 26z" fill="currentColor"></path></svg> raw </a><a class="flex items-center hover:underline my-1 mr-4" href="/datasets/lewtun/github-issues/commits/main/README.md"><svg class="mr-1.5" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32" style="transform: rotate(360deg);"><path d="M16 4C9.383 4 4 9.383 4 16s5.383 12 12 12s12-5.383 12-12S22.617 4 16 4zm0 2c5.535 0 10 4.465 10 10s-4.465 10-10 10S6 21.535 6 16S10.465 6 16 6zm-1 2v9h7v-2h-5V8z" fill="currentColor"></path></svg> history </a><a class="flex items-center hover:underline my-1 mr-4" href="/datasets/lewtun/github-issues/blame/main/README.md"><svg class="mr-1.5" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" 
role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32" style="transform: rotate(360deg);"><path d="M16 2a14 14 0 1 0 14 14A14 14 0 0 0 16 2zm0 26a12 12 0 1 1 12-12a12 12 0 0 1-12 12z" fill="currentColor"></path><path d="M11.5 11a2.5 2.5 0 1 0 2.5 2.5a2.48 2.48 0 0 0-2.5-2.5z" fill="currentColor"></path><path d="M20.5 11a2.5 2.5 0 1 0 2.5 2.5a2.48 2.48 0 0 0-2.5-2.5z" fill="currentColor"></path></svg> blame </a> <div class="text-gray-400 flex items-center"><svg class="text-gray-300 text-sm mr-1.5 -translate-y-px" width="1em" height="1em" viewBox="0 0 22 28" fill="none" xmlns="http://www.w3.org/2000/svg"><path fill-rule="evenodd" clip-rule="evenodd" d="M15.3634 10.3639C15.8486 10.8491 15.8486 11.6357 15.3634 12.1209L10.9292 16.5551C10.6058 16.8785 10.0814 16.8785 9.7579 16.5551L7.03051 13.8277C6.54532 13.3425 6.54532 12.5558 7.03051 12.0707C7.51569 11.5855 8.30234 11.5855 8.78752 12.0707L9.7579 13.041C10.0814 13.3645 10.6058 13.3645 10.9292 13.041L13.6064 10.3639C14.0916 9.8787 14.8782 9.8787 15.3634 10.3639Z" fill="currentColor"></path><path fill-rule="evenodd" clip-rule="evenodd" d="M10.6666 27.12C4.93329 25.28 0 19.2267 0 12.7867V6.52001C0 5.40001 0.693334 4.41334 1.73333 4.01334L9.73333 1.01334C10.3333 0.786673 11 0.786673 11.6 1.02667L19.6 4.02667C20.1083 4.21658 20.5465 4.55701 20.8562 5.00252C21.1659 5.44803 21.3324 5.97742 21.3333 6.52001V12.7867C21.3333 19.24 16.4 25.28 10.6666 27.12Z" fill="currentColor" fill-opacity="0.22"></path><path d="M10.0845 1.94967L10.0867 1.94881C10.4587 1.8083 10.8666 1.81036 11.2286 1.95515L11.2387 1.95919L11.2489 1.963L19.2489 4.963L19.25 4.96342C19.5677 5.08211 19.8416 5.29488 20.0351 5.57333C20.2285 5.85151 20.3326 6.18203 20.3333 6.52082C20.3333 6.52113 20.3333 6.52144 20.3333 6.52176L20.3333 12.7867C20.3333 18.6535 15.8922 24.2319 10.6666 26.0652C5.44153 24.2316 1 18.6409 1 12.7867V6.52001C1 5.82357 1.42893 5.20343 2.08883 4.94803L10.0845 1.94967Z" stroke="currentColor" 
stroke-opacity="0.30" stroke-width="2"></path></svg> Safe </div></div> <div class="dark:text-gray-300">10.3 kB</div></div> <div class="border border-t-0 rounded-b-lg dark:bg-gray-925 dark:border-gray-800 leading-tight"><div class="py-3"><div class="SVELTE_HYDRATER contents" data-props="{&quot;lines&quot;:[&quot;&lt;span class=\\&quot;hljs-section\\&quot;&gt;# Dataset Card for GitHub Issues&lt;/span&gt;&quot;,&quot;&quot;,&quot;&lt;span class=\\&quot;hljs-section\\&quot;&gt;## Dataset Description&lt;/span&gt;&quot;,&quot;&quot;,&quot;&lt;span class=\\&quot;hljs-bullet\\&quot;&gt;-&lt;/span&gt; &lt;span class=\\&quot;hljs-strong\\&quot;&gt;**Point of Contact:**&lt;/span&gt; [&lt;span class=\\&quot;hljs-string\\&quot;&gt;Lewis Tunstall&lt;/span&gt;](&lt;span class=\\&quot;hljs-link\\&quot;&gt;lewis@huggingface.co&lt;/span&gt;)&quot;,&quot;&quot;,&quot;&lt;span class=\\&quot;hljs-section\\&quot;&gt;### Dataset Summary&lt;/span&gt;&quot;,&quot;&quot;,&quot;GitHub Issues is a dataset consisting of GitHub issues and pull requests associated with the 🤗 Datasets [&lt;span class=\\&quot;hljs-string\\&quot;&gt;repository&lt;/span&gt;](&lt;span class=\\&quot;hljs-link\\&quot;&gt;https://github.com/huggingface/datasets&lt;/span&gt;). It is intended for educational purposes and can be used for semantic search or multilabel text classification. The contents of each GitHub issue are in English and concern the domain of datasets for NLP, computer vision, and beyond.&quot;,&quot;&quot;,&quot;&lt;span class=\\&quot;hljs-section\\&quot;&gt;### Supported Tasks and Leaderboards&lt;/span&gt;&quot;,&quot;&quot;,&quot;For each of the tasks tagged for this dataset, give a brief description of the tag, metrics, and suggested models (with a link to their HuggingFace implementation if available). 
Give a similar description of tasks that were not covered by the structured tag set (replace the &lt;span class=\\&quot;hljs-code\\&quot;&gt;`task-category-tag`&lt;/span&gt; with an appropriate &lt;span class=\\&quot;hljs-code\\&quot;&gt;`other:other-task-name`&lt;/span&gt;).&quot;,&quot;&quot;,&quot;&lt;span class=\\&quot;hljs-bullet\\&quot;&gt;-&lt;/span&gt; &lt;span class=\\&quot;hljs-code\\&quot;&gt;`task-category-tag`&lt;/span&gt;: The dataset can be used to train a model for [&lt;span class=\\&quot;hljs-string\\&quot;&gt;TASK NAME&lt;/span&gt;], which consists in [&lt;span class=\\&quot;hljs-string\\&quot;&gt;TASK DESCRIPTION&lt;/span&gt;]. Success on this task is typically measured by achieving a &lt;span class=\\&quot;hljs-emphasis\\&quot;&gt;*high/low*&lt;/span&gt; [&lt;span class=\\&quot;hljs-string\\&quot;&gt;metric name&lt;/span&gt;](&lt;span class=\\&quot;hljs-link\\&quot;&gt;https://huggingface.co/metrics/metric_name&lt;/span&gt;). The ([&lt;span class=\\&quot;hljs-string\\&quot;&gt;model name&lt;/span&gt;](&lt;span class=\\&quot;hljs-link\\&quot;&gt;https://huggingface.co/model_name&lt;/span&gt;) or [&lt;span class=\\&quot;hljs-string\\&quot;&gt;model class&lt;/span&gt;](&lt;span class=\\&quot;hljs-link\\&quot;&gt;https://huggingface.co/transformers/model_doc/model_class.html&lt;/span&gt;)) model currently achieves the following score. 
&lt;span class=\\&quot;hljs-emphasis\\&quot;&gt;*[&lt;span class=\\&quot;hljs-string\\&quot;&gt;IF A LEADERBOARD IS AVAILABLE&lt;/span&gt;]:*&lt;/span&gt; This task has an active leaderboard which can be found at [&lt;span class=\\&quot;hljs-string\\&quot;&gt;leaderboard url&lt;/span&gt;](&lt;span class=\\&quot;hljs-link\\&quot;&gt;&lt;/span&gt;) and ranks models based on [&lt;span class=\\&quot;hljs-string\\&quot;&gt;metric name&lt;/span&gt;](&lt;span class=\\&quot;hljs-link\\&quot;&gt;https://huggingface.co/metrics/metric_name&lt;/span&gt;) while also reporting [&lt;span class=\\&quot;hljs-string\\&quot;&gt;other metric name&lt;/span&gt;](&lt;span class=\\&quot;hljs-link\\&quot;&gt;https://huggingface.co/metrics/other_metric_name&lt;/span&gt;).&quot;,&quot;&quot;,&quot;&lt;span class=\\&quot;hljs-section\\&quot;&gt;### Languages&lt;/span&gt;&quot;,&quot;&quot;,&quot;Provide a brief overview of the languages represented in the dataset. Describe relevant details about specifics of the language such as whether it is social media text, African American English,...&quot;,&quot;&quot;,&quot;When relevant, please provide [&lt;span class=\\&quot;hljs-string\\&quot;&gt;BCP-47 codes&lt;/span&gt;](&lt;span class=\\&quot;hljs-link\\&quot;&gt;https://tools.ietf.org/html/bcp47&lt;/span&gt;), which consist of a [&lt;span class=\\&quot;hljs-string\\&quot;&gt;primary language subtag&lt;/span&gt;](&lt;span class=\\&quot;hljs-link\\&quot;&gt;https://tools.ietf.org/html/bcp47#section-2.2.1&lt;/span&gt;), with a [&lt;span class=\\&quot;hljs-string\\&quot;&gt;script subtag&lt;/span&gt;](&lt;span class=\\&quot;hljs-link\\&quot;&gt;https://tools.ietf.org/html/bcp47#section-2.2.3&lt;/span&gt;) and/or [&lt;span class=\\&quot;hljs-string\\&quot;&gt;region subtag&lt;/span&gt;](&lt;span class=\\&quot;hljs-link\\&quot;&gt;https://tools.ietf.org/html/bcp47#section-2.2.4&lt;/span&gt;) if available.&quot;,&quot;&quot;,&quot;&lt;span class=\\&quot;hljs-section\\&quot;&gt;## Dataset 
Structure&lt;/span&gt;&quot;,&quot;&quot;,&quot;&lt;span class=\\&quot;hljs-section\\&quot;&gt;### Data Instances&lt;/span&gt;&quot;,&quot;&quot;,&quot;Provide a JSON-formatted example and brief description of a typical instance in the dataset. If available, provide a link to further examples.&quot;,&quot;&quot;,&quot;&lt;span class=\\&quot;hljs-code\\&quot;&gt;```&lt;/span&gt;&quot;,&quot;&lt;span class=\\&quot;hljs-code\\&quot;&gt;{&lt;/span&gt;&quot;,&quot;&lt;span class=\\&quot;hljs-code\\&quot;&gt; &amp;#x27;example_field&amp;#x27;: ...,&lt;/span&gt;&quot;,&quot;&lt;span class=\\&quot;hljs-code\\&quot;&gt; ...&lt;/span&gt;&quot;,&quot;&lt;span class=\\&quot;hljs-code\\&quot;&gt;}&lt;/span&gt;&quot;,&quot;&lt;span class=\\&quot;hljs-code\\&quot;&gt;```&lt;/span&gt;&quot;,&quot;&quot;,&quot;Provide any additional information that is not covered in the other sections about the data here. In particular describe any relationships between data points and if these relationships are made explicit.&quot;,&quot;&quot;,&quot;&lt;span class=\\&quot;hljs-section\\&quot;&gt;### Data Fields&lt;/span&gt;&quot;,&quot;&quot;,&quot;List and describe the fields present in the dataset. Mention their data type, and whether they are used as input or output in any of the tasks the dataset currently supports. If the data has span indices, describe their attributes, such as whether they are at the character level or word level, whether they are contiguous or not, etc. 
If the dataset contains example IDs, state whether they have an inherent meaning, such as a mapping to other datasets or pointing to relationships between data points.&quot;,&quot;&quot;,&quot;&lt;span class=\\&quot;hljs-bullet\\&quot;&gt;-&lt;/span&gt; &lt;span class=\\&quot;hljs-code\\&quot;&gt;`example_field`&lt;/span&gt;: description of &lt;span class=\\&quot;hljs-code\\&quot;&gt;`example_field`&lt;/span&gt;&quot;,&quot;&quot;,&quot;Note that the descriptions can be initialized with the &lt;span class=\\&quot;hljs-strong\\&quot;&gt;**Show Markdown Data Fields**&lt;/span&gt; output of the [&lt;span class=\\&quot;hljs-string\\&quot;&gt;tagging app&lt;/span&gt;](&lt;span class=\\&quot;hljs-link\\&quot;&gt;https://github.com/huggingface/datasets-tagging&lt;/span&gt;); you will then only need to refine the generated descriptions.&quot;,&quot;&quot;,&quot;&lt;span class=\\&quot;hljs-section\\&quot;&gt;### Data Splits&lt;/span&gt;&quot;,&quot;&quot;,&quot;Describe and name the splits in the dataset if there are more than one.&quot;,&quot;&quot;,&quot;Describe any criteria for splitting the data, if used. If there are differences between the splits (e.g. if the training annotations are machine-generated and the dev and test ones are created by humans, or if different numbers of annotators contributed to each example), describe them here.&quot;,&quot;&quot;,&quot;Provide the sizes of each split. As appropriate, provide any descriptive statistics for the features, such as average length. For example:&quot;,&quot;&quot;,&quot;| | Train | Valid | Test |&quot;,&quot;| ----- | ------ | ----- | ---- |&quot;,&quot;| Input Sentences | | | |&quot;,&quot;| Average Sentence Length | | | |&quot;,&quot;&quot;,&quot;&lt;span class=\\&quot;hljs-section\\&quot;&gt;## Dataset Creation&lt;/span&gt;&quot;,&quot;&quot;,&quot;&lt;span class=\\&quot;hljs-section\\&quot;&gt;### Curation Rationale&lt;/span&gt;&quot;,&quot;&quot;,&quot;What need motivated the creation of this dataset? 
What are some of the reasons underlying the major choices involved in putting it together?&quot;,&quot;&quot;,&quot;&lt;span class=\\&quot;hljs-section\\&quot;&gt;### Source Data&lt;/span&gt;&quot;,&quot;&quot;,&quot;This section describes the source data (e.g. news text and headlines, social media posts, translated sentences,...)&quot;,&quot;&quot;,&quot;&lt;span class=\\&quot;hljs-section\\&quot;&gt;#### Initial Data Collection and Normalization&lt;/span&gt;&quot;,&quot;&quot;,&quot;Describe the data collection process. Describe any criteria for data selection or filtering. List any key words or search terms used. If possible, include runtime information for the collection process.&quot;,&quot;&quot;,&quot;If data was collected from other pre-existing datasets, link to source here and to their [&lt;span class=\\&quot;hljs-string\\&quot;&gt;Hugging Face version&lt;/span&gt;](&lt;span class=\\&quot;hljs-link\\&quot;&gt;https://huggingface.co/datasets/dataset_name&lt;/span&gt;).&quot;,&quot;&quot;,&quot;If the data was modified or normalized after being collected (e.g. if the data is word-tokenized), describe the process and the tools used.&quot;,&quot;&quot;,&quot;&lt;span class=\\&quot;hljs-section\\&quot;&gt;#### Who are the source language producers?&lt;/span&gt;&quot;,&quot;&quot;,&quot;State whether the data was produced by humans or machine generated. Describe the people or systems who originally created the data.&quot;,&quot;&quot;,&quot;If available, include self-reported demographic or identity information for the source data creators, but avoid inferring this information. Instead state that this information is unknown. 
See [&lt;span class=\\&quot;hljs-string\\&quot;&gt;Larson 2017&lt;/span&gt;](&lt;span class=\\&quot;hljs-link\\&quot;&gt;https://www.aclweb.org/anthology/W17-1601.pdf&lt;/span&gt;) for using identity categories as variables, particularly gender.&quot;,&quot;&quot;,&quot;Describe the conditions under which the data was created (for example, if the producers were crowdworkers, state what platform was used, or if the data was found, what website the data was found on). If compensation was provided, include that information here.&quot;,&quot;&quot;,&quot;Describe other people represented or mentioned in the data. Where possible, link to references for the information.&quot;,&quot;&quot;,&quot;&lt;span class=\\&quot;hljs-section\\&quot;&gt;### Annotations&lt;/span&gt;&quot;,&quot;&quot;,&quot;If the dataset contains annotations which are not part of the initial data collection, describe them in the following paragraphs.&quot;,&quot;&quot;,&quot;&lt;span class=\\&quot;hljs-section\\&quot;&gt;#### Annotation process&lt;/span&gt;&quot;,&quot;&quot;,&quot;If applicable, describe the annotation process and any tools used, or state otherwise. Describe the amount of data annotated, if not all. Describe or reference annotation guidelines provided to the annotators. If available, provide interannotator statistics. Describe any annotation validation processes.&quot;,&quot;&quot;,&quot;&lt;span class=\\&quot;hljs-section\\&quot;&gt;#### Who are the annotators?&lt;/span&gt;&quot;,&quot;&quot;,&quot;If annotations were collected for the source data (such as class labels or syntactic parses), state whether the annotations were produced by humans or machine generated.&quot;,&quot;&quot;,&quot;Describe the people or systems who originally created the annotations and their selection criteria if applicable.&quot;,&quot;&quot;,&quot;If available, include self-reported demographic or identity information for the annotators, but avoid inferring this information. 
Instead state that this information is unknown. See [&lt;span class=\\&quot;hljs-string\\&quot;&gt;Larson 2017&lt;/span&gt;](&lt;span class=\\&quot;hljs-link\\&quot;&gt;https://www.aclweb.org/anthology/W17-1601.pdf&lt;/span&gt;) for using identity categories as variables, particularly gender.&quot;,&quot;&quot;,&quot;Describe the conditions under which the data was annotated (for example, if the annotators were crowdworkers, state what platform was used, or if the data was found, what website the data was found on). If compensation was provided, include that information here.&quot;,&quot;&quot;,&quot;&lt;span class=\\&quot;hljs-section\\&quot;&gt;### Personal and Sensitive Information&lt;/span&gt;&quot;,&quot;&quot;,&quot;State whether the dataset uses identity categories and, if so, how the information is used. Describe where this information comes from (i.e. self-reporting, collecting from profiles, inferring, etc.). See [&lt;span class=\\&quot;hljs-string\\&quot;&gt;Larson 2017&lt;/span&gt;](&lt;span class=\\&quot;hljs-link\\&quot;&gt;https://www.aclweb.org/anthology/W17-1601.pdf&lt;/span&gt;) for using identity categories as variables, particularly gender. State whether the data is linked to individuals and whether those individuals can be identified in the dataset, either directly or indirectly (i.e., in combination with other data).&quot;,&quot;&quot;,&quot;State whether the dataset contains other data that might be considered sensitive (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history). 
&quot;,&quot;&quot;,&quot;If efforts were made to anonymize the data, describe the anonymization process.&quot;,&quot;&quot;,&quot;&lt;span class=\\&quot;hljs-section\\&quot;&gt;## Considerations for Using the Data&lt;/span&gt;&quot;,&quot;&quot;,&quot;&lt;span class=\\&quot;hljs-section\\&quot;&gt;### Social Impact of Dataset&lt;/span&gt;&quot;,&quot;&quot;,&quot;Please discuss some of the ways you believe the use of this dataset will impact society.&quot;,&quot;&quot;,&quot;The statement should include both positive outlooks, such as outlining how technologies developed through its use may improve people&amp;#x27;s lives, and discuss the accompanying risks. These risks may range from making important decisions more opaque to people who are affected by the technology, to reinforcing existing harmful biases (whose specifics should be discussed in the next section), among other considerations.&quot;,&quot;&quot;,&quot;Also describe in this section if the proposed dataset contains a low-resource or under-represented language. If this is the case or if this task has any impact on underserved communities, please elaborate here.&quot;,&quot;&quot;,&quot;&lt;span class=\\&quot;hljs-section\\&quot;&gt;### Discussion of Biases&lt;/span&gt;&quot;,&quot;&quot;,&quot;Provide descriptions of specific biases that are likely to be reflected in the data, and state whether any steps were taken to reduce their impact.&quot;,&quot;&quot;,&quot;For Wikipedia text, see for example [&lt;span class=\\&quot;hljs-string\\&quot;&gt;Dinan et al 2020 on biases in Wikipedia (esp. 
Table 1)&lt;/span&gt;](&lt;span class=\\&quot;hljs-link\\&quot;&gt;https://arxiv.org/abs/2005.00614&lt;/span&gt;), or [&lt;span class=\\&quot;hljs-string\\&quot;&gt;Blodgett et al 2020&lt;/span&gt;](&lt;span class=\\&quot;hljs-link\\&quot;&gt;https://www.aclweb.org/anthology/2020.acl-main.485/&lt;/span&gt;) for a more general discussion of the topic.&quot;,&quot;&quot;,&quot;If analyses have been run quantifying these biases, please add brief summaries and links to the studies here.&quot;,&quot;&quot;,&quot;&lt;span class=\\&quot;hljs-section\\&quot;&gt;### Other Known Limitations&lt;/span&gt;&quot;,&quot;&quot;,&quot;If studies of the datasets have outlined other limitations of the dataset, such as annotation artifacts, please outline and cite them here.&quot;,&quot;&quot;,&quot;&lt;span class=\\&quot;hljs-section\\&quot;&gt;## Additional Information&lt;/span&gt;&quot;,&quot;&quot;,&quot;&lt;span class=\\&quot;hljs-section\\&quot;&gt;### Dataset Curators&lt;/span&gt;&quot;,&quot;&quot;,&quot;List the people involved in collecting the dataset and their affiliation(s). If funding information is known, include it here.&quot;,&quot;&quot;,&quot;&lt;span class=\\&quot;hljs-section\\&quot;&gt;### Licensing Information&lt;/span&gt;&quot;,&quot;&quot;,&quot;Provide the license and link to the license webpage if available.&quot;,&quot;&quot;,&quot;&lt;span class=\\&quot;hljs-section\\&quot;&gt;### Citation Information&lt;/span&gt;&quot;,&quot;&quot;,&quot;Provide the [&lt;span class=\\&quot;hljs-string\\&quot;&gt;BibTex&lt;/span&gt;](&lt;span class=\\&quot;hljs-link\\&quot;&gt;http://www.bibtex.org/&lt;/span&gt;)-formatted reference for the dataset. 
For example:&quot;,&quot;&lt;span class=\\&quot;hljs-code\\&quot;&gt;```&lt;/span&gt;&quot;,&quot;&lt;span class=\\&quot;hljs-code\\&quot;&gt;@article{article_id,&lt;/span&gt;&quot;,&quot;&lt;span class=\\&quot;hljs-code\\&quot;&gt; author = {Author List},&lt;/span&gt;&quot;,&quot;&lt;span class=\\&quot;hljs-code\\&quot;&gt; title = {Dataset Paper Title},&lt;/span&gt;&quot;,&quot;&lt;span class=\\&quot;hljs-code\\&quot;&gt; journal = {Publication Venue},&lt;/span&gt;&quot;,&quot;&lt;span class=\\&quot;hljs-code\\&quot;&gt; year = {2525}&lt;/span&gt;&quot;,&quot;&lt;span class=\\&quot;hljs-code\\&quot;&gt;}&lt;/span&gt;&quot;,&quot;&lt;span class=\\&quot;hljs-code\\&quot;&gt;```&lt;/span&gt;&quot;,&quot;&quot;,&quot;If the dataset has a [&lt;span class=\\&quot;hljs-string\\&quot;&gt;DOI&lt;/span&gt;](&lt;span class=\\&quot;hljs-link\\&quot;&gt;https://www.doi.org/&lt;/span&gt;), please provide it here.&quot;,&quot;&quot;,&quot;&lt;span class=\\&quot;hljs-section\\&quot;&gt;### Contributions&lt;/span&gt;&quot;,&quot;&quot;,&quot;Thanks to [&lt;span class=\\&quot;hljs-string\\&quot;&gt;@lewtun&lt;/span&gt;](&lt;span class=\\&quot;hljs-link\\&quot;&gt;https://github.com/lewtun&lt;/span&gt;) for adding this dataset.&quot;]}" data-target="BlobContent"><div class="relative text-sm"><div class="overflow-x-auto"><table class="border-collapse font-mono"><tbody><tr class="" id="L1"><td class="text-right select-none pl-5 pr-3 cursor-pointer text-gray-300 hover:text-black">1</td> <td class="px-3 overflow-visible whitespace-pre"><span class="hljs-section"># Dataset Card for GitHub Issues</span></td> </tr><tr class="" id="L2"><td class="text-right select-none pl-5 pr-3 cursor-pointer text-gray-300 hover:text-black">2</td> <td class="px-3 overflow-visible whitespace-pre"> </td> </tr><tr class="" id="L3"><td class="text-right select-none pl-5 pr-3 cursor-pointer text-gray-300 hover:text-black">3</td> <td class="px-3 overflow-visible whitespace-pre"><span class="hljs-section">## 
Dataset Description</span></td> </tr><tr class="" id="L4"><td class="text-right select-none pl-5 pr-3 cursor-pointer text-gray-300 hover:text-black">4</td> <td class="px-3 overflow-visible whitespace-pre"> </td> </tr><tr class="" id="L5"><td class="text-right select-none pl-5 pr-3 cursor-pointer text-gray-300 hover:text-black">5</td> <td class="px-3 overflow-visible whitespace-pre"><span class="hljs-bullet">-</span> <span class="hljs-strong">**Point of Contact:**</span> [<span class="hljs-string">Lewis Tunstall</span>](<span class="hljs-link">lewis@huggingface.co</span>)</td> </tr><tr class="" id="L6"><td class="text-right select-none pl-5 pr-3 cursor-pointer text-gray-300 hover:text-black">6</td> <td class="px-3 overflow-visible whitespace-pre"> </td> </tr><tr class="" id="L7"><td class="text-right select-none pl-5 pr-3 cursor-pointer text-gray-300 hover:text-black">7</td> <td class="px-3 overflow-visible whitespace-pre"><span class="hljs-section">### Dataset Summary</span></td> </tr><tr class="" id="L8"><td class="text-right select-none pl-5 pr-3 cursor-pointer text-gray-300 hover:text-black">8</td> <td class="px-3 overflow-visible whitespace-pre"> </td> </tr><tr class="" id="L9"><td class="text-right select-none pl-5 pr-3 cursor-pointer text-gray-300 hover:text-black">9</td> <td class="px-3 overflow-visible whitespace-pre">GitHub Issues is a dataset consisting of GitHub issues and pull requests associated with the 🤗 Datasets [<span class="hljs-string">repository</span>](<span class="hljs-link">https://github.com/huggingface/datasets</span>). It is intended for educational purposes and can be used for semantic search or multilabel text classification. 
The contents of each GitHub issue are in English and concern the domain of datasets for NLP, computer vision, and beyond.</td> </tr><tr class="" id="L10"><td class="text-right select-none pl-5 pr-3 cursor-pointer text-gray-300 hover:text-black">10</td> <td class="px-3 overflow-visible whitespace-pre"> </td> </tr><tr class="" id="L11"><td class="text-right select-none pl-5 pr-3 cursor-pointer text-gray-300 hover:text-black">11</td> <td class="px-3 overflow-visible whitespace-pre"><span class="hljs-section">### Supported Tasks and Leaderboards</span></td> </tr><tr class="" id="L12"><td class="text-right select-none pl-5 pr-3 cursor-pointer text-gray-300 hover:text-black">12</td> <td class="px-3 overflow-visible whitespace-pre"> </td> </tr><tr class="" id="L13"><td class="text-right select-none pl-5 pr-3 cursor-pointer text-gray-300 hover:text-black">13</td> <td class="px-3 overflow-visible whitespace-pre">For each of the tasks tagged for this dataset, give a brief description of the tag, metrics, and suggested models (with a link to their HuggingFace implementation if available). Give a similar description of tasks that were not covered by the structured tag set (replace the <span class="hljs-code">`task-category-tag`</span> with an appropriate <span class="hljs-code">`other:other-task-name`</span>).</td> </tr><tr class="" id="L14"><td class="text-right select-none pl-5 pr-3 cursor-pointer text-gray-300 hover:text-black">14</td> <td class="px-3 overflow-visible whitespace-pre"> </td> </tr><tr class="" id="L15"><td class="text-right select-none pl-5 pr-3 cursor-pointer text-gray-300 hover:text-black">15</td> <td class="px-3 overflow-visible whitespace-pre"><span class="hljs-bullet">-</span> <span class="hljs-code">`task-category-tag`</span>: The dataset can be used to train a model for [<span class="hljs-string">TASK NAME</span>], which consists in [<span class="hljs-string">TASK DESCRIPTION</span>]. 
Success on this task is typically measured by achieving a <span class="hljs-emphasis">*high/low*</span> [<span class="hljs-string">metric name</span>](<span class="hljs-link">https://huggingface.co/metrics/metric_name</span>). The ([<span class="hljs-string">model name</span>](<span class="hljs-link">https://huggingface.co/model_name</span>) or [<span class="hljs-string">model class</span>](<span class="hljs-link">https://huggingface.co/transformers/model_doc/model_class.html</span>)) model currently achieves the following score. <span class="hljs-emphasis">*[<span class="hljs-string">IF A LEADERBOARD IS AVAILABLE</span>]:*</span> This task has an active leaderboard which can be found at [<span class="hljs-string">leaderboard url</span>](<span class="hljs-link"></span>) and ranks models based on [<span class="hljs-string">metric name</span>](<span class="hljs-link">https://huggingface.co/metrics/metric_name</span>) while also reporting [<span class="hljs-string">other metric name</span>](<span class="hljs-link">https://huggingface.co/metrics/other_metric_name</span>).</td> </tr><tr class="" id="L16"><td class="text-right select-none pl-5 pr-3 cursor-pointer text-gray-300 hover:text-black">16</td> <td class="px-3 overflow-visible whitespace-pre"> </td> </tr><tr class="" id="L17"><td class="text-right select-none pl-5 pr-3 cursor-pointer text-gray-300 hover:text-black">17</td> <td class="px-3 overflow-visible whitespace-pre"><span class="hljs-section">### Languages</span></td> </tr><tr class="" id="L18"><td class="text-right select-none pl-5 pr-3 cursor-pointer text-gray-300 hover:text-black">18</td> <td class="px-3 overflow-visible whitespace-pre"> </td> </tr><tr class="" id="L19"><td class="text-right select-none pl-5 pr-3 cursor-pointer text-gray-300 hover:text-black">19</td> <td class="px-3 overflow-visible whitespace-pre">Provide a brief overview of the languages represented in the dataset. 
Describe relevant details about specifics of the language such as whether it is social media text, African American English,...</td> </tr><tr class="" id="L20"><td class="text-right select-none pl-5 pr-3 cursor-pointer text-gray-300 hover:text-black">20</td> <td class="px-3 overflow-visible whitespace-pre"> </td> </tr><tr class="" id="L21"><td class="text-right select-none pl-5 pr-3 cursor-pointer text-gray-300 hover:text-black">21</td> <td class="px-3 overflow-visible whitespace-pre">When relevant, please provide [<span class="hljs-string">BCP-47 codes</span>](<span class="hljs-link">https://tools.ietf.org/html/bcp47</span>), which consist of a [<span class="hljs-string">primary language subtag</span>](<span class="hljs-link">https://tools.ietf.org/html/bcp47#section-2.2.1</span>), with a [<span class="hljs-string">script subtag</span>](<span class="hljs-link">https://tools.ietf.org/html/bcp47#section-2.2.3</span>) and/or [<span class="hljs-string">region subtag</span>](<span class="hljs-link">https://tools.ietf.org/html/bcp47#section-2.2.4</span>) if available.</td> </tr><tr class="" id="L22"><td class="text-right select-none pl-5 pr-3 cursor-pointer text-gray-300 hover:text-black">22</td> <td class="px-3 overflow-visible whitespace-pre"> </td> </tr><tr class="" id="L23"><td class="text-right select-none pl-5 pr-3 cursor-pointer text-gray-300 hover:text-black">23</td> <td class="px-3 overflow-visible whitespace-pre"><span class="hljs-section">## Dataset Structure</span></td> </tr><tr class="" id="L24"><td class="text-right select-none pl-5 pr-3 cursor-pointer text-gray-300 hover:text-black">24</td> <td class="px-3 overflow-visible whitespace-pre"> </td> </tr><tr class="" id="L25"><td class="text-right select-none pl-5 pr-3 cursor-pointer text-gray-300 hover:text-black">25</td> <td class="px-3 overflow-visible whitespace-pre"><span class="hljs-section">### Data Instances</span></td> </tr><tr class="" id="L26"><td class="text-right select-none pl-5 pr-3 
cursor-pointer text-gray-300 hover:text-black">26</td> <td class="px-3 overflow-visible whitespace-pre"> </td> </tr><tr class="" id="L27"><td class="text-right select-none pl-5 pr-3 cursor-pointer text-gray-300 hover:text-black">27</td> <td class="px-3 overflow-visible whitespace-pre">Provide a JSON-formatted example and brief description of a typical instance in the dataset. If available, provide a link to further examples.</td> </tr><tr class="" id="L28"><td class="text-right select-none pl-5 pr-3 cursor-pointer text-gray-300 hover:text-black">28</td> <td class="px-3 overflow-visible whitespace-pre"> </td> </tr><tr class="" id="L29"><td class="text-right select-none pl-5 pr-3 cursor-pointer text-gray-300 hover:text-black">29</td> <td class="px-3 overflow-visible whitespace-pre"><span class="hljs-code">```</span></td> </tr><tr class="" id="L30"><td class="text-right select-none pl-5 pr-3 cursor-pointer text-gray-300 hover:text-black">30</td> <td class="px-3 overflow-visible whitespace-pre"><span class="hljs-code">{</span></td> </tr><tr class="" id="L31"><td class="text-right select-none pl-5 pr-3 cursor-pointer text-gray-300 hover:text-black">31</td> <td class="px-3 overflow-visible whitespace-pre"><span class="hljs-code"> &#x27;example_field&#x27;: ...,</span></td> </tr><tr class="" id="L32"><td class="text-right select-none pl-5 pr-3 cursor-pointer text-gray-300 hover:text-black">32</td> <td class="px-3 overflow-visible whitespace-pre"><span class="hljs-code"> ...</span></td> </tr><tr class="" id="L33"><td class="text-right select-none pl-5 pr-3 cursor-pointer text-gray-300 hover:text-black">33</td> <td class="px-3 overflow-visible whitespace-pre"><span class="hljs-code">}</span></td> </tr><tr class="" id="L34"><td class="text-right select-none pl-5 pr-3 cursor-pointer text-gray-300 hover:text-black">34</td> <td class="px-3 overflow-visible whitespace-pre"><span class="hljs-code">```</span></td> </tr><tr class="" id="L35"><td class="text-right select-none pl-5 
pr-3 cursor-pointer text-gray-300 hover:text-black">35</td> <td class="px-3 overflow-visible whitespace-pre"> </td> </tr><tr class="" id="L36"><td class="text-right select-none pl-5 pr-3 cursor-pointer text-gray-300 hover:text-black">36</td> <td class="px-3 overflow-visible whitespace-pre">Provide any additional information that is not covered in the other sections about the data here. In particular describe any relationships between data points and if these relationships are made explicit.</td> </tr><tr class="" id="L37"><td class="text-right select-none pl-5 pr-3 cursor-pointer text-gray-300 hover:text-black">37</td> <td class="px-3 overflow-visible whitespace-pre"> </td> </tr><tr class="" id="L38"><td class="text-right select-none pl-5 pr-3 cursor-pointer text-gray-300 hover:text-black">38</td> <td class="px-3 overflow-visible whitespace-pre"><span class="hljs-section">### Data Fields</span></td> </tr><tr class="" id="L39"><td class="text-right select-none pl-5 pr-3 cursor-pointer text-gray-300 hover:text-black">39</td> <td class="px-3 overflow-visible whitespace-pre"> </td> </tr><tr class="" id="L40"><td class="text-right select-none pl-5 pr-3 cursor-pointer text-gray-300 hover:text-black">40</td> <td class="px-3 overflow-visible whitespace-pre">List and describe the fields present in the dataset. Mention their data type, and whether they are used as input or output in any of the tasks the dataset currently supports. If the data has span indices, describe their attributes, such as whether they are at the character level or word level, whether they are contiguous or not, etc. 
If the dataset contains example IDs, state whether they have an inherent meaning, such as a mapping to other datasets or pointing to relationships between data points.</td> </tr><tr class="" id="L41"><td class="text-right select-none pl-5 pr-3 cursor-pointer text-gray-300 hover:text-black">41</td> <td class="px-3 overflow-visible whitespace-pre"> </td> </tr><tr class="" id="L42"><td class="text-right select-none pl-5 pr-3 cursor-pointer text-gray-300 hover:text-black">42</td> <td class="px-3 overflow-visible whitespace-pre"><span class="hljs-bullet">-</span> <span class="hljs-code">`example_field`</span>: description of <span class="hljs-code">`example_field`</span></td> </tr><tr class="" id="L43"><td class="text-right select-none pl-5 pr-3 cursor-pointer text-gray-300 hover:text-black">43</td> <td class="px-3 overflow-visible whitespace-pre"> </td> </tr><tr class="" id="L44"><td class="text-right select-none pl-5 pr-3 cursor-pointer text-gray-300 hover:text-black">44</td> <td class="px-3 overflow-visible whitespace-pre">Note that the descriptions can be initialized with the <span class="hljs-strong">**Show Markdown Data Fields**</span> output of the [<span class="hljs-string">tagging app</span>](<span class="hljs-link">https://github.com/huggingface/datasets-tagging</span>); you will then only need to refine the generated descriptions.</td> </tr><tr class="" id="L45"><td class="text-right select-none pl-5 pr-3 cursor-pointer text-gray-300 hover:text-black">45</td> <td class="px-3 overflow-visible whitespace-pre"> </td> </tr><tr class="" id="L46"><td class="text-right select-none pl-5 pr-3 cursor-pointer text-gray-300 hover:text-black">46</td> <td class="px-3 overflow-visible whitespace-pre"><span class="hljs-section">### Data Splits</span></td> </tr><tr class="" id="L47"><td class="text-right select-none pl-5 pr-3 cursor-pointer text-gray-300 hover:text-black">47</td> <td class="px-3 overflow-visible whitespace-pre"> </td> </tr><tr class="" id="L48"><td 
class="text-right select-none pl-5 pr-3 cursor-pointer text-gray-300 hover:text-black">48</td> <td class="px-3 overflow-visible whitespace-pre">Describe and name the splits in the dataset if there are more than one.</td> </tr><tr class="" id="L49"><td class="text-right select-none pl-5 pr-3 cursor-pointer text-gray-300 hover:text-black">49</td> <td class="px-3 overflow-visible whitespace-pre"> </td> </tr><tr class="" id="L50"><td class="text-right select-none pl-5 pr-3 cursor-pointer text-gray-300 hover:text-black">50</td> <td class="px-3 overflow-visible whitespace-pre">Describe any criteria for splitting the data, if used. If their are differences between the splits (e.g. if the training annotations are machine-generated and the dev and test ones are created by humans, or if different numbers of annotators contributed to each example), describe them here.</td> </tr><tr class="" id="L51"><td class="text-right select-none pl-5 pr-3 cursor-pointer text-gray-300 hover:text-black">51</td> <td class="px-3 overflow-visible whitespace-pre"> </td> </tr><tr class="" id="L52"><td class="text-right select-none pl-5 pr-3 cursor-pointer text-gray-300 hover:text-black">52</td> <td class="px-3 overflow-visible whitespace-pre">Provide the sizes of each split. As appropriate, provide any descriptive statistics for the features, such as average length. 
For example:</td> </tr><tr class="" id="L53"><td class="text-right select-none pl-5 pr-3 cursor-pointer text-gray-300 hover:text-black">53</td> <td class="px-3 overflow-visible whitespace-pre"> </td> </tr><tr class="" id="L54"><td class="text-right select-none pl-5 pr-3 cursor-pointer text-gray-300 hover:text-black">54</td> <td class="px-3 overflow-visible whitespace-pre">| | Tain | Valid | Test |</td> </tr><tr class="" id="L55"><td class="text-right select-none pl-5 pr-3 cursor-pointer text-gray-300 hover:text-black">55</td> <td class="px-3 overflow-visible whitespace-pre">| ----- | ------ | ----- | ---- |</td> </tr><tr class="" id="L56"><td class="text-right select-none pl-5 pr-3 cursor-pointer text-gray-300 hover:text-black">56</td> <td class="px-3 overflow-visible whitespace-pre">| Input Sentences | | | |</td> </tr><tr class="" id="L57"><td class="text-right select-none pl-5 pr-3 cursor-pointer text-gray-300 hover:text-black">57</td> <td class="px-3 overflow-visible whitespace-pre">| Average Sentence Length | | | |</td> </tr><tr class="" id="L58"><td class="text-right select-none pl-5 pr-3 cursor-pointer text-gray-300 hover:text-black">58</td> <td class="px-3 overflow-visible whitespace-pre"> </td> </tr><tr class="" id="L59"><td class="text-right select-none pl-5 pr-3 cursor-pointer text-gray-300 hover:text-black">59</td> <td class="px-3 overflow-visible whitespace-pre"><span class="hljs-section">## Dataset Creation</span></td> </tr><tr class="" id="L60"><td class="text-right select-none pl-5 pr-3 cursor-pointer text-gray-300 hover:text-black">60</td> <td class="px-3 overflow-visible whitespace-pre"> </td> </tr><tr class="" id="L61"><td class="text-right select-none pl-5 pr-3 cursor-pointer text-gray-300 hover:text-black">61</td> <td class="px-3 overflow-visible whitespace-pre"><span class="hljs-section">### Curation Rationale</span></td> </tr><tr class="" id="L62"><td class="text-right select-none pl-5 pr-3 cursor-pointer text-gray-300 
hover:text-black">62</td> <td class="px-3 overflow-visible whitespace-pre"> </td> </tr><tr class="" id="L63"><td class="text-right select-none pl-5 pr-3 cursor-pointer text-gray-300 hover:text-black">63</td> <td class="px-3 overflow-visible whitespace-pre">What need motivated the creation of this dataset? What are some of the reasons underlying the major choices involved in putting it together?</td> </tr><tr class="" id="L64"><td class="text-right select-none pl-5 pr-3 cursor-pointer text-gray-300 hover:text-black">64</td> <td class="px-3 overflow-visible whitespace-pre"> </td> </tr><tr class="" id="L65"><td class="text-right select-none pl-5 pr-3 cursor-pointer text-gray-300 hover:text-black">65</td> <td class="px-3 overflow-visible whitespace-pre"><span class="hljs-section">### Source Data</span></td> </tr><tr class="" id="L66"><td class="text-right select-none pl-5 pr-3 cursor-pointer text-gray-300 hover:text-black">66</td> <td class="px-3 overflow-visible whitespace-pre"> </td> </tr><tr class="" id="L67"><td class="text-right select-none pl-5 pr-3 cursor-pointer text-gray-300 hover:text-black">67</td> <td class="px-3 overflow-visible whitespace-pre">This section describes the source data (e.g. 
news text and headlines, social media posts, translated sentences,...)</td> </tr><tr class="" id="L68"><td class="text-right select-none pl-5 pr-3 cursor-pointer text-gray-300 hover:text-black">68</td> <td class="px-3 overflow-visible whitespace-pre"> </td> </tr><tr class="" id="L69"><td class="text-right select-none pl-5 pr-3 cursor-pointer text-gray-300 hover:text-black">69</td> <td class="px-3 overflow-visible whitespace-pre"><span class="hljs-section">#### Initial Data Collection and Normalization</span></td> </tr><tr class="" id="L70"><td class="text-right select-none pl-5 pr-3 cursor-pointer text-gray-300 hover:text-black">70</td> <td class="px-3 overflow-visible whitespace-pre"> </td> </tr><tr class="" id="L71"><td class="text-right select-none pl-5 pr-3 cursor-pointer text-gray-300 hover:text-black">71</td> <td class="px-3 overflow-visible whitespace-pre">Describe the data collection process. Describe any criteria for data selection or filtering. List any key words or search terms used. 
If possible, include runtime information for the collection process.</td> </tr><tr class="" id="L72"><td class="text-right select-none pl-5 pr-3 cursor-pointer text-gray-300 hover:text-black">72</td> <td class="px-3 overflow-visible whitespace-pre"> </td> </tr><tr class="" id="L73"><td class="text-right select-none pl-5 pr-3 cursor-pointer text-gray-300 hover:text-black">73</td> <td class="px-3 overflow-visible whitespace-pre">If data was collected from other pre-existing datasets, link to source here and to their [<span class="hljs-string">Hugging Face version</span>](<span class="hljs-link">https://huggingface.co/datasets/dataset_name</span>).</td> </tr><tr class="" id="L74"><td class="text-right select-none pl-5 pr-3 cursor-pointer text-gray-300 hover:text-black">74</td> <td class="px-3 overflow-visible whitespace-pre"> </td> </tr><tr class="" id="L75"><td class="text-right select-none pl-5 pr-3 cursor-pointer text-gray-300 hover:text-black">75</td> <td class="px-3 overflow-visible whitespace-pre">If the data was modified or normalized after being collected (e.g. 
if the data is word-tokenized), describe the process and the tools used.</td> </tr><tr class="" id="L76"><td class="text-right select-none pl-5 pr-3 cursor-pointer text-gray-300 hover:text-black">76</td> <td class="px-3 overflow-visible whitespace-pre"> </td> </tr><tr class="" id="L77"><td class="text-right select-none pl-5 pr-3 cursor-pointer text-gray-300 hover:text-black">77</td> <td class="px-3 overflow-visible whitespace-pre"><span class="hljs-section">#### Who are the source language producers?</span></td> </tr><tr class="" id="L78"><td class="text-right select-none pl-5 pr-3 cursor-pointer text-gray-300 hover:text-black">78</td> <td class="px-3 overflow-visible whitespace-pre"> </td> </tr><tr class="" id="L79"><td class="text-right select-none pl-5 pr-3 cursor-pointer text-gray-300 hover:text-black">79</td> <td class="px-3 overflow-visible whitespace-pre">State whether the data was produced by humans or machine generated. Describe the people or systems who originally created the data.</td> </tr><tr class="" id="L80"><td class="text-right select-none pl-5 pr-3 cursor-pointer text-gray-300 hover:text-black">80</td> <td class="px-3 overflow-visible whitespace-pre"> </td> </tr><tr class="" id="L81"><td class="text-right select-none pl-5 pr-3 cursor-pointer text-gray-300 hover:text-black">81</td> <td class="px-3 overflow-visible whitespace-pre">If available, include self-reported demographic or identity information for the source data creators, but avoid inferring this information. Instead state that this information is unknown. 
See [<span class="hljs-string">Larson 2017</span>](<span class="hljs-link">https://www.aclweb.org/anthology/W17-1601.pdf</span>) for using identity categories as a variables, particularly gender.</td> </tr><tr class="" id="L82"><td class="text-right select-none pl-5 pr-3 cursor-pointer text-gray-300 hover:text-black">82</td> <td class="px-3 overflow-visible whitespace-pre"> </td> </tr><tr class="" id="L83"><td class="text-right select-none pl-5 pr-3 cursor-pointer text-gray-300 hover:text-black">83</td> <td class="px-3 overflow-visible whitespace-pre">Describe the conditions under which the data was created (for example, if the producers were crowdworkers, state what platform was used, or if the data was found, what website the data was found on). If compensation was provided, include that information here.</td> </tr><tr class="" id="L84"><td class="text-right select-none pl-5 pr-3 cursor-pointer text-gray-300 hover:text-black">84</td> <td class="px-3 overflow-visible whitespace-pre"> </td> </tr><tr class="" id="L85"><td class="text-right select-none pl-5 pr-3 cursor-pointer text-gray-300 hover:text-black">85</td> <td class="px-3 overflow-visible whitespace-pre">Describe other people represented or mentioned in the data. 
Where possible, link to references for the information.</td> </tr><tr class="" id="L86"><td class="text-right select-none pl-5 pr-3 cursor-pointer text-gray-300 hover:text-black">86</td> <td class="px-3 overflow-visible whitespace-pre"> </td> </tr><tr class="" id="L87"><td class="text-right select-none pl-5 pr-3 cursor-pointer text-gray-300 hover:text-black">87</td> <td class="px-3 overflow-visible whitespace-pre"><span class="hljs-section">### Annotations</span></td> </tr><tr class="" id="L88"><td class="text-right select-none pl-5 pr-3 cursor-pointer text-gray-300 hover:text-black">88</td> <td class="px-3 overflow-visible whitespace-pre"> </td> </tr><tr class="" id="L89"><td class="text-right select-none pl-5 pr-3 cursor-pointer text-gray-300 hover:text-black">89</td> <td class="px-3 overflow-visible whitespace-pre">If the dataset contains annotations which are not part of the initial data collection, describe them in the following paragraphs.</td> </tr><tr class="" id="L90"><td class="text-right select-none pl-5 pr-3 cursor-pointer text-gray-300 hover:text-black">90</td> <td class="px-3 overflow-visible whitespace-pre"> </td> </tr><tr class="" id="L91"><td class="text-right select-none pl-5 pr-3 cursor-pointer text-gray-300 hover:text-black">91</td> <td class="px-3 overflow-visible whitespace-pre"><span class="hljs-section">#### Annotation process</span></td> </tr><tr class="" id="L92"><td class="text-right select-none pl-5 pr-3 cursor-pointer text-gray-300 hover:text-black">92</td> <td class="px-3 overflow-visible whitespace-pre"> </td> </tr><tr class="" id="L93"><td class="text-right select-none pl-5 pr-3 cursor-pointer text-gray-300 hover:text-black">93</td> <td class="px-3 overflow-visible whitespace-pre">If applicable, describe the annotation process and any tools used, or state otherwise. Describe the amount of data annotated, if not all. Describe or reference annotation guidelines provided to the annotators. 
If available, provide interannotator statistics. Describe any annotation validation processes.</td> </tr><tr class="" id="L94"><td class="text-right select-none pl-5 pr-3 cursor-pointer text-gray-300 hover:text-black">94</td> <td class="px-3 overflow-visible whitespace-pre"> </td> </tr><tr class="" id="L95"><td class="text-right select-none pl-5 pr-3 cursor-pointer text-gray-300 hover:text-black">95</td> <td class="px-3 overflow-visible whitespace-pre"><span class="hljs-section">#### Who are the annotators?</span></td> </tr><tr class="" id="L96"><td class="text-right select-none pl-5 pr-3 cursor-pointer text-gray-300 hover:text-black">96</td> <td class="px-3 overflow-visible whitespace-pre"> </td> </tr><tr class="" id="L97"><td class="text-right select-none pl-5 pr-3 cursor-pointer text-gray-300 hover:text-black">97</td> <td class="px-3 overflow-visible whitespace-pre">If annotations were collected for the source data (such as class labels or syntactic parses), state whether the annotations were produced by humans or machine generated.</td> </tr><tr class="" id="L98"><td class="text-right select-none pl-5 pr-3 cursor-pointer text-gray-300 hover:text-black">98</td> <td class="px-3 overflow-visible whitespace-pre"> </td> </tr><tr class="" id="L99"><td class="text-right select-none pl-5 pr-3 cursor-pointer text-gray-300 hover:text-black">99</td> <td class="px-3 overflow-visible whitespace-pre">Describe the people or systems who originally created the annotations and their selection criteria if applicable.</td> </tr><tr class="" id="L100"><td class="text-right select-none pl-5 pr-3 cursor-pointer text-gray-300 hover:text-black">100</td> <td class="px-3 overflow-visible whitespace-pre"> </td> </tr><tr class="" id="L101"><td class="text-right select-none pl-5 pr-3 cursor-pointer text-gray-300 hover:text-black">101</td> <td class="px-3 overflow-visible whitespace-pre">If available, include self-reported demographic or identity information for the annotators, but avoid 
inferring this information. Instead state that this information is unknown. See [<span class="hljs-string">Larson 2017</span>](<span class="hljs-link">https://www.aclweb.org/anthology/W17-1601.pdf</span>) for using identity categories as a variables, particularly gender.</td> </tr><tr class="" id="L102"><td class="text-right select-none pl-5 pr-3 cursor-pointer text-gray-300 hover:text-black">102</td> <td class="px-3 overflow-visible whitespace-pre"> </td> </tr><tr class="" id="L103"><td class="text-right select-none pl-5 pr-3 cursor-pointer text-gray-300 hover:text-black">103</td> <td class="px-3 overflow-visible whitespace-pre">Describe the conditions under which the data was annotated (for example, if the annotators were crowdworkers, state what platform was used, or if the data was found, what website the data was found on). If compensation was provided, include that information here.</td> </tr><tr class="" id="L104"><td class="text-right select-none pl-5 pr-3 cursor-pointer text-gray-300 hover:text-black">104</td> <td class="px-3 overflow-visible whitespace-pre"> </td> </tr><tr class="" id="L105"><td class="text-right select-none pl-5 pr-3 cursor-pointer text-gray-300 hover:text-black">105</td> <td class="px-3 overflow-visible whitespace-pre"><span class="hljs-section">### Personal and Sensitive Information</span></td> </tr><tr class="" id="L106"><td class="text-right select-none pl-5 pr-3 cursor-pointer text-gray-300 hover:text-black">106</td> <td class="px-3 overflow-visible whitespace-pre"> </td> </tr><tr class="" id="L107"><td class="text-right select-none pl-5 pr-3 cursor-pointer text-gray-300 hover:text-black">107</td> <td class="px-3 overflow-visible whitespace-pre">State whether the dataset uses identity categories and, if so, how the information is used. Describe where this information comes from (i.e. self-reporting, collecting from profiles, inferring, etc.). 
See [<span class="hljs-string">Larson 2017</span>](<span class="hljs-link">https://www.aclweb.org/anthology/W17-1601.pdf</span>) for using identity categories as a variables, particularly gender. State whether the data is linked to individuals and whether those individuals can be identified in the dataset, either directly or indirectly (i.e., in combination with other data).</td> </tr><tr class="" id="L108"><td class="text-right select-none pl-5 pr-3 cursor-pointer text-gray-300 hover:text-black">108</td> <td class="px-3 overflow-visible whitespace-pre"> </td> </tr><tr class="" id="L109"><td class="text-right select-none pl-5 pr-3 cursor-pointer text-gray-300 hover:text-black">109</td> <td class="px-3 overflow-visible whitespace-pre">State whether the dataset contains other data that might be considered sensitive (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history). 
</td> </tr><tr class="" id="L110"><td class="text-right select-none pl-5 pr-3 cursor-pointer text-gray-300 hover:text-black">110</td> <td class="px-3 overflow-visible whitespace-pre"> </td> </tr><tr class="" id="L111"><td class="text-right select-none pl-5 pr-3 cursor-pointer text-gray-300 hover:text-black">111</td> <td class="px-3 overflow-visible whitespace-pre">If efforts were made to anonymize the data, describe the anonymization process.</td> </tr><tr class="" id="L112"><td class="text-right select-none pl-5 pr-3 cursor-pointer text-gray-300 hover:text-black">112</td> <td class="px-3 overflow-visible whitespace-pre"> </td> </tr><tr class="" id="L113"><td class="text-right select-none pl-5 pr-3 cursor-pointer text-gray-300 hover:text-black">113</td> <td class="px-3 overflow-visible whitespace-pre"><span class="hljs-section">## Considerations for Using the Data</span></td> </tr><tr class="" id="L114"><td class="text-right select-none pl-5 pr-3 cursor-pointer text-gray-300 hover:text-black">114</td> <td class="px-3 overflow-visible whitespace-pre"> </td> </tr><tr class="" id="L115"><td class="text-right select-none pl-5 pr-3 cursor-pointer text-gray-300 hover:text-black">115</td> <td class="px-3 overflow-visible whitespace-pre"><span class="hljs-section">### Social Impact of Dataset</span></td> </tr><tr class="" id="L116"><td class="text-right select-none pl-5 pr-3 cursor-pointer text-gray-300 hover:text-black">116</td> <td class="px-3 overflow-visible whitespace-pre"> </td> </tr><tr class="" id="L117"><td class="text-right select-none pl-5 pr-3 cursor-pointer text-gray-300 hover:text-black">117</td> <td class="px-3 overflow-visible whitespace-pre">Please discuss some of the ways you believe the use of this dataset will impact society.</td> </tr><tr class="" id="L118"><td class="text-right select-none pl-5 pr-3 cursor-pointer text-gray-300 hover:text-black">118</td> <td class="px-3 overflow-visible whitespace-pre"> </td> </tr><tr class="" id="L119"><td 
class="text-right select-none pl-5 pr-3 cursor-pointer text-gray-300 hover:text-black">119</td> <td class="px-3 overflow-visible whitespace-pre">The statement should include both positive outlooks, such as outlining how technologies developed through its use may improve people&#x27;s lives, and discuss the accompanying risks. These risks may range from making important decisions more opaque to people who are affected by the technology, to reinforcing existing harmful biases (whose specifics should be discussed in the next section), among other considerations.</td> </tr><tr class="" id="L120"><td class="text-right select-none pl-5 pr-3 cursor-pointer text-gray-300 hover:text-black">120</td> <td class="px-3 overflow-visible whitespace-pre"> </td> </tr><tr class="" id="L121"><td class="text-right select-none pl-5 pr-3 cursor-pointer text-gray-300 hover:text-black">121</td> <td class="px-3 overflow-visible whitespace-pre">Also describe in this section if the proposed dataset contains a low-resource or under-represented language. 
If this is the case or if this task has any impact on underserved communities, please elaborate here.</td> </tr><tr class="" id="L122"><td class="text-right select-none pl-5 pr-3 cursor-pointer text-gray-300 hover:text-black">122</td> <td class="px-3 overflow-visible whitespace-pre"> </td> </tr><tr class="" id="L123"><td class="text-right select-none pl-5 pr-3 cursor-pointer text-gray-300 hover:text-black">123</td> <td class="px-3 overflow-visible whitespace-pre"><span class="hljs-section">### Discussion of Biases</span></td> </tr><tr class="" id="L124"><td class="text-right select-none pl-5 pr-3 cursor-pointer text-gray-300 hover:text-black">124</td> <td class="px-3 overflow-visible whitespace-pre"> </td> </tr><tr class="" id="L125"><td class="text-right select-none pl-5 pr-3 cursor-pointer text-gray-300 hover:text-black">125</td> <td class="px-3 overflow-visible whitespace-pre">Provide descriptions of specific biases that are likely to be reflected in the data, and state whether any steps were taken to reduce their impact.</td> </tr><tr class="" id="L126"><td class="text-right select-none pl-5 pr-3 cursor-pointer text-gray-300 hover:text-black">126</td> <td class="px-3 overflow-visible whitespace-pre"> </td> </tr><tr class="" id="L127"><td class="text-right select-none pl-5 pr-3 cursor-pointer text-gray-300 hover:text-black">127</td> <td class="px-3 overflow-visible whitespace-pre">For Wikipedia text, see for example [<span class="hljs-string">Dinan et al 2020 on biases in Wikipedia (esp. 
Table 1)</span>](<span class="hljs-link">https://arxiv.org/abs/2005.00614</span>), or [<span class="hljs-string">Blodgett et al 2020</span>](<span class="hljs-link">https://www.aclweb.org/anthology/2020.acl-main.485/</span>) for a more general discussion of the topic.</td> </tr><tr class="" id="L128"><td class="text-right select-none pl-5 pr-3 cursor-pointer text-gray-300 hover:text-black">128</td> <td class="px-3 overflow-visible whitespace-pre"> </td> </tr><tr class="" id="L129"><td class="text-right select-none pl-5 pr-3 cursor-pointer text-gray-300 hover:text-black">129</td> <td class="px-3 overflow-visible whitespace-pre">If analyses have been run quantifying these biases, please add brief summaries and links to the studies here.</td> </tr><tr class="" id="L130"><td class="text-right select-none pl-5 pr-3 cursor-pointer text-gray-300 hover:text-black">130</td> <td class="px-3 overflow-visible whitespace-pre"> </td> </tr><tr class="" id="L131"><td class="text-right select-none pl-5 pr-3 cursor-pointer text-gray-300 hover:text-black">131</td> <td class="px-3 overflow-visible whitespace-pre"><span class="hljs-section">### Other Known Limitations</span></td> </tr><tr class="" id="L132"><td class="text-right select-none pl-5 pr-3 cursor-pointer text-gray-300 hover:text-black">132</td> <td class="px-3 overflow-visible whitespace-pre"> </td> </tr><tr class="" id="L133"><td class="text-right select-none pl-5 pr-3 cursor-pointer text-gray-300 hover:text-black">133</td> <td class="px-3 overflow-visible whitespace-pre">If studies of the datasets have outlined other limitations of the dataset, such as annotation artifacts, please outline and cite them here.</td> </tr><tr class="" id="L134"><td class="text-right select-none pl-5 pr-3 cursor-pointer text-gray-300 hover:text-black">134</td> <td class="px-3 overflow-visible whitespace-pre"> </td> </tr><tr class="" id="L135"><td class="text-right select-none pl-5 pr-3 cursor-pointer text-gray-300 hover:text-black">135</td> <td 
class="px-3 overflow-visible whitespace-pre"><span class="hljs-section">## Additional Information</span></td> </tr><tr class="" id="L136"><td class="text-right select-none pl-5 pr-3 cursor-pointer text-gray-300 hover:text-black">136</td> <td class="px-3 overflow-visible whitespace-pre"> </td> </tr><tr class="" id="L137"><td class="text-right select-none pl-5 pr-3 cursor-pointer text-gray-300 hover:text-black">137</td> <td class="px-3 overflow-visible whitespace-pre"><span class="hljs-section">### Dataset Curators</span></td> </tr><tr class="" id="L138"><td class="text-right select-none pl-5 pr-3 cursor-pointer text-gray-300 hover:text-black">138</td> <td class="px-3 overflow-visible whitespace-pre"> </td> </tr><tr class="" id="L139"><td class="text-right select-none pl-5 pr-3 cursor-pointer text-gray-300 hover:text-black">139</td> <td class="px-3 overflow-visible whitespace-pre">List the people involved in collecting the dataset and their affiliation(s). If funding information is known, include it here.</td> </tr><tr class="" id="L140"><td class="text-right select-none pl-5 pr-3 cursor-pointer text-gray-300 hover:text-black">140</td> <td class="px-3 overflow-visible whitespace-pre"> </td> </tr><tr class="" id="L141"><td class="text-right select-none pl-5 pr-3 cursor-pointer text-gray-300 hover:text-black">141</td> <td class="px-3 overflow-visible whitespace-pre"><span class="hljs-section">### Licensing Information</span></td> </tr><tr class="" id="L142"><td class="text-right select-none pl-5 pr-3 cursor-pointer text-gray-300 hover:text-black">142</td> <td class="px-3 overflow-visible whitespace-pre"> </td> </tr><tr class="" id="L143"><td class="text-right select-none pl-5 pr-3 cursor-pointer text-gray-300 hover:text-black">143</td> <td class="px-3 overflow-visible whitespace-pre">Provide the license and link to the license webpage if available.</td> </tr><tr class="" id="L144"><td class="text-right select-none pl-5 pr-3 cursor-pointer text-gray-300 
hover:text-black">144</td> <td class="px-3 overflow-visible whitespace-pre"> </td> </tr><tr class="" id="L145"><td class="text-right select-none pl-5 pr-3 cursor-pointer text-gray-300 hover:text-black">145</td> <td class="px-3 overflow-visible whitespace-pre"><span class="hljs-section">### Citation Information</span></td> </tr><tr class="" id="L146"><td class="text-right select-none pl-5 pr-3 cursor-pointer text-gray-300 hover:text-black">146</td> <td class="px-3 overflow-visible whitespace-pre"> </td> </tr><tr class="" id="L147"><td class="text-right select-none pl-5 pr-3 cursor-pointer text-gray-300 hover:text-black">147</td> <td class="px-3 overflow-visible whitespace-pre">Provide the [<span class="hljs-string">BibTex</span>](<span class="hljs-link">http://www.bibtex.org/</span>)-formatted reference for the dataset. For example:</td> </tr><tr class="" id="L148"><td class="text-right select-none pl-5 pr-3 cursor-pointer text-gray-300 hover:text-black">148</td> <td class="px-3 overflow-visible whitespace-pre"><span class="hljs-code">```</span></td> </tr><tr class="" id="L149"><td class="text-right select-none pl-5 pr-3 cursor-pointer text-gray-300 hover:text-black">149</td> <td class="px-3 overflow-visible whitespace-pre"><span class="hljs-code">@article{article_id,</span></td> </tr><tr class="" id="L150"><td class="text-right select-none pl-5 pr-3 cursor-pointer text-gray-300 hover:text-black">150</td> <td class="px-3 overflow-visible whitespace-pre"><span class="hljs-code"> author = {Author List},</span></td> </tr><tr class="" id="L151"><td class="text-right select-none pl-5 pr-3 cursor-pointer text-gray-300 hover:text-black">151</td> <td class="px-3 overflow-visible whitespace-pre"><span class="hljs-code"> title = {Dataset Paper Title},</span></td> </tr><tr class="" id="L152"><td class="text-right select-none pl-5 pr-3 cursor-pointer text-gray-300 hover:text-black">152</td> <td class="px-3 overflow-visible whitespace-pre"><span class="hljs-code"> journal = 
{Publication Venue},</span></td> </tr><tr class="" id="L153"><td class="text-right select-none pl-5 pr-3 cursor-pointer text-gray-300 hover:text-black">153</td> <td class="px-3 overflow-visible whitespace-pre"><span class="hljs-code"> year = {2525}</span></td> </tr><tr class="" id="L154"><td class="text-right select-none pl-5 pr-3 cursor-pointer text-gray-300 hover:text-black">154</td> <td class="px-3 overflow-visible whitespace-pre"><span class="hljs-code">}</span></td> </tr><tr class="" id="L155"><td class="text-right select-none pl-5 pr-3 cursor-pointer text-gray-300 hover:text-black">155</td> <td class="px-3 overflow-visible whitespace-pre"><span class="hljs-code">```</span></td> </tr><tr class="" id="L156"><td class="text-right select-none pl-5 pr-3 cursor-pointer text-gray-300 hover:text-black">156</td> <td class="px-3 overflow-visible whitespace-pre"> </td> </tr><tr class="" id="L157"><td class="text-right select-none pl-5 pr-3 cursor-pointer text-gray-300 hover:text-black">157</td> <td class="px-3 overflow-visible whitespace-pre">If the dataset has a [<span class="hljs-string">DOI</span>](<span class="hljs-link">https://www.doi.org/</span>), please provide it here.</td> </tr><tr class="" id="L158"><td class="text-right select-none pl-5 pr-3 cursor-pointer text-gray-300 hover:text-black">158</td> <td class="px-3 overflow-visible whitespace-pre"> </td> </tr><tr class="" id="L159"><td class="text-right select-none pl-5 pr-3 cursor-pointer text-gray-300 hover:text-black">159</td> <td class="px-3 overflow-visible whitespace-pre"><span class="hljs-section">### Contributions</span></td> </tr><tr class="" id="L160"><td class="text-right select-none pl-5 pr-3 cursor-pointer text-gray-300 hover:text-black">160</td> <td class="px-3 overflow-visible whitespace-pre"> </td> </tr><tr class="" id="L161"><td class="text-right select-none pl-5 pr-3 cursor-pointer text-gray-300 hover:text-black">161</td> <td class="px-3 overflow-visible whitespace-pre">Thanks to [<span 
class="hljs-string">@lewtun</span>](<span class="hljs-link">https://github.com/lewtun</span>) for adding this dataset.</td> </tr></tbody></table></div> </div></div></div></div></section></div></main> </div> <script> import("/front/build/module/index.6509d170.js"); window.supportsDynamicImport = true; </script> <script> if (!window.supportsDynamicImport) { const systemJsLoaderTag = document.createElement("script"); systemJsLoaderTag.src = "https://unpkg.com/systemjs@2.0.0/dist/s.min.js"; systemJsLoaderTag.addEventListener("load", function () { System.import("./front/build/nomodule/index.6509d170.js"); }); document.head.appendChild(systemJsLoaderTag); } </script> <script type="text/javascript"> /// LinkedIn (part 1) _linkedin_partner_id = "3734489"; window._linkedin_data_partner_ids = window._linkedin_data_partner_ids || []; window._linkedin_data_partner_ids.push(_linkedin_partner_id); </script> <script> if ( !( ["localhost", "huggingface.test"].includes( window.location.hostname ) || window.location.hostname.includes("ngrok.io") ) ) { (function (i, s, o, g, r, a, m) { i["GoogleAnalyticsObject"] = r; (i[r] = i[r] || function () { (i[r].q = i[r].q || []).push(arguments); }), (i[r].l = 1 * new Date()); (a = s.createElement(o)), (m = s.getElementsByTagName(o)[0]); a.async = 1; a.src = g; m.parentNode.insertBefore(a, m); })( window, document, "script", "https://www.google-analytics.com/analytics.js", "ganalytics" ); ganalytics("create", "UA-83738774-2", "auto"); ganalytics("send", "pageview"); /// LinkedIn (part 2) (function (l) { if (!l) { window.lintrk = function (a, b) { window.lintrk.q.push([a, b]); }; window.lintrk.q = []; } var s = document.getElementsByTagName("script")[0]; var b = document.createElement("script"); b.type = "text/javascript"; b.async = true; b.src = "https://snap.licdn.com/li.lms-analytics/insight.min.js"; s.parentNode.insertBefore(b, s); })(window.lintrk); /// Twitter !(function (e, t, n, s, u, a) { e.twq || ((s = e.twq = function () { s.exe ? 
s.exe.apply(s, arguments) : s.queue.push(arguments); }), (s.version = "1.1"), (s.queue = []), (u = t.createElement(n)), (u.async = !0), (u.src = "//static.ads-twitter.com/uwt.js"), (a = t.getElementsByTagName(n)[0]), a.parentNode.insertBefore(u, a)); })(window, document, "script"); twq("init", "o6bfm"); twq("track", "PageView"); } </script> <noscript> <!-- LinkedIn (part 3) --> <img height="1" width="1" style="display: none" alt="" src="https://px.ads.linkedin.com/collect/?pid=3734489&fmt=gif" /> </noscript> </body> </html>
openclimatefix
null
@InProceedings{uk_pv, title = {UK PV solar generation dataset}, author={Open Climate Fix. }, year={2022} }
# UK PV dataset

PV solar generation data from the UK. This dataset contains data from 1311 PV systems from 2018-01-01 to 2021-10-27. The time series of solar generation is in 5-minute chunks.

This data is collected from live PV systems in the UK. We have obfuscated the location of the PV systems for privacy. If you are the owner of a PV system in the dataset, and do not want this data to be shared, please do get in contact with info@openclimatefix.org.

## Files

The dataset contains two files:

- metadata.csv: Data about the PV systems, e.g. location
- pv.netcdf: Time series of PV solar generation

### metadata.csv

Metadata of the different PV systems. Note that there are extra PV systems in this metadata that do not appear in the PV time-series data.

The csv columns are:

- ss_id: the id of the system
- latitude_rounded: latitude of the PV system, but rounded to approximately the nearest km
- longitude_rounded: longitude of the PV system, but rounded to approximately the nearest km
- llsoacd: TODO
- orientation: the orientation of the PV system
- tilt: the tilt of the PV system
- kwp: the capacity of the PV system
- operational_at: the datetime the PV system started working

### pv.netcdf

Time series data of PV solar generation, in an [xarray](https://docs.xarray.dev/en/stable/) format. The data variables are named after the 'ss_id's in the metadata. Each data variable contains the solar generation (in kW) for that PV system. The ss_id's here are a subset of all the ss_id's in the metadata.

The coordinate of the data is 'datetime', which is the datetime of the solar generation reading.
false
1
false
openclimatefix/uk_pv
2022-09-12T20:33:40.000Z
null
false
3bfb06c6fa454ad282aaf417ea9b56b4c3fc348d
[]
[]
https://huggingface.co/datasets/openclimatefix/uk_pv/resolve/main/README.md
# UK PV dataset PV solar generation data from the UK. This dataset contains data from 1311 PV systems from 2018-01-01 to 2021-10-27. The time series of solar generation is in 5-minute chunks. This data is collected from live PV systems in the UK. We have obfuscated the location of the PV systems for privacy. If you are the owner of a PV system in the dataset, and do not want this data to be shared, please do get in contact with info@openclimatefix.org. ## Files The dataset contains two files: - metadata.csv: Data about the PV systems, e.g. location - pv.netcdf: Time series of PV solar generation ### metadata.csv Metadata of the different PV systems. Note that there are extra PV systems in this metadata that do not appear in the PV time-series data. The csv columns are: - ss_id: the id of the system - latitude_rounded: latitude of the PV system, rounded to approximately the nearest km - longitude_rounded: longitude of the PV system, rounded to approximately the nearest km - llsoacd: TODO - orientation: The orientation of the PV system - tilt: The tilt of the PV system - kwp: The capacity of the PV system - operational_at: the datetime the PV system started working ### pv.netcdf Time series data of PV solar generation is in an [xarray](https://docs.xarray.dev/en/stable/) format. The data variables are the same as 'ss_id' in the metadata. Each data variable contains the solar generation (in kW) for that PV system. The ss_id's here are a subset of all the ss_id's in the metadata. The coordinates of the data are tagged as 'datetime', which is the datetime of the solar generation reading.
## example using Hugging Face Datasets ```python from datasets import load_dataset dataset = load_dataset("openclimatefix/uk_pv") ``` # raw example ```python import xarray as xr import pandas as pd # load the metadata metadata_df = pd.read_csv('metadata.csv') # load the time-series data pv_power = xr.open_dataset("pv.netcdf", engine="h5netcdf") # take one PV system, and one day of data one_pv_system = pv_power['10003'].to_dataframe() one_pv_system = one_pv_system[one_pv_system.index < '2021-06-02'] one_pv_system = one_pv_system[one_pv_system.index > '2021-06-01'] ``` ## useful links https://huggingface.co/docs/datasets/share - this repo was made by following this tutorial
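Since the readings come in 5-minute chunks, a common first step is resampling a system's series to hourly means. A minimal sketch using a synthetic stand-in series in the same shape as one `pv.netcdf` variable (the values, and the idea of using ss_id `'10003'`, are illustrative; real data comes from the file):

```python
import numpy as np
import pandas as pd

# Synthetic stand-in for one PV system's 5-minute readings: one day,
# 288 samples, generation in kW. Real values would come from pv.netcdf.
index = pd.date_range("2021-06-01", periods=288, freq="5min")
generation_kw = pd.Series(
    np.clip(np.sin(np.linspace(0, np.pi, 288)) * 3.5, 0, None), index=index
)

def resample_to_hourly(series: pd.Series) -> pd.Series:
    """Average 5-minute generation readings into hourly means (kW)."""
    return series.resample("1h").mean()

hourly = resample_to_hourly(generation_kw)
print(len(hourly))  # 24 hourly values for one day
```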
GEM-submissions
null
null
null
false
1
false
GEM-submissions/lewtun__this-is-a-test__1647263213
2022-03-14T13:06:58.000Z
null
false
090cbc0841fe628b18037e73de742959bffaec77
[]
[ "benchmark:gem", "type:prediction", "submission_name:This is a test", "tags:evaluation", "tags:benchmark" ]
https://huggingface.co/datasets/GEM-submissions/lewtun__this-is-a-test__1647263213/resolve/main/README.md
--- benchmark: gem type: prediction submission_name: This is a test tags: - evaluation - benchmark --- # GEM Submission Submission name: This is a test
marsyas
null
@misc{tzanetakis_essl_cook_2001, author = "Tzanetakis, George and Essl, Georg and Cook, Perry", title = "Automatic Musical Genre Classification Of Audio Signals", url = "http://ismir2001.ismir.net/pdf/tzanetakis.pdf", publisher = "The International Society for Music Information Retrieval", year = "2001" }
GTZAN is a dataset for musical genre classification of audio signals. The dataset consists of 1,000 audio tracks, each 30 seconds long. It contains 10 genres, each represented by 100 tracks. The tracks are all 22,050 Hz mono 16-bit audio files in WAV format. The genres are: blues, classical, country, disco, hiphop, jazz, metal, pop, reggae, and rock.
false
34
false
marsyas/gtzan
2022-11-06T20:34:20.000Z
null
false
7ea28ac19cd3ba9924de1940b9840be4f1419f8f
[]
[]
https://huggingface.co/datasets/marsyas/gtzan/resolve/main/README.md
--- pretty_name: GTZAN --- # Dataset Card for GTZAN ## Table of Contents - [Dataset Card for GTZAN](#dataset-card-for-gtzan) - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Initial Data Collection and Normalization](#initial-data-collection-and-normalization) - [Who are the source language producers?](#who-are-the-source-language-producers) - [Annotations](#annotations) - [Annotation process](#annotation-process) - [Who are the annotators?](#who-are-the-annotators) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [http://marsyas.info/downloads/datasets.html](http://marsyas.info/downloads/datasets.html) - **Paper:** [http://ismir2001.ismir.net/pdf/tzanetakis.pdf](http://ismir2001.ismir.net/pdf/tzanetakis.pdf) - **Point of Contact:** ### Dataset Summary GTZAN is a dataset for musical genre classification of audio signals. The dataset consists of 1,000 audio tracks, each 30 seconds long. It contains 10 genres, each represented by 100 tracks. The tracks are all 22,050 Hz mono 16-bit audio files in WAV format. The genres are: blues, classical, country, disco, hiphop, jazz, metal, pop, reggae, and rock.
### Languages English ## Dataset Structure GTZAN is distributed as a single dataset without a predefined training and test split. The information below refers to the single `train` split that is assigned by default. ### Data Instances An example of GTZAN looks as follows: ```python { "file": "/path/to/cache/genres/blues/blues.00000.wav", "audio": { "path": "/path/to/cache/genres/blues/blues.00000.wav", "array": array( [ 0.00732422, 0.01660156, 0.00762939, ..., -0.05560303, -0.06106567, -0.06417847, ], dtype=float32, ), "sampling_rate": 22050, }, "genre": 0, } ``` ### Data Fields The types associated with each of the data fields are as follows: * `file`: a `string` feature. * `audio`: an `Audio` feature containing the `path` of the sound file, the decoded waveform in the `array` field, and the `sampling_rate`. * `genre`: a `ClassLabel` feature. ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? 
[More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information ``` @misc{tzanetakis_essl_cook_2001, author = "Tzanetakis, George and Essl, Georg and Cook, Perry", title = "Automatic Musical Genre Classification Of Audio Signals", url = "http://ismir2001.ismir.net/pdf/tzanetakis.pdf", publisher = "The International Society for Music Information Retrieval", year = "2001" } ``` ### Contributions Thanks to [@lewtun](https://github.com/lewtun) for adding this dataset.
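Given that each track is 30 seconds of 22,050 Hz mono audio and `genre` is an integer `ClassLabel`, both fields can be sanity-checked with a couple of pure helpers. A minimal sketch — note the genre ordering below is an assumption based on the alphabetical listing in the summary, not taken from the loader:

```python
# Assumed ClassLabel ordering, following the alphabetical genre list above.
GENRES = ["blues", "classical", "country", "disco", "hiphop",
          "jazz", "metal", "pop", "reggae", "rock"]

def genre_name(label: int) -> str:
    """Map the integer `genre` field to a genre string."""
    return GENRES[label]

def duration_seconds(num_samples: int, sampling_rate: int = 22050) -> float:
    """Duration of a decoded waveform, e.g. len(sample['audio']['array'])."""
    return num_samples / sampling_rate

print(genre_name(0))                 # blues (under the ordering assumed above)
print(duration_seconds(30 * 22050))  # 30.0
```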
GEM
null
@inproceedings{perez2021models, title={Models and Datasets for Cross-Lingual Summarisation}, author={Perez-Beltrachini, Laura and Lapata, Mirella}, booktitle={Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing}, pages={9408--9423}, year={2021} }
The XWikis Corpus (Perez-Beltrachini and Lapata, 2021) provides datasets with different language pairs and directions for cross-lingual abstractive document summarisation. The current version includes four languages: English, German, French, and Czech. The dataset is derived from Wikipedia. It is based on the observation that for a Wikipedia title, the lead section provides an overview conveying salient information, while the body provides detailed information. It thus takes the body and lead paragraph as a document-summary pair. Furthermore, as a Wikipedia title can be associated with Wikipedia articles in various languages, 1) Wikipedia’s Interlanguage Links are used to find titles across languages and 2) given any two related Wikipedia titles, e.g., Huile d’Olive (French) and Olive Oil (English), the lead paragraph from one title is paired with the body of the other to derive cross-lingual pairs.
false
267
false
GEM/xwikis
2022-11-04T23:20:08.000Z
null
false
61d421611a79444616a68041d0889633547b088b
[]
[ "arxiv:2202.09583", "annotations_creators:found", "language_creators:unknown", "language:de", "language:en", "language:fr", "language:cs", "license:cc-by-sa-4.0", "multilinguality:unknown", "size_categories:unknown", "source_datasets:original", "task_categories:summarization" ]
https://huggingface.co/datasets/GEM/xwikis/resolve/main/README.md
--- annotations_creators: - found language_creators: - unknown language: - de - en - fr - cs license: - cc-by-sa-4.0 multilinguality: - unknown size_categories: - unknown source_datasets: - original task_categories: - summarization task_ids: [] pretty_name: xwikis --- # Dataset Card for GEM/xwikis ## Dataset Description - **Homepage:** https://github.com/lauhaide/clads - **Repository:** [Needs More Information] - **Paper:** https://arxiv.org/abs/2202.09583 - **Leaderboard:** N/A - **Point of Contact:** Laura Perez-Beltrachini ### Link to Main Data Card You can find the main data card on the [GEM Website](https://gem-benchmark.com/data_cards/xwikis). ### Dataset Summary The XWikis Corpus provides datasets with different language pairs and directions for cross-lingual and multi-lingual abstractive document summarisation. You can load the dataset via: ``` import datasets data = datasets.load_dataset('GEM/xwikis') ``` The data loader can be found [here](https://huggingface.co/datasets/GEM/xwikis). #### website [Github](https://github.com/lauhaide/clads) #### paper https://arxiv.org/abs/2202.09583 #### authors Laura Perez-Beltrachini (University of Edinburgh) ## Dataset Overview ### Where to find the Data and its Documentation #### Webpage <!-- info: What is the webpage for the dataset (if it exists)? --> <!-- scope: telescope --> [Github](https://github.com/lauhaide/clads) #### Paper <!-- info: What is the link to the paper describing the dataset (open access preferred)? --> <!-- scope: telescope --> https://arxiv.org/abs/2202.09583 #### BibTex <!-- info: Provide the BibTex-formatted reference for the dataset. Please use the correct published version (ACL anthology, etc.) instead of google scholar created Bibtex. 
--> <!-- scope: microscope --> ``` @InProceedings{clads-emnlp, author = "Laura Perez-Beltrachini and Mirella Lapata", title = "Models and Datasets for Cross-Lingual Summarisation", booktitle = "Proceedings of The 2021 Conference on Empirical Methods in Natural Language Processing ", year = "2021", address = "Punta Cana, Dominican Republic", } ``` #### Contact Name <!-- quick --> <!-- info: If known, provide the name of at least one person the reader can contact for questions about the dataset. --> <!-- scope: periscope --> Laura Perez-Beltrachini #### Contact Email <!-- info: If known, provide the email of at least one person the reader can contact for questions about the dataset. --> <!-- scope: periscope --> lperez@ed.ac.uk #### Has a Leaderboard? <!-- info: Does the dataset have an active leaderboard? --> <!-- scope: telescope --> no ### Languages and Intended Use #### Multilingual? <!-- quick --> <!-- info: Is the dataset multilingual? --> <!-- scope: telescope --> yes #### Covered Languages <!-- quick --> <!-- info: What languages/dialects are covered in the dataset? --> <!-- scope: telescope --> `German`, `English`, `French`, `Czech`, `Chinese` #### License <!-- quick --> <!-- info: What is the license of the dataset? --> <!-- scope: telescope --> cc-by-sa-4.0: Creative Commons Attribution Share Alike 4.0 International #### Intended Use <!-- info: What is the intended use of the dataset? --> <!-- scope: microscope --> Cross-lingual and Multi-lingual single long input document abstractive summarisation. #### Primary Task <!-- info: What primary task does the dataset support? --> <!-- scope: telescope --> Summarization #### Communicative Goal <!-- quick --> <!-- info: Provide a short description of the communicative goal of a model trained for this task on this dataset. --> <!-- scope: periscope --> Entity descriptive summarisation, that is, generate a summary that conveys the most salient facts of a document related to a given entity. 
### Credit #### Curation Organization Type(s) <!-- info: In what kind of organization did the dataset curation happen? --> <!-- scope: telescope --> `academic` #### Dataset Creators <!-- info: Who created the original dataset? List the people involved in collecting the dataset and their affiliation(s). --> <!-- scope: microscope --> Laura Perez-Beltrachini (University of Edinburgh) #### Who added the Dataset to GEM? <!-- info: Who contributed to the data card and adding the dataset to GEM? List the people+affiliations involved in creating this data card and who helped integrate this dataset into GEM. --> <!-- scope: microscope --> Laura Perez-Beltrachini (University of Edinburgh) and Ronald Cardenas (University of Edinburgh) ### Dataset Structure #### Data Splits <!-- info: Describe and name the splits in the dataset if there are more than one. --> <!-- scope: periscope --> For each language pair and direction there exists a train/valid/test split. The test split is a sample of size 7k from the intersection of titles existing in the four languages (cs,fr,en,de). Train/valid are randomly split. ## Dataset in GEM ### Rationale for Inclusion in GEM #### Similar Datasets <!-- info: Do other datasets for the high level task exist? --> <!-- scope: telescope --> no ### GEM-Specific Curation #### Modified for GEM? <!-- info: Has the GEM version of the dataset been modified in any way (data, processing, splits) from the original curated data? --> <!-- scope: telescope --> no #### Additional Splits? <!-- info: Does GEM provide additional splits to the dataset? --> <!-- scope: telescope --> no ### Getting Started with the Task ## Previous Results ### Previous Results #### Measured Model Abilities <!-- info: What aspect of model ability can be measured with this dataset? 
--> <!-- scope: telescope --> - identification of entity salient information - translation - multi-linguality - cross-lingual transfer, zero-shot, few-shot #### Metrics <!-- info: What metrics are typically used for this task? --> <!-- scope: periscope --> `ROUGE` #### Previous results available? <!-- info: Are previous results available? --> <!-- scope: telescope --> yes #### Other Evaluation Approaches <!-- info: What evaluation approaches have others used? --> <!-- scope: periscope --> ROUGE-1/2/L ## Dataset Curation ### Original Curation #### Sourced from Different Sources <!-- info: Is the dataset aggregated from different data sources? --> <!-- scope: telescope --> no ### Language Data #### How was Language Data Obtained? <!-- info: How was the language data obtained? --> <!-- scope: telescope --> `Found` #### Where was it found? <!-- info: If found, where from? --> <!-- scope: telescope --> `Single website` #### Data Validation <!-- info: Was the text validated by a different worker or a data curator? --> <!-- scope: telescope --> other #### Was Data Filtered? <!-- info: Were text instances selected or filtered? --> <!-- scope: telescope --> not filtered ### Structured Annotations #### Additional Annotations? <!-- quick --> <!-- info: Does the dataset have additional annotations for each instance? --> <!-- scope: telescope --> found #### Annotation Service? <!-- info: Was an annotation service used? --> <!-- scope: telescope --> no #### Annotation Values <!-- info: Purpose and values for each annotation --> <!-- scope: microscope --> The input documents have section structure information. #### Any Quality Control? <!-- info: Quality control measures? --> <!-- scope: telescope --> validated by another rater #### Quality Control Details <!-- info: Describe the quality control measures that were taken. --> <!-- scope: microscope --> Bilingual annotators assessed the content overlap of source document and target summaries. ### Consent #### Any Consent Policy? 
<!-- info: Was there a consent policy involved when gathering the data? --> <!-- scope: telescope --> no ### Private Identifying Information (PII) #### Contains PII? <!-- quick --> <!-- info: Does the source language data likely contain Personal Identifying Information about the data creators or subjects? --> <!-- scope: telescope --> no PII ### Maintenance #### Any Maintenance Plan? <!-- info: Does the original dataset have a maintenance plan? --> <!-- scope: telescope --> no ## Broader Social Context ### Previous Work on the Social Impact of the Dataset #### Usage of Models based on the Data <!-- info: Are you aware of cases where models trained on the task featured in this dataset or related tasks have been used in automated systems? --> <!-- scope: telescope --> no ### Impact on Under-Served Communities #### Addresses needs of underserved Communities? <!-- info: Does this dataset address the needs of communities that are traditionally underserved in language technology, and particularly language generation technology? Communities may be underserved for example because their language, language variety, or social or geographical context is underrepresented in NLP and NLG resources (datasets and models). --> <!-- scope: telescope --> no ### Discussion of Biases #### Any Documented Social Biases? <!-- info: Are there documented social biases in the dataset? Biases in this context are variations in the ways members of different social categories are represented that can have harmful downstream consequences for members of the more disadvantaged group. --> <!-- scope: telescope --> no ## Considerations for Using the Data ### PII Risks and Liability ### Licenses #### Copyright Restrictions on the Dataset <!-- info: Based on your answers in the Intended Use part of the Data Overview Section, which of the following best describe the copyright and licensing status of the dataset? 
--> <!-- scope: periscope --> `public domain` #### Copyright Restrictions on the Language Data <!-- info: Based on your answers in the Language part of the Data Curation Section, which of the following best describe the copyright and licensing status of the underlying language data? --> <!-- scope: periscope --> `public domain` ### Known Technical Limitations
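The corpus construction described in the summary — pairing the lead section (summary) in one language with the article body (document) in another for aligned Wikipedia titles — can be sketched as a simple dictionary operation. The strings below are toy placeholders; the real pipeline works over Interlanguage-Link-aligned titles:

```python
def make_cross_lingual_pair(leads, bodies, src_lang, tgt_lang):
    """Pair the body (document) in src_lang with the lead (summary) in
    tgt_lang, following the XWikis construction described above."""
    return {"document": bodies[src_lang], "summary": leads[tgt_lang]}

# Toy example for one aligned title ("Huile d'Olive" / "Olive Oil"):
leads = {"en": "Olive oil is a fat obtained from olives.",
         "fr": "L'huile d'olive est une graisse extraite des olives."}
bodies = {"en": "Olive oil is produced by pressing whole olives...",
          "fr": "L'huile d'olive est produite en pressant des olives..."}

pair = make_cross_lingual_pair(leads, bodies, src_lang="fr", tgt_lang="en")
print(pair["summary"])  # the English lead, paired with the French body
```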
cgarciae
null
null
Cartoon Set is a collection of random, 2D cartoon avatar images. The cartoons vary in 10 artwork categories, 4 color categories, and 4 proportion categories, with a total of ~10^13 possible combinations. We provide sets of 10k and 100k randomly chosen cartoons and labeled attributes.
false
39
false
cgarciae/cartoonset
2022-03-23T19:12:10.000Z
null
false
6e8665ced0dc6c8f274e1e496a2187b11fe0832d
[]
[ "arxiv:1711.05139", "size_categories:10K<n<100K", "license:cc-by-4.0" ]
https://huggingface.co/datasets/cgarciae/cartoonset/resolve/main/README.md
--- pretty_name: Cartoon Set size_categories: - 10K<n<100K task_categories: - image - computer-vision - generative-modelling license: cc-by-4.0 --- # Dataset Card for Cartoon Set ## Table of Contents - [Dataset Card for Cartoon Set](#dataset-card-for-cartoon-set) - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Usage](#usage) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://google.github.io/cartoonset/ - **Repository:** https://github.com/google/cartoonset/ - **Paper:** XGAN: Unsupervised Image-to-Image Translation for Many-to-Many Mappings - **Leaderboard:** - **Point of Contact:** ### Dataset Summary ![Cartoon Set sample image](https://huggingface.co/datasets/cgarciae/cartoonset/resolve/main/sample.png) [Cartoon Set](https://google.github.io/cartoonset/) is a collection of random, 2D cartoon avatar images. The cartoons vary in 10 artwork categories, 4 color categories, and 4 proportion categories, with a total of ~10^13 possible combinations. We provide sets of 10k and 100k randomly chosen cartoons and labeled attributes. #### Usage `cartoonset` provides the images as PNG byte strings; this gives you a bit more flexibility in how to load the data. Here we show two ways: **Using PIL:** ```python import datasets from io import BytesIO from PIL import Image ds = datasets.load_dataset("cgarciae/cartoonset", "10k") # or "100k" def process_fn(sample): img = Image.open(BytesIO(sample["img_bytes"])) ... 
return {"img": img} ds = ds.map(process_fn, remove_columns=["img_bytes"]) ``` **Using TensorFlow:** ```python import datasets import tensorflow as tf hfds = datasets.load_dataset("cgarciae/cartoonset", "10k") # or "100k" ds = tf.data.Dataset.from_generator( lambda: hfds, output_signature={ "img_bytes": tf.TensorSpec(shape=(), dtype=tf.string), }, ) def process_fn(sample): img = tf.image.decode_png(sample["img_bytes"], channels=3) ... return {"img": img} ds = ds.map(process_fn) ``` **Additional features:** You can also access the features that generated each sample e.g: ```python ds = datasets.load_dataset("cgarciae/cartoonset", "10k+features") # or "100k+features" ``` Apart from `img_bytes` these configurations add a total of 18 * 2 additional `int` features, these come in `{feature}`, `{feature}_num_categories` pairs where `num_categories` indicates the number of categories for that feature. See [Data Fields](#data-fields) for the complete list of features. ## Dataset Structure ### Data Instances A sample from the training set is provided below: ```python { 'img_bytes': b'0x...', } ``` If `+features` is added to the dataset name, the following additional fields are provided: ```python { 'img_bytes': b'0x...', 'eye_angle': 0, 'eye_angle_num_categories': 3, 'eye_lashes': 0, 'eye_lashes_num_categories': 2, 'eye_lid': 0, 'eye_lid_num_categories': 2, 'chin_length': 2, 'chin_length_num_categories': 3, ... } ``` ### Data Fields - `img_bytes`: A byte string containing the raw data of a 500x500 PNG image. 
If `+features` is appended to the dataset name, the following additional `int32` fields are provided: - `eye_angle` - `eye_angle_num_categories` - `eye_lashes` - `eye_lashes_num_categories` - `eye_lid` - `eye_lid_num_categories` - `chin_length` - `chin_length_num_categories` - `eyebrow_weight` - `eyebrow_weight_num_categories` - `eyebrow_shape` - `eyebrow_shape_num_categories` - `eyebrow_thickness` - `eyebrow_thickness_num_categories` - `face_shape` - `face_shape_num_categories` - `facial_hair` - `facial_hair_num_categories` - `hair` - `hair_num_categories` - `eye_color` - `eye_color_num_categories` - `face_color` - `face_color_num_categories` - `hair_color` - `hair_color_num_categories` - `glasses` - `glasses_num_categories` - `glasses_color` - `glasses_color_num_categories` - `eyes_slant` - `eye_slant_num_categories` - `eyebrow_width` - `eyebrow_width_num_categories` - `eye_eyebrow_distance` - `eye_eyebrow_distance_num_categories` ### Data Splits Train ## Dataset Creation ### Licensing Information This data is licensed by Google LLC under a Creative Commons Attribution 4.0 International License. ### Citation Information ``` @article{DBLP:journals/corr/abs-1711-05139, author = {Amelie Royer and Konstantinos Bousmalis and Stephan Gouws and Fred Bertsch and Inbar Mosseri and Forrester Cole and Kevin Murphy}, title = {{XGAN:} Unsupervised Image-to-Image Translation for many-to-many Mappings}, journal = {CoRR}, volume = {abs/1711.05139}, year = {2017}, url = {http://arxiv.org/abs/1711.05139}, eprinttype = {arXiv}, eprint = {1711.05139}, timestamp = {Mon, 13 Aug 2018 16:47:38 +0200}, biburl = {https://dblp.org/rec/journals/corr/abs-1711-05139.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ``` ### Contributions
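The `{feature}` / `{feature}_num_categories` pairing described above lends itself to one-hot encoding of the attribute labels. A minimal sketch over a toy `+features`-style sample (only the pairing convention comes from the card; the field values are illustrative):

```python
def one_hot(value: int, num_categories: int) -> list:
    """Encode one categorical feature value as a one-hot vector."""
    vec = [0] * num_categories
    vec[value] = 1
    return vec

def encode_features(sample: dict) -> dict:
    """One-hot encode every `{feature}` using its `_num_categories` pair."""
    encoded = {}
    for key, value in sample.items():
        if key == "img_bytes" or key.endswith("_num_categories"):
            continue
        encoded[key] = one_hot(value, sample[key + "_num_categories"])
    return encoded

sample = {"eye_angle": 0, "eye_angle_num_categories": 3,
          "chin_length": 2, "chin_length_num_categories": 3}
print(encode_features(sample))
# {'eye_angle': [1, 0, 0], 'chin_length': [0, 0, 1]}
```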
PradeepReddyThathireddy
null
null
null
false
1
false
PradeepReddyThathireddy/Inspiring_Content_Detection_Dataset
2022-03-23T07:35:15.000Z
null
false
9be08cd250913eb5d15f945d18aa485e01087d20
[]
[]
https://huggingface.co/datasets/PradeepReddyThathireddy/Inspiring_Content_Detection_Dataset/resolve/main/README.md
null
null
@inproceedings{pradhan-etal-2013-towards, title = "Towards Robust Linguistic Analysis using {O}nto{N}otes", author = {Pradhan, Sameer and Moschitti, Alessandro and Xue, Nianwen and Ng, Hwee Tou and Bj{\"o}rkelund, Anders and Uryupina, Olga and Zhang, Yuchen and Zhong, Zhi}, booktitle = "Proceedings of the Seventeenth Conference on Computational Natural Language Learning", month = aug, year = "2013", address = "Sofia, Bulgaria", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/W13-3516", pages = "143--152", } Ralph Weischedel, Martha Palmer, Mitchell Marcus, Eduard Hovy, Sameer Pradhan, Lance Ramshaw, Nianwen Xue, Ann Taylor, Jeff Kaufman, Michelle Franchini, Mohammed El-Bachouti, Robert Belvin, Ann Houston. OntoNotes Release 5.0 LDC2013T19. Web Download. Philadelphia: Linguistic Data Consortium, 2013.
OntoNotes v5.0 is the final version of the OntoNotes corpus, and is a large-scale, multi-genre, multilingual corpus manually annotated with syntactic, semantic and discourse information. This dataset is the extended version of OntoNotes v5.0 used in the CoNLL-2012 shared task. It includes v4 train/dev and v9 test data for English/Chinese/Arabic and corrected version v12 train/dev/test data (English only). The source of data is the Mendeley Data repo [ontonotes-conll2012](https://data.mendeley.com/datasets/zmycy7t9h9), which seems to be the same as the official data, but users should use this dataset at their own risk. See also summaries from Papers with Code, [OntoNotes 5.0](https://paperswithcode.com/dataset/ontonotes-5-0) and [CoNLL-2012](https://paperswithcode.com/dataset/conll-2012-1). For more detailed info of the dataset like annotation, tag set, etc., you can refer to the documents in the Mendeley repo mentioned above.
false
916
false
conll2012_ontonotesv5
2022-11-03T16:31:34.000Z
ontonotes-5-0
false
5f12f57d6b2d1f4e96fa6bcffae026313249d82f
[]
[ "annotations_creators:expert-generated", "language_creators:found", "language:ar", "language:en", "language:zh", "license:cc-by-nc-nd-4.0", "multilinguality:multilingual", "size_categories:10K<n<100K", "source_datasets:original", "task_categories:token-classification", "task_ids:named-entity-rec...
https://huggingface.co/datasets/conll2012_ontonotesv5/resolve/main/README.md
--- annotations_creators: - expert-generated language_creators: - found language: - ar - en - zh license: - cc-by-nc-nd-4.0 multilinguality: - multilingual paperswithcode_id: ontonotes-5-0 pretty_name: CoNLL2012 shared task data based on OntoNotes 5.0 size_categories: - 10K<n<100K source_datasets: - original task_categories: - token-classification task_ids: - named-entity-recognition - part-of-speech - coreference-resolution - parsing - lemmatization - word-sense-disambiguation tags: - semantic-role-labeling dataset_info: - config_name: english_v4 features: - name: document_id dtype: string - name: sentences list: - name: part_id dtype: int32 - name: words sequence: string - name: pos_tags sequence: class_label: names: 0: XX 1: '``' 2: $ 3: '''''' 4: ',' 5: -LRB- 6: -RRB- 7: . 8: ':' 9: ADD 10: AFX 11: CC 12: CD 13: DT 14: EX 15: FW 16: HYPH 17: IN 18: JJ 19: JJR 20: JJS 21: LS 22: MD 23: NFP 24: NN 25: NNP 26: NNPS 27: NNS 28: PDT 29: POS 30: PRP 31: PRP$ 32: RB 33: RBR 34: RBS 35: RP 36: SYM 37: TO 38: UH 39: VB 40: VBD 41: VBG 42: VBN 43: VBP 44: VBZ 45: WDT 46: WP 47: WP$ 48: WRB - name: parse_tree dtype: string - name: predicate_lemmas sequence: string - name: predicate_framenet_ids sequence: string - name: word_senses sequence: float32 - name: speaker dtype: string - name: named_entities sequence: class_label: names: 0: O 1: B-PERSON 2: I-PERSON 3: B-NORP 4: I-NORP 5: B-FAC 6: I-FAC 7: B-ORG 8: I-ORG 9: B-GPE 10: I-GPE 11: B-LOC 12: I-LOC 13: B-PRODUCT 14: I-PRODUCT 15: B-DATE 16: I-DATE 17: B-TIME 18: I-TIME 19: B-PERCENT 20: I-PERCENT 21: B-MONEY 22: I-MONEY 23: B-QUANTITY 24: I-QUANTITY 25: B-ORDINAL 26: I-ORDINAL 27: B-CARDINAL 28: I-CARDINAL 29: B-EVENT 30: I-EVENT 31: B-WORK_OF_ART 32: I-WORK_OF_ART 33: B-LAW 34: I-LAW 35: B-LANGUAGE 36: I-LANGUAGE - name: srl_frames list: - name: verb dtype: string - name: frames sequence: string - name: coref_spans sequence: sequence: int32 length: 3 splits: - name: test num_bytes: 14709044 num_examples: 222 - name: 
train num_bytes: 112246121 num_examples: 1940 - name: validation num_bytes: 14116925 num_examples: 222 download_size: 193644139 dataset_size: 141072090 - config_name: chinese_v4 features: - name: document_id dtype: string - name: sentences list: - name: part_id dtype: int32 - name: words sequence: string - name: pos_tags sequence: class_label: names: 0: X 1: AD 2: AS 3: BA 4: CC 5: CD 6: CS 7: DEC 8: DEG 9: DER 10: DEV 11: DT 12: ETC 13: FW 14: IJ 15: INF 16: JJ 17: LB 18: LC 19: M 20: MSP 21: NN 22: NR 23: NT 24: OD 25: 'ON' 26: P 27: PN 28: PU 29: SB 30: SP 31: URL 32: VA 33: VC 34: VE 35: VV - name: parse_tree dtype: string - name: predicate_lemmas sequence: string - name: predicate_framenet_ids sequence: string - name: word_senses sequence: float32 - name: speaker dtype: string - name: named_entities sequence: class_label: names: 0: O 1: B-PERSON 2: I-PERSON 3: B-NORP 4: I-NORP 5: B-FAC 6: I-FAC 7: B-ORG 8: I-ORG 9: B-GPE 10: I-GPE 11: B-LOC 12: I-LOC 13: B-PRODUCT 14: I-PRODUCT 15: B-DATE 16: I-DATE 17: B-TIME 18: I-TIME 19: B-PERCENT 20: I-PERCENT 21: B-MONEY 22: I-MONEY 23: B-QUANTITY 24: I-QUANTITY 25: B-ORDINAL 26: I-ORDINAL 27: B-CARDINAL 28: I-CARDINAL 29: B-EVENT 30: I-EVENT 31: B-WORK_OF_ART 32: I-WORK_OF_ART 33: B-LAW 34: I-LAW 35: B-LANGUAGE 36: I-LANGUAGE - name: srl_frames list: - name: verb dtype: string - name: frames sequence: string - name: coref_spans sequence: sequence: int32 length: 3 splits: - name: test num_bytes: 9585138 num_examples: 166 - name: train num_bytes: 77195698 num_examples: 1391 - name: validation num_bytes: 10828169 num_examples: 172 download_size: 193644139 dataset_size: 97609005 - config_name: arabic_v4 features: - name: document_id dtype: string - name: sentences list: - name: part_id dtype: int32 - name: words sequence: string - name: pos_tags sequence: string - name: parse_tree dtype: string - name: predicate_lemmas sequence: string - name: predicate_framenet_ids sequence: string - name: word_senses sequence: float32 - 
name: speaker dtype: string - name: named_entities sequence: class_label: names: 0: O 1: B-PERSON 2: I-PERSON 3: B-NORP 4: I-NORP 5: B-FAC 6: I-FAC 7: B-ORG 8: I-ORG 9: B-GPE 10: I-GPE 11: B-LOC 12: I-LOC 13: B-PRODUCT 14: I-PRODUCT 15: B-DATE 16: I-DATE 17: B-TIME 18: I-TIME 19: B-PERCENT 20: I-PERCENT 21: B-MONEY 22: I-MONEY 23: B-QUANTITY 24: I-QUANTITY 25: B-ORDINAL 26: I-ORDINAL 27: B-CARDINAL 28: I-CARDINAL 29: B-EVENT 30: I-EVENT 31: B-WORK_OF_ART 32: I-WORK_OF_ART 33: B-LAW 34: I-LAW 35: B-LANGUAGE 36: I-LANGUAGE - name: srl_frames list: - name: verb dtype: string - name: frames sequence: string - name: coref_spans sequence: sequence: int32 length: 3 splits: - name: test num_bytes: 4900664 num_examples: 44 - name: train num_bytes: 42017761 num_examples: 359 - name: validation num_bytes: 4859292 num_examples: 44 download_size: 193644139 dataset_size: 51777717 - config_name: english_v12 features: - name: document_id dtype: string - name: sentences list: - name: part_id dtype: int32 - name: words sequence: string - name: pos_tags sequence: class_label: names: 0: XX 1: '``' 2: $ 3: '''''' 4: '*' 5: ',' 6: -LRB- 7: -RRB- 8: . 
9: ':' 10: ADD 11: AFX 12: CC 13: CD 14: DT 15: EX 16: FW 17: HYPH 18: IN 19: JJ 20: JJR 21: JJS 22: LS 23: MD 24: NFP 25: NN 26: NNP 27: NNPS 28: NNS 29: PDT 30: POS 31: PRP 32: PRP$ 33: RB 34: RBR 35: RBS 36: RP 37: SYM 38: TO 39: UH 40: VB 41: VBD 42: VBG 43: VBN 44: VBP 45: VBZ 46: VERB 47: WDT 48: WP 49: WP$ 50: WRB - name: parse_tree dtype: string - name: predicate_lemmas sequence: string - name: predicate_framenet_ids sequence: string - name: word_senses sequence: float32 - name: speaker dtype: string - name: named_entities sequence: class_label: names: 0: O 1: B-PERSON 2: I-PERSON 3: B-NORP 4: I-NORP 5: B-FAC 6: I-FAC 7: B-ORG 8: I-ORG 9: B-GPE 10: I-GPE 11: B-LOC 12: I-LOC 13: B-PRODUCT 14: I-PRODUCT 15: B-DATE 16: I-DATE 17: B-TIME 18: I-TIME 19: B-PERCENT 20: I-PERCENT 21: B-MONEY 22: I-MONEY 23: B-QUANTITY 24: I-QUANTITY 25: B-ORDINAL 26: I-ORDINAL 27: B-CARDINAL 28: I-CARDINAL 29: B-EVENT 30: I-EVENT 31: B-WORK_OF_ART 32: I-WORK_OF_ART 33: B-LAW 34: I-LAW 35: B-LANGUAGE 36: I-LANGUAGE - name: srl_frames list: - name: verb dtype: string - name: frames sequence: string - name: coref_spans sequence: sequence: int32 length: 3 splits: - name: test num_bytes: 18254144 num_examples: 1200 - name: train num_bytes: 174173192 num_examples: 10539 - name: validation num_bytes: 24264804 num_examples: 1370 download_size: 193644139 dataset_size: 216692140 --- # Dataset Card for CoNLL2012 shared task data based on OntoNotes 5.0 ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and 
Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [CoNLL-2012 Shared Task](https://conll.cemantix.org/2012/data.html), [Author's page](https://cemantix.org/data/ontonotes.html) - **Repository:** [Mendeley](https://data.mendeley.com/datasets/zmycy7t9h9) - **Paper:** [Towards Robust Linguistic Analysis using OntoNotes](https://aclanthology.org/W13-3516/) - **Leaderboard:** - **Point of Contact:** ### Dataset Summary OntoNotes v5.0 is the final version of the OntoNotes corpus: a large-scale, multi-genre, multilingual corpus manually annotated with syntactic, semantic and discourse information. This dataset is the extended version of OntoNotes v5.0 used in the CoNLL-2012 shared task. It includes v4 train/dev and v9 test data for English/Chinese/Arabic, and the corrected v12 train/dev/test data (English only). The source of the data is the Mendeley Data repo [ontonotes-conll2012](https://data.mendeley.com/datasets/zmycy7t9h9), which appears to be the same as the official data, but users should use this dataset at their own risk. See also the summaries on Papers with Code: [OntoNotes 5.0](https://paperswithcode.com/dataset/ontonotes-5-0) and [CoNLL-2012](https://paperswithcode.com/dataset/conll-2012-1). For more detailed information about the dataset, such as annotation and tag sets, refer to the documents in the Mendeley repo mentioned above. 
### Supported Tasks and Leaderboards - [Named Entity Recognition on Ontonotes v5 (English)](https://paperswithcode.com/sota/named-entity-recognition-ner-on-ontonotes-v5) - [Coreference Resolution on OntoNotes](https://paperswithcode.com/sota/coreference-resolution-on-ontonotes) - [Semantic Role Labeling on OntoNotes](https://paperswithcode.com/sota/semantic-role-labeling-on-ontonotes) - ... ### Languages V4 data for Arabic, Chinese, English, and V12 data for English ## Dataset Structure ### Data Instances ``` { {'document_id': 'nw/wsj/23/wsj_2311', 'sentences': [{'part_id': 0, 'words': ['CONCORDE', 'trans-Atlantic', 'flights', 'are', '$', '2, 'to', 'Paris', 'and', '$', '3, 'to', 'London', '.']}, 'pos_tags': [25, 18, 27, 43, 2, 12, 17, 25, 11, 2, 12, 17, 25, 7], 'parse_tree': '(TOP(S(NP (NNP CONCORDE) (JJ trans-Atlantic) (NNS flights) )(VP (VBP are) (NP(NP(NP ($ $) (CD 2,400) )(PP (IN to) (NP (NNP Paris) ))) (CC and) (NP(NP ($ $) (CD 3,200) )(PP (IN to) (NP (NNP London) ))))) (. .) ))', 'predicate_lemmas': [None, None, None, 'be', None, None, None, None, None, None, None, None, None, None], 'predicate_framenet_ids': [None, None, None, '01', None, None, None, None, None, None, None, None, None, None], 'word_senses': [None, None, None, None, None, None, None, None, None, None, None, None, None, None], 'speaker': None, 'named_entities': [7, 6, 0, 0, 0, 15, 0, 5, 0, 0, 15, 0, 5, 0], 'srl_frames': [{'frames': ['B-ARG1', 'I-ARG1', 'I-ARG1', 'B-V', 'B-ARG2', 'I-ARG2', 'I-ARG2', 'I-ARG2', 'I-ARG2', 'I-ARG2', 'I-ARG2', 'I-ARG2', 'I-ARG2', 'O'], 'verb': 'are'}], 'coref_spans': [], {'part_id': 0, 'words': ['In', 'a', 'Centennial', 'Journal', 'article', 'Oct.', '5', ',', 'the', 'fares', 'were', 'reversed', '.']}]} 'pos_tags': [17, 13, 25, 25, 24, 25, 12, 4, 13, 27, 40, 42, 7], 'parse_tree': '(TOP(S(PP (IN In) (NP (DT a) (NML (NNP Centennial) (NNP Journal) ) (NN article) ))(NP (NNP Oct.) (CD 5) ) (, ,) (NP (DT the) (NNS fares) )(VP (VBD were) (VP (VBN reversed) )) (. .) 
))', 'predicate_lemmas': [None, None, None, None, None, None, None, None, None, None, None, 'reverse', None], 'predicate_framenet_ids': [None, None, None, None, None, None, None, None, None, None, None, '01', None], 'word_senses': [None, None, None, None, None, None, None, None, None, None, None, None, None], 'speaker': None, 'named_entities': [0, 0, 4, 22, 0, 12, 30, 0, 0, 0, 0, 0, 0], 'srl_frames': [{'frames': ['B-ARGM-LOC', 'I-ARGM-LOC', 'I-ARGM-LOC', 'I-ARGM-LOC', 'I-ARGM-LOC', 'B-ARGM-TMP', 'I-ARGM-TMP', 'O', 'B-ARG1', 'I-ARG1', 'O', 'B-V', 'O'], 'verb': 'reversed'}], 'coref_spans': [], } ``` ### Data Fields - **`document_id`** (*`str`*): This is a variation on the document filename - **`sentences`** (*`List[Dict]`*): All sentences of the same document are in a single example for the convenience of concatenating sentences. Every element in `sentences` is a *`Dict`* composed of the following data fields: - **`part_id`** (*`int`*) : Some files are divided into multiple parts numbered as 000, 001, 002, ... etc. - **`words`** (*`List[str]`*) : - **`pos_tags`** (*`List[ClassLabel]` or `List[str]`*) : This is the Penn-Treebank-style part of speech. When parse information is missing, all parts of speech except the one for which there is some sense or proposition annotation are marked with a XX tag. The verb is marked with just a VERB tag. - tag set : Note tag sets below are founded by scanning all the data, and I found it seems to be a little bit different from officially stated tag sets. See official documents in the [Mendeley repo](https://data.mendeley.com/datasets/zmycy7t9h9) - arabic : str. 
Because pos tag in Arabic is compounded and complex, hard to represent it by `ClassLabel` - chinese v4 : `datasets.ClassLabel(num_classes=36, names=["X", "AD", "AS", "BA", "CC", "CD", "CS", "DEC", "DEG", "DER", "DEV", "DT", "ETC", "FW", "IJ", "INF", "JJ", "LB", "LC", "M", "MSP", "NN", "NR", "NT", "OD", "ON", "P", "PN", "PU", "SB", "SP", "URL", "VA", "VC", "VE", "VV",])`, where `X` is for pos tag missing - english v4 : `datasets.ClassLabel(num_classes=49, names=["XX", "``", "$", "''", ",", "-LRB-", "-RRB-", ".", ":", "ADD", "AFX", "CC", "CD", "DT", "EX", "FW", "HYPH", "IN", "JJ", "JJR", "JJS", "LS", "MD", "NFP", "NN", "NNP", "NNPS", "NNS", "PDT", "POS", "PRP", "PRP$", "RB", "RBR", "RBS", "RP", "SYM", "TO", "UH", "VB", "VBD", "VBG", "VBN", "VBP", "VBZ", "WDT", "WP", "WP$", "WRB",])`, where `XX` is for pos tag missing, and `-LRB-`/`-RRB-` is "`(`" / "`)`". - english v12 : `datasets.ClassLabel(num_classes=51, names="english_v12": ["XX", "``", "$", "''", "*", ",", "-LRB-", "-RRB-", ".", ":", "ADD", "AFX", "CC", "CD", "DT", "EX", "FW", "HYPH", "IN", "JJ", "JJR", "JJS", "LS", "MD", "NFP", "NN", "NNP", "NNPS", "NNS", "PDT", "POS", "PRP", "PRP$", "RB", "RBR", "RBS", "RP", "SYM", "TO", "UH", "VB", "VBD", "VBG", "VBN", "VBP", "VBZ", "VERB", "WDT", "WP", "WP$", "WRB",])`, where `XX` is for pos tag missing, and `-LRB-`/`-RRB-` is "`(`" / "`)`". - **`parse_tree`** (*`Optional[str]`*) : An serialized NLTK Tree representing the parse. It includes POS tags as pre-terminal nodes. When the parse information is missing, the parse will be `None`. - **`predicate_lemmas`** (*`List[Optional[str]]`*) : The predicate lemma of the words for which we have semantic role information or word sense information. All other indices are `None`. - **`predicate_framenet_ids`** (*`List[Optional[int]]`*) : The PropBank frameset ID of the lemmas in predicate_lemmas, or `None`. - **`word_senses`** (*`List[Optional[float]]`*) : The word senses for the words in the sentence, or None. 
These are floats because the word sense can have values after the decimal, like 1.1. - **`speaker`** (*`Optional[str]`*) : This is the speaker or author name where available, mostly in Broadcast Conversation and Web Log data. When it is not available, it will be `None`. - **`named_entities`** (*`List[ClassLabel]`*) : The BIO tags for named entities in the sentence. - tag set : `datasets.ClassLabel(num_classes=37, names=["O", "B-PERSON", "I-PERSON", "B-NORP", "I-NORP", "B-FAC", "I-FAC", "B-ORG", "I-ORG", "B-GPE", "I-GPE", "B-LOC", "I-LOC", "B-PRODUCT", "I-PRODUCT", "B-DATE", "I-DATE", "B-TIME", "I-TIME", "B-PERCENT", "I-PERCENT", "B-MONEY", "I-MONEY", "B-QUANTITY", "I-QUANTITY", "B-ORDINAL", "I-ORDINAL", "B-CARDINAL", "I-CARDINAL", "B-EVENT", "I-EVENT", "B-WORK_OF_ART", "I-WORK_OF_ART", "B-LAW", "I-LAW", "B-LANGUAGE", "I-LANGUAGE",])` - **`srl_frames`** (*`List[{"verb":str, "frames":List[str]}]`*) : A list of dictionaries, one per verb in the sentence, giving the PropBank frame labels in BIO format. - **`coref_spans`** (*`List[List[int]]`*) : The spans for entity mentions involved in coreference resolution within the sentence. Each element is a tuple composed of (cluster_id, start_index, end_index). Indices are inclusive. ### Data Splits Each dataset (arabic_v4, chinese_v4, english_v4, english_v12) has 3 splits: _train_, _validation_, and _test_. ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? 
[More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information ``` @inproceedings{pradhan-etal-2013-towards, title = "Towards Robust Linguistic Analysis using {O}nto{N}otes", author = {Pradhan, Sameer and Moschitti, Alessandro and Xue, Nianwen and Ng, Hwee Tou and Bj{\"o}rkelund, Anders and Uryupina, Olga and Zhang, Yuchen and Zhong, Zhi}, booktitle = "Proceedings of the Seventeenth Conference on Computational Natural Language Learning", month = aug, year = "2013", address = "Sofia, Bulgaria", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/W13-3516", pages = "143--152", } ``` ### Contributions Thanks to [@richarddwang](https://github.com/richarddwang) for adding this dataset.
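The `coref_spans` field documented above stores flat (cluster_id, start_index, end_index) triples; a minimal sketch of regrouping them into per-cluster mention lists (the helper name and the sample triples below are hypothetical, not taken from the corpus):

```python
from collections import defaultdict

def group_coref_spans(coref_spans):
    """Group flat (cluster_id, start_index, end_index) triples, as in the
    `coref_spans` field, into clusters of inclusive word-index spans."""
    clusters = defaultdict(list)
    for cluster_id, start, end in coref_spans:
        clusters[cluster_id].append((start, end))
    return dict(clusters)

# Illustrative triples (not from the corpus):
spans = [(0, 0, 1), (0, 5, 5), (3, 2, 4)]
print(group_coref_spans(spans))  # {0: [(0, 1), (5, 5)], 3: [(2, 4)]}
```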
anjandash
null
null
null
false
1
false
anjandash/java-8m-methods-v2
2022-07-01T20:31:57.000Z
null
false
6de5f4fa6a044e79302def646a39bf2be621dac4
[]
[ "language:java", "license:mit", "multilinguality:monolingual" ]
https://huggingface.co/datasets/anjandash/java-8m-methods-v2/resolve/main/README.md
--- language: - java license: - mit multilinguality: - monolingual pretty_name: - java-8m-methods-v2 ---
victor
null
null
null
false
3
false
victor/autonlp-data-tweet-sentiment
2022-10-25T10:03:17.000Z
null
false
80ce985b32bd618df18f86436893249c60add630
[]
[ "language:en", "task_categories:text-classification" ]
https://huggingface.co/datasets/victor/autonlp-data-tweet-sentiment/resolve/main/README.md
--- language: - en task_categories: - text-classification --- # AutoNLP Dataset for project: tweet-sentiment ## Table of Contents - [Dataset Description](#dataset-description) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) ## Dataset Description This dataset has been automatically processed by AutoNLP for project tweet-sentiment. ### Languages The BCP-47 code for the dataset's language is en. ## Dataset Structure ### Data Instances A sample from this dataset looks as follows: ```json [ { "text": "I am going to see how long I can do this for.", "target": 8 }, { "text": "@anitabora yeah, right. What if our politicians start using uploading their pics, lots of inside sto[...]", "target": 8 } ] ``` ### Dataset Fields The dataset has the following fields (also called "features"): ```json { "text": "Value(dtype='string', id=None)", "target": "ClassLabel(num_classes=13, names=['anger', 'boredom', 'empty', 'enthusiasm', 'fun', 'happiness', 'hate', 'love', 'neutral', 'relief', 'sadness', 'surprise', 'worry'], id=None)" } ``` ### Dataset Splits This dataset is split into a train and validation split. The split sizes are as follows: | Split name | Num samples | | ------------ | ------------------- | | train | 31995 | | valid | 8005 |
hazal
null
null
null
false
1
false
hazal/Turkish-Biomedical-corpus-trM
2022-08-10T11:13:22.000Z
null
false
8990a6df925bf53cd9c864275703193cbfe85715
[]
[ "language:tr" ]
https://huggingface.co/datasets/hazal/Turkish-Biomedical-corpus-trM/resolve/main/README.md
--- language: - tr ---
Dayyan
null
null
null
false
1
false
Dayyan/bwns
2022-03-17T14:41:53.000Z
null
false
a6f9aa7bda62c328bd642d32316c63e3387210ec
[]
[]
https://huggingface.co/datasets/Dayyan/bwns/resolve/main/README.md
# BWNS: The Baha'i World News Service dataset. BWNS articles from 2000 to 2022.
jorge-henao
null
null
null
false
1
false
jorge-henao/disco_poetry_spanish
2022-03-17T03:19:06.000Z
null
false
689f949a36ec83a2a6f14e1fc4a52cf22a704d56
[]
[]
https://huggingface.co/datasets/jorge-henao/disco_poetry_spanish/resolve/main/README.md
# DISCO: Diachronic Spanish Sonnet Corpus [![DOI](https://zenodo.org/badge/103841064.svg)](https://zenodo.org/badge/latestdoi/103841064) The Diachronic Spanish Sonnet Corpus (DISCO) contains sonnets in Spanish, in CSV format, written between the 15th and the 20th centuries (4303 sonnets by 1215 authors from 22 different countries). It includes well-known authors as well as less canonized ones. This is a CSV compilation of the plain-text corpus v4 published on GitHub at https://github.com/pruizf/disco/tree/v4. It includes the title, author, age and text metadata.
gcaillaut
null
null
English Wikipedia dataset for Entity Linking
false
1
false
gcaillaut/enwiki_el
2022-07-04T12:36:35.000Z
null
false
fbeac939f336b47d75f06167cf339f6706fbafdc
[]
[ "annotations_creators:machine-generated", "language:en-EN", "license:wtfpl", "multilinguality:monolingual", "size_categories:unknown", "source_datasets:original", "task_categories:other" ]
https://huggingface.co/datasets/gcaillaut/enwiki_el/resolve/main/README.md
--- annotations_creators: - machine-generated language_creators: [] language: - en-EN license: - wtfpl multilinguality: - monolingual pretty_name: test size_categories: - unknown source_datasets: - original task_categories: - other task_ids: [] --- # Dataset Card for enwiki_el ## Dataset Description - Repository: [enwiki_el](https://github.com/GaaH/enwiki_el) - Point of Contact: [Gaëtan Caillaut](mailto://g.caillaut@brgm.fr) ### Dataset Summary It is intended to be used to train Entity Linking (EL) systems. Links in Wikipedia articles are used to detect named entities. ### Languages - English ## Dataset Structure ``` { "title": "Title of the page", "qid": "QID of the corresponding Wikidata entity", "words": ["tokens"], "wikipedia": ["Wikipedia description of each entity"], "labels": ["NER labels"], "titles": ["Wikipedia title of each entity"], "qids": ["QID of each entity"], } ``` The `words` field contains the article’s text split on whitespace. The other fields are lists with the same length as `words` and contain data only where the corresponding token in `words` is the __start of an entity__. For instance, if the _i-th_ token in `words` is the start of an entity, then the _i-th_ element of `wikipedia` contains a description, extracted from Wikipedia, of this entity. The same applies to the other fields. If an entity spans multiple words, then only the entry for the first word contains data. The only exception is the `labels` field, which is used to delimit entities. It uses the IOB encoding: if the token is not part of an entity, the label is `"O"`; if it is the first word of a multi-word entity, the label is `"B"`; otherwise the label is `"I"`.
crabz
null
null
null
false
1
false
crabz/stsb-sk
2022-10-23T05:13:41.000Z
null
false
1b1f1f2a456fc59a8c9260f800d7098a34183419
[]
[ "annotations_creators:other", "language_creators:other", "language:sk", "language_bcp47:sk-SK", "license:unknown", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:extended|stsb_multi_mt", "task_ids:semantic-similarity-scoring" ]
https://huggingface.co/datasets/crabz/stsb-sk/resolve/main/README.md
--- annotations_creators: - other language_creators: - other language: - sk language_bcp47: - sk-SK license: - unknown multilinguality: - monolingual pretty_name: stsb-sk size_categories: - 1K<n<10K source_datasets: - extended|stsb_multi_mt task_categories: - text-scoring task_ids: - semantic-similarity-scoring --- Retrieving the 50th example from the train set: ``` > print(dataset['train']['sentence1'][0][50]) Muž hrá na gitare. > print(dataset['train']['sentence2'][0][50]) Chlapec hrá na gitare. > print(dataset['train']['similarity_score'][0][50]) 3.200000047683716 ``` For score explanation see [stsb_multi_mt](https://huggingface.co/datasets/stsb_multi_mt).
voidful
null
null
null
false
2
false
voidful/NMSQA
2022-10-25T15:07:49.000Z
null
false
944856ecbb76416b4f41e561e780d1b68639c8b5
[]
[ "arxiv:2203.04911", "annotations_creators:crowdsourced", "annotations_creators:machine-generated", "language_creators:expert-generated", "language_creators:machine-generated", "language_creators:crowdsourced", "language:en", "multilinguality:monolingual", "size_categories:unknown", "source_dataset...
https://huggingface.co/datasets/voidful/NMSQA/resolve/main/README.md
--- annotations_creators: - crowdsourced - machine-generated language_creators: - expert-generated - machine-generated - crowdsourced language: - en license: [] multilinguality: - monolingual size_categories: - unknown source_datasets: - original task_categories: - question-answering - automatic-speech-recognition task_ids: - abstractive-qa pretty_name: NMSQA tags: - speech-recognition --- # Dataset Card for NMSQA(Natural Multi-speaker Spoken Question Answering) ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://github.com/DanielLin94144/DUAL-textless-SQA - **Repository:** https://github.com/DanielLin94144/DUAL-textless-SQA - **Paper:** https://arxiv.org/abs/2203.04911 - **Leaderboard:** - **Point of Contact:** ### Dataset Summary [More Information Needed] ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data 
Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions Thanks to [@voidful](https://github.com/voidful) for adding this dataset.
LongNN
null
null
null
false
2
false
LongNN/news_sum
2022-03-16T17:14:08.000Z
null
false
f2614cab4939062f7b9313470f297dbc7f26cf66
[]
[ "license:gpl-3.0" ]
https://huggingface.co/datasets/LongNN/news_sum/resolve/main/README.md
--- license: gpl-3.0 ---
anthonny
null
null
null
false
2
false
anthonny/hate_speech
2022-10-25T10:03:21.000Z
null
false
eed50a3535a938b051cb291cee7579376f7a7367
[]
[ "annotations_creators:found", "language_creators:crowdsourced", "language:es-EC", "license:unknown", "multilinguality:monolingual", "size_categories:unknown", "source_datasets:original", "task_categories:text-classification", "task_ids:semantic-similarity-classification" ]
https://huggingface.co/datasets/anthonny/hate_speech/resolve/main/README.md
--- annotations_creators: - found language_creators: - crowdsourced language: - es-EC license: - unknown multilinguality: - monolingual pretty_name: hate speech size_categories: - unknown source_datasets: - original task_categories: - text-classification task_ids: - semantic-similarity-classification ---
umanlp
null
@InProceedings{huggingface:dataset, title = {A great new dataset}, author={huggingface, Inc. }, year={2020} }
This new dataset is designed to solve this great NLP task and is crafted with a lot of care.
false
6
false
umanlp/xscitldr
2022-07-04T13:49:25.000Z
null
false
43aa565bcc88b801013e7a3882eee40713e7c725
[]
[]
https://huggingface.co/datasets/umanlp/xscitldr/resolve/main/README.md
**X-SCITLDR**: Cross-Lingual Extreme Summarization of Scholarly Documents # X-SCITLDR The number of scientific publications nowadays is rapidly increasing, causing information overload for researchers and making it hard for scholars to keep up to date with current trends and lines of work. Consequently, recent work on applying text mining technologies for scholarly publications has investigated the application of automatic text summarization technologies, including extreme summarization, for this domain. However, previous work has concentrated only on monolingual settings, primarily in English. In this paper, we fill this research gap and present an abstractive cross-lingual summarization dataset for four different languages in the scholarly domain, which enables us to train and evaluate models that process English papers and generate summaries in German, Italian, Chinese and Japanese. We present our new X-SCITLDR dataset for multilingual summarization and thoroughly benchmark different models based on a state-of-the-art multilingual pre-trained model, including a two-stage summarize and translate approach and a direct cross-lingual model. We additionally explore the benefits of intermediate-stage training using English monolingual summarization and machine translation as intermediate tasks and analyze performance in zero- and few-shot scenarios. 
# Languages - German - Italian - Chinese - Japanese # Related - [Paper](https://dl.acm.org/doi/abs/10.1145/3529372.3530938) - [Code](https://github.com/sobamchan/xscitldr/) - [Contact](mailto:sotaro.takeshita@uni-mannheim.de) # Citation Information ``` @inproceedings{takeshita-etal-2022-xsci, author = {Takeshita, Sotaro and Green, Tommaso and Friedrich, Niklas and Eckert, Kai and Ponzetto, Simone Paolo}, title = {X-SCITLDR: Cross-Lingual Extreme Summarization of Scholarly Documents}, year = {2022}, isbn = {9781450393454}, publisher = {Association for Computing Machinery}, address = {New York, NY, USA}, url = {https://doi.org/10.1145/3529372.3530938}, doi = {10.1145/3529372.3530938}, abstract = {The number of scientific publications nowadays is rapidly increasing, causing information overload for researchers and making it hard for scholars to keep up to date with current trends and lines of work. Consequently, recent work on applying text mining technologies for scholarly publications has investigated the application of automatic text summarization technologies, including extreme summarization, for this domain. However, previous work has concentrated only on monolingual settings, primarily in English. In this paper, we fill this research gap and present an abstractive cross-lingual summarization dataset for four different languages in the scholarly domain, which enables us to train and evaluate models that process English papers and generate summaries in German, Italian, Chinese and Japanese. We present our new X-SCITLDR dataset for multilingual summarization and thoroughly benchmark different models based on a state-of-the-art multilingual pre-trained model, including a two-stage 'summarize and translate' approach and a direct cross-lingual model. 
We additionally explore the benefits of intermediate-stage training using English monolingual summarization and machine translation as intermediate tasks and analyze performance in zero- and few-shot scenarios.}, booktitle = {Proceedings of the 22nd ACM/IEEE Joint Conference on Digital Libraries}, articleno = {4}, numpages = {12}, keywords = {scholarly document processing, summarization, multilinguality}, location = {Cologne, Germany}, series = {JCDL '22} } ```
n6L3
null
null
null
false
1
false
n6L3/kaggle
2022-03-17T16:00:50.000Z
null
false
f5b8eff44796cdd3a3c9ebb77383051adae4abc7
[]
[ "license:apache-2.0" ]
https://huggingface.co/datasets/n6L3/kaggle/resolve/main/README.md
--- license: apache-2.0 --- kaggle datasets
n6L3
null
null
null
false
1
false
n6L3/nlp
2022-07-24T05:58:39.000Z
null
false
4d1d66c78bfe1ad870fb21f7e7837103b43c42c7
[]
[]
https://huggingface.co/datasets/n6L3/nlp/resolve/main/README.md
- `tweet_disaster`, 8562
mrm8488
null
null
null
false
1
false
mrm8488/test2
2022-03-17T18:40:22.000Z
null
false
682cc4c36e60a556576b92370f918ed4513f9648
[]
[ "license:wtfpl" ]
https://huggingface.co/datasets/mrm8488/test2/resolve/main/README.md
--- license: wtfpl ---
Paulosdeanllons
null
null
null
false
1
false
Paulosdeanllons/ODS_BOE
2022-03-23T13:52:31.000Z
null
false
adb147bd12398f9d56a652005f4895c6b7100ebe
[]
[ "license:afl-3.0" ]
https://huggingface.co/datasets/Paulosdeanllons/ODS_BOE/resolve/main/README.md
--- license: afl-3.0 --- Text from all issues of the BOE (Boletín Oficial del Estado, Spain) from 13 January 2020 to 16 February 2022. Separator: '|' Columns: year | month | day | BOE text | size | BOE pdf filename
malteos
null
null
null
false
1
false
malteos/test-ds
2022-10-25T10:03:23.000Z
null
false
2e1dc06ac448fac1fe3c032a8919735353d80f58
[]
[ "language:en-US", "multilinguality:monolingual", "size_categories:unknown", "task_categories:text-retrieval" ]
https://huggingface.co/datasets/malteos/test-ds/resolve/main/README.md
--- annotations_creators: [] language_creators: [] language: - en-US license: [] multilinguality: - monolingual pretty_name: test ds size_categories: - unknown source_datasets: [] task_categories: - text-retrieval task_ids: [] --- # Dataset Card for [Dataset Name] ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary [More Information Needed] ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? 
[More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset.
malteos
null
null
null
false
1
false
malteos/test2
2022-10-23T05:14:36.000Z
cnn-daily-mail-1
false
d62cc9c9bad06319b45ec81ba7d840fd1bc63894
[]
[ "annotations_creators:no-annotation", "language_creators:found", "language:en", "license:apache-2.0", "multilinguality:monolingual", "size_categories:100K<n<1M", "source_datasets:original", "task_ids:summarization" ]
https://huggingface.co/datasets/malteos/test2/resolve/main/README.md
--- annotations_creators: - no-annotation language_creators: - found language: - en license: - apache-2.0 multilinguality: - monolingual size_categories: - 100K<n<1M source_datasets: - original task_categories: - conditional-text-generation task_ids: - summarization paperswithcode_id: cnn-daily-mail-1 pretty_name: CNN / Daily Mail --- # Dataset Card for [Dataset Name] ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary [More Information Needed] ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] 
#### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset.
elena-soare
null
null
null
false
1
false
elena-soare/crawled-ecommerce
2022-04-04T10:35:10.000Z
null
false
f7a3fbcdaec21897a76a04cf78ecd94149444327
[]
[]
https://huggingface.co/datasets/elena-soare/crawled-ecommerce/resolve/main/README.md
This dataset contains crawled e-commerce data from Common Crawl.
cfilt
null
@inproceedings{bhattacharyya2010indowordnet, title={IndoWordNet}, author={Bhattacharyya, Pushpak}, booktitle={Proceedings of the Seventh International Conference on Language Resources and Evaluation (LREC'10)}, year={2010} }
We provide the unique word list from the IndoWordnet (IWN) knowledge base.
false
7
false
cfilt/iwn_wordlists
2022-07-30T12:24:42.000Z
plod-filtered
false
f7e4722dde8861cc0b120ef9ad56383cdd032aa7
[]
[ "annotations_creators:Shivam Mhaskar, Diptesh Kanojia", "language_creators:found", "language:as", "language:bn", "language:mni", "language:gu", "language:hi", "language:kn", "language:ks", "language:kok", "language:ml", "language:mr", "language:or", "language:ne", "language:pa", "langu...
https://huggingface.co/datasets/cfilt/iwn_wordlists/resolve/main/README.md
--- annotations_creators: - Shivam Mhaskar, Diptesh Kanojia language_creators: - found language: - as - bn - mni - gu - hi - kn - ks - kok - ml - mr - or - ne - pa - sa - ta - te - ur license: "cc-by-nc-sa-4.0" multilinguality: - monolingual paperswithcode_id: plod-filtered pretty_name: 'PLOD: An Abbreviation Detection Dataset' size_categories: - 100K<n<1M source_datasets: - original task_categories: - token-classification task_ids: - abbreviation-detection --- <p align="center"><img src="https://huggingface.co/datasets/cfilt/HiNER-collapsed/raw/main/cfilt-dark-vec.png" alt="Computation for Indian Language Technology Logo" width="150" height="150"/></p> # IWN Wordlists [![License: CC BY-NC 4.0](https://img.shields.io/badge/License-CC%20BY--NC%20--SA%204.0-orange.svg)](https://creativecommons.org/licenses/by-nc-sa/4.0/) [![Twitter Follow](https://img.shields.io/twitter/follow/cfiltnlp?color=1DA1F2&logo=twitter&style=flat-square)](https://twitter.com/cfiltnlp) [![Twitter Follow](https://img.shields.io/twitter/follow/PeopleCentredAI?color=1DA1F2&logo=twitter&style=flat-square)](https://twitter.com/PeopleCentredAI) We provide the unique word list from the [IndoWordnet (IWN)](https://www.cfilt.iitb.ac.in/indowordnet/) knowledge base. ## Usage ```python from datasets import load_dataset language = "hindi" # supported languages: assamese, bengali, bodo, gujarati, hindi, kannada, kashmiri, konkani, malayalam, manipuri, marathi, meitei, nepali, oriya, punjabi, sanskrit, tamil, telugu, urdu. words = load_dataset("cfilt/iwn_wordlists", language) word_list = words["train"]["word"] ``` ## Citation ```latex @inproceedings{bhattacharyya2010indowordnet, title={IndoWordNet}, author={Bhattacharyya, Pushpak}, booktitle={Proceedings of the Seventh International Conference on Language Resources and Evaluation (LREC'10)}, year={2010} } ```
hackathon-pln-es
null
null
null
false
1
false
hackathon-pln-es/parallel-sentences
2022-04-02T18:38:29.000Z
null
false
329f8440b131659c97299b2a4cdf38779082e14f
[]
[]
https://huggingface.co/datasets/hackathon-pln-es/parallel-sentences/resolve/main/README.md
# Parallel Sentences for the Spanish language This repository contains parallel sentences (English sentences paired with the same sentences in Spanish) in a simple tsv.gz format: ``` english_sentences\tsentence_in_spanish_language ``` ## Usage These sentences can be used to train multilingual sentence embedding models. For more details, see [SBERT.net - Multilingual-Model](https://www.sbert.net/examples/training/multilingual/README.html)
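The tsv.gz layout described above can be read with the Python standard library alone; a minimal sketch (the filename is hypothetical):

```python
import gzip

def read_parallel_tsv(path):
    """Yield (english, spanish) sentence pairs from a gzipped TSV file."""
    with gzip.open(path, "rt", encoding="utf-8") as handle:
        for line in handle:
            # Each line holds one pair: english_sentence<TAB>spanish_sentence.
            # split("\t", 1) keeps any further tabs inside the Spanish side.
            english, spanish = line.rstrip("\n").split("\t", 1)
            yield english, spanish
```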
tomekkorbak
null
null
null
false
1
false
tomekkorbak/pile-curse-full
2022-03-23T20:05:15.000Z
null
false
f888d2a1df5a5f11cde2832710cc0d9e59b3b132
[]
[]
https://huggingface.co/datasets/tomekkorbak/pile-curse-full/resolve/main/README.md
## Generation procedure The dataset was constructed using documents from [the Pile](https://pile.eleuther.ai/) scored using the [LDNOOBW](https://github.com/LDNOOBW/List-of-Dirty-Naughty-Obscene-and-Otherwise-Bad-Words) wordlist (a score is the number of curses per character). The procedure was as follows: 1. The first half of the data is 100k documents randomly sampled from the Pile and assigned scores 2. The second half is the 100k most cursing documents from the Pile, obtained by scoring the whole Pile and choosing the documents with the highest scores 3. The dataset was then shuffled and a 9:1 train-test split was applied ## Basic stats The average and median scores are 0.013 and 0.019, respectively.
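The score defined above — curse occurrences per character — can be sketched as follows; the tokenization and the tiny word list here are stand-ins for the actual LDNOOBW list and scoring code:

```python
def curse_score(document: str, curse_words: set) -> float:
    """Number of curse-word occurrences per character of the document."""
    if not document:
        return 0.0
    tokens = document.lower().split()
    # Strip trailing punctuation so "darn!" still matches the wordlist entry.
    curses = sum(1 for token in tokens if token.strip(".,!?") in curse_words)
    return curses / len(document)
```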
scjnugacj
null
null
null
false
1
false
scjnugacj/scjn_dataset_ner
2022-10-23T05:14:56.000Z
null
false
a545f06f697a5858d9d037d5807ab68218ea6f20
[]
[ "annotations_creators:expert-generated", "language_creators:other", "language:es", "license:cc-by-sa-4.0", "multilinguality:monolingual", "size_categories:unknown", "source_datasets:original", "task_ids:NER" ]
https://huggingface.co/datasets/scjnugacj/scjn_dataset_ner/resolve/main/README.md
--- annotations_creators: - expert-generated language_creators: - other language: - es license: - cc-by-sa-4.0 multilinguality: - monolingual pretty_name: Corpus SCJN NER size_categories: - unknown source_datasets: - original task_categories: - Token Classification task_ids: - NER --- # Corpus SCJN NER, para el reconocimiento de entidades nombradas En su primera versión contiene etiquetas para identificar leyes y tratados internacionales de los que el Estado Mexicano es parte. ## Dataset Structure ### Data Instances Un ejemplo de 'train' se ve de la siguiente forma: ``` { 'id': '3', 'ner_tags': [0, 0, 0, 0, 0, 1, 2, 2, 2, 2, 2, 2, 2, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], 'tokens': ['el', 'artículo', '15', 'de', 'la', 'ley', 'general', 'de', 'títulos', 'y', 'operaciones', 'de', 'crédito', 'exige', 'que', 'se', 'satisfagan', 'las', 'expresiones', 'omitidas', 'en', 'el', 'título', ',', 'antes', 'de', 'la', 'presentación', 'de', 'éste', 
'para', 'su', 'aceptación', 'o', 'para', 'su', 'pago', '.', 'aunque', 'varios', 'autores', 'estiman', 'que', 'el', 'tenedor', 'puede', 'completar', 'los', 'requisitos', 'faltantes', 'a', 'la', 'cambial', ',', 'en', 'cualquier', 'instante', 'anterior', 'a', 'su', 'vencimiento', ',', 'este', 'criterio', 'no', 'es', 'aplicable', 'frente', 'a', 'la', 'disposición', 'terminante', 'de', 'la', 'ley', 'mexicana', ';', 'y', 'si', 'nuestro', 'legislador', 'hubiera', 'aceptado', 'la', 'posibilidad', 'de', 'llenar', 'los', 'requisitos', 'en', 'cualquier', 'momento', ',', 'hasta', 'antes', 'de', 'la', 'presentación', 'del', 'documento', 'para', ',', 'el', 'pago', ',', 'no', 'habría', 'hablado', 'de', 'la', 'presentación', 'para', 'la', 'aceptación', ';', 'máxime', ',', 'que', 'mientras', 'todas', 'las', 'letras', 'de', 'cambio', 'son', 'susceptibles', 'de', 'pago', ',', 'no', 'todas', 'lo', 'son', 'de', 'aceptación', '.', 'la', 'cambial', 'en', 'blanco', 'bien', 'puede', 'existir', 'y', 'circular', 'antes', 'de', 'que', 'sea', 'presentada', 'para', 'su', 'aceptación', ';', 'pero', 'cuando', 'ya', 'el', 'tenedor', 'va', 'a', 'hacer', 'valer', 'sus', 'derechos', '(', 'y', 'la', 'presentación', 'para', 'la', 'aceptación', 'es', 'el', 'ejercicio', 'de', 'uno', 'de', 'ellos', ')', ',', 'debe', 'llenar', 'los', 'extremos', 'necesarios', 'y', 'presentar', 'un', 'documento', 'completo', '.', 'cuando', 'el', 'girado', ',', 'al', 'aceptar', 'la', 'letra', ',', 'se', 'muestra', 'conforme', 'en', 'que', 'después', 'se', 'llene', 'la', 'expresión', 'de', 'su', 'importe', ',', 'ello', 'no', 'le', 'reporta', 'perjuicio', ',', 'si', 'el', 'beneficiario', 'lo', 'hace', 'dentro', 'de', 'los', 'límites', 'convenidos', ';', 'más', 'si', 'éste', 'se', 'excede', 'en', 'la', 'expresión', 'de', 'la', 'cantidad', 'convenida', ',', 'el', 'girado', 'sí', 'recibe', 'perjuicio', 'considerable', ',', 'ya', 'que', 'a', 'pesar', 'de', 'que', 'pueda', 'válidamente', 'oponer', 'las', 'excepciones', 'de', 
'dolo', 'y', 'plus', 'petitio', 'correspondientes', ',', 'frente', 'al', 'beneficiario', 'que', 'violó', 'lo', 'pactado', ',', 'no', 'podrá', 'hacerlo', 'si', 'el', 'tenedor', 'es', 'un', 'tercero', 'que', 'de', 'buena', 'fe', 'adquirió', 'el', 'documento', ',', 'ignorando', 'las', 'circunstancias', 'precedentes', ';', 'en', 'cambio', ',', 'si', 'de', 'acuerdo', 'con', 'lo', 'preceptuado', 'por', 'nuestra', 'ley', ',', 'falta', 'el', 'título', 'de', 'crédito', ',', 'pues', 'el', 'documento', 'cuyos', 'requisitos', 'omitidos', 'no', 'se', 'satisficieron', 'oportunamente', ',', 'no', 'produce', 'efectos', 'como', 'tal', '(', 'artículo', '14', 'de', 'la', 'ley', 'de', 'la', 'materia', ')', ',', 'ésta', 'será', 'excepción', 'que', ',', 'demostrada', ',', 'puede', 'ser', 'oponible', 'a', 'cualquier', 'tenedor', ',', 'es', 'decir', ',', 'ya', 'no', 'será', 'una', 'excepción', 'personal', ',', 'sino', 'una', 'excepción', 'real', '.'] } ``` ### Data Fields Los campos son los mismos para todos los splits. - `id`: a `string` feature. - `tokens`: a `list` of `string` features. - `ner_tags`: a `list` of classification labels (`int`). Full tagset with indices: ```python {'O': 0, 'B-LEY': 1, 'I-LEY': 2, 'B-TRAT_INTL': 3, 'I-TRAT_INTL': 4} ``` ### Data Splits | name |train|validation|test| |---------|----:|---------:|---:| |SCJNNER|1396|345|0| ## Dataset Creation ### Annotations | annotations|train|validation|test| |---------|----:|---------:|---:| |LEY|1084|329|0| |TRAT_INTL|935|161|0| ### Dataset Curators Ana Gabriela Palomeque Ortiz, from SCJN - Unidad General de Administración del Conocimiento Jurídico. ### Personal and Sensitive Information No personal or sensitive information included. ## Considerations for Using the Data ### Other Known Limitations La información contenida en este dataset es para efectos demostrativos y no representa una fuente oficial de la Suprema Corte de Justicia de la Nación. 
## License <br/>This work is licensed under a <a rel="license" href="https://creativecommons.org/licenses/by-sa/4.0/deed.es">Attribution-ShareAlike 4.0 International License</a>.
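The integer `ner_tags` shown in the data instance above can be mapped back to entity labels with the tagset given in this card; a small sketch:

```python
# Tagset from the card: LEY (law) and TRAT_INTL (international treaty) entities in IOB2.
TAGSET = {'O': 0, 'B-LEY': 1, 'I-LEY': 2, 'B-TRAT_INTL': 3, 'I-TRAT_INTL': 4}
ID2LABEL = {idx: label for label, idx in TAGSET.items()}

def decode_tags(ner_tags):
    """Convert a list of integer tag ids into their IOB2 string labels."""
    return [ID2LABEL[tag] for tag in ner_tags]
```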
yhavinga
null
Guillaume Wenzek, Marie-Anne Lachaux, Alexis Conneau, Vishrav Chaudhary, Francisco Guzmán, Armand Jouli and Edouard Grave, CCNet: Extracting High Quality Monolingual Datasets from Web Crawl Data
CCMatrix: Mining Billions of High-Quality Parallel Sentences on the WEB We show that margin-based bitext mining in LASER's multilingual sentence space can be applied to monolingual corpora of billions of sentences to produce high quality aligned translation data. We use thirty-two snapshots of a curated common crawl corpus [1] totaling 69 billion unique sentences. Using one unified approach for 80 languages, we were able to mine 10.8 billion parallel sentences, out of which only 2.9 billion are aligned with English. IMPORTANT: Please cite reference [2][3] if you use this data. [1] Guillaume Wenzek, Marie-Anne Lachaux, Alexis Conneau, Vishrav Chaudhary, Francisco Guzmán, Armand Jouli and Edouard Grave, CCNet: Extracting High Quality Monolingual Datasets from Web Crawl Data [2] Holger Schwenk, Guillaume Wenzek, Sergey Edunov, Edouard Grave and Armand Joulin, CCMatrix: Mining Billions of High-Quality Parallel Sentences on the WEB [3] Angela Fan, Shruti Bhosale, Holger Schwenk, Zhiyi Ma, Ahmed El-Kishky, Siddharth Goyal, Mandeep Baines, Onur Celebi, Guillaume Wenzek, Vishrav Chaudhary, Naman Goyal, Tom Birch, Vitaliy Liptchinsky, Sergey Edunov, Edouard Grave, Michael Auli, and Armand Joulin. Beyond English-Centric Multilingual Machine Translation 90 languages, 1,197 bitexts total number of files: 90 total number of tokens: 112.14G total number of sentence fragments: 7.37G
false
485
false
yhavinga/ccmatrix
2022-10-25T07:29:18.000Z
ccmatrix
false
944446b70b99e5b9bd6332f136def0d394934bda
[]
[ "arxiv:1911.04944", "arxiv:1911.00359", "arxiv:2010.11125", "annotations_creators:found", "language_creators:found", "language:af", "language:am", "language:ar", "language:ast", "language:az", "language:be", "language:bg", "language:bn", "language:br", "language:ca", "language:ceb", ...
https://huggingface.co/datasets/yhavinga/ccmatrix/resolve/main/README.md
--- annotations_creators: - found language_creators: - found language: - af - am - ar - ast - az - be - bg - bn - br - ca - ceb - cs - cy - da - de - el - en - eo - es - et - eu - fa - fi - fr - fy - ga - gd - gl - ha - he - hi - hr - hu - hy - id - ig - ilo - is - it - ja - jv - ka - kk - km - ko - la - lb - lg - lt - lv - mg - mk - ml - mr - ms - my - ne - nl - 'no' - oc - om - or - pl - pt - ro - ru - sd - si - sk - sl - so - sq - sr - su - sv - sw - ta - tl - tr - tt - uk - ur - uz - vi - wo - xh - yi - yo - zh - zu - se license: - unknown multilinguality: - multilingual size_categories: en-nl: - n<110M en-af: - n<9M en-lt: - <24M source_datasets: - original task_categories: - text2text-generation - translation task_ids: [] paperswithcode_id: ccmatrix pretty_name: CCMatrixV1 tags: - conditional-text-generation --- # Dataset Card for CCMatrix v1 ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://opus.nlpl.eu/CCMatrix.php - **Repository:** None - **Paper:** 
https://arxiv.org/abs/1911.04944 ### Dataset Summary This corpus has been extracted from web crawls using the margin-based bitext mining techniques described at https://github.com/facebookresearch/LASER/tree/master/tasks/CCMatrix. * 90 languages, 1,197 bitexts * total number of files: 90 * total number of tokens: 112.14G * total number of sentence fragments: 7.37G ### Supported Tasks and Leaderboards [More Information Needed] ### Languages Configs are generated for all language pairs in both directions. You can find the valid pairs in the Homepage section of the Dataset Description: https://opus.nlpl.eu/CCMatrix.php E.g. ``` dataset = load_dataset("yhavinga/ccmatrix", "en-nl") ``` ## Dataset Structure ### Data Instances For example: ```json { "id": 1, "score": 1.2498379, "translation": { "nl": "En we moeten elke waarheid vals noemen die niet minstens door een lach vergezeld ging.”", "en": "And we should call every truth false which was not accompanied by at least one laugh.”" } } ``` ### Data Fields Each example contains an integer id starting with 0, a score, and a translation dictionary with the language 1 and language 2 texts. ### Data Splits Only a `train` split is provided. ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data [More Information Needed] #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations [More Information Needed] #### Annotation process [More Information Needed] #### Who are the annotators?
[More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information IMPORTANT: Please cite reference [2][3] if you use this data. 1. **[CCNet: Extracting High Quality Monolingual Datasets from Web Crawl Data](https://arxiv.org/abs/1911.00359)** by *Guillaume Wenzek, Marie-Anne Lachaux, Alexis Conneau, Vishrav Chaudhary, Francisco Guzmán, Armand Jouli and Edouard Grave*. 2. **[CCMatrix: Mining Billions of High-Quality Parallel Sentences on the WEB](https://arxiv.org/abs/1911.04944)** by *Holger Schwenk, Guillaume Wenzek, Sergey Edunov, Edouard Grave and Armand Joulin*. 3. **[Beyond English-Centric Multilingual Machine Translation](https://arxiv.org/abs/2010.11125)** by *Angela Fan, Shruti Bhosale, Holger Schwenk, Zhiyi Ma, Ahmed El-Kishky, Siddharth Goyal, Mandeep Baines, Onur Celebi, Guillaume Wenzek, Vishrav Chaudhary, Naman Goyal, Tom Birch, Vitaliy Liptchinsky, Sergey Edunov, Edouard Grave, Michael Auli, and Armand Joulin.* This HuggingFace CCMatrix dataset is a wrapper around the service and files prepared and hosted by OPUS: * **[Parallel Data, Tools and Interfaces in OPUS](https://www.aclweb.org/anthology/L12-1246/)** by *Jörg Tiedemann*. ### Contributions
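Each CCMatrix example follows the record layout shown under Data Instances (`id`, `score`, `translation`). Extracting aligned sentence pairs from such records, optionally filtering on the margin-based mining score, can be sketched as below; the `en`/`nl` language codes are just the example pair used above:

```python
def extract_pairs(examples, src="en", tgt="nl", min_score=0.0):
    """Pull (source, target) sentence tuples out of CCMatrix-style records,
    keeping only records whose mining score reaches min_score."""
    pairs = []
    for example in examples:
        if example["score"] >= min_score:
            translation = example["translation"]
            pairs.append((translation[src], translation[tgt]))
    return pairs
```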
IIC
null
null
null
false
117
false
IIC/spanish_biomedical_crawled_corpus
2022-10-23T05:15:47.000Z
null
false
b19606594b960bdbca3edf4338b6deea02f2f933
[]
[ "arxiv:2109.07765", "annotations_creators:no-annotation", "language_creators:crowdsourced", "language:es", "multilinguality:monolingual", "size_categories:1M<n<10M", "source_datasets:IIC/spanish_biomedical_crawled_corpus", "task_ids:language-modeling" ]
https://huggingface.co/datasets/IIC/spanish_biomedical_crawled_corpus/resolve/main/README.md
--- annotations_creators: - no-annotation language_creators: - crowdsourced language: - es multilinguality: - monolingual pretty_name: Spanish_Biomedical_Crawled_Corpus size_categories: - 1M<n<10M source_datasets: - IIC/spanish_biomedical_crawled_corpus task_categories: - sequence-modeling task_ids: - language-modeling --- # Spanish_Biomedical_Crawled_Corpus This is a dataset retrieved directly from [this link](https://zenodo.org/record/5510033#.Ykho3-hByUk), which was originally developed by [BSC](https://temu.bsc.es/). This is a direct copy-paste of the usage, limitations and license of the original dataset: ``` Description The largest Spanish biomedical and heath corpus to date gathered from a massive Spanish health domain crawler over more than 3,000 URLs were downloaded and preprocessed. The collected data have been preprocessed to produce the CoWeSe (Corpus Web Salud Español) resource, a large-scale and high-quality corpus intended for biomedical and health NLP in Spanish. Directory structure CoWeSe.txt: the CoWeSe corpus; an empty line separates each document License The corpus is released under this licensing scheme: - We do not own any of the text from which these data has been extracted and preprocessed to be ready for use for language modeling tasks. - We license the actual packaging of these data under a CC0 1.0 Universal License Notice and take down policy Notice: Should you consider that our data contains material that is owned by you and should therefore not be reproduced here, please: Clearly identify yourself, with detailed contact data such as an address, telephone number or email address at which you can be contacted. Clearly identify the copyrighted work claimed to be infringed. Clearly identify the material that is claimed to be infringing and information reasonably sufficient to allow us to locate Copyright (c) 2021 Text Mining Unit at BSC ``` License, distribution and usage conditions of the original dataset apply. 
### Contributions Thanks to [@avacaondata](https://huggingface.co/avacaondata), [@alborotis](https://huggingface.co/alborotis), [@albarji](https://huggingface.co/albarji), [@Dabs](https://huggingface.co/Dabs), [@GuillemGSubies](https://huggingface.co/GuillemGSubies) for adding this dataset. ### Citation ``` @misc{carrino2021spanish, title={Spanish Biomedical Crawled Corpus: A Large, Diverse Dataset for Spanish Biomedical Language Models}, author={Casimiro Pio Carrino and Jordi Armengol-Estapé and Ona de Gibert Bonet and Asier Gutiérrez-Fandiño and Aitor Gonzalez-Agirre and Martin Krallinger and Marta Villegas}, year={2021}, eprint={2109.07765}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` [Go to the official paper from the dataset for more information](https://arxiv.org/abs/2109.07765).
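The CoWeSe layout quoted above is a single plain-text file in which an empty line separates each document; splitting it back into documents can be sketched with a few lines of Python:

```python
def split_documents(corpus_text: str):
    """Split a corpus where an empty line separates each document."""
    documents = []
    for block in corpus_text.split("\n\n"):
        block = block.strip()
        if block:  # skip runs of consecutive blank lines
            documents.append(block)
    return documents
```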
scjnugacj
null
null
null
false
1
false
scjnugacj/scjn_dataset_corpus_tesis
2022-10-23T05:16:49.000Z
null
false
e7b906bc5a265ba97ca0b5c94ee4b49278d72070
[]
[ "annotations_creators:expert-generated", "language_creators:other", "language:es", "license:cc-by-sa-4.0", "multilinguality:monolingual", "size_categories:unknown", "source_datasets:original" ]
https://huggingface.co/datasets/scjnugacj/scjn_dataset_corpus_tesis/resolve/main/README.md
--- annotations_creators: - expert-generated language_creators: - other language: - es license: - cc-by-sa-4.0 multilinguality: - monolingual pretty_name: Corpus tesis de la SCJN size_categories: - unknown source_datasets: - original task_categories: [] task_ids: [] --- # Corpus tesis de la SCJN En su primera versión contiene textos correspondientes a las tesis de la décima y undécima época publicadas al mes de febrero del 2022 por la SCJN. ## Dataset Structure ### Data Instances Un ejemplo de 'train' se ve de la siguiente forma: ``` { 'id': '3', 'text': 'a la luz de las disposiciones del sistema de derechos humanos, los principios tanto de buena fe como de protección de las apariencias constituyen un límite tendente a evitar el dolo para el disfuncional ejercicio de los actos procesales, al cumplir con la función de colmar las inevitables lagunas legales, en tanto que la norma sólo previene abusos comunes, prohibiéndolos en forma enunciativa, porque de considerarlos limitativamente, muchas conductas o declaraciones contrarias a otras precedentes y, por tanto, indebidas, escaparían de la regulación. ambos principios sirven para analizar el caso en el que, en una primera ejecutoria de amparo promovido contra el auto de vinculación a proceso, se declaró irregularmente llevada a cabo una diligencia de reconocimiento de una persona por una fotografía (imputado), al inobservarse las formas procesales, por lo que en cumplimiento con la sentencia, se dictó auto de no vinculación a proceso y, en atención al deber de investigar conforme a los parámetros convencionales, la autoridad practicó una posterior diligencia, esta vez conforme a las disposiciones adjetivas que la rigen; sin embargo, si el defensor se retiró sin firmarla, aduciendo que lo haría posteriormente, sin que así se hubiera logrado, no obstante las gestiones tendientes a ello por la autoridad investigadora, quien pormenorizadamente las detalló en una certificación. 
actuación que debe ser sometida en cada caso al escrutinio constitucional, considerando que no puede alegar la nulidad quien ha incurrido conscientemente a su producción, porque buscaría aprovecharse de su personal dolo, al provocar daños por medio del uso desviado de medios legales inicialmente legítimos, si se les considera aisladamente. ahora bien, ponderado el caso concreto, se advierte que no obstante alegar en favor de su defenso el propio dolo, se produjeron las consecuencias inherentes a la diligencia en los términos establecidos en la norma, pues incluso consta que intervino activamente en la diligencia; lo que conduce a estimar infundado el agravio expuesto en el sentido de que debe negársele validez, al tender a beneficiar al quejoso del dolo del defensor expresado en retirarse sin firmar, indicando que regresaría a hacerlo, sin que hubiera actuado conforme a esa manifestación precedente, pretendiendo que, de prosperar la falta de formalidad en la segunda diligencia, la cual ahora le es atribuible, afectaría la expectativa creada en otros sujetos de derecho, en la especie, las víctimas, incluso, el exceso en el ejercicio de la acción constitucional alentaría la práctica viciosa de actos cuyos frutos serían aprovechables por quienes los realizan y, por otra parte, tanto las autoridades investigadoras como los tribunales se harían en alguna forma partícipes de ese proceder irregular, si consideraran permitido ese comportamiento sólo porque la ley omitió prohibirlo, incumpliendo las primeras con el deber de investigar la verdad conforme a los parámetros convencionales y, los segundos, al otorgarles credibilidad.' } ``` ### Data Fields Los campos son los mismos para todos los splits. - `id`: a `string` feature. - `text`: a `string` features. 
### Data Splits | name |train|validation|test| |---------|----:|---------:|---:| |scjn_corpus_tesis|27913|0|0| ## Dataset Creation ### Annotations ### Dataset Curators Ana Gabriela Palomeque Ortiz, from SCJN - Unidad General de Administración del Conocimiento Jurídico. ### Personal and Sensitive Information No personal or sensitive information included. ## Considerations for Using the Data ### Other Known Limitations La información contenida en este dataset es para efectos demostrativos y no representa una fuente oficial de la Suprema Corte de Justicia de la Nación. ## License <br/>This work is licensed under a <a rel="license" href="https://creativecommons.org/licenses/by-sa/4.0/deed.es">Attribution-ShareAlike 4.0 International License</a>.
hackathon-pln-es
null
null
null
false
1
false
hackathon-pln-es/MESD
2022-03-25T18:15:07.000Z
null
false
24d41c732de80b4b883f8e279d484a6d4b5eb017
[]
[ "license:cc-by-4.0", "Duville, Mathilde Marie; Alonso-Valerdi, Luz Maria; Ibarra, David (2022), “Mexican Emotional Speech Database (MESD)”, Mendeley Data, V5, doi:10.17632/cy34mh68j9.5" ]
https://huggingface.co/datasets/hackathon-pln-es/MESD/resolve/main/README.md
--- license: cc-by-4.0 Duville, Mathilde Marie; Alonso-Valerdi, Luz Maria; Ibarra, David (2022), “Mexican Emotional Speech Database (MESD)”, Mendeley Data, V5, doi: 10.17632/cy34mh68j9.5 --- # Dataset Card for MESD ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-instances) - [Data Splits](#data-instances) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) ## Dataset Description - **Homepage:** https://data.mendeley.com/datasets/cy34mh68j9/5 - **Repository:** [Needs More Information] - **Paper:** [Needs More Information] - **Leaderboard:** [Needs More Information] - **Point of Contact:** [Needs More Information] ### Dataset Summary Contiene los datos de la base MESD procesados para hacer 'finetuning' de un modelo 'Wav2Vec' en el Hackaton organizado por 'Somos NLP'. Ejemplo de referencia: https://colab.research.google.com/github/huggingface/notebooks/blob/master/examples/audio_classification.ipynb Hemos accedido a la base MESD para obtener ejemplos. 
Brief description from the authors of the MESD database: "The Mexican Emotional Speech Database (MESD) provides single-word utterances for the affective prosodies of anger, disgust, fear, happiness, neutral and sadness, with Mexican cultural shaping. The MESD was uttered by non-professional adult and child actors: 3 female, 2 male and 6 child voices are available. The words in the emotional and neutral utterances come from two corpora: (corpus A) composed of nouns and adjectives that are repeated across emotional prosodies and voice types (female, male, child), and (corpus B) consisting of words controlled for age of acquisition, frequency of use, familiarity, concreteness, valence, arousal and discrete-emotion dimensionality ratings. The audio recordings were made in a professional studio with the following materials: (1) a Sennheiser e835 microphone with a flat frequency response (100 Hz to 10 kHz), (2) a Focusrite Scarlett 2i4 audio interface connected to the microphone with an XLR cable and to the computer, and (3) the digital audio workstation REAPER (Rapid Environment for Audio Production, Engineering, and Recording). The audio files were stored as 24-bit sequences at a sampling rate of 48000 Hz. The amplitude of the acoustic waveforms was rescaled between -1 and 1. Two speaker-embedded naturalness-reduced versions were created from the human emotional expressions for the female voices of corpus B. Specifically, naturalness was progressively reduced from the human voices to level 1 and then to level 2. In particular, duration and mean pitch were edited on stressed syllables to reduce the difference between stressed and unstressed syllables. On whole utterances, the F2/F1 and F3/F1 ratios were reduced by editing the F2 and F3 frequencies. The intensity of harmonics 1 and 4 was also reduced. 
" ### Supported Tasks and Leaderboards [Needs More Information] ### Languages Spanish ## Dataset Structure ### Data Instances [Needs More Information] ### Data Fields Origen: text indicating whether the instance belongs to the original MESD dataset or to the 'Speaker-embedded naturalness-reduced female voices' cases, where the authors synthetically generated new data by transforming some of the original audio instances. Palabra: text of the word that was read. Emoción: text of the emotion represented. Values: 'Enojo', 'Felicidad', 'Miedo', 'Neutral', 'Disgusto', 'Tristeza'. InfoActor: text indicating whether the voice is 'Niño' (child), 'Hombre' (man) or 'Mujer' (woman). AudioArray: audio array, resampled to 16 kHz. ### Data Splits Train: 891 examples, a mix of MESD and 'Speaker-embedded naturalness-reduced female voices' cases. Validation: 130 examples, all MESD cases. Test: 129 examples, all MESD cases. ## Dataset Creation ### Curation Rationale Merge the three data subsets and process them for the fine-tuning task, according to the input expected by the Wav2Vec model. ### Source Data #### Initial Data Collection and Normalization Access to the raw data: https://data.mendeley.com/datasets/cy34mh68j9/5 Conversion to audio array and resampling to 16 kHz. #### Who are the source language producers? Duville, Mathilde Marie; Alonso-Valerdi, Luz Maria; Ibarra, David (2022), “Mexican Emotional Speech Database (MESD)”, Mendeley Data, V5, doi: 10.17632/cy34mh68j9.5 ### Annotations #### Annotation process [Needs More Information] #### Who are the annotators? 
[Needs More Information] ### Personal and Sensitive Information [Needs More Information] ## Considerations for Using the Data ### Social Impact of Dataset [Needs More Information] ### Discussion of Biases [Needs More Information] ### Other Known Limitations [Needs More Information] ## Additional Information ### Dataset Curators [Needs More Information] ### Licensing Information Creative Commons, [CC-BY-4.0](https://creativecommons.org/licenses/by/4.0/) ### Citation Information ``` Duville, Mathilde Marie; Alonso-Valerdi, Luz Maria; Ibarra, David (2022), “Mexican Emotional Speech Database (MESD)”, Mendeley Data, V5, doi: 10.17632/cy34mh68j9.5 ```
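The MESD card above states that the 48 kHz studio recordings were converted to audio arrays and resampled to 16 kHz for Wav2Vec fine-tuning. A minimal sketch of that resampling step on a synthetic waveform — the actual pipeline is not published, and the use of `scipy.signal.resample_poly` here is an assumption:

```python
import numpy as np
from scipy.signal import resample_poly

# One second of synthetic 48 kHz "audio", rescaled to [-1, 1] as in MESD.
sr_in, sr_out = 48_000, 16_000
t = np.linspace(0, 1, sr_in, endpoint=False)
wave = np.sin(2 * np.pi * 440 * t).astype(np.float32)

# 48 kHz -> 16 kHz is an exact 3:1 ratio, so a polyphase filter with
# up=1, down=3 performs the resampling without interpolation error.
resampled = resample_poly(wave, up=1, down=3)
print(len(resampled))  # 16000 samples, i.e. one second at 16 kHz
```

With the `datasets` library, the equivalent operation is typically done by casting the audio column with `Audio(sampling_rate=16_000)`, which resamples lazily on access.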
vinaykudari
null
null
null
false
1
false
vinaykudari/acled-token-summary
2022-03-20T00:47:22.000Z
null
false
bc865c50d83a257b7458e3c97ad16533fb491287
[]
[]
https://huggingface.co/datasets/vinaykudari/acled-token-summary/resolve/main/README.md
ACLED Dataset for Summarization Task - CSE635 (University at Buffalo) Actor Description - 0: N/A - 1: State Forces - 2: Rebel Groups - 3: Political Militias - 4: Identity Militias - 5: Rioters - 6: Protesters - 7: Civilians - 8: External/Other Forces
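The actor codes listed in the ACLED card above can be captured as a simple lookup table; this is an illustrative sketch (the field and function names are not part of the dataset):

```python
# Actor codes used in the ACLED summarization dataset, as listed above.
ACTOR_DESCRIPTIONS = {
    0: "N/A",
    1: "State Forces",
    2: "Rebel Groups",
    3: "Political Militias",
    4: "Identity Militias",
    5: "Rioters",
    6: "Protesters",
    7: "Civilians",
    8: "External/Other Forces",
}

def describe_actor(code: int) -> str:
    """Map a numeric actor code to its human-readable description."""
    return ACTOR_DESCRIPTIONS.get(code, "Unknown")

print(describe_actor(5))  # Rioters
```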
IIC
null
null
null
false
5
false
IIC/lfqa_spanish
2022-10-23T05:17:47.000Z
null
false
885169fbef99505fb1c3f006b1cbde4656ec2a31
[]
[ "annotations_creators:no-annotation", "language_creators:crowdsourced", "language:es", "multilinguality:monolingual", "size_categories:100K<n<1M", "source_datasets:vblagoje/lfqa", "source_datasets:vblagoje/lfqa_support_docs", "task_ids:language-modeling" ]
https://huggingface.co/datasets/IIC/lfqa_spanish/resolve/main/README.md
--- annotations_creators: - no-annotation language_creators: - crowdsourced language: - es multilinguality: - monolingual pretty_name: LFQA size_categories: - 100K<n<1M source_datasets: - vblagoje/lfqa - vblagoje/lfqa_support_docs task_categories: - sequence-modeling task_ids: - language-modeling --- This is an automatically translated version of [vblagoje/lfqa](https://huggingface.co/datasets/vblagoje/lfqa), a dataset used for long form question answering training. The model used for translating the dataset is [marianMT english-spanish](https://huggingface.co/Helsinki-NLP/opus-mt-en-es).
TomTBT
null
null
The PMC Open Access Subset includes more than 3.4 million journal articles and preprints that are made available under license terms that allow reuse. Not all articles in PMC are available for text mining and other reuse, many have copyright protection, however articles in the PMC Open Access Subset are made available under Creative Commons or similar licenses that generally allow more liberal redistribution and reuse than a traditional copyrighted work. The PMC Open Access Subset is one part of the PMC Article Datasets. This version takes the XML version as source, benefiting from the structured text to split the articles into parts, naming the introduction, methods, results, discussion and conclusion, and uses keywords in the text to refer to external or internal resources (articles, figures, tables, formulas, boxed-text, quotes, code, footnotes, chemicals, graphics, medias).
false
11
false
TomTBT/pmc_open_access_xml
2022-10-17T07:40:21.000Z
null
false
c7c0b81ed48155296b34b226ffb18a039b44a48c
[]
[ "task_categories:text-classification", "task_categories:summarization", "task_categories:other", "annotations_creators:no-annotation", "language_creators:expert-generated", "language:en", "size_categories:1M<n<10M", "source_datasets:original", "license:cc0-1.0", "license:cc-by-4.0", "license:cc-...
https://huggingface.co/datasets/TomTBT/pmc_open_access_xml/resolve/main/README.md
--- pretty_name: XML-parsed PMC task_categories: - text-classification - summarization - other annotations_creators: - no-annotation language_creators: - expert-generated language: - en size_categories: - 1M<n<10M source_datasets: - original license: - cc0-1.0 - cc-by-4.0 - cc-by-sa-4.0 - cc-by-nc-4.0 - cc-by-nd-4.0 - cc-by-nc-nd-4.0 - cc-by-nc-sa-4.0 - unknown - other multilinguality: - monolingual task_ids: [] tags: - research papers - biology - medecine --- # Dataset Card for PMC Open Access XML ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-instances) - [Data Splits](#data-instances) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) ## Dataset Description - **Homepage:** https://www.ncbi.nlm.nih.gov/pmc/tools/openftlist/ - **Repository:** [Needs More Information] - **Paper:** [Needs More Information] - **Leaderboard:** [Needs More Information] - **Point of Contact:** [Needs More Information] ### Dataset Summary The XML Open Access includes more than 3.4 million journal articles and preprints that are made available under license terms that allow reuse. 
Not all articles in PMC are available for text mining and other reuse, many have copyright protection, however articles in the PMC Open Access Subset are made available under Creative Commons or similar licenses that generally allow more liberal redistribution and reuse than a traditional copyrighted work. The PMC Open Access Subset is one part of the PMC Article Datasets. This version takes the XML version as source, benefiting from the structured text to split the articles into parts, naming the introduction, methods, results, discussion and conclusion, and uses keywords in the text to refer to external or internal resources (articles, figures, tables, formulas, boxed-text, quotes, code, footnotes, chemicals, graphics, medias). The dataset was initially created with relation-extraction tasks in mind, between the references in text and the content of the references (e.g. for PMID, by joining the referred article abstract from the pubmed dataset), but aims to a larger extent to provide a corpus of pre-annotated text for other tasks (e.g. figure caption to graphic, glossary definition detection, summarization). ### Supported Tasks and Leaderboards [Needs More Information] ### Languages [Needs More Information] ## Dataset Structure ### Data Fields - "accession_id": The PMC ID of the article - "pmid": The PubMed ID of the article - "introduction": List of \<title\> and \<p\> elements in \<body\>, sharing their root with a \<title\> containing "introduction" or "background". - "methods": Same as introduction with "method" keyword. - "results": Same as introduction with "result" keyword. - "discussion": Same as introduction with "discussion" keyword. - "conclusion": Same as introduction with "conclusion" keyword. - "front": List of \<title\> and \<p\> elements in \<front\> after everything else has been searched. - "body": List of \<title\> and \<p\> elements in \<body\> after everything else has been searched. 
- "back": List of \<title\> and \<p\> elements in \<back\> after everything else has been searched. - "figure": List of \<fig\> elements of the article. - "table": List of \<table-wrap\> and \<array\> elements of the article. - "formula": List of \<disp-formula\> and \<inline-formula\> elements of the article. - "box": List of \<boxed-text\> elements of the article. - "code": List of \<code\> elements of the article. - "quote": List of \<disp-quote\> and \<speech\> elements of the article. - "chemical": List of \<chem-struct-wrap\> elements of the article. - "supplementary": List of \<supplementary-material\> and \<inline-supplementary-material\> elements of the article. - "footnote": List of \<fn-group\> and \<table-wrap-foot\> elements of the article. - "graphic": List of \<graphic\> and \<inline-graphic\> elements of the article. - "media": List of \<media\> and \<inline-media\> elements of the article. - "glossary": Glossary if found in the XML - "unknown_references": JSON of a dictionary of each "tag":"text" pair for the references that did not indicate a PMID - "n_references": Total number of references and unknown references - "license": The license of the article - "retracted": Whether the article was retracted or not - "last_updated": Last update of the article - "citation": Citation of the article - "package_file": path to the folder containing the graphics and media files of the article (to append to the base URL: ftp.ncbi.nlm.nih.gov/pub/pmc/) In text, the references are in the form ##KEYWORD##IDX_REF##OLD_TEXT##, with keywords (REF, UREF, FIG, TAB, FORMU, BOX, CODE, QUOTE, CHEM, SUPPL, FOOTN, GRAPH, MEDIA) referencing respectively "pubmed articles" (external), "unknown_references", "figure", "table", "formula", "box", "code", "quote", "chem", "supplementary", "footnote", "graphic" and "media". ### Data Splits [Needs More Information] ## Dataset Creation ### Curation Rationale Internal references (figures, tables, ...) were found using specific tags. 
Deciding on those tags was done by testing and by looking in the documentation for the different kinds of possible usage. Then, to split the article into introduction, methods, results, discussion and conclusion, specific keywords in titles were used. Because there are no rules in this XML for tagging those sections, finding the keyword seemed like the most reliable approach to do so. A drawback is that many sections do not have those keywords in their titles but could be assimilated to those. However, the huge diversity in the titles makes it harder to label such sections. This could be the work of further versions of this dataset. ### Source Data #### Initial Data Collection and Normalization Data was obtained from: - ftp.ncbi.nlm.nih.gov/pub/pmc/oa_bulk/oa_noncomm/xml/ - ftp.ncbi.nlm.nih.gov/pub/pmc/oa_bulk/oa_comm/xml/ - ftp.ncbi.nlm.nih.gov/pub/pmc/oa_bulk/oa_other/xml/ Additional content for individual articles (graphics, media) can be obtained from: - ftp.ncbi.nlm.nih.gov/pub/pmc + "package_file" #### Who are the source language producers? [Needs More Information] ### Annotations #### Annotation process [Needs More Information] #### Who are the annotators? [Needs More Information] ### Personal and Sensitive Information [Needs More Information] ## Considerations for Using the Data ### Social Impact of Dataset [Needs More Information] ### Discussion of Biases The article XMLs are similar across collections. This means that if a certain collection handles the structure in unusual ways, the whole collection might not be as well annotated as others. This concerns all the sections (intro, methods, ...), the external references (pmids) and the internal references (tables, figures, ...). To illustrate that, references are sometimes given as a range (e.g. 10-15). In that case, only references 10 and 15 are linked. This could potentially be handled in a future version. 
### Other Known Limitations [Needs More Information] ### Preprocessing recommendations - Filter out empty contents. - Remove unwanted references from the text, and replace either by the "references_text" or by the reference content itself. - Unescape HTML special characters: `import html; html.unescape(my_text)` - Remove superfluous line break in text. - Remove XML tags (\<italic\>, \<sup\>, \<sub\>, ...), replace by special tokens? - Join the items of the contents' lists. ## Additional Information ### Dataset Curators [Needs More Information] ### Licensing Information https://www.ncbi.nlm.nih.gov/pmc/about/copyright/ Within the PMC Open Access Subset, there are three groupings: Commercial Use Allowed - CC0, CC BY, CC BY-SA, CC BY-ND licenses Non-Commercial Use Only - CC BY-NC, CC BY-NC-SA, CC BY-NC-ND licenses; and Other - no machine-readable Creative Commons license, no license, or a custom license. ### Citation Information [Needs More Information]
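The PMC card above describes in-text references of the form ##KEYWORD##IDX_REF##OLD_TEXT## and recommends stripping them and unescaping HTML entities during preprocessing. A minimal sketch of both steps — the function names are illustrative, and the exact regex is inferred from the description rather than taken from the dataset's code:

```python
import html
import re

# Matches the ##KEYWORD##IDX_REF##OLD_TEXT## reference format described above.
REF_PATTERN = re.compile(
    r"##(REF|UREF|FIG|TAB|FORMU|BOX|CODE|QUOTE|CHEM|SUPPL|FOOTN|GRAPH|MEDIA)"
    r"##(\d+)##(.*?)##"
)

def strip_references(text: str) -> str:
    """Replace each reference token by its original surface text, then unescape HTML."""
    return html.unescape(REF_PATTERN.sub(lambda m: m.group(3), text))

def list_references(text: str):
    """Collect (keyword, index) pairs, e.g. to join FIG indices against the "figure" field."""
    return [(m.group(1), int(m.group(2))) for m in REF_PATTERN.finditer(text)]

sample = "As shown in ##FIG##0##Figure 1##, expression increased ##REF##12##[3]##."
print(strip_references(sample))  # As shown in Figure 1, expression increased [3].
print(list_references(sample))   # [('FIG', 0), ('REF', 12)]
```

The same pattern extends to the other preprocessing recommendations (removing XML tags, joining content lists) by adding further `re.sub` passes.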
Heriot-WattUniversity
null
null
null
false
1
false
Heriot-WattUniversity/CANDOR-corpus
2022-03-20T13:02:29.000Z
null
false
6e708f31c5214bfa48b5ad7551f6fa7a0de4117f
[]
[]
https://huggingface.co/datasets/Heriot-WattUniversity/CANDOR-corpus/resolve/main/README.md
# CANDOR Corpus ### CANDOR = Conversation: A Naturalistic Dataset of Online Recordings The CANDOR corpus is a large, novel, multimodal corpus of 1,656 recorded conversations in spoken English. This 7+ million word, 850 hour corpus totals over 1TB of audio, video, and transcripts, with moment-to-moment measures of vocal, facial, and semantic expression, along with an extensive survey of speakers' post-conversation reflections. This corpus was first introduced by Reece et al. in [Advancing an Interdisciplinary Science of Conversation: Insights from a Large Multimodal Corpus of Human Speech](https://paperswithcode.com/paper/advancing-an-interdisciplinary-science-of)
Heriot-WattUniversity
null
null
null
false
1
false
Heriot-WattUniversity/bAbi-Plus
2022-03-20T13:34:49.000Z
null
false
867b8cdc3c1106ac586d8a9534bbc648e9a91514
[]
[]
https://huggingface.co/datasets/Heriot-WattUniversity/bAbi-Plus/resolve/main/README.md
# bAbi+ corpus **bAbi+** is an extension to the [bAbi corpus by Facebook AI](https://research.facebook.com/downloads/babi/) with some incremental phenomena of dialogue (such as self-corrections, pauses, restarts, etc.) added to it. It was first introduced by Shalyminov et al. (2017). Some of the papers that have used the dataset are: * *Challenging Neural Dialogue Models with Natural Data: Memory Networks Fail on Incremental Phenomena*. Igor Shalyminov, Arash Eshghi, and Oliver Lemon. 2017. In Proceedings of the 21st Workshop on the Semantics and Pragmatics of Dialogue (SemDial 2017 - SaarDial) [[Link](http://semdial.org/anthology/papers/Z/Z17/Z17-3016/)] * *Multi-Task Learning for Domain-General Spoken Disfluency Detection in Dialogue Systems*. Igor Shalyminov, Arash Eshghi, Oliver Lemon. 2018. Proceedings of the 22nd Workshop on the Semantics and Pragmatics of Dialogue - Full Papers (SemDial 2018) [[Link](http://semdial.org/anthology/papers/Z/Z18/Z18-3008/)] [[Code](https://github.com/ishalyminov/multitask_disfluency_detection)]
Heriot-WattUniversity
null
null
null
false
1
false
Heriot-WattUniversity/switchboard
2022-03-20T13:39:57.000Z
null
false
1ce244955f1c192e22c7de4c8f217ed3f34d0915
[]
[]
https://huggingface.co/datasets/Heriot-WattUniversity/switchboard/resolve/main/README.md
# Switchboard Switchboard is a collection of telephone conversations. [[dataset link](https://catalog.ldc.upenn.edu/LDC97S62)] [[Papers with code link](https://paperswithcode.com/dataset/switchboard-1-corpus)]
Heriot-WattUniversity
null
null
null
false
4
false
Heriot-WattUniversity/Groningen-Meaning-Bank
2022-03-20T15:52:28.000Z
null
false
c3164425d06c94eecdfc2966dbd1aedf928a468d
[]
[]
https://huggingface.co/datasets/Heriot-WattUniversity/Groningen-Meaning-Bank/resolve/main/README.md
# Groningen Meaning Bank [[Homepage](https://gmb.let.rug.nl/)]
enimai
null
null
null
false
6
false
enimai/MuST-C-fr
2022-08-30T15:24:53.000Z
null
false
bac8bbe1635bfb036c16bb54a0657467a2df727a
[]
[ "license:apache-2.0", "language:en", "language:fr" ]
https://huggingface.co/datasets/enimai/MuST-C-fr/resolve/main/README.md
--- license: apache-2.0 language: - en - fr ---
dannyvas23
null
null
null
false
1
false
dannyvas23/textosuicidios
2022-03-21T00:03:08.000Z
null
false
66cded2be5d5392e60f0d77f3d027413b84d1e4b
[]
[ "license:afl-3.0" ]
https://huggingface.co/datasets/dannyvas23/textosuicidios/resolve/main/README.md
--- license: afl-3.0 ---
dannyvas23
null
null
null
false
2
false
dannyvas23/notas_suicidios
2022-03-21T01:37:37.000Z
null
false
f024a61cb9987afe7063a0f35b90aa6a16385f3d
[]
[ "license:afl-3.0" ]
https://huggingface.co/datasets/dannyvas23/notas_suicidios/resolve/main/README.md
--- license: afl-3.0 ---
hazal
null
null
null
false
1
false
hazal/electronic-radiology-phd-thesis-trR
2022-08-10T11:13:34.000Z
null
false
f362220b39c6518285689d2616dccaeb318d6970
[]
[ "language:tr" ]
https://huggingface.co/datasets/hazal/electronic-radiology-phd-thesis-trR/resolve/main/README.md
--- language: - tr ---
jacobbieker
null
null
null
false
1
false
jacobbieker/hyperion-clouds
2022-03-21T08:29:52.000Z
null
false
7c4adc955c6443bff71d04af90a1037702bb801f
[]
[ "license:mit" ]
https://huggingface.co/datasets/jacobbieker/hyperion-clouds/resolve/main/README.md
--- license: mit ---
null
null
@InProceedings{godahewa2021monash, author = "Godahewa, Rakshitha and Bergmeir, Christoph and Webb, Geoffrey I. and Hyndman, Rob J. and Montero-Manso, Pablo", title = "Monash Time Series Forecasting Archive", booktitle = "Neural Information Processing Systems Track on Datasets and Benchmarks", year = "2021", note = "forthcoming" }
The Monash Time Series Forecasting Repository contains 30+ datasets of related time series for global forecasting research. It includes both real-world and competition time series datasets covering varied domains.
false
171
false
monash_tsf
2022-11-03T15:51:00.000Z
null
false
9982f1727d5c0a247a22b9e06c63b23fc6489464
[]
[ "annotations_creators:no-annotation", "language_creators:found", "license:cc-by-4.0", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:original", "task_categories:time-series-forecasting", "task_ids:univariate-time-series-forecasting", "task_ids:multivariate-time-series-fo...
https://huggingface.co/datasets/monash_tsf/resolve/main/README.md
--- annotations_creators: - no-annotation language_creators: - found license: - cc-by-4.0 multilinguality: - monolingual pretty_name: Monash Time Series Forecasting Repository size_categories: - 1K<n<10K source_datasets: - original task_categories: - time-series-forecasting task_ids: - univariate-time-series-forecasting - multivariate-time-series-forecasting dataset_info: - config_name: weather features: - name: start dtype: timestamp[s] - name: target sequence: float32 - name: feat_static_cat sequence: uint64 - name: feat_dynamic_real sequence: sequence: float32 - name: item_id dtype: string splits: - name: test num_bytes: 177638713 num_examples: 3010 - name: train num_bytes: 176893738 num_examples: 3010 - name: validation num_bytes: 177266226 num_examples: 3010 download_size: 38820451 dataset_size: 531798677 - config_name: tourism_yearly features: - name: start dtype: timestamp[s] - name: target sequence: float32 - name: feat_static_cat sequence: uint64 - name: feat_dynamic_real sequence: sequence: float32 - name: item_id dtype: string splits: - name: test num_bytes: 71358 num_examples: 518 - name: train num_bytes: 54264 num_examples: 518 - name: validation num_bytes: 62811 num_examples: 518 download_size: 36749 dataset_size: 188433 - config_name: tourism_quarterly features: - name: start dtype: timestamp[s] - name: target sequence: float32 - name: feat_static_cat sequence: uint64 - name: feat_dynamic_real sequence: sequence: float32 - name: item_id dtype: string splits: - name: test num_bytes: 190920 num_examples: 427 - name: train num_bytes: 162738 num_examples: 427 - name: validation num_bytes: 176829 num_examples: 427 download_size: 93833 dataset_size: 530487 - config_name: tourism_monthly features: - name: start dtype: timestamp[s] - name: target sequence: float32 - name: feat_static_cat sequence: uint64 - name: feat_dynamic_real sequence: sequence: float32 - name: item_id dtype: string splits: - name: test num_bytes: 463986 num_examples: 366 - name: train 
num_bytes: 391518 num_examples: 366 - name: validation num_bytes: 427752 num_examples: 366 download_size: 199791 dataset_size: 1283256 - config_name: cif_2016 features: - name: start dtype: timestamp[s] - name: target sequence: float32 - name: feat_static_cat sequence: uint64 - name: feat_dynamic_real sequence: sequence: float32 - name: item_id dtype: string splits: - name: test num_bytes: 31859 num_examples: 72 - name: train num_bytes: 24731 num_examples: 72 - name: validation num_bytes: 28295 num_examples: 72 download_size: 53344 dataset_size: 84885 - config_name: london_smart_meters features: - name: start dtype: timestamp[s] - name: target sequence: float32 - name: feat_static_cat sequence: uint64 - name: feat_dynamic_real sequence: sequence: float32 - name: item_id dtype: string splits: - name: test num_bytes: 687138394 num_examples: 5560 - name: train num_bytes: 684386194 num_examples: 5560 - name: validation num_bytes: 685762294 num_examples: 5560 download_size: 219673439 dataset_size: 2057286882 - config_name: australian_electricity_demand features: - name: start dtype: timestamp[s] - name: target sequence: float32 - name: feat_static_cat sequence: uint64 - name: feat_dynamic_real sequence: sequence: float32 - name: item_id dtype: string splits: - name: test num_bytes: 4765637 num_examples: 5 - name: train num_bytes: 4763162 num_examples: 5 - name: validation num_bytes: 4764400 num_examples: 5 download_size: 5770526 dataset_size: 14293199 - config_name: wind_farms_minutely features: - name: start dtype: timestamp[s] - name: target sequence: float32 - name: feat_static_cat sequence: uint64 - name: feat_dynamic_real sequence: sequence: float32 - name: item_id dtype: string splits: - name: test num_bytes: 710246723 num_examples: 339 - name: train num_bytes: 710078918 num_examples: 339 - name: validation num_bytes: 710162820 num_examples: 339 download_size: 71383130 dataset_size: 2130488461 - config_name: bitcoin features: - name: start dtype: timestamp[s] - 
name: target sequence: float32 - name: feat_static_cat sequence: uint64 - name: feat_dynamic_real sequence: sequence: float32 - name: item_id dtype: string splits: - name: test num_bytes: 340966 num_examples: 18 - name: train num_bytes: 336511 num_examples: 18 - name: validation num_bytes: 338738 num_examples: 18 download_size: 220403 dataset_size: 1016215 - config_name: pedestrian_counts features: - name: start dtype: timestamp[s] - name: target sequence: float32 - name: feat_static_cat sequence: uint64 - name: feat_dynamic_real sequence: sequence: float32 - name: item_id dtype: string splits: - name: test num_bytes: 12923256 num_examples: 66 - name: train num_bytes: 12897120 num_examples: 66 - name: validation num_bytes: 12910188 num_examples: 66 download_size: 4587054 dataset_size: 38730564 - config_name: vehicle_trips features: - name: start dtype: timestamp[s] - name: target sequence: float32 - name: feat_static_cat sequence: uint64 - name: feat_dynamic_real sequence: sequence: float32 - name: item_id dtype: string splits: - name: test num_bytes: 186688 num_examples: 329 - name: train num_bytes: 105261 num_examples: 329 - name: validation num_bytes: 145974 num_examples: 329 download_size: 44914 dataset_size: 437923 - config_name: kdd_cup_2018 features: - name: start dtype: timestamp[s] - name: target sequence: float32 - name: feat_static_cat sequence: uint64 - name: feat_dynamic_real sequence: sequence: float32 - name: item_id dtype: string splits: - name: test num_bytes: 12146966 num_examples: 270 - name: train num_bytes: 12040046 num_examples: 270 - name: validation num_bytes: 12093506 num_examples: 270 download_size: 2456948 dataset_size: 36280518 - config_name: nn5_daily features: - name: start dtype: timestamp[s] - name: target sequence: float32 - name: feat_static_cat sequence: uint64 - name: feat_dynamic_real sequence: sequence: float32 - name: item_id dtype: string splits: - name: test num_bytes: 366110 num_examples: 111 - name: train num_bytes: 314828 
num_examples: 111 - name: validation num_bytes: 340469 num_examples: 111 download_size: 287708 dataset_size: 1021407 - config_name: nn5_weekly features: - name: start dtype: timestamp[s] - name: target sequence: float32 - name: feat_static_cat sequence: uint64 - name: feat_dynamic_real sequence: sequence: float32 - name: item_id dtype: string splits: - name: test num_bytes: 55670 num_examples: 111 - name: train num_bytes: 48344 num_examples: 111 - name: validation num_bytes: 52007 num_examples: 111 download_size: 62043 dataset_size: 156021 - config_name: kaggle_web_traffic features: - name: start dtype: timestamp[s] - name: target sequence: float32 - name: feat_static_cat sequence: uint64 - name: feat_dynamic_real sequence: sequence: float32 - name: item_id dtype: string splits: - name: test num_bytes: 486103806 num_examples: 145063 - name: train num_bytes: 415494391 num_examples: 145063 - name: validation num_bytes: 450799098 num_examples: 145063 download_size: 145485324 dataset_size: 1352397295 - config_name: kaggle_web_traffic_weekly features: - name: start dtype: timestamp[s] - name: target sequence: float32 - name: feat_static_cat sequence: uint64 - name: feat_dynamic_real sequence: sequence: float32 - name: item_id dtype: string splits: - name: test num_bytes: 73816627 num_examples: 145063 - name: train num_bytes: 64242469 num_examples: 145063 - name: validation num_bytes: 69029548 num_examples: 145063 download_size: 28930900 dataset_size: 207088644 - config_name: solar_10_minutes features: - name: start dtype: timestamp[s] - name: target sequence: float32 - name: feat_static_cat sequence: uint64 - name: feat_dynamic_real sequence: sequence: float32 - name: item_id dtype: string splits: - name: test num_bytes: 29707848 num_examples: 137 - name: train num_bytes: 29640033 num_examples: 137 - name: validation num_bytes: 29673941 num_examples: 137 download_size: 4559353 dataset_size: 89021822 - config_name: solar_weekly features: - name: start dtype: timestamp[s] 
- name: target sequence: float32 - name: feat_static_cat sequence: uint64 - name: feat_dynamic_real sequence: sequence: float32 - name: item_id dtype: string splits: - name: test num_bytes: 34265 num_examples: 137 - name: train num_bytes: 28614 num_examples: 137 - name: validation num_bytes: 31439 num_examples: 137 download_size: 24375 dataset_size: 94318 - config_name: car_parts features: - name: start dtype: timestamp[s] - name: target sequence: float32 - name: feat_static_cat sequence: uint64 - name: feat_dynamic_real sequence: sequence: float32 - name: item_id dtype: string splits: - name: test num_bytes: 661379 num_examples: 2674 - name: train num_bytes: 396653 num_examples: 2674 - name: validation num_bytes: 529016 num_examples: 2674 download_size: 39656 dataset_size: 1587048 - config_name: fred_md features: - name: start dtype: timestamp[s] - name: target sequence: float32 - name: feat_static_cat sequence: uint64 - name: feat_dynamic_real sequence: sequence: float32 - name: item_id dtype: string splits: - name: test num_bytes: 325107 num_examples: 107 - name: train num_bytes: 314514 num_examples: 107 - name: validation num_bytes: 319811 num_examples: 107 download_size: 169107 dataset_size: 959432 - config_name: traffic_hourly features: - name: start dtype: timestamp[s] - name: target sequence: float32 - name: feat_static_cat sequence: uint64 - name: feat_dynamic_real sequence: sequence: float32 - name: item_id dtype: string splits: - name: test num_bytes: 62413326 num_examples: 862 - name: train num_bytes: 62071974 num_examples: 862 - name: validation num_bytes: 62242650 num_examples: 862 download_size: 22868806 dataset_size: 186727950 - config_name: traffic_weekly features: - name: start dtype: timestamp[s] - name: target sequence: float32 - name: feat_static_cat sequence: uint64 - name: feat_dynamic_real sequence: sequence: float32 - name: item_id dtype: string splits: - name: test num_bytes: 401046 num_examples: 862 - name: train num_bytes: 344154 
num_examples: 862 - name: validation num_bytes: 372600 num_examples: 862 download_size: 245126 dataset_size: 1117800 - config_name: hospital features: - name: start dtype: timestamp[s] - name: target sequence: float32 - name: feat_static_cat sequence: uint64 - name: feat_dynamic_real sequence: sequence: float32 - name: item_id dtype: string splits: - name: test num_bytes: 293558 num_examples: 767 - name: train num_bytes: 217625 num_examples: 767 - name: validation num_bytes: 255591 num_examples: 767 download_size: 78110 dataset_size: 766774 - config_name: covid_deaths features: - name: start dtype: timestamp[s] - name: target sequence: float32 - name: feat_static_cat sequence: uint64 - name: feat_dynamic_real sequence: sequence: float32 - name: item_id dtype: string splits: - name: test num_bytes: 242187 num_examples: 266 - name: train num_bytes: 176352 num_examples: 266 - name: validation num_bytes: 209270 num_examples: 266 download_size: 27335 dataset_size: 627809 - config_name: sunspot features: - name: start dtype: timestamp[s] - name: target sequence: float32 - name: feat_static_cat sequence: uint64 - name: feat_dynamic_real sequence: sequence: float32 - name: item_id dtype: string splits: - name: test num_bytes: 304974 num_examples: 1 - name: train num_bytes: 304726 num_examples: 1 - name: validation num_bytes: 304850 num_examples: 1 download_size: 68865 dataset_size: 914550 - config_name: saugeenday features: - name: start dtype: timestamp[s] - name: target sequence: float32 - name: feat_static_cat sequence: uint64 - name: feat_dynamic_real sequence: sequence: float32 - name: item_id dtype: string splits: - name: test num_bytes: 97969 num_examples: 1 - name: train num_bytes: 97722 num_examples: 1 - name: validation num_bytes: 97845 num_examples: 1 download_size: 28721 dataset_size: 293536 - config_name: us_births features: - name: start dtype: timestamp[s] - name: target sequence: float32 - name: feat_static_cat sequence: uint64 - name: feat_dynamic_real 
sequence: sequence: float32 - name: item_id dtype: string splits: - name: test num_bytes: 30171 num_examples: 1 - name: train num_bytes: 29923 num_examples: 1 - name: validation num_bytes: 30047 num_examples: 1 download_size: 16332 dataset_size: 90141 - config_name: solar_4_seconds features: - name: start dtype: timestamp[s] - name: target sequence: float32 - name: feat_static_cat sequence: uint64 - name: feat_dynamic_real sequence: sequence: float32 - name: item_id dtype: string splits: - name: test num_bytes: 30513578 num_examples: 1 - name: train num_bytes: 30513083 num_examples: 1 - name: validation num_bytes: 30513331 num_examples: 1 download_size: 794502 dataset_size: 91539992 - config_name: wind_4_seconds features: - name: start dtype: timestamp[s] - name: target sequence: float32 - name: feat_static_cat sequence: uint64 - name: feat_dynamic_real sequence: sequence: float32 - name: item_id dtype: string splits: - name: test num_bytes: 30513269 num_examples: 1 - name: train num_bytes: 30512774 num_examples: 1 - name: validation num_bytes: 30513021 num_examples: 1 download_size: 2226184 dataset_size: 91539064 - config_name: rideshare features: - name: start dtype: timestamp[s] - name: target sequence: sequence: float32 - name: feat_static_cat sequence: uint64 - name: feat_dynamic_real sequence: sequence: float32 - name: item_id dtype: string splits: - name: test num_bytes: 5161435 num_examples: 156 - name: train num_bytes: 4249051 num_examples: 156 - name: validation num_bytes: 4705243 num_examples: 156 download_size: 1031826 dataset_size: 14115729 - config_name: oikolab_weather features: - name: start dtype: timestamp[s] - name: target sequence: float32 - name: feat_static_cat sequence: uint64 - name: feat_dynamic_real sequence: sequence: float32 - name: item_id dtype: string splits: - name: test num_bytes: 3302310 num_examples: 8 - name: train num_bytes: 3299142 num_examples: 8 - name: validation num_bytes: 3300726 num_examples: 8 download_size: 1326101 
dataset_size: 9902178 - config_name: temperature_rain features: - name: start dtype: timestamp[s] - name: target sequence: sequence: float32 - name: feat_static_cat sequence: uint64 - name: feat_dynamic_real sequence: sequence: float32 - name: item_id dtype: string splits: - name: test num_bytes: 96059286 num_examples: 422 - name: train num_bytes: 88121466 num_examples: 422 - name: validation num_bytes: 92090376 num_examples: 422 download_size: 25747139 dataset_size: 276271128 --- # Dataset Card for Monash Time Series Forecasting Repository ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [Monash Time Series Forecasting Repository](https://forecastingdata.org/) - **Repository:** [Monash Time Series Forecasting Repository code repository](https://github.com/rakshitha123/TSForecasting) - **Paper:** [Monash Time Series Forecasting Archive](https://openreview.net/pdf?id=wEc1mgAjU-) - **Leaderboard:** [Baseline 
Results](https://forecastingdata.org/#results) - **Point of Contact:** [Rakshitha Godahewa](mailto:rakshitha.godahewa@monash.edu) ### Dataset Summary The first comprehensive time series forecasting repository containing datasets of related time series to facilitate the evaluation of global forecasting models. All datasets are intended for research purposes only. Our repository contains 30 datasets, including both publicly available time series datasets (in different formats) and datasets curated by us. Many datasets have different versions based on the frequency and the inclusion of missing values, bringing the total number of dataset variations to 58. Furthermore, it includes both real-world and competition time series datasets covering varied domains. The following table shows a list of datasets available: | Name | Domain | No. of series | Freq. | Pred. Len. | Source | |-------------------------------|-----------|---------------|--------|------------|-------------------------------------------------------------------------------------------------------------------------------------| | weather | Nature | 3010 | 1D | 30 | [Sparks et al., 2020](https://cran.r-project.org/web/packages/bomrang) | | tourism_yearly | Tourism | 1311 | 1Y | 4 | [Athanasopoulos et al., 2011](https://doi.org/10.1016/j.ijforecast.2010.04.009) | | tourism_quarterly | Tourism | 1311 | 1Q-JAN | 8 | [Athanasopoulos et al., 2011](https://doi.org/10.1016/j.ijforecast.2010.04.009) | | tourism_monthly | Tourism | 1311 | 1M | 24 | [Athanasopoulos et al., 2011](https://doi.org/10.1016/j.ijforecast.2010.04.009) | | cif_2016 | Banking | 72 | 1M | 12 | [Stepnicka and Burda, 2017](https://doi.org/10.1109/FUZZ-IEEE.2017.8015455) | | london_smart_meters | Energy | 5560 | 30T | 60 | [Jean-Michel, 2019](https://www.kaggle.com/jeanmidev/smart-meters-in-london) | | australian_electricity_demand | Energy | 5 | 30T | 60 | [Godahewa et al.
2021](https://openreview.net/pdf?id=wEc1mgAjU-) | | wind_farms_minutely | Energy | 339 | 1T | 60 | [Godahewa et al. 2021](https://openreview.net/pdf?id=wEc1mgAjU- ) | | bitcoin | Economic | 18 | 1D | 30 | [Godahewa et al. 2021](https://openreview.net/pdf?id=wEc1mgAjU- ) | | pedestrian_counts | Transport | 66 | 1H | 48 | [City of Melbourne, 2020](https://data.melbourne.vic.gov.au/Transport/Pedestrian-Counting-System-Monthly-counts-per-hour/b2ak-trbp) | | vehicle_trips | Transport | 329 | 1D | 30 | [fivethirtyeight, 2015](https://github.com/fivethirtyeight/uber-tlc-foil-response) | | kdd_cup_2018 | Nature | 270 | 1H | 48 | [KDD Cup, 2018](https://www.kdd.org/kdd2018/kdd-cup) | | nn5_daily | Banking | 111 | 1D | 56 | [Ben Taieb et al., 2012](https://doi.org/10.1016/j.eswa.2012.01.039) | | nn5_weekly | Banking | 111 | 1W-MON | 8 | [Ben Taieb et al., 2012](https://doi.org/10.1016/j.eswa.2012.01.039) | | kaggle_web_traffic | Web | 145063 | 1D | 59 | [Google, 2017](https://www.kaggle.com/c/web-traffic-time-series-forecasting) | | kaggle_web_traffic_weekly | Web | 145063 | 1W-WED | 8 | [Google, 2017](https://www.kaggle.com/c/web-traffic-time-series-forecasting) | | solar_10_minutes | Energy | 137 | 10T | 60 | [Solar, 2020](https://www.nrel.gov/grid/solar-power-data.html) | | solar_weekly | Energy | 137 | 1W-SUN | 5 | [Solar, 2020](https://www.nrel.gov/grid/solar-power-data.html) | | car_parts | Sales | 2674 | 1M | 12 | [Hyndman, 2015](https://cran.r-project.org/web/packages/expsmooth/) | | fred_md | Economic | 107 | 1M | 12 | [McCracken and Ng, 2016](https://doi.org/10.1080/07350015.2015.1086655) | | traffic_hourly | Transport | 862 | 1H | 48 | [Caltrans, 2020](http://pems.dot.ca.gov/) | | traffic_weekly | Transport | 862 | 1W-WED | 8 | [Caltrans, 2020](http://pems.dot.ca.gov/) | | hospital | Health | 767 | 1M | 12 | [Hyndman, 2015](https://cran.r-project.org/web/packages/expsmooth/) | | covid_deaths | Health | 266 | 1D | 30 | [Johns Hopkins University, 
2020](https://github.com/CSSEGISandData/COVID-19) | | sunspot | Nature | 1 | 1D | 30 | [Sunspot, 2015](http://www.sidc.be/silso/newdataset) | | saugeenday | Nature | 1 | 1D | 30 | [McLeod and Gweon, 2013](http://www.jenvstat.org/v04/i11) | | us_births | Health | 1 | 1D | 30 | [Pruim et al., 2020](https://cran.r-project.org/web/packages/mosaicData) | | solar_4_seconds | Energy | 1 | 4S | 60 | [Godahewa et al. 2021](https://openreview.net/pdf?id=wEc1mgAjU-) | | wind_4_seconds | Energy | 1 | 4S | 60 | [Godahewa et al. 2021](https://openreview.net/pdf?id=wEc1mgAjU-) | | rideshare | Transport | 2304 | 1H | 48 | [Godahewa et al. 2021](https://openreview.net/pdf?id=wEc1mgAjU-) | | oikolab_weather | Nature | 8 | 1H | 48 | [Oikolab](https://oikolab.com/) | | temperature_rain | Nature | 32072 | 1D | 30 | [Godahewa et al. 2021](https://openreview.net/pdf?id=wEc1mgAjU-) | ### Dataset Usage To load a particular dataset, just specify its name from the table above, e.g.: ```python load_dataset("monash_tsf", "nn5_daily") ``` > Notes: > - Data might contain missing values, as in the original datasets. > - The prediction length is either specified in the dataset, or a frequency-dependent default value is used, as in the original repository benchmark. ### Supported Tasks and Leaderboards #### `time-series-forecasting` ##### `univariate-time-series-forecasting` The univariate time series forecasting task involves learning the future one-dimensional `target` values of a time series in a dataset for some `prediction_length` time steps. The performance of the forecast models can then be validated via the ground truth in the `validation` split and tested via the `test` split. ##### `multivariate-time-series-forecasting` The multivariate time series forecasting task involves learning the future vector of `target` values of a time series in a dataset for some `prediction_length` time steps.
Similar to the univariate setting, the performance of a multivariate model can be validated via the ground truth in the `validation` split and tested via the `test` split. ### Languages ## Dataset Structure ### Data Instances A sample from the training set is provided below: ```python { 'start': datetime.datetime(2012, 1, 1, 0, 0), 'target': [14.0, 18.0, 21.0, 20.0, 22.0, 20.0, ...], 'feat_static_cat': [0], 'feat_dynamic_real': [[0.3, 0.4], [0.1, 0.6], ...], 'item_id': '0' } ``` ### Data Fields For the univariate regular time series, each series has the following keys: * `start`: a datetime of the first entry of each time series in the dataset * `target`: an array[float32] of the actual target values * `feat_static_cat`: an array[uint64] which contains a categorical identifier of each time series in the dataset * `feat_dynamic_real`: optional array of covariate features * `item_id`: a string identifier of each time series in a dataset for reference For the multivariate time series, the `target` is a vector of the multivariate dimension for each time point. ### Data Splits The datasets are split in time depending on the prediction length specified in the datasets. In particular, for each time series in a dataset there is a prediction-length window of the future in the validation split and another prediction length more in the test split. ## Dataset Creation ### Curation Rationale To facilitate the evaluation of global forecasting models. All datasets in our repository are intended for research purposes and to evaluate the performance of new forecasting algorithms. ### Source Data #### Initial Data Collection and Normalization Out of the 30 datasets, 23 were already publicly available on different platforms in different data formats. The original sources of all datasets are mentioned in the datasets table above.
After extracting and curating these datasets, we analysed them individually to identify the datasets containing series with different frequencies and missing observations. Nine datasets contain time series belonging to different frequencies, and the archive contains a separate dataset for each frequency. #### Who are the source language producers? The data comes from the datasets listed in the table above. ### Annotations #### Annotation process The annotations come from the datasets listed in the table above. #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators * [Rakshitha Godahewa](mailto:rakshitha.godahewa@monash.edu) * [Christoph Bergmeir](mailto:christoph.bergmeir@monash.edu) * [Geoff Webb](mailto:geoff.webb@monash.edu) * [Rob Hyndman](mailto:rob.hyndman@monash.edu) * [Pablo Montero-Manso](mailto:pablo.monteromanso@sydney.edu.au) ### Licensing Information [Creative Commons Attribution 4.0 International](https://creativecommons.org/licenses/by/4.0/legalcode) ### Citation Information ```tex @InProceedings{godahewa2021monash, author = "Godahewa, Rakshitha and Bergmeir, Christoph and Webb, Geoffrey I. and Hyndman, Rob J. and Montero-Manso, Pablo", title = "Monash Time Series Forecasting Archive", booktitle = "Neural Information Processing Systems Track on Datasets and Benchmarks", year = "2021", note = "forthcoming" } ``` ### Contributions Thanks to [@kashif](https://github.com/kashif) for adding this dataset.
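The overlapping split convention described in the Data Splits section (each series' validation target extends the train target by one prediction window, and the test target by one more) can be sketched on a toy series. This is a minimal illustration of the convention only, not the loader's actual implementation:

```python
def monash_splits(full_series, prediction_length):
    # Train ends two prediction windows before the series end;
    # validation extends train by one window; test holds the full series.
    p = prediction_length
    train = full_series[: len(full_series) - 2 * p]
    validation = full_series[: len(full_series) - p]
    test = list(full_series)
    return train, validation, test

series = list(range(20))  # toy series of length 20
train, val, test = monash_splits(series, prediction_length=4)
print(len(train), len(val), len(test))  # 12 16 20
```

Metrics are then computed on the last `prediction_length` values of the validation and test targets.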
xiongshunjie
null
null
null
false
1
false
xiongshunjie/ProDataset
2022-03-21T11:30:20.000Z
null
false
cdf7be8e4e84152a48415aa0f86e11f222365f48
[]
[ "license:apache-2.0" ]
https://huggingface.co/datasets/xiongshunjie/ProDataset/resolve/main/README.md
--- license: apache-2.0 ---
Cheltone
null
null
null
false
1
false
Cheltone/MyTwitter
2022-03-21T14:51:28.000Z
null
false
ccd927007e794cf0a8794aee8482c6dec66ff6fb
[]
[]
https://huggingface.co/datasets/Cheltone/MyTwitter/resolve/main/README.md
Twitter 3.21
blo05
null
null
null
false
1
false
blo05/cleaned_wiki_en
2022-03-30T10:12:38.000Z
null
false
b6a5fc413080ac48e2ad89fb86a0e4f624ec02e3
[]
[]
https://huggingface.co/datasets/blo05/cleaned_wiki_en/resolve/main/README.md
Cleaned wikipedia dataset
fangyuan
null
@inproceedings{xu2022lfqadiscourse, title = {How Do We Answer Complex Questions: Discourse Structure of Long-form Answers}, author = {Xu, Fangyuan and Li, Junyi Jessy and Choi, Eunsol}, year = 2022, booktitle = {Proceedings of the Annual Meeting of the Association for Computational Linguistics}, note = {Long paper} }
LFQA discourse contains discourse annotations of long-form answers. - [VALIDITY]: Validity annotations of (question, answer) pairs. - [ROLE]: Role annotations of valid answer paragraphs.
false
1
false
fangyuan/lfqa_discourse
2022-07-01T15:29:22.000Z
null
false
285237c06959f3e9e749f9533088083925ae761f
[]
[ "arxiv:2203.11048", "annotations_creators:crowdsourced", "annotations_creators:expert-generated", "language_creators:machine-generated", "language_creators:found", "language:en-US", "license:cc-by-sa-4.0", "multilinguality:monolingual", "size_categories:unknown", "source_datasets:extended|natural_...
https://huggingface.co/datasets/fangyuan/lfqa_discourse/resolve/main/README.md
--- annotations_creators: - crowdsourced - expert-generated language_creators: - machine-generated - found language: - en-US license: - cc-by-sa-4.0 multilinguality: - monolingual pretty_name: lfqa_discourse size_categories: - unknown source_datasets: - extended|natural_questions - extended|eli5 task_categories: [] task_ids: [] --- # Dataset Card for LFQA Discourse ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Repository:** [Repo](https://github.com/utcsnlp/lfqa_discourse) - **Paper:** [How Do We Answer Complex Questions: Discourse Structure of Long-form Answers](https://arxiv.org/abs/2203.11048) - **Point of Contact:** fangyuan[at]utexas.edu ### Dataset Summary This dataset contains discourse annotation of long-form answers. There are two types of annotations: * **Validity:** whether a <question, answer> pair is valid based on a set of invalid reasons defined. * **Role:** sentence-level role annotation of functional roles for long-form answers. ### Languages The dataset contains data in English. ## Dataset Structure ### Data Instances Each instance is a (question, long-form answer) pair from one of the four data sources -- ELI5, WebGPT, NQ, and model-generated answers (denoted as ELI5-model), and our discourse annotation, which consists of QA-pair level validity label and sentence-level functional role label. We provide all validity and role annotations here. 
For further train/val/test splits, please refer to our [github repository](https://github.com/utcsnlp/lfqa_discourse). ### Data Fields For validity annotations, each instance contains the following fields: * `dataset`: The dataset this QA pair belongs to, one of [`NQ`, `ELI5`, `Web-GPT`]. Note that `ELI5` contains both human-written answers and model-generated answers, with model-generated answers distinguished by the `a_id` field mentioned below. * `q_id`: The question id, same as in the original NQ or ELI5 dataset. * `a_id`: The answer id, same as in the original ELI5 dataset. For NQ, we populate a dummy `a_id` (1). For machine-generated answers, this field corresponds to the name of the model. * `question`: The question. * `answer_paragraph`: The answer paragraph. * `answer_sentences`: The list of answer sentences, tokenized from the answer paragraph. * `is_valid`: A boolean value indicating whether the QA pair is valid, values: [`True`, `False`]. * `invalid_reason`: A list of lists; each list contains the invalid reasons the annotator selected. The invalid reason is one of [`no_valid_answer`, `nonsensical_question`, `assumptions_rejected`, `multiple_questions`]. For role annotations, each instance contains the following fields: * `dataset`: The dataset this QA pair belongs to, one of [`NQ`, `ELI5`, `Web-GPT`]. Note that `ELI5` contains both human-written answers and model-generated answers, with model-generated answers distinguished by the `a_id` field mentioned below. * `q_id`: The question id, same as in the original NQ or ELI5 dataset. * `a_id`: The answer id, same as in the original ELI5 dataset. For NQ, we populate a dummy `a_id` (1). For machine-generated answers, this field corresponds to the name of the model. * `question`: The question. * `answer_paragraph`: The answer paragraph. * `answer_sentences`: The list of answer sentences, tokenized from the answer paragraph.
* `role_annotation`: The list of majority (or adjudicated, where it exists) roles for the sentences in `answer_sentences`. Each role is one of [`Answer`, `Answer - Example`, `Answer (Summary)`, `Auxiliary Information`, `Answer - Organizational sentence`, `Miscellaneous`] * `raw_role_annotation`: A list of lists; each list contains the raw role annotations for the sentences in `answer_sentences`. ### Data Splits For train/validation/test splits, please refer to our [repository](https://github.com/utcsnlp/lfqa_discourse). ## Dataset Creation Please refer to our [paper](https://arxiv.org/abs/2203.11048) and datasheet for details on dataset creation, the annotation process and a discussion of limitations. ## Additional Information ### Licensing Information https://creativecommons.org/licenses/by-sa/4.0/legalcode ### Citation Information ``` @inproceedings{xu2022lfqadiscourse, title = {How Do We Answer Complex Questions: Discourse Structure of Long-form Answers}, author = {Xu, Fangyuan and Li, Junyi Jessy and Choi, Eunsol}, year = 2022, booktitle = {Proceedings of the Annual Meeting of the Association for Computational Linguistics}, note = {Long paper} } ``` ### Contributions Thanks to [@carriex](https://github.com/carriex) for adding this dataset.
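Since `answer_sentences` and `role_annotation` are parallel lists, pairing each sentence with its role is a simple zip. The record below is a hypothetical example mirroring the documented fields, not an actual row of the dataset:

```python
record = {  # hypothetical record following the field layout described above
    "q_id": "0",
    "question": "Why is the sky blue?",
    "answer_sentences": [
        "Sunlight scatters off air molecules.",
        "Shorter blue wavelengths scatter the most.",
        "Fun fact: sunsets look red for the same reason.",
    ],
    "role_annotation": ["Answer", "Answer", "Auxiliary Information"],
}

def sentence_roles(rec):
    # Pair each tokenized answer sentence with its majority/adjudicated role.
    return list(zip(rec["answer_sentences"], rec["role_annotation"]))

for sentence, role in sentence_roles(record):
    print(f"{role}: {sentence}")
```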
EALeon16
null
null
null
false
1
false
EALeon16/autonlp-data-pruebapoems
2022-10-25T10:03:29.000Z
null
false
b1cb0eb42393e09d5b9090c60a1f55d59273dbfb
[]
[ "language:es", "task_categories:text-classification" ]
https://huggingface.co/datasets/EALeon16/autonlp-data-pruebapoems/resolve/main/README.md
--- language: - es task_categories: - text-classification --- # AutoNLP Dataset for project: pruebapoems ## Dataset Description This dataset has been automatically processed by AutoNLP for project pruebapoems. ### Languages The BCP-47 code for the dataset's language is es. ## Dataset Structure ### Data Instances A sample from this dataset looks as follows: ```json [ { "text": "When I was fair and young, then favor graced me.\r\nOf many was I sought their mistress for to be.\r\nBu[...]", "target": 1 }, { "text": "Sigh no more, ladies, sigh no more.\r\n Men were deceivers ever,\r\nOne foot in sea, and one on shore[...]", "target": 0 } ] ``` ### Dataset Fields The dataset has the following fields (also called "features"): ```json { "text": "Value(dtype='string', id=None)", "target": "ClassLabel(num_classes=3, names=['Love', 'Mythology & Folklore', 'Nature'], id=None)" } ``` ### Dataset Splits This dataset is split into a train and validation split. The split sizes are as follows: | Split name | Num samples | | ------------ | ------------------- | | train | 457 | | valid | 116 |
hackathon-pln-es
null
null
null
false
1
false
hackathon-pln-es/comentarios_depresivos
2022-04-01T01:40:06.000Z
null
false
dc52efd76d818fcd4d0a3b4cc1d6579486b92a0a
[]
[ "license:cc-by-sa-4.0" ]
https://huggingface.co/datasets/hackathon-pln-es/comentarios_depresivos/resolve/main/README.md
--- license: cc-by-sa-4.0 --- The database consists of 192,347 rows of data for training, 33,944 for testing, and 22,630 for validation. Its content comprises suicidal comments and normal comments from the social network Reddit, translated into Spanish, obtained from the Suicide and Depression Detection database by Nikhileswar Komati, which can be viewed at: https://www.kaggle.com/datasets/nikhileswarkomati/suicide-watch Authors - Danny Vásquez - César Salazar - Alexis Cañar - Yannela Castro - Daniel Patiño
hackathon-pln-es
null
null
null
false
1
false
hackathon-pln-es/poems-es
2022-03-27T18:39:08.000Z
null
false
1b7f73b6c66efd03e28c7f409895c878684675b5
[]
[ "license:wtfpl" ]
https://huggingface.co/datasets/hackathon-pln-es/poems-es/resolve/main/README.md
--- license: wtfpl --- Dataset downloaded from kaggle.com. The original file contained information in English, which was later translated for this use. The dataset contains the columns: - Author: the author of the poem. - Content: the full text of the poem. - Poem name: the title of the poem. - Years: the period in which the poem was written. - Type: the genre to which the poem belongs.
IIC
null
null
null
false
1
false
IIC/bioasq22_es
2022-10-23T05:18:18.000Z
null
false
bf9af8a68334b40d7daa9523cb47a116833f64c2
[]
[ "annotations_creators:no-annotation", "language_creators:crowdsourced", "language:es", "multilinguality:monolingual", "size_categories:100K<n<1M", "source_datasets:Helsinki-NLP/opus-mt-en-es", "task_ids:language-modeling" ]
https://huggingface.co/datasets/IIC/bioasq22_es/resolve/main/README.md
--- annotations_creators: - no-annotation language_creators: - crowdsourced language: - es multilinguality: - monolingual pretty_name: BIOASQ size_categories: - 100K<n<1M source_datasets: - Helsinki-NLP/opus-mt-en-es task_categories: - sequence-modeling task_ids: - language-modeling --- # BIOASQ 2022 Spanish This is an automatically translated version of the BioASQ dataset, a dataset used for question answering in the biomedical domain. The questions, answers and contexts were translated using the [MarianMT English-Spanish model](https://huggingface.co/Helsinki-NLP/opus-mt-en-es). As the translation process may return answers that are not 100% present in the context, we developed an algorithm based on sentence tokenization and on intersecting the words present in the answer with those in the portion of the context being evaluated, which then extracts the paragraph from the context that matches the answer. License, distribution and usage conditions of the original dataset apply. ### Contributions Thanks to [@avacaondata](https://huggingface.co/avacaondata), [@alborotis](https://huggingface.co/alborotis), [@albarji](https://huggingface.co/albarji), [@Dabs](https://huggingface.co/Dabs), [@GuillemGSubies](https://huggingface.co/GuillemGSubies) for adding this dataset.
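The answer-to-context matching described above can be sketched roughly as follows — a simplified word-overlap version; the actual algorithm used to build the dataset may differ in its tokenization and scoring details:

```python
import re

def align_answer(answer, context):
    # Score each context sentence by the fraction of answer words it covers
    # and return the best-matching sentence.
    answer_words = set(re.findall(r"\w+", answer.lower()))
    best_sentence, best_score = None, -1.0
    for sentence in re.split(r"(?<=[.!?])\s+", context):
        words = set(re.findall(r"\w+", sentence.lower()))
        score = len(answer_words & words) / max(len(answer_words), 1)
        if score > best_score:
            best_sentence, best_score = sentence, score
    return best_sentence

context = ("La aspirina inhibe la enzima COX. "
           "Esto reduce la producción de prostaglandinas.")
print(align_answer("inhibición de la enzima COX", context))
# -> La aspirina inhibe la enzima COX.
```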
nedroden
null
null
null
false
1
false
nedroden/nlcity
2022-03-22T10:06:37.000Z
null
false
a231f6a9b437ed1527687e6ddf180c78978b9d78
[]
[ "license:cc" ]
https://huggingface.co/datasets/nedroden/nlcity/resolve/main/README.md
--- license: cc ---
archmagos
null
null
null
false
1
false
archmagos/HourAI-data
2022-03-22T20:26:21.000Z
null
false
45c0b11a67f833a92e8e04fbaa2577e1c9f75a63
[]
[]
https://huggingface.co/datasets/archmagos/HourAI-data/resolve/main/README.md
# HourAI-data Conversational data used to finetune HourAI. Parsed from: [omoito](https://dynasty-scans.com/series/omoito) Added some testing conversations that looked OK as well.
emrecan
null
null
null
false
1
false
emrecan/nli_tr_for_simcse
2022-10-25T10:55:10.000Z
null
false
976d9a18b30d4b77f6c12be552b8924903990794
[]
[ "language:tr", "size_categories:100K<n<1M", "source_datasets:nli_tr", "task_categories:text-classification", "task_ids:semantic-similarity-scoring", "task_ids:text-scoring" ]
https://huggingface.co/datasets/emrecan/nli_tr_for_simcse/resolve/main/README.md
--- language: - tr size_categories: - 100K<n<1M source_datasets: - nli_tr task_categories: - text-classification task_ids: - semantic-similarity-scoring - text-scoring --- # NLI-TR for Supervised SimCSE This dataset is a modified version of the [NLI-TR](https://huggingface.co/datasets/nli_tr) dataset. Its intended use is to train supervised [SimCSE](https://github.com/princeton-nlp/SimCSE) models for sentence embeddings. The steps followed to produce this dataset are listed below: 1. Merge the train splits of the snli_tr and multinli_tr subsets. 2. Find every premise that has an entailment hypothesis **and** a contradiction hypothesis. 3. Write the found triplets in sent0 (premise), sent1 (entailment hypothesis), hard_neg (contradiction hypothesis) format.
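The three steps above can be sketched as follows, assuming examples in the usual NLI label convention (0 = entailment, 2 = contradiction); the actual preprocessing script may differ in its details:

```python
from collections import defaultdict

ENTAILMENT, CONTRADICTION = 0, 2

def build_triplets(nli_examples):
    # Group hypotheses by premise and label, then emit one triplet per
    # (entailment, contradiction) pair sharing the same premise.
    grouped = defaultdict(lambda: {ENTAILMENT: [], CONTRADICTION: []})
    for ex in nli_examples:
        if ex["label"] in (ENTAILMENT, CONTRADICTION):
            grouped[ex["premise"]][ex["label"]].append(ex["hypothesis"])
    triplets = []
    for premise, hyps in grouped.items():
        for ent in hyps[ENTAILMENT]:
            for con in hyps[CONTRADICTION]:
                triplets.append({"sent0": premise, "sent1": ent, "hard_neg": con})
    return triplets

examples = [  # toy Turkish NLI examples for illustration
    {"premise": "Bir adam koşuyor.", "hypothesis": "Bir insan hareket ediyor.", "label": 0},
    {"premise": "Bir adam koşuyor.", "hypothesis": "Bir adam uyuyor.", "label": 2},
    {"premise": "Kedi uyuyor.", "hypothesis": "Bir hayvan dinleniyor.", "label": 0},
]
print(build_triplets(examples))
```

Note that a premise with an entailment but no contradiction (the last toy example) produces no triplet.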
d0r1h
null
null
null
false
1
false
d0r1h/Real_vs_Fake
2022-03-22T13:24:29.000Z
null
false
3ced201c9bbc5d73918f0b66ec8e22f1a82a8eed
[]
[ "license:afl-3.0" ]
https://huggingface.co/datasets/d0r1h/Real_vs_Fake/resolve/main/README.md
--- license: afl-3.0 ---
Carlos89apc
null
null
null
false
1
false
Carlos89apc/TraductorES_Kichwa
2022-03-22T14:04:09.000Z
null
false
a29926a4f351dd86b0df1e556a2fd28547ef596d
[]
[ "license:gpl" ]
https://huggingface.co/datasets/Carlos89apc/TraductorES_Kichwa/resolve/main/README.md
--- license: gpl ---
sayalaruano
null
null
null
false
1
false
sayalaruano/FakeNewsCorpusSpanish
2022-03-22T14:37:06.000Z
null
false
de49bc6dc80030d41ac50d2f3e981bbe78f51e47
[]
[]
https://huggingface.co/datasets/sayalaruano/FakeNewsCorpusSpanish/resolve/main/README.md
# :newspaper: The Spanish Fake News Corpus ![GitHub](https://img.shields.io/github/license/jpposadas/FakeNewsCorpusSpanish) ![GitHub repo size](https://img.shields.io/github/repo-size/jpposadas/FakeNewsCorpusSpanish) ![GitHub last commit](https://img.shields.io/github/last-commit/jpposadas/FakeNewsCorpusSpanish) ![GitHub stars](https://img.shields.io/github/stars/jpposadas/FakeNewsCorpusSpanish) ## The Spanish Fake News Corpus Version 2.0 [[ FakeDeS Task @ Iberlef 2021 ]] :metal: ### Corpus Description The Spanish Fake News Corpus Version 2.0 contains pairs of fake and true publications about different events (all of them written in Spanish) that were collected from **November 2020 to March 2021**. Different sources from the web were used to gather the information, mainly of two types: 1) newspapers and media companies' websites, and 2) fact-checking websites. Most of the reviewed fact-checking sites follow the recommendations of the International [Fact-Checking Network (IFCN)](https://ifcncodeofprinciples.poynter.org/), which seeks to promote good practice in fact-checking. The assembled corpus has **572 instances**, and the instances were labeled using two classes, true or fake. The test corpus is balanced with respect to these two classes. To compile the true-fake news pairs of the test corpus, the following guidelines were followed: - A fake news item is added to the corpus if any of the selected fact-checking sites determines it. - Given a fake news item, its true news counterpart is added if there is evidence that it has been published on a reliable site (established newspaper site or media site). The topics covered in the corpus are: **Science, Sport, Politics, Society, COVID-19, Environment, and International**. The corpus includes mostly news articles; however, on this occasion social media posts were also included in the category of fake news. Exactly 90 posts were included as fake news (15.73\% of the total).
These posts were recovered mainly from Facebook and WhatsApp. The use of the various fact-checking sites involved consulting pages from different countries (in addition to Mexico) that offer content in Spanish, so different variants of Spanish are included in the test corpus. These sites included countries like Argentina, Bolivia, Chile, Colombia, Costa Rica, Ecuador, Spain, United States, France, Peru, Uruguay, England and Venezuela. The corpus is concentrated in the file test.xlsx. The meaning of the columns is described next: <ul> <li><b>Id</b>: assigns an identifier to each instance.</li> <li><b>Category</b>: indicates the category of the news (true or fake).</li> <li><b>Topic</b>: indicates the topic related to the news.</li> <li><b>Source</b>: indicates the name of the source.</li> <li><b>Headline</b>: contains the headline of the news.</li> <li><b>Text</b>: contains the raw text of the news.</li> <li><b>Link</b>: contains the URL of the source.</li> </ul> Note that some instances intentionally have an empty headline because the source omitted it. ### :pencil: How to cite If you use the corpus, please cite the following articles: 1) Gómez-Adorno, H., Posadas-Durán, J. P., Enguix, G. B., & Capetillo, C. P. (2021). Overview of FakeDeS at IberLEF 2021: Fake News Detection in Spanish Shared Task. Procesamiento del Lenguaje Natural, 67, 223-231. 2) Aragón, M. E., Jarquín, H., Gómez, M. M. Y., Escalante, H. J., Villaseñor-Pineda, L., Gómez-Adorno, H., ... & Posadas-Durán, J. P. (2020, September). Overview of mex-a3t at iberlef 2020: Fake news and aggressiveness analysis in mexican spanish. In Notebook Papers of 2nd SEPLN Workshop on Iberian Languages Evaluation Forum (IberLEF), Malaga, Spain. 3) Posadas-Durán, J. P., Gómez-Adorno, H., Sidorov, G., & Escobar, J. J. M. (2019). Detection of fake news in a new corpus for the Spanish language. Journal of Intelligent & Fuzzy Systems, 36(5), 4869-4876.
### FakeDeS @ IberLef 2021 >> The corpus was used for the **Fake News Detection in Spanish (FakeDeS)** shared task at the IberLEF 2021 congress. The details of the competition can be viewed on the main page of the [competition](https://sites.google.com/view/fakedes). ### Organizers - Helena Montserrat Gómez Adorno (IIMAS - UNAM) - Juan Pablo Francisco Posadas Durán (ESIME Zacatenco - IPN) - Gemma Bel Enguix (IINGEN - UNAM) - Claudia Porto Capetillo (IIMAS - UNAM) ## :books: The Spanish Fake News Corpus Version 1.0 (@ MEXLEF 20) ### :page_facing_up: Corpus Description <p style='text-align: justify;'> The Spanish Fake News Corpus contains a collection of news compiled from several resources on the Web: established newspaper websites, media companies' websites, special websites dedicated to validating fake news, and websites designated by different journalists as sites that regularly publish fake news. The news items were collected from **January to July of 2018** and all of them were written in Spanish. The process of tagging the corpus was performed manually and the method followed is described in the paper. The following aspects were considered: 1) news were tagged as true if there was evidence that they had been published on reliable sites, i.e., established newspaper websites or renowned journalists' websites; 2) news were tagged as fake if there were news items from reliable sites, or from websites specialized in the detection of deceptive content such as VerificadoMX (https://verificado.mx), that contradicted them, or if no other evidence was found about the news besides the source; 3) the correlation between the news was kept by collecting the true-fake news pair of an event; 4) we tried to trace the source of the news. </p> The corpus contains 971 news items divided into 491 real news and 480 fake news. The corpus covers news from 9 different topics: **Science, Sport, Economy, Education, Entertainment, Politics, Health, Security, and Society**.
The corpus was split into train and test sets, using around 70\% of the corpus for training and the rest for testing. We performed a hierarchical distribution of the corpus, i.e., all the categories keep the 70\%-30\% ratio. The corpus is concentrated in the files train.xlsx and development.xlsx. The meaning of the columns is described next: <ul> <li><b>Id</b>: assigns an identifier to each instance.</li> <li><b>Category</b>: indicates the category of the news (true or fake).</li> <li><b>Topic</b>: indicates the topic related to the news.</li> <li><b>Source</b>: indicates the name of the source.</li> <li><b>Headline</b>: contains the headline of the news.</li> <li><b>Text</b>: contains the raw text of the news.</li> <li><b>Link</b>: contains the URL of the source.</li> </ul> ### :pencil: How to cite If you use the corpus, please cite the following articles: 1) Gómez-Adorno, H., Posadas-Durán, J. P., Enguix, G. B., & Capetillo, C. P. (2021). Overview of FakeDeS at IberLEF 2021: Fake News Detection in Spanish Shared Task. Procesamiento del Lenguaje Natural, 67, 223-231. 2) Aragón, M. E., Jarquín, H., Gómez, M. M. Y., Escalante, H. J., Villaseñor-Pineda, L., Gómez-Adorno, H., ... & Posadas-Durán, J. P. (2020, September). Overview of mex-a3t at iberlef 2020: Fake news and aggressiveness analysis in mexican spanish. In Notebook Papers of 2nd SEPLN Workshop on Iberian Languages Evaluation Forum (IberLEF), Malaga, Spain. 3) Posadas-Durán, J. P., Gómez-Adorno, H., Sidorov, G., & Escobar, J. J. M. (2019). Detection of fake news in a new corpus for the Spanish language. Journal of Intelligent & Fuzzy Systems, 36(5), 4869-4876. ### Fake News Detection Task at MEX-A3T >> The Fake News Corpus in Spanish was used for the **Fake News Detection Task** in the **MEX-A3T** competition at the IberLEF 2020 congress. The details of the competition can be viewed on the main page of the [competition](https://sites.google.com/view/mex-a3t/).
### Authors of the corpus Juan Manuel Ramírez Cruz (ESIME Zacatenco - IPN), Silvia Úrsula Palacios Alvarado (ESIME Zacatenco - IPN), Karime Elena Franca Tapia (ESIME Zacatenco - IPN), Juan Pablo Francisco Posadas Durán (ESIME Zacatenco - IPN), Helena Montserrat Gómez Adorno (IIMAS - UNAM), Grigori Sidorov (CIC - IPN) ### Acknowledgments The work was done with partial support of the Red Temática de Tecnologías del Lenguaje, CONACYT project 240844, and SIP-IPN projects 20181849 and 20171813. ## License [CC-BY-4.0](https://choosealicense.com/licenses/cc-by-4.0/).
sayalaruano
null
null
null
false
2
false
sayalaruano/FakeNewsSpanish_Kaggle1
2022-03-22T14:59:40.000Z
null
false
8a03d6240ada811ba3d603f813b91d3be4553764
[]
[ "license:cc-by-nc-sa-4.0" ]
https://huggingface.co/datasets/sayalaruano/FakeNewsSpanish_Kaggle1/resolve/main/README.md
--- license: cc-by-nc-sa-4.0 --- This dataset was obtained from: https://www.kaggle.com/datasets/arseniitretiakov/noticias-falsas-en-espaol
sayalaruano
null
null
null
false
5
false
sayalaruano/FakeNewsSpanish_Kaggle2
2022-03-22T15:02:43.000Z
null
false
74fb40b34737bd14e40f2638ac00938243ec9ee3
[]
[ "license:cc-by-nc-sa-4.0" ]
https://huggingface.co/datasets/sayalaruano/FakeNewsSpanish_Kaggle2/resolve/main/README.md
--- license: cc-by-nc-sa-4.0 --- This dataset was obtained from: https://www.kaggle.com/datasets/zulanac/fake-and-real-news
openclimatefix
null
@InProceedings{ocf:mrms, title = {MRMS Archival Precipitation Rate Radar Dataset}, author={Jacob Bieker }, year={2022} }
This dataset consists of MRMS precipitation radar data for the continental United States, sampled at 1 km x 1 km spatial resolution and 2-minute temporal resolution.
false
82
false
openclimatefix/mrms
2022-06-22T13:39:35.000Z
null
false
421b0fbcfc6b450bea2364de6ace4e965cd98c8b
[]
[]
https://huggingface.co/datasets/openclimatefix/mrms/resolve/main/README.md
--- annotations_creators: - machine-generated language_creators: - machine-generated languages: [] licenses: - mit multilinguality: [] pretty_name: Multi-Radar/Multi-System Precipitation Radar size_categories: - 1M<n<10M source_datasets: - original task_categories: - time-series-forecasting - image-classification - image-segmentation - other task_ids: - univariate-time-series-forecasting - multi-label-image-classification - semantic-segmentation --- # Dataset Card for MRMS ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) ## Dataset Description - **Homepage:** https://mrms.nssl.noaa.gov/ - **Repository:** [Needs More Information] - **Paper:** [Needs More Information] - **Leaderboard:** [Needs More Information] - **Point of Contact:** [Jacob Bieker](mailto:jacob@openclimatefix.org) ### Dataset Summary Multi-Radar/Multi-System Precipitation Rate Radar data for 2016-2022. This data contains precipitation rate values for the continental United States.
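As a rough sketch of what the 2-minute cadence stated above implies for the archive, the per-day and per-year frame counts can be computed directly; these are simple back-of-the-envelope numbers, not official archive figures.

```python
# Back-of-the-envelope frame counts for a 2-minutely radar archive
# (cadence taken from the dataset summary; counts are illustrative).
MINUTES_PER_DAY = 24 * 60       # 1440
CADENCE_MINUTES = 2             # one precipitation-rate frame every 2 minutes

frames_per_day = MINUTES_PER_DAY // CADENCE_MINUTES
frames_per_year = frames_per_day * 365  # ignoring leap days

print(frames_per_day)   # 720
print(frames_per_year)  # 262800
```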
### Supported Tasks and Leaderboards [Needs More Information] ### Languages [Needs More Information] ## Dataset Structure ### Data Instances [Needs More Information] ### Data Fields [Needs More Information] ### Data Splits [Needs More Information] ## Dataset Creation ### Curation Rationale This dataset was constructed to help recreate the original datasets used in the MetNet/MetNet-2 papers as well as the Deep Generative Model of Radar paper. Those datasets were not publicly released, but this dataset should cover the time period used in those papers, and more. ### Source Data #### Initial Data Collection and Normalization [Needs More Information] #### Who are the source language producers? [Needs More Information] ### Annotations #### Annotation process [Needs More Information] #### Who are the annotators? [Needs More Information] ### Personal and Sensitive Information [Needs More Information] ## Considerations for Using the Data ### Social Impact of Dataset [Needs More Information] ### Discussion of Biases [Needs More Information] ### Other Known Limitations [Needs More Information] ## Additional Information ### Dataset Curators [Needs More Information] ### Licensing Information US Government License, no restrictions ### Citation Information @article{ocf:mrms, author = {Jacob Bieker}, title = {MRMS Precipitation Rate Dataset}, year = {2022} }
erikacardenas300
null
null
null
false
1
false
erikacardenas300/Zillow-Text-Listings
2022-03-23T01:47:24.000Z
null
false
4f92928b7f48c7f12925055498f2ed92ac042e06
[]
[]
https://huggingface.co/datasets/erikacardenas300/Zillow-Text-Listings/resolve/main/README.md
Please cite: E. Cardenas et al. "A Comparison of House Price Classification with Structured and Unstructured Text Data." Published in AAAI FLAIRS-35, 2022.
jullarson
null
null
null
false
1
false
jullarson/sdd
2022-03-22T20:40:54.000Z
null
false
a9cee35c7531ae57045e920c657dfced4bbc93e6
[]
[ "license:apache-2.0" ]
https://huggingface.co/datasets/jullarson/sdd/resolve/main/README.md
--- license: apache-2.0 ---
nreimers
null
null
null
false
38
false
nreimers/trec-covid
2022-03-23T12:55:44.000Z
null
false
549d7035f8df8bcd19d41ea355a4a775273b08e5
[]
[]
https://huggingface.co/datasets/nreimers/trec-covid/resolve/main/README.md
This is the corpus file from the [BEIR benchmark](https://github.com/beir-cellar/beir) for the [TREC-COVID 19 dataset](https://ir.nist.gov/trec-covid/).
IsaacRodgz
null
null
null
false
1
false
IsaacRodgz/Fake-news-latam-omdena
2022-03-23T00:20:36.000Z
null
false
e3e19a9a95b3464d2aa336ccf473b4d1cc7de76b
[]
[]
https://huggingface.co/datasets/IsaacRodgz/Fake-news-latam-omdena/resolve/main/README.md
# Dataset Card for Fake-news-latam-omdena ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [latam-chapters-news-detector](https://github.com/OmdenaAI/latam-chapters-news-detector) - **Repository:** [latam-chapters-news-detector](https://github.com/OmdenaAI/latam-chapters-news-detector) - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary Since the Cambridge Analytica scandal, a Pandora's box has been opened around the world, bringing to light campaigns that even involve our current Latin American leaders manipulating public opinion through social media to win elections. There is a common and simple pattern that includes platforms such as Facebook and fake news, where candidates are able to build a nefarious narrative for their own benefit. This is a growing concern for our democracies, as many of these practices have been widely spread across the region and more people are gaining access to the internet.
Thus, it is necessary to be able to warn the population, and for that we must be able to quickly spot these plots on the net before the damage is irreversible. Therefore, an initial effort was made to collect this dataset, which gathers news from different news sources in Mexico, Colombia and El Salvador, with the objective of training a classification model and deploying it as part of the Politics Fake News Detector in LATAM (Latin America) project [https://github.com/OmdenaAI/latam-chapters-news-detector]. Website articles and tweets were considered. ### Supported Tasks and Leaderboards Binary fake news classification, with classes "True" and "Fake" ### Languages Spanish only ## Dataset Structure ### Data Instances * Train: 2782 * Test: 310 ### Data Fields [More Information Needed] ### Data Splits Train and test. Each split was generated with a stratified procedure in order to keep the same proportion of fake news in both train and test. Around 1/3 of the observations in each split have the label 'Fake', while 2/3 have the label 'True'. ## Dataset Creation ### Curation Rationale For a more specific flow of how the labeling was done, follow this link: https://github.com/OmdenaAI/latam-chapters-news-detector/blob/main/Fake-news_Flowchart.pdf ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset Once the capacity to detect irregularities in news activity on the internet is developed, we might be able to counter the disinformation with the help of additional research.
As we reduce the time spent looking for those occurrences, more time can be used to validate the results and uncover the truth; enabling researchers, journalists and organizations to help people make an informed decision about whether the public narrative is true or not, so that they can identify on their own if someone is trying to manipulate them for a certain political benefit. If this matter isn't tackled with enough urgency, we might see the rise of a new dark era in Latin American politics, where many unscrupulous parties and people will manage to gain power and control the lives of many people. ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions Thanks to the Omdena local chapter members from Mexico, Colombia and El Salvador for their amazing effort to collect and curate this dataset.
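The stratified split described in the Data Splits section can be sanity-checked by comparing label proportions across splits. This is a sketch with toy label lists chosen to match the reported split sizes (2782 train / 310 test) and the stated ~1/3 "Fake" vs ~2/3 "True" ratio; they are not the actual dataset files.

```python
from collections import Counter

def label_proportions(labels):
    """Return the fraction of each label in a split."""
    counts = Counter(labels)
    total = sum(counts.values())
    return {label: n / total for label, n in counts.items()}

# Toy labels matching the reported sizes and ratio (illustrative only).
train_labels = ["Fake"] * 927 + ["True"] * 1855   # 2782 rows
test_labels = ["Fake"] * 103 + ["True"] * 207     # 310 rows

for name, labels in [("train", train_labels), ("test", test_labels)]:
    p = label_proportions(labels)
    print(name, round(p["Fake"], 2), round(p["True"], 2))  # 0.33 0.67 for both
```

If the split were not stratified, the two "Fake" fractions could drift apart; with stratification they should agree up to rounding.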