Schema (column name, type, and observed min/max length or value):

| column | type | min | max |
|:--|:--|:--|:--|
| id | string (length) | 2 | 115 |
| lastModified | string (length) | 24 | 24 |
| tags | list | | |
| author | string (length) | 2 | 42 |
| description | string (length) | 0 | 6.67k |
| citation | string (length) | 0 | 10.7k |
| likes | int64 | 0 | 3.66k |
| downloads | int64 | 0 | 8.89M |
| created | timestamp[us] | | |
| card | string (length) | 11 | 977k |
| card_len | int64 | 11 | 977k |
| embeddings | list | | |
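Each row below carries an `embeddings` field: a list of vectors for the card text, truncated in this dump. A minimal sketch, in plain Python with hypothetical row dicts mirroring the schema above, of ranking cards against a query embedding by cosine similarity:

```python
import math

def cosine_similarity(a, b):
    # Cosine similarity between two embedding vectors stored as plain lists.
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))

def most_similar(query, rows, top_k=3):
    # Each row's "embeddings" field is a list of vectors (as shown in the dump),
    # so we compare the query against the first vector of each card.
    scored = [(cosine_similarity(query, row["embeddings"][0]), row["id"]) for row in rows]
    scored.sort(reverse=True)
    return [card_id for _, card_id in scored[:top_k]]
```

The identical truncated vectors visible on several rows below belong to "Entry not found" cards, which all embed to the same point.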
Mutugi/Solana_1000
2023-10-29T01:00:28.000Z
[ "region:us" ]
Mutugi
null
null
0
11
2023-10-29T00:59:47
Entry not found
15
[ [ -0.02142333984375, -0.01495361328125, 0.05718994140625, 0.0288238525390625, -0.035064697265625, 0.046539306640625, 0.052520751953125, 0.005062103271484375, 0.0513916015625, 0.016998291015625, -0.052093505859375, -0.014984130859375, -0.060394287109375, 0.0379...
w95/databricks-dolly-15k-az
2023-10-29T07:51:38.000Z
[ "task_categories:question-answering", "task_categories:summarization", "size_categories:1K<n<10K", "language:az", "license:cc-by-sa-3.0", "arxiv:2203.02155", "region:us" ]
w95
null
null
0
11
2023-10-29T07:43:06
---
license: cc-by-sa-3.0
task_categories:
- question-answering
- summarization
language:
- az
size_categories:
- 1K<n<10K
---

This dataset is a machine-translated version of [databricks-dolly-15k.jsonl](https://huggingface.co/datasets/databricks/databricks-dolly-15k) into Azerbaijani. Dataset size is 8k.

-----

# Summary

`databricks-dolly-15k` is an open-source dataset of instruction-following records generated by thousands of Databricks employees in several of the behavioral categories outlined in the [InstructGPT](https://arxiv.org/abs/2203.02155) paper, including brainstorming, classification, closed QA, generation, information extraction, open QA, and summarization. This dataset can be used for any purpose, whether academic or commercial, under the terms of the [Creative Commons Attribution-ShareAlike 3.0 Unported License](https://creativecommons.org/licenses/by-sa/3.0/legalcode).

Supported Tasks:
- Training LLMs
- Synthetic Data Generation
- Data Augmentation

Languages: Azerbaijani (machine-translated from English)

Version: 1.0
1,023
[ [ 0.00669097900390625, -0.054931640625, -0.0007410049438476562, 0.047760009765625, -0.0240020751953125, 0.00469970703125, -0.00653839111328125, -0.00494384765625, 0.0004394054412841797, 0.05560302734375, -0.06524658203125, -0.060546875, -0.035186767578125, 0.0...
detectors/isun-ood
2023-10-30T18:25:18.000Z
[ "task_categories:image-classification", "size_categories:1K<n<10K", "license:unknown", "arxiv:1507.01422", "arxiv:1706.02690", "region:us" ]
detectors
null
null
0
11
2023-10-30T16:55:14
---
license: unknown
size_categories: 1K<n<10K
task_categories:
- image-classification
paperswithcode_id: isun
pretty_name: iSUN
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
dataset_info:
  features:
  - name: image
    dtype: image
  splits:
  - name: train
    num_bytes: 24514257.375
    num_examples: 8925
  download_size: 0
  dataset_size: 24514257.375
---

# Dataset Card for iSUN for OOD Detection

## Dataset Details

### Dataset Description

- **Original Dataset Authors:** Junting Pan, Xavier Giró-i-Nieto
- **OOD Split Authors:** Shiyu Liang, Yixuan Li, R. Srikant
- **Shared by:** Eduardo Dadalto
- **License:** unknown

### Dataset Sources

- **Original Dataset Paper:** http://arxiv.org/abs/1507.01422v1
- **First OOD Application Paper:** http://arxiv.org/abs/1706.02690v5

### Direct Use

This dataset is intended to be used as an out-of-distribution dataset for image classification benchmarks.

### Out-of-Scope Use

This dataset is not annotated.

### Curation Rationale

The goal of curating and sharing this dataset on the HuggingFace Hub is to accelerate research and promote reproducibility in generalized Out-of-Distribution (OOD) detection. Check out the Python library [detectors](https://github.com/edadaltocg/detectors) if you are interested in OOD detection.
### Personal and Sensitive Information

Please check the original paper for details on the dataset.

### Bias, Risks, and Limitations

Please check the original paper for details on the dataset.

## Citation

**BibTeX:**

```bibtex
@software{detectors2023,
  author = {Eduardo Dadalto},
  title = {Detectors: a Python Library for Generalized Out-Of-Distribution Detection},
  url = {https://github.com/edadaltocg/detectors},
  doi = {https://doi.org/10.5281/zenodo.7883596},
  month = {5},
  year = {2023}
}

@article{1706.02690v5,
  author = {Shiyu Liang and Yixuan Li and R. Srikant},
  title = {Enhancing The Reliability of Out-of-distribution Image Detection in Neural Networks},
  year = {2017},
  month = {6},
  note = {ICLR 2018},
  archiveprefix = {arXiv},
  url = {http://arxiv.org/abs/1706.02690v5}
}

@article{1507.01422v1,
  author = {Junting Pan and Xavier Giró-i-Nieto},
  title = {End-to-end Convolutional Network for Saliency Prediction},
  year = {2015},
  month = {7},
  note = {Winner of the saliency prediction challenge in the Large-scale Scene Understanding (LSUN) Challenge in the associated workshop of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 2015},
  archiveprefix = {arXiv},
  url = {http://arxiv.org/abs/1507.01422v1}
}
```

## Dataset Card Authors

Eduardo Dadalto

## Dataset Card Contact

https://huggingface.co/edadaltocg
3,771
[ [ -0.03759765625, -0.030029296875, 0.0269012451171875, 0.006427764892578125, -0.03814697265625, -0.038726806640625, -0.0027484893798828125, -0.053131103515625, 0.0103302001953125, 0.023895263671875, -0.023345947265625, -0.044464111328125, -0.041900634765625, 0...
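The iSUN card above positions the dataset as an out-of-distribution test set for image classifiers. A minimal sketch of the maximum-softmax-probability baseline score from the OOD-detection literature the card cites (plain Python, not code from the `detectors` repository; the 0.5 threshold is illustrative):

```python
import math

def softmax(logits):
    # Numerically stable softmax over a list of raw classifier scores.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def msp_score(logits):
    # Maximum softmax probability: high for confident in-distribution inputs,
    # typically lower for out-of-distribution inputs.
    return max(softmax(logits))

def is_ood(logits, threshold=0.5):
    # Flag an input as OOD when the top softmax probability falls below the threshold.
    return msp_score(logits) < threshold
```

Since iSUN is unannotated, it serves only as the OOD side of such a benchmark; the in-distribution set comes from the classifier's training data.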
parksimon0808/prm800k-llama-v3
2023-11-02T03:00:33.000Z
[ "region:us" ]
parksimon0808
null
null
0
11
2023-10-30T16:56:06
---
dataset_info:
  features:
  - name: texts
    dtype: string
  - name: input_ids
    sequence: int32
  - name: labels
    sequence: int64
  splits:
  - name: train
    num_bytes: 2462945818
    num_examples: 657768
  - name: test
    num_bytes: 78069784
    num_examples: 20419
  download_size: 242891483
  dataset_size: 2541015602
---

# Dataset Card for "prm800k-llama-v3"

[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
509
[ [ -0.0303192138671875, -0.0024871826171875, 0.024871826171875, 0.0313720703125, -0.0435791015625, -0.002655029296875, 0.04168701171875, -0.01404571533203125, 0.0660400390625, 0.05194091796875, -0.054779052734375, -0.0526123046875, -0.0472412109375, 0.001276969...
namngo/vimmrc2-mtc
2023-10-31T02:08:53.000Z
[ "region:us" ]
namngo
null
null
0
11
2023-10-31T02:06:04
Entry not found
15
[ [ -0.02142333984375, -0.01495361328125, 0.05718994140625, 0.0288238525390625, -0.035064697265625, 0.046539306640625, 0.052520751953125, 0.005062103271484375, 0.0513916015625, 0.016998291015625, -0.052093505859375, -0.014984130859375, -0.060394287109375, 0.0379...
marziye-A/dataset-farma-test
2023-11-01T06:54:44.000Z
[ "region:us" ]
marziye-A
null
null
0
11
2023-11-01T06:14:20
---
dataset_info:
  features:
  - name: audio
    dtype: audio
  - name: name
    dtype: string
  splits:
  - name: train
    num_bytes: 74288845.504
    num_examples: 2006
  download_size: 72536013
  dataset_size: 74288845.504
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---

# Dataset Card for "dataset-farma-test"

[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
490
[ [ -0.0439453125, -0.032318115234375, 0.0013599395751953125, 0.0117340087890625, -0.0024089813232421875, -0.004852294921875, 0.0288543701171875, -0.017913818359375, 0.0611572265625, 0.0233612060546875, -0.059326171875, -0.041473388671875, -0.032806396484375, -0...
nekofura/zero-one_alpaca
2023-11-01T19:32:37.000Z
[ "region:us" ]
nekofura
null
null
0
11
2023-11-01T19:07:59
Entry not found
15
[ [ -0.02142333984375, -0.01495361328125, 0.05718994140625, 0.0288238525390625, -0.035064697265625, 0.046539306640625, 0.052520751953125, 0.005062103271484375, 0.0513916015625, 0.016998291015625, -0.052093505859375, -0.014984130859375, -0.060394287109375, 0.0379...
ambushburn/human-burn
2023-11-02T11:50:05.000Z
[ "region:us" ]
ambushburn
null
null
0
11
2023-11-01T20:03:24
Entry not found
15
[ [ -0.02142333984375, -0.01495361328125, 0.05718994140625, 0.0288238525390625, -0.035064697265625, 0.046539306640625, 0.052520751953125, 0.005062103271484375, 0.0513916015625, 0.016998291015625, -0.052093505859375, -0.014984130859375, -0.060394287109375, 0.0379...
andrewbass/test-215
2023-11-02T15:25:33.000Z
[ "region:us" ]
andrewbass
null
null
0
11
2023-11-01T21:07:11
Entry not found
15
[ [ -0.02142333984375, -0.01495361328125, 0.05718994140625, 0.0288238525390625, -0.035064697265625, 0.046539306640625, 0.052520751953125, 0.005062103271484375, 0.0513916015625, 0.016998291015625, -0.052093505859375, -0.014984130859375, -0.060394287109375, 0.0379...
lhoestq/conll2003
2021-12-21T11:23:57.000Z
[ "region:us" ]
lhoestq
null
null
0
10
2022-03-02T23:29:22
Entry not found
15
[ [ -0.02142333984375, -0.01495361328125, 0.05718994140625, 0.0288238525390625, -0.035064697265625, 0.046539306640625, 0.052520751953125, 0.005062103271484375, 0.0513916015625, 0.016998291015625, -0.052093505859375, -0.014984130859375, -0.060394287109375, 0.0379...
midas/kptimes
2022-02-06T06:21:58.000Z
[ "region:us" ]
midas
\
@inproceedings{gallina2019kptimes,
  title={KPTimes: A Large-Scale Dataset for Keyphrase Generation on News Documents},
  author={Gallina, Ygor and Boudin, Florian and Daille, B{\'e}atrice},
  booktitle={Proceedings of the 12th International Conference on Natural Language Generation},
  pages={130--135},
  year={2019}
}
0
10
2022-03-02T23:29:22
A dataset for benchmarking keyphrase extraction and generation techniques from news. For more details about the dataset, please refer to the original paper: [https://aclanthology.org/W19-8617.pdf](https://aclanthology.org/W19-8617.pdf)

Original source of the data: [https://github.com/ygorg/KPTimes](https://github.com/ygorg/KPTimes)

## Dataset Summary

<br>
<p align="center">
  <img src="https://huggingface.co/datasets/midas/kptimes/resolve/main/kptimes-details.png" alt="KPTimes dataset summary" width="90%"/>
  <br>
</p>
<br>

KPTimes is a large-scale dataset comprising 279,923 news articles from NY Times and 10K from JPTimes. It is one of the few datasets whose keyphrase annotations were curated by editors, who can be considered experts. The authors' motivation for producing this dataset was to have a large dataset for training neural models for keyphrase generation in a domain other than the scientific one, and to understand the differences between keyphrases annotated by experts and by non-experts. The authors show that the editors tend to assign generic keyphrases that are not present in the actual news article's text, with 55% of them being abstractive keyphrases. The keyphrases in the news domain, as presented in this work, were also on average shorter (1.4 words) than those in the scientific datasets (2.4 words).

The dataset is randomly divided into train (92.8%), validation (3.6%) and test (3.6%) splits. To enable models trained on this dataset to generalize well, the authors did not want the entire dataset to come from a single source (NY Times), and therefore added 10K more articles from the JPTimes dataset. The authors collected freely readable article URLs from NY Times spanning 2006 to 2017 and obtained the corresponding HTML pages from the Internet Archive. They cleaned the HTML tags and extracted the title and the main content of the articles using heuristics.
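The card's `doc_bio_tags` field marks each token as B (beginning of a keyphrase), I (inside a keyphrase), or O (outside). A minimal sketch, assuming a hypothetical helper not taken from the midas repository, of recovering the present keyphrases from these tags:

```python
def extract_keyphrases(tokens, bio_tags):
    # Collect spans tagged B (begin) followed by any number of I (inside) tags.
    phrases, current = [], []
    for token, tag in zip(tokens, bio_tags):
        if tag == "B":
            if current:
                phrases.append(" ".join(current))
            current = [token]
        elif tag == "I" and current:
            current.append(token)
        else:  # "O", or an I without a preceding B, closes any open span
            if current:
                phrases.append(" ".join(current))
            current = []
    if current:
        phrases.append(" ".join(current))
    return phrases
```

Applied to a row's `document` and `doc_bio_tags` fields, this should reproduce the row's `extractive_keyphrases` list (up to casing and stemming choices made by the dataset authors).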
The gold keyphrases were obtained from the metadata fields *news_keywords* and *keywords*. The documents in the dataset are full-length news articles, which also makes it a suitable dataset for developing models for identifying keyphrases from long documents.

<br>
<p align="center">
  <img src="https://huggingface.co/datasets/midas/kptimes/resolve/main/KPTimesExample.png" alt="KPTimes sample" width="90%"/>
  <br>
</p>
<br>

## Dataset Structure

## Dataset Statistics

Table 1: Statistics on the length of the abstractive keyphrases for the Train, Test, and Validation splits of the KPTimes dataset.

| | Train | Test | Validation |
|:------------------:|:-------:|:-------:|:----------:|
| Single word | 15.6% | 29.59% | 15.52% |
| Two words | 36.7% | 36.88% | 12.38% |
| Three words | 29.5% | 20.86% | 29.29% |
| Four words | 12.5% | 8.88% | 0% |
| Five words | 3.4% | 2.33% | 3.50% |
| Six words | 1.4% | 0.93% | 1.38% |
| Seven words | 0.4% | 0.27% | 0.37% |
| Eight words | 0.24% | 0.13% | 0.21% |
| Nine words | 0.14% | 0.013% | 0.10% |
| Ten words | 0.02% | 0.0007% | 0.03% |
| Eleven words | 0.01% | 0.01% | 0.003% |
| Twelve words | 0.008% | 0.011% | 0.007% |
| Thirteen words | 0.01% | 0.02% | 0.02% |
| Fourteen words | 0.001% | 0% | 0% |
| Fifteen words | 0.001% | 0.004% | 0.003% |
| Sixteen words | 0.0004% | 0% | 0% |
| Seventeen words | 0.0005% | 0% | 0% |
| Eighteen words | 0.0004% | 0% | 0% |
| Nineteen words | 0.0001% | 0% | 0% |
| Twenty words | 0.0001% | 0% | 0% |
| Twenty-three words | 0.0001% | 0% | 0% |

Table 2: Statistics on the length of the extractive keyphrases for the Train, Test, and Validation splits of the KPTimes dataset.
| | Train | Test | Validation |
|:--------------:|:-------:|:------:|:----------:|
| Single word | 54.2% | 60.0% | 54.38% |
| Two words | 33.9% | 32.4% | 33.73% |
| Three words | 8.8% | 5.5% | 8.70% |
| Four words | 1.9% | 1.04% | 1.97% |
| Five words | 0.5% | 0.25% | 0.53% |
| Six words | 0.4% | 0.16% | 0.44% |
| Seven words | 0.12% | 0.06% | 0.15% |
| Eight words | 0.05% | 0.03% | 0.08% |
| Nine words | 0.009% | 0% | 0% |
| Ten words | 0.0007% | 0.001% | 0% |
| Eleven words | 0.0002% | 0% | 0% |
| Twelve words | 0.0002% | 0% | 0% |
| Thirteen words | 0.0002% | 0% | 0% |

Table 3: General statistics of the KPTimes dataset.

| Type of Analysis | Train | Test | Validation |
|:------------------------------------------------:|:---------------------:|:---------------------:|:---------------------:|
| Annotator Type | Professional Indexers | Professional Indexers | Professional Indexers |
| Document Type | News Articles | News Articles | News Articles |
| No. of Documents | 259,923 | 20,000 | 10,000 |
| Avg. Document length (words) | 783.32 | 643.2 | 784.65 |
| Max Document length (words) | 7278 | 5503 | 5627 |
| Max no. of abstractive keyphrases in a document | 10 | 10 | 10 |
| Min no. of abstractive keyphrases in a document | 0 | 0 | 0 |
| Avg. no. of abstractive keyphrases per document | 2.87 | 2.30 | 2.89 |
| Max no. of extractive keyphrases in a document | 10 | 10 | 9 |
| Min no. of extractive keyphrases in a document | 0 | 0 | 0 |
| Avg. no. of extractive keyphrases per document | 2.15 | 2.72 | 2.13 |

### Data Fields

- **id**: unique identifier of the document.
- **document**: Whitespace-separated list of words in the document.
- **doc_bio_tags**: BIO tags for each word in the document. B stands for the beginning of a keyphrase, I stands for inside the keyphrase, and O stands for outside, marking words that are not part of any keyphrase.
- **extractive_keyphrases**: List of all the present keyphrases.
- **abstractive_keyphrases**: List of all the absent keyphrases.
- **other metadata**: Additional information present in the original dataset.
  - **id**: unique identifier for the document
  - **date**: publishing date (YYYY/MM/DD)
  - **categories**: categories of the article (1 or 2 categories)
  - **title**: title of the document
  - **abstract**: content of the article
  - **keyword**: list of keywords

### Data Splits

| Split | #datapoints |
|--|--|
| Train | 259923 |
| Test | 20000 |
| Validation | 10000 |

## Usage

### Full Dataset

```python
from datasets import load_dataset

# get entire dataset
dataset = load_dataset("midas/kptimes", "raw")

# sample from the train split
print("Sample from training data split")
train_sample = dataset["train"][0]
print("Fields in the sample: ", [key for key in train_sample.keys()])
print("Tokenized Document: ", train_sample["document"])
print("Document BIO Tags: ", train_sample["doc_bio_tags"])
print("Extractive/present Keyphrases: ", train_sample["extractive_keyphrases"])
print("Abstractive/absent Keyphrases: ", train_sample["abstractive_keyphrases"])
print("Other Metadata: ", train_sample["other_metadata"])
print("\n-----------\n")

# sample from the validation split
print("Sample from validation data split")
validation_sample = dataset["validation"][0]
print("Fields in the sample: ", [key for key in validation_sample.keys()])
print("Tokenized Document: ", validation_sample["document"])
print("Document BIO Tags: ", validation_sample["doc_bio_tags"])
print("Extractive/present Keyphrases: ", validation_sample["extractive_keyphrases"])
print("Abstractive/absent Keyphrases: ", validation_sample["abstractive_keyphrases"])
print("Other Metadata: ", validation_sample["other_metadata"])
print("\n-----------\n")

# sample from the test split
print("Sample from test data split")
test_sample = dataset["test"][0]
print("Fields in the sample: ", [key for key in test_sample.keys()])
print("Tokenized Document: ", test_sample["document"])
print("Document BIO Tags: ", test_sample["doc_bio_tags"]) print("Extractive/present Keyphrases: ", test_sample["extractive_keyphrases"]) print("Abstractive/absent Keyphrases: ", test_sample["abstractive_keyphrases"]) print("Other Metadata: ", test_sample["other_metadata"]) print("\n-----------\n") ``` **Output** ```bash Sample from training data split Fields in the sample: ['id', 'document', 'doc_bio_tags', 'extractive_keyphrases', 'abstractive_keyphrases', 'other_metadata'] Tokenized Document: ['For', 'Donald', 'Trump’s', 'Big', 'Speech,', 'an', 'Added', 'Pressure:', 'No', 'Echoes', 'CLEVELAND', '—', 'Until', 'Monday', 'night,', 'Donald', 'J.', 'Trump’s', 'biggest', 'concern', 'about', 'his', 'convention', 'speech', 'was', 'how', 'much', 'to', 'reveal', 'about', 'himself', 'and', 'his', 'family', 'in', 'an', 'address', 'that', 'is', 'often', 'the', 'most', 'personal', 'one', 'a', 'presidential', 'candidate', 'delivers.', 'But', 'the', 'political', 'firestorm', 'over', 'his', 'wife’s', 'speech', ',', 'which', 'borrowed', 'passages', 'from', 'Michelle', 'Obama’s', 'convention', 'remarks', 'in', '2008,', 'raised', 'the', 'stakes', 'exponentially.', 'Mr.', 'Trump’s', 'speech', 'on', 'Thursday', 'night', 'cannot', 'merely', 'be', 'his', 'best', 'ever.', 'It', 'also', 'has', 'to', 'be', 'bulletproof.', 'By', 'Tuesday', 'morning,', 'word', 'had', 'spread', 'throughout', 'his', 'campaign', 'that', 'any', 'language', 'in', 'Mr.', 'Trump’s', 'address', 'even', 'loosely', 'inspired', 'by', 'speeches,', 'essays,', 'books', 'or', 'Twitter', 'posts', 'had', 'to', 'be', 'either', 'rewritten', 'or', 'attributed.', 'Mr.', 'Trump’s', 'chief', 'speechwriter,', 'Stephen', 'Miller,', 'reassured', 'colleagues', 'that', 'the', 'acceptance', 'speech', 'was', 'wholly', 'original,', 'according', 'to', 'two', 'staff', 'members', 'who', 'spoke', 'with', 'him', 'and', 'described', 'those', 'conversations', 'on', 'the', 'condition', 'of', 'anonymity.', 'Mr.', 'Miller', 'also', 'told', 
'campaign', 'aides', 'that', 'he', 'had', 'looked', 'closely', 'at', 'passages', 'that', 'Mr.', 'Trump', 'had', 'contributed', '—', 'handwritten', 'on', 'unlined', 'white', 'pages', '—', 'and', 'was', 'confident', 'they', 'contained', 'no', 'problems.', '(Mr.', 'Miller', 'declined', 'an', 'interview', 'request.)', 'Even', 'so,', 'one', 'of', 'the', 'staff', 'members', 'downloaded', 'plagiarism-detection', 'software', 'and', 'ran', 'a', 'draft', 'of', 'the', 'speech', 'through', 'the', 'program.', 'No', 'red', 'flags', 'came', 'up.', 'The', 'intense', 'scrutiny', 'of', 'Mr.', 'Trump’s', 'words', 'added', 'new', 'pressure', 'to', 'a', 'speechwriting', 'process', 'that', 'has', 'been', 'one', 'of', 'the', 'most', 'unpredictable', 'and', 'free-form', 'in', 'modern', 'presidential', 'campaigns.', 'A', 'month', 'ago,', 'Mr.', 'Trump', 'began', 'giving', 'dictation', 'on', 'themes', 'for', 'the', 'speech,', 'and', 'he', 'tossed', 'ideas', 'and', 'phrases', 'to', 'Mr.', 'Miller', 'or', 'other', 'advisers', 'on', 'a', 'daily', 'basis.', 'On', 'printed', 'copies', 'of', 'each', 'draft,', 'he', 'circled', 'passages', 'he', 'liked,', 'crossed', 'out', 'or', 'put', 'question', 'marks', 'beside', 'lines', 'that', 'he', 'did', 'not', 'favor', 'and', 'frequently', 'suggested', 'new', 'words', 'or', 'phrases.', 'Image', 'Stephen', 'Miller,', 'left,', 'Mr.', 'Trump’s', 'chief', 'speechwriter,', 'and', 'Paul', 'Manafort,', 'the', 'campaign', 'chairman,', 'before', 'an', 'event', 'for', 'the', 'candidate', 'at', 'the', 'Trump', 'SoHo', 'hotel', 'in', 'New', 'York', 'last', 'month.', 'Credit', 'Damon', 'Winter/The', 'New', 'York', 'Times', '“I’ve', 'been', 'amending', 'the', 'drafts', 'big-league,”', 'Mr.', 'Trump', 'said', 'in', 'an', 'interview', 'in', 'his', 'Manhattan', 'office', 'before', 'the', 'convention.', '“I', 'get', 'ideas', 'from', 'a', 'lot', 'of', 'different', 'places,', 'a', 'lot', 'of', 'smart', 'people,', 'but', 'mostly', 'I', 'like', 'language', 'that', 'sounds', 
'like', 'me.”', 'Yet', 'in', 'the', 'aftermath', 'of', 'Melania', 'Trump’s', 'speech,', 'campaign', 'advisers', 'have', 'fretted', 'that', 'they', 'do', 'not', 'know', 'for', 'sure', 'where', 'Mr.', 'Trump', 'gets', 'his', 'ideas', 'and', 'language', '—', 'whether', 'they', 'are', 'his', 'own,', 'in', 'other', 'words,', 'or', 'are', 'picked', 'up', 'from', 'Twitter,', 'television,', 'or,', 'say,', 'a', 'best', 'seller', 'by', 'Bill', 'O’Reilly', 'of', 'Fox', 'News,', 'a', 'commentator', 'whom', 'Mr.', 'Trump', 'likes.', 'Borrowing', 'or', 'adapting', 'may', 'not', 'always', 'be', 'tantamount', 'to', 'plagiarism,', 'but', 'several', 'Trump', 'advisers,', 'who', 'also', 'insisted', 'on', 'anonymity,', 'said', 'that', 'after', 'the', 'furor', 'over', 'Ms.', 'Trump’s', 'remarks,', 'the', 'campaign', 'cannot', 'allow', 'a', 'similar', 'blowup.', 'Ed', 'Rollins,', 'a', 'Republican', 'strategist', 'who', 'is', 'advising', 'a', '“super', 'PAC”', 'supporting', 'Mr.', 'Trump,', 'said', 'that', 'the', 'candidate', 'could', 'not', 'afford', 'any', 'mistakes.', '“His', 'speech', 'is', 'the', 'whole', 'game,”', 'Mr.', 'Rollins', 'said.', '“Viewers', 'have', 'to', 'watch', 'it', 'and', 'say,', '‘There', 'is', 'the', 'next', 'president', 'of', 'the', 'United', 'States.’”', 'In', 'the', 'interview,', 'Mr.', 'Trump', 'said', 'his', 'speech', 'would', 'center', 'on', 'his', 'vision', 'of', 'a', 'strong', 'and', 'secure', 'America', 'that', '“once', 'existed', 'and', 'no', 'longer', 'does,', 'but', 'can', 'again', 'under', 'a', 'Trump', 'administration.”', 'Latest', 'Election', 'Polls', '2016', 'Get', 'the', 'latest', 'national', 'and', 'state', 'polls', 'on', 'the', 'presidential', 'election', 'between', 'Hillary', 'Clinton', 'and', 'Donald', 'J.', 'Trump.', 'His', 'greatest', 'challenge,', 'he', 'said,', 'was', '“putting', 'myself', 'in', 'the', 'speech”', '—', 'discussing', 'his', 'upbringing', 'and', 'early', 'experiences', 'and', 'relating', 'them', 'to', 'the', 'hopes', 'and', 
'aspirations', 'of', 'other', 'Americans.', '“I', 'was', 'never', 'comfortable', 'getting', 'personal', 'about', 'my', 'family', 'because', 'I', 'thought', 'it', 'was', 'special', 'territory,”', 'Mr.', 'Trump', 'said,', 'glancing', 'at', 'a', 'picture', 'of', 'his', 'father', 'on', 'his', 'desk.', '“It', 'can', 'feel', 'exploitative', 'to', 'use', 'family', 'stories', 'to', 'win', 'votes.', 'And', 'I', 'had', 'a', 'very', 'happy', 'and', 'comfortable', 'life', 'growing', 'up.', 'I', 'had', 'a', 'great', 'relationship', 'with', 'my', 'father.', 'But', 'my', 'focus', 'needs', 'to', 'be', 'on', 'all', 'the', 'Americans', 'who', 'are', 'struggling.”', 'He', 'said', 'he', 'was', 'unsure', 'if', 'he', 'would', 'discuss', 'his', 'older', 'brother', 'Fred,', 'who', 'died', 'as', 'an', 'alcoholic', 'in', '1981', 'at', '43', '—', 'and', 'whom', 'he', 'has', 'described', 'as', 'an', 'example', 'of', 'how', 'destructive', 'choices', 'can', 'damage', 'lives', 'that', 'seem', 'golden.', '“Without', 'my', 'brother', 'Fred', 'I', 'might', 'not', 'be', 'here,”', 'Mr.', 'Trump', 'said.', '“He', 'was', 'really', 'smart,', 'great-looking.', 'I', 'don’t', 'drink', 'or', 'smoke', 'because', 'of', 'what', 'happened', 'to', 'him.', 'I', 'focused', 'on', 'building', 'my', 'business', 'and', 'making', 'good', 'choices.', 'I', 'may', 'talk', 'about', 'that,', 'but', 'I', 'don’t', 'know', 'if', 'I', 'should.”', 'Acceptance', 'speeches', 'seldom', 'seem', 'complete', 'without', 'anecdotes', 'about', 'personal', 'trials', 'and', 'triumphs:', 'Mitt', 'Romney,', 'trying', 'to', 'persuade', 'voters', 'to', 'see', 'him', 'as', 'more', 'than', 'a', 'rich', 'businessman,', 'devoted', 'about', 'a', 'fourth', 'of', 'his', '2012', 'address', 'to', 'his', 'parents’', 'unconditional', 'love,', 'his', 'Mormon', 'faith', 'and', 'reminiscences', 'about', 'watching', 'the', 'moon', 'landing.', 'In', '2008', ',', 'Barack', 'Obama', 'described', 'how', 'his', 'grandfather', 'benefited', 'from', 'the', 'G.I.', 
'Bill', 'and', 'how', 'his', 'mother', 'and', 'grandmother', 'taught', 'him', 'the', 'value', 'of', 'hard', 'work.', 'And', 'Bill', 'Clinton’s', '1992', 'speech', 'vividly', 'recalled', 'the', 'life', 'lessons', 'he', 'learned', 'from', 'his', 'mother', 'about', 'fighting', 'and', 'working', 'hard,', 'from', 'his', 'grandfather', 'about', 'racial', 'equality', '—', 'and', 'from', 'his', 'wife,', 'Hillary,', 'who,', 'Mr.', 'Clinton', 'said,', 'taught', 'him', 'that', 'every', 'child', 'could', 'learn.', 'Mr.', 'Clinton', 'finished', 'his', 'speech', 'with', 'a', 'now-famous', 'line', 'tying', 'his', 'Arkansas', 'hometown', 'to', 'the', 'American', 'dream.', '“I', 'end', 'tonight', 'where', 'it', 'all', 'began', 'for', 'me,”', 'he', 'said.', '“I', 'still', 'believe', 'in', 'a', 'place', 'called', 'Hope.”', 'James', 'Carville,', 'a', 'senior', 'strategist', 'for', 'Mr.', 'Clinton’s', '1992', 'campaign,', 'said', 'that', 'if', 'Mr.', 'Trump', 'hoped', 'to', 'change', 'the', 'minds', 'of', 'those', 'who', 'see', 'him', 'as', 'divisive', 'or', 'bigoted,', 'he', 'would', 'need', 'to', 'open', 'himself', 'up', 'to', 'voters', 'in', 'meaningfully', 'personal', 'ways', 'in', 'his', 'speech.', '“If', 'he’s', 'really', 'different', 'than', 'the', 'way', 'he', 'seems', 'in', 'television', 'interviews', 'or', 'at', 'his', 'rallies,', 'Thursday’s', 'speech', 'will', 'be', 'his', 'single', 'greatest', 'opportunity', 'to', 'show', 'voters', 'who', 'he', 'really', 'is,”', 'Mr.', 'Carville', 'said.', 'Paul', 'Manafort,', 'the', 'Trump', 'campaign', 'chairman,', 'said', 'that', 'Thursday’s', 'speech', 'would', 'be', '“very', 'much', 'a', 'reflection', 'of', 'Mr.', 'Trump’s', 'own', 'words,', 'as', 'opposed', 'to', 'remarks', 'that', 'others', 'create', 'and', 'the', 'campaign', 'puts', 'in', 'his', 'mouth.”', '“He’s', 'not', 'an', 'editor', '—', 'he', 'is', 'actually', 'the', 'creator', 'of', 'the', 'speech,”', 'Mr.', 'Manafort', 'said.', '“Mr.', 'Trump', 'has', 'given', 'Steve', 
'Miller', 'and', 'I', 'very', 'specific', 'directions', 'about', 'how', 'he', 'views', 'the', 'speech,', 'what', 'he', 'wants', 'to', 'communicate,', 'and', 'ways', 'to', 'tie', 'together', 'things', 'that', 'he', 'has', 'been', 'talking', 'about', 'in', 'the', 'campaign.', 'The', 'speech', 'will', 'end', 'up', 'being', 'tone-perfect', 'because', 'the', 'speech’s', 'words', 'will', 'be', 'his', 'words.”', 'Mr.', 'Trump', 'prefers', 'speaking', 'off', 'the', 'cuff', 'with', 'handwritten', 'notes,', 'a', 'style', 'that', 'has', 'proved', 'successful', 'at', 'his', 'rallies,', 'where', 'he', 'has', 'shown', 'a', 'talent', 'for', 'connecting', 'with', 'and', 'electrifying', 'crowds.', 'But', 'his', 'adjustment', 'to', 'formal', 'speeches', 'remains', 'a', 'work', 'in', 'progress:', 'He', 'does', 'not', 'always', 'sound', 'like', 'himself,', 'and', 'reading', 'from', 'a', 'text', 'can', 'detract', 'from', 'the', 'sense', 'of', 'authenticity', 'that', 'his', 'supporters', 'prize.', 'One', 'question', 'is', 'whether,', 'or', 'how', 'much,', 'he', 'will', 'ad-lib.', 'He', 'has', 'sometimes', 'seemed', 'unable', 'to', 'resist', 'deviating', 'from', 'prepared', 'remarks,', 'often', 'to', 'ill', 'effect', '—', 'ranting', 'about', 'a', 'mosquito', ',', 'or', 'joking', 'that', 'a', 'passing', 'airplane', 'was', 'from', 'Mexico', 'and', 'was', '“', 'getting', 'ready', 'to', 'attack', '.”', '“Ad-libbing', 'is', 'instinct,', 'all', 'instinct,”', 'Mr.', 'Trump', 'said.', '“I', 'thought', 'maybe', 'about', 'doing', 'a', 'freewheeling', 'speech', 'for', 'the', 'convention,', 'but', 'that', 'really', 'wouldn’t', 'work.', 'But', 'even', 'with', 'a', 'teleprompter,', 'the', 'speech', 'will', 'be', 'me', '—', 'my', 'ideas,', 'my', 'beliefs,', 'my', 'words.”'] Document BIO Tags: ['O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 
'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'B', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 
'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'B', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'B', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 
'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'B', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 
'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O'] Extractive/present Keyphrases: ['speeches', 'plagiarism'] Abstractive/absent Keyphrases: ['2016 presidential election', 'donald trump', 'republican national convention,rnc', 'melania trump'] Other Metadata: {'id': 'ny0282969', 'categories': ['us', 'politics'], 'date': '2016/07/21', 'title': 'For Donald Trump’s Big Speech, an Added Pressure: No Echoes', 'abstract': 'CLEVELAND — Until Monday night, Donald J. Trump’s biggest concern about his convention speech was how much to reveal about himself and his family in an address that is often the most personal one a presidential candidate delivers. But the political firestorm over his wife’s speech , which borrowed passages from Michelle Obama’s convention remarks in 2008, raised the stakes exponentially. Mr. Trump’s speech on Thursday night cannot merely be his best ever. It also has to be bulletproof. By Tuesday morning, word had spread throughout his campaign that any language in Mr. Trump’s address even loosely inspired by speeches, essays, books or Twitter posts had to be either rewritten or attributed. Mr. Trump’s chief speechwriter, Stephen Miller, reassured colleagues that the acceptance speech was wholly original, according to two staff members who spoke with him and described those conversations on the condition of anonymity. Mr. Miller also told campaign aides that he had looked closely at passages that Mr. Trump had contributed — handwritten on unlined white pages — and was confident they contained no problems. (Mr. Miller declined an interview request.) Even so, one of the staff members downloaded plagiarism-detection software and ran a draft of the speech through the program. No red flags came up. The intense scrutiny of Mr. Trump’s words added new pressure to a speechwriting process that has been one of the most unpredictable and free-form in modern presidential campaigns. A month ago, Mr. 
Trump began giving dictation on themes for the speech, and he tossed ideas and phrases to Mr. Miller or other advisers on a daily basis. On printed copies of each draft, he circled passages he liked, crossed out or put question marks beside lines that he did not favor and frequently suggested new words or phrases. Image Stephen Miller, left, Mr. Trump’s chief speechwriter, and Paul Manafort, the campaign chairman, before an event for the candidate at the Trump SoHo hotel in New York last month. Credit Damon Winter/The New York Times “I’ve been amending the drafts big-league,” Mr. Trump said in an interview in his Manhattan office before the convention. “I get ideas from a lot of different places, a lot of smart people, but mostly I like language that sounds like me.” Yet in the aftermath of Melania Trump’s speech, campaign advisers have fretted that they do not know for sure where Mr. Trump gets his ideas and language — whether they are his own, in other words, or are picked up from Twitter, television, or, say, a best seller by Bill O’Reilly of Fox News, a commentator whom Mr. Trump likes. Borrowing or adapting may not always be tantamount to plagiarism, but several Trump advisers, who also insisted on anonymity, said that after the furor over Ms. Trump’s remarks, the campaign cannot allow a similar blowup. Ed Rollins, a Republican strategist who is advising a “super PAC” supporting Mr. Trump, said that the candidate could not afford any mistakes. “His speech is the whole game,” Mr. Rollins said. “Viewers have to watch it and say, ‘There is the next president of the United States.’” In the interview, Mr. Trump said his speech would center on his vision of a strong and secure America that “once existed and no longer does, but can again under a Trump administration.” Latest Election Polls 2016 Get the latest national and state polls on the presidential election between Hillary Clinton and Donald J. Trump. 
His greatest challenge, he said, was “putting myself in the speech” — discussing his upbringing and early experiences and relating them to the hopes and aspirations of other Americans. “I was never comfortable getting personal about my family because I thought it was special territory,” Mr. Trump said, glancing at a picture of his father on his desk. “It can feel exploitative to use family stories to win votes. And I had a very happy and comfortable life growing up. I had a great relationship with my father. But my focus needs to be on all the Americans who are struggling.” He said he was unsure if he would discuss his older brother Fred, who died as an alcoholic in 1981 at 43 — and whom he has described as an example of how destructive choices can damage lives that seem golden. “Without my brother Fred I might not be here,” Mr. Trump said. “He was really smart, great-looking. I don’t drink or smoke because of what happened to him. I focused on building my business and making good choices. I may talk about that, but I don’t know if I should.” Acceptance speeches seldom seem complete without anecdotes about personal trials and triumphs: Mitt Romney, trying to persuade voters to see him as more than a rich businessman, devoted about a fourth of his 2012 address to his parents’ unconditional love, his Mormon faith and reminiscences about watching the moon landing. In 2008 , Barack Obama described how his grandfather benefited from the G.I. Bill and how his mother and grandmother taught him the value of hard work. And Bill Clinton’s 1992 speech vividly recalled the life lessons he learned from his mother about fighting and working hard, from his grandfather about racial equality — and from his wife, Hillary, who, Mr. Clinton said, taught him that every child could learn. Mr. Clinton finished his speech with a now-famous line tying his Arkansas hometown to the American dream. “I end tonight where it all began for me,” he said. 
“I still believe in a place called Hope.” James Carville, a senior strategist for Mr. Clinton’s 1992 campaign, said that if Mr. Trump hoped to change the minds of those who see him as divisive or bigoted, he would need to open himself up to voters in meaningfully personal ways in his speech. “If he’s really different than the way he seems in television interviews or at his rallies, Thursday’s speech will be his single greatest opportunity to show voters who he really is,” Mr. Carville said. Paul Manafort, the Trump campaign chairman, said that Thursday’s speech would be “very much a reflection of Mr. Trump’s own words, as opposed to remarks that others create and the campaign puts in his mouth.” “He’s not an editor — he is actually the creator of the speech,” Mr. Manafort said. “Mr. Trump has given Steve Miller and I very specific directions about how he views the speech, what he wants to communicate, and ways to tie together things that he has been talking about in the campaign. The speech will end up being tone-perfect because the speech’s words will be his words.” Mr. Trump prefers speaking off the cuff with handwritten notes, a style that has proved successful at his rallies, where he has shown a talent for connecting with and electrifying crowds. But his adjustment to formal speeches remains a work in progress: He does not always sound like himself, and reading from a text can detract from the sense of authenticity that his supporters prize. One question is whether, or how much, he will ad-lib. He has sometimes seemed unable to resist deviating from prepared remarks, often to ill effect — ranting about a mosquito , or joking that a passing airplane was from Mexico and was “ getting ready to attack .” “Ad-libbing is instinct, all instinct,” Mr. Trump said. “I thought maybe about doing a freewheeling speech for the convention, but that really wouldn’t work. 
But even with a teleprompter, the speech will be me — my ideas, my beliefs, my words.”', 'keyword': '2016 Presidential Election;Donald Trump;Republican National Convention,RNC;Speeches;Plagiarism;Melania Trump'} ----------- Sample from validation data split Fields in the sample: ['id', 'document', 'doc_bio_tags', 'extractive_keyphrases', 'abstractive_keyphrases', 'other_metadata'] Tokenized Document: ['Jack', 'Sock', 'Picks', 'Up', 'Where', 'He', 'Left', 'Off', 'at', 'Last', 'Year’s', 'U.S.', 'Open', 'When', 'we', 'last', 'saw', 'Jack', 'Sock', 'at', 'the', 'United', 'States', 'Open', ',', 'a', 'year', 'ago', 'September,', 'he', 'was', 'holding', 'a', 'trophy', 'over', 'his', 'head', 'and', '—', 'not', 'yet', '19', 'and', 'a', 'newly', 'declared', 'professional', '—', 'being', 'hailed', 'a', 'Grand', 'Slam', 'champion.', 'Granted,', 'as', 'major', 'titles', 'go,', 'mixed', 'doubles', '(with', 'Melanie', 'Oudin)', 'was', 'akin', 'to', 'a', 'serving', 'of', 'cheese', 'and', 'crackers,', 'with', 'the', 'steak,', 'or', 'singles', 'title,', 'still', 'lodged', 'in', 'the', 'freezer.', 'But', 'as', 'Sock', 'had', 'the', 'previous', 'year', 'also', 'won', 'the', 'junior', 'boys', 'title', 'in', 'Flushing', 'Meadows', 'and,', 'with', 'legend', 'holding', 'that', 'he', 'had', 'never', 'lost', 'a', 'high', 'school', 'match,', 'it', 'was', 'natural', '—', 'at', 'least', 'hopeful', '—', 'to', 'think', 'he', 'might', 'have', 'a', 'healthy', 'share', 'of', 'winning', 'genes', 'to', 'go', 'with', 'his', 'booming', 'serve.', 'And', 'his', 'name,', 'for', 'goodness', 'sakes,', 'is', 'Jack', 'Sock;', 'of', 'Lincoln,', 'Neb.,', 'a', 'proud', 'Cornhusker.', 'Does', 'it', 'get', 'any', 'more', 'wholesome', 'and', 'hearty', 'for', 'a', 'country', 'in', 'a', 'continuous', 'search', 'for', 'its', 'next', 'men’s', 'star', 'in', 'this', 'athletically', 'enhanced', 'smash-mouth', 'era?', 'So', 'after', 'Sock', 'introduced', 'himself', 'to', 'Florian', 'Mayer,', 'a', 'German', 'seeded', 
'22nd,', 'with', 'a', 'sizzling', 'ace', 'down', 'the', 'T', 'and', 'held', 'serve', 'to', 'begin', 'a', 'first-round', 'match', 'Monday', 'on', 'the', 'grandstand', 'court,', 'fans', 'responded', 'with', 'a', 'chant', 'of', '“Let’s', 'Go', 'Sock!”', 'Forgetting', 'for', 'the', 'moment', 'that', 'New', 'York', 'is', 'a', 'Yankees', 'town,', 'it', 'was', 'better', 'than', 'one', 'alternative', '—', 'Sock', 'it', 'to', 'him', '—', 'and', 'completely', 'understandable', 'as', 'Sock', 'was', 'in', 'the', 'process', 'of', 'feeding', 'America’s', 'slam', 'its', 'first', 'helping', 'of', 'nationalistic', 'fervor', 'by', 'overpowering', 'Mayer,', 'who', 'retired', 'while', 'trailing,', '6-3,', '6-2,', '3-2.', 'One', 'or', 'two', 'more', 'performances', 'like', 'this', 'and', 'we', 'can', 'expect', 'a', 'slew', 'of', 'word', 'play', 'headlines,', 'beginning', 'with', 'Sock', 'and', 'Awe.', 'It', 'doesn’t', 'take', 'much', 'to', 'fire', 'up', 'the', 'Next', 'Great', 'American', 'news', 'media', 'machine,', 'not', 'that', 'Sock', 'is', 'lacking', 'in', 'confidence', 'or', 'ambition.', '“I', 'feel', 'like', 'my', 'game', 'is', 'right', 'on', 'the', 'verge', 'of', 'going', 'to', 'the', 'next', 'level,”', 'he', 'said', 'after', 'winning', 'his', 'fourth', 'tour', 'match', 'of', '2012', 'against', 'six', 'losses.', 'To', 'explain', 'what', 'he', 'meant', 'of', 'taking', 'his', 'game', 'to', 'the', '“next', 'level,”', 'put', 'it', 'this', 'way:', 'from', 'his', 'current', 'ranking,', '243,', 'there', 'are', 'many', 'stops', 'to', 'make', 'on', 'the', 'ride', 'to', 'the', 'dizzying', 'heights', 'where', 'Roger', 'Federer', 'and', 'elite', 'company', 'reside', '—', 'beginning', 'with', 'leaping', 'into', 'position', 'near', 'another', 'young', 'and', 'hopeful', 'Yank,', 'Ryan', 'Harrison,', 'currently', 'No.', '61.', 'On', 'the', 'scale', 'of', 'youthful', 'and', 'potential', 'men’s', 'tour', 'heirs,', 'the', '21-year-old', 'Milos', 'Raonic', 'of', 'Canada', 'is', 'the', 'closest', 
'to', 'a', 'major', 'breakthrough,', 'though', 'it', 'is', 'also', 'difficult', 'to', 'define', 'what', 'even', 'that', 'means', 'when', 'three', 'players', '—', 'Federer,', 'Rafael', 'Nadal', 'and', 'Novak', 'Djokovic', '—', 'have', 'won', '29', 'of', 'the', 'last', '30', 'slam', 'titles', 'and', 'show', 'little', 'inclination', 'of', 'easing', 'their', 'chokehold.', 'Compared', 'with', 'what', 'the', 'more', 'promising', 'newbies', 'face', 'these', 'days,', 'the', 'emergent', 'superstars', 'of', 'yore', 'practically', 'took', 'their', 'Grand', 'Slam', 'treats', 'by', 'merely', 'growing', 'tall', 'enough', 'to', 'reach', 'into', 'the', 'cookie', 'jar.', 'Boris', 'Becker', 'won', 'Wimbledon', 'as', 'a', '17-year-old', 'mop-haired', 'redhead.', 'John', 'McEnroe', 'and', 'Pete', 'Sampras', 'broke', 'through', 'in', 'New', 'York', 'at', '20', 'and', '19.', 'Into', 'the', '21st', 'century,', 'Nadal', 'began', 'his', 'domination', 'of', 'the', 'French', 'Open', 'at', '19,', 'Djokovic', 'won', 'the', 'Australian', 'Open', 'at', '21', 'and', 'Federer', 'sank', 'to', 'his', 'knees', 'at', 'Wimbledon', 'weeks', 'before', 'turning', '22.', 'These', 'days,', 'it', 'is', 'unfathomable', 'to', 'think', 'of', 'a', 'skinny', 'and', 'moon-balling', 'Michael', 'Chang', 'winning', 'the', 'French', 'Open', 'at', '17,', 'as', 'he', 'did', 'in', '1989,', 'or', 'a', 'teenager', 'winning', 'any', 'of', 'the', 'slams.', '“I', 'don’t', 'think', 'that’s', 'going', 'to', 'be', 'the', 'case', 'any', 'time', 'soon', 'because', 'this', 'game', 'is', 'so', 'physical', 'now', 'and', 'people', 'need', 'to', 'grow', 'into', 'their', 'body,”', 'said', 'John', 'Isner,', 'who', 'at', '27', 'has', 'reason', 'to', 'believe', 'that', 'his', 'best', 'results,', 'whatever', 'they', 'may', 'be,', 'are', 'still', 'ahead', 'of', 'him.', 'At', '31,', 'Federer,', 'who', 'absurdly', 'has', 'not', 'missed', 'a', 'Grand', 'Slam', 'tournament', 'in', '13', 'years,', 'may', 'be', 'the', 'best-conditioned', 'of', 
'all.', 'Andy', 'Murray,', 'at', '25,', 'is', 'thought', 'to', 'be', 'on', 'the', 'verge', 'of', 'his', 'prime.', 'It', 'is', 'mind-boggling', 'to', 'think', 'that', 'Bjorn', 'Borg,', 'McEnroe,', 'Becker', 'and', 'others', 'were', 'playing', 'on', 'fumes,', 'their', 'best', 'matches', 'behind', 'them,', 'by', 'their', 'mid-20s.', 'A', 'no-kidding', 'adult’s', 'tour', 'that', 'provides', 'longevity', 'and', 'personal', 'context', 'is', 'so', 'much', 'richer', 'than', 'the', 'alternative.', 'But', 'given', 'such', 'dramatic', 'career', 'clock', 'changes,', 'patience', 'may', 'be', 'a', 'most', 'valuable', 'virtue', 'for', 'players', 'like', 'Raonic,', 'Harrison', 'and', 'Bernard', 'Tomic', 'of', 'Australia.', '“Those', 'guys,', 'it', 'might', 'take', 'them', 'a', 'little', 'while', 'to', 'see', 'their', 'very,', 'very', 'best', 'results,', 'but', 'they’re', 'certainly', 'not', 'doing', 'so', 'bad', 'right', 'now,”', 'said', 'Isner,', 'who', 'didn’t', 'hesitate', 'to', 'include', 'Sock,', 'calling', 'him', '“a', 'very', 'good', 'player.”', 'Sock', 'is', 'a', 'strapping', '’Husker,', '6', 'feet', '1', 'inch,', '180', 'pounds,', 'but', 'he', 'was', 'set', 'back', 'physically', 'in', 'March', 'by', 'surgery', 'to', 'repair', 'a', 'torn', 'abdominal', 'muscle.', 'In', 'a', 'brilliant', 'stroke,', 'he', 'has', 'been', 'working', 'in', 'Las', 'Vegas', 'with', 'the', 'trainer', 'Gil', 'Reyes,', 'who', 'whipped', 'the', 'once-profligate', 'Andre', 'Agassi', 'into', 'shape.', 'He', 'has', 'hired', 'the', 'former', 'Swedish', 'player,', 'Joakim', 'Nystrom,', 'to', 'help', 'him', 'play', 'a', 'more', 'patient', 'game.', 'On', 'today’s', 'altered', 'career', 'time', 'clock,', 'there', 'is', 'no', 'choice', 'but', 'to', 'wait', 'one’s', 'turn', 'and', 'see', 'what', 'happens.', 'In', 'a', 'microcosm', 'of', 'that', 'strategy,', 'Sock', 'fell', 'behind,', '0-40,', 'while', 'serving', 'at', '4-2', 'in', 'the', 'first', 'set,', 'rallied', 'to', 'deuce,', 'kept', 'his', 'cool', 'as', 
'Mayer', 'challenged', 'two', 'line', 'calls', 'and', 'won', 'both,', 'and', 'wound', 'up', 'winning', 'the', 'long', 'game', 'with', 'the', 'help', 'of', 'his', 'own', 'challenge', 'of', 'an', 'out', 'call.', 'He', 'was', 'never', 'threatened', 'after', 'that,', 'cranking', 'his', 'first', 'serve', 'as', 'high', 'as', '134', 'miles', 'per', 'hour,', 'winning', '17', 'of', '25', 'second-service', 'points', 'and', 'shrugging', 'off', 'the', 'question', 'of', 'when', 'the', 'Next', 'Great', 'American', 'will', 'arrive', 'as', 'easily', 'as', 'he', 'did', 'Mayer.', '“Until', 'the', 'results', 'are', 'there,', 'until', 'the', 'rankings', 'and', 'everything', 'is', 'there,', 'not', 'a', 'different', 'answer', 'to', 'give,”', 'he', 'said.', 'Give', 'him', 'time,', 'in', 'other', 'words.', 'By', 'today’s', 'standards,', 'he’s', 'got', 'a', 'few', 'years', 'before', 'we', 'have', 'to', 'stop', 'asking.'] Document BIO Tags: ['O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 
'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 
'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O'] Extractive/present Keyphrases: [] Abstractive/absent Keyphrases: ['tennis', 'united 
states open (tennis)', 'sock jack'] Other Metadata: {'id': 'ny0125215', 'categories': ['sports', 'tennis'], 'date': '2012/08/28', 'title': 'Jack Sock Picks Up Where He Left Off at Last Year’s U.S. Open', 'abstract': 'When we last saw Jack Sock at the United States Open , a year ago September, he was holding a trophy over his head and — not yet 19 and a newly declared professional — being hailed a Grand Slam champion. Granted, as major titles go, mixed doubles (with Melanie Oudin) was akin to a serving of cheese and crackers, with the steak, or singles title, still lodged in the freezer. But as Sock had the previous year also won the junior boys title in Flushing Meadows and, with legend holding that he had never lost a high school match, it was natural — at least hopeful — to think he might have a healthy share of winning genes to go with his booming serve. And his name, for goodness sakes, is Jack Sock; of Lincoln, Neb., a proud Cornhusker. Does it get any more wholesome and hearty for a country in a continuous search for its next men’s star in this athletically enhanced smash-mouth era? So after Sock introduced himself to Florian Mayer, a German seeded 22nd, with a sizzling ace down the T and held serve to begin a first-round match Monday on the grandstand court, fans responded with a chant of “Let’s Go Sock!” Forgetting for the moment that New York is a Yankees town, it was better than one alternative — Sock it to him — and completely understandable as Sock was in the process of feeding America’s slam its first helping of nationalistic fervor by overpowering Mayer, who retired while trailing, 6-3, 6-2, 3-2. One or two more performances like this and we can expect a slew of word play headlines, beginning with Sock and Awe. It doesn’t take much to fire up the Next Great American news media machine, not that Sock is lacking in confidence or ambition. 
“I feel like my game is right on the verge of going to the next level,” he said after winning his fourth tour match of 2012 against six losses. To explain what he meant of taking his game to the “next level,” put it this way: from his current ranking, 243, there are many stops to make on the ride to the dizzying heights where Roger Federer and elite company reside — beginning with leaping into position near another young and hopeful Yank, Ryan Harrison, currently No. 61. On the scale of youthful and potential men’s tour heirs, the 21-year-old Milos Raonic of Canada is the closest to a major breakthrough, though it is also difficult to define what even that means when three players — Federer, Rafael Nadal and Novak Djokovic — have won 29 of the last 30 slam titles and show little inclination of easing their chokehold. Compared with what the more promising newbies face these days, the emergent superstars of yore practically took their Grand Slam treats by merely growing tall enough to reach into the cookie jar. Boris Becker won Wimbledon as a 17-year-old mop-haired redhead. John McEnroe and Pete Sampras broke through in New York at 20 and 19. Into the 21st century, Nadal began his domination of the French Open at 19, Djokovic won the Australian Open at 21 and Federer sank to his knees at Wimbledon weeks before turning 22. These days, it is unfathomable to think of a skinny and moon-balling Michael Chang winning the French Open at 17, as he did in 1989, or a teenager winning any of the slams. “I don’t think that’s going to be the case any time soon because this game is so physical now and people need to grow into their body,” said John Isner, who at 27 has reason to believe that his best results, whatever they may be, are still ahead of him. At 31, Federer, who absurdly has not missed a Grand Slam tournament in 13 years, may be the best-conditioned of all. Andy Murray, at 25, is thought to be on the verge of his prime. 
It is mind-boggling to think that Bjorn Borg, McEnroe, Becker and others were playing on fumes, their best matches behind them, by their mid-20s. A no-kidding adult’s tour that provides longevity and personal context is so much richer than the alternative. But given such dramatic career clock changes, patience may be a most valuable virtue for players like Raonic, Harrison and Bernard Tomic of Australia. “Those guys, it might take them a little while to see their very, very best results, but they’re certainly not doing so bad right now,” said Isner, who didn’t hesitate to include Sock, calling him “a very good player.” Sock is a strapping ’Husker, 6 feet 1 inch, 180 pounds, but he was set back physically in March by surgery to repair a torn abdominal muscle. In a brilliant stroke, he has been working in Las Vegas with the trainer Gil Reyes, who whipped the once-profligate Andre Agassi into shape. He has hired the former Swedish player, Joakim Nystrom, to help him play a more patient game. On today’s altered career time clock, there is no choice but to wait one’s turn and see what happens. In a microcosm of that strategy, Sock fell behind, 0-40, while serving at 4-2 in the first set, rallied to deuce, kept his cool as Mayer challenged two line calls and won both, and wound up winning the long game with the help of his own challenge of an out call. He was never threatened after that, cranking his first serve as high as 134 miles per hour, winning 17 of 25 second-service points and shrugging off the question of when the Next Great American will arrive as easily as he did Mayer. “Until the results are there, until the rankings and everything is there, not a different answer to give,” he said. Give him time, in other words. 
By today’s standards, he’s got a few years before we have to stop asking.', 'keyword': 'Tennis;United States Open (Tennis);Sock Jack'} ----------- Sample from test data split Fields in the sample: ['id', 'document', 'doc_bio_tags', 'extractive_keyphrases', 'abstractive_keyphrases', 'other_metadata'] Tokenized Document: ['World', 'records', 'no', 'joke', 'to', 'frustrated', 'Pakistanis', 'ISLAMABAD', '-', 'One', 'young', 'contender', 'created', 'the', 'world’s', 'largest', 'sequin', 'mosaic', 'using', '325,000', 'of', 'the', 'sparkly', 'discs.', 'Two', 'other', 'youths', 'achieved', '123', 'consecutive', 'badminton', 'passes', 'in', 'one', 'minute.', 'And', '1,450', 'participants', 'broke', 'the', 'record', 'for', 'the', 'most', 'people', 'arm', 'wrestling.', 'Such', 'are', 'the', 'skills', 'that', 'Guinness', 'World', 'Records', 'are', 'made', 'of', 'in', 'Pakistan,', 'where', 'thousands', 'of', 'young', 'people', 'are', 'groomed', 'to', 'establish', 'their', 'unique', 'feats', 'for', 'posterity.', 'Last', 'week,', 'the', 'contestants', 'came', 'together', 'for', 'the', 'annual', 'Punjab', 'Youth', 'Festival', 'to', 'show', 'their', 'stuff', '—', 'many', 'in', 'athletics,', 'but', 'others', 'in', 'downright', 'quirky', 'displays,', 'including', 'one', 'young', 'boy', 'who', 'achieved', 'fame', 'by', 'kicking', '50', 'coconuts', 'from', 'on', 'top', 'of', 'the', 'heads', 'of', 'a', 'row', 'of', 'people.', 'It', 'seems', 'Pakistan', 'has', 'become', 'a', 'world', 'record-creating', 'machine,', 'with', 'the', 'coordinated', 'effort', 'reaping', 'an', 'impressive', '23', 'world', 'records,', 'event', 'organizers', 'boasted.', 'The', 'push', 'for', 'inclusion', 'of', 'Pakistanis', 'in', 'the', 'venerable', 'Guinness', 'World', 'Records', 'entries', '(which', 'began', 'in', 'book', 'form', 'in', '1955)', 'stems', 'in', 'part', 'from', 'festival', 'organizers’', 'desire', 'to', 'boost', 'the', 'image', 'of', 'a', 'country', 'often', 'associated', 'with', 'militancy,', 
'religious', 'strife', 'and', 'economic', 'decline.', 'There', 'is', 'a', 'patriotic', 'element,', 'as', 'well:', 'Last', 'October,', 'for', 'instance,', '42,813', 'Pakistanis', 'got', 'together', 'in', 'a', 'Lahore', 'hockey', 'stadium', 'to', 'belt', 'out', 'the', 'national', 'anthem', 'and', 'create', 'yet', 'another', 'world', 'record', 'for', 'the', 'most', 'people', 'singing', 'their', 'country’s', 'anthem.', 'Days', 'later,', 'another', '24,200', 'people', 'held', 'green', 'and', 'white', 'boxes', '—', 'the', 'colors', 'of', 'the', 'national', 'flag', 'of', 'Pakistan', '—', 'to', 'set', 'the', 'world', 'record', 'for', 'creating', 'the', 'largest', 'human', 'flag.', 'Although', 'some', 'of', 'the', 'records', 'might', 'seem', 'amusing', 'to', 'others', '—', 'coconut', 'kicking', 'champ', 'Mohammad', 'Rashid', 'of', 'Karachi', 'last', 'week', 'claimed', 'his', 'fourth', 'world', 'record', 'by', 'breaking', '34', 'pine', 'boards', 'in', '32', 'seconds', 'with', 'his', 'head', '—', 'the', 'competitions', 'were', 'no', 'laughing', 'matter', 'to', 'participants.', 'Usman', 'Anwar,', 'director', 'of', 'the', 'Punjab', 'Youth', 'Festival,', 'explained', 'that', 'the', 'kids', 'have', 'been', 'training', 'for', 'eight', 'months.', '“We', 'started', 'at', 'the', 'neighborhood', 'and', 'village', 'level', 'so', 'that', 'children', 'could', 'come', 'out', 'and', 'participate,”', 'said', 'Anwar.', '“Our', 'main', 'objective', 'was', 'to', 'inculcate', 'interest', 'for', 'sports', 'in', 'the', 'public.”', 'Young', 'people', 'from', 'over', '55,000', 'neighborhood', 'and', 'village', 'councils', 'vied', 'for', 'a', 'chance', 'to', 'compete', 'in', 'the', 'games.', '“We', 'were', 'able', 'to', 'select', 'the', 'best', 'of', 'the', 'best', 'to', 'train', 'for', 'the', 'world', 'records,”', 'said', 'Anwar.', 'Because', 'of', 'terrorism,', 'political', 'upheaval', 'and', 'widespread', 'unemployment,', 'many', 'young', 'people', 'appear', 'to', 'have', 'little', 'hope', 'for', 
'the', 'future,', 'says', 'Hafeez', 'Rehman,', 'a', 'professor', 'in', 'the', 'anthropology', 'department', 'at', 'Quaid-i-Azam', 'University', 'in', 'the', 'capital,', 'Islamabad.', 'Sports', 'competitions,', 'Rehman', 'said,', 'create', 'an', 'opportunity', 'for', 'youth', 'to', 'excel', 'personally', 'and', 'also', 'to', 'improve', 'Pakistan’s', 'image.', '“We', 'have', 'energetic', 'youth.', 'Pakistan', 'has', 'more', 'than', '55', 'million', 'young', 'people.', 'It', 'becomes', 'an', 'asset', 'for', 'the', 'country,”', 'he', 'added.', 'The', 'festival', 'itself', 'has', 'become', 'part', 'of', 'the', 'record-setting', 'mania.', 'It', 'was', 'recognized', 'for', 'having', 'more', 'participants', '—', '3.3', 'million,', 'most', 'of', 'whom', 'registered', 'online,', 'according', 'to', 'Anwar', '—', 'constituting', 'a', 'world', 'record', 'for', 'sporting', 'events.'] Document BIO Tags: ['O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'B', 'O', 'O', 'O', 'O', 'O', 'O', 'B', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'B', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'B', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 
'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'B', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'B', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O'] Extractive/present Keyphrases: ['pakistan', 'guinness'] Abstractive/absent Keyphrases: ['india'] Other Metadata: {'id': 'jp0000001', 'categories': ['asia-pacific', 'offbeat-asia-pacific'], 'date': '2013/03/17', 'title': 'World records no joke to frustrated Pakistanis ', 'abstract': 'ISLAMABAD - One young contender created the world’s largest sequin mosaic using 325,000 of the sparkly discs. Two other youths achieved 123 consecutive badminton passes in one minute. And 1,450 participants broke the record for the most people arm wrestling. 
Such are the skills that Guinness World Records are made of in Pakistan, where thousands of young people are groomed to establish their unique feats for posterity. Last week, the contestants came together for the annual Punjab Youth Festival to show their stuff — many in athletics, but others in downright quirky displays, including one young boy who achieved fame by kicking 50 coconuts from on top of the heads of a row of people. It seems Pakistan has become a world record-creating machine, with the coordinated effort reaping an impressive 23 world records, event organizers boasted. The push for inclusion of Pakistanis in the venerable Guinness World Records entries (which began in book form in 1955) stems in part from festival organizers’ desire to boost the image of a country often associated with militancy, religious strife and economic decline. There is a patriotic element, as well: Last October, for instance, 42,813 Pakistanis got together in a Lahore hockey stadium to belt out the national anthem and create yet another world record for the most people singing their country’s anthem. Days later, another 24,200 people held green and white boxes — the colors of the national flag of Pakistan — to set the world record for creating the largest human flag. Although some of the records might seem amusing to others — coconut kicking champ Mohammad Rashid of Karachi last week claimed his fourth world record by breaking 34 pine boards in 32 seconds with his head — the competitions were no laughing matter to participants. Usman Anwar, director of the Punjab Youth Festival, explained that the kids have been training for eight months. “We started at the neighborhood and village level so that children could come out and participate,” said Anwar. “Our main objective was to inculcate interest for sports in the public.” Young people from over 55,000 neighborhood and village councils vied for a chance to compete in the games. 
“We were able to select the best of the best to train for the world records,” said Anwar. Because of terrorism, political upheaval and widespread unemployment, many young people appear to have little hope for the future, says Hafeez Rehman, a professor in the anthropology department at Quaid-i-Azam University in the capital, Islamabad. Sports competitions, Rehman said, create an opportunity for youth to excel personally and also to improve Pakistan’s image. “We have energetic youth. Pakistan has more than 55 million young people. It becomes an asset for the country,” he added. The festival itself has become part of the record-setting mania. It was recognized for having more participants — 3.3 million, most of whom registered online, according to Anwar — constituting a world record for sporting events.', 'keyword': 'india;pakistan;guinness'} ----------- ``` ### Keyphrase Extraction ```python from datasets import load_dataset # get the dataset only for keyphrase extraction dataset = load_dataset("midas/kptimes", "extraction") print("Samples for Keyphrase Extraction") # sample from the train split print("Sample from training data split") train_sample = dataset["train"][0] print("Fields in the sample: ", [key for key in train_sample.keys()]) print("Tokenized Document: ", train_sample["document"]) print("Document BIO Tags: ", train_sample["doc_bio_tags"]) print("\n-----------\n") # sample from the validation split print("Sample from validation data split") validation_sample = dataset["validation"][0] print("Fields in the sample: ", [key for key in validation_sample.keys()]) print("Tokenized Document: ", validation_sample["document"]) print("Document BIO Tags: ", validation_sample["doc_bio_tags"]) print("\n-----------\n") # sample from the test split print("Sample from test data split") test_sample = dataset["test"][0] print("Fields in the sample: ", [key for key in test_sample.keys()]) print("Tokenized Document: ", test_sample["document"]) print("Document BIO Tags: ", 
test_sample["doc_bio_tags"]) print("\n-----------\n") ``` ### Keyphrase Generation ```python # get the dataset only for keyphrase generation dataset = load_dataset("midas/kptimes", "generation") print("Samples for Keyphrase Generation") # sample from the train split print("Sample from training data split") train_sample = dataset["train"][0] print("Fields in the sample: ", [key for key in train_sample.keys()]) print("Tokenized Document: ", train_sample["document"]) print("Extractive/present Keyphrases: ", train_sample["extractive_keyphrases"]) print("Abstractive/absent Keyphrases: ", train_sample["abstractive_keyphrases"]) print("\n-----------\n") # sample from the validation split print("Sample from validation data split") validation_sample = dataset["validation"][0] print("Fields in the sample: ", [key for key in validation_sample.keys()]) print("Tokenized Document: ", validation_sample["document"]) print("Extractive/present Keyphrases: ", validation_sample["extractive_keyphrases"]) print("Abstractive/absent Keyphrases: ", validation_sample["abstractive_keyphrases"]) print("\n-----------\n") # sample from the test split print("Sample from test data split") test_sample = dataset["test"][0] print("Fields in the sample: ", [key for key in test_sample.keys()]) print("Tokenized Document: ", test_sample["document"]) print("Extractive/present Keyphrases: ", test_sample["extractive_keyphrases"]) print("Abstractive/absent Keyphrases: ", test_sample["abstractive_keyphrases"]) print("\n-----------\n") ``` ## Citation Information ``` @inproceedings{gallina2019kptimes, title={KPTimes: A Large-Scale Dataset for Keyphrase Generation on News Documents}, author={Gallina, Ygor and Boudin, Florian and Daille, B{\'e}atrice}, booktitle={Proceedings of the 12th International Conference on Natural Language Generation}, pages={130--135}, year={2019} } ``` ## Contributions Thanks to [@debanjanbhucs](https://github.com/debanjanbhucs), [@dibyaaaaax](https://github.com/dibyaaaaax), 
[@UmaGunturi](https://github.com/UmaGunturi) and [@ad6398](https://github.com/ad6398) for adding this dataset.
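For illustration, the `doc_bio_tags` shown in the samples above can be derived from the tokenized document plus the extractive keyphrases. A minimal sketch, not part of the dataset's tooling; the case-insensitive, whitespace-tokenized matching is an assumption about how the tags were produced:

```python
def keyphrases_to_bio(tokens, keyphrases):
    """Tag tokens with B/I/O markers for every case-insensitive
    occurrence of each (whitespace-tokenized) keyphrase."""
    tags = ["O"] * len(tokens)
    lowered = [t.lower() for t in tokens]
    for kp in keyphrases:
        kp_tokens = kp.lower().split()
        n = len(kp_tokens)
        for i in range(len(lowered) - n + 1):
            if lowered[i:i + n] == kp_tokens:
                tags[i] = "B"                     # first token of the match
                tags[i + 1:i + n] = ["I"] * (n - 1)  # continuation tokens
    return tags

print(keyphrases_to_bio(["World", "records", "in", "Pakistan"],
                        ["pakistan", "world records"]))
# ['B', 'I', 'O', 'B']
```

Absent (abstractive) keyphrases simply never match, which is why they only appear in the `generation` configuration.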
68,409
[ [ -0.00634765625, -0.0312042236328125, 0.0211029052734375, 0.01480865478515625, -0.0243682861328125, 0.01512908935546875, -0.0152740478515625, -0.0078277587890625, 0.0204315185546875, 0.0126495361328125, -0.038787841796875, -0.064697265625, -0.04730224609375, ...
projecte-aina/sts-ca
2023-09-13T12:46:21.000Z
[ "task_categories:text-classification", "task_ids:semantic-similarity-scoring", "task_ids:text-scoring", "annotations_creators:expert-generated", "language_creators:found", "multilinguality:monolingual", "size_categories:unknown", "language:ca", "license:cc-by-4.0", "arxiv:2107.07903", "region:us...
projecte-aina
Semantic Textual Similarity in Catalan. STS corpus is a benchmark for evaluating Semantic Text Similarity in Catalan. It consists of more than 3000 sentence pairs, annotated with the semantic similarity between them, using a scale from 0 (no similarity at all) to 5 (semantic equivalence). Annotation is done manually by 4 different annotators following our guidelines based on previous work from the SemEval challenges (https://www.aclweb.org/anthology/S13-1004.pdf). The source data are scraped sentences from the Catalan Textual Corpus (https://doi.org/10.5281/zenodo.4519349), used under CC-by-SA-4.0 licence (https://creativecommons.org/licenses/by-sa/4.0/). The dataset is released under the same licence. This dataset was developed by BSC TeMU as part of the AINA project, and to enrich the Catalan Language Understanding Benchmark (CLUB). This is the version 1.0.2 of the dataset with the complete human and automatic annotations and the analysis scripts. It also has a more accurate license. This dataset can be used to build and score semantic similarity models.
Rodriguez-Penagos, Carlos Gerardo, Armentano-Oller, Carme, Gonzalez-Agirre, Aitor, & Gibert Bonet, Ona. (2021). Semantic Textual Similarity in Catalan (Version 1.0.1) [Data set]. Zenodo. http://doi.org/10.5281/zenodo.4761434
0
10
2022-03-02T23:29:22
--- annotations_creators: - expert-generated language_creators: - found language: - ca license: - cc-by-4.0 multilinguality: - monolingual size_categories: - unknown source_datasets: [] task_categories: - text-classification task_ids: - semantic-similarity-scoring - text-scoring pretty_name: sts-ca --- # Dataset Card for STS-ca ## Dataset Description - **Website:** https://zenodo.org/record/4761434 - **Paper:** [Are Multilingual Models the Best Choice for Moderately Under-resourced Languages? A Comprehensive Assessment for Catalan](https://arxiv.org/abs/2107.07903) - **Point of Contact:** [Carlos Rodríguez-Penagos](carlos.rodriguez1@bsc.es) and [Carme Armentano-Oller](carme.armentano@bsc.es) ### Dataset Summary STS-ca corpus is a benchmark for evaluating Semantic Text Similarity in Catalan. This dataset was developed by [BSC TeMU](https://temu.bsc.es/) as part of [Projecte AINA](https://politiquesdigitals.gencat.cat/ca/economia/catalonia-ai/aina/), to enrich the [Catalan Language Understanding Benchmark (CLUB)](https://club.aina.bsc.es/). ### Supported Tasks and Leaderboards This dataset can be used to build and score semantic similarity models in Catalan. ### Languages The dataset is in Catalan (`ca-ES`). ## Dataset Structure ### Data Instances Follows [SemEval challenges](https://www.aclweb.org/anthology/S13-1004.pdf): * index (int) * id (str): Unique ID assigned to the sentence pair. * sentence 1 (str): First sentence of the pair. * sentence 2 (str): Second sentence of the pair. * avg (float): Gold truth #### Example | index | id | sentence 1 | sentence 2 | avg | | ------- | ---- | ------------ | ------------ | ----- | | 19 | ACN2_131 | Els manifestants ocupen l'Imperial Tarraco durant una hora fent jocs de taula | Els manifestants ocupen l'Imperial Tarraco i fan jocs de taula | 4 | | 21 | TE2_80 | El festival comptarà amb cinc escenaris i se celebrarà entre el 7 i el 9 de juliol al Parc del Fòrum. 
| El festival se celebrarà el 7 i 8 de juliol al Parc del Fòrum de Barcelona | 3 | | 23 | Oscar2_609 | Aleshores hi posarem un got de vi i continuarem amb la cocció fins que s'hagi evaporat el vi i ho salpebrarem. | Mentre, hi posarem el vi al sofregit i deixarem coure uns 7/8′, fins que el vi s'evapori. | 3 | | 25 | Viqui2_48 | L'arboç grec (Arbutus andrachne) és un arbust o un petit arbre dins la família ericàcia. | El ginjoler ("Ziziphus jujuba") és un arbust o arbre petit de la família de les "Rhamnaceae". | 2.75 | | 27 | ACN2_1072 | Mentre han estat davant la comandància, els manifestants han cridat consignes a favor de la independència i han cantat cançons com 'L'estaca'. | Entre les consignes que han cridat s'ha pogut escoltar càntics com 'els carrers seran sempre nostres' i contínues consignes en favor de la independència. | 3 | | 28 | Viqui2_587 | Els cinc municipis ocupen una superfície de poc més de 100 km2 i conjuntament sumen una població total aproximada de 3.691 habitants (any 2019). | Té una població d'1.811.177 habitants (2005) repartits en 104 municipis d'una superfície total de 14.001 km2. | 2.67 | ### Data Fields This dataset follows [SemEval](https://www.aclweb.org/anthology/S13-1004.pdf) challenges formats and conventions. ### Data Splits - sts_cat_dev_v1.tsv (500 annotated pairs) - sts_cat_train_v1.tsv (2073 annotated pairs) - sts_cat_test_v1.tsv (500 annotated pairs) ## Dataset Creation ### Curation Rationale We created this dataset to contribute to the development of language models in Catalan, a low-resource language. ### Source Data #### Initial Data Collection and Normalization Random sentences were extracted from 3 Catalan subcorpus from the [Catalan Textual Corpus](https://zenodo.org/record/4519349#.Ys_0PexBzOs): [ACN](https://www.acn.cat/), [Oscar](https://oscar-corpus.com/) and [Wikipedia](https://ca.wikipedia.org/wiki/Portada). 
We generated candidate pairs using a combination of metrics from Doc2Vec, Jaccard and a BERT-like model (“[distiluse-base-multilingual-cased-v2](https://huggingface.co/distilbert-base-multilingual-cased)”). Finally, we manually reviewed the generated pairs to reject non-relevant pairs (identical or ungrammatical sentences, etc.) before providing them to the annotation team. The average of the four annotations was selected as a “ground truth” for each sentence pair, except when an annotator diverged in more than one unit from the average. In these cases, we discarded the divergent annotation and recalculated the average without it. We also discarded 45 sentence pairs because the annotators disagreed too much. For compatibility with similar datasets in other languages, we followed as closely as possible existing curation guidelines. #### Who are the source language producers? The [Catalan Textual Corpus](https://zenodo.org/record/4519349#.Ys_0PexBzOs) is a 1760-million-token web corpus of Catalan built from several sources: existing corpus such as DOGC, CaWac (non-deduplicated version), Oscar (unshuffled version), Open Subtitles, Catalan Wikipedia; and three brand new crawlings: the Catalan General Crawling, obtained by crawling the 500 most popular .cat and .ad domains; the Catalan Government Crawling, obtained by crawling the .gencat domain and subdomains, belonging to the Catalan Government; and the ACN corpus with 220k news items from March 2015 until October 2020, crawled from the Catalan News Agency. ### Annotations #### Annotation process We commissioned the manual annotation of the similarity between the sentences of each pair, following the provided guidelines. #### Who are the annotators? A team of native language speakers from 2 different companies, working independently. ### Personal and Sensitive Information No personal or sensitive information included.
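The averaging rule described above (discard any annotation more than one unit from the mean, then recompute the mean without it) can be sketched in a few lines. A sketch only: the function name and the behaviour when every annotation is discarded are illustrative assumptions, not part of the released analysis scripts:

```python
def gold_similarity(annotations, max_divergence=1.0):
    """Average annotator scores; any score diverging from the mean by
    more than `max_divergence` units is discarded and the mean is
    recomputed without it. Returns None when no annotation survives
    (the pair would be dropped, like the 45 discarded pairs above)."""
    mean = sum(annotations) / len(annotations)
    kept = [a for a in annotations if abs(a - mean) <= max_divergence]
    if not kept:
        return None
    return sum(kept) / len(kept)

print(gold_similarity([5, 5, 5, 2]))  # 2 diverges by 2.25 units -> 5.0
print(gold_similarity([4, 4, 3, 4]))  # all within 1 unit       -> 3.75
```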
## Considerations for Using the Data ### Social Impact of Dataset We hope this dataset contributes to the development of language models in Catalan, a low-resource language. ### Discussion of Biases [N/A] ### Other Known Limitations [N/A] ## Additional Information ### Dataset Curators Text Mining Unit (TeMU) at the Barcelona Supercomputing Center (bsc-temu@bsc.es) This work was funded by the [Departament de la Vicepresidència i de Polítiques Digitals i Territori de la Generalitat de Catalunya](https://politiquesdigitals.gencat.cat/en/inici/index.html) within the framework of [Projecte AINA](https://politiquesdigitals.gencat.cat/ca/economia/catalonia-ai/aina/). ### Licensing Information This work is licensed under a <a rel="license" href="https://creativecommons.org/licenses/by-sa/4.0/">Attribution-ShareAlike 4.0 International License</a>. ### Citation Information ``` @inproceedings{armengol-estape-etal-2021-multilingual, title = "Are Multilingual Models the Best Choice for Moderately Under-resourced Languages? {A} Comprehensive Assessment for {C}atalan", author = "Armengol-Estap{\'e}, Jordi and Carrino, Casimiro Pio and Rodriguez-Penagos, Carlos and de Gibert Bonet, Ona and Armentano-Oller, Carme and Gonzalez-Agirre, Aitor and Melero, Maite and Villegas, Marta", booktitle = "Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021", month = aug, year = "2021", address = "Online", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2021.findings-acl.437", doi = "10.18653/v1/2021.findings-acl.437", pages = "4933--4946", } ``` [DOI](https://doi.org/10.5281/zenodo.4529183) ### Contributions [N/A]
7,566
[ [ -0.02679443359375, -0.033477783203125, 0.017669677734375, 0.0286102294921875, -0.032623291015625, 0.00032067298889160156, -0.0277862548828125, -0.0345458984375, 0.04730224609375, 0.0347900390625, -0.0203857421875, -0.06329345703125, -0.041748046875, 0.013328...
vasudevgupta/natural-questions-validation
2021-05-04T18:25:07.000Z
[ "region:us" ]
vasudevgupta
null
null
0
10
2022-03-02T23:29:22
Obtained using the following code: ```python from datasets import load_dataset dataset = load_dataset("natural_questions", split="validation") dataset.save_to_disk("natural-questions-validation") ```
197
[ [ -0.042022705078125, -0.06024169921875, -0.0006384849548339844, 0.0160064697265625, 0.0058135986328125, -0.01352691650390625, -0.0007157325744628906, 0.0149688720703125, 0.0153045654296875, 0.061370849609375, -0.0494384765625, -0.022186279296875, 0.01036834716796...
crystina-z/no-nonself-mrtydi-corpus
2022-03-10T22:08:19.000Z
[ "region:us" ]
crystina-z
null
null
0
10
2022-03-09T01:03:48
Entry not found
15
[ [ -0.021392822265625, -0.01494598388671875, 0.05718994140625, 0.028839111328125, -0.0350341796875, 0.046539306640625, 0.052490234375, 0.00507354736328125, 0.051361083984375, 0.01702880859375, -0.052093505859375, -0.01494598388671875, -0.06036376953125, 0.03790...
deepakvk/squad2_valdn
2022-04-08T09:46:56.000Z
[ "region:us" ]
deepakvk
null
null
0
10
2022-04-08T09:46:52
Entry not found
15
[ [ -0.02142333984375, -0.01495361328125, 0.05718994140625, 0.0288238525390625, -0.035064697265625, 0.046539306640625, 0.052520751953125, 0.005062103271484375, 0.0513916015625, 0.016998291015625, -0.052093505859375, -0.014984130859375, -0.060394287109375, 0.0379...
surdan/nerel_short
2022-10-25T10:06:49.000Z
[ "task_ids:named-entity-recognition", "multilinguality:monolingual", "language:ru", "region:us" ]
surdan
null
null
0
10
2022-04-11T06:34:28
--- language: ru multilinguality: monolingual task_ids: - named-entity-recognition --- ### About Dataset The dataset is based on the NEREL corpus. For more information about the original data, please visit this [source](https://github.com/dialogue-evaluation/RuNNE). An example of preparing the original data is illustrated in <Prepare_original_data.ipynb>. ### Additional info The dataset comprises 29 entity types; each of them can appear either as the beginning of an entity ("B-") or as an inner part ("I-"). Frequency for each entity: - I-AGE: 284 - B-AGE: 247 - B-AWARD: 285 - I-AWARD: 466 - B-CITY: 1080 - I-CITY: 39 - B-COUNTRY: 2378 - I-COUNTRY: 128 - B-CRIME: 214 - I-CRIME: 372 - B-DATE: 2701 - I-DATE: 5437 - B-DISEASE: 136 - I-DISEASE: 80 - B-DISTRICT: 98 - I-DISTRICT: 73 - B-EVENT: 3369 - I-EVENT: 2524 - B-FACILITY: 376 - I-FACILITY: 510 - B-FAMILY: 27 - I-FAMILY: 22 - B-IDEOLOGY: 271 - I-IDEOLOGY: 20 - B-LANGUAGE: 32 - I-LAW: 1196 - B-LAW: 297 - B-LOCATION: 242 - I-LOCATION: 139 - B-MONEY: 147 - I-MONEY: 361 - B-NATIONALITY: 437 - I-NATIONALITY: 41 - B-NUMBER: 1079 - I-NUMBER: 328 - B-ORDINAL: 485 - I-ORDINAL: 6 - B-ORGANIZATION: 3339 - I-ORGANIZATION: 3354 - B-PENALTY: 73 - I-PENALTY: 104 - B-PERCENT: 51 - I-PERCENT: 37 - B-PERSON: 5148 - I-PERSON: 3635 - I-PRODUCT: 48 - B-PRODUCT: 197 - B-PROFESSION: 3869 - I-PROFESSION: 2598 - B-RELIGION: 102 - I-RELIGION: 1 - B-STATE_OR_PROVINCE: 436 - I-STATE_OR_PROVINCE: 154 - B-TIME: 187 - I-TIME: 529 - B-WORK_OF_ART: 133 - I-WORK_OF_ART: 194 You can find the mapper for entity ids in the <id_to_label_map.pickle> file: ```python import pickle with open('id_to_label_map.pickle', 'rb') as f: mapper = pickle.load(f) ```
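Since every entity type comes in "B-"/"I-" variants, model predictions over this corpus can be decoded back into entity spans with a small BIO decoder. A sketch under the usual BIO conventions, not part of the dataset's own tooling (an I- tag that does not continue the open entity is treated as closing it):

```python
def bio_to_spans(tags):
    """Decode a BIO tag sequence into (entity_type, start, end) spans,
    where `end` is exclusive."""
    spans, start, etype = [], None, None
    for i, tag in enumerate(tags):
        if tag.startswith("B-"):
            if start is not None:            # close any open span
                spans.append((etype, start, i))
            start, etype = i, tag[2:]
        elif tag.startswith("I-") and start is not None and tag[2:] == etype:
            continue                          # span keeps growing
        else:                                 # "O" or a mismatched I- tag
            if start is not None:
                spans.append((etype, start, i))
            start, etype = None, None
    if start is not None:                     # span running to the end
        spans.append((etype, start, len(tags)))
    return spans

print(bio_to_spans(["B-CITY", "I-CITY", "O", "B-PERSON"]))
# [('CITY', 0, 2), ('PERSON', 3, 4)]
```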
1,639
[ [ -0.0273284912109375, -0.03631591796875, 0.0180816650390625, 0.00286102294921875, 0.0014133453369140625, -0.006256103515625, -0.0179901123046875, -0.006664276123046875, 0.0290069580078125, 0.059356689453125, -0.0302581787109375, -0.060150146484375, -0.04211425781...
pietrolesci/mpe
2022-04-25T09:00:18.000Z
[ "region:us" ]
pietrolesci
null
null
0
10
2022-04-22T12:38:29
## Overview Original dataset [here](https://github.com/aylai/MultiPremiseEntailment). ## Dataset curation Same data and splits as the original. The following columns have been added: - `premise`: concatenation of `premise1`, `premise2`, `premise3`, and `premise4` - `label`: encoded `gold_label` with the following mapping `{"entailment": 0, "neutral": 1, "contradiction": 2}` ## Code to create the dataset ```python import pandas as pd from datasets import Features, Value, ClassLabel, Dataset, DatasetDict from pathlib import Path # read data path = Path("<path to files>") datasets = {} for dataset_path in path.rglob("*.txt"): df = pd.read_csv(dataset_path, sep="\t") datasets[dataset_path.name.split("_")[1].split(".")[0]] = df ds = {} for name, df_ in datasets.items(): df = df_.copy() # fix parsing error for dev split if name == "dev": # fix parsing error df.loc[df["contradiction_judgments"] == "3 contradiction", "contradiction_judgments"] = 3 df.loc[df["gold_label"].isna(), "gold_label"] = "contradiction" # check no nan assert df.isna().sum().sum() == 0 # fix dtypes for col in ("entailment_judgments", "neutral_judgments", "contradiction_judgments"): df[col] = df[col].astype(int) # fix premise column for i in range(1, 4 + 1): df[f"premise{i}"] = df[f"premise{i}"].str.split("/", expand=True)[1] df["premise"] = df[[f"premise{i}" for i in range(1, 4 + 1)]].agg(" ".join, axis=1) # encode labels df["label"] = df["gold_label"].map({"entailment": 0, "neutral": 1, "contradiction": 2}) # cast to dataset features = Features({ "premise1": Value(dtype="string", id=None), "premise2": Value(dtype="string", id=None), "premise3": Value(dtype="string", id=None), "premise4": Value(dtype="string", id=None), "premise": Value(dtype="string", id=None), "hypothesis": Value(dtype="string", id=None), "entailment_judgments": Value(dtype="int32"), "neutral_judgments": Value(dtype="int32"), "contradiction_judgments": Value(dtype="int32"), "gold_label": Value(dtype="string"), "label": 
ClassLabel(num_classes=3, names=["entailment", "neutral", "contradiction"]), }) ds[name] = Dataset.from_pandas(df, features=features) # push to hub ds = DatasetDict(ds) ds.push_to_hub("mpe", token="<token>") # check overlap between splits from itertools import combinations for i, j in combinations(ds.keys(), 2): print( f"{i} - {j}: ", pd.merge( ds[i].to_pandas(), ds[j].to_pandas(), on=["premise", "hypothesis", "label"], how="inner", ).shape[0], ) #> dev - test: 0 #> dev - train: 0 #> test - train: 0 ```
2,823
[ [ -0.030487060546875, -0.045562744140625, 0.0162200927734375, 0.0229644775390625, -0.00859832763671875, -0.0081634521484375, -0.0085906982421875, 0.00214385986328125, 0.034637451171875, 0.041229248046875, -0.03369140625, -0.051788330078125, -0.04901123046875, ...
mathigatti/spanish_imdb_synopsis
2022-10-25T10:12:53.000Z
[ "task_categories:summarization", "task_categories:text-generation", "task_categories:text2text-generation", "annotations_creators:no-annotation", "multilinguality:monolingual", "language:es", "license:apache-2.0", "region:us" ]
mathigatti
null
null
1
10
2022-04-28T00:54:42
--- annotations_creators: - no-annotation language: - es license: - apache-2.0 multilinguality: - monolingual task_categories: - summarization - text-generation - text2text-generation --- # Dataset Card for Spanish IMDb Synopsis ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Fields](#data-fields) - [Dataset Creation](#dataset-creation) ## Dataset Description 4,969 movie synopses from IMDb in Spanish. ### Dataset Summary [N/A] ### Languages All descriptions are in Spanish; the other fields are a mix of Spanish and English. ## Dataset Structure [N/A] ### Data Fields - `description`: IMDb description for the movie (string), should be Spanish - `keywords`: IMDb keywords for the movie (string), mix of Spanish and English - `genre`: The genres of the movie (string), mix of Spanish and English - `year`: The year the movie was published (float) - `name`: The name of the movie (string), mix of Spanish and English - `director`: The name of the main director in the movie, can be empty (string) ## Dataset Creation [This Kaggle dataset](https://www.kaggle.com/datasets/komalkhetlani/imdb-dataset) was used as a starting point. Then IMDb was scraped to download the synopses of movies with more than 5,000 votes/reviews; those without a synopsis available in Spanish were discarded.
1,531
[ [ -0.037689208984375, -0.01458740234375, -0.0023441314697265625, 0.0274505615234375, -0.044525146484375, 0.02362060546875, 0.0017232894897460938, -0.021514892578125, 0.058502197265625, 0.0408935546875, -0.0804443359375, -0.053253173828125, -0.04620361328125, 0...
Ukhushn/home-depot
2022-10-25T10:20:53.000Z
[ "task_categories:sentence-similarity", "annotations_creators:no-annotation", "language_creators:found", "multilinguality:monolingual", "size_categories:10K<n<100K", "language:en", "license:afl-3.0", "region:us" ]
Ukhushn
null
null
0
10
2022-05-04T04:13:06
--- language: - en language_bcp47: - en-US license: - afl-3.0 annotations_creators: - no-annotation language_creators: - found multilinguality: - monolingual pretty_name: Ukhushn/home-depot size_categories: - 10K<n<100K source_datasets: [] task_categories: - sentence-similarity task_ids: [] --- # Dataset Card for Ukhushn/home-depot
335
[ [ -0.0158843994140625, 0.01325225830078125, -0.015838623046875, 0.006237030029296875, -0.045867919921875, 0.004245758056640625, 0.023162841796875, 0.00890350341796875, 0.0125885009765625, 0.038055419921875, -0.059661865234375, -0.050506591796875, -0.00350189208984...
SetFit/amazon_massive_intent_zh-CN
2022-06-20T14:31:57.000Z
[ "region:us" ]
SetFit
null
null
1
10
2022-05-06T09:12:07
Entry not found
15
[ [ -0.02142333984375, -0.014984130859375, 0.057220458984375, 0.0288238525390625, -0.03509521484375, 0.04656982421875, 0.052520751953125, 0.00506591796875, 0.0513916015625, 0.016998291015625, -0.052093505859375, -0.014984130859375, -0.060455322265625, 0.03793334...
laion/laion-high-resolution
2022-05-07T12:11:38.000Z
[ "license:cc-by-4.0", "region:us" ]
laion
null
null
44
10
2022-05-07T11:02:09
--- license: cc-by-4.0 --- Laion high resolution is a >= 1024x1024 subset of laion5B. It has 170M samples. A good use case is to train a super-resolution model. Refer to the [img2dataset guide](https://github.com/rom1504/img2dataset/blob/main/dataset_examples/laion-high-resolution.md) for downloading
298
[ [ -0.059814453125, -0.0030269622802734375, 0.01025390625, 0.01314544677734375, -0.0269012451171875, -0.01380157470703125, 0.0161285400390625, -0.0243682861328125, 0.02838134765625, 0.04742431640625, -0.0213623046875, -0.0189971923828125, -0.031494140625, -0.01...
strombergnlp/rumoureval_2019
2022-10-25T21:43:58.000Z
[ "task_categories:text-classification", "task_ids:fact-checking", "annotations_creators:crowdsourced", "language_creators:found", "multilinguality:monolingual", "size_categories:10K<n<100K", "language:en", "license:cc-by-4.0", "stance-detection", "arxiv:1809.06683", "region:us" ]
strombergnlp
Stance prediction task in English. The goal is to predict whether a given reply to a claim either supports, denies, questions, or simply comments on the claim. Ran as a SemEval task in 2019.
@inproceedings{gorrell-etal-2019-semeval, title = "{S}em{E}val-2019 Task 7: {R}umour{E}val, Determining Rumour Veracity and Support for Rumours", author = "Gorrell, Genevieve and Kochkina, Elena and Liakata, Maria and Aker, Ahmet and Zubiaga, Arkaitz and Bontcheva, Kalina and Derczynski, Leon", booktitle = "Proceedings of the 13th International Workshop on Semantic Evaluation", month = jun, year = "2019", address = "Minneapolis, Minnesota, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/S19-2147", doi = "10.18653/v1/S19-2147", pages = "845--854", }
2
10
2022-05-12T09:54:08
--- annotations_creators: - crowdsourced language_creators: - found language: - en license: - cc-by-4.0 multilinguality: - monolingual size_categories: - 10K<n<100K source_datasets: [] task_categories: - text-classification task_ids: - fact-checking pretty_name: RumourEval 2019 tags: - stance-detection --- # Dataset Card for "rumoureval_2019" ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [https://competitions.codalab.org/competitions/19938](https://competitions.codalab.org/competitions/19938) - **Repository:** [https://figshare.com/articles/dataset/RumourEval_2019_data/8845580](https://figshare.com/articles/dataset/RumourEval_2019_data/8845580) - **Paper:** [https://aclanthology.org/S19-2147/](https://aclanthology.org/S19-2147/), [https://arxiv.org/abs/1809.06683](https://arxiv.org/abs/1809.06683) - **Point of Contact:** [Leon Derczynski](https://github.com/leondz) - **Size of downloaded dataset files:** - **Size of the generated dataset:** - **Total amount of disk used:** 
### Dataset Summary Stance prediction task in English. The goal is to predict whether a given reply to a claim either supports, denies, questions, or simply comments on the claim. Ran as a SemEval task in 2019. ### Supported Tasks and Leaderboards * SemEval 2019 Task 7 ### Languages English of various origins, bcp47: `en` ## Dataset Structure ### Data Instances #### rumoureval_2019 An example of 'train' looks as follows. ``` { 'id': '0', 'source_text': 'Appalled by the attack on Charlie Hebdo in Paris, 10 - probably journalists - now confirmed dead. An attack on free speech everywhere.', 'reply_text': '@m33ryg @tnewtondunn @mehdirhasan Of course it is free speech, that\'s the definition of "free speech" to openly make comments or draw a pic!', 'label': 3 } ``` ### Data Fields - `id`: a `string` feature. - `source_text`: a `string` expressing a claim/topic. - `reply_text`: a `string` to be classified for its stance towards the source. - `label`: a class label representing the stance the text expresses towards the target. Full tagset with indices: ``` 0: "support", 1: "deny", 2: "query", 3: "comment" ``` ### Data Splits | name |instances| |---------|----:| |train|7 005| |dev|2 425| |test|2 945| ## Dataset Creation ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? Twitter users ### Annotations #### Annotation process Detailed in [Analysing How People Orient to and Spread Rumours in Social Media by Looking at Conversational Threads](https://journals.plos.org/plosone/article/authors?id=10.1371/journal.pone.0150989) #### Who are the annotators? 
### Personal and Sensitive Information ## Considerations for Using the Data ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations ## Additional Information ### Dataset Curators The dataset is curated by the paper's authors. ### Licensing Information The authors distribute this data under Creative Commons attribution license, CC-BY 4.0. ### Citation Information ``` @inproceedings{gorrell-etal-2019-semeval, title = "{S}em{E}val-2019 Task 7: {R}umour{E}val, Determining Rumour Veracity and Support for Rumours", author = "Gorrell, Genevieve and Kochkina, Elena and Liakata, Maria and Aker, Ahmet and Zubiaga, Arkaitz and Bontcheva, Kalina and Derczynski, Leon", booktitle = "Proceedings of the 13th International Workshop on Semantic Evaluation", month = jun, year = "2019", address = "Minneapolis, Minnesota, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/S19-2147", doi = "10.18653/v1/S19-2147", pages = "845--854", } ``` ### Contributions Author-added dataset [@leondz](https://github.com/leondz)
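The integer `label` field in the card above can be decoded with a small lookup; a minimal sketch (the helper name is ours, and the example record is abbreviated from the card):

```python
# Stance tagset copied from the card's Data Fields section.
STANCE_LABELS = ["support", "deny", "query", "comment"]

def decode_stance(example: dict) -> str:
    """Return the stance name for an example's integer label."""
    return STANCE_LABELS[example["label"]]

# Abbreviated version of the 'train' example shown in the card.
example = {
    "id": "0",
    "source_text": "Appalled by the attack on Charlie Hebdo in Paris ...",
    "reply_text": "Of course it is free speech ...",
    "label": 3,
}
print(decode_stance(example))  # comment
```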
5,230
[ [ -0.0245819091796875, -0.051025390625, 0.0165557861328125, 0.0136566162109375, -0.0289459228515625, 0.0005841255187988281, -0.0273895263671875, -0.045501708984375, 0.046112060546875, 0.035003662109375, -0.056427001953125, -0.06280517578125, -0.057464599609375, ...
launch/gov_report_qs
2022-11-09T01:58:19.000Z
[ "task_categories:summarization", "annotations_creators:expert-generated", "language_creators:expert-generated", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:launch/gov_report", "language:en", "license:cc-by-4.0", "region:us" ]
launch
GovReport-QS hierarchical question-summary generation dataset. There are two configs: - paragraph: paragraph-level annotated data - document: aggregated paragraph-level annotated data for the same document
@inproceedings{cao-wang-2022-hibrids, title = "{HIBRIDS}: Attention with Hierarchical Biases for Structure-aware Long Document Summarization", author = "Cao, Shuyang and Wang, Lu", booktitle = "Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = may, year = "2022", address = "Dublin, Ireland", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2022.acl-long.58", pages = "786--807", abstract = "Document structure is critical for efficient information consumption. However, it is challenging to encode it efficiently into the modern Transformer architecture. In this work, we present HIBRIDS, which injects Hierarchical Biases foR Incorporating Document Structure into attention score calculation. We further present a new task, hierarchical question-summary generation, for summarizing salient content in the source document into a hierarchy of questions and summaries, where each follow-up question inquires about the content of its parent question-summary pair. We also annotate a new dataset with 6,153 question-summary hierarchies labeled on government reports. Experiment results show that our model produces better question-summary hierarchies than comparisons on both hierarchy quality and content coverage, a finding also echoed by human judges. Additionally, our model improves the generation of long-form summaries from long government reports and Wikipedia articles, as measured by ROUGE scores.", }
1
10
2022-05-22T22:12:20
--- annotations_creators: - expert-generated language_creators: - expert-generated language: - en license: - cc-by-4.0 multilinguality: - monolingual size_categories: - 10K<n<100K source_datasets: - launch/gov_report task_categories: - summarization task_ids: [] pretty_name: GovReport-QS --- # Dataset Card for GovReport-QS ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [https://gov-report-data.github.io](https://gov-report-data.github.io) - **Repository:** [https://github.com/ShuyangCao/hibrids_summ](https://github.com/ShuyangCao/hibrids_summ) - **Paper:** [https://aclanthology.org/2022.acl-long.58/](https://aclanthology.org/2022.acl-long.58/) - **Leaderboard:** [Needs More Information] - **Point of Contact:** [Needs More Information] ### Dataset Summary Based on the GovReport dataset, GovReport-QS additionally includes annotated question-summary hierarchies for government reports. 
This hierarchy proactively highlights the document structure to further promote content engagement and comprehension. ### Supported Tasks and Leaderboards [More Information Needed] ### Languages English ## Dataset Structure Two configs are available: - **paragraph** (default): paragraph-level annotated data - **document**: aggregated paragraph-level annotated data for the same document To use different configs, set the `name` argument of the `load_dataset` function. ### Data Instances #### paragraph An example looks as follows. ``` { "doc_id": "GAO_123456", "summary_paragraph_index": 2, "document_sections": { "title": ["test document section 1 title", "test document section 1.1 title"], "paragraphs": ["test document\nsection 1 paragraphs", "test document\nsection 1.1 paragraphs"], "depth": [1, 2] }, "question_summary_pairs": { "question": ["What is the test question 1?", "What is the test question 1.1?"], "summary": ["This is the test answer 1.", "This is the test answer 1.1"], "parent_pair_index": [-1, 0] } } ``` #### document An example looks as follows. ``` { "doc_id": "GAO_123456", "document_sections": { "title": ["test document section 1 title", "test document section 1.1 title"], "paragraphs": ["test document\nsection 1 paragraphs", "test document\nsection 1.1 paragraphs"], "depth": [1, 2], "alignment": ["h0_title", "h0_full"] }, "question_summary_pairs": { "question": ["What is the test question 1?", "What is the test question 1.1?"], "summary": ["This is the test answer 1.", "This is the test answer 1.1"], "parent_pair_index": [-1, 0], "summary_paragraph_index": [2, 2] } } ``` ### Data Fields #### paragraph **Note that document_sections in this config are the sections aligned with the annotated summary paragraph.** - `doc_id`: a `string` feature. - `summary_paragraph_index`: an `int32` feature. - `document_sections`: a dictionary feature containing lists of (each element corresponds to a section): - `title`: a `string` feature. 
- `paragraphs`: a list of `string` features, with `\n` separating different paragraphs. - `depth`: an `int32` feature. - `question_summary_pairs`: a dictionary feature containing lists of (each element corresponds to a question-summary pair): - `question`: a `string` feature. - `summary`: a `string` feature. - `parent_pair_index`: an `int32` feature indicating which question-summary pair is the parent of the current pair. `-1` indicates that the current pair does not have a parent. #### document **Note that document_sections in this config are all the sections in the document.** - `doc_id`: a `string` feature. - `document_sections`: a dictionary feature containing lists of (each element corresponds to a section): - `title`: a `string` feature. - `paragraphs`: a list of `string` features, with `\n` separating different paragraphs. - `depth`: an `int32` feature. - `alignment`: a `string` feature. Whether the `full` section or the `title` of the section should be included when aligned with each annotated hierarchy. For example, `h0_full` indicates that the full section should be included for the hierarchy indexed `0`. - `question_summary_pairs`: a dictionary feature containing lists of: - `question`: a `string` feature. - `summary`: a `string` feature. - `parent_pair_index`: an `int32` feature indicating which question-summary pair is the parent of the current pair. `-1` indicates that the current pair does not have a parent. Note that the indices start from `0` for pairs with the same `summary_paragraph_index`. - `summary_paragraph_index`: an `int32` feature indicating which summary paragraph the question-summary pair is annotated for. ### Data Splits #### paragraph - train: 17519 - valid: 974 - test: 973 #### document - train: 1371 - valid: 171 - test: 172 ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? 
Editors of the Congressional Research Service and U.S. Government Accountability Office. ### Personal and Sensitive Information None. ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information CC BY 4.0 ### Citation Information ``` @inproceedings{cao-wang-2022-hibrids, title = "{HIBRIDS}: Attention with Hierarchical Biases for Structure-aware Long Document Summarization", author = "Cao, Shuyang and Wang, Lu", booktitle = "Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = may, year = "2022", address = "Dublin, Ireland", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2022.acl-long.58", pages = "786--807", abstract = "Document structure is critical for efficient information consumption. However, it is challenging to encode it efficiently into the modern Transformer architecture. In this work, we present HIBRIDS, which injects Hierarchical Biases foR Incorporating Document Structure into attention score calculation. We further present a new task, hierarchical question-summary generation, for summarizing salient content in the source document into a hierarchy of questions and summaries, where each follow-up question inquires about the content of its parent question-summary pair. We also annotate a new dataset with 6,153 question-summary hierarchies labeled on government reports. Experiment results show that our model produces better question-summary hierarchies than comparisons on both hierarchy quality and content coverage, a finding also echoed by human judges. Additionally, our model improves the generation of long-form summaries from long government reports and Wikipedia articles, as measured by ROUGE scores.", } ```
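The `parent_pair_index` field described in the card above flattens a question-summary hierarchy; a minimal sketch that rebuilds the tree from the card's paragraph-config example (the helper name is ours, not part of the dataset):

```python
def build_hierarchy(pairs: dict) -> list:
    """Turn flat question-summary pairs with parent_pair_index
    into a nested list of nodes; -1 marks a root pair."""
    nodes = [
        {"question": q, "summary": s, "children": []}
        for q, s in zip(pairs["question"], pairs["summary"])
    ]
    roots = []
    for node, parent in zip(nodes, pairs["parent_pair_index"]):
        if parent == -1:
            roots.append(node)
        else:
            nodes[parent]["children"].append(node)
    return roots

# Question-summary pairs from the card's paragraph-config example.
pairs = {
    "question": ["What is the test question 1?", "What is the test question 1.1?"],
    "summary": ["This is the test answer 1.", "This is the test answer 1.1"],
    "parent_pair_index": [-1, 0],
}
tree = build_hierarchy(pairs)
print(tree[0]["question"])                 # What is the test question 1?
print(tree[0]["children"][0]["question"])  # What is the test question 1.1?
```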
8,162
[ [ -0.04730224609375, -0.06451416015625, 0.01206207275390625, 0.0170745849609375, -0.00018322467803955078, -0.00905609130859375, -0.01343536376953125, -0.0138702392578125, 0.0242462158203125, 0.03131103515625, -0.05181884765625, -0.0595703125, -0.036590576171875, ...
DFKI-SLT/wikitext_linked
2022-07-04T06:09:56.000Z
[ "task_categories:fill-mask", "task_categories:token-classification", "task_categories:text-classification", "task_ids:masked-language-modeling", "task_ids:named-entity-recognition", "task_ids:part-of-speech", "task_ids:lemmatization", "task_ids:parsing", "task_ids:entity-linking-classification", "...
DFKI-SLT
The WikiText language modeling dataset is a collection of over 100 million tokens extracted from the set of verified Good and Featured articles on Wikipedia. Dependency Relations, POS, NER tags are marked with trankit and entities are linked with entity-fishing. The dataset is available under the Creative Commons Attribution-ShareAlike License.
@misc{merity2016pointer, title={Pointer Sentinel Mixture Models}, author={Stephen Merity and Caiming Xiong and James Bradbury and Richard Socher}, year={2016}, eprint={1609.07843}, archivePrefix={arXiv}, primaryClass={cs.CL} } @inproceedings{nguyen2021trankit, title={Trankit: A Light-Weight Transformer-based Toolkit for Multilingual Natural Language Processing}, author={Nguyen, Minh Van and Lai, Viet Dac and Veyseh, Amir Pouran Ben and Nguyen, Thien Huu}, booktitle="Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: System Demonstrations", year={2021} } @misc{entity-fishing, title = {entity-fishing}, howpublished = {\\url{https://github.com/kermitt2/entity-fishing}}, publisher = {GitHub}, year = {2016--2022}, archivePrefix = {swh}, eprint = {1:dir:cb0ba3379413db12b0018b7c3af8d0d2d864139c} }
5
10
2022-05-30T14:26:06
--- annotations_creators: - machine-generated language_creators: - machine-generated language: - en license: - cc-by-sa-4.0 multilinguality: - monolingual pretty_name: wikitext_linked size_categories: - 1M<n<10M source_datasets: - extended|wikitext task_categories: - fill-mask - token-classification - text-classification task_ids: - masked-language-modeling - named-entity-recognition - part-of-speech - lemmatization - parsing - entity-linking-classification --- # Dataset Card for wikitext_linked ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** - - **Repository:** [https://github.com/GabrielKP/svo/](https://github.com/GabrielKP/svo/) - **Paper:** - - **Leaderboard:** - - **Point of Contact:** [gabriel.kressin@dfki.de](mailto:gabriel.kressin@dfki.de) ### Dataset Summary The WikiText language modeling dataset is a collection of over 100 million tokens extracted from the set of verified Good and Featured articles on Wikipedia. 
Dependency relations, POS tags, and NER tags are annotated with [trankit](https://github.com/nlp-uoregon/trankit), and entities are linked with [entity-fishing](https://nerd.readthedocs.io/en/latest/index.html), which also contributes a second field of NER tags. The dataset is available under the Creative Commons Attribution-ShareAlike License. Compared to the preprocessed version of Penn Treebank (PTB), WikiText-2 is over 2 times larger and WikiText-103 is over 110 times larger. The WikiText dataset also features a far larger vocabulary and retains the original case, punctuation and numbers - all of which are removed in PTB. As it is composed of full articles, the dataset is well suited for models that can take advantage of long-term dependencies. ### Supported Tasks and Leaderboards - masked-language-modeling - named-entity-recognition - part-of-speech - lemmatization - parsing - entity-linking-classification ### Languages English. ## Dataset Structure ### Data Instances #### wikitext2 - **Size of downloaded dataset files:** 27.3 MB - **Size of the generated dataset:** 197.2 MB - **Total amount of disk used:** 197.2 MB An example of 'validation' looks as follows. ```json { 'text': 'It is closely related to the American lobster , H. 
americanus .', 'original_id': 3, 'tok_span': [[0, 0], [0, 2], [3, 5], [6, 13], [14, 21], [22, 24], [25, 28], [29, 37], [38, 45], [46, 47], [48, 50], [51, 61], [62, 63]], 'tok_upos': ['root', 'PRON', 'AUX', 'ADV', 'ADJ', 'ADP', 'DET', 'ADJ', 'NOUN', 'PUNCT', 'PROPN', 'PROPN', 'PUNCT'], 'tok_xpos': ['root', 'PRP', 'VBZ', 'RB', 'JJ', 'IN', 'DT', 'JJ', 'NN', ',', 'NNP', 'NNP', '.'], 'tok_dephead': [0, 4, 4, 4, 0, 8, 8, 8, 4, 8, 8, 10, 4], 'tok_deprel': ['root', 'nsubj', 'cop', 'advmod', 'root', 'case', 'det', 'amod', 'obl', 'punct', 'appos', 'flat', 'punct'], 'tok_lemma': [None, 'it', 'be', 'closely', 'related', 'to', 'the', 'american', 'lobster', ',', 'H.', 'americanus', '.'], 'tok_ner': [None, 'O', 'O', 'O', 'O', 'O', 'O', 'S-MISC', 'O', 'O', 'O', 'O', 'O'], 'ent_span': [[29, 45]], 'ent_wikipedia_external_ref': ['377397'], 'ent_ner': [None], 'ent_domains': [['Enterprise']], } ``` #### wikitext103 - **Size of downloaded dataset files:** 1.11 GB - **Size of the generated dataset:** 7.82 GB - **Total amount of disk used:** 7.82 GB An example of 'train' looks as follows. 
```json { 'text': 'Vision for the PlayStation Portable .', 'original_id': 3, 'tok_span': [[0, 0], [0, 6], [7, 10], [11, 14], [15, 26], [27, 35], [36, 37]], 'tok_upos': ['root', 'NOUN', 'ADP', 'DET', 'PROPN', 'PROPN', 'PUNCT'], 'tok_xpos': ['root', 'NN', 'IN', 'DT', 'NNP', 'NNP', '.'], 'tok_dephead': [0, 0, 5, 5, 5, 1, 1], 'tok_deprel': ['root', 'root', 'case', 'det', 'compound', 'nmod', 'punct'], 'tok_lemma': [None, 'vision', 'for', 'the', 'PlayStation', 'Portable', '.'], 'tok_ner': [None, 'O', 'O', 'O', 'B-MISC', 'E-MISC', 'O'], 'ent_span': [[15, 35]], 'ent_wikipedia_external_ref': ['619009'], 'ent_ner': [None], 'ent_domains': [['Electronics', 'Computer_Science']] } ``` Use following code to print the examples nicely: ```py def print_tokens_entities(example): text = example['text'] print( "Text:\n" f" {text}" "\nOrig-Id: " f"{example['original_id']}" "\nTokens:" ) iterator = enumerate(zip( example["tok_span"], example["tok_upos"], example["tok_xpos"], example["tok_ner"], example["tok_dephead"], example["tok_deprel"], example["tok_lemma"], )) print(f" Id | {'token':12} | {'upos':8} | {'xpos':8} | {'ner':8} | {'deph':4} | {'deprel':9} | {'lemma':12} | Id") print("---------------------------------------------------------------------------------------------------") for idx, (tok_span, upos, xpos, ner, dephead, deprel, lemma) in iterator: print(f" {idx:3} | {text[tok_span[0]:tok_span[1]]:12} | {upos:8} | {xpos:8} | {str(ner):8} | {str(dephead):4} | {deprel:9} | {str(lemma):12} | {idx}") iterator = list(enumerate(zip( example.get("ent_span", []), example.get("ent_wikipedia_external_ref", []), example.get("ent_ner", []), example.get("ent_domains", []), ))) if len(iterator) > 0: print("Entities") print(f" Id | {'entity':21} | {'wiki_ref':7} | {'ner':7} | domains") print("--------------------------------------------------------------------") for idx, ((start, end), wiki_ref, ent_ner, ent_domains) in iterator: print(f" {idx:3} | {text[start:end]:21} | {str(wiki_ref):7} | 
{str(ent_ner):7} | {ent_domains}") ``` ### Data Fields The data fields are the same among all splits. * text: string feature. * original_id: int feature. Mapping to index within original wikitext dataset. * tok_span: sequence of (int, int) tuples. Denotes token spans (start inclusive, end exclusive) within each sentence. **Note that each sentence includes an artificial root node to align dependency relations.** * tok_upos: string feature. [Universal Dependency POS tag](https://universaldependencies.org/) tags. Aligned with tok_span. Root node has tag "root". * tok_xpos: string feature. [XPOS POS tag](https://trankit.readthedocs.io/en/latest/overview.html#token-list). Aligned with tok_span. Root node has tag "root". * tok_dephead: int feature. [Universal Dependency Head Node](https://universaldependencies.org/introduction.html). Int refers to tokens in tok_span. Root node has head `0` (itself). * tok_deprel: [Universal Dependency Relation Description](https://universaldependencies.org/introduction.html). Refers to the relation between this token and its head token. Aligned with tok_span. Root node has dependency relation "root" to itself. * tok_lemma: string feature. Lemma of the token. Aligned with tok_span. * tok_ner: string feature. NER tag of the token. Marked in BIOES schema (e.g. S-MISC, B-LOC, ...). Aligned with tok_span. Root node has NER tag `None`. * ent_span: sequence of (int, int) tuples. Denotes entities found by entity-fishing (start inclusive, end exclusive). * ent_wikipedia_external_ref: string feature. External reference to the Wikipedia page. You can access the Wikipedia page via the url `https://en.wikipedia.org/wiki?curid=<ent_wikipedia_external_ref>`. All entities either have this field, or the `ent_ner` field, but not both. An empty field is denoted by the string `None`. Aligned with ent_span. * ent_ner: string feature. Denotes NER tags. An empty field is denoted by the string `None`. Aligned with ent_span. 
"ent_domains": sequence of string. Denotes domains of entity. Can be empty sequence. Aligned with ent_span. ### Data Splits | name | train |validation| test| |-------------------|------:|---------:|----:| |wikitext103 |4076530| 8607|10062| |wikitext2 | 82649| 8606|10062| ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [https://huggingface.co/datasets/wikitext](https://huggingface.co/datasets/wikitext) #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process 1. Started with `wikitext2-raw-v1` and `wikitext103-raw-v1` from [wikitext](https://huggingface.co/datasets/wikitext) 2. Ran datasets through Trankit. Marked all fields starting with `tok`. In this step, the texts have been split into sentences. To retain the original text sections you can accumulate over `original_id` (examples are in order). 3. Ran datasets through entity-fishing. Marked all fields starting with `ent`. #### Who are the annotators? Machines powered by [DFKI](https://www.dfki.de/web). ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information Creative Commons Attribution-ShareAlike 4.0 International (CC BY-SA 4.0) ### Citation Information Please cite the original creators of wikitext, and the great people developing trankit and entity-fishing. 
``` @misc{merity2016pointer, title={Pointer Sentinel Mixture Models}, author={Stephen Merity and Caiming Xiong and James Bradbury and Richard Socher}, year={2016}, eprint={1609.07843}, archivePrefix={arXiv}, primaryClass={cs.CL} } @inproceedings{nguyen2021trankit, title={Trankit: A Light-Weight Transformer-based Toolkit for Multilingual Natural Language Processing}, author={Nguyen, Minh Van and Lai, Viet Dac and Veyseh, Amir Pouran Ben and Nguyen, Thien Huu}, booktitle="Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: System Demonstrations", year={2021} } @misc{entity-fishing, title = {entity-fishing}, howpublished = {\\url{https://github.com/kermitt2/entity-fishing}}, publisher = {GitHub}, year = {2016--2022}, archivePrefix = {swh}, eprint = {1:dir:cb0ba3379413db12b0018b7c3af8d0d2d864139c} } ``` ### Contributions Thanks to [@GabrielKP](https://github.com/GabrielKP) for adding this dataset.
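The `ent_span` and `ent_wikipedia_external_ref` fields described in the card above can be combined into entity surface forms and Wikipedia links; a minimal sketch using the card's validation example (the helper name is ours):

```python
def linked_entities(example: dict) -> list:
    """Pair each entity span's surface text with its Wikipedia URL
    (ent_wikipedia_external_ref is a page curid, per the card)."""
    out = []
    for (start, end), ref in zip(
        example["ent_span"], example["ent_wikipedia_external_ref"]
    ):
        surface = example["text"][start:end]
        # Empty references are stored as the string "None" per the card.
        url = f"https://en.wikipedia.org/wiki?curid={ref}" if ref != "None" else None
        out.append((surface, url))
    return out

# Fields taken from the card's wikitext2 validation example.
example = {
    "text": "It is closely related to the American lobster , H. americanus .",
    "ent_span": [[29, 45]],
    "ent_wikipedia_external_ref": ["377397"],
}
print(linked_entities(example))
# [('American lobster', 'https://en.wikipedia.org/wiki?curid=377397')]
```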
11,463
[ [ -0.030029296875, -0.02960205078125, 0.00897216796875, 0.0153350830078125, -0.0186920166015625, -0.00021970272064208984, -0.021270751953125, -0.023956298828125, 0.03216552734375, 0.025238037109375, -0.037384033203125, -0.0609130859375, -0.036895751953125, 0.0...
yoshitomo-matsubara/srsd-feynman_easy
2023-10-11T02:05:39.000Z
[ "task_categories:tabular-regression", "annotations_creators:expert", "language_creators:expert-generated", "multilinguality:monolingual", "size_categories:100K<n<1M", "source_datasets:extended", "language:en", "license:cc-by-4.0", "arxiv:2206.10540", "doi:10.57967/hf/0763", "region:us" ]
yoshitomo-matsubara
null
null
0
10
2022-06-08T06:21:39
--- pretty_name: SRSD-Feynman (Easy) annotations_creators: - expert language_creators: - expert-generated language: - en license: - cc-by-4.0 multilinguality: - monolingual size_categories: - 100K<n<1M source_datasets: - extended task_categories: - tabular-regression task_ids: [] --- # Dataset Card for SRSD-Feynman (Easy set) ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** - **Repository:** https://github.com/omron-sinicx/srsd-benchmark - **Paper:** [Rethinking Symbolic Regression Datasets and Benchmarks for Scientific Discovery](https://arxiv.org/abs/2206.10540) - **Point of Contact:** [Yoshitaka Ushiku](mailto:yoshitaka.ushiku@sinicx.com) ### Dataset Summary Our SRSD (Feynman) datasets are designed to discuss the performance of Symbolic Regression for Scientific Discovery. 
We carefully reviewed the properties of each formula and its variables in [the Feynman Symbolic Regression Database](https://space.mit.edu/home/tegmark/aifeynman.html) to design reasonably realistic sampling ranges of values, so that our SRSD datasets can be used for evaluating the potential of SRSD, such as whether or not an SR method can (re)discover physical laws from such datasets. This is the ***Easy set*** of our SRSD-Feynman datasets, which consists of the following 30 different physics formulas: [![Click here to open a PDF file](problem_table.png)](https://huggingface.co/datasets/yoshitomo-matsubara/srsd-feynman_easy/resolve/main/problem_table.pdf) More details of these datasets are provided in [the paper and its supplementary material](https://arxiv.org/abs/2206.10540). ### Supported Tasks and Leaderboards Symbolic Regression ## Dataset Structure ### Data Instances Tabular data + Ground-truth equation per equation Tabular data: (num_samples, num_variables+1), where the last (rightmost) column indicates the output of the target function for the given variables. Note that the number of variables (`num_variables`) varies from equation to equation. Ground-truth equation: *pickled* symbolic representation (equation with symbols in sympy) of the target function. ### Data Fields For each dataset, we have 1. train split (txt file, whitespace as a delimiter) 2. val split (txt file, whitespace as a delimiter) 3. test split (txt file, whitespace as a delimiter) 4. true equation (pickle file for sympy object) ### Data Splits - train: 8,000 samples per equation - val: 1,000 samples per equation - test: 1,000 samples per equation ## Dataset Creation ### Curation Rationale We chose target equations based on [the Feynman Symbolic Regression Database](https://space.mit.edu/home/tegmark/aifeynman.html). ### Annotations #### Annotation process We significantly revised the sampling range for each variable from the annotations in the Feynman Symbolic Regression Database. 
First, we checked the properties of each variable and treated physical constants (e.g., light speed, gravitational constant) as constants. Next, variable ranges were defined to correspond to each typical physics experiment to confirm the physical phenomenon for each equation. In cases where a specific experiment is difficult to assume, ranges were set within which the corresponding physical phenomenon can be seen. Generally, the ranges are set to be sampled on log scales spanning about two orders of magnitude (10^2), in order to capture both large and small changes in value as the order changes. Variables such as angles, for which a linear distribution is expected, are set to be sampled uniformly. In addition, variables that take a specific sign were set to be sampled within that range. #### Who are the annotators? The main annotators are - Naoya Chiba (@nchiba) - Ryo Igarashi (@rigarash) ### Personal and Sensitive Information N/A ## Considerations for Using the Data ### Social Impact of Dataset We annotated this dataset, assuming typical physical experiments. The dataset will encourage research on symbolic regression for scientific discovery (SRSD) and help researchers discuss the potential of symbolic regression methods towards data-driven scientific discovery. ### Discussion of Biases Our choices of target equations are based on [the Feynman Symbolic Regression Database](https://space.mit.edu/home/tegmark/aifeynman.html), which is focused on the field of physics. ### Other Known Limitations Some variables used in our datasets indicate counts, which should be treated as integers. 
Due to the limited capacity of 32-bit integers, however, we treated some of these variables as floats, e.g., the number of molecules (10^{23} - 10^{25}). ## Additional Information ### Dataset Curators The main curators are - Naoya Chiba (@nchiba) - Ryo Igarashi (@rigarash) ### Licensing Information Creative Commons Attribution 4.0 ### Citation Information [[Preprint](https://arxiv.org/abs/2206.10540)] ```bibtex @article{matsubara2022rethinking, title={Rethinking Symbolic Regression Datasets and Benchmarks for Scientific Discovery}, author={Matsubara, Yoshitomo and Chiba, Naoya and Igarashi, Ryo and Ushiku, Yoshitaka}, journal={arXiv preprint arXiv:2206.10540}, year={2022} } ``` ### Contributions Authors: - Yoshitomo Matsubara (@yoshitomo-matsubara) - Naoya Chiba (@nchiba) - Ryo Igarashi (@rigarash) - Yoshitaka Ushiku (@yushiku)
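The split format described in the card (whitespace-delimited text files with the target value in the rightmost column, plus a separately pickled sympy expression for the ground truth) can be parsed with a few lines of stdlib Python. This is a minimal sketch using synthetic stand-in rows; the values and the pickle path are illustrative assumptions, not taken from the actual datasets.

```python
# Each SRSD split is a whitespace-delimited text file of shape
# (num_samples, num_variables + 1); the rightmost column holds the
# output of the target function. These rows are synthetic stand-ins,
# not real SRSD samples.
raw_split = (
    "1.0e-2 3.5e+1 3.5e-1\n"
    "2.0e-2 1.2e+2 2.4e+0\n"
    "5.0e-3 9.9e+0 4.95e-2\n"
)
rows = [[float(v) for v in line.split()] for line in raw_split.splitlines()]
X = [row[:-1] for row in rows]  # input variables
y = [row[-1] for row in rows]   # target outputs
print(len(X), len(X[0]), len(y))  # -> 3 2 3

# The ground-truth equation ships separately as a pickled sympy
# expression; unpickling it requires sympy to be installed:
#     with open("true_eq.pkl", "rb") as f:  # path is illustrative
#         eq = pickle.load(f)
```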
6,340
[ [ -0.00933074951171875, -0.034759521484375, 0.031768798828125, 0.017669677734375, -0.01313018798828125, -0.019012451171875, 0.0025882720947265625, -0.0196990966796875, 0.02593994140625, 0.0261688232421875, -0.058013916015625, -0.034881591796875, -0.046783447265625...
gsarti/magpie
2022-10-27T08:37:46.000Z
[ "task_categories:text-classification", "task_categories:text2text-generation", "task_categories:translation", "annotations_creators:expert-generated", "language_creators:expert-generated", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:en", "lice...
gsarti
The MAGPIE corpus is a large sense-annotated corpus of potentially idiomatic expressions (PIEs), based on the British National Corpus (BNC). Potentially idiomatic expressions are like idiomatic expressions, but the term also covers literal uses of idiomatic expressions, such as 'I leave work at the end of the day.' for the idiom 'at the end of the day'. This version of the dataset reflects the filtered subset used by Dankers et al. (2022) in their investigation of how PIEs are represented by NMT models. The authors use 37k samples annotated as fully figurative or literal, for 1482 idioms that contain nouns, numerals or adjectives that are colours (which they refer to as keywords). Because idioms show syntactic and morphological variability, the focus is mostly put on nouns. PIEs and their context are separated using the original corpus’s word-level annotations.
@inproceedings{haagsma-etal-2020-magpie, title = "{MAGPIE}: A Large Corpus of Potentially Idiomatic Expressions", author = "Haagsma, Hessel and Bos, Johan and Nissim, Malvina", booktitle = "Proceedings of the 12th Language Resources and Evaluation Conference", month = may, year = "2020", address = "Marseille, France", publisher = "European Language Resources Association", url = "https://aclanthology.org/2020.lrec-1.35", pages = "279--287", language = "English", ISBN = "979-10-95546-34-4", } @inproceedings{dankers-etal-2022-transformer, title = "Can Transformer be Too Compositional? Analysing Idiom Processing in Neural Machine Translation", author = "Dankers, Verna and Lucas, Christopher and Titov, Ivan", booktitle = "Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = may, year = "2022", address = "Dublin, Ireland", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2022.acl-long.252", doi = "10.18653/v1/2022.acl-long.252", pages = "3608--3626", }
1
10
2022-06-13T20:58:22
--- annotations_creators: - expert-generated language_creators: - expert-generated language: - en license: - cc-by-4.0 multilinguality: - monolingual size_categories: - 10K<n<100K source_datasets: - original task_categories: - text-classification - text2text-generation - translation task_ids: [] pretty_name: magpie tags: - idiomaticity-classification --- # Dataset Card for MAGPIE ## Table of Contents - [Dataset Card for MAGPIE](#dataset-card-for-magpie) - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) ## Dataset Description - **Original Repository:** [hslh/magpie-corpus](https://github.com/hslh/magpie-corpus) - **Other Repository:** [vernadankers/mt_idioms](https://github.com/vernadankers/mt_idioms) - **Original Paper:** [ACL Anthology](https://aclanthology.org/2020.lrec-1.35/) - **Other Paper:** [ACL Anthology](https://aclanthology.org/2022.acl-long.252/) - **Point of Contact:** [Hessel Haagsma, Verna Dankers](mailto:vernadankers@gmail.com) ### Dataset Summary The MAGPIE corpus ([Haagsma et al. 2020](https://aclanthology.org/2020.lrec-1.35/)) is a large sense-annotated corpus of potentially idiomatic expressions (PIEs), based on the British National Corpus (BNC). Potentially idiomatic expressions are like idiomatic expressions, but the term also covers literal uses of idiomatic expressions, such as 'I leave work at the end of the day.' for the idiom 'at the end of the day'. This version of the dataset reflects the filtered subset used by [Dankers et al. 
(2022)](https://aclanthology.org/2022.acl-long.252/) in their investigation of how PIEs are represented by NMT models. The authors use 37k samples annotated as fully figurative or literal, for 1482 idioms that contain nouns, numerals or adjectives that are colors (which they refer to as keywords). Because idioms show syntactic and morphological variability, the focus is mostly put on nouns. PIEs and their context are separated using the original corpus’s word-level annotations. ### Languages The language data in MAGPIE is in English (BCP-47 `en`). ## Dataset Structure ### Data Instances The `magpie` configuration contains sentences with annotations for the presence, usage and type of potentially idiomatic expressions. An example from the `train` split of the `magpie` config (default) is provided below. ```json { 'sentence': 'There seems to be a dearth of good small tools across the board.', 'annotation': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1], 'idiom': 'across the board', 'usage': 'figurative', 'variant': 'identical', 'pos_tags': ['ADV', 'VERB', 'PART', 'VERB', 'DET', 'NOUN', 'ADP', 'ADJ', 'ADJ', 'NOUN', 'ADP', 'DET', 'NOUN'] } ``` The text is provided as-is, without further preprocessing or tokenization. The fields are the following: - `sentence`: The sentence containing a PIE. - `annotation`: List of 0s and 1s of the same length as the whitespace-tokenized sentence, with 1s corresponding to the position of the idiomatic expression. - `idiom`: The idiom contained in the sentence in its base form. - `usage`: Either `figurative` or `literal`, depending on the usage of the PIE. - `variant`: `identical` if the PIE matches the base form of the idiom, otherwise specifies the variation. - `pos_tags`: List of POS tags associated with words in the sentence. 
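Because the `annotation` mask aligns one-to-one with the whitespace-tokenized `sentence`, the PIE tokens can be recovered with a simple zip. A minimal sketch using the example instance shown above (this helper is illustrative and not part of the dataset loader):

```python
# Recovering the PIE tokens from a MAGPIE instance's word-level
# annotation mask (illustrative helper, not part of the loader).
instance = {
    "sentence": "There seems to be a dearth of good small tools across the board.",
    "annotation": [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1],
    "idiom": "across the board",
}
tokens = instance["sentence"].split()
assert len(tokens) == len(instance["annotation"])  # mask aligns with tokens
pie_tokens = [tok for tok, flag in zip(tokens, instance["annotation"]) if flag]
print(pie_tokens)  # -> ['across', 'board.']
```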
### Data Splits | config| train| |----------:|-----:| |`magpie` | 44451 | ### Dataset Creation Please refer to the original article [MAGPIE: A Large Corpus of Potentially Idiomatic Expressions](https://aclanthology.org/2020.lrec-1.35) for additional information on dataset creation, and to the article [Can Transformer be Too Compositional? Analysing Idiom Processing in Neural Machine Translation](https://aclanthology.org/2022.acl-long.252) for further information on the filtering of selected idioms. ## Additional Information ### Dataset Curators The original authors are the curators of the original dataset. For problems or updates on this 🤗 Datasets version, please contact [gabriele.sarti996@gmail.com](mailto:gabriele.sarti996@gmail.com). ### Licensing Information The dataset is licensed under [Creative Commons 4.0 license (CC-BY-4.0)](https://creativecommons.org/licenses/by/4.0/) ### Citation Information Please cite the authors if you use this corpus in your work: ```bibtex @inproceedings{haagsma-etal-2020-magpie, title = "{MAGPIE}: A Large Corpus of Potentially Idiomatic Expressions", author = "Haagsma, Hessel and Bos, Johan and Nissim, Malvina", booktitle = "Proceedings of the 12th Language Resources and Evaluation Conference", month = may, year = "2020", address = "Marseille, France", publisher = "European Language Resources Association", url = "https://aclanthology.org/2020.lrec-1.35", pages = "279--287", language = "English", ISBN = "979-10-95546-34-4", } @inproceedings{dankers-etal-2022-transformer, title = "Can Transformer be Too Compositional? 
Analysing Idiom Processing in Neural Machine Translation", author = "Dankers, Verna and Lucas, Christopher and Titov, Ivan", booktitle = "Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = may, year = "2022", address = "Dublin, Ireland", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2022.acl-long.252", doi = "10.18653/v1/2022.acl-long.252", pages = "3608--3626", } ```
5,962
[ [ -0.03497314453125, -0.04449462890625, -0.000002086162567138672, 0.040740966796875, -0.042816162109375, 0.00507354736328125, -0.026702880859375, -0.025054931640625, 0.0239105224609375, 0.016143798828125, -0.035125732421875, -0.042449951171875, -0.049346923828125,...
codeparrot/codeparrot-train-v2-near-dedup
2022-06-16T18:08:48.000Z
[ "region:us" ]
codeparrot
null
null
3
10
2022-06-16T17:59:41
Entry not found
15
[ [ -0.021392822265625, -0.01494598388671875, 0.05718994140625, 0.028839111328125, -0.0350341796875, 0.046539306640625, 0.052490234375, 0.00507354736328125, 0.051361083984375, 0.01702880859375, -0.052093505859375, -0.01494598388671875, -0.06036376953125, 0.03790...
codeparrot/codecomplex
2022-10-25T09:30:16.000Z
[ "task_categories:text-generation", "task_ids:language-modeling", "language_creators:expert-generated", "multilinguality:monolingual", "size_categories:unknown", "language:code", "license:apache-2.0", "region:us" ]
codeparrot
null
null
10
10
2022-06-24T20:18:43
--- annotations_creators: [] language_creators: - expert-generated language: - code license: - apache-2.0 multilinguality: - monolingual size_categories: - unknown source_datasets: [] task_categories: - text-generation task_ids: - language-modeling pretty_name: CodeComplex --- # CodeComplex Dataset ## Dataset Description [CodeComplex](https://github.com/yonsei-toc/CodeComple) consists of 4,200 Java codes submitted to programming competitions by human programmers and their complexity labels annotated by a group of algorithm experts. ### How to use it You can load and iterate through the dataset with the following two lines of code: ```python from datasets import load_dataset ds = load_dataset("codeparrot/codecomplex", split="train") print(next(iter(ds))) ``` ## Data Structure ``` DatasetDict({ train: Dataset({ features: ['src', 'complexity', 'problem', 'from'], num_rows: 4517 }) }) ``` ### Data Instances ```python {'src': 'import java.io.*;\nimport java.math.BigInteger;\nimport java.util.InputMismatchException;...', 'complexity': 'quadratic', 'problem': '1179_B. Tolik and His Uncle', 'from': 'CODEFORCES'} ``` ### Data Fields * src: a string feature, representing the source code in Java. * complexity: a string feature, giving the program complexity. * problem: a string feature, representing the problem name. * from: a string feature, representing the source of the problem. The complexity field has 7 classes, with around 500 codes in each class. The seven classes are constant, linear, quadratic, cubic, log(n), nlog(n) and NP-hard. ### Data Splits The dataset only contains a train split. ## Dataset Creation The authors first collected problem and solution codes in Java from CodeForces, which were then inspected by experienced human annotators to label each code by its time complexity. After the labelling, the authors used a different group of programming experts to verify the class that the human annotators assigned to each example. 
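The per-class balance described under Data Fields can be checked with a short tally once the dataset is loaded. The records below are toy stand-ins mirroring the card's schema, since the real split comes from `load_dataset("codeparrot/codecomplex", split="train")`, which needs the `datasets` library and network access.

```python
from collections import Counter

# Tallying examples per complexity class. The records are toy stand-ins
# following the card's schema ('src', 'complexity', 'problem', 'from');
# with the real data, iterate over the loaded split instead.
records = [
    {"complexity": "linear"},
    {"complexity": "quadratic"},
    {"complexity": "quadratic"},
    {"complexity": "constant"},
]
counts = Counter(r["complexity"] for r in records)
print(counts.most_common(1))  # -> [('quadratic', 2)]
```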
## Citation Information ``` @article{JeonBHHK22, author = {Mingi Jeon and Seung-Yeop Baik and Joonghyuk Hahn and Yo-Sub Han and Sang-Ki Ko}, title = {{Deep Learning-based Code Complexity Prediction}}, year = {2022}, } ```
2,230
[ [ -0.038330078125, -0.02301025390625, 0.01345062255859375, 0.0194091796875, 0.00579071044921875, 0.020965576171875, -0.028228759765625, -0.0267791748046875, -0.012939453125, 0.0259552001953125, -0.0254364013671875, -0.040924072265625, -0.050445556640625, 0.018...
knkarthick/AMI
2022-10-24T09:16:01.000Z
[ "task_categories:summarization", "annotations_creators:expert-generated", "language_creators:expert-generated", "multilinguality:monolingual", "size_categories:10<n<1000", "source_datasets:original", "language:en", "license:cc-by-4.0", "region:us" ]
knkarthick
null
null
3
10
2022-06-28T10:30:41
--- annotations_creators: - expert-generated language_creators: - expert-generated language: - en license: - cc-by-4.0 multilinguality: - monolingual size_categories: - 10<n<1000 source_datasets: - original task_categories: - summarization task_ids: [] pretty_name: AMI Corpus --- # Dataset Card for AMI Corpus ## Dataset Description ### Links - **Homepage:** https://groups.inf.ed.ac.uk/ami/corpus/ - **Repository:** https://groups.inf.ed.ac.uk/ami/download/ - **Paper:** https://groups.inf.ed.ac.uk/ami/corpus/overview.shtml - **Point of Contact:** https://huggingface.co/knkarthick ### Dataset Summary The AMI Meeting Corpus is a multi-modal data set consisting of 100 hours of meeting recordings. For a gentle introduction to the corpus, see the corpus overview. To access the data, follow the directions given there. Around two-thirds of the data has been elicited using a scenario in which the participants play different roles in a design team, taking a design project from kick-off to completion over the course of a day. The rest consists of naturally occurring meetings in a range of domains. Detailed information can be found in the documentation section. #### Synchronised recording devices: close-talking and far-field microphones, individual and room-view video cameras, projection, a whiteboard, individual pens. #### Annotation: orthographic transcription, annotations for many different phenomena (dialog acts, head movement etc.). Although the AMI Meeting Corpus was created for the uses of a consortium that is developing meeting browsing technology, it is designed to be useful for a wide range of research areas. The downloads on this website include videos that are suitable for most purposes, but higher resolution videos are available for researchers engaged in video processing. All of the signals and transcription, and some of the annotations, have been released publicly under the Creative Commons Attribution 4.0 International Licence (CC BY 4.0). 
### Languages English ## Dataset Structure ### Data Instances AMI Corpus is a meeting summarization dataset, consisting of 279 dialogues split into train, test and validation. The first instance in the training set: {'id': '30', 'summary': "The project manager opens the meeting by stating that they will address functional design and then going over the agenda. The industrial designer gives his presentation, explaining how remote controls function and giving personal preference to a clear, simple design that upgrades the technology as well as incorporates the latest features in chip design. The interface specialist gives her presentation next, addressing the main purpose of a remote control. She pinpoints the main functions of on/off, channel-switching, numbers for choosing particular channels, and volume; and also suggests adding a menu button to change settings such as brightness on the screen. She gives preference to a remote that is small, easy to use, and follows some conventions. The group briefly discusses the possibility of using an LCD screen if cost allows it, since it is fancy and fashionable. The marketing expert presents, giving statistical information from a survey of 100 subjects. She prefers a remote that is sleek, stylish, sophisticated, cool, beautiful, functional, solar-powered, has long battery life, and has a locator. They discuss the target group, deciding it should be 15-35 year olds. After they talk about features they might include, the project manager closes the meeting by allocating tasks.", 'dialogue': "Speaker A: Cool. Do you wanna give me the little cable thing? Yeah. Cool. Ah, that's why it won't meet. Okay, cool. Yep, cool. Okay, functional requirements. Alright, yeah. It's working. Cool, okay. So what I have, wh where I've got my information from is a survey where the usability lab um observed remote control use with um a hundred subjects and then they gave them a questionnaire. 
Um so it was all about, you know, how people feel about the look and feel of the remote control, you know. What's the most annoying things about remote controls and um the possibility of speech recognition and L_C_D_ screens in remote control. Not that they actually gave me any answers on the L_C_D_ screens, so I should have taken that bit out, but anyway. Um okay, so. What they found is that people don't like how current remote controls are, so you know, definitely you should be looking at something quite different. Um seventy five percent of users find most remote controls ugly. Uh the other twenty five percent have no fashion sense. Uh eighty percent of users would spend more to get um you know, a nice looking remote control. Um current remote controls, they don't match the user behaviour well, as you'll see on the next slide. Um I dunno what zapping is, but Oh, right. But you have that little thing that comes up at the bottom and tells you what's on. Um okay, fifty percent of users say they only use ten percent of the buttons, so that's going back to what, you know, we were saying earlier about, you know, do you need all the buttons on the remote control, they just make it look ugly. Okay? Cool. Um so this is my little graph thing. Mm k Okay, well, I can send it to all of you. What it is is um it's cones, 'cause I thought they'd be more exciting. Um but ooh where's it go? Back. Oh. Oh yes, cool. Okay, I'm gonna stop playing with the little pointy thing. Um okay, so like what it shows is how much things are used relatively and what you can clearly see from that is the thing that's used most is the channel selection. What you can't see is volume selection, it's a little bit higher than all the others. 
Yeah, so what the graph shows is that, you know, power, channel selection and volume selection are important, and the rest of them, you know, nobody really uses and so that's the the numbers along the top represent their like um their importance, you know, so on a scale of one to ten, how important is that and, you know, channel selection and volume selection are absolutely essential, and the power, well it's not quite so essential, apparently, although I don't understand how it couldn't be, um and everything else, I think, you know, you can forget about having those buttons on the remote control, 'cause they're just not needed, and they're not used. Okay. This is the bit that the email messed up for me and that's what I was fiddling about with at the beginning of the thing. Okay, cool. So um okay, so this is what people find annoying about remote controls. Uh that they get lost, that the uh you know, they're not intuitive and that they're bad for repetitive strain injury. I think if you're watching enough T_V_ to get repetitive strain injury from um you know, watching T_V_, then that's the least of your problems, but you know, it's up there. Um that yeah. Okay, so um I mean the the R_S_I_ thing would be that, like when you have the computer keyboards and you keep your wrists up would be something that encourages you want something with an ergonomic t design that encourages good use of the remote control and you know, not straining your wrists watching T_V_. Yes. Okay, cool. Right, um sorry this is pink because I was copying and pasting the table, and I didn't have time to white it out again. Um okay, but that shows how people whether they would pay more for voice recognition software. 
So you can see from that that, you know, younger people to the age of thirty five are quite likely to pay quite a lot more f well quite are quite likely to pay more for voice recognition software, whereas as people get older, they're a bit more sceptical about it and they're less willing to to try it. Um so clearly voice recognition is something to think about, but um you know I d I do wonder how well that would work given that a T_V_, you know, tends to be people talking and um, you know, how are you going to stop it from just flipping channels whilst watching T_V_. Um okay? Cool. Um okay, so these are my personal preferences. So you have sleek, stylish, sophisticated, you know, so something that's, you know, a bit cool. Um you know, functional, so it's useful, but minimalist. Um there's a there's an important thing that, you know, people use when, you know, when you're filling up your home, you know, a lot of people fill up their home with bits of crap, basically, you know, and you've got all this stuff, and you're just like, what the hell is that, who is ever gonna use it? You know, so things should either be functional or beautiful or preferably both, so I think we need to aim for both. Um okay, then a long battery life, like you were talking about earlier and um, you know, I was thinking that solar power would be quite cool because, you know, your remote control just sits there, and you could just sit it in the sunshine and save the environment a bit. Um and then like a locator, so you know, kind of like you have for a mobile phone or not a mobile phone Yeah, that's it, you know. I know, it's weird. My flatmate and I were talking about this on the way into uni this morning and I was like I need to get one for everything. 
So yeah, so maybe something where you clap and then it beeps, something a kind of sound that you don't often hear on the T_V_, you know, 'cause you don't want your remote control beeping every five minutes, 'cause you you'd then deliberately lose it by throwing it out the window or something. So okay? Cool. That's me. Cat's. Ca. Yeah, I mean that's the thing is that it didn't say in the survey, you know, whether, you know, these are the people that will pay more for a more stylish remote control, but I'm assuming, you know, yes. Well, that's when you go to uni, isn't it? So, you know Yeah. Oh, I've unplugged it. Do you want me to Yeah. Seventy six point three percent. Yeah. Yeah, I kn I mean I know what you're saying about the fifteen to twenty five year olds, but I mean it has been proven that that people of that age group have a higher disposable income because they don't have like I mean, you know, if you're at university, you're paying your rent, but you don't have a mortgage, you don't have a life insurance policy, you don't normally have a car, yeah, so. You're still learning to drive actually, so that just costs more than a car, but yeah. Um so I mean like it is an age group to target, really, I think. No, I mean that's what, that's like fifteen Pounds? You know, I think Yeah, I d I don't know many people without a T_V_. We didn't have a T_V_ last year, and everyone thought we were off our heads, you know. Yeah, I d well we've we've got quite a d decent T_V_. Yeah. I think I think the fact that, you know, ninety one point two percent of fifteen to twenty five year olds are saying yes, I would pay more for a voice recognition remote control, does say quite a lot really. You know, so I mean that and the disposable income and I don't think it's something to ignore, you know. Is not a massive difference, you know. No, do totally. You do have it in your mobile phone though, don't you? 
Because you have like I mean every mobile phone now has like call this person and it calls them. I don't know. Yeah. S so y you'd maybe need a code word. Do you know what I mean? So like when you say change, except that's being said quite a lot on T_V_, so maybe like, you know, remote. I mean how often do people say remote on T_V_? Although I only watch Charmed, so really I wouldn't know but like so you'd just say remote five, you know, remote ten, remote one two nine. I don't think there's a lot of uh voice recognition remote controls. Yeah, that would be another way to do it. Yeah, but then the code word would be even more important, because I mean Sky advertise on every channel, don't they, you know, so then it would be you'd be watching Charmed, and then the Sky advert would come on and it would change to Sky. Yeah, yeah, and that would be really annoying. Yeah. Do you not think that defeats the object of having voice recognition on a remote control though? Yeah, you know, so you have to have the remote control. It's more like if you lost it and it's down the sofa sometime, you can yell at it and it'll just change it, you can look for it later, yeah. Yeah, yeah, I suppose nearer to you but a b like if you have surround sound then Yeah. Yeah, 'cause it's it's quite important that you don't lose the the bit to locate the remote control. Yeah, definitely, yeah. Oh, so y you want our um PowerPoint presentations in there, hey? Okay. There you go. But is everyone's called functional requirements? Okay, so that's good. That's me done. Okay, cool.\r\nSpeaker B: No. Mm. Um um wi on on a what? Oh project project documents, yeah, yeah, yeah, okay. Oh okay, yeah. Yes, I think so. Yeah, the last minute, yeah, yeah. Yeah. Um Okay. Hmm. Mm. Okay, yeah, afterwards, yeah, okay. Thanks. I think we need like some general discussion at the end probably. Yeah. Yeah, I think since since we were discussing some um design issues then I I I would like to continue okay, yeah. Thanks. 
Oh i Okay, I hope wait. Should it just There's just nothing. Oh right, right, right, um Okay. Nothin okay, something is coming up. No signal? Why? Oh. My my computer went blank now. Adjusting. But I don't see anything I don't see anything on my computer now. This is the problem, but Um. Uh now it's okay. No? No. Oh okay. Okay, that's fine, that's good. Okay, let's start from the beginning. So I'm going to speak about technical functions design uh just like some some first issues that came up. Um 'kay, so the method I was um adopting at this point, it's not um for the for the whole um period of the um all the project but it's just at th at this very moment. Um uh my method was um to look at um other um remote controls, uh so mostly just by searching on the web and to see what um functionality they used. And then um after having got this inspiration and having compared what I found on the web um just to think about what the de what the user really needs and what um what the user might desire as additional uh functionalities. And yeah, and then just to um put the main function of the remote control in in words. Um so the findings uh were um that the main function of the remote control is is just sending messages to the television set, so this quite straightforward. And uh w some of the main functions would be switching on, switching off, uh then the user would like to switch the channel um for example just m changing to the next channel to to flip through all all of the possible channels, or then mm uh the other possibility would be that um she might just want to choose one particular channel, so we would need the numbers. And and also the volume is very important. Um um I als okay. 'Kay. 
Um um among the findings I found that m m most of the curr mm presently available remote controls also include other mm functionalities um in their design, like operating a V_C_R_, but they don't seem to be able to deal with D_V_D_ players, but then there are surely there are many other functionali functions that could possibly be added to them, but according to the last minute update um actually um we do not want to have all this complicated functions added to our design. So my personal preferences would be uh to keep the mm the whole remote control small um just like the physical size. And then it must be easy to use, so it must follow some conventions um like whereabouts you find the on off button and maybe the colour tends to be red or something. Um then yeah, the must-have buttons would be on off and then the channel numbers and then um the one that allows us to go to the next or the previous channel, and then volume has to be there. But then um other functionalities um could be just uh there could be a menu button and you could change things on the screen then, um for example brightness and mm similar functions could be just um done through the menu. And yeah, the last question I had about whether we wanted to incorporate n uh more functionalities, the answer was already no because of the last minute update. So at the for the time being that's uh that's all. If you have questions Yeah, and also it's it's um other question is uh because there are so many different And there are so many different things that could possibly be included because besides video and D_V_D_ there are the mm um video C_D_s and whatever, so it might be problematic to to choose between all these possible things. Um well, I think the buttons are still mm kind of the most um easy for the user to use, I mean um what other options would you have? 
A little screen or something, but this would be really kind of I think a lot of learning for the user and and I mean the user just wants to get um get a result um quickly, not to spend time in like um giving several orders um I dunno. I think I th I would I would think the put the buttons, but if if you have other mm proposals um. Yeah. Yeah. Mm-hmm. Yep. Uh am I going in the right direction? No. Wait. Okay, here it comes. Okay, here you are. Um that's very good, very interesting. Mm-hmm. Yeah. Yeah, you share a television or something that yeah. It was seventy something, yeah, yeah. Yeah this this is not unaffordable, but the problem is whether people need it, whether they do have a T_V_ to use its full Yeah. Common, the students yeah, yeah. The s the stu yeah, and the remote control might not yeah, it might not even function with the old T_V_. Yeah, we're still yeah. Or w maybe we can just kind of uh uh Yeah, but at the same time I think maybe we can we can just decide to to have both of these groups as our target, because actually I mean they're all still re young people. Yeah. Yeah. Yeah. Yeah. An Yeah. Yeah. Yeah but uh um Yeah, yeah sure, yeah, yeah. Yeah. Yeah, w well now the v the voice recognition if if it works wonderfully w we could possibly do away with all buttons, but I think this is not really the right moment yet, because people are just so used to buttons and um, yeah it's it's kind of safer, so we we need both, so the voice recognition would be just an extra, it wouldn't really reduce the size of the remote. Yeah but m but on the other hand, remote control isn't as close to you you probably might just just uh speak into it and and the T_V_ would be already further away, so it might not pick up the other things coming from there. 
Yeah, but then the remote control I think I mean um the idea is kind of it's it's not that it's sitting there on on top of the television, because then you could already yell at the television and you wouldn't you you wouldn't need the remote control, so the remote control is still something you keep n near yourself. Yeah, yeah, yeah. No, but I I I was just defending the the fact why why we want to keep the remote control close to us, a and uh not to yell at it from the distance. Okay. Oh yeah, yeah. Okay, yeah, mm-hmm. The major ones, yeah. Mm-hmm. Mm-hmm. Yeah. Did you find it? It's just yeah, yeah. Oh so so we'll just put them i there, we we yeah, w we won't even okay. Yeah. Yeah. Uh something conceptual, yeah. Hmm. Sorry, but um the next meeting um are we going to have it um right after lunch or shall we prepare our To prepare, okay, yeah, that's good. Okay. Cool. Okay, see you.\r\nSpeaker C: Mm. You said uh targ target groups, what does that mean? Uh okay, 'kay. So are Okay. Alright. I can go first, yeah. Right. Um so f from the Right sure. Uh okay. So n uh with uh with regard to the uh working design of this uh uh remote control uh I've identified um a few basic uh components of the remote and uh se uh from the design, functional design perspective um w I c we can now uh know wha what exactly the components are and how how they work together with each other. So this is the method that uh I'll mostly be following in my um in my uh role. Um the identification of the components, uh and uh since since I'm dealing only with the technical aspects, I would need feedback from the marketing person uh and uh from the user interface person. Uh we'll then integrate this into the product design at a technical level and uh basically update and come up with a new design, so it's a cyclical process. Okay, so these were the basic findings from today. The last three bullets have been integrated from uh the last minute uh email. Uh I just quickly jotted them down. 
Um so basically uh the as I told you the identification of how the remote control works and what are the various parts to it uh and what are the different processes um and how the parts uh communicate with each other. Um okay, so e the mee email said that teletext is now outdated, so we need to do away with that functionality of the remote control. Um also uh the remote control should be used only for television, because incorporating other features um makes it more comp complex. And the reason why teletext is outdated because uh of internet and uh the availability of internet over television. How however, our our remote control would only be dealing uh with the the use for television, in order to keep things simple. Um also the management wants that um our design should be unique uh it so it should incorporate um colour and the slogan uh that our company um has it as its standard. Okay, so he he here is a functional overview of the remote control. Um there's basically an energy source at the heart uh which feeds into the chip and the user interface. The user interf interface communicates with the chip, so I'll basic go over to the Okay. So if uh if this is our energy source and this is a cell, uh it communicates uh it feeds energy into the into the chip, which basically finds out h uh how how to do everything. There is a user interface here. So whe when the user presses a button, it feeds into the chip and the chip then generates a response and takes the response to an infrared terminal, um which then so the output of the chip is an infrared bit code, which is then communicated to the remote site, which h has an infrared receiver. Um the there can be uh a bulb here or something to indicate whether the remote is on or communicating. Um so these are the essent so a all the functionality of the remote control, whatever new functions that we need to do, um make the chip more complicated uh and bigger, basically. Okay. 
Um so i in my personal preferences um I'm hoping that we can ke keep the design as simple and clear as possible. This would uh help us uh to upgrade our technology at a future point of time. And uh also if we can incorporate uh the latest features in our chip design, so that our um uh remote control does not become outdated soon and it's compatible with mot most uh televisions. That's about it. So anything that you would like to know or No, I don't have any idea about what each component costs. Um yeah. Anything else? Yeah. Certainly, yeah. So so tha yeah, we definitely need to operate within our constraints, but um unfortunately I I do not have any data, so uh I just identified the functional components for that. Yeah, okay. Yeah. Mm 'kay. I it'll take some time. Oh, there it is, yeah. It'll come up, it um uh no signal. Yeah yeah, it says something now, adjusting Okay. Oh, that's strange. Okay. And one more time. Mm. Sorry, cou could you go back for a second? Uh switching on off channel, uh volume, okay, that's great. So in the u user interface requirements uh uh uh we we have been able to identify what are the basic buttons that we do want. Um but um so so at this stage, uh how we go about implementing those button we will not identify or I mean in we can completely do away with buttons and uh have some kind of a fancy user interface or something like that. But uh is is there any uh uh any thoughts on that? Right. Yeah, and it'll make the costs yeah. Right. Uh I think the co costs will also play a big role when we come to know about them. So well we can probably wait until t we have more knowledge on that. Uh i if the if the costs allow, we can have like an L_C_D_ display and uh with um because we do want something fancy and fashionable as well. So yeah? Cool. try to press oh, okay, yep. Mm. Right. Mm-hmm. Mm. Right. Mm-hmm. Hmm. Right. Mm. Mm. Mm. Some kind of a ring, some Right. Hmm. Okay, that's great, thanks. Mm. 
I think one of the very interesting things that came up in um uh Ka Kate Cat Cat's uh presentation was um uh this this issue of uh uh like voice recognition being more popular with uh younger people. So if we need to have a target group um then uh I think as far as the m motto of our company is concerned, if we want to have something sleek and uh you know, good looking uh we are better off targeting a younger audience then um you know, people who are comparatively elderly. Um. Right. Right. Bu but but the survey did say that f things like voice recognition are more popular with them, so if you want to put in something stylish, then uh th it'll certainly be more popular with this i ye with the younger people as compared to older people, yeah. Right, and Right. Mm. Right. But uh still, if if you can go back to that slide and uh, how popular was it? Oh, oh, okay. That's alright, if you can just look it up on your computer, wh uh um people between twenty five to thirty five, uh how popular was so it was sti still still quite popular amongst them. So even they are seventy six percent, is that high amount? Alright. Yeah. So you're more likely to b Yeah. Yeah. Mm. Bu but even even in the case of twenty five to thirty five it's quite popular, right? So mm uh are are are Mm. Mm. Um I was having a a general outlook on um m most like sophisticated features, but voice recognition itself I'm not very sure about, because one of the p uh things that Cat pointed out was uh uh how do we go about implementing it? Uh and uh Yeah. But how frequently do we use it anyway and um uh h ho how good is it, you know uh voice recognition softwares are still quite uh Yeah. Right. Right. Okay. O Right. Mm. Right. Yeah. Okay, so it seems like a feasible thing to implement uh for for a limited yeah. Mm. W What uh Mm. What wh uh what I was thinking is that there is this uh separation between what the channels are on T_V_ and how they are numbered on the remote control. 
If we can do with away with that, our product can be really popular uh in the sense that uh a person can say, I want to watch uh I_T_V_ one instead of saying that I want to go onto channel number forty five. Yeah, so if uh if something like that can be incorporated, some kind of Mm-hmm. Alright. Yeah, that's Right. Mm. Mm yeah and it might become very difficult from a distance for the television to understand what you're saying because of the noise factor for the remote control being cl I mean it'll it'll mm. Yeah. Mm. So uh wh another thing uh that can be used is that uh there can be a beeper button on the T_V_, so you can go and press that button and um and the remote control, wherever it is, it'll beep, so we we can probably come to know where it is. Right, yeah, yeah, yeah. Alright, yeah. Right. Okay. So where exactly is this i Ah, okay. Yeah. Yeah, yeah in that one, right yeah. No. Right. I guess I'll find out. Wha what was it again that I was supposed to look into? Con components, oh.\r\nSpeaker D: All hooked up. Okay, so now we are here at the functional design meeting. Um hopefully this meeting I'll be doing a little bit less talking than I did last time 'cause this is when you get to show us what you've been doing individually. The agenda for the meeting, I put it in the sh shared documents folder. I don't know if that meant that you could see it or not. Did anyone? No. Oh well. Um I'll try and do that for the next meeting as well so if you check in there, there's a shared project documents folder. Um and it should be in there. Project documents, yeah. So I'll put it in there. Is it best if I send you an email maybe, to let you know it's there? Yep. I'll do that next time. Um I'll act as secretary for this meeting and just take minutes as we go through, and then I'll send them to you after the meeting. The main the main focus of this meeting is your presentations that you've been preparing during the time, so we'll go through each of you one by one. 
Um then we need to briefly discuss the new project requirements that were sent to us. I just sent at the last minute, I'm sorry about that, but we can see how that affects what you were you were doing. Um and then we need to, by the end of the meeting come to some kind of decision on who our target group's going to be and what the functions of the remote control that's the the main goal is to come up with those two things, target group and functions of the remote control. And we've got forty minutes to do that in. So I would say yeah? As uh who it is that we're going to be trying to sell this thing to, yeah. So we need to yeah, we need to have a fairly defined group that that we want to focus on and then look at the functions um of the dem remote control itself. So with that I think it's best if I hand over to you. Does anyone have a preference for going first? You wanna go first? Okay, so we need to unplug my laptop and plug in yours. I assume we just pull it out? Just before you start, to make it easier, would you three mind emailing me your presentations? Once we you don't have to do it now but when once you go back, just so that I don't have to scribble everything down. Hmm. Mm-hmm. Okay. Do you have any um i idea about costs at this point? Br Okay. 'Cause that's something to consider, I guess, if we're if we're using more advanced technology, it might increase the price. Yeah. That's fine. Are there any more questions, or shall we just skip straight to the next one and then we can discuss all of them together at the end? Yeah, I think that will do. Okay, so do you want to Yes, shall shall we pull this up? I think that has to come out of there. Yeah. Yeah, I thought those last minute things, they're gonna hit you the worst. It ta takes a little Oh, and have you you need to then also press on yours, function F_ eight, so the blue function key at the bottom and F_ eight. Now it's coming, computer no signal. Maybe again? Okay, adjusting. There we go, there we go. 
Oh, if you press if you press function and that again there's there's usually three modes, one where it's only here, one where it's only there, and one where it's both. Okay, so one more time. Should yeah just wait for a moment, adjusting. Okay. Mm-hmm. Mm-hmm. Mm-hmm. Mm-hmm. Yeah. If I mean that was the the directive that came through from management, but if we had a a decent case for that we really think it's important to include video and D_V_D_, I could get back to them and see. It's w it's just whether it's worth arguing about. Mm-hmm. Yeah. Mm-hmm. Okay. Are there any questions for clarification of Maarika before we go on to the next one? Mm-hmm. Mm. Mm. Mm-hmm. Sure, we can discuss that maybe after the next one. Do you want to yeah. Oh, I'm getting hungry. You set? Uh we need to do the function key thing so that it comes up on here. Hello. Is it plugged in prop it's working? Okay. Excellent. It's um switching between channels, sort of randomly going through. Mm. Ooh, that's a bit difficult to see. If you explain it to us it'll be fine. Yeah. I liked the, I liked the litt ooh come back. No. Okay. Mm-hmm, that's the next one along, yeah? Mm-hmm. Mm-hmm. Mm-hmm. Mm-hmm. The remote control. Mm-hmm. That's alright. Mm. Keys and things like that, yeah. Whistle and it screams at you, yeah. Mm-hmm. That's you, excellent. Um. I'm just gonna tick yes. So, we've got about ten, fifteen minutes to discuss Mm-hmm. Yeah. Mm-hmm. Yeah. Then again I guess the th where it was most popular was the fifteen to twenty five bracket and the I don't know how often they're buying televisions. Yeah, but you don't have much money, generally. I would've thought it's it's more that twenty five to thirty five, when people are really moving out and they've got their first job and they want their nice toys and O oh it's on sorry, we unplugged it. Here, let me Yeah. Mm-hmm. Yeah. Yeah, they've got no commitments and usually not a car and all of those things. Kids. Yeah. 
Yeah, and if we're if we're talking twenty five Euros as a price, that's not unaffordable, even for young people. Yeah. Yeah. But do they But the T_V_s are often kind of someone's old T_V_ that's blah blah and be a bit strange to have a fancy rome remote. Mm. Yeah. Yeah. Yeah. Yeah. Yeah, if we ta if we take fifteen to thirty five, but that then does imply that we should try and incorporate voice recognition. Is that gonna have a an implication for the technical specs? Mm-hmm. Yeah. Yeah. With um but with a T_V_ remote it's gonna be quite limited if we're t saying the main things people want to do is on off channel five, louder, tha that should be relatively simple. Mm. Yeah. Mm-hmm. Yeah, but maybe if you wanna look into that just to just to check. Um, so if we go for the the fifteen to thirty five age group and then of course we're going to get th anyone who's older than thirty five who wants to look young and hip and trendy and has the money, then they'll they'll still go for the same advertising. Yeah, I think we need both. Yeah. Mm. Uh-huh. Uh-huh. So that if that was in the the voice recognition, that would be great. Yeah. Yeah. Watch Sky and yeah. Mm-hmm. But that's definitely a possibility. Yeah. So that you can yell at it, yeah. Yeah. Alright. Mm. Yeah. Yeah. Yeah. Yeah. Mm-hmm. That's but then if you're buying the remote separately, but y you could have something, but i if it was something that you could like stick onto the T_V_ or something, some like a two p if you bought it in a two part pack, so one part attaches to the T_V_. The l Well that's right, but it solves the problem of having different noises. Yeah. Okay, I think we're gonna have to wrap this up um. 
But if we go away with that that kind of general um specification in mind that we're looking at fifteen to thirty five year olds, we want it to look simple, but still have the buttons so it's easy to use, but only those key buttons, the major buttons and then one sort of menu one, and then voice recognition included as an option um but that obviously needs a little bit more working out as to whether it's really feasible and some of those problems we were mentioning um. What we have to do now is to go back to our little places, complete our questionnaire and some sort of summarisation, which y you'll get immediately by email. Send me your presentations so that I can use them to make the minutes, and then we've got a lunch break and after lunch we go back to our own little stations and have thirty minutes more work. Um I'll put the minutes in that project documents folder, but I'll send you an email when I do it, so that you know. It should be on your desktop, so on the yeah. So I'll put it I'll put them there as soon as I've written them. Yeah, and email them round. Yeah, that would be great. Oh yeah, put them in there. Yeah, then you don't have to email them. No, they're all called something slightly different. Technical requirements and something something, yeah. So, if you put them in there, we'll all be able to see them and refer to them if we need to. Um as to where we're going from here, you're going to look at the components concept. Yeah? Whatever that means. Yeah. You'll be looking you'll be looking at the user interface concept, on something conceptual and you're watching trends to see how we go and surely voice recognition'll fall off the map or something that um we'll keep keep our options op hmm? Components, yeah. No, we have we have after lunch we have thirty minutes to ourselves to prepare, so that's fine, w before lunch we just have to complete the questionnaire and some sort of summary. Okay? Right on time. 
Okay, so you can I guess we'll see you for lunch in a sec?"} ### Data Fields - dialogue: text of dialogue. - summary: human written summary of the dialogue. - id: unique file id of an example. ### Data Splits - train: 209 - val: 42 - test: 28 ## Dataset Creation ### Curation Rationale Refer Above. ### Who are the source language producers? linguists ### Who are the annotators? language experts ## Licensing Information non-commercial licence: cc-by-4.0 ## Citation Information ``` Carletta, J. (2006) Announcing the AMI Meeting Corpus. The ELRA Newsletter 11(1), January-March, p. 3-5 ``` ## Contributions Thanks to Carletta for adding this dataset.
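The record structure described under "Data Fields" above (a transcript, a human-written summary, and a file id) can be sketched with stdlib-only code. This is a minimal illustration, not the dataset's actual loader: the id and summary values below are invented, and jsonlines storage is an assumption.

```python
import json

# One hypothetical record following the card's Data Fields section.
# The id and summary are invented for illustration only.
record = {
    "id": "ES2002a",
    "dialogue": "Speaker D: Okay, so now we are here at the functional design meeting. ...",
    "summary": "The team reviewed individual presentations and agreed on a "
               "target group and the core remote-control functions.",
}

# Assuming one JSON object per line (jsonlines), a record round-trips cleanly.
line = json.dumps(record)
restored = json.loads(line)

# The documented split sizes: train 209, val 42, test 28.
splits = {"train": 209, "val": 42, "test": 28}
total = sum(splits.values())
print(total)  # 279 examples in all
```

The split counts come straight from the card; everything else in the record is placeholder content standing in for a full transcript.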
36,543
[ [ -0.06512451171875, -0.029541015625, 0.0139617919921875, -0.004299163818359375, -0.032623291015625, 0.005695343017578125, 0.0006270408630371094, -0.040283203125, 0.035247802734375, 0.01129150390625, -0.049530029296875, -0.0231170654296875, -0.018707275390625, ...
MicPie/unpredictable_mmo-champion-com
2022-08-04T20:09:49.000Z
[ "task_categories:multiple-choice", "task_categories:question-answering", "task_categories:zero-shot-classification", "task_categories:text2text-generation", "task_categories:table-question-answering", "task_categories:text-generation", "task_categories:text-classification", "task_categories:tabular-cl...
MicPie
The UnpredicTable dataset consists of web tables formatted as few-shot tasks for fine-tuning language models to improve their few-shot performance. For more details please see the accompanying dataset card.
@misc{chan2022few, author = {Chan, Jun Shern and Pieler, Michael and Jao, Jonathan and Scheurer, Jérémy and Perez, Ethan}, title = {Few-shot Adaptation Works with UnpredicTable Data}, publisher={arXiv}, year = {2022}, url = {https://arxiv.org/abs/2208.01009} }
0
10
2022-07-03T08:15:38
--- annotations_creators: - no-annotation language_creators: - found language: - en license: - apache-2.0 multilinguality: - monolingual pretty_name: UnpredicTable-mmo-champion-com size_categories: - 100K<n<1M source_datasets: [] task_categories: - multiple-choice - question-answering - zero-shot-classification - text2text-generation - table-question-answering - text-generation - text-classification - tabular-classification task_ids: - multiple-choice-qa - extractive-qa - open-domain-qa - closed-domain-qa - closed-book-qa - open-book-qa - language-modeling - multi-class-classification - natural-language-inference - topic-classification - multi-label-classification - tabular-multi-class-classification - tabular-multi-label-classification --- # Dataset Card for "UnpredicTable-mmo-champion-com" - Dataset of Few-shot Tasks from Tables ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-instances) - [Data Splits](#data-instances) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) ## Dataset Description - **Homepage:** https://ethanperez.net/unpredictable - **Repository:** https://github.com/JunShern/few-shot-adaptation - **Paper:** Few-shot Adaptation Works 
with UnpredicTable Data - **Point of Contact:** junshern@nyu.edu, perez@nyu.edu ### Dataset Summary The UnpredicTable dataset consists of web tables formatted as few-shot tasks for fine-tuning language models to improve their few-shot performance. There are several dataset versions available: * [UnpredicTable-full](https://huggingface.co/datasets/MicPie/unpredictable_full): Starting from the initial WTC corpus of 50M tables, we apply our tables-to-tasks procedure to produce our resulting dataset, [UnpredicTable-full](https://huggingface.co/datasets/MicPie/unpredictable_full), which comprises 413,299 tasks from 23,744 unique websites. * [UnpredicTable-unique](https://huggingface.co/datasets/MicPie/unpredictable_unique): This is the same as [UnpredicTable-full](https://huggingface.co/datasets/MicPie/unpredictable_full) but filtered to have a maximum of one task per website. [UnpredicTable-unique](https://huggingface.co/datasets/MicPie/unpredictable_unique) contains exactly 23,744 tasks from 23,744 websites. * [UnpredicTable-5k](https://huggingface.co/datasets/MicPie/unpredictable_5k): This dataset contains 5k random tables from the full dataset. 
* UnpredicTable data subsets based on a manual human quality rating (please see our publication for details of the ratings): * [UnpredicTable-rated-low](https://huggingface.co/datasets/MicPie/unpredictable_rated-low) * [UnpredicTable-rated-medium](https://huggingface.co/datasets/MicPie/unpredictable_rated-medium) * [UnpredicTable-rated-high](https://huggingface.co/datasets/MicPie/unpredictable_rated-high) * UnpredicTable data subsets based on the website of origin: * [UnpredicTable-baseball-fantasysports-yahoo-com](https://huggingface.co/datasets/MicPie/unpredictable_baseball-fantasysports-yahoo-com) * [UnpredicTable-bulbapedia-bulbagarden-net](https://huggingface.co/datasets/MicPie/unpredictable_bulbapedia-bulbagarden-net) * [UnpredicTable-cappex-com](https://huggingface.co/datasets/MicPie/unpredictable_cappex-com) * [UnpredicTable-cram-com](https://huggingface.co/datasets/MicPie/unpredictable_cram-com) * [UnpredicTable-dividend-com](https://huggingface.co/datasets/MicPie/unpredictable_dividend-com) * [UnpredicTable-dummies-com](https://huggingface.co/datasets/MicPie/unpredictable_dummies-com) * [UnpredicTable-en-wikipedia-org](https://huggingface.co/datasets/MicPie/unpredictable_en-wikipedia-org) * [UnpredicTable-ensembl-org](https://huggingface.co/datasets/MicPie/unpredictable_ensembl-org) * [UnpredicTable-gamefaqs-com](https://huggingface.co/datasets/MicPie/unpredictable_gamefaqs-com) * [UnpredicTable-mgoblog-com](https://huggingface.co/datasets/MicPie/unpredictable_mgoblog-com) * [UnpredicTable-mmo-champion-com](https://huggingface.co/datasets/MicPie/unpredictable_mmo-champion-com) * [UnpredicTable-msdn-microsoft-com](https://huggingface.co/datasets/MicPie/unpredictable_msdn-microsoft-com) * [UnpredicTable-phonearena-com](https://huggingface.co/datasets/MicPie/unpredictable_phonearena-com) * [UnpredicTable-sittercity-com](https://huggingface.co/datasets/MicPie/unpredictable_sittercity-com) * 
[UnpredicTable-sporcle-com](https://huggingface.co/datasets/MicPie/unpredictable_sporcle-com) * [UnpredicTable-studystack-com](https://huggingface.co/datasets/MicPie/unpredictable_studystack-com) * [UnpredicTable-support-google-com](https://huggingface.co/datasets/MicPie/unpredictable_support-google-com) * [UnpredicTable-w3-org](https://huggingface.co/datasets/MicPie/unpredictable_w3-org) * [UnpredicTable-wiki-openmoko-org](https://huggingface.co/datasets/MicPie/unpredictable_wiki-openmoko-org) * [UnpredicTable-wkdu-org](https://huggingface.co/datasets/MicPie/unpredictable_wkdu-org) * UnpredicTable data subsets based on clustering (for the clustering details please see our publication): * [UnpredicTable-cluster00](https://huggingface.co/datasets/MicPie/unpredictable_cluster00) * [UnpredicTable-cluster01](https://huggingface.co/datasets/MicPie/unpredictable_cluster01) * [UnpredicTable-cluster02](https://huggingface.co/datasets/MicPie/unpredictable_cluster02) * [UnpredicTable-cluster03](https://huggingface.co/datasets/MicPie/unpredictable_cluster03) * [UnpredicTable-cluster04](https://huggingface.co/datasets/MicPie/unpredictable_cluster04) * [UnpredicTable-cluster05](https://huggingface.co/datasets/MicPie/unpredictable_cluster05) * [UnpredicTable-cluster06](https://huggingface.co/datasets/MicPie/unpredictable_cluster06) * [UnpredicTable-cluster07](https://huggingface.co/datasets/MicPie/unpredictable_cluster07) * [UnpredicTable-cluster08](https://huggingface.co/datasets/MicPie/unpredictable_cluster08) * [UnpredicTable-cluster09](https://huggingface.co/datasets/MicPie/unpredictable_cluster09) * [UnpredicTable-cluster10](https://huggingface.co/datasets/MicPie/unpredictable_cluster10) * [UnpredicTable-cluster11](https://huggingface.co/datasets/MicPie/unpredictable_cluster11) * [UnpredicTable-cluster12](https://huggingface.co/datasets/MicPie/unpredictable_cluster12) * [UnpredicTable-cluster13](https://huggingface.co/datasets/MicPie/unpredictable_cluster13) * 
[UnpredicTable-cluster14](https://huggingface.co/datasets/MicPie/unpredictable_cluster14) * [UnpredicTable-cluster15](https://huggingface.co/datasets/MicPie/unpredictable_cluster15) * [UnpredicTable-cluster16](https://huggingface.co/datasets/MicPie/unpredictable_cluster16) * [UnpredicTable-cluster17](https://huggingface.co/datasets/MicPie/unpredictable_cluster17) * [UnpredicTable-cluster18](https://huggingface.co/datasets/MicPie/unpredictable_cluster18) * [UnpredicTable-cluster19](https://huggingface.co/datasets/MicPie/unpredictable_cluster19) * [UnpredicTable-cluster20](https://huggingface.co/datasets/MicPie/unpredictable_cluster20) * [UnpredicTable-cluster21](https://huggingface.co/datasets/MicPie/unpredictable_cluster21) * [UnpredicTable-cluster22](https://huggingface.co/datasets/MicPie/unpredictable_cluster22) * [UnpredicTable-cluster23](https://huggingface.co/datasets/MicPie/unpredictable_cluster23) * [UnpredicTable-cluster24](https://huggingface.co/datasets/MicPie/unpredictable_cluster24) * [UnpredicTable-cluster25](https://huggingface.co/datasets/MicPie/unpredictable_cluster25) * [UnpredicTable-cluster26](https://huggingface.co/datasets/MicPie/unpredictable_cluster26) * [UnpredicTable-cluster27](https://huggingface.co/datasets/MicPie/unpredictable_cluster27) * [UnpredicTable-cluster28](https://huggingface.co/datasets/MicPie/unpredictable_cluster28) * [UnpredicTable-cluster29](https://huggingface.co/datasets/MicPie/unpredictable_cluster29) * [UnpredicTable-cluster-noise](https://huggingface.co/datasets/MicPie/unpredictable_cluster-noise) ### Supported Tasks and Leaderboards Since the tables come from the web, the distribution of tasks and topics is very broad. The shape of our dataset is very wide, i.e., we have 1000's of tasks, while each task has only a few examples, compared to most current NLP datasets which are very deep, i.e., 10s of tasks with many examples. 
This implies that our dataset covers a broad range of potential tasks, e.g., multiple-choice, question-answering, table-question-answering, text-classification, etc. The intended use of this dataset is to improve few-shot performance by fine-tuning/pre-training on our dataset. ### Languages English ## Dataset Structure ### Data Instances Each task is represented as a jsonline file and consists of several few-shot examples. Each example is a dictionary containing a field 'task', which identifies the task, followed by an 'input', 'options', and 'output' field. The 'input' field contains several column elements of the same row in the table, while the 'output' field is a target which represents an individual column of the same row. Each task contains several such examples which can be concatenated as a few-shot task. In the case of multiple choice classification, the 'options' field contains the possible classes that a model needs to choose from. There are also additional meta-data fields such as 'pageTitle', 'title', 'outputColName', 'url', 'wdcFile'. ### Data Fields 'task': task identifier 'input': column elements of a specific row in the table. 'options': for multiple choice classification, it provides the options to choose from. 'output': target column element of the same row as input. 'pageTitle': the title of the page containing the table. 'outputColName': output column name 'url': url to the website containing the table 'wdcFile': WDC Web Table Corpus file ### Data Splits The UnpredicTable datasets do not come with additional data splits. ## Dataset Creation ### Curation Rationale Few-shot training on multi-task datasets has been demonstrated to improve language models' few-shot learning (FSL) performance on new tasks, but it is unclear which training tasks lead to effective downstream task adaptation. Few-shot learning datasets are typically produced with expensive human curation, limiting the scale and diversity of the training tasks available to study. 
As an alternative source of few-shot data, we automatically extract 413,299 tasks from diverse internet tables. We provide this as a research resource to investigate the relationship between training data and few-shot learning. ### Source Data #### Initial Data Collection and Normalization We use internet tables from the English-language Relational Subset of the WDC Web Table Corpus 2015 (WTC). The WTC dataset tables were extracted from the July 2015 Common Crawl web corpus (http://webdatacommons.org/webtables/2015/EnglishStatistics.html). The dataset contains 50,820,165 tables from 323,160 web domains. We then convert the tables into few-shot learning tasks. Please see our publication for more details on the data collection and conversion pipeline. #### Who are the source language producers? The dataset is extracted from [WDC Web Table Corpora](http://webdatacommons.org/webtables/). ### Annotations #### Annotation process Manual annotation was only carried out for the [UnpredicTable-rated-low](https://huggingface.co/datasets/MicPie/unpredictable_rated-low), [UnpredicTable-rated-medium](https://huggingface.co/datasets/MicPie/unpredictable_rated-medium), and [UnpredicTable-rated-high](https://huggingface.co/datasets/MicPie/unpredictable_rated-high) data subsets to rate task quality. Detailed annotation instructions can be found in our publication. #### Who are the annotators? Annotations were carried out by a lab assistant. ### Personal and Sensitive Information The data was extracted from [WDC Web Table Corpora](http://webdatacommons.org/webtables/), which in turn extracted tables from the [Common Crawl](https://commoncrawl.org/). We did not filter the data in any way.
Thus any user identities or otherwise sensitive information (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history, etc.) might be contained in our dataset. ## Considerations for Using the Data ### Social Impact of Dataset This dataset is intended for use as a research resource to investigate the relationship between training data and few-shot learning. As such, it contains high- and low-quality data, as well as diverse content that may be untruthful or inappropriate. Without careful investigation, it should not be used for training models that will be deployed for use in decision-critical or user-facing situations. ### Discussion of Biases Since our dataset contains tables that are scraped from the web, it will also contain many toxic, racist, sexist, and otherwise harmful biases and texts. We have not run any analysis on the biases prevalent in our datasets. Neither have we explicitly filtered the content. This implies that a model trained on our dataset may potentially reflect harmful biases and toxic text that exist in our dataset. ### Other Known Limitations No additional known limitations. ## Additional Information ### Dataset Curators Jun Shern Chan, Michael Pieler, Jonathan Jao, Jérémy Scheurer, Ethan Perez ### Licensing Information Apache 2.0 ### Citation Information ``` @misc{chan2022few, author = {Chan, Jun Shern and Pieler, Michael and Jao, Jonathan and Scheurer, Jérémy and Perez, Ethan}, title = {Few-shot Adaptation Works with UnpredicTable Data}, publisher={arXiv}, year = {2022}, url = {https://arxiv.org/abs/2208.01009} } ```
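The jsonlines few-shot format described in the card's Data Instances and Data Fields sections can be illustrated with a short stdlib-only sketch. The record below is invented for illustration (only the field names — 'task', 'input', 'options', 'output' — come from the card), and the prompt layout is one plausible way to concatenate examples, not the paper's exact template.

```python
import json

# A single hypothetical example in the jsonlines format described above.
# Field names follow the card's Data Fields section; values are invented.
line = json.dumps({
    "task": "item_quality",
    "input": "Thunderfury | Sword | Item level 80",
    "options": ["Common", "Rare", "Epic", "Legendary"],
    "output": "Legendary",
})

example = json.loads(line)

# For a multiple-choice task, one simple prompt format lists the options,
# then the input, then the target output column element.
prompt = (
    "Options: " + ", ".join(example["options"]) + "\n"
    + "Input: " + example["input"] + "\n"
    + "Output: " + example["output"]
)
print(prompt)
```

Several such examples of the same task, concatenated with the last output held out, would form one few-shot prompt in the sense the card describes.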
14,811
[ [ -0.039642333984375, -0.039825439453125, 0.0299224853515625, 0.0225830078125, 0.00848388671875, 0.011383056640625, -0.007442474365234375, -0.0416259765625, 0.035400390625, 0.0223236083984375, -0.07574462890625, -0.04718017578125, -0.045501708984375, 0.0172271...
MicPie/unpredictable_phonearena-com
2022-08-04T20:11:00.000Z
[ "task_categories:multiple-choice", "task_categories:question-answering", "task_categories:zero-shot-classification", "task_categories:text2text-generation", "task_categories:table-question-answering", "task_categories:text-generation", "task_categories:text-classification", "task_categories:tabular-cl...
MicPie
The UnpredicTable dataset consists of web tables formatted as few-shot tasks for fine-tuning language models to improve their few-shot performance. For more details please see the accompanying dataset card.
@misc{chan2022few, author = {Chan, Jun Shern and Pieler, Michael and Jao, Jonathan and Scheurer, Jérémy and Perez, Ethan}, title = {Few-shot Adaptation Works with UnpredicTable Data}, publisher={arXiv}, year = {2022}, url = {https://arxiv.org/abs/2208.01009} }
0
10
2022-07-03T08:59:46
--- annotations_creators: - no-annotation language_creators: - found language: - en license: - apache-2.0 multilinguality: - monolingual pretty_name: UnpredicTable-phonearena-com size_categories: - 100K<n<1M source_datasets: [] task_categories: - multiple-choice - question-answering - zero-shot-classification - text2text-generation - table-question-answering - text-generation - text-classification - tabular-classification task_ids: - multiple-choice-qa - extractive-qa - open-domain-qa - closed-domain-qa - closed-book-qa - open-book-qa - language-modeling - multi-class-classification - natural-language-inference - topic-classification - multi-label-classification - tabular-multi-class-classification - tabular-multi-label-classification --- # Dataset Card for "UnpredicTable-phonearena-com" - Dataset of Few-shot Tasks from Tables ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-instances) - [Data Splits](#data-instances) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) ## Dataset Description - **Homepage:** https://ethanperez.net/unpredictable - **Repository:** https://github.com/JunShern/few-shot-adaptation - **Paper:** Few-shot Adaptation Works with 
UnpredicTable Data - **Point of Contact:** junshern@nyu.edu, perez@nyu.edu ### Dataset Summary The UnpredicTable dataset consists of web tables formatted as few-shot tasks for fine-tuning language models to improve their few-shot performance. There are several dataset versions available: * [UnpredicTable-full](https://huggingface.co/datasets/MicPie/unpredictable_full): Starting from the initial WTC corpus of 50M tables, we apply our tables-to-tasks procedure to produce our resulting dataset, [UnpredicTable-full](https://huggingface.co/datasets/MicPie/unpredictable_full), which comprises 413,299 tasks from 23,744 unique websites. * [UnpredicTable-unique](https://huggingface.co/datasets/MicPie/unpredictable_unique): This is the same as [UnpredicTable-full](https://huggingface.co/datasets/MicPie/unpredictable_full) but filtered to have a maximum of one task per website. [UnpredicTable-unique](https://huggingface.co/datasets/MicPie/unpredictable_unique) contains exactly 23,744 tasks from 23,744 websites. * [UnpredicTable-5k](https://huggingface.co/datasets/MicPie/unpredictable_5k): This dataset contains 5k random tables from the full dataset. 
* UnpredicTable data subsets based on a manual human quality rating (please see our publication for details of the ratings): * [UnpredicTable-rated-low](https://huggingface.co/datasets/MicPie/unpredictable_rated-low) * [UnpredicTable-rated-medium](https://huggingface.co/datasets/MicPie/unpredictable_rated-medium) * [UnpredicTable-rated-high](https://huggingface.co/datasets/MicPie/unpredictable_rated-high) * UnpredicTable data subsets based on the website of origin: * [UnpredicTable-baseball-fantasysports-yahoo-com](https://huggingface.co/datasets/MicPie/unpredictable_baseball-fantasysports-yahoo-com) * [UnpredicTable-bulbapedia-bulbagarden-net](https://huggingface.co/datasets/MicPie/unpredictable_bulbapedia-bulbagarden-net) * [UnpredicTable-cappex-com](https://huggingface.co/datasets/MicPie/unpredictable_cappex-com) * [UnpredicTable-cram-com](https://huggingface.co/datasets/MicPie/unpredictable_cram-com) * [UnpredicTable-dividend-com](https://huggingface.co/datasets/MicPie/unpredictable_dividend-com) * [UnpredicTable-dummies-com](https://huggingface.co/datasets/MicPie/unpredictable_dummies-com) * [UnpredicTable-en-wikipedia-org](https://huggingface.co/datasets/MicPie/unpredictable_en-wikipedia-org) * [UnpredicTable-ensembl-org](https://huggingface.co/datasets/MicPie/unpredictable_ensembl-org) * [UnpredicTable-gamefaqs-com](https://huggingface.co/datasets/MicPie/unpredictable_gamefaqs-com) * [UnpredicTable-mgoblog-com](https://huggingface.co/datasets/MicPie/unpredictable_mgoblog-com) * [UnpredicTable-mmo-champion-com](https://huggingface.co/datasets/MicPie/unpredictable_mmo-champion-com) * [UnpredicTable-msdn-microsoft-com](https://huggingface.co/datasets/MicPie/unpredictable_msdn-microsoft-com) * [UnpredicTable-phonearena-com](https://huggingface.co/datasets/MicPie/unpredictable_phonearena-com) * [UnpredicTable-sittercity-com](https://huggingface.co/datasets/MicPie/unpredictable_sittercity-com) * 
[UnpredicTable-sporcle-com](https://huggingface.co/datasets/MicPie/unpredictable_sporcle-com) * [UnpredicTable-studystack-com](https://huggingface.co/datasets/MicPie/unpredictable_studystack-com) * [UnpredicTable-support-google-com](https://huggingface.co/datasets/MicPie/unpredictable_support-google-com) * [UnpredicTable-w3-org](https://huggingface.co/datasets/MicPie/unpredictable_w3-org) * [UnpredicTable-wiki-openmoko-org](https://huggingface.co/datasets/MicPie/unpredictable_wiki-openmoko-org) * [UnpredicTable-wkdu-org](https://huggingface.co/datasets/MicPie/unpredictable_wkdu-org) * UnpredicTable data subsets based on clustering (for the clustering details please see our publication): * [UnpredicTable-cluster00](https://huggingface.co/datasets/MicPie/unpredictable_cluster00) * [UnpredicTable-cluster01](https://huggingface.co/datasets/MicPie/unpredictable_cluster01) * [UnpredicTable-cluster02](https://huggingface.co/datasets/MicPie/unpredictable_cluster02) * [UnpredicTable-cluster03](https://huggingface.co/datasets/MicPie/unpredictable_cluster03) * [UnpredicTable-cluster04](https://huggingface.co/datasets/MicPie/unpredictable_cluster04) * [UnpredicTable-cluster05](https://huggingface.co/datasets/MicPie/unpredictable_cluster05) * [UnpredicTable-cluster06](https://huggingface.co/datasets/MicPie/unpredictable_cluster06) * [UnpredicTable-cluster07](https://huggingface.co/datasets/MicPie/unpredictable_cluster07) * [UnpredicTable-cluster08](https://huggingface.co/datasets/MicPie/unpredictable_cluster08) * [UnpredicTable-cluster09](https://huggingface.co/datasets/MicPie/unpredictable_cluster09) * [UnpredicTable-cluster10](https://huggingface.co/datasets/MicPie/unpredictable_cluster10) * [UnpredicTable-cluster11](https://huggingface.co/datasets/MicPie/unpredictable_cluster11) * [UnpredicTable-cluster12](https://huggingface.co/datasets/MicPie/unpredictable_cluster12) * [UnpredicTable-cluster13](https://huggingface.co/datasets/MicPie/unpredictable_cluster13) * 
[UnpredicTable-cluster14](https://huggingface.co/datasets/MicPie/unpredictable_cluster14) * [UnpredicTable-cluster15](https://huggingface.co/datasets/MicPie/unpredictable_cluster15) * [UnpredicTable-cluster16](https://huggingface.co/datasets/MicPie/unpredictable_cluster16) * [UnpredicTable-cluster17](https://huggingface.co/datasets/MicPie/unpredictable_cluster17) * [UnpredicTable-cluster18](https://huggingface.co/datasets/MicPie/unpredictable_cluster18) * [UnpredicTable-cluster19](https://huggingface.co/datasets/MicPie/unpredictable_cluster19) * [UnpredicTable-cluster20](https://huggingface.co/datasets/MicPie/unpredictable_cluster20) * [UnpredicTable-cluster21](https://huggingface.co/datasets/MicPie/unpredictable_cluster21) * [UnpredicTable-cluster22](https://huggingface.co/datasets/MicPie/unpredictable_cluster22) * [UnpredicTable-cluster23](https://huggingface.co/datasets/MicPie/unpredictable_cluster23) * [UnpredicTable-cluster24](https://huggingface.co/datasets/MicPie/unpredictable_cluster24) * [UnpredicTable-cluster25](https://huggingface.co/datasets/MicPie/unpredictable_cluster25) * [UnpredicTable-cluster26](https://huggingface.co/datasets/MicPie/unpredictable_cluster26) * [UnpredicTable-cluster27](https://huggingface.co/datasets/MicPie/unpredictable_cluster27) * [UnpredicTable-cluster28](https://huggingface.co/datasets/MicPie/unpredictable_cluster28) * [UnpredicTable-cluster29](https://huggingface.co/datasets/MicPie/unpredictable_cluster29) * [UnpredicTable-cluster-noise](https://huggingface.co/datasets/MicPie/unpredictable_cluster-noise) ### Supported Tasks and Leaderboards Since the tables come from the web, the distribution of tasks and topics is very broad. The shape of our dataset is very wide, i.e., we have 1000's of tasks, while each task has only a few examples, compared to most current NLP datasets which are very deep, i.e., 10s of tasks with many examples. 
This implies that our dataset covers a broad range of potential tasks, e.g., multiple-choice, question-answering, table-question-answering, text-classification, etc. The intended use of this dataset is to improve few-shot performance by fine-tuning/pre-training on our dataset. ### Languages English ## Dataset Structure ### Data Instances Each task is represented as a jsonline file and consists of several few-shot examples. Each example is a dictionary containing a field 'task', which identifies the task, followed by an 'input', 'options', and 'output' field. The 'input' field contains several column elements of the same row in the table, while the 'output' field is a target which represents an individual column of the same row. Each task contains several such examples which can be concatenated as a few-shot task. In the case of multiple choice classification, the 'options' field contains the possible classes that a model needs to choose from. There are also additional meta-data fields such as 'pageTitle', 'title', 'outputColName', 'url', 'wdcFile'. ### Data Fields 'task': task identifier 'input': column elements of a specific row in the table. 'options': for multiple choice classification, it provides the options to choose from. 'output': target column element of the same row as input. 'pageTitle': the title of the page containing the table. 'outputColName': output column name 'url': url to the website containing the table 'wdcFile': WDC Web Table Corpus file ### Data Splits The UnpredicTable datasets do not come with additional data splits. ## Dataset Creation ### Curation Rationale Few-shot training on multi-task datasets has been demonstrated to improve language models' few-shot learning (FSL) performance on new tasks, but it is unclear which training tasks lead to effective downstream task adaptation. Few-shot learning datasets are typically produced with expensive human curation, limiting the scale and diversity of the training tasks available to study. 
As an alternative source of few-shot data, we automatically extract 413,299 tasks from diverse internet tables. We provide this as a research resource to investigate the relationship between training data and few-shot learning. ### Source Data #### Initial Data Collection and Normalization We use internet tables from the English-language Relational Subset of the WDC Web Table Corpus 2015 (WTC). The WTC dataset tables were extracted from the July 2015 Common Crawl web corpus (http://webdatacommons.org/webtables/2015/EnglishStatistics.html). The dataset contains 50,820,165 tables from 323,160 web domains. We then convert the tables into few-shot learning tasks. Please see our publication for more details on the data collection and conversion pipeline. #### Who are the source language producers? The dataset is extracted from [WDC Web Table Corpora](http://webdatacommons.org/webtables/). ### Annotations #### Annotation process Manual annotation was only carried out for the [UnpredicTable-rated-low](https://huggingface.co/datasets/MicPie/unpredictable_rated-low), [UnpredicTable-rated-medium](https://huggingface.co/datasets/MicPie/unpredictable_rated-medium), and [UnpredicTable-rated-high](https://huggingface.co/datasets/MicPie/unpredictable_rated-high) data subsets to rate task quality. Detailed annotation instructions can be found in our publication. #### Who are the annotators? Annotations were carried out by a lab assistant. ### Personal and Sensitive Information The data was extracted from [WDC Web Table Corpora](http://webdatacommons.org/webtables/), which in turn extracted tables from the [Common Crawl](https://commoncrawl.org/). We did not filter the data in any way. 
Thus any user identities or otherwise sensitive information (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history, etc.) might be contained in our dataset. ## Considerations for Using the Data ### Social Impact of Dataset This dataset is intended for use as a research resource to investigate the relationship between training data and few-shot learning. As such, it contains high- and low-quality data, as well as diverse content that may be untruthful or inappropriate. Without careful investigation, it should not be used for training models that will be deployed for use in decision-critical or user-facing situations. ### Discussion of Biases Since our dataset contains tables that are scraped from the web, it will also contain many toxic, racist, sexist, and otherwise harmful biases and texts. We have not run any analysis on the biases prevalent in our datasets. Neither have we explicitly filtered the content. This implies that a model trained on our dataset may potentially reflect harmful biases and toxic text that exist in our dataset. ### Other Known Limitations No additional known limitations. ## Additional Information ### Dataset Curators Jun Shern Chan, Michael Pieler, Jonathan Jao, Jérémy Scheurer, Ethan Perez ### Licensing Information Apache 2.0 ### Citation Information ``` @misc{chan2022few, author = {Chan, Jun Shern and Pieler, Michael and Jao, Jonathan and Scheurer, Jérémy and Perez, Ethan}, title = {Few-shot Adaptation Works with UnpredicTable Data}, publisher={arXiv}, year = {2022}, url = {https://arxiv.org/abs/2208.01009} } ```
14,807
[ [ -0.03778076171875, -0.039031982421875, 0.0301666259765625, 0.02362060546875, 0.00559234619140625, 0.0102691650390625, -0.00917816162109375, -0.0426025390625, 0.038543701171875, 0.0203399658203125, -0.07208251953125, -0.047332763671875, -0.044219970703125, 0....
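The card above describes each task as a JSON Lines file of few-shot examples with 'task', 'input', 'options', and 'output' fields, where examples "can be concatenated as a few-shot task". As a minimal sketch of that concatenation step (field names follow the card; the example records below are illustrative, not drawn from the dataset), one might assemble a few-shot prompt like so:

```python
# Sketch: build a few-shot prompt from UnpredicTable-style examples.
# Field names ('input', 'output') follow the dataset card; the records
# themselves are invented for illustration.

def build_few_shot_prompt(examples, query_input):
    """Concatenate (input, output) pairs, then append the query input."""
    parts = []
    for ex in examples:
        parts.append(f"Input: {ex['input']}\nOutput: {ex['output']}")
    # The final block leaves 'Output:' open for the model to complete.
    parts.append(f"Input: {query_input}\nOutput:")
    return "\n\n".join(parts)

examples = [
    {"task": "demo", "input": "row A col1, row A col2", "output": "row A target"},
    {"task": "demo", "input": "row B col1, row B col2", "output": "row B target"},
]

prompt = build_few_shot_prompt(examples, "row C col1, row C col2")
print(prompt)
```

The exact prompt template (labels, separators) is a design choice not fixed by the card; this sketch only shows the row-to-example, examples-to-prompt shape it describes.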
MicPie/unpredictable_bulbapedia-bulbagarden-net
2022-08-04T19:40:16.000Z
[ "task_categories:multiple-choice", "task_categories:question-answering", "task_categories:zero-shot-classification", "task_categories:text2text-generation", "task_categories:table-question-answering", "task_categories:text-generation", "task_categories:text-classification", "task_categories:tabular-cl...
MicPie
The UnpredicTable dataset consists of web tables formatted as few-shot tasks for fine-tuning language models to improve their few-shot performance. For more details please see the accompanying dataset card.
@misc{chan2022few, author = {Chan, Jun Shern and Pieler, Michael and Jao, Jonathan and Scheurer, Jérémy and Perez, Ethan}, title = {Few-shot Adaptation Works with UnpredicTable Data}, publisher={arXiv}, year = {2022}, url = {https://arxiv.org/abs/2208.01009} }
0
10
2022-07-03T09:24:28
--- annotations_creators: - no-annotation language_creators: - found language: - en license: - apache-2.0 multilinguality: - monolingual pretty_name: UnpredicTable-bulbapedia-bulbagarden-net size_categories: - 100K<n<1M source_datasets: [] task_categories: - multiple-choice - question-answering - zero-shot-classification - text2text-generation - table-question-answering - text-generation - text-classification - tabular-classification task_ids: - multiple-choice-qa - extractive-qa - open-domain-qa - closed-domain-qa - closed-book-qa - open-book-qa - language-modeling - multi-class-classification - natural-language-inference - topic-classification - multi-label-classification - tabular-multi-class-classification - tabular-multi-label-classification --- # Dataset Card for "UnpredicTable-bulbapedia-bulbagarden-net" - Dataset of Few-shot Tasks from Tables ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-instances) - [Data Splits](#data-instances) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) ## Dataset Description - **Homepage:** https://ethanperez.net/unpredictable - **Repository:** https://github.com/JunShern/few-shot-adaptation - **Paper:** Few-shot 
Adaptation Works with UnpredicTable Data - **Point of Contact:** junshern@nyu.edu, perez@nyu.edu ### Dataset Summary The UnpredicTable dataset consists of web tables formatted as few-shot tasks for fine-tuning language models to improve their few-shot performance. There are several dataset versions available: * [UnpredicTable-full](https://huggingface.co/datasets/MicPie/unpredictable_full): Starting from the initial WTC corpus of 50M tables, we apply our tables-to-tasks procedure to produce our resulting dataset, [UnpredicTable-full](https://huggingface.co/datasets/MicPie/unpredictable_full), which comprises 413,299 tasks from 23,744 unique websites. * [UnpredicTable-unique](https://huggingface.co/datasets/MicPie/unpredictable_unique): This is the same as [UnpredicTable-full](https://huggingface.co/datasets/MicPie/unpredictable_full) but filtered to have a maximum of one task per website. [UnpredicTable-unique](https://huggingface.co/datasets/MicPie/unpredictable_unique) contains exactly 23,744 tasks from 23,744 websites. * [UnpredicTable-5k](https://huggingface.co/datasets/MicPie/unpredictable_5k): This dataset contains 5k random tables from the full dataset. 
* UnpredicTable data subsets based on a manual human quality rating (please see our publication for details of the ratings): * [UnpredicTable-rated-low](https://huggingface.co/datasets/MicPie/unpredictable_rated-low) * [UnpredicTable-rated-medium](https://huggingface.co/datasets/MicPie/unpredictable_rated-medium) * [UnpredicTable-rated-high](https://huggingface.co/datasets/MicPie/unpredictable_rated-high) * UnpredicTable data subsets based on the website of origin: * [UnpredicTable-baseball-fantasysports-yahoo-com](https://huggingface.co/datasets/MicPie/unpredictable_baseball-fantasysports-yahoo-com) * [UnpredicTable-bulbapedia-bulbagarden-net](https://huggingface.co/datasets/MicPie/unpredictable_bulbapedia-bulbagarden-net) * [UnpredicTable-cappex-com](https://huggingface.co/datasets/MicPie/unpredictable_cappex-com) * [UnpredicTable-cram-com](https://huggingface.co/datasets/MicPie/unpredictable_cram-com) * [UnpredicTable-dividend-com](https://huggingface.co/datasets/MicPie/unpredictable_dividend-com) * [UnpredicTable-dummies-com](https://huggingface.co/datasets/MicPie/unpredictable_dummies-com) * [UnpredicTable-en-wikipedia-org](https://huggingface.co/datasets/MicPie/unpredictable_en-wikipedia-org) * [UnpredicTable-ensembl-org](https://huggingface.co/datasets/MicPie/unpredictable_ensembl-org) * [UnpredicTable-gamefaqs-com](https://huggingface.co/datasets/MicPie/unpredictable_gamefaqs-com) * [UnpredicTable-mgoblog-com](https://huggingface.co/datasets/MicPie/unpredictable_mgoblog-com) * [UnpredicTable-mmo-champion-com](https://huggingface.co/datasets/MicPie/unpredictable_mmo-champion-com) * [UnpredicTable-msdn-microsoft-com](https://huggingface.co/datasets/MicPie/unpredictable_msdn-microsoft-com) * [UnpredicTable-phonearena-com](https://huggingface.co/datasets/MicPie/unpredictable_phonearena-com) * [UnpredicTable-sittercity-com](https://huggingface.co/datasets/MicPie/unpredictable_sittercity-com) * 
[UnpredicTable-sporcle-com](https://huggingface.co/datasets/MicPie/unpredictable_sporcle-com) * [UnpredicTable-studystack-com](https://huggingface.co/datasets/MicPie/unpredictable_studystack-com) * [UnpredicTable-support-google-com](https://huggingface.co/datasets/MicPie/unpredictable_support-google-com) * [UnpredicTable-w3-org](https://huggingface.co/datasets/MicPie/unpredictable_w3-org) * [UnpredicTable-wiki-openmoko-org](https://huggingface.co/datasets/MicPie/unpredictable_wiki-openmoko-org) * [UnpredicTable-wkdu-org](https://huggingface.co/datasets/MicPie/unpredictable_wkdu-org) * UnpredicTable data subsets based on clustering (for the clustering details please see our publication): * [UnpredicTable-cluster00](https://huggingface.co/datasets/MicPie/unpredictable_cluster00) * [UnpredicTable-cluster01](https://huggingface.co/datasets/MicPie/unpredictable_cluster01) * [UnpredicTable-cluster02](https://huggingface.co/datasets/MicPie/unpredictable_cluster02) * [UnpredicTable-cluster03](https://huggingface.co/datasets/MicPie/unpredictable_cluster03) * [UnpredicTable-cluster04](https://huggingface.co/datasets/MicPie/unpredictable_cluster04) * [UnpredicTable-cluster05](https://huggingface.co/datasets/MicPie/unpredictable_cluster05) * [UnpredicTable-cluster06](https://huggingface.co/datasets/MicPie/unpredictable_cluster06) * [UnpredicTable-cluster07](https://huggingface.co/datasets/MicPie/unpredictable_cluster07) * [UnpredicTable-cluster08](https://huggingface.co/datasets/MicPie/unpredictable_cluster08) * [UnpredicTable-cluster09](https://huggingface.co/datasets/MicPie/unpredictable_cluster09) * [UnpredicTable-cluster10](https://huggingface.co/datasets/MicPie/unpredictable_cluster10) * [UnpredicTable-cluster11](https://huggingface.co/datasets/MicPie/unpredictable_cluster11) * [UnpredicTable-cluster12](https://huggingface.co/datasets/MicPie/unpredictable_cluster12) * [UnpredicTable-cluster13](https://huggingface.co/datasets/MicPie/unpredictable_cluster13) * 
[UnpredicTable-cluster14](https://huggingface.co/datasets/MicPie/unpredictable_cluster14) * [UnpredicTable-cluster15](https://huggingface.co/datasets/MicPie/unpredictable_cluster15) * [UnpredicTable-cluster16](https://huggingface.co/datasets/MicPie/unpredictable_cluster16) * [UnpredicTable-cluster17](https://huggingface.co/datasets/MicPie/unpredictable_cluster17) * [UnpredicTable-cluster18](https://huggingface.co/datasets/MicPie/unpredictable_cluster18) * [UnpredicTable-cluster19](https://huggingface.co/datasets/MicPie/unpredictable_cluster19) * [UnpredicTable-cluster20](https://huggingface.co/datasets/MicPie/unpredictable_cluster20) * [UnpredicTable-cluster21](https://huggingface.co/datasets/MicPie/unpredictable_cluster21) * [UnpredicTable-cluster22](https://huggingface.co/datasets/MicPie/unpredictable_cluster22) * [UnpredicTable-cluster23](https://huggingface.co/datasets/MicPie/unpredictable_cluster23) * [UnpredicTable-cluster24](https://huggingface.co/datasets/MicPie/unpredictable_cluster24) * [UnpredicTable-cluster25](https://huggingface.co/datasets/MicPie/unpredictable_cluster25) * [UnpredicTable-cluster26](https://huggingface.co/datasets/MicPie/unpredictable_cluster26) * [UnpredicTable-cluster27](https://huggingface.co/datasets/MicPie/unpredictable_cluster27) * [UnpredicTable-cluster28](https://huggingface.co/datasets/MicPie/unpredictable_cluster28) * [UnpredicTable-cluster29](https://huggingface.co/datasets/MicPie/unpredictable_cluster29) * [UnpredicTable-cluster-noise](https://huggingface.co/datasets/MicPie/unpredictable_cluster-noise) ### Supported Tasks and Leaderboards Since the tables come from the web, the distribution of tasks and topics is very broad. The shape of our dataset is very wide, i.e., we have 1000's of tasks, while each task has only a few examples, compared to most current NLP datasets which are very deep, i.e., 10s of tasks with many examples. 
This implies that our dataset covers a broad range of potential tasks, e.g., multiple-choice, question-answering, table-question-answering, text-classification, etc. The intended use of this dataset is to improve few-shot performance by fine-tuning/pre-training on our dataset. ### Languages English ## Dataset Structure ### Data Instances Each task is represented as a jsonline file and consists of several few-shot examples. Each example is a dictionary containing a field 'task', which identifies the task, followed by an 'input', 'options', and 'output' field. The 'input' field contains several column elements of the same row in the table, while the 'output' field is a target which represents an individual column of the same row. Each task contains several such examples which can be concatenated as a few-shot task. In the case of multiple choice classification, the 'options' field contains the possible classes that a model needs to choose from. There are also additional meta-data fields such as 'pageTitle', 'title', 'outputColName', 'url', 'wdcFile'. ### Data Fields 'task': task identifier 'input': column elements of a specific row in the table. 'options': for multiple choice classification, it provides the options to choose from. 'output': target column element of the same row as input. 'pageTitle': the title of the page containing the table. 'outputColName': output column name 'url': url to the website containing the table 'wdcFile': WDC Web Table Corpus file ### Data Splits The UnpredicTable datasets do not come with additional data splits. ## Dataset Creation ### Curation Rationale Few-shot training on multi-task datasets has been demonstrated to improve language models' few-shot learning (FSL) performance on new tasks, but it is unclear which training tasks lead to effective downstream task adaptation. Few-shot learning datasets are typically produced with expensive human curation, limiting the scale and diversity of the training tasks available to study. 
As an alternative source of few-shot data, we automatically extract 413,299 tasks from diverse internet tables. We provide this as a research resource to investigate the relationship between training data and few-shot learning. ### Source Data #### Initial Data Collection and Normalization We use internet tables from the English-language Relational Subset of the WDC Web Table Corpus 2015 (WTC). The WTC dataset tables were extracted from the July 2015 Common Crawl web corpus (http://webdatacommons.org/webtables/2015/EnglishStatistics.html). The dataset contains 50,820,165 tables from 323,160 web domains. We then convert the tables into few-shot learning tasks. Please see our publication for more details on the data collection and conversion pipeline. #### Who are the source language producers? The dataset is extracted from [WDC Web Table Corpora](http://webdatacommons.org/webtables/). ### Annotations #### Annotation process Manual annotation was only carried out for the [UnpredicTable-rated-low](https://huggingface.co/datasets/MicPie/unpredictable_rated-low), [UnpredicTable-rated-medium](https://huggingface.co/datasets/MicPie/unpredictable_rated-medium), and [UnpredicTable-rated-high](https://huggingface.co/datasets/MicPie/unpredictable_rated-high) data subsets to rate task quality. Detailed annotation instructions can be found in our publication. #### Who are the annotators? Annotations were carried out by a lab assistant. ### Personal and Sensitive Information The data was extracted from [WDC Web Table Corpora](http://webdatacommons.org/webtables/), which in turn extracted tables from the [Common Crawl](https://commoncrawl.org/). We did not filter the data in any way. 
Thus any user identities or otherwise sensitive information (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history, etc.) might be contained in our dataset. ## Considerations for Using the Data ### Social Impact of Dataset This dataset is intended for use as a research resource to investigate the relationship between training data and few-shot learning. As such, it contains high- and low-quality data, as well as diverse content that may be untruthful or inappropriate. Without careful investigation, it should not be used for training models that will be deployed for use in decision-critical or user-facing situations. ### Discussion of Biases Since our dataset contains tables that are scraped from the web, it will also contain many toxic, racist, sexist, and otherwise harmful biases and texts. We have not run any analysis on the biases prevalent in our datasets. Neither have we explicitly filtered the content. This implies that a model trained on our dataset may potentially reflect harmful biases and toxic text that exist in our dataset. ### Other Known Limitations No additional known limitations. ## Additional Information ### Dataset Curators Jun Shern Chan, Michael Pieler, Jonathan Jao, Jérémy Scheurer, Ethan Perez ### Licensing Information Apache 2.0 ### Citation Information ``` @misc{chan2022few, author = {Chan, Jun Shern and Pieler, Michael and Jao, Jonathan and Scheurer, Jérémy and Perez, Ethan}, title = {Few-shot Adaptation Works with UnpredicTable Data}, publisher={arXiv}, year = {2022}, url = {https://arxiv.org/abs/2208.01009} } ```
14,830
[ [ -0.039581298828125, -0.039093017578125, 0.0292816162109375, 0.0253448486328125, 0.00659942626953125, 0.01010894775390625, -0.00995635986328125, -0.04443359375, 0.03887939453125, 0.019439697265625, -0.07275390625, -0.0458984375, -0.04522705078125, 0.016281127...
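The cards in these records note that, for multiple-choice classification, the 'options' field lists the classes a model must choose from, with 'output' as the target column element. A minimal sketch of parsing one such JSON Lines record (the record literal is invented for illustration; field names follow the card):

```python
import json

# Sketch: parse one JSON Lines record of the shape described in the card
# and, for multiple-choice classification, confirm the target appears in
# 'options'. The record literal is illustrative, not drawn from the dataset.

line = json.dumps({
    "task": "demo-task",
    "input": "col1: 4.7 in, col2: OLED",
    "options": ["Yes", "No"],
    "output": "Yes",
    "pageTitle": "Example page",
    "outputColName": "target",
})

record = json.loads(line)
# Sanity check the card's invariant: the target is one of the options.
assert record["output"] in record["options"]
print(record["task"], record["output"])
```

In a real file, each line of the task's `.jsonl` would be parsed this way, one record per line.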
MicPie/unpredictable_wkdu-org
2022-08-04T20:18:48.000Z
[ "task_categories:multiple-choice", "task_categories:question-answering", "task_categories:zero-shot-classification", "task_categories:text2text-generation", "task_categories:table-question-answering", "task_categories:text-generation", "task_categories:text-classification", "task_categories:tabular-cl...
MicPie
The UnpredicTable dataset consists of web tables formatted as few-shot tasks for fine-tuning language models to improve their few-shot performance. For more details please see the accompanying dataset card.
@misc{chan2022few, author = {Chan, Jun Shern and Pieler, Michael and Jao, Jonathan and Scheurer, Jérémy and Perez, Ethan}, title = {Few-shot Adaptation Works with UnpredicTable Data}, publisher={arXiv}, year = {2022}, url = {https://arxiv.org/abs/2208.01009} }
0
10
2022-07-03T09:30:13
--- annotations_creators: - no-annotation language_creators: - found language: - en license: - apache-2.0 multilinguality: - monolingual pretty_name: UnpredicTable-wkdu-org size_categories: - 100K<n<1M source_datasets: [] task_categories: - multiple-choice - question-answering - zero-shot-classification - text2text-generation - table-question-answering - text-generation - text-classification - tabular-classification task_ids: - multiple-choice-qa - extractive-qa - open-domain-qa - closed-domain-qa - closed-book-qa - open-book-qa - language-modeling - multi-class-classification - natural-language-inference - topic-classification - multi-label-classification - tabular-multi-class-classification - tabular-multi-label-classification --- # Dataset Card for "UnpredicTable-wkdu-org" - Dataset of Few-shot Tasks from Tables ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-instances) - [Data Splits](#data-instances) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) ## Dataset Description - **Homepage:** https://ethanperez.net/unpredictable - **Repository:** https://github.com/JunShern/few-shot-adaptation - **Paper:** Few-shot Adaptation Works with UnpredicTable 
Data - **Point of Contact:** junshern@nyu.edu, perez@nyu.edu ### Dataset Summary The UnpredicTable dataset consists of web tables formatted as few-shot tasks for fine-tuning language models to improve their few-shot performance. There are several dataset versions available: * [UnpredicTable-full](https://huggingface.co/datasets/MicPie/unpredictable_full): Starting from the initial WTC corpus of 50M tables, we apply our tables-to-tasks procedure to produce our resulting dataset, [UnpredicTable-full](https://huggingface.co/datasets/MicPie/unpredictable_full), which comprises 413,299 tasks from 23,744 unique websites. * [UnpredicTable-unique](https://huggingface.co/datasets/MicPie/unpredictable_unique): This is the same as [UnpredicTable-full](https://huggingface.co/datasets/MicPie/unpredictable_full) but filtered to have a maximum of one task per website. [UnpredicTable-unique](https://huggingface.co/datasets/MicPie/unpredictable_unique) contains exactly 23,744 tasks from 23,744 websites. * [UnpredicTable-5k](https://huggingface.co/datasets/MicPie/unpredictable_5k): This dataset contains 5k random tables from the full dataset. 
* UnpredicTable data subsets based on a manual human quality rating (please see our publication for details of the ratings): * [UnpredicTable-rated-low](https://huggingface.co/datasets/MicPie/unpredictable_rated-low) * [UnpredicTable-rated-medium](https://huggingface.co/datasets/MicPie/unpredictable_rated-medium) * [UnpredicTable-rated-high](https://huggingface.co/datasets/MicPie/unpredictable_rated-high) * UnpredicTable data subsets based on the website of origin: * [UnpredicTable-baseball-fantasysports-yahoo-com](https://huggingface.co/datasets/MicPie/unpredictable_baseball-fantasysports-yahoo-com) * [UnpredicTable-bulbapedia-bulbagarden-net](https://huggingface.co/datasets/MicPie/unpredictable_bulbapedia-bulbagarden-net) * [UnpredicTable-cappex-com](https://huggingface.co/datasets/MicPie/unpredictable_cappex-com) * [UnpredicTable-cram-com](https://huggingface.co/datasets/MicPie/unpredictable_cram-com) * [UnpredicTable-dividend-com](https://huggingface.co/datasets/MicPie/unpredictable_dividend-com) * [UnpredicTable-dummies-com](https://huggingface.co/datasets/MicPie/unpredictable_dummies-com) * [UnpredicTable-en-wikipedia-org](https://huggingface.co/datasets/MicPie/unpredictable_en-wikipedia-org) * [UnpredicTable-ensembl-org](https://huggingface.co/datasets/MicPie/unpredictable_ensembl-org) * [UnpredicTable-gamefaqs-com](https://huggingface.co/datasets/MicPie/unpredictable_gamefaqs-com) * [UnpredicTable-mgoblog-com](https://huggingface.co/datasets/MicPie/unpredictable_mgoblog-com) * [UnpredicTable-mmo-champion-com](https://huggingface.co/datasets/MicPie/unpredictable_mmo-champion-com) * [UnpredicTable-msdn-microsoft-com](https://huggingface.co/datasets/MicPie/unpredictable_msdn-microsoft-com) * [UnpredicTable-phonearena-com](https://huggingface.co/datasets/MicPie/unpredictable_phonearena-com) * [UnpredicTable-sittercity-com](https://huggingface.co/datasets/MicPie/unpredictable_sittercity-com) * 
[UnpredicTable-sporcle-com](https://huggingface.co/datasets/MicPie/unpredictable_sporcle-com) * [UnpredicTable-studystack-com](https://huggingface.co/datasets/MicPie/unpredictable_studystack-com) * [UnpredicTable-support-google-com](https://huggingface.co/datasets/MicPie/unpredictable_support-google-com) * [UnpredicTable-w3-org](https://huggingface.co/datasets/MicPie/unpredictable_w3-org) * [UnpredicTable-wiki-openmoko-org](https://huggingface.co/datasets/MicPie/unpredictable_wiki-openmoko-org) * [UnpredicTable-wkdu-org](https://huggingface.co/datasets/MicPie/unpredictable_wkdu-org) * UnpredicTable data subsets based on clustering (for the clustering details please see our publication): * [UnpredicTable-cluster00](https://huggingface.co/datasets/MicPie/unpredictable_cluster00) * [UnpredicTable-cluster01](https://huggingface.co/datasets/MicPie/unpredictable_cluster01) * [UnpredicTable-cluster02](https://huggingface.co/datasets/MicPie/unpredictable_cluster02) * [UnpredicTable-cluster03](https://huggingface.co/datasets/MicPie/unpredictable_cluster03) * [UnpredicTable-cluster04](https://huggingface.co/datasets/MicPie/unpredictable_cluster04) * [UnpredicTable-cluster05](https://huggingface.co/datasets/MicPie/unpredictable_cluster05) * [UnpredicTable-cluster06](https://huggingface.co/datasets/MicPie/unpredictable_cluster06) * [UnpredicTable-cluster07](https://huggingface.co/datasets/MicPie/unpredictable_cluster07) * [UnpredicTable-cluster08](https://huggingface.co/datasets/MicPie/unpredictable_cluster08) * [UnpredicTable-cluster09](https://huggingface.co/datasets/MicPie/unpredictable_cluster09) * [UnpredicTable-cluster10](https://huggingface.co/datasets/MicPie/unpredictable_cluster10) * [UnpredicTable-cluster11](https://huggingface.co/datasets/MicPie/unpredictable_cluster11) * [UnpredicTable-cluster12](https://huggingface.co/datasets/MicPie/unpredictable_cluster12) * [UnpredicTable-cluster13](https://huggingface.co/datasets/MicPie/unpredictable_cluster13) * 
[UnpredicTable-cluster14](https://huggingface.co/datasets/MicPie/unpredictable_cluster14) * [UnpredicTable-cluster15](https://huggingface.co/datasets/MicPie/unpredictable_cluster15) * [UnpredicTable-cluster16](https://huggingface.co/datasets/MicPie/unpredictable_cluster16) * [UnpredicTable-cluster17](https://huggingface.co/datasets/MicPie/unpredictable_cluster17) * [UnpredicTable-cluster18](https://huggingface.co/datasets/MicPie/unpredictable_cluster18) * [UnpredicTable-cluster19](https://huggingface.co/datasets/MicPie/unpredictable_cluster19) * [UnpredicTable-cluster20](https://huggingface.co/datasets/MicPie/unpredictable_cluster20) * [UnpredicTable-cluster21](https://huggingface.co/datasets/MicPie/unpredictable_cluster21) * [UnpredicTable-cluster22](https://huggingface.co/datasets/MicPie/unpredictable_cluster22) * [UnpredicTable-cluster23](https://huggingface.co/datasets/MicPie/unpredictable_cluster23) * [UnpredicTable-cluster24](https://huggingface.co/datasets/MicPie/unpredictable_cluster24) * [UnpredicTable-cluster25](https://huggingface.co/datasets/MicPie/unpredictable_cluster25) * [UnpredicTable-cluster26](https://huggingface.co/datasets/MicPie/unpredictable_cluster26) * [UnpredicTable-cluster27](https://huggingface.co/datasets/MicPie/unpredictable_cluster27) * [UnpredicTable-cluster28](https://huggingface.co/datasets/MicPie/unpredictable_cluster28) * [UnpredicTable-cluster29](https://huggingface.co/datasets/MicPie/unpredictable_cluster29) * [UnpredicTable-cluster-noise](https://huggingface.co/datasets/MicPie/unpredictable_cluster-noise) ### Supported Tasks and Leaderboards Since the tables come from the web, the distribution of tasks and topics is very broad. The shape of our dataset is very wide, i.e., we have 1000's of tasks, while each task has only a few examples, compared to most current NLP datasets which are very deep, i.e., 10s of tasks with many examples. 
This implies that our dataset covers a broad range of potential tasks, e.g., multiple-choice, question-answering, table-question-answering, text-classification, etc. The intended use of this dataset is to improve few-shot performance by fine-tuning/pre-training on our dataset. ### Languages English ## Dataset Structure ### Data Instances Each task is represented as a jsonline file and consists of several few-shot examples. Each example is a dictionary containing a field 'task', which identifies the task, followed by an 'input', 'options', and 'output' field. The 'input' field contains several column elements of the same row in the table, while the 'output' field is a target which represents an individual column of the same row. Each task contains several such examples which can be concatenated as a few-shot task. In the case of multiple choice classification, the 'options' field contains the possible classes that a model needs to choose from. There are also additional meta-data fields such as 'pageTitle', 'title', 'outputColName', 'url', 'wdcFile'. ### Data Fields 'task': task identifier 'input': column elements of a specific row in the table. 'options': for multiple choice classification, it provides the options to choose from. 'output': target column element of the same row as input. 'pageTitle': the title of the page containing the table. 'outputColName': output column name 'url': url to the website containing the table 'wdcFile': WDC Web Table Corpus file ### Data Splits The UnpredicTable datasets do not come with additional data splits. ## Dataset Creation ### Curation Rationale Few-shot training on multi-task datasets has been demonstrated to improve language models' few-shot learning (FSL) performance on new tasks, but it is unclear which training tasks lead to effective downstream task adaptation. Few-shot learning datasets are typically produced with expensive human curation, limiting the scale and diversity of the training tasks available to study. 
As an alternative source of few-shot data, we automatically extract 413,299 tasks from diverse internet tables. We provide this as a research resource to investigate the relationship between training data and few-shot learning. ### Source Data #### Initial Data Collection and Normalization We use internet tables from the English-language Relational Subset of the WDC Web Table Corpus 2015 (WTC). The WTC dataset tables were extracted from the July 2015 Common Crawl web corpus (http://webdatacommons.org/webtables/2015/EnglishStatistics.html). The dataset contains 50,820,165 tables from 323,160 web domains. We then convert the tables into few-shot learning tasks. Please see our publication for more details on the data collection and conversion pipeline. #### Who are the source language producers? The dataset is extracted from [WDC Web Table Corpora](http://webdatacommons.org/webtables/). ### Annotations #### Annotation process Manual annotation was only carried out for the [UnpredicTable-rated-low](https://huggingface.co/datasets/MicPie/unpredictable_rated-low), [UnpredicTable-rated-medium](https://huggingface.co/datasets/MicPie/unpredictable_rated-medium), and [UnpredicTable-rated-high](https://huggingface.co/datasets/MicPie/unpredictable_rated-high) data subsets to rate task quality. Detailed annotation instructions can be found in our publication. #### Who are the annotators? Annotations were carried out by a lab assistant. ### Personal and Sensitive Information The data was extracted from [WDC Web Table Corpora](http://webdatacommons.org/webtables/), which in turn extracted tables from the [Common Crawl](https://commoncrawl.org/). We did not filter the data in any way. 
Thus any user identities or otherwise sensitive information (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history, etc.) might be contained in our dataset. ## Considerations for Using the Data ### Social Impact of Dataset This dataset is intended for use as a research resource to investigate the relationship between training data and few-shot learning. As such, it contains high- and low-quality data, as well as diverse content that may be untruthful or inappropriate. Without careful investigation, it should not be used for training models that will be deployed for use in decision-critical or user-facing situations. ### Discussion of Biases Since our dataset contains tables that are scraped from the web, it will also contain many toxic, racist, sexist, and otherwise harmful biases and texts. We have not run any analysis on the biases prevalent in our datasets. Neither have we explicitly filtered the content. This implies that a model trained on our dataset may potentially reflect harmful biases and toxic text that exist in our dataset. ### Other Known Limitations No additional known limitations. ## Additional Information ### Dataset Curators Jun Shern Chan, Michael Pieler, Jonathan Jao, Jérémy Scheurer, Ethan Perez ### Licensing Information Apache 2.0 ### Citation Information ``` @misc{chan2022few, author = {Chan, Jun Shern and Pieler, Michael and Jao, Jonathan and Scheurer, Jérémy and Perez, Ethan}, title = {Few-shot Adaptation Works with UnpredicTable Data}, publisher={arXiv}, year = {2022}, url = {https://arxiv.org/abs/2208.01009} } ```
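The 'input'/'options'/'output' record layout described in this card's Data Instances section can be illustrated with a short sketch. The records below are hypothetical; only the field names ('task', 'input', 'options', 'output') follow the card's Data Fields description:

```python
# Sketch: assembling a few-shot prompt from UnpredicTable-style records.
# The example records are invented for illustration; only the field
# names follow the dataset card.

def format_example(example, include_output=True):
    """Render one record as a prompt segment."""
    lines = [f"Input: {example['input']}"]
    if example.get("options"):  # only present for multiple-choice tasks
        lines.append("Options: " + ", ".join(example["options"]))
    if include_output:
        lines.append(f"Output: {example['output']}")
    return "\n".join(lines)

def build_few_shot_prompt(examples, query):
    """Concatenate solved examples followed by one unsolved query."""
    shots = [format_example(ex) for ex in examples]
    shots.append(format_example(query, include_output=False) + "\nOutput:")
    return "\n\n".join(shots)

records = [
    {"task": "demo", "input": "song: Blue Monday", "options": [], "output": "New Order"},
    {"task": "demo", "input": "song: Karma Police", "options": [], "output": "Radiohead"},
]
query = {"task": "demo", "input": "song: Heroes", "options": [], "output": ""}
prompt = build_few_shot_prompt(records, query)
```

Concatenating several records of the same task this way mirrors the "few-shot task" framing the card describes.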
14,795
[ [ -0.04083251953125, -0.03924560546875, 0.031951904296875, 0.02276611328125, 0.005889892578125, 0.0118255615234375, -0.00928497314453125, -0.044189453125, 0.03564453125, 0.0196380615234375, -0.072509765625, -0.044891357421875, -0.04638671875, 0.0155029296875, ...
rongzhangibm/NaturalQuestionsV2
2022-07-07T05:22:20.000Z
[ "task_categories:question-answering", "task_ids:open-domain-qa", "annotations_creators:no-annotation", "language_creators:crowdsourced", "multilinguality:monolingual", "size_categories:100K<n<1M", "source_datasets:original", "language:en", "license:cc-by-sa-3.0", "region:us" ]
rongzhangibm
null
null
5
10
2022-07-06T13:50:46
--- annotations_creators: - no-annotation language_creators: - crowdsourced language: - en license: - cc-by-sa-3.0 multilinguality: - monolingual pretty_name: Natural Questions size_categories: - 100K<n<1M source_datasets: - original task_categories: - question-answering task_ids: - open-domain-qa paperswithcode_id: natural-questions --- # Dataset Card for Natural Questions ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [https://ai.google.com/research/NaturalQuestions/dataset](https://ai.google.com/research/NaturalQuestions/dataset) - **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Point of Contact:** [More Information 
Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Size of downloaded dataset files:** 42981 MB - **Size of the generated dataset:** 139706 MB - **Total amount of disk used:** 182687 MB ### Dataset Summary The NQ corpus contains questions from real users, and it requires QA systems to read and comprehend an entire Wikipedia article that may or may not contain the answer to the question. The inclusion of real user questions, and the requirement that solutions should read an entire page to find the answer, cause NQ to be a more realistic and challenging task than prior QA datasets. ### Supported Tasks and Leaderboards [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Languages [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Dataset Structure ### Data Instances #### default - **Size of downloaded dataset files:** 42981 MB - **Size of the generated dataset:** 139706 MB - **Total amount of disk used:** 182687 MB An example of 'train' looks as follows. ``` ``` ### Data Fields The data fields are the same among all splits. 
#### default ``` "id": datasets.Value("string"), "document": { "title": datasets.Value("string"), "url": datasets.Value("string"), "html": datasets.Value("string"), "tokens": datasets.features.Sequence( { "token": datasets.Value("string"), "is_html": datasets.Value("bool"), "start_byte": datasets.Value("int64"), "end_byte": datasets.Value("int64"), } ), }, "question": { "text": datasets.Value("string"), "tokens": datasets.features.Sequence(datasets.Value("string")), }, "long_answer_candidates": datasets.features.Sequence( { "start_token": datasets.Value("int64"), "end_token": datasets.Value("int64"), "start_byte": datasets.Value("int64"), "end_byte": datasets.Value("int64"), "top_level": datasets.Value("bool"), } ), "annotations": datasets.features.Sequence( { "id": datasets.Value("string"), "long_answer": { "start_token": datasets.Value("int64"), "end_token": datasets.Value("int64"), "start_byte": datasets.Value("int64"), "end_byte": datasets.Value("int64"), "candidate_index": datasets.Value("int64") }, "short_answers": datasets.features.Sequence( { "start_token": datasets.Value("int64"), "end_token": datasets.Value("int64"), "start_byte": datasets.Value("int64"), "end_byte": datasets.Value("int64"), "text": datasets.Value("string"), } ), "yes_no_answer": datasets.features.ClassLabel( names=["NO", "YES"] ), # Can also be -1 for NONE. } ) ``` ### Data Splits | name | train | validation | |---------|-------:|-----------:| | default | 307373 | 7830 | | dev | N/A | 7830 | ## Dataset Creation ### Curation Rationale [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Source Data #### Initial Data Collection and Normalization [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the source language producers? 
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Annotations #### Annotation process [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the annotators? [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Personal and Sensitive Information [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Discussion of Biases [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Other Known Limitations [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Additional Information ### Dataset Curators [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Licensing Information [Creative Commons Attribution-ShareAlike 3.0 Unported](https://creativecommons.org/licenses/by-sa/3.0/). ### Citation Information ``` @article{47761, title = {Natural Questions: a Benchmark for Question Answering Research}, author = {Tom Kwiatkowski and Jennimaria Palomaki and Olivia Redfield and Michael Collins and Ankur Parikh and Chris Alberti and Danielle Epstein and Illia Polosukhin and Matthew Kelcey and Jacob Devlin and Kenton Lee and Kristina N. 
Toutanova and Llion Jones and Ming-Wei Chang and Andrew Dai and Jakob Uszkoreit and Quoc Le and Slav Petrov}, year = {2019}, journal = {Transactions of the Association of Computational Linguistics} } ``` ### Contributions
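The byte-offset fields in the schema above (`start_byte` / `end_byte`) can be used to recover answer spans from the document HTML. A minimal sketch, using a hypothetical miniature document rather than real NQ data:

```python
# Sketch: recovering a short-answer span from NQ-style byte offsets.
# The document and offsets below are invented; only the field names
# (start_byte / end_byte) follow the schema in the dataset card.

def extract_span(html, start_byte, end_byte):
    """Slice a span out of the UTF-8 encoded document bytes."""
    return html.encode("utf-8")[start_byte:end_byte].decode("utf-8")

doc_html = "<p>The Transactions of the ACL published Natural Questions in 2019.</p>"
# A hypothetical short-answer annotation pointing at "2019".
short_answer = {"start_byte": 62, "end_byte": 66}
answer_text = extract_span(doc_html, short_answer["start_byte"], short_answer["end_byte"])
# answer_text == "2019"
```

Offsets are into the raw byte stream, so slicing must happen on the encoded bytes (not on the Python string) whenever the document contains multi-byte characters.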
7,603
[ [ -0.053131103515625, -0.05792236328125, 0.01340484619140625, -0.0023288726806640625, -0.0123291015625, 0.002674102783203125, -0.02001953125, -0.0259857177734375, 0.050079345703125, 0.035858154296875, -0.05859375, -0.057220458984375, -0.02520751953125, 0.01763...
MicPie/unpredictable_cluster29
2022-08-04T20:02:57.000Z
[ "task_categories:multiple-choice", "task_categories:question-answering", "task_categories:zero-shot-classification", "task_categories:text2text-generation", "task_categories:table-question-answering", "task_categories:text-generation", "task_categories:text-classification", "task_categories:tabular-cl...
MicPie
The UnpredicTable dataset consists of web tables formatted as few-shot tasks for fine-tuning language models to improve their few-shot performance. For more details please see the accompanying dataset card.
@misc{chan2022few, author = {Chan, Jun Shern and Pieler, Michael and Jao, Jonathan and Scheurer, Jérémy and Perez, Ethan}, title = {Few-shot Adaptation Works with UnpredicTable Data}, publisher={arXiv}, year = {2022}, url = {https://arxiv.org/abs/2208.01009} }
0
10
2022-07-08T19:06:50
--- annotations_creators: - no-annotation language_creators: - found language: - en license: - apache-2.0 multilinguality: - monolingual pretty_name: UnpredicTable-cluster29 size_categories: - 100K<n<1M source_datasets: [] task_categories: - multiple-choice - question-answering - zero-shot-classification - text2text-generation - table-question-answering - text-generation - text-classification - tabular-classification task_ids: - multiple-choice-qa - extractive-qa - open-domain-qa - closed-domain-qa - closed-book-qa - open-book-qa - language-modeling - multi-class-classification - natural-language-inference - topic-classification - multi-label-classification - tabular-multi-class-classification - tabular-multi-label-classification --- # Dataset Card for "UnpredicTable-cluster29" - Dataset of Few-shot Tasks from Tables ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-instances) - [Data Splits](#data-instances) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) ## Dataset Description - **Homepage:** https://ethanperez.net/unpredictable - **Repository:** https://github.com/JunShern/few-shot-adaptation - **Paper:** Few-shot Adaptation Works with 
UnpredicTable Data - **Point of Contact:** junshern@nyu.edu, perez@nyu.edu ### Dataset Summary The UnpredicTable dataset consists of web tables formatted as few-shot tasks for fine-tuning language models to improve their few-shot performance. There are several dataset versions available: * [UnpredicTable-full](https://huggingface.co/datasets/MicPie/unpredictable_full): Starting from the initial WTC corpus of 50M tables, we apply our tables-to-tasks procedure to produce our resulting dataset, [UnpredicTable-full](https://huggingface.co/datasets/MicPie/unpredictable_full), which comprises 413,299 tasks from 23,744 unique websites. * [UnpredicTable-unique](https://huggingface.co/datasets/MicPie/unpredictable_unique): This is the same as [UnpredicTable-full](https://huggingface.co/datasets/MicPie/unpredictable_full) but filtered to have a maximum of one task per website. [UnpredicTable-unique](https://huggingface.co/datasets/MicPie/unpredictable_unique) contains exactly 23,744 tasks from 23,744 websites. * [UnpredicTable-5k](https://huggingface.co/datasets/MicPie/unpredictable_5k): This dataset contains 5k random tables from the full dataset. 
* UnpredicTable data subsets based on a manual human quality rating (please see our publication for details of the ratings): * [UnpredicTable-rated-low](https://huggingface.co/datasets/MicPie/unpredictable_rated-low) * [UnpredicTable-rated-medium](https://huggingface.co/datasets/MicPie/unpredictable_rated-medium) * [UnpredicTable-rated-high](https://huggingface.co/datasets/MicPie/unpredictable_rated-high) * UnpredicTable data subsets based on the website of origin: * [UnpredicTable-baseball-fantasysports-yahoo-com](https://huggingface.co/datasets/MicPie/unpredictable_baseball-fantasysports-yahoo-com) * [UnpredicTable-bulbapedia-bulbagarden-net](https://huggingface.co/datasets/MicPie/unpredictable_bulbapedia-bulbagarden-net) * [UnpredicTable-cappex-com](https://huggingface.co/datasets/MicPie/unpredictable_cappex-com) * [UnpredicTable-cram-com](https://huggingface.co/datasets/MicPie/unpredictable_cram-com) * [UnpredicTable-dividend-com](https://huggingface.co/datasets/MicPie/unpredictable_dividend-com) * [UnpredicTable-dummies-com](https://huggingface.co/datasets/MicPie/unpredictable_dummies-com) * [UnpredicTable-en-wikipedia-org](https://huggingface.co/datasets/MicPie/unpredictable_en-wikipedia-org) * [UnpredicTable-ensembl-org](https://huggingface.co/datasets/MicPie/unpredictable_ensembl-org) * [UnpredicTable-gamefaqs-com](https://huggingface.co/datasets/MicPie/unpredictable_gamefaqs-com) * [UnpredicTable-mgoblog-com](https://huggingface.co/datasets/MicPie/unpredictable_mgoblog-com) * [UnpredicTable-mmo-champion-com](https://huggingface.co/datasets/MicPie/unpredictable_mmo-champion-com) * [UnpredicTable-msdn-microsoft-com](https://huggingface.co/datasets/MicPie/unpredictable_msdn-microsoft-com) * [UnpredicTable-phonearena-com](https://huggingface.co/datasets/MicPie/unpredictable_phonearena-com) * [UnpredicTable-sittercity-com](https://huggingface.co/datasets/MicPie/unpredictable_sittercity-com) * 
[UnpredicTable-sporcle-com](https://huggingface.co/datasets/MicPie/unpredictable_sporcle-com) * [UnpredicTable-studystack-com](https://huggingface.co/datasets/MicPie/unpredictable_studystack-com) * [UnpredicTable-support-google-com](https://huggingface.co/datasets/MicPie/unpredictable_support-google-com) * [UnpredicTable-w3-org](https://huggingface.co/datasets/MicPie/unpredictable_w3-org) * [UnpredicTable-wiki-openmoko-org](https://huggingface.co/datasets/MicPie/unpredictable_wiki-openmoko-org) * [UnpredicTable-wkdu-org](https://huggingface.co/datasets/MicPie/unpredictable_wkdu-org) * UnpredicTable data subsets based on clustering (for the clustering details please see our publication): * [UnpredicTable-cluster00](https://huggingface.co/datasets/MicPie/unpredictable_cluster00) * [UnpredicTable-cluster01](https://huggingface.co/datasets/MicPie/unpredictable_cluster01) * [UnpredicTable-cluster02](https://huggingface.co/datasets/MicPie/unpredictable_cluster02) * [UnpredicTable-cluster03](https://huggingface.co/datasets/MicPie/unpredictable_cluster03) * [UnpredicTable-cluster04](https://huggingface.co/datasets/MicPie/unpredictable_cluster04) * [UnpredicTable-cluster05](https://huggingface.co/datasets/MicPie/unpredictable_cluster05) * [UnpredicTable-cluster06](https://huggingface.co/datasets/MicPie/unpredictable_cluster06) * [UnpredicTable-cluster07](https://huggingface.co/datasets/MicPie/unpredictable_cluster07) * [UnpredicTable-cluster08](https://huggingface.co/datasets/MicPie/unpredictable_cluster08) * [UnpredicTable-cluster09](https://huggingface.co/datasets/MicPie/unpredictable_cluster09) * [UnpredicTable-cluster10](https://huggingface.co/datasets/MicPie/unpredictable_cluster10) * [UnpredicTable-cluster11](https://huggingface.co/datasets/MicPie/unpredictable_cluster11) * [UnpredicTable-cluster12](https://huggingface.co/datasets/MicPie/unpredictable_cluster12) * [UnpredicTable-cluster13](https://huggingface.co/datasets/MicPie/unpredictable_cluster13) * 
[UnpredicTable-cluster14](https://huggingface.co/datasets/MicPie/unpredictable_cluster14) * [UnpredicTable-cluster15](https://huggingface.co/datasets/MicPie/unpredictable_cluster15) * [UnpredicTable-cluster16](https://huggingface.co/datasets/MicPie/unpredictable_cluster16) * [UnpredicTable-cluster17](https://huggingface.co/datasets/MicPie/unpredictable_cluster17) * [UnpredicTable-cluster18](https://huggingface.co/datasets/MicPie/unpredictable_cluster18) * [UnpredicTable-cluster19](https://huggingface.co/datasets/MicPie/unpredictable_cluster19) * [UnpredicTable-cluster20](https://huggingface.co/datasets/MicPie/unpredictable_cluster20) * [UnpredicTable-cluster21](https://huggingface.co/datasets/MicPie/unpredictable_cluster21) * [UnpredicTable-cluster22](https://huggingface.co/datasets/MicPie/unpredictable_cluster22) * [UnpredicTable-cluster23](https://huggingface.co/datasets/MicPie/unpredictable_cluster23) * [UnpredicTable-cluster24](https://huggingface.co/datasets/MicPie/unpredictable_cluster24) * [UnpredicTable-cluster25](https://huggingface.co/datasets/MicPie/unpredictable_cluster25) * [UnpredicTable-cluster26](https://huggingface.co/datasets/MicPie/unpredictable_cluster26) * [UnpredicTable-cluster27](https://huggingface.co/datasets/MicPie/unpredictable_cluster27) * [UnpredicTable-cluster28](https://huggingface.co/datasets/MicPie/unpredictable_cluster28) * [UnpredicTable-cluster29](https://huggingface.co/datasets/MicPie/unpredictable_cluster29) * [UnpredicTable-cluster-noise](https://huggingface.co/datasets/MicPie/unpredictable_cluster-noise) ### Supported Tasks and Leaderboards Since the tables come from the web, the distribution of tasks and topics is very broad. The shape of our dataset is very wide, i.e., we have 1000's of tasks, while each task has only a few examples, compared to most current NLP datasets which are very deep, i.e., 10s of tasks with many examples. 
This implies that our dataset covers a broad range of potential tasks, e.g., multiple-choice, question-answering, table-question-answering, text-classification, etc. The intended use of this dataset is to improve few-shot performance by fine-tuning/pre-training on our dataset. ### Languages English ## Dataset Structure ### Data Instances Each task is represented as a jsonline file and consists of several few-shot examples. Each example is a dictionary containing a field 'task', which identifies the task, followed by an 'input', 'options', and 'output' field. The 'input' field contains several column elements of the same row in the table, while the 'output' field is a target which represents an individual column of the same row. Each task contains several such examples which can be concatenated as a few-shot task. In the case of multiple choice classification, the 'options' field contains the possible classes that a model needs to choose from. There are also additional meta-data fields such as 'pageTitle', 'title', 'outputColName', 'url', 'wdcFile'. ### Data Fields 'task': task identifier 'input': column elements of a specific row in the table. 'options': for multiple choice classification, it provides the options to choose from. 'output': target column element of the same row as input. 'pageTitle': the title of the page containing the table. 'outputColName': output column name 'url': url to the website containing the table 'wdcFile': WDC Web Table Corpus file ### Data Splits The UnpredicTable datasets do not come with additional data splits. ## Dataset Creation ### Curation Rationale Few-shot training on multi-task datasets has been demonstrated to improve language models' few-shot learning (FSL) performance on new tasks, but it is unclear which training tasks lead to effective downstream task adaptation. Few-shot learning datasets are typically produced with expensive human curation, limiting the scale and diversity of the training tasks available to study. 
As an alternative source of few-shot data, we automatically extract 413,299 tasks from diverse internet tables. We provide this as a research resource to investigate the relationship between training data and few-shot learning. ### Source Data #### Initial Data Collection and Normalization We use internet tables from the English-language Relational Subset of the WDC Web Table Corpus 2015 (WTC). The WTC dataset tables were extracted from the July 2015 Common Crawl web corpus (http://webdatacommons.org/webtables/2015/EnglishStatistics.html). The dataset contains 50,820,165 tables from 323,160 web domains. We then convert the tables into few-shot learning tasks. Please see our publication for more details on the data collection and conversion pipeline. #### Who are the source language producers? The dataset is extracted from [WDC Web Table Corpora](http://webdatacommons.org/webtables/). ### Annotations #### Annotation process Manual annotation was only carried out for the [UnpredicTable-rated-low](https://huggingface.co/datasets/MicPie/unpredictable_rated-low), [UnpredicTable-rated-medium](https://huggingface.co/datasets/MicPie/unpredictable_rated-medium), and [UnpredicTable-rated-high](https://huggingface.co/datasets/MicPie/unpredictable_rated-high) data subsets to rate task quality. Detailed instructions of the annotation instructions can be found in our publication. #### Who are the annotators? Annotations were carried out by a lab assistant. ### Personal and Sensitive Information The data was extracted from [WDC Web Table Corpora](http://webdatacommons.org/webtables/), which in turn extracted tables from the [Common Crawl](https://commoncrawl.org/). We did not filter the data in any way. 
Thus any user identities or otherwise sensitive information (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history, etc.) might be contained in our dataset. ## Considerations for Using the Data ### Social Impact of Dataset This dataset is intended for use as a research resource to investigate the relationship between training data and few-shot learning. As such, it contains high- and low-quality data, as well as diverse content that may be untruthful or inappropriate. Without careful investigation, it should not be used for training models that will be deployed for use in decision-critical or user-facing situations. ### Discussion of Biases Since our dataset contains tables that are scraped from the web, it will also contain many toxic, racist, sexist, and otherwise harmful biases and texts. We have not run any analysis on the biases prevalent in our datasets. Neither have we explicitly filtered the content. This implies that a model trained on our dataset may potentially reflect harmful biases and toxic text that exist in our dataset. ### Other Known Limitations No additional known limitations. ## Additional Information ### Dataset Curators Jun Shern Chan, Michael Pieler, Jonathan Jao, Jérémy Scheurer, Ethan Perez ### Licensing Information Apache 2.0 ### Citation Information ``` @misc{chan2022few, author = {Chan, Jun Shern and Pieler, Michael and Jao, Jonathan and Scheurer, Jérémy and Perez, Ethan}, title = {Few-shot Adaptation Works with UnpredicTable Data}, publisher={arXiv}, year = {2022}, url = {https://arxiv.org/abs/2208.01009} } ```
14,797
[ [ -0.040924072265625, -0.0391845703125, 0.0330810546875, 0.0233001708984375, 0.006622314453125, 0.01209259033203125, -0.0108795166015625, -0.042236328125, 0.037384033203125, 0.0200653076171875, -0.07269287109375, -0.048431396484375, -0.0460205078125, 0.0135192...
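The few-shot format described in the UnpredicTable card above — each jsonlines example carrying 'task', 'input', 'options', and 'output' fields, with examples that "can be concatenated as a few-shot task" — can be sketched with a small prompt builder. Everything below (the toy records, the template, the function name) is invented for illustration and is not part of the dataset itself:

```python
# Minimal sketch of assembling a few-shot prompt from UnpredicTable-style
# examples. The records below are toy stand-ins that merely follow the
# card's schema ('task', 'input', 'options', 'output'); real examples come
# from the dataset's jsonlines task files.
examples = [
    {"task": "demo-task", "input": "Country: France | Capital:",
     "options": ["Paris", "Rome", "Madrid"], "output": "Paris"},
    {"task": "demo-task", "input": "Country: Italy | Capital:",
     "options": ["Paris", "Rome", "Madrid"], "output": "Rome"},
]

def build_few_shot_prompt(solved, query):
    """Concatenate solved examples, then append the unsolved query."""
    lines = [f"{ex['input']} {ex['output']}" for ex in solved]
    lines.append(query)
    return "\n".join(lines)

print(build_few_shot_prompt(examples, "Country: Spain | Capital:"))
```

For multiple-choice tasks, the 'options' field would additionally constrain the answer space, e.g. by scoring each option under the model and picking the most likely one.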
srivatsavaasista/textgenerator-ds-mini
2022-07-27T13:05:26.000Z
[ "region:us" ]
srivatsavaasista
null
null
0
10
2022-07-27T13:04:59
Entry not found
15
[ [ -0.021392822265625, -0.01494598388671875, 0.05718994140625, 0.028839111328125, -0.0350341796875, 0.046539306640625, 0.052490234375, 0.00507354736328125, 0.051361083984375, 0.01702880859375, -0.052093505859375, -0.01494598388671875, -0.06036376953125, 0.03790...
pinecone/movie-posters
2022-08-20T17:57:23.000Z
[ "region:us" ]
pinecone
null
null
0
10
2022-07-31T20:45:18
Entry not found
15
[ [ -0.021392822265625, -0.01494598388671875, 0.05718994140625, 0.028839111328125, -0.0350341796875, 0.046539306640625, 0.052490234375, 0.00507354736328125, 0.051361083984375, 0.01702880859375, -0.052093505859375, -0.01494598388671875, -0.06036376953125, 0.03790...
snoop2head/enron_aeslc_emails
2022-08-04T15:54:22.000Z
[ "region:us" ]
snoop2head
null
null
1
10
2022-08-04T15:53:14
Entry not found
15
[ [ -0.02142333984375, -0.01495361328125, 0.05718994140625, 0.0288238525390625, -0.035064697265625, 0.046539306640625, 0.052520751953125, 0.005062103271484375, 0.0513916015625, 0.016998291015625, -0.052093505859375, -0.014984130859375, -0.060394287109375, 0.0379...
UCL-DARK/ludwig
2022-08-11T15:51:56.000Z
[ "task_categories:text-generation", "task_categories:fill-mask", "task_ids:language-modeling", "task_ids:masked-language-modeling", "annotations_creators:expert-generated", "language_creators:expert-generated", "multilinguality:monolingual", "size_categories:n<1K", "source_datasets:original", "lang...
UCL-DARK
TODO
TBC
6
10
2022-08-10T07:56:34
--- annotations_creators: - expert-generated language: - en language_creators: - expert-generated license: - cc-by-4.0 multilinguality: - monolingual pretty_name: ludwig size_categories: - n<1K source_datasets: - original tags: - implicature - pragmatics - language - llm - conversation - dialogue task_categories: - text-generation - fill-mask task_ids: - language-modeling - masked-language-modeling --- # Dataset Card for LUDWIG ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** - **Repository: https://github.com/ucl-dark/ludwig** - **Paper: TODO** - **Leaderboard: TODO** - **Point of Contact: Laura Ruis** ### Dataset Summary LUDWIG (**L**anguage **U**nderstanding **W**ith **I**mplied meanin**G**) is a dataset containing English conversational implicatures. Implicature is the act of meaning or implying one thing by saying something else. 
There are different types of implicatures, from simple ones like "Some guests came to the party" (implying not all guests came) to more complicated implicatures that depend on context like "A: Are you going to the party this Friday? B: There's a global pandemic.", implying no. Implicatures serve a wide range of goals in communication: efficiency, style, navigating social interactions, and more. We cannot fully understand utterances without understanding their implications. The implicatures in this dataset are conversational because they come in utterance-response tuples. Each tuple has an implicature associated with it, which is the implied meaning of the response. For example: Utterance: Are you going to the party this Friday? Response: There's a global pandemic. Implicature: No. This dataset can be used to evaluate language models on their pragmatic language understanding. ### Supported Tasks and Leaderboards - ```text-generation```: The dataset can be used to evaluate a model's ability to generate the correct next token, i.e. "yes" or "no", depending on the implicature. For example, if you pass the model an example wrapped in a template like "Esther asked 'Are you coming to the party this Friday' and Juan responded 'There's a global pandemic', which means" the correct completion would be "no". Success in this task can be determined by the ability to generate the correct answer or by the ability to give the right token a higher likelihood than the wrong token, e.g. p("no") > p("yes"). - ```fill-mask```: The dataset can be used to evaluate a model's ability to fill the correct token, i.e. "yes" or "no", depending on the implicature. For example, if you pass the model an example wrapped in a template like "Esther asked 'Are you coming to the party this Friday' and Juan responded 'There's a global pandemic', which means [mask]" the correct mask-fill would be "no". 
Success in this task can be determined by the ability to fill the correct answer or by the ability to give the right token a higher likelihood than the wrong token, e.g. p("no") > p("yes"). ### Languages English ## Dataset Structure ### Data Instances Find below an example of a 1-shot example instance (1-shot because there's 1 prompt example). ``` { "id": 1, "utterance": "Are you going to the party this Friday?", "response": "There's a global pandemic.", "implicature": "No.", "incoherent_implicature": "Yes.", "prompts": [ { "utterance": "Was that hot?", "response": "The sun was scorching.", "implicature": "Yes.", "incoherent_implicature": "No." } ] } ``` ### Data Fields ``` { "id": int, # unique identifier of data points "utterance": str, # the utterance in this example "response": str, # the response in this example "implicature": str, # the implied meaning of the response, e.g. 'yes' "incoherent_implicature": str, # the wrong implied meaning, e.g. 'no' "prompts": [ # optional: prompt examples from the validation set { "utterance": str, "response": str, "implicature": str, "incoherent_implicature": str, } ] } ``` ### Data Splits **Validation**: 118 instances that can be used for finetuning or few-shot learning **Test**: 600 instances that can be used for evaluating models. NB: the splits weren't originally part of the paper that presents this dataset. The same goes for the k-shot prompts. Added by @LauraRuis. ## Dataset Creation ### Curation Rationale Pragmatic language understanding is a crucial aspect of human communication, and implicatures are the primary object of study in this field. We want computational models of language to understand all the speakers' implications. ### Source Data #### Initial Data Collection and Normalization "Conversational implicatures in English dialogue: Annotated dataset", Elizabeth Jasmi George and Radhika Mamidi 2020. [Link to paper](https://doi.org/10.1016/j.procs.2020.04.251) #### Who are the source language producers? 
These written representations of the utterances are collected manually by scraping and transcribing from relevant sources from August 2019 to August 2020. The sources of dialogues in the data include TOEFL listening comprehension short conversations, movie dialogues from IMSDb and websites explaining idioms, similes, metaphors and hyperboles. The implicatures are annotated manually. ### Annotations #### Annotation process Manually annotated by dataset collectors. #### Who are the annotators? Authors of the original paper. ### Personal and Sensitive Information All the data is public and not sensitive. ## Considerations for Using the Data ### Social Impact of Dataset Any application that requires communicating with humans requires pragmatic language understanding. ### Discussion of Biases Implicatures can be biased toward specific cultures. For example, whether the Pope is Catholic (a commonly used response implicature to indicate "yes") might not be common knowledge for everyone. Implicatures are also language-specific: the way people use pragmatic language depends on the language. This dataset only focuses on the English language. ### Other Known Limitations None yet. ## Additional Information ### Dataset Curators Elizabeth Jasmi George and Radhika Mamidi ### Licensing Information [license](https://creativecommons.org/licenses/by/4.0/) ### Citation Information ``` @article{George:Mamidi:2020, author = {George, Elizabeth Jasmi and Mamidi, Radhika}, doi = {10.1016/j.procs.2020.04.251}, journal = {Procedia Computer Science}, keywords = {}, note = {https://doi.org/10.1016/j.procs.2020.04.251}, number = {}, pages = {2316-2323}, title = {Conversational implicatures in English dialogue: Annotated dataset}, url = {https://app.dimensions.ai/details/publication/pub.1128198497}, volume = {171}, year = {2020} } ``` ### Contributions Thanks to [@LauraRuis](https://github.com/LauraRuis) for adding this dataset.
7,953
[ [ -0.02392578125, -0.072265625, 0.006072998046875, 0.01580810546875, -0.00799560546875, -0.005096435546875, -0.0181732177734375, -0.0198211669921875, 0.01236724853515625, 0.034515380859375, -0.04290771484375, -0.04840087890625, -0.034393310546875, 0.0146560668...
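The evaluation rule described in the LUDWIG card above — wrap an utterance/response pair in a template and count the prediction correct when the model assigns the right token a higher likelihood than the wrong one, e.g. p("no") > p("yes") — can be sketched as follows. This is illustrative only: `toy_logprob` is a made-up stand-in for a real language model's scoring function, and the helper names are assumptions, not part of the dataset or its paper:

```python
# Sketch of the likelihood-comparison evaluation from the LUDWIG card.
def wrap(utterance, response):
    # Template taken from the card's example.
    return (f"Esther asked '{utterance}' and Juan responded "
            f"'{response}', which means")

def is_correct(logprob_fn, example):
    """Correct iff the coherent implicature token outscores the incoherent one."""
    prompt = wrap(example["utterance"], example["response"])
    good = example["implicature"].rstrip(".").lower()
    bad = example["incoherent_implicature"].rstrip(".").lower()
    return logprob_fn(prompt, good) > logprob_fn(prompt, bad)

# Toy scorer: prefers "no" whenever the prompt mentions a pandemic
# (illustration only, not a real model).
def toy_logprob(prompt, token):
    return 0.0 if (token == "no") == ("pandemic" in prompt) else -1.0

example = {
    "utterance": "Are you going to the party this Friday?",
    "response": "There's a global pandemic.",
    "implicature": "No.",
    "incoherent_implicature": "Yes.",
}
print(is_correct(toy_logprob, example))  # → True
```

In practice `logprob_fn` would query a causal LM (for the text-generation task) or a masked LM with the template's `[mask]` slot (for the fill-mask task).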
imodels/compas-recidivism
2022-08-13T04:17:29.000Z
[ "task_categories:tabular-classification", "size_categories:1K<n<10K", "interpretability", "fairness", "region:us" ]
imodels
null
null
1
10
2022-08-13T03:55:20
--- annotations_creators: [] language: [] language_creators: [] license: [] multilinguality: [] pretty_name: compas-recidivism size_categories: - 1K<n<10K source_datasets: [] tags: - interpretability - fairness task_categories: - tabular-classification task_ids: [] --- Port of the compas-recidivism dataset from ProPublica (github [here](https://github.com/propublica/compas-analysis)). See details there and use carefully, as there are serious known social impacts and biases present in this dataset. Basic preprocessing done by the [imodels team](https://github.com/csinva/imodels) in [this notebook](https://github.com/csinva/imodels-data/blob/master/notebooks_fetch_data/00_get_datasets_custom.ipynb). The target is the binary outcome `is_recid`. ### Sample usage Load the data: ``` import pandas as pd from datasets import load_dataset dataset = load_dataset("imodels/compas-recidivism") df = pd.DataFrame(dataset['train']) X = df.drop(columns=['is_recid']) y = df['is_recid'].values ``` Fit a model: ``` import imodels import numpy as np m = imodels.FIGSClassifier(max_rules=5) m.fit(X, y) print(m) ``` Evaluate: ``` df_test = pd.DataFrame(dataset['test']) X_test = df_test.drop(columns=['is_recid']) y_test = df_test['is_recid'].values print('accuracy', np.mean(m.predict(X_test) == y_test)) ```
1,289
[ [ -0.0199737548828125, -0.0287017822265625, -0.0018720626831054688, 0.018951416015625, -0.016021728515625, -0.0013914108276367188, 0.0017213821411132812, -0.016357421875, 0.035186767578125, 0.0386962890625, -0.0386962890625, -0.0355224609375, -0.05029296875, 0...
teven/code_contests
2022-08-24T20:01:04.000Z
[ "region:us" ]
teven
null
null
2
10
2022-08-24T17:28:47
HF-datasets version of DeepMind's [code_contests](https://github.com/deepmind/code_contests) dataset, notably used for AlphaCode. 1 row per solution, no test data or incorrect solutions included (only name/source/description/solution/language/difficulty)
252
[ [ -0.0287628173828125, -0.036041259765625, 0.0207672119140625, 0.0230560302734375, -0.0028591156005859375, 0.005828857421875, -0.012969970703125, 0.00959014892578125, 0.024383544921875, 0.046112060546875, -0.07586669921875, -0.05877685546875, -0.009033203125, ...
unpredictable/unpredictable_5k
2022-08-28T18:13:41.000Z
[ "task_categories:multiple-choice", "task_categories:question-answering", "task_categories:zero-shot-classification", "task_categories:text2text-generation", "task_categories:table-question-answering", "task_categories:text-generation", "task_categories:text-classification", "task_categories:tabular-cl...
unpredictable
The UnpredicTable dataset consists of web tables formatted as few-shot tasks for fine-tuning language models to improve their few-shot performance. For more details please see the accompanying dataset card.
null
0
10
2022-08-28T17:37:14
--- annotations_creators: - no-annotation language_creators: - found language: - en license: - apache-2.0 multilinguality: - monolingual pretty_name: UnpredicTable-5k size_categories: - 100K<n<1M source_datasets: [] task_categories: - multiple-choice - question-answering - zero-shot-classification - text2text-generation - table-question-answering - text-generation - text-classification - tabular-classification task_ids: - multiple-choice-qa - extractive-qa - open-domain-qa - closed-domain-qa - closed-book-qa - open-book-qa - language-modeling - multi-class-classification - natural-language-inference - topic-classification - multi-label-classification - tabular-multi-class-classification - tabular-multi-label-classification --- # Dataset Card for "UnpredicTable-5k" - Dataset of Few-shot Tasks from Tables ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-instances) - [Data Splits](#data-instances) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) ## Dataset Description - **Repository:** https://github.com/AnonCodeShare/few-shot-adaptation - **Paper:** Few-shot Adaptation Works with UnpredicTable Data ### Dataset Summary The UnpredicTable dataset consists 
of web tables formatted as few-shot tasks for fine-tuning language models to improve their few-shot performance. There are several dataset versions available: * [UnpredicTable-full](https://huggingface.co/datasets/unpredictable/unpredictable_full): Starting from the initial WTC corpus of 50M tables, we apply our tables-to-tasks procedure to produce our resulting dataset, [UnpredicTable-full](https://huggingface.co/datasets/unpredictable/unpredictable_full), which comprises 413,299 tasks from 23,744 unique websites. * [UnpredicTable-unique](https://huggingface.co/datasets/unpredictable/unpredictable_unique): This is the same as [UnpredicTable-full](https://huggingface.co/datasets/unpredictable/unpredictable_full) but filtered to have a maximum of one task per website. [UnpredicTable-unique](https://huggingface.co/datasets/unpredictable/unpredictable_unique) contains exactly 23,744 tasks from 23,744 websites. * [UnpredicTable-5k](https://huggingface.co/datasets/unpredictable/unpredictable_5k): This dataset contains 5k random tables from the full dataset. * UnpredicTable data subsets based on the website of origin: * [UnpredicTable-support-google-com](https://huggingface.co/datasets/unpredictable/unpredictable_support-google-com) ### Supported Tasks and Leaderboards Since the tables come from the web, the distribution of tasks and topics is very broad. The shape of our dataset is very wide, i.e., we have 1000's of tasks, while each task has only a few examples, compared to most current NLP datasets which are very deep, i.e., 10s of tasks with many examples. This implies that our dataset covers a broad range of potential tasks, e.g., multiple-choice, question-answering, table-question-answering, text-classification, etc. The intended use of this dataset is to improve few-shot performance by fine-tuning/pre-training on our dataset. 
### Languages English ## Dataset Structure ### Data Instances Each task is represented as a jsonline file and consists of several few-shot examples. Each example is a dictionary containing a field 'task', which identifies the task, followed by an 'input', 'options', and 'output' field. The 'input' field contains several column elements of the same row in the table, while the 'output' field is a target which represents an individual column of the same row. Each task contains several such examples which can be concatenated as a few-shot task. In the case of multiple choice classification, the 'options' field contains the possible classes that a model needs to choose from. There are also additional meta-data fields such as 'pageTitle', 'title', 'outputColName', 'url', 'wdcFile'. ### Data Fields 'task': task identifier 'input': column elements of a specific row in the table. 'options': for multiple choice classification, it provides the options to choose from. 'output': target column element of the same row as input. 'pageTitle': the title of the page containing the table. 'outputColName': output column name 'url': url to the website containing the table 'wdcFile': WDC Web Table Corpus file ### Data Splits The UnpredicTable datasets do not come with additional data splits. ## Dataset Creation ### Curation Rationale Few-shot training on multi-task datasets has been demonstrated to improve language models' few-shot learning (FSL) performance on new tasks, but it is unclear which training tasks lead to effective downstream task adaptation. Few-shot learning datasets are typically produced with expensive human curation, limiting the scale and diversity of the training tasks available to study. As an alternative source of few-shot data, we automatically extract 413,299 tasks from diverse internet tables. We provide this as a research resource to investigate the relationship between training data and few-shot learning. 
### Source Data #### Initial Data Collection and Normalization We use internet tables from the English-language Relational Subset of the WDC Web Table Corpus 2015 (WTC). The WTC dataset tables were extracted from the July 2015 Common Crawl web corpus (http://webdatacommons.org/webtables/2015/EnglishStatistics.html). The dataset contains 50,820,165 tables from 323,160 web domains. We then convert the tables into few-shot learning tasks. Please see our publication for more details on the data collection and conversion pipeline. #### Who are the source language producers? The dataset is extracted from [WDC Web Table Corpora](http://webdatacommons.org/webtables/). ### Personal and Sensitive Information The data was extracted from [WDC Web Table Corpora](http://webdatacommons.org/webtables/), which in turn extracted tables from the [Common Crawl](https://commoncrawl.org/). We did not filter the data in any way. Thus any user identities or otherwise sensitive information (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history, etc.) might be contained in our dataset. ## Considerations for Using the Data ### Social Impact of Dataset This dataset is intended for use as a research resource to investigate the relationship between training data and few-shot learning. As such, it contains high- and low-quality data, as well as diverse content that may be untruthful or inappropriate. Without careful investigation, it should not be used for training models that will be deployed for use in decision-critical or user-facing situations. ### Discussion of Biases Since our dataset contains tables that are scraped from the web, it will also contain many toxic, racist, sexist, and otherwise harmful biases and texts. 
We have not run any analysis on the biases prevalent in our datasets. Neither have we explicitly filtered the content. This implies that a model trained on our dataset may potentially reflect harmful biases and toxic text that exist in our dataset. ### Other Known Limitations No additional known limitations. ## Additional Information ### Licensing Information Apache 2.0
8,217
[ [ -0.0272369384765625, -0.056304931640625, 0.01253509521484375, -0.0017299652099609375, -0.00731658935546875, -0.007472991943359375, -0.01763916015625, -0.037200927734375, 0.0013074874877929688, 0.0275115966796875, -0.060546875, -0.059295654296875, -0.051147460937...
unpredictable/unpredictable_unique
2022-08-28T18:26:18.000Z
[ "task_categories:multiple-choice", "task_categories:question-answering", "task_categories:zero-shot-classification", "task_categories:text2text-generation", "task_categories:table-question-answering", "task_categories:text-generation", "task_categories:text-classification", "task_categories:tabular-cl...
unpredictable
The UnpredicTable dataset consists of web tables formatted as few-shot tasks for fine-tuning language models to improve their few-shot performance. For more details please see the accompanying dataset card.
null
0
10
2022-08-28T18:12:33
--- annotations_creators: - no-annotation language_creators: - found language: - en license: - apache-2.0 multilinguality: - monolingual pretty_name: UnpredicTable-unique size_categories: - 100K<n<1M source_datasets: [] task_categories: - multiple-choice - question-answering - zero-shot-classification - text2text-generation - table-question-answering - text-generation - text-classification - tabular-classification task_ids: - multiple-choice-qa - extractive-qa - open-domain-qa - closed-domain-qa - closed-book-qa - open-book-qa - language-modeling - multi-class-classification - natural-language-inference - topic-classification - multi-label-classification - tabular-multi-class-classification - tabular-multi-label-classification --- # Dataset Card for "UnpredicTable-unique" - Dataset of Few-shot Tasks from Tables ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-instances) - [Data Splits](#data-instances) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) ## Dataset Description - **Repository:** https://github.com/AnonCodeShare/few-shot-adaptation - **Paper:** Few-shot Adaptation Works with UnpredicTable Data ### Dataset Summary The UnpredicTable dataset 
consists of web tables formatted as few-shot tasks for fine-tuning language models to improve their few-shot performance. There are several dataset versions available: * [UnpredicTable-full](https://huggingface.co/datasets/unpredictable/unpredictable_full): Starting from the initial WTC corpus of 50M tables, we apply our tables-to-tasks procedure to produce our resulting dataset, [UnpredicTable-full](https://huggingface.co/datasets/unpredictable/unpredictable_full), which comprises 413,299 tasks from 23,744 unique websites. * [UnpredicTable-unique](https://huggingface.co/datasets/unpredictable/unpredictable_unique): This is the same as [UnpredicTable-full](https://huggingface.co/datasets/unpredictable/unpredictable_full) but filtered to have a maximum of one task per website. [UnpredicTable-unique](https://huggingface.co/datasets/unpredictable/unpredictable_unique) contains exactly 23,744 tasks from 23,744 websites. * [UnpredicTable-5k](https://huggingface.co/datasets/unpredictable/unpredictable_5k): This dataset contains 5k random tables from the full dataset. * UnpredicTable data subsets based on the website of origin: * [UnpredicTable-support-google-com](https://huggingface.co/datasets/unpredictable/unpredictable_support-google-com) ### Supported Tasks and Leaderboards Since the tables come from the web, the distribution of tasks and topics is very broad. The shape of our dataset is very wide, i.e., we have 1000's of tasks, while each task has only a few examples, compared to most current NLP datasets which are very deep, i.e., 10s of tasks with many examples. This implies that our dataset covers a broad range of potential tasks, e.g., multiple-choice, question-answering, table-question-answering, text-classification, etc. The intended use of this dataset is to improve few-shot performance by fine-tuning/pre-training on our dataset. 
### Languages English ## Dataset Structure ### Data Instances Each task is represented as a jsonline file and consists of several few-shot examples. Each example is a dictionary containing a field 'task', which identifies the task, followed by an 'input', 'options', and 'output' field. The 'input' field contains several column elements of the same row in the table, while the 'output' field is a target which represents an individual column of the same row. Each task contains several such examples which can be concatenated as a few-shot task. In the case of multiple choice classification, the 'options' field contains the possible classes that a model needs to choose from. There are also additional meta-data fields such as 'pageTitle', 'title', 'outputColName', 'url', 'wdcFile'. ### Data Fields 'task': task identifier 'input': column elements of a specific row in the table. 'options': for multiple choice classification, it provides the options to choose from. 'output': target column element of the same row as input. 'pageTitle': the title of the page containing the table. 'outputColName': output column name 'url': url to the website containing the table 'wdcFile': WDC Web Table Corpus file ### Data Splits The UnpredicTable datasets do not come with additional data splits. ## Dataset Creation ### Curation Rationale Few-shot training on multi-task datasets has been demonstrated to improve language models' few-shot learning (FSL) performance on new tasks, but it is unclear which training tasks lead to effective downstream task adaptation. Few-shot learning datasets are typically produced with expensive human curation, limiting the scale and diversity of the training tasks available to study. As an alternative source of few-shot data, we automatically extract 413,299 tasks from diverse internet tables. We provide this as a research resource to investigate the relationship between training data and few-shot learning. 
### Source Data #### Initial Data Collection and Normalization We use internet tables from the English-language Relational Subset of the WDC Web Table Corpus 2015 (WTC). The WTC dataset tables were extracted from the July 2015 Common Crawl web corpus (http://webdatacommons.org/webtables/2015/EnglishStatistics.html). The dataset contains 50,820,165 tables from 323,160 web domains. We then convert the tables into few-shot learning tasks. Please see our publication for more details on the data collection and conversion pipeline. #### Who are the source language producers? The dataset is extracted from [WDC Web Table Corpora](http://webdatacommons.org/webtables/). ### Personal and Sensitive Information The data was extracted from [WDC Web Table Corpora](http://webdatacommons.org/webtables/), which in turn extracted tables from the [Common Crawl](https://commoncrawl.org/). We did not filter the data in any way. Thus any user identities or otherwise sensitive information (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history, etc.) might be contained in our dataset. ## Considerations for Using the Data ### Social Impact of Dataset This dataset is intended for use as a research resource to investigate the relationship between training data and few-shot learning. As such, it contains high- and low-quality data, as well as diverse content that may be untruthful or inappropriate. Without careful investigation, it should not be used for training models that will be deployed for use in decision-critical or user-facing situations. ### Discussion of Biases Since our dataset contains tables that are scraped from the web, it will also contain many toxic, racist, sexist, and otherwise harmful biases and texts. 
We have not run any analysis on the biases prevalent in our datasets. Neither have we explicitly filtered the content. This implies that a model trained on our dataset may potentially reflect harmful biases and toxic text that exist in our dataset. ### Other Known Limitations No additional known limitations. ## Additional Information ### Licensing Information Apache 2.0
8,224
[ [ -0.026031494140625, -0.0584716796875, 0.01194000244140625, 0.00003337860107421875, -0.007843017578125, -0.00563812255859375, -0.01849365234375, -0.034820556640625, 0.0031566619873046875, 0.0276031494140625, -0.06103515625, -0.06036376953125, -0.050811767578125, ...
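The UnpredicTable card above describes each task as a JSON-lines file whose records carry 'task', 'input', 'options', and 'output' fields, with several examples concatenated into a few-shot prompt (the 'options' field populated only for multiple-choice classification). A minimal sketch of that assembly, assuming the documented schema — the two demo rows below are invented placeholders, not real dataset rows:

```python
import json

def build_few_shot_prompt(jsonl_text: str) -> str:
    """Concatenate UnpredicTable-style examples into one few-shot prompt."""
    examples = [json.loads(line) for line in jsonl_text.splitlines() if line.strip()]
    blocks = []
    for ex in examples:
        block = f"Input: {ex['input']}\n"
        if ex.get("options"):  # only present for multiple-choice tasks
            block += "Options: " + ", ".join(ex["options"]) + "\n"
        block += f"Output: {ex['output']}"
        blocks.append(block)
    return "\n\n".join(blocks)

# Invented placeholder rows following the documented field names (not real data)
demo = "\n".join([
    json.dumps({"task": "t1", "input": "[Name] Alice [Role]", "options": [], "output": "engineer"}),
    json.dumps({"task": "t1", "input": "[Name] Bob [Role]", "options": [], "output": "lawyer"}),
])
prompt = build_few_shot_prompt(demo)
print(prompt)
```

For evaluation, the final example's 'output' would typically be held out and the model asked to complete it.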
batterydata/cner
2022-09-05T16:07:43.000Z
[ "task_categories:token-classification", "language:en", "license:apache-2.0", "arxiv:2006.03039", "region:us" ]
batterydata
null
null
0
10
2022-09-05T15:49:33
--- language: - en license: - apache-2.0 task_categories: - token-classification pretty_name: 'Chemical Named Entity Recognition (CNER) Dataset for BatteryDataExtractor' --- # CNER Dataset ## Original Data Source #### CHEMDNER M. Krallinger, O. Rabal, F. Leitner, M. Vazquez, D. Salgado, Z. Lu, R. Leaman, Y. Lu, D. Ji, D. M. Lowe et al., J. Cheminf., 2015, 7, 1–17. #### MatScholar I. Weston, V. Tshitoyan, J. Dagdelen, O. Kononova, A. Trewartha, K. A. Persson, G. Ceder and A. Jain, J. Chem. Inf. Model., 2019, 59, 3692–3702. #### SOFC A. Friedrich, H. Adel, F. Tomazic, J. Hingerl, R. Benteau, A. Maruscyk and L. Lange, The SOFC-exp corpus and neural approaches to information extraction in the materials science domain, 2020, https://arxiv.org/abs/2006.03039. #### BioNLP G. Crichton, S. Pyysalo, B. Chiu and A. Korhonen, BMC Bioinf., 2017, 18, 1–14. ## Citation BatteryDataExtractor: battery-aware text-mining software embedded with BERT models
959
[ [ -0.006317138671875, -0.019775390625, 0.044769287109375, -0.007434844970703125, 0.005218505859375, 0.0121307373046875, -0.0014886856079101562, -0.0135498046875, 0.006031036376953125, 0.020721435546875, -0.03460693359375, -0.056060791015625, -0.0290985107421875, ...
chenghao/cuad_qa
2022-09-14T16:15:12.000Z
[ "task_categories:question-answering", "task_ids:closed-domain-qa", "task_ids:extractive-qa", "annotations_creators:expert-generated", "language_creators:found", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:en", "license:cc-by-4.0", "arxiv:210...
chenghao
null
null
0
10
2022-09-14T00:01:15
--- annotations_creators: - expert-generated language_creators: - found language: - en license: - cc-by-4.0 multilinguality: - monolingual size_categories: - 10K<n<100K source_datasets: - original task_categories: - question-answering task_ids: - closed-domain-qa - extractive-qa paperswithcode_id: cuad pretty_name: CUAD train-eval-index: - config: default task: question-answering task_id: extractive_question_answering splits: train_split: train eval_split: test col_mapping: question: question context: context answers: text: text answer_start: answer_start metrics: - type: cuad name: CUAD --- # Dataset Card for CUAD This is a modified version of original [CUAD](https://huggingface.co/datasets/cuad/blob/main/README.md) which trims the question to its label form. ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [Contract Understanding Atticus Dataset](https://www.atticusprojectai.org/cuad) - **Repository:** [Contract Understanding Atticus 
Dataset](https://github.com/TheAtticusProject/cuad/) - **Paper:** [CUAD: An Expert-Annotated NLP Dataset for Legal Contract Review](https://arxiv.org/abs/2103.06268) - **Point of Contact:** [Atticus Project Team](info@atticusprojectai.org) ### Dataset Summary Contract Understanding Atticus Dataset (CUAD) v1 is a corpus of more than 13,000 labels in 510 commercial legal contracts that have been manually labeled to identify 41 categories of important clauses that lawyers look for when reviewing contracts in connection with corporate transactions. CUAD is curated and maintained by The Atticus Project, Inc. to support NLP research and development in legal contract review. Analysis of CUAD can be found at https://arxiv.org/abs/2103.06268. Code for replicating the results and the trained model can be found at https://github.com/TheAtticusProject/cuad. ### Supported Tasks and Leaderboards [More Information Needed] ### Languages The dataset contains samples in English only. ## Dataset Structure ### Data Instances An example of 'train' looks as follows. ``` This example was too long and was cropped: { "answers": { "answer_start": [44], "text": ['DISTRIBUTOR AGREEMENT'] }, "context": 'EXHIBIT 10.6\n\n DISTRIBUTOR AGREEMENT\n\n THIS DISTRIBUTOR AGREEMENT (the "Agreement") is made by and between Electric City Corp., a Delaware corporation ("Company") and Electric City of Illinois LLC ("Distributor") this 7th day of September, 1999...', "id": "LIMEENERGYCO_09_09_1999-EX-10-DISTRIBUTOR AGREEMENT__Document Name_0", "question": "Highlight the parts (if any) of this contract related to "Document Name" that should be reviewed by a lawyer. Details: The name of the contract", "title": "LIMEENERGYCO_09_09_1999-EX-10-DISTRIBUTOR AGREEMENT" } ``` ### Data Fields - `id`: a `string` feature. - `title`: a `string` feature. - `context`: a `string` feature. - `question`: a `string` feature. - `answers`: a dictionary feature containing: - `text`: a `string` feature. 
- `answer_start`: a `int32` feature. ### Data Splits This dataset is split into train/test set. Number of samples in each set is given below: | | Train | Test | | ----- | ------ | ---- | | CUAD | 22450 | 4182 | ## Dataset Creation ### Curation Rationale A highly valuable specialized task without a public large-scale dataset is contract review, which costs humans substantial time, money, and attention. Many law firms spend approximately 50% of their time reviewing contracts (CEB, 2017). Due to the specialized training necessary to understand and interpret contracts, the billing rates for lawyers at large law firms are typically around $500-$900 per hour in the US. As a result, many transactions cost companies hundreds of thousands of dollars just so that lawyers can verify that there are no problematic obligations or requirements included in the contracts. Contract review can be a source of drudgery and, in comparison to other legal tasks, is widely considered to be especially boring. Contract review costs also affect consumers. Since contract review costs are so prohibitive, contract review is not often performed outside corporate transactions. Small companies and individuals consequently often sign contracts without even reading them, which can result in predatory behavior that harms consumers. Automating contract review by openly releasing high-quality data and fine-tuned models can increase access to legal support for small businesses and individuals, so that legal support is not exclusively available to wealthy companies. To reduce the disparate societal costs of contract review, and to study how well NLP models generalize to specialized domains, the authors introduced a new large-scale dataset for contract review. As part of The Atticus Project, a non-profit organization of legal experts, CUAD is introduced, the Contract Understanding Atticus Dataset. 
This dataset was created with a year-long effort pushed forward by dozens of law student annotators, lawyers, and machine learning researchers. The dataset includes more than 500 contracts and more than 13,000 expert annotations that span 41 label categories. For each of 41 different labels, models must learn to highlight the portions of a contract most salient to that label. This makes the task a matter of finding needles in a haystack. ### Source Data #### Initial Data Collection and Normalization The CUAD includes commercial contracts selected from 25 different types of contracts based on the contract names as shown below. Within each type, the creators randomly selected contracts based on the names of the filing companies across the alphabet. Type of Contracts: # of Docs Affiliate Agreement: 10 Agency Agreement: 13 Collaboration/Cooperation Agreement: 26 Co-Branding Agreement: 22 Consulting Agreement: 11 Development Agreement: 29 Distributor Agreement: 32 Endorsement Agreement: 24 Franchise Agreement: 15 Hosting Agreement: 20 IP Agreement: 17 Joint Venture Agreement: 23 License Agreement: 33 Maintenance Agreement: 34 Manufacturing Agreement: 17 Marketing Agreement: 17 Non-Compete/No-Solicit/Non-Disparagement Agreement: 3 Outsourcing Agreement: 18 Promotion Agreement: 12 Reseller Agreement: 12 Service Agreement: 28 Sponsorship Agreement: 31 Supply Agreement: 18 Strategic Alliance Agreement: 32 Transportation Agreement: 13 TOTAL: 510 #### Who are the source language producers? The contracts were sourced from EDGAR, the Electronic Data Gathering, Analysis, and Retrieval system used at the U.S. Securities and Exchange Commission (SEC). Publicly traded companies in the United States are required to file certain contracts under the SEC rules. Access to these contracts is available to the public for free at https://www.sec.gov/edgar. Please read the Datasheet at https://www.atticusprojectai.org/ for information on the intended use and limitations of the CUAD. 
### Annotations #### Annotation process The labeling process included multiple steps to ensure accuracy: 1. Law Student Training: law students attended training sessions on each of the categories that included a summary, video instructions by experienced attorneys, multiple quizzes and workshops. Students were then required to label sample contracts in eBrevia, an online contract review tool. The initial training took approximately 70-100 hours. 2. Law Student Label: law students conducted manual contract review and labeling in eBrevia. 3. Key Word Search: law students conducted keyword search in eBrevia to capture additional categories that had been missed during the “Student Label” step. 4. Category-by-Category Report Review: law students exported the labeled clauses into reports, reviewed each clause category-by-category and highlighted clauses that they believed were mislabeled. 5. Attorney Review: experienced attorneys reviewed the category-by-category report with students' comments, provided comments and addressed student questions. When applicable, attorneys discussed such results with the students and reached consensus. Students made changes in eBrevia accordingly. 6. eBrevia Extras Review. Attorneys and students used eBrevia to generate a list of “extras”, which are clauses that the eBrevia AI tool identified as responsive to a category but not labeled by human annotators. Attorneys and students reviewed all of the “extras” and added the correct ones. The process was repeated until all or substantially all of the “extras” were incorrect labels. 7. Final Report: The final report was exported into a CSV file. Volunteers manually added the “Yes/No” answer column to categories that do not contain an answer. #### Who are the annotators? Answered in above section. ### Personal and Sensitive Information Some clauses in the files are redacted because the party submitting these contracts redacted them to protect confidentiality. 
Such redaction may show up as asterisks (\*\*\*) or underscores (\_\_\_) or blank spaces. The dataset and the answers reflect such redactions. For example, the answer for “January \_\_ 2020” would be “1/[]/2020”). For any categories that require an answer of “Yes/No”, annotators include full sentences as text context in a contract. To maintain consistency and minimize inter-annotator disagreement, annotators select text for the full sentence, under the instruction of “from period to period”. For the other categories, annotators selected segments of the text in the contract that are responsive to each such category. One category in a contract may include multiple labels. For example, “Parties” may include 4-10 separate text strings that are not continuous in a contract. The answer is presented in the unified format separated by semicolons of “Party A Inc. (“Party A”); Party B Corp. (“Party B”)”. Some sentences in the files include confidential legends that are not part of the contracts. An example of such confidential legend is as follows: THIS EXHIBIT HAS BEEN REDACTED AND IS THE SUBJECT OF A CONFIDENTIAL TREATMENT REQUEST. REDACTED MATERIAL IS MARKED WITH [* * *] AND HAS BEEN FILED SEPARATELY WITH THE SECURITIES AND EXCHANGE COMMISSION. Some sentences in the files contain irrelevant information such as footers or page numbers. Some sentences may not be relevant to the corresponding category. Some sentences may correspond to a different category. Because many legal clauses are very long and contain various sub-parts, sometimes only a sub-part of a sentence is responsive to a category. To address the foregoing limitations, annotators manually deleted the portion that is not responsive, replacing it with the symbol "<omitted>" to indicate that the two text segments do not appear immediately next to each other in the contracts. 
For example, if a “Termination for Convenience” clause starts with “Each Party may terminate this Agreement if” followed by three subparts “(a), (b) and (c)”, but only subpart (c) is responsive to this category, the authors manually deleted subparts (a) and (b) and replaced them with the symbol "<omitted>”. Another example is for “Effective Date”, the contract includes a sentence “This Agreement is effective as of the date written above” that appears after the date “January 1, 2010”. The annotation is as follows: “January 1, 2010 <omitted> This Agreement is effective as of the date written above.” Because the contracts were converted from PDF into TXT files, the converted TXT files may not stay true to the format of the original PDF files. For example, some contracts contain inconsistent spacing between words, sentences and paragraphs. Table format is not maintained in the TXT files. ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators Attorney Advisors Wei Chen, John Brockland, Kevin Chen, Jacky Fink, Spencer P. Goodson, Justin Haan, Alex Haskell, Kari Krusmark, Jenny Lin, Jonas Marson, Benjamin Petersen, Alexander Kwonji Rosenberg, William R. 
Sawyers, Brittany Schmeltz, Max Scott, Zhu Zhu Law Student Leaders John Batoha, Daisy Beckner, Lovina Consunji, Gina Diaz, Chris Gronseth, Calvin Hannagan, Joseph Kroon, Sheetal Sharma Saran Law Student Contributors Scott Aronin, Bryan Burgoon, Jigar Desai, Imani Haynes, Jeongsoo Kim, Margaret Lynch, Allison Melville, Felix Mendez-Burgos, Nicole Mirkazemi, David Myers, Emily Rissberger, Behrang Seraj, Sarahginy Valcin Technical Advisors & Contributors Dan Hendrycks, Collin Burns, Spencer Ball, Anya Chen ### Licensing Information CUAD is licensed under the Creative Commons Attribution 4.0 (CC BY 4.0) license and free to the public for commercial and non-commercial use. The creators make no representations or warranties regarding the license status of the underlying contracts, which are publicly available and downloadable from EDGAR. Privacy Policy & Disclaimers The categories or the contracts included in the dataset are not comprehensive or representative. The authors encourage the public to help improve them by sending comments and suggestions to info@atticusprojectai.org. Comments and suggestions will be reviewed by The Atticus Project at its discretion and will be included in future versions of Atticus categories once approved. The use of CUAD is subject to their privacy policy https://www.atticusprojectai.org/privacy-policy and disclaimer https://www.atticusprojectai.org/disclaimer. ### Citation Information ``` @article{hendrycks2021cuad, title={CUAD: An Expert-Annotated NLP Dataset for Legal Contract Review}, author={Dan Hendrycks and Collin Burns and Anya Chen and Spencer Ball}, journal={arXiv preprint arXiv:2103.06268}, year={2021} } ``` ### Contributions Thanks to [@bhavitvyamalik](https://github.com/bhavitvyamalik) for adding the original CUAD dataset.
15,065
[ [ -0.03192138671875, -0.04052734375, 0.0189666748046875, 0.0104522705078125, -0.0213623046875, 0.0012445449829101562, -0.005512237548828125, -0.052520751953125, 0.0286865234375, 0.054229736328125, -0.0098114013671875, -0.05810546875, -0.037933349609375, 0.0079...
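The CUAD card above stores answers in SQuAD-style extractive form: a `context` string plus parallel `answer_start` offsets and `text` strings. A small sketch of recovering and validating those spans, assuming that layout — the miniature contract snippet below is made up for illustration, not an actual CUAD record:

```python
def answer_spans(context: str, answers: dict) -> list:
    """Return (start, end) character spans, verifying each answer_start
    really indexes its answer text inside the context."""
    spans = []
    for start, text in zip(answers["answer_start"], answers["text"]):
        if context[start:start + len(text)] != text:
            raise ValueError(f"offset {start} does not match {text!r}")
        spans.append((start, start + len(text)))
    return spans

# Made-up miniature record in the documented SQuAD-style layout
context = "EXHIBIT 10.6 DISTRIBUTOR AGREEMENT THIS DISTRIBUTOR AGREEMENT is made by ..."
answers = {"answer_start": [13], "text": ["DISTRIBUTOR AGREEMENT"]}
print(answer_spans(context, answers))  # → [(13, 34)]
```

This kind of offset check is a cheap sanity test before training an extractive QA model, since a single off-by-one in preprocessing silently corrupts every label.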
schrilax/favorite-actors
2022-10-08T03:50:27.000Z
[ "region:us" ]
schrilax
null
null
0
10
2022-10-08T03:50:22
Entry not found
15
[ [ -0.02142333984375, -0.01495361328125, 0.05718994140625, 0.0288238525390625, -0.035064697265625, 0.046539306640625, 0.052520751953125, 0.005062103271484375, 0.0513916015625, 0.016998291015625, -0.052093505859375, -0.014984130859375, -0.060394287109375, 0.0379...
julien-c/titanic-survival
2022-10-10T19:20:30.000Z
[ "task_categories:tabular-classification", "license:cc", "tabular-classification", "region:us" ]
julien-c
null
null
1
10
2022-10-10T19:15:48
--- license: cc tags: - tabular-classification task_categories: - tabular-classification --- ## Titanic Survival from https://web.stanford.edu/class/archive/cs/cs109/cs109.1166/problem12.html
193
[ [ -0.00849151611328125, -0.042266845703125, 0.03485107421875, 0.05291748046875, -0.0241851806640625, 0.01220703125, 0.0377197265625, -0.001987457275390625, 0.01739501953125, 0.025360107421875, -0.03582763671875, 0.0029277801513671875, -0.030609130859375, -0.01...
IIC/SQUAC
2022-10-11T11:52:45.000Z
[ "region:us" ]
IIC
null
null
1
10
2022-10-11T11:52:34
Entry not found
15
[ [ -0.0214080810546875, -0.01494598388671875, 0.057159423828125, 0.028839111328125, -0.0350341796875, 0.04656982421875, 0.052490234375, 0.00504302978515625, 0.0513916015625, 0.016998291015625, -0.0521240234375, -0.0149993896484375, -0.06036376953125, 0.03790283...
KGraph/FB15k-237
2022-10-21T09:03:28.000Z
[ "task_categories:other", "annotations_creators:found", "annotations_creators:crowdsourced", "multilinguality:monolingual", "size_categories:100K<n<1M", "source_datasets:original", "language:en", "license:cc-by-4.0", "knowledge graph", "knowledge", "link prediction", "link", "region:us" ]
KGraph
null
null
3
10
2022-10-20T12:09:29
--- annotations_creators: - found - crowdsourced language: - en language_creators: [] license: - cc-by-4.0 multilinguality: - monolingual pretty_name: FB15k-237 size_categories: - 100K<n<1M source_datasets: - original tags: - knowledge graph - knowledge - link prediction - link task_categories: - other task_ids: [] --- # Dataset Card for FB15k-237 ## Table of Contents - [Dataset Card for FB15k-237](#dataset-card-for-fb15k-237) - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Initial Data Collection and Normalization](#initial-data-collection-and-normalization) - [Who are the source language producers?](#who-are-the-source-language-producers) - [Annotations](#annotations) - [Annotation process](#annotation-process) - [Who are the annotators?](#who-are-the-annotators) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [https://deepai.org/dataset/fb15k-237](https://deepai.org/dataset/fb15k-237) - **Repository:** - **Paper:** [More Information Needed](https://paperswithcode.com/dataset/fb15k-237) - **Leaderboard:** - **Point of 
Contact:** ### Dataset Summary FB15k-237 is a link prediction dataset created from FB15k. While FB15k consists of 1,345 relations, 14,951 entities, and 592,213 triples, many triples are inverses that cause leakage from the training to testing and validation splits. FB15k-237 was created by Toutanova and Chen (2015) to ensure that the testing and evaluation datasets do not have inverse relation test leakage. In summary, the FB15k-237 dataset contains 310,079 triples with 14,505 entities and 237 relation types. ### Supported Tasks and Leaderboards Supported Tasks: link prediction task on knowledge graphs. Leaderboards: [More Information Needed](https://paperswithcode.com/sota/link-prediction-on-fb15k-237) ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? 
[More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information ``` @inproceedings{schlichtkrull2018modeling, title={Modeling relational data with graph convolutional networks}, author={Schlichtkrull, Michael and Kipf, Thomas N and Bloem, Peter and Berg, Rianne van den and Titov, Ivan and Welling, Max}, booktitle={European semantic web conference}, pages={593--607}, year={2018}, organization={Springer} } ``` ### Contributions Thanks to [@pp413](https://github.com/pp413) for adding this dataset.
4,250
[ [ -0.041656494140625, -0.05084228515625, 0.0060272216796875, 0.0241851806640625, -0.006870269775390625, -0.0005297660827636719, -0.0207977294921875, -0.046966552734375, 0.0235595703125, 0.037078857421875, -0.0750732421875, -0.061309814453125, -0.038543701171875, ...
lexlms/lex_files_preprocessed
2023-05-10T16:01:44.000Z
[ "task_categories:text-generation", "task_categories:fill-mask", "task_ids:language-modeling", "task_ids:masked-language-modeling", "annotations_creators:no-annotation", "language_creators:found", "multilinguality:monolingual", "size_categories:1M<n<10M", "source_datasets:extended", "language:en", ...
lexlms
null
null
3
10
2022-11-07T17:27:54
--- annotations_creators: - no-annotation language_creators: - found language: - en license: - cc-by-nc-sa-4.0 multilinguality: - monolingual size_categories: - 1M<n<10M source_datasets: - extended task_categories: - text-generation - fill-mask task_ids: - language-modeling - masked-language-modeling pretty_name: LexFiles configs: - eu_legislation - eu_court_cases - uk_legislation - uk_court_cases - us_legislation - us_court_cases - us_contracts - canadian_legislation - canadian_court_cases - indian_court_cases --- # Dataset Card for "LexFiles" ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Dataset Specifications](#supported-tasks-and-leaderboards) ## Dataset Description - **Homepage:** https://github.com/coastalcph/lexlms - **Repository:** https://github.com/coastalcph/lexlms - **Paper:** https://arxiv.org/abs/xxx - **Point of Contact:** [Ilias Chalkidis](mailto:ilias.chalkidis@di.ku.dk) ### Dataset Summary **Disclaimer: This is a pre-processed version of the LexFiles corpus (https://huggingface.co/datasets/lexlms/lexfiles), where documents are pre-split into chunks of 512 tokens.** The LeXFiles is a new diverse English multinational legal corpus that we created including 11 distinct sub-corpora that cover legislation and case law from 6 primarily English-speaking legal systems (EU, CoE, Canada, US, UK, India). The corpus contains approx. 19 billion tokens. In comparison, the "Pile of Law" corpus released by Henderson et al. (2022) comprises 32 billion tokens in total, where the majority (26/30) of sub-corpora come from the United States of America (USA), hence the corpus as a whole is biased towards the US legal system in general, and the federal or state jurisdiction in particular, to a significant extent. ### Dataset Specifications | Corpus | Corpus alias | Documents | Tokens | Pct. | Sampl. (a=0.5) | Sampl. 
(a=0.2) | |-----------------------------------|----------------------|-----------|--------|--------|----------------|----------------| | EU Legislation | `eu-legislation` | 93.7K | 233.7M | 1.2% | 5.0% | 8.0% | | EU Court Decisions | `eu-court-cases` | 29.8K | 178.5M | 0.9% | 4.3% | 7.6% | | ECtHR Decisions | `ecthr-cases` | 12.5K | 78.5M | 0.4% | 2.9% | 6.5% | | UK Legislation | `uk-legislation` | 52.5K | 143.6M | 0.7% | 3.9% | 7.3% | | UK Court Decisions | `uk-court-cases` | 47K | 368.4M | 1.9% | 6.2% | 8.8% | | Indian Court Decisions | `indian-court-cases` | 34.8K | 111.6M | 0.6% | 3.4% | 6.9% | | Canadian Legislation | `canadian-legislation` | 6K | 33.5M | 0.2% | 1.9% | 5.5% | | Canadian Court Decisions | `canadian-court-cases` | 11.3K | 33.1M | 0.2% | 1.8% | 5.4% | | U.S. Court Decisions [1] | `court-listener` | 4.6M | 11.4B | 59.2% | 34.7% | 17.5% | | U.S. Legislation | `us-legislation` | 518 | 1.4B | 7.4% | 12.3% | 11.5% | | U.S. Contracts | `us-contracts` | 622K | 5.3B | 27.3% | 23.6% | 15.0% | | Total | `lexlms/lexfiles` | 5.8M | 18.8B | 100% | 100% | 100% | [1] We consider only U.S. Court Decisions from 1965 onwards (cf. post Civil Rights Act), as a hard threshold for cases relying on severely out-dated and in many cases harmful law standards. The rest of the corpora include more recent documents. [2] Sampling (Sampl.) ratios are computed following the exponential sampling introduced by Lample et al. (2019). Additional corpora not considered for pre-training, since they do not represent factual legal knowledge. | Corpus | Corpus alias | Documents | Tokens | |----------------------------------------|------------------------|-----------|--------| | Legal web pages from C4 | `legal-c4` | 284K | 340M | ### Citation [*Ilias Chalkidis\*, Nicolas Garneau\*, Catalina E.C. Goanta, Daniel Martin Katz, and Anders Søgaard.* *LeXFiles and LegalLAMA: Facilitating English Multinational Legal Language Model Development.* *2022. 
In the Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics. Toronto, Canada.*](https://aclanthology.org/xxx/) ``` @inproceedings{chalkidis-garneau-etal-2023-lexlms, title = {{LeXFiles and LegalLAMA: Facilitating English Multinational Legal Language Model Development}}, author = "Chalkidis*, Ilias and Garneau*, Nicolas and Goanta, Catalina and Katz, Daniel Martin and Søgaard, Anders", booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/xxx", } ```
5,242
[ [ -0.026824951171875, -0.01377105712890625, 0.041015625, 0.00690460205078125, -0.0288543701171875, 0.0165557861328125, -0.01209259033203125, -0.026123046875, 0.0280303955078125, 0.0310821533203125, -0.023040771484375, -0.07080078125, -0.04449462890625, 0.00291...
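The LexFiles card above reports sampling ratios "following the exponential sampling introduced by Lample et al. (2019)". A common reading of that scheme is p_i ∝ (n_i / N)^a, renormalized to sum to one, which smooths the raw corpus shares so large corpora are down-weighted. The sketch below applies that formula to three token counts from the card's table; the exact formula is an assumption here, and the cited paper should be consulted for the authoritative definition:

```python
def exponential_sampling(token_counts: dict, alpha: float = 0.5) -> dict:
    """Sampling probabilities p_i proportional to (n_i / N) ** alpha,
    renormalized to sum to one (assumed form of Lample et al. 2019)."""
    total = sum(token_counts.values())
    weights = {k: (n / total) ** alpha for k, n in token_counts.items()}
    z = sum(weights.values())
    return {k: w / z for k, w in weights.items()}

# Token counts in millions, taken from three rows of the card's table
counts = {"eu-legislation": 233.7, "us-contracts": 5300.0, "court-listener": 11400.0}
probs = exponential_sampling(counts, alpha=0.5)
# With alpha < 1, the biggest corpus keeps its rank but loses raw share
print({k: round(p, 3) for k, p in probs.items()})
```

Smaller alpha flattens the distribution further, matching the card's pattern where the a=0.2 column is closer to uniform than the a=0.5 column.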
Conrad747/lg-ner
2023-03-30T13:44:30.000Z
[ "region:us" ]
Conrad747
LugandaPII is a named entity dataset consisting of PERSON, ORG, LOCATION, NORP, USERID and DATE entities. The train/validation/test sets are available for the Luganda language.
@InProceedings{huggingface:dataset, title = {Luganda Ner Dataset}, author={many authors }, year={2022} }
0
10
2022-11-09T08:19:08
Entry not found
15
[ [ -0.02142333984375, -0.01495361328125, 0.05718994140625, 0.0288238525390625, -0.035064697265625, 0.046539306640625, 0.052520751953125, 0.005062103271484375, 0.0513916015625, 0.016998291015625, -0.052093505859375, -0.014984130859375, -0.060394287109375, 0.0379...
bigbio/biology_how_why_corpus
2022-12-22T15:43:41.000Z
[ "multilinguality:monolingual", "language:en", "license:unknown", "region:us" ]
bigbio
This dataset consists of 185 "how" and 193 "why" biology questions authored by a domain expert, with one or more gold answer passages identified in an undergraduate textbook. The expert was not constrained in any way during the annotation process, so gold answers might be smaller than a paragraph or span multiple paragraphs. This dataset was used for the question-answering system described in the paper “Discourse Complements Lexical Semantics for Non-factoid Answer Reranking” (ACL 2014).
@inproceedings{jansen-etal-2014-discourse, title = "Discourse Complements Lexical Semantics for Non-factoid Answer Reranking", author = "Jansen, Peter and Surdeanu, Mihai and Clark, Peter", booktitle = "Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = jun, year = "2014", address = "Baltimore, Maryland", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/P14-1092", doi = "10.3115/v1/P14-1092", pages = "977--986", }
2
10
2022-11-13T22:06:38
--- language: - en bigbio_language: - English license: unknown multilinguality: monolingual bigbio_license_shortname: UNKNOWN pretty_name: BiologyHowWhyCorpus homepage: https://allenai.org/data/biology-how-why-corpus bigbio_pubmed: False bigbio_public: True bigbio_tasks: - QUESTION_ANSWERING --- # Dataset Card for BiologyHowWhyCorpus ## Dataset Description - **Homepage:** https://allenai.org/data/biology-how-why-corpus - **Pubmed:** False - **Public:** True - **Tasks:** QA This dataset consists of 185 "how" and 193 "why" biology questions authored by a domain expert, with one or more gold answer passages identified in an undergraduate textbook. The expert was not constrained in any way during the annotation process, so gold answers might be smaller than a paragraph or span multiple paragraphs. This dataset was used for the question-answering system described in the paper “Discourse Complements Lexical Semantics for Non-factoid Answer Reranking” (ACL 2014). ## Citation Information ``` @inproceedings{jansen-etal-2014-discourse, title = "Discourse Complements Lexical Semantics for Non-factoid Answer Reranking", author = "Jansen, Peter and Surdeanu, Mihai and Clark, Peter", booktitle = "Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = jun, year = "2014", address = "Baltimore, Maryland", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/P14-1092", doi = "10.3115/v1/P14-1092", pages = "977--986", } ```
1,605
[ [ -0.0296478271484375, -0.05828857421875, 0.0269927978515625, -0.00054931640625, -0.0172882080078125, -0.004375457763671875, 0.0008292198181152344, -0.021636962890625, 0.052947998046875, 0.02520751953125, -0.041778564453125, -0.05145263671875, -0.037078857421875, ...
bigbio/verspoor_2013
2022-12-22T15:47:37.000Z
[ "multilinguality:monolingual", "language:en", "license:unknown", "region:us" ]
bigbio
This dataset contains annotations for a small corpus of full text journal publications on the subject of inherited colorectal cancer. It is suitable for Named Entity Recognition and Relation Extraction tasks. It uses the Variome Annotation Schema, a schema that aims to capture the core concepts and relations relevant to cataloguing and interpreting human genetic variation and its relationship to disease, as described in the published literature. The schema was inspired by the needs of the database curators of the International Society for Gastrointestinal Hereditary Tumours (InSiGHT) database, but is intended to have application to genetic variation information in a range of diseases.
@article{verspoor2013annotating, title = {Annotating the biomedical literature for the human variome}, author = { Verspoor, Karin and Jimeno Yepes, Antonio and Cavedon, Lawrence and McIntosh, Tara and Herten-Crabb, Asha and Thomas, Zo{"e} and Plazzer, John-Paul }, year = 2013, journal = {Database}, publisher = {Oxford Academic}, volume = 2013 }
0
10
2022-11-13T22:12:45
--- language: - en bigbio_language: - English license: unknown multilinguality: monolingual bigbio_license_shortname: UNKNOWN pretty_name: Verspoor 2013 homepage: NA bigbio_pubmed: True bigbio_public: True bigbio_tasks: - NAMED_ENTITY_RECOGNITION - RELATION_EXTRACTION --- # Dataset Card for Verspoor 2013 ## Dataset Description - **Homepage:** NA - **Pubmed:** True - **Public:** True - **Tasks:** NER,RE This dataset contains annotations for a small corpus of full text journal publications on the subject of inherited colorectal cancer. It is suitable for Named Entity Recognition and Relation Extraction tasks. It uses the Variome Annotation Schema, a schema that aims to capture the core concepts and relations relevant to cataloguing and interpreting human genetic variation and its relationship to disease, as described in the published literature. The schema was inspired by the needs of the database curators of the International Society for Gastrointestinal Hereditary Tumours (InSiGHT) database, but is intended to have application to genetic variation information in a range of diseases. ## Citation Information ``` @article{verspoor2013annotating, title = {Annotating the biomedical literature for the human variome}, author = { Verspoor, Karin and Jimeno Yepes, Antonio and Cavedon, Lawrence and McIntosh, Tara and Herten-Crabb, Asha and Thomas, Zo{"e} and Plazzer, John-Paul }, year = 2013, journal = {Database}, publisher = {Oxford Academic}, volume = 2013 } ```
1,557
[ [ -0.0148773193359375, 0.004535675048828125, 0.0005626678466796875, -0.0002371072769165039, -0.0287933349609375, -0.003025054931640625, -0.0024433135986328125, -0.02484130859375, 0.051300048828125, 0.06109619140625, -0.059173583984375, -0.0667724609375, -0.0415039...
fewshot-goes-multilingual/cs_czech-named-entity-corpus_2.0
2022-12-05T22:44:28.000Z
[ "task_categories:token-classification", "task_ids:named-entity-recognition", "annotations_creators:expert-generated", "language_creators:found", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:original", "language:cs", "license:cc-by-nc-sa-3.0", "czech NER", "CNEC", ...
fewshot-goes-multilingual
null
null
1
10
2022-11-22T09:21:00
--- annotations_creators: - expert-generated language: - cs language_creators: - found license: - cc-by-nc-sa-3.0 multilinguality: - monolingual pretty_name: Czech Named Entity Corpus 2.0 size_categories: - 1K<n<10K source_datasets: - original tags: - czech NER - CNEC task_categories: - token-classification task_ids: - named-entity-recognition --- # Dataset Card for Czech Named Entity Corpus 2.0 ## Dataset Description The dataset contains Czech sentences and annotated named entities. Total number of sentences is around 9,000 and total number of entities is around 34,000. (Total means train + validation + test) ## Dataset Features Each sample contains: - `text`: source sentence - `entities`: list of selected entities. Each entity contains: - `category_id`: string identifier of the entity category - `category_str`: human-friendly category name in Czech (verbalizer) - `start`: index on which the entity starts in the source sentence - `end`: index on which the entity ends in the source sentence - `content`: entity content, it was created as `text[start:end]` - `entity_id`: unique entity string identifier - `parent_id`: If entity was selected inside another entity (e.g. house number inside address), `parent_id` is the identifier of the parent entity. None otherwise. The `entity_id` field was checked to be globally unique (across data samples and dataset splits.) 
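The invariant documented above — each entity's `content` equals `text[start:end]` — can be sanity-checked in a few lines. This is an illustrative sketch; the sample below is hypothetical and not taken from the corpus:

```python
# Hypothetical sample following the documented fields (not a real corpus record).
sample = {
    "text": "Václav Havel se narodil v Praze.",
    "entities": [
        {"category_id": "P", "category_str": "celé jméno",
         "start": 0, "end": 12, "content": "Václav Havel",
         "entity_id": "e1", "parent_id": None},
        {"category_id": "gu", "category_str": "město/zámek",
         "start": 26, "end": 31, "content": "Praze",
         "entity_id": "e2", "parent_id": None},
    ],
}

def check_entity_spans(sample):
    """Return entities whose [start:end) span does not reproduce their content."""
    return [e for e in sample["entities"]
            if sample["text"][e["start"]:e["end"]] != e["content"]]

mismatches = check_entity_spans(sample)  # empty list when the invariant holds
```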
## Entity categories The list of the recognized entities (`category_id`, `category_str` pairs): ```python3 { 'A': 'číslo v adrese / kontaktním údaji', 'ah': 'číslo domu', 'at': 'telefonní číslo / fax', 'az': 'PSČ (poštovní směrovací číslo)', 'C': 'reference/bibliografie', 'f': 'cizí výraz', 'g_': 'geografický název - jiný', 'gc': 'stát/země', 'gh': 'jméno vodstva', 'gl': 'přírodní oblast/útvar', 'gq': 'městská čtvrť', 'gr': 'území', 'gs': 'ulice/náměstí', 'gt': 'kontinent', 'gu': 'město/zámek', 'i_': 'instituce - jiná', 'ia': 'konference/soutěž', 'ic': 'kulturní/vzdělávací/vědecká instituce', 'if': 'komerční instituce', 'io': 'vládní/politická instituce', 'me': 'emailová adresa', 'mi': 'URL / internetový odkaz', 'mn': 'časopis', 'ms': 'radio/televizní stanice', 'n_': 'číselný výraz - jiný', 'na': 'věk', 'nb': 'číslo stránky/kapitoly/sekce/objektu', 'nc': 'množství/počet', 'ni': 'číslo položky', 'no': 'pořadí', 'ns': 'sportovní skóre', 'o_': 'artefakt - jiný', 'oa': 'umělecké dílo / kulturní artefakt', 'oe': 'jednotka', 'om': 'měna', 'op': 'produkt/výrobek', 'or': 'zákon/směrnice/listina', 'P': 'celé jméno', 'p_': 'jméno - jiné', 'pc': 'národnost', 'pd': '(akademický) titul', 'pf': 'křestní jméno', 'pm': 'prostřední jméno', 'pp': 'mýtická/historická postava', 'ps': 'příjmení', 's': 'zkratka', 'T': 'čas/datum', 'td': 'den', 'tf': 'svátky', 'th': 'hodiny/minuty', 'tm': 'měsíc', 'ty': 'rok', } ``` ## Dataset Source The dataset is a preprocessed adaptation of existing CNEC 2.0 dataset [project info](https://ufal.mff.cuni.cz/cnec/cnec2.0), [link to data](https://lindat.mff.cuni.cz/repository/xmlui/handle/11858/00-097C-0000-0023-1B22-8). This adaptation contains (almost) same data, but converted to a convenient format. In addition, we inspected and decided to remove entity categories: `?`, `segm`, `cap`, `lower`, `upper`, which were either undocumented and/or carried little semantic meaning. The category names (verbalizers) are not in the original dataset. 
They were added by a Czech native speaker using the available [documentation](https://ufal.mff.cuni.cz/cnec/cnec2.0) and by looking at several occurrences in the data. ## Citation Cite authors of the [original dataset](https://lindat.mff.cuni.cz/repository/xmlui/handle/11858/00-097C-0000-0023-1B22-8): ```bibtex @misc{11858/00-097C-0000-0023-1B22-8, title = {Czech Named Entity Corpus 2.0}, author = {{\v S}ev{\v c}{\'{\i}}kov{\'a}, Magda and {\v Z}abokrtsk{\'y}, Zden{\v e}k and Strakov{\'a}, Jana and Straka, Milan}, url = {http://hdl.handle.net/11858/00-097C-0000-0023-1B22-8}, note = {{LINDAT}/{CLARIAH}-{CZ} digital library at the Institute of Formal and Applied Linguistics ({{\'U}FAL}), Faculty of Mathematics and Physics, Charles University}, copyright = {Attribution-{NonCommercial}-{ShareAlike} 3.0 Unported ({CC} {BY}-{NC}-{SA} 3.0)}, year = {2014} } ```
4,321
[ [ -0.03558349609375, -0.024200439453125, 0.0252532958984375, 0.0105133056640625, -0.033203125, -0.0004105567932128906, -0.041412353515625, -0.0297698974609375, 0.034454345703125, 0.038818359375, -0.043670654296875, -0.0732421875, -0.037078857421875, 0.02854919...
VIMA/VIMA-Data
2023-06-17T04:52:09.000Z
[ "license:cc-by-4.0", "arxiv:2210.03094", "region:us" ]
VIMA
null
null
15
10
2022-11-24T19:59:13
--- license: cc-by-4.0 --- # Dataset Card for VIMA-Data ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Dataset Structure](#dataset-structure) - [Dataset Creation](#dataset-creation) - [Additional Information](#additional-information) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) ## Dataset Description - **Homepage:** https://vimalabs.github.io/ - **Repository:** https://github.com/vimalabs/VimaBench - **Paper:** https://arxiv.org/abs/2210.03094 ### Dataset Summary This is the official dataset used to train general robot manipulation agents with multimodal prompts, as presented in the [paper](https://arxiv.org/abs/2210.03094). It contains 650K trajectories for 13 tasks in [VIMA-Bench](https://github.com/vimalabs/VimaBench). All demonstrations are generated by oracles. ## Dataset Structure Data are grouped into different tasks. Within each trajectory's folder, there are two folders `rgb_front` and `rgb_top`, and three files `obs.pkl`, `action.pkl`, and `trajectory.pkl`. RGB frames from a certain perspective are separately stored in the corresponding folder. `obs.pkl` includes segmentation and the state of the end effector. `action.pkl` contains oracle actions. `trajectory.pkl` contains meta information such as elapsed steps, task information, and object information. Users can build their custom data pipeline starting from here. More details and examples can be found [here](https://github.com/vimalabs/VimaBench#training-data). ## Dataset Creation All demonstrations are generated by scripted oracles. ## Additional Information ### Licensing Information This dataset is released under the [Creative Commons Attribution 4.0 International (CC BY 4.0)](https://creativecommons.org/licenses/by/4.0/legalcode) license. ### Citation Information If you find our work useful, please consider citing us!
```bibtex @inproceedings{jiang2023vima, title = {VIMA: General Robot Manipulation with Multimodal Prompts}, author = {Yunfan Jiang and Agrim Gupta and Zichen Zhang and Guanzhi Wang and Yongqiang Dou and Yanjun Chen and Li Fei-Fei and Anima Anandkumar and Yuke Zhu and Linxi Fan}, booktitle = {Fortieth International Conference on Machine Learning}, year = {2023} } ```
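The per-trajectory layout described above (`obs.pkl`, `action.pkl`, `trajectory.pkl`) suggests a small loading helper. This is only a sketch: the file names follow the card, but the payload dictionaries below are synthetic stand-ins, not the real schema:

```python
import os
import pickle
import tempfile

def load_trajectory(traj_dir):
    """Read the three pickle files of one trajectory folder into a dict."""
    out = {}
    for name in ("obs", "action", "trajectory"):
        with open(os.path.join(traj_dir, f"{name}.pkl"), "rb") as f:
            out[name] = pickle.load(f)
    return out

# Round-trip demo with synthetic payloads standing in for a real trajectory folder.
with tempfile.TemporaryDirectory() as d:
    for name, payload in (("obs", {"ee": 0}),
                          ("action", {"pose0_position": [0.1, 0.2]}),
                          ("trajectory", {"elapsed_steps": 3})):
        with open(os.path.join(d, f"{name}.pkl"), "wb") as f:
            pickle.dump(payload, f)
    data = load_trajectory(d)
```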
2,365
[ [ -0.02105712890625, -0.051300048828125, 0.033660888671875, 0.007259368896484375, -0.0179443359375, -0.0133819580078125, -0.00994873046875, -0.006801605224609375, 0.029205322265625, 0.0274505615234375, -0.07745361328125, -0.059661865234375, -0.0288238525390625, ...
vucinatim/spectrogram-captions
2023-01-03T00:24:32.000Z
[ "task_categories:text-to-image", "annotations_creators:machine-generated", "language_creators:machine-generated", "multilinguality:monolingual", "size_categories:n<1K", "language:en", "license:afl-3.0", "stable diffusion sound generation text-to-sound text-to-image-to-sound spectrogram", "region:us"...
vucinatim
null
null
1
10
2022-11-29T17:44:33
--- annotations_creators: - machine-generated language: - en language_creators: - machine-generated license: - afl-3.0 multilinguality: - monolingual pretty_name: Captioned generic audio clips with spectrogram images size_categories: - n<1K source_datasets: [] tags: - 'stable diffusion sound generation text-to-sound text-to-image-to-sound spectrogram' task_categories: - text-to-image task_ids: [] --- Dataset of captioned spectrograms (text describing the sound).
472
[ [ -0.01513671875, 0.004962921142578125, 0.005443572998046875, 0.033111572265625, -0.0304718017578125, 0.0171051025390625, -0.041351318359375, -0.0173187255859375, 0.06146240234375, 0.0704345703125, -0.03814697265625, -0.04949951171875, -0.004642486572265625, 0...
fathyshalab/clinic-small_talk
2022-12-24T04:41:03.000Z
[ "region:us" ]
fathyshalab
null
null
0
10
2022-12-15T06:06:59
--- dataset_info: features: - name: 'Unnamed: 0' dtype: int64 - name: text dtype: string - name: label dtype: int64 - name: label_text dtype: string splits: - name: train num_bytes: 54000.1 num_examples: 805 - name: test num_bytes: 23142.9 num_examples: 345 download_size: 0 dataset_size: 77143.0 --- # Dataset Card for "clinic-small_talk" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
524
[ [ -0.0272674560546875, -0.03173828125, 0.04150390625, 0.006572723388671875, -0.0200958251953125, -0.02838134765625, -0.006916046142578125, -0.020263671875, 0.06402587890625, 0.0305633544921875, -0.054473876953125, -0.06744384765625, -0.038848876953125, -0.0196...
liuyanchen1015/mnli_MULTI
2022-12-16T01:43:32.000Z
[ "region:us" ]
liuyanchen1015
null
null
0
10
2022-12-16T01:43:06
--- dataset_info: features: - name: premise dtype: string - name: hypothesis dtype: string - name: label dtype: int64 - name: idx dtype: int64 splits: - name: train num_bytes: 79281363 num_examples: 384388 - name: dev_matched num_bytes: 1983976 num_examples: 9779 - name: dev_mismatched num_bytes: 2092314 num_examples: 9823 - name: test_matched num_bytes: 1976499 num_examples: 9672 - name: test_mismatched num_bytes: 2096238 num_examples: 9841 download_size: 58746057 dataset_size: 87430390 --- # Dataset Card for "mnli_MULTI" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
743
[ [ -0.047607421875, -0.01407623291015625, 0.0032444000244140625, 0.0244293212890625, -0.0158538818359375, -0.0015916824340820312, 0.023406982421875, -0.0107269287109375, 0.057159423828125, 0.032257080078125, -0.060760498046875, -0.0430908203125, -0.037689208984375,...
alexandrainst/scandi-wiki
2023-01-16T13:55:38.000Z
[ "task_categories:fill-mask", "task_categories:text-generation", "task_categories:feature-extraction", "task_ids:language-modeling", "multilinguality:multilingual", "size_categories:1M<n<10M", "source_datasets:wikipedia", "language:da", "language:sv", "language:no", "language:nb", "language:nn"...
alexandrainst
ScandiWiki is a parsed and deduplicated version of the Danish, Norwegian Bokmål, Norwegian Nynorsk, Swedish, Icelandic and Faroese Wikipedia corpora, as of January 2023.
# @InProceedings{huggingface:dataset, # title = {ScandiWiki: A Scandinavian Wikipedia Dump}, # author={Dan Saattrup Nielsen}, # year={2022} # } #
2
10
2023-01-16T12:29:34
--- pretty_name: ScandiWiki language: - da - sv - no - nb - nn - is - fo license: - cc-by-sa-4.0 multilinguality: - multilingual size_categories: - 1M<n<10M source_datasets: - wikipedia task_categories: - fill-mask - text-generation - feature-extraction task_ids: - language-modeling --- # Dataset Card for ScandiWiki ## Dataset Description - **Point of Contact:** [Dan Saattrup Nielsen](mailto:dan.nielsen@alexandra.dk) - **Total amount of disk used:** 4485.90 MB ### Dataset Summary ScandiWiki is a parsed and deduplicated Wikipedia dump in Danish, Norwegian Bokmål, Norwegian Nynorsk, Swedish, Icelandic and Faroese. ### Supported Tasks and Leaderboards This dataset is intended for general language modelling. ### Languages The dataset is available in Danish (`da`), Swedish (`sv`), Norwegian Bokmål (`nb`), Norwegian Nynorsk (`nn`), Icelandic (`is`) and Faroese (`fo`). ## Dataset Structure ### Data Instances - **Total amount of disk used:** 4485.90 MB An example from the `train` split of the `fo` subset looks as follows. ``` { 'id': '3380', 'url': 'https://fo.wikipedia.org/wiki/Enk%C3%B6pings%20kommuna', 'title': 'Enköpings kommuna', 'text': 'Enköpings kommuna (svenskt: Enköpings kommun), er ein kommuna í Uppsala län í Svøríki. Enköpings kommuna hevur umleið 40.656 íbúgvar (2013).\n\nKeldur \n\nKommunur í Svøríki' } ``` ### Data Fields The data fields are the same among all splits. - `id`: a `string` feature. - `url`: a `string` feature. - `title`: a `string` feature. - `text`: a `string` feature. ### Data Subsets | name | samples | |----------|----------:| | sv | 2,469,978 | | nb | 596,593 | | da | 287,216 | | nn | 162,776 | | is | 55,418 | | fo | 12,582 | ## Dataset Creation ### Curation Rationale It takes quite a long time to parse the Wikipedia dump as well as to deduplicate it, so this dataset is primarily for convenience. ### Source Data The original data is from the [wikipedia dataset](https://huggingface.co/datasets/wikipedia). 
## Additional Information ### Dataset Curators [Dan Saattrup Nielsen](https://saattrupdan.github.io/) from the [The Alexandra Institute](https://alexandra.dk/) curated this dataset. ### Licensing Information The dataset is licensed under the [CC BY-SA 4.0 license](https://creativecommons.org/licenses/by-sa/4.0/), in accordance with the same license of the [wikipedia dataset](https://huggingface.co/datasets/wikipedia).
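For orientation, the per-subset counts in the table above can be turned into corpus shares with a few lines. The numbers are copied from the table; the commented-out `load_dataset` call is the assumed loading pattern (it requires network access, so it is not executed here):

```python
# from datasets import load_dataset
# fo = load_dataset("alexandrainst/scandi-wiki", "fo", split="train")  # assumed usage

subset_sizes = {  # samples per language subset, copied from the table above
    "sv": 2_469_978, "nb": 596_593, "da": 287_216,
    "nn": 162_776, "is": 55_418, "fo": 12_582,
}
total = sum(subset_sizes.values())
shares = {lang: round(100 * n / total, 1) for lang, n in subset_sizes.items()}
```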
2,487
[ [ -0.0609130859375, -0.032196044921875, 0.01232147216796875, 0.004924774169921875, -0.033111572265625, -0.0258331298828125, -0.0199127197265625, -0.028045654296875, 0.048583984375, 0.03509521484375, -0.050872802734375, -0.050811767578125, -0.0311737060546875, ...
JeremyAlain/SLF5K
2023-01-24T14:21:35.000Z
[ "task_categories:summarization", "annotations_creators:expert-generated", "language_creators:found", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:original", "language:en", "license:apache-2.0", "feedback", "human feedback", "language feedback", "binary feedback",...
JeremyAlain
The Summarization with Language Feedback (SLF5K) dataset is an English-language dataset containing 5K unique samples that can be used for the task of abstractive summarization. Each sample consists of a Reddit title and post, a model-generated (FeedME) summary, and human-written language feedback on that summary. Additionally, each sample has a high-quality, human-written (gold) summary that should be ideal for the Reddit post. Lastly, each sample has two additional model-generated summaries with a binary human preference label indicating which summary a human prefers. The dataset can be used to train language models with language feedback on abstractive summarization. It can also be used to train a reward model on binary preferences.
@article{ }
4
10
2023-01-23T08:44:34
--- annotations_creators: - expert-generated language: - en language_creators: - found license: apache-2.0 multilinguality: - monolingual pretty_name: SLF5K size_categories: - 1K<n<10K source_datasets: - original tags: - feedback - human feedback - language feedback - binary feedback - reward - reward model - gpt3 - gpt-3 - instructgpt - alignment - ai alignment - scale - imitation learning from language feedback - ilf task_categories: - summarization task_ids: [] --- # Dataset Card for SLF5K ## Dataset Description - **Repository: https://github.com/JeremyAlain/imitation_learning_from_language_feedback** - **Paper: Training Language Models with Language Feedback at Scale** - **Point of Contact: jeremy.scheurer@nyu.edu and ethan@anthropic.com** ### Dataset Summary The Summarization with Language Feedback (SLF5K) dataset is an English-language dataset containing 5K unique samples that can be used for the task of abstractive summarization. Each sample consists of a Reddit title and post, a model-generated ([FeedME](https://beta.openai.com/docs/model-index-for-researchers)) summary, and human-written language feedback on that summary. Additionally, each sample has a high-quality, human-written (gold) summary that should be ideal for the Reddit post. Lastly, each sample has two additional model-generated summaries with a binary human preference label indicating which summary a human prefers. The dataset can be used to train language models with language feedback on abstractive summarization. It can also be used to train a reward model on binary preferences. The Reddit posts were taken from the datasets provided by [Learning to Summarize from Human Feedback](https://arxiv.org/pdf/2009.01325.pdf), who used the initial Reddit post dataset [TL;DR: Mining Reddit to Learn Automatic Summarization](https://aclanthology.org/W17-4508.pdf). ### Supported Tasks and Leaderboards The dataset can be used to train a model for abstractive and extractive summarization. 
It can either be trained directly on human-written summaries, or leverage language feedback or binary human preferences. The model performance is evaluated in a human evaluation, where annotators rate the quality of the generated summaries. Previous work has used [ROUGE](https://huggingface.co/spaces/evaluate-metric/rouge) scores, but in [Learning to Summarize from Human Feedback](https://arxiv.org/pdf/2009.01325.pdf) they show that ROUGE is not an ideal metric. ### Languages English ## Dataset Structure ### Data Instances Each instance is a line in the dataset file (which is saved as .jsonl). Each instance contains various fields, of which the most important are listed below. Here is an example instance: ``` {"id":"t3_3w7gyp", "subreddit":"dogs", "title":"Puppy playing at park - other owner aggressive towards him [help]", "post":"Hi all, looking for some advice. I have a 6m old kelpie, buzz, who goes with me daily to a dog park, [...]", "tldr_human_reference_summary":"other owner at park harsh with my dog for playing to rough with his. Have tried talking to him about it, hasn't helped.", "summary_prompt":"Write an excellent summary of the given text.\n\nTitle: Puppy playing at park - other owner aggressive towards him [help]\n\nText: Hi all, looking for some advice. [...] that too.\n\nTL;DR:", "generated_summary_for_comparison_A":"New dog at park is being aggressive to my pup, owner won't stop. What do I do?", "generated_summary_for_comparison_B":"A new dog has been coming to the dog park and the first day the new dog came, the old dog (a kelpie) was all over him.", "generated_summary_for_feedback":"A new dog has been coming to the dog park and the first day the owner hauled buzz off and whacked him. 
Today, the owner was staring daggers at me and lunging at buzz\/pulling his collar roughly.", "comparison_preference":"Summary A", "feedback":"The summary is concise but could include information about the poster knowing the dogs are just playing and will react if they become aggressive and wants to know how to handle things with Max's dad. ", "feedback_class":"Coverage", "has_additional_feedback":"No", "ideal_human_summary":"The poster is frustrated with a new person at the dog park who is upset with him because their young dogs are playing roughly. The poster will step in if it gets aggressive and wants the new person to understand this. "} ``` There are some additional fields like `time_spent_in_seconds_ideal_human_summary`, `time_spent_in_seconds_feedback`,`time_spent_in_seconds_comparison` which only have values for the development dataset. ### Data Fields - `id`: a unique string identifying the reddit post. - `subreddit`: subreddit of the post. - `title`: title of the reddit post. 
- `post`: the reddit post - `tldr_human_reference_summary`: human reference summary automatically extracted from reddit (taken from the dataset of [TL;DR: Mining Reddit to Learn Automatic Summarization](https://aclanthology.org/W17-4508.pdf)) - `summary_prompt`: the whole prompt used to generate summaries - `generated_summary_for_comparison_A`: summary A used for binary human comparison (generated with FeedME) - `generated_summary_for_comparison_B`: summary B used for binary human comparison (generated with FeedME) - `generated_summary_for_feedback`: summary used to gather human language feedback (generated with FeedME) - `comparison_preference`: preferred summary of the human comparison, Values: "Summary A", "Summary B" - `feedback`: human language feedback on `generated_summary_for_feedback` (most important feedback point) - `feedback_class`: Class of language feedback, Values: "Coverage", "Accuracy", "Coherence", "other" - `has_additional_feedback`: Whether this sample could use more feedback on an important point. - `ideal_human_summary`: high-quality human-written summary for this sample. We instructed annotators to write an ideal summary. - `time_spent_in_seconds_ideal_human_summary`: Annotation time for the ideal human summary - `time_spent_in_seconds_feedback`: Annotation time for the language feedback - `time_spent_in_seconds_comparison`: Annotation time for the binary comparison Note that the various data splits have varying fields. The fields that are not contained in a dataset split have the value None. ### Data Splits The SLF5K dataset has 4 splits: _train_, _development_, _validation_, and _test_. Below are the statistics of the dataset. | Dataset Split | Number of Instances in Split | | ------------- | ------------------------------------------- | | Train | 5000 | | Development | 200 | | Validation | 500 | | Test | 698 | The reason we introduce both a development and a validation dataset is the following. 
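A quick way to get a feel for the categorical fields above (`feedback_class`, `comparison_preference`) is to tally them. This sketch runs over synthetic records carrying only the documented keys; the values are illustrative, not drawn from the real dataset:

```python
from collections import Counter

# Synthetic records with the documented categorical fields (illustrative only).
samples = [
    {"id": "t3_a", "feedback_class": "Coverage", "comparison_preference": "Summary A"},
    {"id": "t3_b", "feedback_class": "Accuracy", "comparison_preference": "Summary B"},
    {"id": "t3_c", "feedback_class": "Coverage", "comparison_preference": "Summary A"},
]

class_counts = Counter(s["feedback_class"] for s in samples)
pref_counts = Counter(s["comparison_preference"] for s in samples)
```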
## Dataset Creation ### Curation Rationale This dataset aims to support supervised language model training from human preferences on a summarization task with real natural training data. ### Source Data #### Initial Data Collection and Normalization The initial TL;DR dataset was made public by Völske et al. in the paper [TL;DR: Mining Reddit to Learn Automatic Summarization](https://aclanthology.org/W17-4508.pdf) (licensed under CC BY 4.0). Stiennon et al. then use this TL;DR dataset for their work [Learning to Summarize from Human Feedback](https://arxiv.org/pdf/2009.01325.pdf). They filter the TL;DR dataset for quality reasons and collect binary human preference labels. Our dataset is a subset of Stiennon et al.'s dataset, which can be downloaded [here](https://github.com/openai/summarize-from-feedback). Our train and development datasets are taken from their train dataset, and our test and validation datasets are taken from their test dataset. #### Who are the source language producers? The reddit posts are written by users of reddit.com. ### Annotations #### Annotation process We first onboarded annotators by giving them test tasks on which we evaluated their annotation quality. We then selected 31 annotators for the remainder of the project (a few were removed later on due to quality issues). Throughout the process we updated our instructions to make the tasks clearer and stayed in close contact with the annotators to answer questions etc. The various dataset splits were collected in multiple annotation iterations. The largest annotation effort was a single iteration annotating 5000 samples for the train dataset. #### Who are the annotators? We used annotators through the annotation service [Surge AI](https://www.surgehq.ai/). ### Personal and Sensitive Information The annotators were completely anonymized and no information about them can be found in the dataset. 
## Considerations for Using the Data ### Social Impact of Dataset The purpose of this dataset is to align language models with human preferences by leveraging language feedback, on the task of summarization. Concretely, the goal is to develop models that produce summaries for reddit posts that are more in line with human preferences. Note that this does not imply that the outputs will perfectly be aligned with human values, i.e. outputs can still be misaligned, offensive and contain harmful biases. While outputs from a model trained on our dataset may reflect the language of the reddit posts, summaries, and human feedback, it should always be made clear that such an output is automatically generated. ### Discussion of Biases The TL;DR dataset consists of user-submitted posts to the website reddit.com. It can thus contain content that is offensive or reflects harmful social biases. We thus recommend that models trained on the SLF5K dataset (which is based on the TL;DR dataset) be thoroughly studied for potential harmful behavior. The human preferences and feedback represented in this dataset were collected through crowd-workers and may disproportionally represent the views, biases, and values of the respective demographic of the annotators. ### Other Known Limitations The "human-summaries" collected in the TL;DR dataset (and available in the SLF5K dataset under the field `tldr_human_reference_summary`) were automatically extracted from reddit.com. They are often of poor quality and do not accurately reflect human summarization performance. In our paper, we show that our human-written summaries (available in the SLF5K dataset under the field `ideal_human_summary`) are of much higher quality. ## Additional Information ### Dataset Curators The data is collected by Jérémy Scheurer, Jon Ander Campos, Tomasz Korbak, Jun Shern Chan, Angelica Chen, Kyunghyun Cho, and Ethan Perez. All authors are affiliated with New York University. 
Additionally, Jérémy Scheurer is affiliated with FAR AI. Jon Ander is affiliated with the University of the Basque Country. Tomek Korbak is affiliated with FAR AI and the University of Sussex. Kyunghyun Cho is affiliated with Genentech and CIFAR LMB. Ethan Perez is affiliated with FAR AI and Anthropic. ### Licensing Information The SLF5K dataset is released under the Apache 2.0 license. ### Citation Information TBD
11,159
[ [ -0.039154052734375, -0.04962158203125, 0.01076507568359375, 0.0229949951171875, -0.01776123046875, -0.005916595458984375, -0.029296875, -0.04034423828125, 0.034515380859375, 0.032806396484375, -0.0347900390625, -0.04937744140625, -0.03192138671875, 0.0254516...
keremberke/pcb-defect-segmentation
2023-01-27T13:45:36.000Z
[ "task_categories:image-segmentation", "roboflow", "roboflow2huggingface", "region:us" ]
keremberke
null
@misc{ defects-2q87r_dataset, title = { Defects Dataset }, type = { Open Source Dataset }, author = { Diplom }, howpublished = { \\url{ https://universe.roboflow.com/diplom-qz7q6/defects-2q87r } }, url = { https://universe.roboflow.com/diplom-qz7q6/defects-2q87r }, journal = { Roboflow Universe }, publisher = { Roboflow }, year = { 2023 }, month = { jan }, note = { visited on 2023-01-27 }, }
5
10
2023-01-27T13:45:20
--- task_categories: - image-segmentation tags: - roboflow - roboflow2huggingface --- <div align="center"> <img width="640" alt="keremberke/pcb-defect-segmentation" src="https://huggingface.co/datasets/keremberke/pcb-defect-segmentation/resolve/main/thumbnail.jpg"> </div> ### Dataset Labels ``` ['dry_joint', 'incorrect_installation', 'pcb_damage', 'short_circuit'] ``` ### Number of Images ```json {'valid': 25, 'train': 128, 'test': 36} ``` ### How to Use - Install [datasets](https://pypi.org/project/datasets/): ```bash pip install datasets ``` - Load the dataset: ```python from datasets import load_dataset ds = load_dataset("keremberke/pcb-defect-segmentation", name="full") example = ds['train'][0] ``` ### Roboflow Dataset Page [https://universe.roboflow.com/diplom-qz7q6/defects-2q87r/dataset/8](https://universe.roboflow.com/diplom-qz7q6/defects-2q87r/dataset/8?ref=roboflow2huggingface) ### Citation ``` @misc{ defects-2q87r_dataset, title = { Defects Dataset }, type = { Open Source Dataset }, author = { Diplom }, howpublished = { \\url{ https://universe.roboflow.com/diplom-qz7q6/defects-2q87r } }, url = { https://universe.roboflow.com/diplom-qz7q6/defects-2q87r }, journal = { Roboflow Universe }, publisher = { Roboflow }, year = { 2023 }, month = { jan }, note = { visited on 2023-01-27 }, } ``` ### License CC BY 4.0 ### Dataset Summary This dataset was exported via roboflow.com on January 27, 2023 at 1:45 PM GMT Roboflow is an end-to-end computer vision platform that helps you * collaborate with your team on computer vision projects * collect & organize images * understand and search unstructured image data * annotate, and create datasets * export, train, and deploy computer vision models * use active learning to improve your dataset over time For state of the art Computer Vision training notebooks you can use with this dataset, visit https://github.com/roboflow/notebooks To find over 100k other datasets and pre-trained models, visit https://universe.roboflow.com 
The dataset includes 189 images. Defects are annotated in COCO format. The following pre-processing was applied to each image: No image augmentation techniques were applied.
2,241
[ [ -0.018157958984375, -0.0391845703125, 0.035736083984375, 0.01407623291015625, -0.0233917236328125, -0.01123809814453125, 0.007335662841796875, -0.03302001953125, 0.0225067138671875, 0.0163116455078125, -0.05157470703125, -0.059539794921875, -0.01776123046875, ...
achang/plot_qa
2023-02-12T01:20:56.000Z
[ "task_categories:visual-question-answering", "language:en", "license:cc", "plotQA", "region:us" ]
achang
null
null
3
10
2023-02-06T18:51:17
--- license: cc task_categories: - visual-question-answering language: - en tags: - plotQA pretty_name: PlotQA --- # Dataset Card for PlotQA ## Dataset Description - **PlotQA from here:** [PlotQA](https://github.com/NiteshMethani/PlotQA) ### Dataset Summary PlotQA is a VQA dataset with 28.9 million question-answer pairs grounded over 224,377 plots on data from real-world sources and questions based on crowd-sourced question templates. ## Dataset Structure ### Data Fields List and describe the fields present in the dataset. Mention their data type, and whether they are used as input or output in any of the tasks the dataset currently supports. If the data has span indices, describe their attributes, such as whether they are at the character level or word level, whether they are contiguous or not, etc. If the dataset contains example IDs, state whether they have an inherent meaning, such as a mapping to other datasets or pointing to relationships between data points. - `image`: PIL image of a plot - `text`: string of json data 'models'. See notes below. From [here](https://github.com/NiteshMethani/PlotQA/blob/master/PlotQA_Dataset.md): 'models': It is a list of dictionaries. Depending on the type of the plot (single or 2,3,4-multi), the length of the dictionary can vary from 1 to 4. Each dictionary contains the following keys- name: Label corresponding to the datapoint. color: Color corresponding to the `name` datapoint. bboxes: Bounding boxes corresponding to the `name` datapoints in the plot. label: label corresponding to the datapoint which will appear as the legend (same as the `name` field). x: x-value of the datapoints. y: y-value of the datapoints. The [json2token](https://github.com/clovaai/donut/blob/b317b4bbf1eecec7c62e7666f2097e1e90a6b441/donut/model.py#L495) function was used to convert json to string. 
The new tokens are already loaded in the plotQA processor: ``` from transformers import DonutProcessor processor = DonutProcessor.from_pretrained("achang/donut-plotqa-trained") ``` ### Data Splits ``` validation: Dataset({ features: ['image', 'text'], num_rows: 33650 }) train: Dataset({ features: ['image', 'text'], num_rows: 157070 }) test: Dataset({ features: ['image', 'text'], num_rows: 33657 }) ``` ## Misc For Dataset Creation, Annotations, Considerations for Using the Data, Social Impact of Dataset, Additional Information, and Licensing Information, look at [plotQA](https://github.com/NiteshMethani/PlotQA) ### Citation Information Please cite the following if you use the PlotQA dataset in your work: ``` @InProceedings{Methani_2020_WACV, author = {Methani, Nitesh and Ganguly, Pritha and Khapra, Mitesh M. and Kumar, Pratyush}, title = {PlotQA: Reasoning over Scientific Plots}, booktitle = {The IEEE Winter Conference on Applications of Computer Vision (WACV)}, month = {March}, year = {2020} } ```
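The card above notes that Donut's `json2token` helper was used to convert the `models` JSON into the `text` string. As a rough illustration of that idea (a simplified sketch, not the exact helper linked in the card), the conversion can be written as:

```python
# Simplified sketch of a Donut-style json2token conversion: each dict key
# becomes a pair of <s_key>...</s_key> tokens, list items are joined with <sep/>.
def json2token(obj):
    if isinstance(obj, dict):
        return "".join(f"<s_{k}>{json2token(v)}</s_{k}>" for k, v in obj.items())
    if isinstance(obj, list):
        return "<sep/>".join(json2token(item) for item in obj)
    return str(obj)

# Hypothetical miniature 'models' annotation, for illustration only.
example = {"models": [{"name": "cars", "x": [1, 2], "y": [3, 4]}]}
print(json2token(example))
# → <s_models><s_name>cars</s_name><s_x>1<sep/>2</s_x><s_y>3<sep/>4</s_y></s_models>
```

The real helper in the Donut repository additionally registers the `<s_key>` markers as special tokens in the tokenizer, which is why the card points out they are already loaded in the processor.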
2,957
[ [ -0.018096923828125, -0.025146484375, 0.02703857421875, 0.003299713134765625, -0.012451171875, 0.01027679443359375, 0.020172119140625, -0.0159454345703125, 0.0237579345703125, 0.046783447265625, -0.033782958984375, -0.042236328125, -0.045867919921875, -0.0083...
karukas/mediasum-summary-matching
2023-02-11T00:05:53.000Z
[ "region:us" ]
karukas
null
null
0
10
2023-02-11T00:04:28
--- dataset_info: features: - name: sentence1 dtype: string - name: sentence2 dtype: string splits: - name: train num_bytes: 4149687650 num_examples: 443596 - name: validation num_bytes: 92028438 num_examples: 10000 - name: test num_bytes: 94033599 num_examples: 10000 download_size: 2438334598 dataset_size: 4335749687 --- # Dataset Card for "mediasum-summary-matching" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
553
[ [ -0.042327880859375, 0.004474639892578125, 0.00531768798828125, 0.00969696044921875, -0.01491546630859375, -0.00927734375, 0.0202789306640625, 0.0074462890625, 0.0728759765625, 0.037811279296875, -0.061798095703125, -0.042999267578125, -0.04034423828125, -0.0...
HiTZ/euscrawl
2023-02-14T19:00:22.000Z
[ "task_categories:text-generation", "task_categories:fill-mask", "task_ids:language-modeling", "task_ids:masked-language-modeling", "annotations_creators:no-annotation", "language_creators:found", "multilinguality:monolingual", "size_categories:10M<n<100M", "source_datasets:original", "language:eu"...
HiTZ
EusCrawl (http://www.ixa.eus/euscrawl/) is a high-quality corpus for Basque comprising 12.5 million documents and 423 million tokens, totalling 2.1 GiB of uncompressed text. EusCrawl was built using ad-hoc scrapers to extract text from 33 Basque websites with high-quality content, resulting in cleaner text compared to general purpose approaches. We do not claim ownership of any document in the corpus. All documents we collected were published under a Creative Commons license in their original website, and the specific variant can be found in the "license" field of each document. Should you consider that our data contains material that is owned by you and you would not like to be reproduced here, please contact Aitor Soroa at a.soroa@ehu.eus. For more details about the corpus, refer to our paper "Artetxe M., Aldabe I., Agerri R., Perez-de-Viñaspre O, Soroa A. (2022). Does Corpus Quality Really Matter for Low-Resource Languages?" https://arxiv.org/abs/2203.08111 If you use our corpus or models for academic research, please cite the paper in question: @misc{artetxe2022euscrawl, title={Does corpus quality really matter for low-resource languages?}, author={Mikel Artetxe, Itziar Aldabe, Rodrigo Agerri, Olatz Perez-de-Viñaspre, Aitor Soroa}, year={2022}, eprint={2203.08111}, archivePrefix={arXiv}, primaryClass={cs.CL} } For questions please contact Aitor Soroa at a.soroa@ehu.eus.
@misc{artetxe2022euscrawl, title={Does corpus quality really matter for low-resource languages?}, author={Mikel Artetxe, Itziar Aldabe, Rodrigo Agerri, Olatz Perez-de-Viñaspre, Aitor Soroa}, year={2022}, eprint={2203.08111}, archivePrefix={arXiv}, primaryClass={cs.CL} }
2
10
2023-02-13T20:13:26
--- annotations_creators: - no-annotation language: - eu language_creators: - found license: - cc multilinguality: - monolingual pretty_name: EusCrawl size_categories: - 10M<n<100M source_datasets: - original tags: - high-quality - scraping task_categories: - text-generation - fill-mask task_ids: - language-modeling - masked-language-modeling dataset_info: features: - name: id dtype: int32 - name: title dtype: string - name: text dtype: string - name: source dtype: string - name: license dtype: string - name: url dtype: string splits: - name: train num_bytes: 2314407002 num_examples: 1724544 download_size: 728281801 dataset_size: 2314407002 --- # Dataset Card for EusCrawl ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://ixa.ehu.eus/euscrawl/ - **Repository:** - **Paper:** https://arxiv.org/abs/2203.08111 - **Leaderboard:** - **Point of Contact:** a.soroa@ehu.eus ### Dataset Summary EusCrawl (http://www.ixa.eus/euscrawl/) is a 
high-quality corpus for Basque comprising 12.5 million documents and 423 million tokens, totalling 2.1 GiB of uncompressed text. EusCrawl was built using ad-hoc scrapers to extract text from 33 Basque websites with high-quality content, resulting in cleaner text compared to general purpose approaches. ### Supported Tasks and Leaderboards EusCrawl is intended for pretraining models for language modeling or masked language modeling. ### Languages Basque (eu) ## Dataset Structure ### Data Instances ```json { "id": 6, "title": "Herriko enpresa handien eta txikien arteko topaketak egingo dituzte", "text": "09:30ean hasiko da bilera eta aurkezpena egingo dute Tubacex, JEZ, Envases, Guardian eta Vidrala enpresek. Eskualdeko lantegi motorrekin beste enpresa txikiak eta ertainak egongo dira. Erakunde publikoaren helburua da euren artean ezagutzea eta elkarlana sustatzea.", "source": "aiaraldea", "license": "cc-by-sa 3.0", "url": "https://aiaraldea.eus/laudio/1494603159768-herriko-enpresa-handien-eta-txikien-arteko-topaketak-egingo-dituzte", } ``` ### Data Fields - "id": example id - "title": article title - "text": article text - "source": article source - "license": article license - "url": article url ### Data Splits The dataset only has one training split because it is intended for pretraining language models. ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? 
[More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information We do not claim ownership of any document in the corpus. All documents we collected were published under a Creative Commons license in their original website, and the specific variant can be found in the "license" field of each document. Should you consider that our data contains material that is owned by you and you would not like to be reproduced here, please contact Aitor Soroa at a.soroa@ehu.eus. ### Citation Information If you use our corpus or models for academic research, please cite the paper in question: ```bibtex @misc{artetxe2022euscrawl, title={Does corpus quality really matter for low-resource languages?}, author={Mikel Artetxe, Itziar Aldabe, Rodrigo Agerri, Olatz Perez-de-Viñaspre, Aitor Soroa}, year={2022}, eprint={2203.08111}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` ### Contributions Thanks to [@juletx](https://github.com/juletx) for adding this dataset.
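Since the licensing section asks users to respect the per-document Creative Commons variant recorded in the `license` field, a small sketch of checking licenses across records may help (the first record is the instance shown in the card; the second is hypothetical):

```python
from collections import Counter

# Records have the shape shown in the Data Instances section above.
docs = [
    {"id": 6, "source": "aiaraldea", "license": "cc-by-sa 3.0",
     "text": "09:30ean hasiko da bilera eta aurkezpena egingo dute ..."},
    {"id": 7, "source": "example", "license": "cc-by 4.0", "text": "..."},  # hypothetical
]

# Tally documents per license variant before reuse, as the card recommends
# consulting the "license" field of each document.
license_counts = Counter(doc["license"] for doc in docs)
print(license_counts.most_common())
```

The same pattern works on the full corpus by iterating the `train` split instead of the in-memory list.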
5,115
[ [ -0.0369873046875, -0.032501220703125, 0.0064239501953125, 0.0213623046875, -0.0184173583984375, 0.005451202392578125, -0.025970458984375, -0.039459228515625, 0.047027587890625, 0.0323486328125, -0.049468994140625, -0.05303955078125, -0.0293121337890625, 0.02...
jonathan-roberts1/RS_C11
2023-03-31T17:07:50.000Z
[ "task_categories:image-classification", "task_categories:zero-shot-image-classification", "license:other", "region:us" ]
jonathan-roberts1
null
null
0
10
2023-02-14T18:12:02
--- dataset_info: features: - name: image dtype: image - name: label dtype: class_label: names: '0': dense forest '1': grassland '2': harbor '3': high buildings '4': low buildings '5': overpass '6': railway '7': residential area '8': roads '9': sparse forest '10': storage tanks splits: - name: train num_bytes: 969136595.28 num_examples: 1232 download_size: 916398984 dataset_size: 969136595.28 license: other task_categories: - image-classification - zero-shot-image-classification --- # Dataset Card for "RS_C11" ## Dataset Description - **Paper** [Feature significance-based multibag-of-visual-words model for remote sensing image scene classification](https://www.spiedigitallibrary.org/journals/journal-of-applied-remote-sensing/volume-10/issue-3/035004/Feature-significance-based-multibag-of-visual-words-model-for-remote/10.1117/1.JRS.10.035004.pdf) ### Licensing Information Free usage without license. ## Citation Information [Feature significance-based multibag-of-visual-words model for remote sensing image scene classification](https://www.spiedigitallibrary.org/journals/journal-of-applied-remote-sensing/volume-10/issue-3/035004/Feature-significance-based-multibag-of-visual-words-model-for-remote/10.1117/1.JRS.10.035004.pdf) ``` @article{zhao2016feature, title = {Feature significance-based multibag-of-visual-words model for remote sensing image scene classification}, author = {Zhao, Lijun and Tang, Ping and Huo, Lianzhi}, year = 2016, journal = {Journal of Applied Remote Sensing}, publisher = {Society of Photo-Optical Instrumentation Engineers}, volume = 10, number = 3, pages = {035004--035004} } ```
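The integer labels returned by the `class_label` feature above map to scene names by index. A minimal sketch of that mapping (names copied from the `dataset_info` block in this card):

```python
# Class-label names as declared in the dataset_info above (index = integer label).
RS_C11_NAMES = [
    "dense forest", "grassland", "harbor", "high buildings", "low buildings",
    "overpass", "railway", "residential area", "roads", "sparse forest",
    "storage tanks",
]

def label_to_name(label: int) -> str:
    """Map an integer class label to its human-readable scene name."""
    return RS_C11_NAMES[label]

print(label_to_name(3))  # → high buildings
```

When the dataset is loaded with `datasets`, the same mapping is also available via the feature's `int2str` method.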
1,850
[ [ -0.03680419921875, -0.03564453125, 0.0004482269287109375, 0.0187530517578125, -0.040924072265625, -0.017852783203125, 0.0018520355224609375, -0.04168701171875, -0.00093841552734375, 0.0186309814453125, -0.0316162109375, -0.05596923828125, -0.07012939453125, ...
rubentito/mp-docvqa
2023-02-27T16:09:10.000Z
[ "task_categories:question-answering", "task_categories:document-question-answering", "multilinguality:monolingual", "source_datasets:Single Page Document Visual Question Answering", "language:en", "license:mit", "arxiv:2212.05935", "region:us" ]
rubentito
null
null
0
10
2023-02-21T08:36:46
--- pretty_name: MP-DocVQA (Multipage Document Visual Question Answering) license: mit task_categories: - question-answering - document-question-answering - document-visual-question-answering language: - en multilinguality: - monolingual source_datasets: - Single Page Document Visual Question Answering --- # Dataset Card for Multipage Document Visual Question Answering (MP-DocVQA) ## Dataset Description - **Homepage: [Robust Reading Competition Portal](https://rrc.cvc.uab.es/?ch=17&com=introduction)** - **Repository: [Robust Reading Competition Portal](https://rrc.cvc.uab.es/?ch=17&com=downloads)** - **Paper: [Hierarchical multimodal transformers for Multi-Page DocVQA](https://arxiv.org/abs/2212.05935.pdf)** - **Leaderboard: [Task 4 of DocVQA on the Robust Reading Competition Portal](https://rrc.cvc.uab.es/?ch=17&com=evaluation&task=4)** ### Dataset Summary The dataset is aimed at Visual Question Answering on multipage industry scanned documents. The questions and answers are reused from the Single Page DocVQA (SP-DocVQA) dataset. The images correspond to the same documents as in the original dataset, extended with the previous and following pages up to a limit of 20 pages per document. ### Download the Dataset The dataset is not integrated with Huggingface yet. But you can download it from the [DocVQA Challenge](https://rrc.cvc.uab.es/?ch=17) in the RRC Portal, [Downloads section](https://rrc.cvc.uab.es/?ch=17&com=downloads). ### Leaderboard You can also check the live leaderboard at the [RRC Portal](https://rrc.cvc.uab.es/?ch=17&com=evaluation&task=4) ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits | | Train | Validation | Test | Total | |----------|:-----:|:-----------:|:------:|:-------:| |**Questions** |36230 | 5187 |5019 | 46436 | |**Documents** |5131 | 927 |959 | 5929 | |**Pages / Images** |37269 | 6510 |6223 | 47952 | Note that some documents might appear in both validation and test sets. 
But they are never seen during training. ### Citation Information ```tex @article{tito2022hierarchical, title={Hierarchical multimodal transformers for Multi-Page DocVQA}, author={Tito, Rub{\`e}n and Karatzas, Dimosthenis and Valveny, Ernest}, journal={arXiv preprint arXiv:2212.05935}, year={2022} } ```
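The note about overlapping documents can be read off the split table directly: question counts are disjoint across splits and add up to the total, while document (and page) counts do not, because validation and test share documents. A quick sanity check with the table's numbers:

```python
# Split counts copied from the Data Splits table above.
splits = {
    "train":      {"questions": 36230, "documents": 5131, "pages": 37269},
    "validation": {"questions": 5187,  "documents": 927,  "pages": 6510},
    "test":       {"questions": 5019,  "documents": 959,  "pages": 6223},
}
totals = {"questions": 46436, "documents": 5929, "pages": 47952}

summed = {k: sum(s[k] for s in splits.values()) for k in totals}
# Questions are disjoint across splits, so they add up exactly...
assert summed["questions"] == totals["questions"]
# ...while documents and their pages overlap between validation and test,
# so naive per-split sums exceed the de-duplicated totals.
assert summed["documents"] > totals["documents"]
assert summed["pages"] > totals["pages"]
print(summed)
```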
2,361
[ [ -0.044342041015625, -0.048065185546875, 0.0224456787109375, 0.009857177734375, 0.00008082389831542969, -0.007354736328125, 0.01361083984375, -0.00400543212890625, -0.006862640380859375, 0.0247344970703125, -0.057830810546875, -0.043609619140625, -0.0361938476562...
wwydmanski/colorectal-carcinoma-microbiome-fengq
2023-02-25T15:34:21.000Z
[ "task_categories:tabular-classification", "size_categories:n<1K", "microbiome", "tabular", "gut-microbiota", "region:us" ]
wwydmanski
The dataset contains 16S rRNA gene sequencing data from healthy controls and colorectal cancer patients. The dataset was used in the paper "Gut microbiome development along the colorectal adenoma-carcinoma sequence" by Feng et al. (2015).
Feng Q, Liang S, Jia H, et al. Gut microbiome development along the colorectal adenoma-carcinoma sequence. Nat Commun. 2015;6:6528. Published 2015 Mar 11. doi:10.1038/ncomms7528
1
10
2023-02-24T10:27:04
--- task_categories: - tabular-classification tags: - microbiome - tabular - gut-microbiota pretty_name: Colorectal Carcinoma Feng Q 2015 size_categories: - n<1K --- ## Publication Abstract Colorectal cancer, a commonly diagnosed cancer in the elderly, often develops slowly from benign polyps called adenoma. The gut microbiota is believed to be directly involved in colorectal carcinogenesis. The identity and functional capacity of the adenoma- or carcinoma-related gut microbe(s), however, have not been surveyed in a comprehensive manner. Here we perform a metagenome-wide association study (MGWAS) on stools from advanced adenoma and carcinoma patients and from healthy subjects, revealing microbial genes, strains and functions enriched in each group. An analysis of potential risk factors indicates that high intake of red meat relative to fruits and vegetables appears to associate with outgrowth of bacteria that might contribute to a more hostile gut environment. These findings suggest that faecal microbiome-based strategies may be useful for early diagnosis and treatment of colorectal adenoma or carcinoma. ## Dataset 156 metagenomic shotgun-sequenced faecal samples from colorectal adenoma and carcinoma patients and healthy controls ### Configurations - `presence-absence` - `CLR` ## Usage ```python import numpy as np from datasets import load_dataset dataset = load_dataset("wwydmanski/colorectal-carcinoma-microbiome-fengq", "presence-absence") train_dataset, test_dataset = dataset['train'], dataset['test'] X_train = np.array(train_dataset['values']) y_train = np.array(train_dataset['target']) X_test = np.array(test_dataset['values']) y_test = np.array(test_dataset['target']) ```
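The `CLR` configuration presumably refers to the centered log-ratio transform commonly applied to compositional microbiome abundances. A minimal sketch of that transform (an illustration of the general technique, not necessarily the exact preprocessing used to build this configuration):

```python
import math

def clr(composition):
    """Centered log-ratio transform: log(x_i / geometric_mean(x)).

    Expects strictly positive abundances; zeros are usually replaced with a
    small pseudocount before applying CLR.
    """
    logs = [math.log(x) for x in composition]
    mean_log = sum(logs) / len(logs)  # log of the geometric mean
    return [lx - mean_log for lx in logs]

values = clr([0.5, 0.3, 0.2])
print(values)
# A CLR-transformed vector always sums to (numerically) zero.
```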
1,659
[ [ -0.02117919921875, -0.0255584716796875, 0.0531005859375, -0.0259552001953125, -0.0155029296875, -0.01177978515625, 0.013458251953125, -0.021209716796875, 0.04998779296875, 0.01322174072265625, -0.032806396484375, -0.0234527587890625, -0.0491943359375, 0.0287...
Duskfallcrew/DuskfallCrewArtStyle_Lora
2023-04-25T04:30:25.000Z
[ "task_categories:text-to-image", "size_categories:1K<n<10K", "language:en", "license:creativeml-openrail-m", "Art Style", "duskfallcrew", "region:us" ]
Duskfallcrew
null
null
0
10
2023-03-01T05:31:16
--- license: creativeml-openrail-m task_categories: - text-to-image language: - en tags: - Art Style - duskfallcrew pretty_name: Duskfallcrew Art Style Dataset & Lora size_categories: - 1K<n<10K --- # Dataset Card for DuskfallCrewArtStyle_Lora ## Dataset Description - **Homepage:https://duskfallcrew.carrd.co/** - **Point of Contact: See the Carrd website for contact info, or DM us on HF** ### Dataset Summary This data set is the basis for the LoRa that is in this repository. ### Supported Tasks and Leaderboards Text to Image / Stable Diffusion / LoRA ### Languages English ### Source Data ### Personal and Sensitive Information This is based on our own Art, and while we're A OK for you to use it, you don't own the art within the dataset, but you may not care to anyways. ## Considerations for Using the Data ### Social Impact of Dataset Shitty Art! ### Discussion of Biases It largely has non binary features, not sure if it has any one specific gender. We have Dissociative identity disorder so largely the faces in here are either alters in our system or other systems we've done art for. ### Other Known Limitations SHITTYART! ## Additional Information ### Licensing Information While it's under the license listed, we do ask that you don't resell the dataset. You're responsible for your use of the dataset, and the faces within it. Your outputs are up to you. ### Citation Information If you use the dataset, citation is nice, but it'd be even nicer if you gave us coffee! https://ko-fi.com/DUSKFALLcrew
1,557
[ [ -0.00986480712890625, -0.05950927734375, 0.01538848876953125, 0.040496826171875, -0.024444580078125, 0.0140228271484375, 0.022125244140625, -0.052093505859375, 0.040618896484375, 0.052886962890625, -0.063720703125, -0.063720703125, -0.033050537109375, -0.013...
zeusfsx/ukrainian-news
2023-05-14T08:04:18.000Z
[ "task_categories:text-generation", "size_categories:10M<n<100M", "language:uk", "license:unknown", "news", "region:us" ]
zeusfsx
Ukrainian News Dataset This is a dataset of news articles downloaded from various Ukrainian websites and Telegram channels. The dataset contains approximately 23M JSON objects (news)
null
9
10
2023-03-01T18:34:15
--- license: unknown task_categories: - text-generation language: - uk pretty_name: ukr-news size_categories: - 10M<n<100M tags: - news --- # Ukrainian News Dataset This is a dataset of news articles downloaded from various Ukrainian websites and Telegram channels. The dataset contains 22 567 099 JSON objects (news), total size ~67GB, each with the following fields: ```json title: The title of the news article text: The text of the news article, which may contain HTML tags (e.g., paragraphs, links, images, etc.) url: The URL of the news article datetime: The time of publication or when the article was parsed and added to the dataset owner: The name of the website that published the news article ``` Count of news from websites: 16 022 416 Count of telegram posts: 6 544 683 The JSON objects are divided into parts, and the dataset is available for download via Hugging Face. The terms of use state that all data in this dataset is under the copyright of the owners of the respective websites. ## Accessing the Dataset The dataset is available for download via the Hugging Face datasets library. You can install the library via pip: ```bash pip install datasets ``` Once you have installed the library, you can load the dataset using the following code: ```python from datasets import load_dataset dataset = load_dataset('zeusfsx/ukrainian-news') ``` This will load the entire dataset into memory. If you prefer to load only a subset of the data, you can specify the split argument: ```python # Load only the first 10,000 examples from the "train" split dataset = load_dataset('zeusfsx/ukrainian-news', split='train[:10000]') ``` ## Contacts If you have any questions or comments about this dataset, please contact me at email [zeusfsxtmp@gmail.com]. I will do my best to respond to your inquiry as soon as possible. ## License The dataset is made available under the terms of use specified by the owners of the respective websites. 
Please consult the individual websites for more information on their terms of use.
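Because the `text` field may contain HTML tags, stripping markup before language-model training is a common preprocessing step. One possible stdlib-only approach (a sketch, not part of the dataset itself):

```python
from html.parser import HTMLParser

class TagStripper(HTMLParser):
    """Collects only text content, dropping all HTML tags."""
    def __init__(self):
        super().__init__()
        self.chunks = []

    def handle_data(self, data):
        self.chunks.append(data)

def strip_tags(html_text: str) -> str:
    parser = TagStripper()
    parser.feed(html_text)
    return "".join(parser.chunks)

print(strip_tags("<p>Новини <a href='...'>тут</a></p>"))  # → Новини тут
```

For production-scale cleaning, dedicated libraries handle malformed markup more robustly, but this suffices for well-formed article bodies.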
2,054
[ [ -0.02056884765625, -0.037139892578125, 0.0218048095703125, 0.0311279296875, -0.044586181640625, 0.00290679931640625, -0.0134429931640625, -0.015625, 0.0230865478515625, 0.034423828125, -0.054534912109375, -0.053863525390625, -0.033294677734375, 0.01216888427...
renumics/dcase23-task2-enriched
2023-06-06T06:24:26.000Z
[ "task_categories:audio-classification", "size_categories:1K<n<10K", "license:cc-by-4.0", "anomaly detection", "anomalous sound detection", "acoustic condition monitoring", "sound machine fault diagnosis", "machine learning", "unsupervised learning", "acoustic scene classification", "acoustic eve...
renumics
null
@dataset{kota_dohi_2023_7882613, author = {Kota Dohi and Keisuke Imoto and Noboru Harada and Daisuke Niizumi and Yuma Koizumi and Tomoya Nishida and Harsh Purohit and Takashi Endo and Yohei Kawaguchi}, title = {DCASE 2023 Challenge Task 2 Development Dataset}, month = mar, year = 2023, publisher = {Zenodo}, version = {3.0}, doi = {10.5281/zenodo.7882613}, url = {https://doi.org/10.5281/zenodo.7882613} }
3
10
2023-03-02T12:41:35
--- license: cc-by-4.0 task_categories: - audio-classification pretty_name: >- Enriched DCASE 2023 Challenge Task 2 Dataset size_categories: - 1K<n<10K tags: - anomaly detection - anomalous sound detection - acoustic condition monitoring - sound machine fault diagnosis - machine learning - unsupervised learning - acoustic scene classification - acoustic event detection - acoustic signal processing - audio domain shift - domain generalization --- # Dataset Card for the Enriched "DCASE 2023 Challenge Task 2 Dataset". ## Table of contents [//]: # (todo: create new) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Explore the data with Spotlight](#explore-the-data-with-spotlight) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Baseline system](#baseline-system) - [Dataset Curators](#dataset-curators) - [Licensing Information - Condition of use](#licensing-information---condition-of-use) - [Citation Information (original)](#citation-information-original) ## Dataset Description - **Homepage:** [Renumics Homepage](https://renumics.com/) - **Homepage** [DCASE23 Task 2 Challenge](https://dcase.community/challenge2023/task-first-shot-unsupervised-anomalous-sound-detection-for-machine-condition-monitoring#evaluation) - **Homepage:** [HF Dataset Creator](https://syoy.github.io/) - **Original Dataset Upload (Dev)** [ZENODO: DCASE 2023 Challenge Task 2 Development 
Dataset](https://zenodo.org/record/7687464#.Y_9VtdLMLmE) - **Paper** [MIMII DG](https://arxiv.org/abs/2205.13879) - **Paper** [ToyADMOS2](https://arxiv.org/abs/2106.02369) - **Paper** [First-shot anomaly detection for machine condition monitoring: A domain generalization baseline](https://arxiv.org/pdf/2303.00455.pdf) ### Dataset Summary [Data-centric AI](https://datacentricai.org) principles have become increasingly important for real-world use cases. At [Renumics](https://renumics.com/) we believe that classical benchmark datasets and competitions should be extended to reflect this development. This is why we are publishing benchmark datasets with application-specific enrichments (e.g. embeddings, baseline results, uncertainties, label error scores). We hope this helps the ML community in the following ways: 1. Enable new researchers to quickly develop a profound understanding of the dataset. 2. Popularize data-centric AI principles and tooling in the ML community. 3. Encourage the sharing of meaningful qualitative insights in addition to traditional quantitative metrics. This dataset is an enriched version of the [dataset](https://zenodo.org/record/7690148#.ZAXsSdLMLmE) provided in the context of the [anomalous sound detection task](https://dcase.community/challenge2023/task-first-shot-unsupervised-anomalous-sound-detection-for-machine-condition-monitoring) of the [DCASE2023 challenge](https://dcase.community/challenge2023/). The enrichments include an embedding generated by a pre-trained [Audio Spectrogram Transformer](https://huggingface.co/docs/transformers/model_doc/audio-spectrogram-transformer#transformers.ASTFeatureExtractor) and results of the official challenge [baseline implementation](https://github.com/nttcslab/dase2023_task2_baseline_ae). 
### DCASE23 Task2 Dataset Once a year, the [DCASE community](https://dcase.community/) publishes a [challenge](https://dcase.community/challenge2023/) with several tasks in the context of acoustic event detection and classification. [Task 2 of this challenge](https://dcase.community/challenge2023/task-first-shot-unsupervised-anomalous-sound-detection-for-machine-condition-monitoring) deals with anomalous sound detection for machine condition monitoring. The original dataset is based on the [MIMII DG](https://arxiv.org/abs/2205.13879) and the [ToyADMOS2](https://arxiv.org/abs/2106.02369) datasets. Please cite the papers by [Harada et al.](https://arxiv.org/abs/2106.02369) and [Dohi et al.](https://arxiv.org/abs/2205.13879) if you use this dataset and the paper by [Harada et al.](https://arxiv.org/pdf/2303.00455.pdf) if you use the baseline results. ### Explore Dataset ![Analyze DCASE23 Task 2 with Spotlight](https://spotlight.renumics.com/resources/preview_dcase_1.png) The enrichments allow you to quickly gain insights into the dataset. The open source data curation tool Renumics Spotlight enables that with just a few lines of code: Install datasets and Spotlight via [pip](https://packaging.python.org/en/latest/key_projects/#pip): ```python !pip install renumics-spotlight datasets[audio] ``` > **_Notice:_** On Linux, non-Python dependency on libsndfile package must be installed manually. See [Datasets - Installation](https://huggingface.co/docs/datasets/installation#audio) for more information. 
Load the dataset from Hugging Face in your notebook: ```python import datasets dataset = datasets.load_dataset("renumics/dcase23-task2-enriched", "dev", split="all", streaming=False) ``` Start exploring with a simple view that leverages embeddings to identify relevant data segments: ```python from renumics import spotlight df = dataset.to_pandas() simple_layout = datasets.load_dataset_builder("renumics/dcase23-task2-enriched", "dev").config.get_layout(config="simple") spotlight.show(df, dtype={'path': spotlight.Audio, "embeddings_ast-finetuned-audioset-10-10-0.4593": spotlight.Embedding}, layout=simple_layout) ``` You can use the UI to interactively configure the view on the data. Depending on the concrete task (e.g. model comparison, debugging, outlier detection) you might want to leverage different enrichments and metadata. In this example we focus on the valve class. We specifically look at normal data points that have high anomaly scores in both models. This is one example of how to find difficult examples or edge cases: ```python from renumics import spotlight extended_layout = datasets.load_dataset_builder("renumics/dcase23-task2-enriched", "dev").config.get_layout(config="extended") spotlight.show(df, dtype={'path': spotlight.Audio, "embeddings_ast-finetuned-audioset-10-10-0.4593": spotlight.Embedding}, layout=extended_layout) ``` ![Analyze DCASE23 Task 2 with Spotlight](data/preview_dcase_2.png "Analyze DCASE23 Task 2 with Spotlight") ## Using custom model results and enrichments When developing your custom model, you want to use different kinds of information from your model (e.g. embeddings, anomaly scores, etc.) to gain further insights into the dataset and the model behavior. Suppose you have your model's embeddings for each datapoint as a 2D-Numpy array called `embeddings` and your anomaly score as a 1D-Numpy array called `anomaly_scores`. 
Then you can add this information to the dataset: ```python df['my_model_embedding'] = embeddings df['anomaly_score'] = anomaly_scores ``` Depending on your concrete task you might want to use different enrichments. For a good overview on great open source tooling for uncertainty quantification, explainability and outlier detection, you can take a look at our [curated list for open source data-centric AI tooling](https://github.com/Renumics/awesome-open-data-centric-ai) on Github. You can also save your view configuration in Spotlight in a JSON configuration file by clicking on the respective icon: ![Save a data curation layout in Spotlight](data/spotlight_save_layout.png "Save a data curation layout in Spotlight") For more information on how to configure the Spotlight UI please refer to the [documentation](https://spotlight.renumics.com). ## Dataset Structure ### Data Instances For each instance, there is an Audio for the audio, a string for the path, an integer for the section, a string for the d1p (parameter), a string for the d1v (value), a ClassLabel for the label and a ClassLabel for the class. ```python {'audio': {'array': array([ 0. , 0.00024414, -0.00024414, ..., -0.00024414, 0. , 0. ], dtype=float32), 'path': 'train/fan_section_01_source_train_normal_0592_f-n_A.wav', 'sampling_rate': 16000 } 'path': 'train/fan_section_01_source_train_normal_0592_f-n_A.wav' 'section': 1 'd1p': 'f-n' 'd1v': 'A' 'd2p': 'nan' 'd2v': 'nan' 'd3p': 'nan' 'd3v': 'nan' 'domain': 0 (source) 'label': 0 (normal) 'class': 1 (fan) 'dev_train_lof_anomaly': 0 'dev_train_lof_anomaly_score': 1.241023 'add_train_lof_anomaly': 1 'add_train_lof_anomaly_score': 1.806289 'ast-finetuned-audioset-10-10-0.4593-embeddings': [0.8152204155921936, 1.5862374305725098, ..., 1.7154160737991333] } ``` The length of each audio file is 10 seconds. ### Data Fields - `audio`: a `datasets.Audio` - `path`: a string representing the path of the audio file inside the _tar.gz._-archive. 
- `section`: an integer representing the section, see [Definition](#Description) - `d*p`: a string representing the name of the d*-parameter - `d*v`: a string representing the value of the corresponding d*-parameter - `domain`: an integer whose value may be either _0_, indicating that the audio sample is from the _source_ domain, or _1_, indicating that the audio sample is from the _target_ domain. - `class`: an integer as class label. - `label`: an integer whose value may be either _0_, indicating that the audio sample is _normal_, or _1_, indicating that the audio sample contains an _anomaly_. - `[X]_lof_anomaly`: an integer as anomaly indicator. The anomaly prediction is computed with the [Local Outlier Factor](https://scikit-learn.org/stable/modules/generated/sklearn.neighbors.LocalOutlierFactor.html) algorithm based on the "[X]"-dataset. - `[X]_lof_anomaly_score`: a float as anomaly score. The anomaly score is computed with the [Local Outlier Factor](https://scikit-learn.org/stable/modules/generated/sklearn.neighbors.LocalOutlierFactor.html) algorithm based on the "[X]"-dataset. - `embeddings_ast-finetuned-audioset-10-10-0.4593`: a `datasets.Sequence(Value("float32"), shape=(1, 768))` representing audio embeddings that are generated with an [Audio Spectrogram Transformer](https://huggingface.co/docs/transformers/model_doc/audio-spectrogram-transformer#transformers.ASTFeatureExtractor). ### Data Splits The development dataset has 2 splits: _train_ and _test_. | Dataset Split | Number of Instances in Split | Source Domain / Target Domain Samples | | ------------- |------------------------------|---------------------------------------| | Train | 7000 | 6930 / 70 | | Test | 1400 | 700 / 700 | The additional training dataset has 1 split: _train_. 
| Dataset Split | Number of Instances in Split | Source Domain / Target Domain Samples | | ------------- |------------------------------|---------------------------------------| | Train | 7000 | 6930 / 70 | The evaluation dataset has 1 split: _test_. | Dataset Split | Number of Instances in Split | Source Domain / Target Domain Samples | |---------------|------------------------------|---------------------------------------| | Test | 1400 | ? | ## Dataset Creation The following information is copied from the original [dataset upload on zenodo.org](https://zenodo.org/record/7690148#.ZAXsSdLMLmE) ### Curation Rationale This dataset is the "development dataset" for the [DCASE 2023 Challenge Task 2 "First-Shot Unsupervised Anomalous Sound Detection for Machine Condition Monitoring"](https://dcase.community/challenge2023/task-unsupervised-anomalous-sound-detection-for-machine-condition-monitoring). The data consists of the normal/anomalous operating sounds of seven types of real/toy machines. Each recording is a single-channel 10-second audio clip that includes both a machine's operating sound and environmental noise. The following seven types of real/toy machines are used in this task: - ToyCar - ToyTrain - Fan - Gearbox - Bearing - Slide rail - Valve The "additional training data" and "evaluation data" datasets contain the following classes: - bandsaw - grinder - shaker - ToyDrone - ToyNscale - ToyTank - Vacuum ### Source Data #### Definition We first define key terms in this task: "machine type," "section," "source domain," "target domain," and "attributes". - "Machine type" indicates the type of machine, which in the development dataset is one of seven: fan, gearbox, bearing, slide rail, valve, ToyCar, and ToyTrain. - A section is defined as a subset of the dataset for calculating performance metrics. 
- The source domain is the domain under which most of the training data and some of the test data were recorded, and the target domain is a different set of domains under which some of the training data and some of the test data were recorded. There are differences between the source and target domains in terms of operating speed, machine load, viscosity, heating temperature, type of environmental noise, signal-to-noise ratio, etc. - Attributes are parameters that define states of machines or types of noise. #### Description This dataset consists of seven machine types. For each machine type, one section is provided, and the section is a complete set of training and test data. For each section, this dataset provides (i) 990 clips of normal sounds in the source domain for training, (ii) ten clips of normal sounds in the target domain for training, and (iii) 100 clips each of normal and anomalous sounds for the test. The source/target domain of each sample is provided. Additionally, the attributes of each sample in the training and test data are provided in the file names and attribute csv files. #### Recording procedure Normal/anomalous operating sounds of machines and its related equipment are recorded. Anomalous sounds were collected by deliberately damaging target machines. For simplifying the task, we use only the first channel of multi-channel recordings; all recordings are regarded as single-channel recordings of a fixed microphone. We mixed a target machine sound with environmental noise, and only noisy recordings are provided as training/test data. The environmental noise samples were recorded in several real factory environments. We will publish papers on the dataset to explain the details of the recording procedure by the submission deadline. ### Supported Tasks and Leaderboards Anomalous sound detection (ASD) is the task of identifying whether the sound emitted from a target machine is normal or anomalous. 
Automatic detection of mechanical failure is an essential technology in the fourth industrial revolution, which involves artificial-intelligence-based factory automation. Prompt detection of machine anomalies by observing sounds is useful for monitoring the condition of machines. This task is the follow-up from DCASE 2020 Task 2 to DCASE 2022 Task 2. The task this year is to develop an ASD system that meets the following four requirements. **1. Train a model using only normal sound (unsupervised learning scenario)** Because anomalies rarely occur and are highly diverse in real-world factories, it can be difficult to collect exhaustive patterns of anomalous sounds. Therefore, the system must detect unknown types of anomalous sounds that are not provided in the training data. This is the same requirement as in the previous tasks. **2. Detect anomalies regardless of domain shifts (domain generalization task)** In real-world cases, the operational states of a machine or the environmental noise can change to cause domain shifts. Domain-generalization techniques can be useful for handling domain shifts that occur frequently or are hard-to-notice. In this task, the system is required to use domain-generalization techniques for handling these domain shifts. This requirement is the same as in DCASE 2022 Task 2. **3. Train a model for a completely new machine type** For a completely new machine type, hyperparameters of the trained model cannot be tuned. Therefore, the system should have the ability to train models without additional hyperparameter tuning. **4. Train a model using only one machine from its machine type** While sounds from multiple machines of the same machine type can be used to enhance detection performance, it is often the case that sound data from only one machine are available for a machine type. In such a case, the system should be able to train models using only one machine from a machine type. 
## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Baseline system The baseline system is available on the Github repository [dcase2023_task2_baseline_ae](https://github.com/nttcslab/dcase2023_task2_baseline_ae). The baseline systems provide a simple entry-level approach that gives a reasonable performance in the dataset of Task 2. They are good starting points, especially for entry-level researchers who want to get familiar with the anomalous-sound-detection task. ### Dataset Curators [//]: # (todo) [More Information Needed] ### Licensing Information - Condition of use This is a feature/embeddings-enriched version of the "DCASE 2023 Challenge Task 2 Development Dataset". The [original dataset](https://dcase.community/challenge2023/task-first-shot-unsupervised-anomalous-sound-detection-for-machine-condition-monitoring#audio-datasets) was created jointly by **Hitachi, Ltd.** and **NTT Corporation** and is available under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0) license. ### Citation Information (original) If you use this dataset, please cite all the following papers. We will publish a paper on DCASE 2023 Task 2, so please make sure to cite that paper, too. - Kota Dohi, Tomoya Nishida, Harsh Purohit, Ryo Tanabe, Takashi Endo, Masaaki Yamamoto, Yuki Nikaido, and Yohei Kawaguchi. MIMII DG: sound dataset for malfunctioning industrial machine investigation and inspection for domain generalization task. In arXiv e-prints: 2205.13879, 2022. [[URL](https://arxiv.org/abs/2205.13879)] - Noboru Harada, Daisuke Niizumi, Daiki Takeuchi, Yasunori Ohishi, Masahiro Yasuda, and Shoichiro Saito. ToyADMOS2: another dataset of miniature-machine operating sounds for anomalous sound detection under domain shift conditions. 
In Proceedings of the 6th Detection and Classification of Acoustic Scenes and Events 2021 Workshop (DCASE2021), 1–5. Barcelona, Spain, November 2021. [[URL](https://dcase.community/documents/workshop2021/proceedings/DCASE2021Workshop_Harada_6.pdf)] - Noboru Harada, Daisuke Niizumi, Daiki Takeuchi, Yasunori Ohishi, and Masahiro Yasuda. First-shot anomaly detection for machine condition monitoring: a domain generalization baseline. In arXiv e-prints: 2303.00455, 2023. [[URL](https://arxiv.org/abs/2303.00455)] ``` @dataset{kota_dohi_2023_7882613, author = {Kota Dohi and Keisuke Imoto and Noboru Harada and Daisuke Niizumi and Yuma Koizumi and Tomoya Nishida and Harsh Purohit and Takashi Endo and Yohei Kawaguchi}, title = {DCASE 2023 Challenge Task 2 Development Dataset}, month = mar, year = 2023, publisher = {Zenodo}, version = {3.0}, doi = {10.5281/zenodo.7882613}, url = {https://doi.org/10.5281/zenodo.7882613} } ```
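The `[X]_lof_anomaly` and `[X]_lof_anomaly_score` enrichments described above can be reproduced on your own embeddings along these lines. This is a minimal sketch, assuming scikit-learn and NumPy are installed; the actual hyperparameters used for the published scores are not documented here, and the toy embedding matrix stands in for real AST embeddings:

```python
import numpy as np
from sklearn.neighbors import LocalOutlierFactor

# Stand-in for real AST embeddings: a tight cluster plus one far-away point.
embeddings = np.array(
    [[0.0, 0.0], [0.1, 0.0], [0.0, 0.1], [0.1, 0.1], [0.05, 0.05], [5.0, 5.0]]
)

lof = LocalOutlierFactor(n_neighbors=3)
pred = lof.fit_predict(embeddings)                 # -1 = anomaly, 1 = inlier
lof_anomaly = (pred == -1).astype(int)             # matches the 0/1 indicator fields
lof_anomaly_score = -lof.negative_outlier_factor_  # higher = more anomalous
```

Scores near 1 indicate points of typical local density; the planted outlier receives a score well above 1 and is flagged in `lof_anomaly`.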
20,175
[ [ -0.031494140625, -0.045196533203125, 0.0239105224609375, 0.0018663406372070312, 0.0027141571044921875, -0.006336212158203125, -0.0181732177734375, -0.025054931640625, 0.01806640625, 0.0212860107421875, -0.07183837890625, -0.058746337890625, -0.04364013671875, ...
SaylorTwift/Gutenberg
2023-03-02T14:33:50.000Z
[ "region:us" ]
SaylorTwift
null
null
3
10
2023-03-02T13:59:30
--- dataset_info: features: - name: id dtype: string - name: title dtype: string - name: author dtype: string - name: authoryearofbirth dtype: int32 - name: authoryearofdeath dtype: int32 - name: downloads dtype: int32 - name: text dtype: string - name: type dtype: string splits: - name: train num_bytes: 20279073235 num_examples: 54810 download_size: 12344747182 dataset_size: 20279073235 --- # Dataset Card for "Gutenberg" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
624
[ [ -0.047515869140625, -0.01873779296875, 0.021392822265625, 0.00653076171875, -0.008148193359375, -0.00421142578125, 0.00501251220703125, -0.0212554931640625, 0.04248046875, 0.03790283203125, -0.05035400390625, -0.0592041015625, -0.04754638671875, -0.019165039...
jjmachan/NSFW-reddit
2023-03-04T10:22:48.000Z
[ "region:us" ]
jjmachan
null
null
2
10
2023-03-04T10:22:32
--- dataset_info: features: - name: title dtype: string - name: subreddit dtype: string - name: post_id dtype: string - name: score dtype: int64 - name: link_flair_text dtype: string - name: is_self dtype: bool - name: over_18 dtype: bool - name: upvote_ratio dtype: float64 - name: is_question dtype: bool - name: C1 dtype: string - name: C2 dtype: string - name: C3 dtype: string - name: C4 dtype: string - name: C5 dtype: string splits: - name: train num_bytes: 3178233 num_examples: 23519 download_size: 1238046 dataset_size: 3178233 --- # Dataset Card for "NSFW-reddit" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
809
[ [ -0.04669189453125, -0.032379150390625, 0.019134521484375, 0.0296173095703125, -0.026397705078125, -0.00939178466796875, 0.0223541259765625, -0.018157958984375, 0.05499267578125, 0.0333251953125, -0.06658935546875, -0.05718994140625, -0.0496826171875, 0.00214...
boragokbakan/entity_disambiguation
2023-03-10T19:29:56.000Z
[ "task_categories:question-answering", "language:en", "license:afl-3.0", "entity disambiguation", "disambiguation", "ned", "GENRE", "BLINK", "region:us" ]
boragokbakan
null
@inproceedings{decao2021autoregressive, author = {Nicola {De Cao} and Gautier Izacard and Sebastian Riedel and Fabio Petroni}, title = {Autoregressive Entity Retrieval}, booktitle = {9th International Conference on Learning Representations, {ICLR} 2021, Virtual Event, Austria, May 3-7, 2021}, publisher = {OpenReview.net}, year = {2021}, url = {https://openreview.net/forum?id=5k8F6UU39V}, }
2
10
2023-03-10T15:56:58
--- license: afl-3.0 language: - en tags: - entity disambiguation - disambiguation - ned - GENRE - BLINK pretty_name: Entity Disambiguation task_categories: - question-answering --- Entity Disambiguation datasets as provided in the [GENRE](https://github.com/facebookresearch/GENRE/blob/main/scripts_genre/download_all_datasets.sh) repo. The dataset can be used to train and evaluate entity disambiguators. The datasets can be imported easily as follows: ``` from datasets import load_dataset ds = load_dataset("boragokbakan/entity_disambiguation", "aida") ``` Available dataset names are: - `ace2004` - `aida` - `aquaint` - `blink` - `clueweb` - `msnbc` - `wiki` **Note:** As the BLINK training set is very large in size (~10GB), it is advised to set `streaming=True` when calling `load_dataset`.
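With `streaming=True`, `load_dataset` returns an iterable rather than an indexable dataset, so a bounded peek with `itertools.islice` avoids materializing the ~10GB BLINK split. A sketch of the pattern, where `fake_stream` is a stand-in for the streamed dataset (the real call is shown in the comment):

```python
from itertools import islice

def fake_stream():
    # Stand-in for the iterable returned by:
    # load_dataset("boragokbakan/entity_disambiguation", "blink",
    #              split="train", streaming=True)
    for i in range(1_000_000):
        yield {"id": i, "input": f"example {i}"}

stream = fake_stream()
first_three = list(islice(stream, 3))  # inspect a few examples cheaply
```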
813
[ [ -0.03424072265625, -0.03436279296875, 0.0179595947265625, 0.0207366943359375, -0.0205535888671875, 0.01174163818359375, -0.0189208984375, -0.0209197998046875, 0.0343017578125, 0.0293426513671875, -0.052001953125, -0.04595947265625, -0.03497314453125, 0.04391...
HuggingFaceGECLM/REDDIT_submissions
2023-03-17T07:44:37.000Z
[ "task_categories:text-generation", "task_ids:dialogue-modeling", "task_ids:language-modeling", "annotations_creators:no-annotation", "language_creators:machine-generated", "multilinguality:monolingual", "size_categories:1B<n<10B", "language:en", "reddit", "social-media", "arxiv:2001.08435", "r...
HuggingFaceGECLM
null
null
2
10
2023-03-15T14:13:43
--- dataset_info: features: - name: allow_live_comments dtype: string - name: archived dtype: string - name: author dtype: string - name: author_fullname dtype: string - name: banned_by dtype: string - name: category dtype: string - name: content_categories dtype: string - name: contest_mode dtype: string - name: created_utc dtype: string - name: discussion_type dtype: string - name: distinguished dtype: string - name: domain dtype: string - name: edited dtype: string - name: gilded dtype: string - name: hidden dtype: string - name: hide_score dtype: string - name: id dtype: string - name: is_created_from_ads_ui dtype: string - name: is_crosspostable dtype: string - name: is_meta dtype: string - name: is_original_content dtype: string - name: is_reddit_media_domain dtype: string - name: is_robot_indexable dtype: string - name: is_self dtype: string - name: is_video dtype: string - name: locked dtype: string - name: media dtype: string - name: media_embed dtype: string - name: media_only dtype: string - name: name dtype: string - name: no_follow dtype: string - name: num_comments dtype: string - name: num_crossposts dtype: string - name: over_18 dtype: string - name: parent_whitelist_status dtype: string - name: permalink dtype: string - name: pinned dtype: string - name: post_hint dtype: string - name: pwls dtype: string - name: quarantine dtype: string - name: removed_by dtype: string - name: removed_by_category dtype: string - name: retrieved_on dtype: string - name: score dtype: string - name: secure_media dtype: string - name: secure_media_embed dtype: string - name: selftext dtype: string - name: send_replies dtype: string - name: spoiler dtype: string - name: stickied dtype: string - name: subreddit_id dtype: string - name: subreddit_name_prefixed dtype: string - name: subreddit_subscribers dtype: string - name: subreddit_type dtype: string - name: suggested_sort dtype: string - name: title dtype: string - name: top_awarded_type dtype: string - name: 
total_awards_received dtype: string - name: treatment_tags dtype: string - name: upvote_ratio dtype: string - name: url dtype: string - name: url_overridden_by_dest dtype: string - name: view_count dtype: string - name: whitelist_status dtype: string - name: wls dtype: string splits: - name: tifu num_bytes: 711926746 num_examples: 526283 - name: explainlikeimfive num_bytes: 1407570925 num_examples: 1811324 - name: WritingPrompts num_bytes: 883683696 num_examples: 1001358 - name: changemyview num_bytes: 366049867 num_examples: 257332 - name: LifeProTips num_bytes: 596724168 num_examples: 715494 - name: todayilearned num_bytes: 1882122179 num_examples: 2153849 - name: science num_bytes: 675817380 num_examples: 872768 - name: askscience num_bytes: 1180347707 num_examples: 1562708 - name: ifyoulikeblank num_bytes: 248876237 num_examples: 221368 - name: Foodforthought num_bytes: 56817554 num_examples: 70647 - name: IWantToLearn num_bytes: 97666128 num_examples: 103347 - name: bestof num_bytes: 230879506 num_examples: 341029 - name: IAmA num_bytes: 375534116 num_examples: 436003 - name: socialskills num_bytes: 327412682 num_examples: 260354 - name: relationship_advice num_bytes: 5050087947 num_examples: 3284961 - name: philosophy num_bytes: 230221165 num_examples: 212792 - name: YouShouldKnow num_bytes: 87706881 num_examples: 94635 - name: history num_bytes: 295389153 num_examples: 284318 - name: books num_bytes: 635450859 num_examples: 692807 - name: Showerthoughts num_bytes: 4859309870 num_examples: 6358205 - name: personalfinance num_bytes: 1813984142 num_examples: 1347837 - name: buildapc num_bytes: 4754190700 num_examples: 3030207 - name: EatCheapAndHealthy num_bytes: 95544413 num_examples: 79694 - name: boardgames num_bytes: 379980593 num_examples: 287493 - name: malefashionadvice num_bytes: 523741819 num_examples: 548587 - name: femalefashionadvice num_bytes: 131338068 num_examples: 131110 - name: scifi num_bytes: 148283250 num_examples: 134568 - name: Fantasy 
num_bytes: 265612464 num_examples: 175866 - name: Games num_bytes: 1112497898 num_examples: 830997 - name: bodyweightfitness num_bytes: 154845910 num_examples: 144829 - name: SkincareAddiction num_bytes: 908265410 num_examples: 890421 - name: podcasts num_bytes: 114495922 num_examples: 113707 - name: suggestmeabook num_bytes: 307022597 num_examples: 300601 - name: AskHistorians num_bytes: 586939915 num_examples: 592242 - name: gaming num_bytes: 7306865977 num_examples: 6418305 - name: DIY num_bytes: 612049815 num_examples: 505769 - name: mildlyinteresting num_bytes: 1497282377 num_examples: 1971187 - name: sports num_bytes: 866461524 num_examples: 783890 - name: space num_bytes: 413125181 num_examples: 415629 - name: gadgets num_bytes: 242359652 num_examples: 284487 - name: Documentaries num_bytes: 658519015 num_examples: 300935 - name: GetMotivated num_bytes: 458864553 num_examples: 395894 - name: UpliftingNews num_bytes: 294091853 num_examples: 285339 - name: technology num_bytes: 1562501874 num_examples: 2112572 - name: Fitness num_bytes: 939461866 num_examples: 1035109 - name: travel num_bytes: 988622317 num_examples: 1012452 - name: lifehacks num_bytes: 124628404 num_examples: 116871 - name: Damnthatsinteresting num_bytes: 536680874 num_examples: 397143 - name: gardening num_bytes: 652169745 num_examples: 723267 - name: programming num_bytes: 455470198 num_examples: 571221 download_size: 15928530968 dataset_size: 49105493092 annotations_creators: - no-annotation language: - en language_creators: - machine-generated license: [] multilinguality: - monolingual pretty_name: Reddit submissions size_categories: - 1B<n<10B source_datasets: [] tags: - reddit - social-media task_categories: - text-generation task_ids: - dialogue-modeling - language-modeling --- # Dataset Card for "REDDIT_submissions" ## Dataset Description - **Homepage:** - **Paper: https://arxiv.org/abs/2001.08435** ### Dataset Summary Submissions of 50 high-quality subreddits, extracted from the 
REDDIT PushShift data dumps (from 2006 to Jan 2023). ### Supported Tasks These submissions can be used for text generation and language modeling, as well as dialogue modeling. ## Dataset Structure ### Data Splits Each split corresponds to a specific subreddit in the following list: "tifu", "explainlikeimfive", "WritingPrompts", "changemyview", "LifeProTips", "todayilearned", "science", "askscience", "ifyoulikeblank", "Foodforthought", "IWantToLearn", "bestof", "IAmA", "socialskills", "relationship_advice", "philosophy", "YouShouldKnow", "history", "books", "Showerthoughts", "personalfinance", "buildapc", "EatCheapAndHealthy", "boardgames", "malefashionadvice", "femalefashionadvice", "scifi", "Fantasy", "Games", "bodyweightfitness", "SkincareAddiction", "podcasts", "suggestmeabook", "AskHistorians", "gaming", "DIY", "mildlyinteresting", "sports", "space", "gadgets", "Documentaries", "GetMotivated", "UpliftingNews", "technology", "Fitness", "travel", "lifehacks", "Damnthatsinteresting", "gardening", "programming" ## Dataset Creation ### Curation Rationale All the information fields have been cast to string, as their format change through time from one dump to the following. 
A reduced number of keys have been kept: "allow_live_comments", "archived", "author", "author_fullname", "banned_by", "category", "content_categories", "contest_mode", "created_utc", "discussion_type", "distinguished", "domain", "edited", "gilded", "hidden", "hide_score", "id", "is_created_from_ads_ui", "is_crosspostable", "is_meta", "is_original_content", "is_reddit_media_domain", "is_robot_indexable", "is_self", "is_video", "locked", "media", "media_embed", "media_only", "name", "no_follow", "num_comments", "num_crossposts", "over_18", "parent_whitelist_status", "permalink", "pinned", "post_hint", "pwls", "quarantine", "removed_by", "removed_by_category", "retrieved_on", "score", "secure_media", "secure_media_embed", "selftext", "send_replies", "spoiler", "stickied", "subreddit", "subreddit_id", "subreddit_name_prefixed", "subreddit_subscribers", "subreddit_type", "suggested_sort", "title", "top_awarded_type", "total_awards_received", "treatment_tags", "upvote_ratio", "url", "url_overridden_by_dest", "view_count", "whitelist_status", "wls". ### Source Data The [Reddit PushShift data dumps](https://files.pushshift.io/reddit/) are part of a data collection effort which crawls Reddit at regular intervals, to extract and keep all its data. #### Initial Data Collection and Normalization See the paper. #### Who are the source language producers? Redditors are mostly young (65% below 30), male (70%), and American (50% of the site). ### Personal and Sensitive Information The data contains Redditors' usernames associated with their content. ## Considerations for Using the Data This dataset should be anonymized before any processing. Though the subreddits selected are considered to be of higher quality, they can still reflect what you can find on the internet in terms of expressions of biases and toxicity. ### Contributions Thanks to [@clefourrier](https://github.com/clefourrier) for adding this dataset.
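A minimal sketch of the recommended anonymization step: replacing each Redditor's username with a salted, irreversible pseudonym before any processing. The salt value and the choice of fields to pseudonymize are illustrative assumptions, not part of the dataset's documented pipeline:

```python
import hashlib

SALT = b"replace-with-a-secret-salt"  # assumed; keep private and out of version control

def pseudonymize(username: str) -> str:
    # Salted SHA-256 digest, truncated for readability; stable per username.
    return hashlib.sha256(SALT + username.encode("utf-8")).hexdigest()[:16]

def anonymize_row(row: dict) -> dict:
    # Pseudonymize the username fields present in this dataset's schema.
    out = dict(row)
    for key in ("author", "author_fullname"):
        if out.get(key):
            out[key] = pseudonymize(out[key])
    return out
```

The same username always maps to the same pseudonym, so author-level statistics remain possible without exposing identities.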
10,020
[ [ -0.048309326171875, -0.058990478515625, 0.02056884765625, 0.0147705078125, -0.021636962890625, 0.016571044921875, -0.022247314453125, -0.006191253662109375, 0.0450439453125, 0.030914306640625, -0.062225341796875, -0.064697265625, -0.054168701171875, 0.026519...
Multimodal-Fatima/Birdsnap_train
2023-03-22T02:00:53.000Z
[ "region:us" ]
Multimodal-Fatima
null
null
0
10
2023-03-22T01:56:50
Entry not found
15
[ [ -0.02142333984375, -0.01495361328125, 0.05718994140625, 0.0288238525390625, -0.035064697265625, 0.046539306640625, 0.052520751953125, 0.005062103271484375, 0.0513916015625, 0.016998291015625, -0.052093505859375, -0.014984130859375, -0.060394287109375, 0.0379...
open-source-metrics/pip-external
2023-10-26T12:04:20.000Z
[ "region:us" ]
open-source-metrics
null
null
0
10
2023-03-24T14:32:07
--- dataset_info: features: - name: day dtype: string - name: num_downloads dtype: int64 splits: - name: pytorch num_bytes: 33132 num_examples: 1506 - name: tensorflow num_bytes: 33132 num_examples: 1506 - name: langchain num_bytes: 7678 num_examples: 349 download_size: 43902 dataset_size: 73942 configs: - config_name: default data_files: - split: langchain path: data/langchain-* - split: pytorch path: data/pytorch-* - split: tensorflow path: data/tensorflow-* --- # Dataset Card for "pip-external" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
705
[ [ -0.03912353515625, 0.0050506591796875, 0.0034770965576171875, 0.0228118896484375, -0.00959014892578125, -0.0163421630859375, 0.026824951171875, -0.011474609375, 0.0538330078125, 0.024932861328125, -0.062103271484375, -0.0289459228515625, -0.044921875, -0.012...
HuggingFaceH4/self_instruct
2023-03-27T22:03:01.000Z
[ "task_categories:text-generation", "license:apache-2.0", "region:us" ]
HuggingFaceH4
null
null
3
10
2023-03-27T21:58:57
--- license: apache-2.0 task_categories: - text-generation --- This dataset splits the original [Self-instruct dataset](https://huggingface.co/datasets/yizhongw/self_instruct) into training (90%) and test (10%).
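A 90/10 split like the one described can be reproduced along these lines. This is a sketch with an assumed seed and pure-Python shuffling, not the exact procedure used for this dataset:

```python
import random

def split_90_10(examples, seed=0):
    # Shuffle indices deterministically, then cut at the 90% mark.
    idx = list(range(len(examples)))
    random.Random(seed).shuffle(idx)
    cut = int(0.9 * len(idx))
    train = [examples[i] for i in idx[:cut]]
    test = [examples[i] for i in idx[cut:]]
    return train, test

train, test = split_90_10([{"id": i} for i in range(100)])
```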
211
[ [ -0.03338623046875, -0.03643798828125, -0.0079498291015625, 0.01531219482421875, 0.00565338134765625, -0.0172119140625, 0.01488494873046875, -0.009185791015625, 0.039886474609375, 0.052154541015625, -0.07574462890625, -0.01120758056640625, -0.01241302490234375, ...
gbharti/finance-alpaca-csv
2023-03-29T04:14:52.000Z
[ "region:us" ]
gbharti
null
null
7
10
2023-03-29T04:14:17
Entry not found
15
[ [ -0.02142333984375, -0.01495361328125, 0.05718994140625, 0.0288238525390625, -0.035064697265625, 0.046539306640625, 0.052520751953125, 0.005062103271484375, 0.0513916015625, 0.016998291015625, -0.052093505859375, -0.014984130859375, -0.060394287109375, 0.0379...
Babypotatotang/logo-captioning-BLIP-BrandInfoWBP
2023-04-04T06:23:31.000Z
[ "region:us" ]
Babypotatotang
null
null
1
10
2023-04-04T05:03:29
--- dataset_info: features: - name: image dtype: image - name: text dtype: string splits: - name: train num_bytes: 321581037.08 num_examples: 24080 - name: test num_bytes: 82453208.54 num_examples: 6021 download_size: 265975818 dataset_size: 404034245.62 --- # Dataset Card for "logo-captioning-BLIP-BrandInfoWBP" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
486
[ [ -0.031982421875, 0.00519561767578125, -0.0129852294921875, 0.031951904296875, -0.0230712890625, 0.03558349609375, 0.0082550048828125, -0.02947998046875, 0.06097412109375, 0.0309295654296875, -0.054107666015625, -0.04522705078125, -0.048736572265625, -0.00608...
mstz/mammography
2023-04-16T17:34:26.000Z
[ "task_categories:tabular-classification", "size_categories:n<1K", "language:en", "license:cc", "mammography", "tabular_classification", "binary_classification", "UCI", "region:us" ]
mstz
null
@misc{misc_mammographic_mass_161, author = {Elter,Matthias}, title = {{Mammographic Mass}}, year = {2007}, howpublished = {UCI Machine Learning Repository}, note = {{DOI}: \\url{10.24432/C53K6Z}} }
1
10
2023-04-06T14:54:30
--- language: - en tags: - mammography - tabular_classification - binary_classification - UCI pretty_name: Mammography size_categories: - n<1K task_categories: - tabular-classification configs: - mammography license: cc --- # Mammography The [Mammography dataset](https://archive.ics.uci.edu/ml/datasets/Mammography) from the [UCI ML repository](https://archive.ics.uci.edu/ml/datasets). # Configurations and tasks | **Configuration** | **Task** | **Description** | |-------------------|---------------------------|------------------------| | mammography | Binary classification | Is the lesion benign? | # Usage ```python from datasets import load_dataset dataset = load_dataset("mstz/mammography")["train"] ```
751
[ [ -0.0228729248046875, -0.031494140625, 0.034088134765625, -0.01212310791015625, -0.044097900390625, -0.0357666015625, 0.0416259765625, 0.0006208419799804688, 0.01043701171875, 0.0404052734375, -0.031524658203125, -0.0587158203125, -0.07049560546875, 0.0062255...
mstz/titanic
2023-04-09T23:30:09.000Z
[ "task_categories:tabular-classification", "size_categories:n<1K", "language:en", "license:cc", "titanic", "tabular_classification", "binary_classification", "region:us" ]
mstz
null
null
0
10
2023-04-07T09:15:56
--- language: - en tags: - titanic - tabular_classification - binary_classification pretty_name: Titanic size_categories: - n<1K task_categories: - tabular-classification configs: - survival license: cc --- # Titanic The [Titanic dataset](https://www.kaggle.com/datasets/vinicius150987/titanic3) from [Kaggle](https://www.kaggle.com/). # Configurations and tasks | **Configuration** | **Task** | **Description** | |-------------------|---------------------------|----------------------------| | survival | Binary classification | Has the passenger survived?| # Usage ```python from datasets import load_dataset dataset = load_dataset("mstz/titanic")["train"] ```
707
[ [ -0.0204925537109375, -0.0278472900390625, 0.01215362548828125, 0.035552978515625, -0.0247802734375, -0.0002942085266113281, 0.0194854736328125, 0.01027679443359375, 0.0169219970703125, 0.036590576171875, -0.038177490234375, -0.020843505859375, -0.03857421875, ...
grosenthal/latin_english_parallel
2023-04-28T02:11:31.000Z
[ "task_categories:translation", "size_categories:10K<n<100K", "language:la", "language:en", "license:mit", "region:us" ]
grosenthal
null
null
3
10
2023-04-07T21:09:52
--- dataset_info: features: - name: id dtype: int64 - name: la dtype: string - name: en dtype: string - name: file dtype: string splits: - name: train num_bytes: 39252644 num_examples: 99343 - name: test num_bytes: 405056 num_examples: 1014 - name: valid num_bytes: 392886 num_examples: 1014 download_size: 25567350 dataset_size: 40050586 license: mit task_categories: - translation language: - la - en pretty_name: Latin to English Translation Pairs size_categories: - 10K<n<100K --- # Dataset Card for "latin_english_parallel" 101k translation pairs between Latin and English, split 99/1/1 as train/test/val. These have been collected roughly 66% from the Loeb Classical Library and 34% from the Vulgate translation. For those that were gathered from the Loeb Classical Library, alignment was performed manually between Source and Target sequences. Additionally, the English translations were both 1. copyrighted and 2. outdated. As such, we decided to modernize and transform them into ones that could be used in the public domain, as the original Latin is not copyrighted. To perform this, we used the gpt3.5-turbo model on OpenAI with the prompt `Translate an old dataset from the 1800s to modern English while preserving the original meaning and exact same sentence structure. Retain extended adjectives, dependent clauses, and punctuation. Output the translation preceded by the text "Modern Translation: ". If a given translation is not a complete sentence, repeat the input sentence. \n'` followed by the source English. We then manually corrected all outputs that did not conform to the standard. Each sample is annotated with the index and file (and therefore author/work) that the sample is from. If you find errors, please feel free to submit a PR to fix them. ![alt text](distribution.png)
1,872
[ [ -0.01378631591796875, -0.03228759765625, 0.0226593017578125, 0.0224761962890625, -0.036468505859375, -0.0242156982421875, -0.0306854248046875, -0.037261962890625, 0.036407470703125, 0.046661376953125, -0.0231781005859375, -0.0443115234375, -0.0196075439453125, ...
0x7194633/spam_detector
2023-04-09T04:09:42.000Z
[ "task_categories:text-classification", "size_categories:1K<n<10K", "language:en", "license:apache-2.0", "region:us" ]
0x7194633
null
null
0
10
2023-04-08T14:27:11
--- task_categories: - text-classification language: - en pretty_name: Spam Detector size_categories: - 1K<n<10K license: apache-2.0 --- # Dataset Card for Dataset Name ## Dataset Description - **Homepage:** - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1). ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions [More Information Needed]
1,675
[ [ -0.038177490234375, -0.02984619140625, -0.0036067962646484375, 0.027130126953125, -0.0323486328125, 0.0037822723388671875, -0.01727294921875, -0.02020263671875, 0.049041748046875, 0.04046630859375, -0.0634765625, -0.08062744140625, -0.052947998046875, 0.0020...
climatebert/climate_specificity
2023-04-18T16:02:48.000Z
[ "task_categories:text-classification", "annotations_creators:expert-generated", "language_creators:found", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:original", "language:en", "license:cc-by-nc-sa-4.0", "region:us" ]
climatebert
null
null
1
10
2023-04-11T13:12:11
--- annotations_creators: - expert-generated language_creators: - found language: - en license: cc-by-nc-sa-4.0 multilinguality: - monolingual size_categories: - 1K<n<10K source_datasets: - original task_categories: - text-classification task_ids: [] pretty_name: ClimateSpecificity dataset_info: features: - name: text dtype: string - name: label dtype: class_label: names: '0': non-specific '1': specific splits: - name: train num_bytes: 492077 num_examples: 1000 - name: test num_bytes: 174265 num_examples: 320 download_size: 373454 dataset_size: 666342 --- # Dataset Card for climate_specificity ## Dataset Description - **Homepage:** [climatebert.ai](https://climatebert.ai) - **Repository:** - **Paper:** [papers.ssrn.com/sol3/papers.cfm?abstract_id=3998435](https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3998435) - **Leaderboard:** - **Point of Contact:** [Nicolas Webersinke](mailto:nicolas.webersinke@fau.de) ### Dataset Summary We introduce an expert-annotated dataset for classifying the climate-related specificity of climate-related paragraphs in corporate disclosures. ### Supported Tasks and Leaderboards The dataset supports a binary classification task of whether a given climate-related paragraph is specific or not. ### Languages The text in the dataset is in English. ## Dataset Structure ### Data Instances ``` { 'text': '− Scope 3: Optional scope that includes indirect emissions associated with the goods and services supply chain produced outside the organization. 
Included are emissions from the transport of products from our logistics centres to stores (downstream) performed by external logistics operators (air, land and sea transport) as well as the emissions associated with electricity consumption in franchise stores.', 'label': 1 } ``` ### Data Fields - text: a climate-related paragraph extracted from corporate annual reports and sustainability reports - label: the label (0 -> non-specific, 1 -> specific) ### Data Splits The dataset is split into: - train: 1,000 - test: 320 ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization Our dataset contains climate-related paragraphs extracted from financial disclosures by firms. We collect text from corporate annual reports and sustainability reports. For more information regarding our sample selection, please refer to the Appendix of our paper (see [citation](#citation-information)). #### Who are the source language producers? Mainly large listed companies. ### Annotations #### Annotation process For more information on our annotation process and annotation guidelines, please refer to the Appendix of our paper (see [citation](#citation-information)). #### Who are the annotators? The authors and students at Universität Zürich and Friedrich-Alexander-Universität Erlangen-Nürnberg with majors in finance and sustainable finance. ### Personal and Sensitive Information Since our text sources contain public information, no personal and sensitive information should be included. 
## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators - Julia Anna Bingler - Mathias Kraus - Markus Leippold - Nicolas Webersinke ### Licensing Information This dataset is licensed under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International license (cc-by-nc-sa-4.0). To view a copy of this license, visit [creativecommons.org/licenses/by-nc-sa/4.0](https://creativecommons.org/licenses/by-nc-sa/4.0/). If you are interested in commercial use of the dataset, please contact [markus.leippold@bf.uzh.ch](mailto:markus.leippold@bf.uzh.ch). ### Citation Information ```bibtex @techreport{bingler2023cheaptalk, title={How Cheap Talk in Climate Disclosures Relates to Climate Initiatives, Corporate Emissions, and Reputation Risk}, author={Bingler, Julia and Kraus, Mathias and Leippold, Markus and Webersinke, Nicolas}, type={Working paper}, institution={Available at SSRN 3998435}, year={2023} } ``` ### Contributions Thanks to [@webersni](https://github.com/webersni) for adding this dataset.
4,403
[ [ -0.0213775634765625, -0.0265045166015625, 0.01398468017578125, 0.0121612548828125, -0.027069091796875, -0.00958251953125, -0.0220184326171875, -0.0445556640625, 0.0277557373046875, 0.0311279296875, -0.03564453125, -0.059661865234375, -0.035736083984375, 0.00...
crangana/railroad-fault-detection
2023-04-12T10:07:13.000Z
[ "region:us" ]
crangana
null
null
0
10
2023-04-12T08:18:06
Entry not found
15
[ [ -0.02142333984375, -0.01495361328125, 0.05718994140625, 0.0288238525390625, -0.035064697265625, 0.046539306640625, 0.052520751953125, 0.005062103271484375, 0.0513916015625, 0.016998291015625, -0.052093505859375, -0.014984130859375, -0.060394287109375, 0.0379...
mstz/hayes_roth
2023-04-16T17:30:45.000Z
[ "task_categories:tabular-classification", "size_categories:n<1K", "language:en", "license:cc", "hayes", "tabular_classification", "binary_classification", "multiclass_classification", "UCI", "region:us" ]
mstz
null
@misc{misc_hayes_efficiency_242, author = {Tsanas,Athanasios & Xifara,Angeliki}, title = {{Hayes efficiency}}, year = {2012}, howpublished = {UCI Machine Learning Repository}, note = {{DOI}: \\url{10.24432/C51307}} }
0
10
2023-04-12T10:24:15
--- language: - en tags: - hayes - tabular_classification - binary_classification - multiclass_classification - UCI pretty_name: Hayes evaluation size_categories: - n<1K task_categories: - tabular-classification configs: - hayes - hayes_1 - hayes_2 - hayes_3 license: cc --- # Hayes The [Hayes-Roth dataset](https://archive-beta.ics.uci.edu/dataset/44/hayes+roth) from the [UCI repository](https://archive-beta.ics.uci.edu). # Configurations and tasks | **Configuration** | **Task** | **Description** | |-------------------|---------------------------|--------------------------------| | hayes | Multiclass classification | Classify hayes type. | | hayes_1 | Binary classification | Is this instance of class 1? | | hayes_2 | Binary classification | Is this instance of class 2? | | hayes_3 | Binary classification | Is this instance of class 3? | # Usage ```python from datasets import load_dataset dataset = load_dataset("mstz/hayes", "hayes")["train"] ```
1,065
[ [ -0.023468017578125, -0.008514404296875, 0.01477813720703125, 0.009674072265625, -0.0013780593872070312, -0.00689697265625, 0.0004267692565917969, -0.0136871337890625, 0.018951416015625, 0.01959228515625, -0.043182373046875, -0.043304443359375, -0.039093017578125...
seanghay/kmcs
2023-05-03T04:38:54.000Z
[ "license:apache-2.0", "region:us" ]
seanghay
null
null
0
10
2023-04-17T03:14:06
--- license: apache-2.0 dataset_info: features: - name: audio dtype: audio - name: transcription dtype: string splits: - name: train num_bytes: 1226373371.915 num_examples: 5565 download_size: 1064307923 dataset_size: 1226373371.915 --- # ⚠️ Migration Notice Moved to [seanghay/km-speech-corpus](https://huggingface.co/datasets/seanghay/km-speech-corpus) ## Khmer Common Speech 1.0 This dataset contains 5,565 samples of Khmer speech downloaded from public YouTube videos. 4.83 hours in total. This dataset was made by this project: https://github.com/seanghay/subtitle-demuxer ## References - [Chanty Sothy](https://github.com/chantysothy) - the initial idea and YouTube links with Khmer subtitles.
736
[ [ -0.0224609375, -0.02545166015625, 0.0222015380859375, 0.0170135498046875, -0.044586181640625, 0.00638580322265625, 0.000012040138244628906, -0.0006976127624511719, 0.0372314453125, 0.066650390625, -0.0546875, -0.046905517578125, -0.05108642578125, 0.00991058...
tasksource/disrpt
2023-10-19T07:35:31.000Z
[ "language:en", "license:apache-2.0", "region:us" ]
tasksource
null
null
1
10
2023-04-18T07:36:18
--- license: apache-2.0 language: - en --- https://github.com/disrpt/sharedtask2023 scditb: ``` @inproceedings{yang-li-2018-scidtb, title = "{S}ci{DTB}: Discourse Dependency {T}ree{B}ank for Scientific Abstracts", author = "Yang, An and Li, Sujian", booktitle = "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)", month = jul, year = "2018", address = "Melbourne, Australia", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/P18-2071", doi = "10.18653/v1/P18-2071", pages = "444--449", abstract = "Annotation corpus for discourse relations benefits NLP tasks such as machine translation and question answering. In this paper, we present SciDTB, a domain-specific discourse treebank annotated on scientific articles. Different from widely-used RST-DT and PDTB, SciDTB uses dependency trees to represent discourse structure, which is flexible and simplified to some extent but do not sacrifice structural integrity. We discuss the labeling framework, annotation workflow and some statistics about SciDTB. Furthermore, our treebank is made as a benchmark for evaluating discourse dependency parsers, on which we provide several baselines as fundamental work.", } ```
1,319
[ [ -0.01287841796875, -0.037384033203125, 0.037506103515625, 0.0411376953125, -0.0193939208984375, 0.032379150390625, -0.000017940998077392578, -0.046356201171875, 0.047027587890625, 0.00839996337890625, -0.01251220703125, -0.038909912109375, -0.0445556640625, ...
joey234/mmlu-high_school_chemistry-neg
2023-04-20T05:50:33.000Z
[ "region:us" ]
joey234
null
null
2
10
2023-04-20T05:50:29
--- dataset_info: features: - name: choices sequence: string - name: answer dtype: class_label: names: '0': A '1': B '2': C '3': D - name: question dtype: string splits: - name: test num_bytes: 53898 num_examples: 203 download_size: 30994 dataset_size: 53898 --- # Dataset Card for "mmlu-high_school_chemistry-neg" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
537
[ [ -0.03533935546875, -0.033050537109375, 0.0225830078125, -0.00925445556640625, 0.01479339599609375, 0.01377105712890625, 0.021636962890625, 0.01885986328125, 0.06182861328125, 0.015655517578125, -0.066650390625, -0.06890869140625, -0.04296875, -0.005500793457...
iamketan25/oig-instructions-dataset
2023-04-26T07:41:47.000Z
[ "region:us" ]
iamketan25
null
null
0
10
2023-04-26T07:39:42
Entry not found
15
[ [ -0.02142333984375, -0.01495361328125, 0.05718994140625, 0.0288238525390625, -0.035064697265625, 0.046539306640625, 0.052520751953125, 0.005062103271484375, 0.0513916015625, 0.016998291015625, -0.052093505859375, -0.014984130859375, -0.060394287109375, 0.0379...
AlekseyKorshuk/oasst1-chatml
2023-06-05T22:04:39.000Z
[ "region:us" ]
AlekseyKorshuk
null
null
1
10
2023-04-27T01:49:56
--- dataset_info: features: - name: conversation list: - name: content dtype: string - name: do_train dtype: bool - name: role dtype: string splits: - name: train num_bytes: 6948001 num_examples: 3670 download_size: 3661524 dataset_size: 6948001 --- # Dataset Card for "oasst1-chatml" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
471
[ [ -0.029876708984375, -0.032257080078125, 0.0121612548828125, 0.00507354736328125, -0.005680084228515625, -0.000553131103515625, 0.016845703125, -0.00970458984375, 0.047027587890625, 0.036712646484375, -0.06317138671875, -0.058746337890625, -0.045166015625, -0...
0x22almostEvil/multilingual-wikihow-qa-16k
2023-05-13T16:59:15.000Z
[ "task_categories:question-answering", "size_categories:10K<n<100K", "language:en", "language:ru", "language:pt", "language:it", "language:es", "language:fr", "language:de", "language:nl", "license:cc-by-nc-3.0", "wikihow", "QnA", "region:us" ]
0x22almostEvil
null
null
7
10
2023-04-29T03:37:09
--- license: cc-by-nc-3.0 task_categories: - question-answering language: - en - ru - pt - it - es - fr - de - nl pretty_name: multilingual-wikihow-qa-16k size_categories: - 10K<n<100K tags: - wikihow - QnA dataset_info: features: - name: INSTRUCTION dtype: string - name: RESPONSE dtype: string - name: SOURCE dtype: string - name: METADATA dtype: string splits: - name: train num_bytes: 144407512 num_examples: 16822 download_size: 76391535 dataset_size: 144407512 --- # Dataset Card for multilingual WikiHow with ~16.8K entries. ~(2-2.2)K for each language. ### Warning [1] The WikiHow team contacted me and made it clear that **they forbid the use of their data for machine learning purposes**. However, I am not calling for anything, and this dataset only shows the concept, and I strongly advise against violating their ToS. However, consultation with lawyers made it clear that **dataset can be used for such purposes** if the project has **research purposes**. ### Warning [2] Source code is kinda **very** bad, and I'm lazy to fix it. ### Dataset Summary Contains Parquet of a list of instructions and WikiHow articles in different languages. Each row consists of * INSTRUCTION * RESPONSE * SOURCE (*.wikihow.com) * METADATA (json with url and language). ### Licensing Information Data is from WikiHow, license for content is located here: https://www.wikihow.com/wikiHow:Creative-Commons ### Acknowledgements This helped me a lot! https://github.com/HelloChatterbox/PyWikiHow; https://pypi.org/project/pywikihow/
1,579
[ [ -0.01751708984375, -0.0284423828125, -0.0120086669921875, 0.0254669189453125, -0.0400390625, -0.023284912109375, -0.0340576171875, -0.012664794921875, 0.005588531494140625, 0.036468505859375, -0.0406494140625, -0.04931640625, -0.04229736328125, 0.01338195800...
seanghay/khmer-speech-large
2023-04-30T05:11:07.000Z
[ "region:us" ]
seanghay
null
null
0
10
2023-04-30T04:59:37
--- dataset_info: features: - name: audio dtype: audio: sampling_rate: 16000 - name: sentence dtype: string splits: - name: train num_bytes: 5686102163.1 num_examples: 19850 - name: test num_bytes: 726356614.0 num_examples: 771 download_size: 6074861609 dataset_size: 6412458777.1 --- # Dataset Card for "khmer-speech-large" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
511
[ [ -0.034759521484375, -0.0289154052734375, 0.01007843017578125, 0.0312347412109375, -0.033294677734375, 0.0037708282470703125, -0.0214691162109375, -0.01410675048828125, 0.05706787109375, 0.05267333984375, -0.03631591796875, -0.05975341796875, -0.0556640625, -...
philschmid/chip2_oasst1_en_code
2023-05-01T12:31:13.000Z
[ "region:us" ]
philschmid
null
null
0
10
2023-05-01T12:31:10
--- dataset_info: features: - name: messages list: - name: content dtype: string - name: role dtype: string splits: - name: train num_bytes: 4934232 num_examples: 4687 download_size: 1866641 dataset_size: 4934232 --- # Dataset Card for "chip2_oasst1_en_code" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
435
[ [ -0.023406982421875, -0.005542755126953125, 0.012481689453125, 0.00925445556640625, -0.02294921875, 0.00811767578125, 0.029571533203125, -0.017974853515625, 0.05084228515625, 0.0258636474609375, -0.049346923828125, -0.05230712890625, -0.038482666015625, -0.03...
TempoFunk/small
2023-05-10T03:37:12.000Z
[ "task_categories:text-to-video", "task_categories:text-to-image", "task_categories:video-classification", "task_categories:image-classification", "size_categories:1K<n<10K", "language:en", "license:agpl-3.0", "region:us" ]
TempoFunk
null
null
6
10
2023-05-01T20:17:08
--- task_categories: - text-to-video - text-to-image - video-classification - image-classification language: - en size_categories: - 1K<n<10K license: agpl-3.0 --- # TempoFunk Small 7.8k samples of metadata and encoded latents & prompts of random videos. ## Data format - Video frame latents - Numpy arrays - 120 frames, 512x512 source size - Encoded shape (120, 4, 64, 64) - CLIP (openai) encoded prompts - Video description (as seen in metadata) - Encoded shape (77,768) - Video metadata as JSON (description, tags, categories, source URL, etc.)
561
[ [ -0.017669677734375, -0.057281494140625, 0.02508544921875, 0.030609130859375, -0.03466796875, -0.0197601318359375, 0.004573822021484375, 0.0380859375, 0.0440673828125, 0.0279693603515625, -0.0545654296875, -0.01132965087890625, -0.040679931640625, -0.01108551...
durhamvin/followup_questions_dataset_paired
2023-05-18T13:31:20.000Z
[ "region:us" ]
durhamvin
null
null
0
10
2023-05-02T11:31:37
--- dataset_info: features: - name: question dtype: string - name: response_j dtype: string - name: response_k dtype: string splits: - name: train num_bytes: 89911000 num_examples: 51359 - name: validation num_bytes: 5878662 num_examples: 3031 download_size: 25102863 dataset_size: 95789662 --- # Dataset Card for "followup_questions_dataset_paired" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
528
[ [ -0.03399658203125, -0.01105499267578125, 0.0144500732421875, 0.0184173583984375, -0.007122039794921875, -0.008270263671875, 0.013946533203125, -0.0012979507446289062, 0.055877685546875, 0.048095703125, -0.0716552734375, -0.04315185546875, -0.019439697265625, ...
fiveflow/cot_ranking
2023-05-03T10:05:32.000Z
[ "region:us" ]
fiveflow
null
null
4
10
2023-05-03T10:04:34
--- dataset_info: features: - name: question dtype: string - name: response_j dtype: string - name: response_k dtype: string splits: - name: train num_bytes: 64266082 num_examples: 67830 - name: test num_bytes: 3323500 num_examples: 3570 download_size: 408618 dataset_size: 67589582 --- # Dataset Card for "cot_ranking" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
498
[ [ -0.048583984375, -0.01468658447265625, 0.0215301513671875, 0.0196075439453125, -0.01424407958984375, 0.0189971923828125, 0.005153656005859375, -0.01171112060546875, 0.04638671875, 0.033355712890625, -0.03192138671875, -0.0758056640625, -0.0621337890625, -0.0...
wangrongsheng/cMedQA-merged
2023-05-04T11:03:15.000Z
[ "region:us" ]
wangrongsheng
null
null
3
10
2023-05-04T10:54:36
Entry not found
15
[ [ -0.0214080810546875, -0.01494598388671875, 0.057159423828125, 0.028839111328125, -0.0350341796875, 0.04656982421875, 0.052490234375, 0.00504302978515625, 0.0513916015625, 0.016998291015625, -0.0521240234375, -0.0149993896484375, -0.06036376953125, 0.03790283...
HAERAE-HUB/KoInstruct-Base
2023-05-05T13:28:52.000Z
[ "region:us" ]
HAERAE-HUB
null
null
1
10
2023-05-05T11:28:26
--- dataset_info: features: - name: instruction dtype: string - name: input dtype: string - name: output dtype: string - name: type dtype: string - name: template dtype: string splits: - name: train num_bytes: 279249821 num_examples: 50169 download_size: 128982824 dataset_size: 279249821 --- # Dataset Card for "ko_instruct_org_v0.1" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
515
[ [ -0.035369873046875, -0.00628662109375, 0.0205230712890625, 0.01018524169921875, -0.0223236083984375, -0.0069427490234375, 0.0291748046875, -0.0084686279296875, 0.050537109375, 0.0439453125, -0.060028076171875, -0.058013916015625, -0.0309906005859375, -0.0166...
danielv835/personal_finance_v0.2
2023-05-13T21:06:35.000Z
[ "region:us" ]
danielv835
null
null
11
10
2023-05-13T21:06:30
--- dataset_info: features: - name: context dtype: string - name: chosen dtype: string - name: rejected dtype: string splits: - name: train num_bytes: 105692600 num_examples: 56557 - name: test num_bytes: 1825911 num_examples: 1000 download_size: 64159306 dataset_size: 107518511 --- # Dataset Card for "personal_finance_v0.2" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
505
[ [ -0.0318603515625, -0.01378631591796875, 0.00901031494140625, 0.0285797119140625, -0.01027679443359375, -0.00019490718841552734, 0.018951416015625, -0.00908660888671875, 0.04931640625, 0.044403076171875, -0.0506591796875, -0.045867919921875, -0.03253173828125, ...