# Dataset Card for ZINC
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [External Use](#external-use)
- [PyGeometric](#pygeometric)
- [Dataset Structure](#dataset-structure)
- [Data Properties](#data-properties)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)

## Dataset Description
- **[Homepage](https://zinc15.docking.org/)**
- **[Repository](https://www.dropbox.com/s/feo9qle74kg48gy/molecules.zip?dl=1)**
- **Paper:** ZINC 15 – Ligand Discovery for Everyone (see citation)
- **Leaderboard:** [Papers with code leaderboard](https://paperswithcode.com/sota/)

### Dataset Summary
The `ZINC` dataset is a "curated collection of commercially available chemical compounds prepared especially for virtual screening" (Wikipedia).

### Supported Tasks and Leaderboards
`ZINC` should be used for molecular property prediction (predicting the constrained solubility of the molecules), a graph regression task. The evaluation metric is the mean absolute error (MAE). The associated leaderboard is here: [Papers with code leaderboard](https://paperswithcode.com/sota/graph-regression-on-zinc).

## External Use
### PyGeometric
To load the data in PyGeometric, convert each row into a `Data` object:

```python
import torch
from datasets import load_dataset
from torch_geometric.data import Data
from torch_geometric.loader import DataLoader

dataset_hf = load_dataset("graphs-datasets/<mydataset>")
# For the train set (replace by valid or test as needed)
dataset_pg_list = [
    Data(x=torch.tensor(g["node_feat"]), edge_index=torch.tensor(g["edge_index"]),
         edge_attr=torch.tensor(g["edge_attr"]), y=torch.tensor(g["y"]))
    for g in dataset_hf["train"]
]
dataset_pg = DataLoader(dataset_pg_list)
```

## Dataset Structure
### Data Properties
| property | value |
|---|---|
| scale | big |
| #graphs | 220011 |
| average #nodes | 23.15 |
| average #edges | 49.81 |

### Data Fields
Each row of a given file is a graph, with:
- `node_feat` (list: #nodes x #node-features): nodes
- `edge_index` (list: 2 x #edges): pairs of nodes constituting edges
- `edge_attr` (list: #edges x #edge-features): features of the aforementioned edges
- `y` (list: 1 x #labels): the target to predict (here 1 value per graph: the constrained solubility)
- `num_nodes` (int): number of nodes of the graph

### Data Splits
This data comes from the PyGeometric version of the dataset, and follows the provided data splits. The splits can be reproduced with:
```python
from torch_geometric.datasets import ZINC

dataset = ZINC(root="", split="train")  # valid, test
```

## Additional Information
### Licensing Information
The dataset has been released under an unknown license. Please open an issue if you know the license of this dataset.

### Citation Information
```bibtex
@article{doi:10.1021/acs.jcim.5b00559,
  author = {Sterling, Teague and Irwin, John J.},
  title = {ZINC 15 – Ligand Discovery for Everyone},
  journal = {Journal of Chemical Information and Modeling},
  volume = {55},
  number = {11},
  pages = {2324-2337},
  year = {2015},
  doi = {10.1021/acs.jcim.5b00559},
  note = {PMID: 26479676},
  URL = {https://doi.org/10.1021/acs.jcim.5b00559},
  eprint = {https://doi.org/10.1021/acs.jcim.5b00559}
}
```

### Contributions
Thanks to [@clefourrier](https://github.com/clefourrier) for adding this dataset.
# laion2B-multi-korean-subset

## Dataset Description
- **Homepage:** [laion-5b](https://laion.ai/blog/laion-5b/)
- **Huggingface:** [laion/laion2B-multi](https://huggingface.co/datasets/laion/laion2B-multi)

## About dataset
A subset of [laion/laion2B-multi](https://huggingface.co/datasets/laion/laion2B-multi), containing only the Korean data.

### License
CC-BY-4.0

## Data Structure

### Data Instance

```py
>>> from datasets import load_dataset
>>> dataset = load_dataset("Bingsu/laion2B-multi-korean-subset")
>>> dataset
DatasetDict({
    train: Dataset({
        features: ['SAMPLE_ID', 'URL', 'TEXT', 'HEIGHT', 'WIDTH', 'LICENSE', 'LANGUAGE', 'NSFW', 'similarity'],
        num_rows: 11376263
    })
})
```

```py
>>> dataset["train"].features
{'SAMPLE_ID': Value(dtype='int64', id=None),
 'URL': Value(dtype='string', id=None),
 'TEXT': Value(dtype='string', id=None),
 'HEIGHT': Value(dtype='int32', id=None),
 'WIDTH': Value(dtype='int32', id=None),
 'LICENSE': Value(dtype='string', id=None),
 'LANGUAGE': Value(dtype='string', id=None),
 'NSFW': Value(dtype='string', id=None),
 'similarity': Value(dtype='float32', id=None)}
```

### Data Size
download: 1.56 GiB<br>
generated: 2.37 GiB<br>
total: 3.93 GiB

### Data Field
- 'SAMPLE_ID': `int`
- 'URL': `string`
- 'TEXT': `string`
- 'HEIGHT': `int`
- 'WIDTH': `int`
- 'LICENSE': `string`
- 'LANGUAGE': `string`
- 'NSFW': `string`
- 'similarity': `float`

### Data Splits

|           | train    |
| --------- | -------- |
| # of data | 11376263 |

## Note

### Height, Width
It appears that the image's horizontal size is stored as `HEIGHT` and its vertical size as `WIDTH`.

```pycon
>>> dataset["train"][98]
{'SAMPLE_ID': 2937471001780,
 'URL': 'https://image.ajunews.com/content/image/2019/04/12/20190412175643597949.png',
 'TEXT': '인천시교육청, 인천 시군구발전협의회 임원진과의 간담회 개최',
 'HEIGHT': 640,
 'WIDTH': 321,
 'LICENSE': '?',
 'LANGUAGE': 'ko',
 'NSFW': 'UNLIKELY',
 'similarity': 0.33347243070602417}
```

![image](https://image.ajunews.com/content/image/2019/04/12/20190412175643597949.png)

### csv file, pandas

```py
# pip install zstandard
import pandas as pd
from huggingface_hub import hf_hub_url

url = hf_hub_url("Bingsu/laion2B-multi-korean-subset", filename="laion2B-multi-korean-subset.csv.zst", repo_type="dataset")
# url = "https://huggingface.co/datasets/Bingsu/laion2B-multi-korean-subset/resolve/main/laion2B-multi-korean-subset.csv.zst"

df = pd.read_csv(url)
```

<https://huggingface.co/datasets/Bingsu/laion2B-multi-korean-subset/resolve/main/laion2B-multi-korean-subset.csv.zst> 778 MB

### Code used to generate

```py
import csv
import re

from datasets import load_dataset
from tqdm import tqdm

pattern = re.compile(r"[가-힣]")


def quote(s: str) -> str:
    s = s.replace('"""', "")
    return s


def filter_func(example) -> bool:
    lang = example.get("LANGUAGE")
    text = example.get("TEXT")
    if not isinstance(lang, str) or not isinstance(text, str):
        return False
    return lang == "ko" or pattern.search(text) is not None


file = open("./laion2B-mulit_korean_subset.csv", "w", encoding="utf-8", newline="")

ds = load_dataset("laion/laion2B-multi", split="train", streaming=True)
dsf = ds.filter(filter_func)
header = [
    "SAMPLE_ID",
    "URL",
    "TEXT",
    "HEIGHT",
    "WIDTH",
    "LICENSE",
    "LANGUAGE",
    "NSFW",
    "similarity",
]
writer = csv.DictWriter(file, fieldnames=header)
writer.writeheader()

try:
    for data in tqdm(dsf):  # total=11378843
        data["TEXT"] = quote(data.get("TEXT", ""))
        if data["TEXT"]:
            writer.writerow(data)
finally:
    file.close()

print("Done!")
```

The run took about 8 hours. Afterwards, rows whose `HEIGHT` or `WIDTH` was None were removed before uploading.
### img2dataset
You can use [img2dataset](https://github.com/rom1504/img2dataset) to download the images behind the URLs and turn them into an image dataset.
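As a rough illustration, a minimal sketch of feeding the CSV produced above to img2dataset; the output folder, output format, and image size below are illustrative assumptions, not part of this dataset:

```py
from img2dataset import download

# Minimal sketch: download the images referenced in the URL column of the CSV above.
# "laion2b-ko-images", webdataset output and 256px resizing are illustrative choices.
download(
    url_list="laion2B-multi-korean-subset.csv",
    input_format="csv",
    url_col="URL",
    caption_col="TEXT",
    output_format="webdataset",
    output_folder="laion2b-ko-images",
    image_size=256,
)
```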
### Dataset Summary
Dataset of satirical news from "Panorama", the Russian equivalent of "The Onion".
### Dataset Format
The dataset is in JSON Lines format, where "title" is the article title and "body" is the content of the article.
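As a quick illustration, records with these two fields can be read line by line; a minimal sketch, where the file name `panorama.jsonl` is an assumed placeholder:

```py
import json

# Minimal sketch: iterate over a JSON Lines file with "title" and "body" fields.
# "panorama.jsonl" is an assumed placeholder for the actual data file.
with open("panorama.jsonl", encoding="utf-8") as f:
    for line in f:
        article = json.loads(line)
        print(article["title"])
        print(article["body"][:100], "...")
```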
EN-ME Special Chars is a dataset of roughly 58000 aligned sentence pairs in English and Middle English, collected from the works of Geoffrey Chaucer, John Wycliffe, and the Gawain Poet. It includes special characters such as þ. This dataset reflects the spelling inconsistencies characteristic of Middle English.
# Dataset Card for `reviews_with_drift`
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)

## Dataset Description
### Dataset Summary
This dataset was crafted to be used in our tutorial [Link to the tutorial when ready]. It consists of a large Movie Review Dataset mixed with some reviews from a Hotel Review Dataset. The training/validation sets are obtained purely from the Movie Review Dataset, while the production set is mixed. Some other features have been added (`age`, `gender`, `context`), as well as a made-up timestamp `prediction_ts` of when the inference took place.
### Supported Tasks and Leaderboards
`text-classification`, `sentiment-classification`: The dataset is mainly used for text classification: given the text, predict the sentiment (positive or negative).
### Languages
The text is mainly written in English.
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
[More Information Needed]
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
[More Information Needed]
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@fjcasti1](https://github.com/fjcasti1) for adding this dataset.
EN-ME Special Chars is a dataset of roughly 58000 aligned sentence pairs in English and Middle English, collected from the works of Geoffrey Chaucer, John Lydgate, John Wycliffe, and the Gawain Poet. It includes special characters such as þ. There is mild standardization, but this dataset reflects the spelling inconsistencies characteristic of Middle English.
Dataset with sentences regarding professions; half of the translations are feminine and half are masculine.

How to use it:
```
from datasets import load_dataset

remote_dataset = load_dataset("VanessaSchenkel/handmade-dataset", field="data")
remote_dataset
```
Output:
```
DatasetDict({
    train: Dataset({
        features: ['id', 'translation'],
        num_rows: 388
    })
})
```
Example:
```
remote_dataset["train"][5]
```
Output:
```
{'id': '5',
 'translation': {'english': 'the postman finished her work .',
  'portuguese': 'A carteira terminou seu trabalho .'}}
```
# Indonesian News Categorization ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary Indonews: Multiclass News Categorization scrapped popular news portals in Indonesia. ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions Thanks to [@andreaschandra](https://github.com/andreaschandra) for adding this dataset.
### Dataset Summary
KoPI-CC (Korpus Perayapan Indonesia - CC) is an Indonesian-only extract from Common Crawl snapshots, produced with [ungoliant](https://github.com/oscar-corpus/ungoliant). Each snapshot is also filtered with several deduplication techniques, such as exact-hash (MD5) deduplication and MinHash-LSH near-deduplication.

### Preprocessing
Each folder name inside the snapshots folder denotes the preprocessing technique that has been applied.

- **Raw**
  - processed directly from the CC snapshot using ungoliant, without any additional filter; you can read about it in their paper (citation below)
  - uses the same "raw CC snapshot" for `2021_10` and `2021_49`, which can be found in the OSCAR dataset ([2109](https://huggingface.co/datasets/oscar-corpus/OSCAR-2109/tree/main/packaged_nondedup/id) and [2201](https://huggingface.co/datasets/oscar-corpus/OSCAR-2201/tree/main/compressed/id_meta))
- **Dedup**
  - uses data from the raw folder
  - applies cleaning techniques to every text in the documents, such as:
    - fix HTML
    - remove noisy unicode
    - fix news tags
    - remove control characters
  - filters by removing short texts (fewer than 20 words)
  - filters by character ratios inside the text, such as:
    - min_alphabet_ratio (0.75)
    - max_upper_ratio (0.10)
    - max_number_ratio (0.05)
  - filters by exact deduplication:
    - hash every text with md5 (hashlib)
    - remove non-unique hashes
  - the full code for the dedup step is adapted from [here](https://huggingface.co/datasets/Finnish-NLP/mc4_fi_cleaned/tree/main)
- **Neardup** (a minimal sketch of this step is given after the citation section below)
  - uses data from the dedup folder
  - creates index clusters via near-deduplication with [MinHash and LSH](http://ekzhu.com/datasketch/lsh.html), using the following configuration:
    - 128 permutations
    - 6-gram size
    - word tokenization (sentences split by whitespace)
    - 0.8 as the similarity threshold
  - filters by removing all indices that belong to a cluster
  - the full code for the neardup step is adapted from [here](https://github.com/ChenghaoMou/text-dedup)
- **Neardup_clean**
  - uses data from the neardup folder
  - removes documents containing words from a selection of [Indonesian bad words](https://github.com/acul3/c4_id_processed/blob/67e10c086d43152788549ef05b7f09060e769993/clean/badwords_ennl.py#L64)
  - removes sentences containing:
    - fewer than 3 words
    - a word longer than 1000 characters
    - an end symbol not matching end-of-sentence punctuation
    - strings associated with JavaScript code (e.g. `{`), lorem ipsum, or Indonesian policy boilerplate
  - removes documents (after sentence filtering):
    - containing fewer than 5 sentences
    - containing fewer than 500 or more than 50,000 characters
  - the full code for the neardup_clean step is adapted from [here](https://gitlab.com/yhavinga/c4nlpreproc)

## Dataset Structure
### Data Instances
An example from the dataset:
```
{'text': 'Panitia Kerja (Panja) pembahasan RUU Cipta Kerja (Ciptaker) DPR RI memastikan naskah UU Ciptaker sudah final, tapi masih dalam penyisiran. Penyisiran dilakukan agar isi UU Ciptaker sesuai dengan kesepakatan dalam pembahasan dan tidak ada salah pengetikan (typo).\n"Kan memang sudah diumumkan, naskah final itu sudah. Cuma kita sekarang … DPR itu kan punya waktu 7 hari sebelum naskah resminya kita kirim ke pemerintah. Nah, sekarang itu kita sisir, jangan sampai ada yang salah pengetikan, tapi tidak mengubah substansi," kata Ketua Panja RUU Ciptaker Supratman Andi Agtas saat berbincang dengan detikcom, Jumat (9/10/2020) pukul 10.56 WIB.\nSupratman mengungkapkan Panja RUU Ciptaker menggelar rapat hari ini untuk melakukan penyisiran terhadap naskah UU Ciptaker.
Panja, sebut dia, bekerja sama dengan pemerintah dan ahli bahasa untuk melakukan penyisiran naskah.\n"Sebentar, siang saya undang seluruh poksi-poksi (kelompok fraksi) Baleg (Badan Legislasi DPR), anggota Panja itu datang ke Baleg untuk melihat satu per satu, jangan sampai …. Karena kan sekarang ini tim dapur pemerintah dan DPR lagi bekerja bersama dengan ahli bahasa melihat jangan sampai ada yang typo, redundant," terangnya.\nSupratman membenarkan bahwa naskah UU Ciptaker yang final itu sudah beredar. Ketua Baleg DPR itu memastikan penyisiran yang dilakukan tidak mengubah substansi setiap pasal yang telah melalui proses pembahasan.\n"Itu yang sudah dibagikan. Tapi kan itu substansinya yang tidak mungkin akan berubah. Nah, kita pastikan nih dari sisi drafting-nya yang jadi kita pastikan," tutur Supratman.\nLebih lanjut Supratman menjelaskan DPR memiliki waktu 7 hari untuk melakukan penyisiran. Anggota DPR dari Fraksi Gerindra itu memastikan paling lambat Selasa (13/10) pekan depan, naskah UU Ciptaker sudah bisa diakses oleh masyarakat melalui situs DPR.\n"Kita itu, DPR, punya waktu sampai 7 hari kerja. Jadi harusnya hari Selasa sudah final semua, paling lambat. Tapi saya usahakan hari ini bisa final. Kalau sudah final, semua itu langsung bisa diakses di web DPR," terang Supratman.\nDiberitakan sebelumnya, Wakil Ketua Baleg DPR Achmad Baidowi mengakui naskah UU Ciptaker yang telah disahkan di paripurna DPR masih dalam proses pengecekan untuk menghindari kesalahan pengetikan. Anggota Komisi VI DPR itu menyinggung soal salah ketik dalam revisi UU KPK yang disahkan pada 2019.\n"Mengoreksi yang typo itu boleh, asalkan tidak mengubah substansi. Jangan sampai seperti tahun lalu, ada UU salah ketik soal umur \'50 (empat puluh)\', sehingga pemerintah harus mengonfirmasi lagi ke DPR," ucap Baidowi, Kamis (8/10).', 'url': 'https://news.detik.com/berita/d-5206925/baleg-dpr-naskah-final-uu-ciptaker-sedang-diperbaiki-tanpa-ubah-substansi?tag_from=wp_cb_mostPopular_list&_ga=2.71339034.848625040.1602222726-629985507.1602222726', 'timestamp': '2021-10-22T04:09:47Z', 'meta': '{"warc_headers": {"content-length": "2747", "content-type": "text/plain", "warc-date": "2021-10-22T04:09:47Z", "warc-record-id": "<urn:uuid:a5b2cc09-bd2b-4d0e-9e5b-2fcc5fce47cb>", "warc-identified-content-language": "ind,eng", "warc-target-uri": "https://news.detik.com/berita/d-5206925/baleg-dpr-naskah-final-uu-ciptaker-sedang-diperbaiki-tanpa-ubah-substansi?tag_from=wp_cb_mostPopular_list&_ga=2.71339034.848625040.1602222726-629985507.1602222726", "warc-block-digest": "sha1:65AWBDBLS74AGDCGDBNDHBHADOKSXCKV", "warc-type": "conversion", "warc-refers-to": "<urn:uuid:b7ceadba-7120-4e38-927c-a50db21f0d4f>"}, "identification": {"label": "id", "prob": 0.6240405}, "annotations": null, "line_identifications": [null, {"label": "id", "prob": 0.9043896}, null, null, {"label": "id", "prob": 0.87111086}, {"label": "id", "prob": 0.9095224}, {"label": "id", "prob": 0.8579232}, {"label": "id", "prob": 0.81366056}, {"label": "id", "prob": 0.9286813}, {"label": "id", "prob": 0.8435194}, {"label": "id", "prob": 0.8387821}, null]}'} ``` ### Data Fields The data contains the following fields: - `url`: url of the source as a string - `text`: text content as a string - `timestamp`: timestamp of extraction as a string - `meta` : json representation of the original from ungoliant tools,can be found [here](https://oscar-corpus.com/post/oscar-v22-01/) (warc_heder) ## Additional Information ### Dataset Curators For inquiries or requests regarding the KoPI-CC 
data contained in this repository, please contact me at [samsulrahmadani@gmail.com](mailto:samsulrahmadani@gmail.com)

### Licensing Information
These data are released under the following licensing scheme: I do not own any of the text from which these data have been extracted. I license the actual packaging of these data under the Creative Commons CC0 license ("no rights reserved") http://creativecommons.org/publicdomain/zero/1.0/

Should you consider that the data contain material that is owned by you and should therefore not be reproduced here, please:
* Clearly identify yourself, with detailed contact data such as an address, telephone number or email address at which you can be contacted.
* Clearly identify the copyrighted work claimed to be infringed.
* Clearly identify the material that is claimed to be infringing and information reasonably sufficient to allow us to locate the material.

I will comply with legitimate requests by removing the affected sources from the next release of the corpus.

### Citation Information
```
@ARTICLE{2022arXiv220106642A,
  author = {{Abadji}, Julien and {Ortiz Suarez}, Pedro and {Romary}, Laurent and {Sagot}, Beno{\^\i}t},
  title = "{Towards a Cleaner Document-Oriented Multilingual Crawled Corpus}",
  journal = {arXiv e-prints},
  keywords = {Computer Science - Computation and Language},
  year = 2022,
  month = jan,
  eid = {arXiv:2201.06642},
  pages = {arXiv:2201.06642},
  archivePrefix = {arXiv},
  eprint = {2201.06642},
  primaryClass = {cs.CL},
  adsurl = {https://ui.adsabs.harvard.edu/abs/2022arXiv220106642A},
  adsnote = {Provided by the SAO/NASA Astrophysics Data System}
}

@inproceedings{AbadjiOrtizSuarezRomaryetal.2021,
  author = {Julien Abadji and Pedro Javier Ortiz Su{\'a}rez and Laurent Romary and Beno{\^i}t Sagot},
  title = {Ungoliant: An optimized pipeline for the generation of a very large-scale multilingual web corpus},
  series = {Proceedings of the Workshop on Challenges in the Management of Large Corpora (CMLC-9) 2021. Limerick, 12 July 2021 (Online-Event)},
  editor = {Harald L{\"u}ngen and Marc Kupietz and Piotr Bański and Adrien Barbaresi and Simon Clematide and Ines Pisetta},
  publisher = {Leibniz-Institut f{\"u}r Deutsche Sprache},
  address = {Mannheim},
  doi = {10.14618/ids-pub-10468},
  url = {https://nbn-resolving.org/urn:nbn:de:bsz:mh39-104688},
  pages = {1 -- 9},
  year = {2021},
  abstract = {Since the introduction of large language models in Natural Language Processing, large raw corpora have played a crucial role in Computational Linguistics. However, most of these large raw corpora are either available only for English or not available to the general public due to copyright issues. Nevertheless, there are some examples of freely available multilingual corpora for training Deep Learning NLP models, such as the OSCAR and Paracrawl corpora. However, they have quality issues, especially for low-resource languages. Moreover, recreating or updating these corpora is very complex. In this work, we try to reproduce and improve the goclassy pipeline used to create the OSCAR corpus. We propose a new pipeline that is faster, modular, parameterizable, and well documented. We use it to create a corpus similar to OSCAR but larger and based on recent data. Also, unlike OSCAR, the metadata information is at the document level. We release our pipeline under an open source license and publish the corpus under a research-only license.},
  language = {en}
}
```
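As referenced in the **Neardup** preprocessing step above, here is a minimal near-deduplication sketch using the `datasketch` library with the stated configuration (128 permutations, word 6-grams, similarity threshold 0.8). The function and variable names are illustrative, not the actual KoPI-CC code:

```py
from datasketch import MinHash, MinHashLSH

NUM_PERM, NGRAM, THRESHOLD = 128, 6, 0.8


def minhash_of(text: str) -> MinHash:
    # Build word 6-grams (whitespace tokenization) and hash them into a MinHash signature.
    words = text.split()
    grams = {" ".join(words[i:i + NGRAM]) for i in range(max(len(words) - NGRAM + 1, 1))}
    m = MinHash(num_perm=NUM_PERM)
    for g in grams:
        m.update(g.encode("utf-8"))
    return m


def near_duplicate_ids(docs: dict[str, str]) -> set[str]:
    # Index each document in an LSH structure and flag documents that collide
    # with an already-indexed document at estimated Jaccard similarity >= THRESHOLD.
    lsh = MinHashLSH(threshold=THRESHOLD, num_perm=NUM_PERM)
    duplicates = set()
    for doc_id, text in docs.items():
        m = minhash_of(text)
        if lsh.query(m):  # a similar document was already seen -> near-duplicate
            duplicates.add(doc_id)
        else:
            lsh.insert(doc_id, m)
    return duplicates
```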
# AutoTrain Dataset for project: provision_classification ## Dataset Descritpion This dataset has been automatically processed by AutoTrain for project provision_classification. ### Languages The BCP-47 code for the dataset's language is unk. ## Dataset Structure ### Data Instances A sample from this dataset looks as follows: ```json [ { "text": "Each Partner hereby represents and warrants to the Partnership and each other Partner that (a)\u00a0if such Partner is a corporation, it is duly organized, validly existing, and in good standing under the laws of the jurisdiction of its incorporation and is duly qualified and in good standing as a foreign corporation in the jurisdiction of its principal place of business (if not incorporated therein), (b) if such Partner is a trust, estate or other entity, it is duly formed, validly existing, and (if applicable) in good standing under the laws of the jurisdiction of its formation, and if required by law is duly qualified to do business and (if applicable) in good standing in the jurisdiction of its principal place of business (if not formed therein), (c) such Partner has full corporate, trust, or other applicable right, power and authority to enter into this Agreement and to perform its obligations hereunder and all necessary actions by the board of directors, trustees, beneficiaries, or other Persons necessary for the due authorization, execution, delivery, and performance of this Agreement by such Partner have been duly taken, and such authorization, execution, delivery, and performance do not conflict with any other agreement or arrangement to which such Partner is a party or by which it is bound, and (d)\u00a0such Partner is acquiring its interest in the Partnership for investment purposes and not with a view to distribution thereof.", "target": 13 }, { "text": "This Letter Agreement is binding upon and inures to the benefit of the parties and their respective heirs, executors, administrators, personal representatives, successors, and permitted assigns. This Letter Agreement is personal to you and the availability of you to perform services and the covenants provided by you hereunder have been a material consideration for the Company to enter into this Letter Agreement. Accordingly, you may not assign any of your rights or delegate any of your duties under this Letter Agreement, either voluntarily or by operation of law, without the prior written consent of the Company, which may be given or withheld by the Company in its sole and absolute discretion.", "target": 0 } ] ``` ### Dataset Fields The dataset has the following fields (also called "features"): ```json { "text": "Value(dtype='string', id=None)", "target": "ClassLabel(num_classes=19, names=['Assignment', 'Attorney Fees', 'Bankruptcy', 'Change of Control', 'Compliance with Laws', 'Confidentiality', 'Entire Agreement', 'General Definition', 'Governing Law', 'Indemnification', 'Injunctive Relief', 'Jurisdiction and Venue', 'Liens', 'No Warranties', 'Other', 'Permitted Disclosure', 'Survival', 'Term', 'Termination for Convenience'], id=None)" } ``` ### Dataset Splits This dataset is split into a train and validation split. The split sizes are as follow: | Split name | Num samples | | ------------ | ------------------- | | train | 119023 | | valid | 13225 |
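Since `target` is stored as an integer-backed `ClassLabel`, the label names listed above can be recovered programmatically. A minimal sketch; the repository id `user/autotrain-data-provision_classification` is a placeholder, not the actual path of this AutoTrain dataset:

```py
from datasets import load_dataset

# Placeholder repository id; substitute the actual AutoTrain dataset path.
dataset = load_dataset("user/autotrain-data-provision_classification")

# The ClassLabel feature maps integer targets back to their string names.
label_feature = dataset["train"].features["target"]
example = dataset["train"][0]
print(example["text"][:80], "->", label_feature.int2str(example["target"]))
```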
About Dataset

Context

This dataset contains news headlines published over a period of nineteen years, sourced from the reputable Australian news source ABC (Australian Broadcasting Corporation). Agency site: http://www.abc.net.au

Content

Format: CSV; single file
- publish_date: date the article was published, in yyyyMMdd format
- headline_text: text of the headline, in ASCII, English, lowercase

Start date: 2003-02-19; end date: 2021-12-31

Inspiration

I look at this news dataset as a summarised historical record of noteworthy events around the globe from early 2003 to the end of 2021, with a more granular focus on Australia. It includes the entire corpus of articles published by the ABC News website in the given date range. With a volume of two hundred articles per day and a good focus on international news, we can be fairly certain that every event of significance has been captured here. Digging into the keywords, one can see all the important episodes shaping the last decade and how they evolved over time, e.g. the Afghanistan war, the financial crisis, multiple elections, ecological disasters, terrorism, famous people, criminal activity, et cetera.

Similar Work

Similar news datasets exploring other attributes, countries and topics can be seen on my profile. Most kernels can be reused with minimal changes across these news datasets.

Prepared by Rohit Kulkarni. Taken from https://www.kaggle.com/datasets/therohk/million-headlines
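A minimal sketch of loading the CSV and parsing the yyyyMMdd dates; the local file name `abcnews-date-text.csv` is an assumption, adjust it to wherever the file was downloaded:

```py
import pandas as pd

# Assumed local file name; adjust to the actual path of the downloaded CSV.
df = pd.read_csv("abcnews-date-text.csv")

# publish_date is stored as yyyyMMdd integers; parse it into proper datetimes.
df["publish_date"] = pd.to_datetime(df["publish_date"], format="%Y%m%d")

# Example: count headlines per year.
print(df.groupby(df["publish_date"].dt.year)["headline_text"].count())
```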
# Dataset Card for Cerpen Corpus ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary This is a small size for Indonesian short story gathered from the internet. We keep the large size for internal research. if you are interested, please join to [our discord server](https://discord.gg/6v28dq8dRE) ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions Thanks to [@andreaschandra](https://github.com/andreaschandra) for adding this dataset.
# Dataset Card for Visual Spatial Reasoning ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://ltl.mmll.cam.ac.uk/ - **Repository:** https://github.com/cambridgeltl/visual-spatial-reasoning - **Paper:** https://arxiv.org/abs/2205.00363 - **Leaderboard:** https://paperswithcode.com/sota/visual-reasoning-on-vsr - **Point of Contact:** https://ltl.mmll.cam.ac.uk/ ### Dataset Summary The Visual Spatial Reasoning (VSR) corpus is a collection of caption-image pairs with true/false labels. Each caption describes the spatial relation of two individual objects in the image, and a vision-language model (VLM) needs to judge whether the caption is correctly describing the image (True) or not (False). ### Supported Tasks and Leaderboards We test three baselines, all supported in huggingface. They are VisualBERT [(Li et al. 2019)](https://arxiv.org/abs/1908.03557), LXMERT [(Tan and Bansal, 2019)](https://arxiv.org/abs/1908.07490) and ViLT [(Kim et al. 2021)](https://arxiv.org/abs/2102.03334). The leaderboard can be checked at [Papers With Code](https://paperswithcode.com/sota/visual-reasoning-on-vsr). model | random split | zero-shot :-------------|:-------------:|:-------------: *human* | *95.4* | *95.4* VisualBERT | 57.4 | 54.0 LXMERT | **72.5** | **63.2** ViLT | 71.0 | 62.4 ### Languages The language in the dataset is English as spoken by the annotators. The BCP-47 code for English is en. [`meta_data.csv`](https://github.com/cambridgeltl/visual-spatial-reasoning/tree/master/data/data_files/meta_data.jsonl) contains meta data of annotators. ## Dataset Structure ### Data Instances Each line is an individual data point. Each `jsonl` file is of the following format: ```json {"image": "000000050403.jpg", "image_link": "http://images.cocodataset.org/train2017/000000050403.jpg", "caption": "The teddy bear is in front of the person.", "label": 1, "relation": "in front of", "annotator_id": 31, "vote_true_validator_id": [2, 6], "vote_false_validator_id": []} {"image": "000000401552.jpg", "image_link": "http://images.cocodataset.org/train2017/000000401552.jpg", "caption": "The umbrella is far away from the motorcycle.", "label": 0, "relation": "far away from", "annotator_id": 2, "vote_true_validator_id": [], "vote_false_validator_id": [2, 9, 1]} ``` ### Data Fields `image` denotes name of the image in COCO and `image_link` points to the image on the COCO server (so you can also access directly). `caption` is self-explanatory. 
`label` being `0` and `1` corresponds to False and True respectively. `relation` records the spatial relation used. `annotator_id` points to the annotator who originally wrote the caption. `vote_true_validator_id` and `vote_false_validator_id` are annotators who voted True or False in the second phase validation. ### Data Splits The VSR corpus, after validation, contains 10,119 data points with high agreement. On top of these, we create two splits (1) random split and (2) zero-shot split. For random split, we randomly split all data points into train, development, and test sets. Zero-shot split makes sure that train, development and test sets have no overlap of concepts (i.e., if *dog* is in test set, it is not used for training and development). Below are some basic statistics of the two splits. split | train | dev | test | total :------|:--------:|:--------:|:--------:|:--------: random | 7,083 | 1,012 | 2,024 | 10,119 zero-shot | 5,440 | 259 | 731 | 6,430 Check out [`data/`](https://github.com/cambridgeltl/visual-spatial-reasoning/tree/master/data) for more details. ## Dataset Creation ### Curation Rationale Understanding spatial relations is fundamental to achieve intelligence. Existing vision-language reasoning datasets are great but they compose multiple types of challenges and can thus conflate different sources of error. The VSR corpus focuses specifically on spatial relations so we can have accurate diagnosis and maximum interpretability. ### Source Data #### Initial Data Collection and Normalization **Image pair sampling.** MS COCO 2017 contains 123,287 images and has labelled the segmentation and classes of 886,284 instances (individual objects). Leveraging the segmentation, we first randomly select two concepts, then retrieve all images containing the two concepts in COCO 2017 (train and validation sets). Then images that contain multiple instances of any of the concept are filtered out to avoid referencing ambiguity. For the single-instance images, we also filter out any of the images with instance area size < 30, 000, to prevent extremely small instances. After these filtering steps, we randomly sample a pair in the remaining images. We repeat such process to obtain a large number of individual image pairs for caption generation. #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process **Fill in the blank: template-based caption generation.** Given a pair of images, the annotator needs to come up with a valid caption that makes it correctly describing one image but incorrect for the other. In this way, the annotator could focus on the key difference of the two images (which should be spatial relation of the two objects of interest) and come up with challenging relation that differentiates the two. Similar paradigms are also used in the annotation of previous vision-language reasoning datasets such as NLVR2 (Suhr et al., 2017, 2019) and MaRVL (Liu et al., 2021). To regularise annotators from writing modifiers and differentiating the image pair with things beyond accurate spatial relations, we opt for a template-based classification task instead of free-form caption writing. Besides, the template-generated dataset can be easily categorised based on relations and their meta-categories. The caption template has the format of “The `OBJ1` (is) __ the `OBJ2`.”, and the annotators are instructed to select a relation from a fixed set to fill in the slot. The copula “is” can be omitted for grammaticality. 
For example, for “contains”, “consists of”, and “has as a part”, “is” should be discarded in the template when extracting the final caption. The fixed set of spatial relations enable us to obtain the full control of the generation process. The full list of used relations are listed in the table below. It contains 71 spatial relations and is adapted from the summarised relation table of Fagundes et al. (2021). We made minor changes to filter out clearly unusable relations, made relation names grammatical under our template, and reduced repeated relations. In our final dataset, 65 out of the 71 available relations are actually included (the other 6 are either not selected by annotators or are selected but the captions did not pass the validation phase). | Category | Spatial Relations | |-------------|-------------------------------------------------------------------------------------------------------------------------------------------------| | Adjacency | Adjacent to, alongside, at the side of, at the right side of, at the left side of, attached to, at the back of, ahead of, against, at the edge of | | Directional | Off, past, toward, down, deep down*, up*, away from, along, around, from*, into, to*, across, across from, through*, down from | | Orientation | Facing, facing away from, parallel to, perpendicular to | | Projective | On top of, beneath, beside, behind, left of, right of, under, in front of, below, above, over, in the middle of | | Proximity | By, close to, near, far from, far away from | | Topological | Connected to, detached from, has as a part, part of, contains, within, at, on, in, with, surrounding, among, consists of, out of, between, inside, outside, touching | | Unallocated | Beyond, next to, opposite to, after*, among, enclosed by | **Second-round Human Validation.** Every annotated data point is reviewed by at least two additional human annotators (validators). In validation, given a data point (consists of an image and a caption), the validator gives either a True or False label. We exclude data points that have < 2/3 validators agreeing with the original label. In the guideline, we communicated to the validators that, for relations such as “left”/“right”, “in front of”/“behind”, they should tolerate different reference frame: i.e., if the caption is true from either the object’s or the viewer’s reference, it should be given a True label. Only when the caption is incorrect under all reference frames, a False label is assigned. This adds difficulty to the models since they could not naively rely on relative locations of the objects in the images but also need to correctly identify orientations of objects to make the best judgement. #### Who are the annotators? Annotators are hired from [prolific.co](https://prolific.co). We require them (1) have at least a bachelor’s degree, (2) are fluent in English or native speaker, and (3) have a >99% historical approval rate on the platform. All annotators are paid with an hourly salary of 12 GBP. Prolific takes an extra 33% of service charge and 20% VAT on the service charge. For caption generation, we release the task with batches of 200 instances and the annotator is required to finish a batch in 80 minutes. An annotator cannot take more than one batch per day. In this way we have a diverse set of annotators and can also prevent annotators from being fatigued. For second round validation, we group 500 data points in one batch and an annotator is asked to label each batch in 90 minutes. 
In total, 24 annotators participated in caption generation and 26 participated in validation. The annotators have diverse demographic backgrounds: they were born in 13 different countries, live in 13 different countries, and have 14 different nationalities. 57.4% of the annotators identify themselves as females and 42.6% as males.
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
This project is licensed under the [Apache-2.0 License](https://github.com/cambridgeltl/visual-spatial-reasoning/blob/master/LICENSE).
### Citation Information
```bibtex
@article{Liu2022VisualSR,
  title={Visual Spatial Reasoning},
  author={Fangyu Liu and Guy Edward Toh Emerson and Nigel Collier},
  journal={ArXiv},
  year={2022},
  volume={abs/2205.00363}
}
```
### Contributions
Thanks to [@juletx](https://github.com/juletx) for adding this dataset.
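To relate the data fields described above to the underlying COCO images, a minimal sketch that reads one of the `jsonl` split files and fetches the linked image; the local file name `train.jsonl` is an assumption (see the `data/` folder in the repository for the actual split files):

```py
import json
from io import BytesIO

import requests
from PIL import Image

# Assumed local copy of one of the split files from the repository's data/ folder.
with open("train.jsonl", encoding="utf-8") as f:
    example = json.loads(f.readline())

# Each record pairs a caption with a COCO image and a binary label (1 = True, 0 = False).
print(example["caption"], "->", bool(example["label"]))
image = Image.open(BytesIO(requests.get(example["image_link"]).content))
print(image.size)
```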
# Dataset Card for Indonesian News Title Generation ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary [More Information Needed] ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions Thanks to [@andreaschandra](https://github.com/andreaschandra) for adding this dataset.
This is the summarization datasets collected by TextBox, including: - CNN/Daily Mail (cnndm) - XSum (xsum) - SAMSum (samsum) - WLE (wle) - Newsroom (nr) - WikiHow (wikihow) - MicroSoft News (msn) - MediaSum (mediasum) - English Gigaword (eg). The detail and leaderboard of each dataset can be found in [TextBox page](https://github.com/RUCAIBox/TextBox#dataset).
This is the commonsense generation datasets collected by TextBox, including: - CommonGen (cg). The detail and leaderboard of each dataset can be found in [TextBox page](https://github.com/RUCAIBox/TextBox#dataset).
This is the question generation datasets collected by TextBox, including: - SQuAD (squadqg) - CoQA (coqaqg) - NewsQA (newsqa) - HotpotQA (hotpotqa) - MS MARCO (marco) - MSQG (msqg) - NarrativeQA (nqa) - QuAC (quac). The detail and leaderboard of each dataset can be found in [TextBox page](https://github.com/RUCAIBox/TextBox#dataset).
This is the simplification datasets collected by TextBox, including: - WikiAuto + Turk/ASSET (wia-t). The detail and leaderboard of each dataset can be found in [TextBox page](https://github.com/RUCAIBox/TextBox#dataset).
This is the task dialogue datasets collected by TextBox, including: - MultiWOZ 2.0 (multiwoz) - MetaLWOZ (metalwoz) - KVRET (kvret) - WOZ (woz) - CamRest676 (camres676) - Frames (frames) - TaskMaster (taskmaster) - Schema-Guided (schema) - MSR-E2E (e2e_msr). The detail and leaderboard of each dataset can be found in [TextBox page](https://github.com/RUCAIBox/TextBox#dataset).
# Dataset Card for Swedish Xsum Dataset
The Swedish xsum dataset is a machine-translated version of the English original, created to improve downstream fine-tuning on Swedish summarization tasks.
## Dataset Summary
Read the full details in the original English version: https://huggingface.co/datasets/xsum
### Data Fields
- `id`: a string containing the hexadecimal-formatted SHA-1 hash of the URL where the story was retrieved from
- `document`: a string containing the body of the news article
- `summary`: a string containing the summary of the article as written by the article author
### Data Splits
The Swedish xsum dataset follows the same splits as the original English version and has 3 splits: _train_, _validation_, and _test_.

| Dataset Split | Number of Instances in Split |
| ------------- | ---------------------------- |
| Train         | 204,045                      |
| Validation    | 11,332                       |
| Test          | 11,334                       |
# Dataset Card for Swedish Wiki_lingua Dataset
The Swedish wiki_lingua dataset is a machine-translated version of the original, created to improve downstream fine-tuning on Swedish summarization tasks.
## Dataset Summary
Read the full details in the original multilingual version: https://huggingface.co/datasets/wiki_lingua
### Data details
- gem_id: the id of the data instance.
- gem_id_parent: the id of the parent data instance.
- Document: a string containing the document body.
- Summary: a string containing the summary of the body.
### Data Splits
The Swedish wiki_lingua dataset follows the same splits as the original version and has 3 splits: _train_, _validation_, and _test_.

| Dataset Split | Number of Instances in Split |
| ------------- | ---------------------------- |
| Train         | 95,516                       |
| Validation    | 27,489                       |
| Test          | 13,340                       |
# Dataset Card for Indonesian Sentence Paraphrase Detection ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary The dataset is originally from [Microsoft Research Paraphrase Corpus](https://www.microsoft.com/en-us/download/details.aspx?id=52398). We translated the text into Bahasa using google translate. ### Supported Tasks and Leaderboards [More Information Needed] ### Languages Indonesian ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions Thanks to [@andreaschandra](https://github.com/andreaschandra) for adding this dataset.
# Dataset Card for broad_twitter_corpus ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-instances) - [Data Splits](#data-instances) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) ## Dataset Description - **Homepage:** [https://github.com/GateNLP/broad_twitter_corpus](https://github.com/GateNLP/broad_twitter_corpus) - **Repository:** [https://github.com/GateNLP/broad_twitter_corpus](https://github.com/GateNLP/broad_twitter_corpus) - **Paper:** [http://www.aclweb.org/anthology/C16-1111](http://www.aclweb.org/anthology/C16-1111) - **Leaderboard:** [Named Entity Recognition on Broad Twitter Corpus](https://paperswithcode.com/sota/named-entity-recognition-on-broad-twitter) - **Point of Contact:** [Leon Derczynski](https://github.com/leondz) ### Dataset Summary This is the Broad Twitter corpus, a dataset of tweets collected over stratified times, places and social uses. The goal is to represent a broad range of activities, giving a dataset more representative of the language used in this hardest of social media formats to process. Further, the BTC is annotated for named entities. See the paper, [Broad Twitter Corpus: A Diverse Named Entity Recognition Resource](http://www.aclweb.org/anthology/C16-1111), for details. ### Supported Tasks and Leaderboards * Named Entity Recognition * On PWC: [Named Entity Recognition on Broad Twitter Corpus](https://paperswithcode.com/sota/named-entity-recognition-on-broad-twitter) ### Languages English from UK, US, Australia, Canada, Ireland, New Zealand; `bcp47:en` ## Dataset Structure ### Data Instances Feature |Count ---|---: Documents |9 551 Tokens |165 739 Person entities |5 271 Location entities |3 114 Organization entities |3 732 ### Data Fields Each tweet contains an ID, a list of tokens, and a list of NER tags - `id`: a `string` feature. - `tokens`: a `list` of `strings` - `ner_tags`: a `list` of class IDs (`int`s) representing the NER class: ``` 0: O 1: B-PER 2: I-PER 3: B-ORG 4: I-ORG 5: B-LOC 6: I-LOC ``` ### Data Splits Section|Region|Collection period|Description|Annotators|Tweet count ---|---|---|---|---|---: A | UK| 2012.01| General collection |Expert| 1000 B |UK |2012.01-02 |Non-directed tweets |Expert |2000 E |Global| 2014.07| Related to MH17 disaster| Crowd & expert |200 F |Stratified |2009-2014| Twitterati |Crowd & expert |2000 G |Stratified| 2011-2014| Mainstream news| Crowd & expert| 2351 H |Non-UK| 2014 |General collection |Crowd & expert |2000 The most varied parts of the BTC are sections F and H. However, each of the remaining four sections has some specific readily-identifiable bias. So, we propose that one uses half of section H for evaluation and leaves the other half in the training data. 
Section H should be partitioned in the order of the JSON-format lines. Note that the CoNLL-format data is readily reconstructible from the JSON format, which is the authoritative data format from which others are derived. **Test**: Section F **Development**: Section H (the paper says "second half of Section H" but ordinality could be ambiguous, so it all goes in. Bonne chance) **Training**: everything else ## Dataset Creation ### Curation Rationale [Needs More Information] ### Source Data #### Initial Data Collection and Normalization [Needs More Information] #### Who are the source language producers? [Needs More Information] ### Annotations #### Annotation process [Needs More Information] #### Who are the annotators? [Needs More Information] ### Personal and Sensitive Information [Needs More Information] ## Considerations for Using the Data ### Social Impact of Dataset [Needs More Information] ### Discussion of Biases [Needs More Information] ### Other Known Limitations [Needs More Information] ## Additional Information ### Dataset Curators [Needs More Information] ### Licensing Information Creative Commons Attribution 4.0 International (CC BY 4.0) ### Citation Information ``` @inproceedings{derczynski2016broad, title={Broad twitter corpus: A diverse named entity recognition resource}, author={Derczynski, Leon and Bontcheva, Kalina and Roberts, Ian}, booktitle={Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers}, pages={1169--1179}, year={2016} } ``` ### Contributions Author-added dataset [@leondz](https://github.com/leondz)
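As a small illustration of the `ner_tags` encoding described above, the class IDs can be mapped back to tag strings. A minimal sketch; the Hub repository id passed to `load_dataset` is a placeholder, substitute the actual path of this dataset:

```py
from datasets import load_dataset

# Placeholder repository id; substitute the actual Hub path of this dataset.
dataset = load_dataset("<org>/broad_twitter_corpus", split="train")

# Class IDs 0..6 as listed in the Data Fields section above.
labels = ["O", "B-PER", "I-PER", "B-ORG", "I-ORG", "B-LOC", "I-LOC"]

example = dataset[0]
for token, tag_id in zip(example["tokens"], example["ner_tags"]):
    print(f"{token}\t{labels[tag_id]}")
```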
# Dataset Card for Indonesian Movie Subtitle ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary [More Information Needed] ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions Thanks to [@andreaschandra](https://github.com/andreaschandra) for adding this dataset.
false
true
false
# Dataset Card for KOMET ### Dataset Summary KOMET 1.0 is a hand-annotated Slovenian corpus of metaphorical expressions which contains about 200 000 words (across 13 963 sentences) from Slovene journalistic, fiction and online texts. ### Supported Tasks and Leaderboards Metaphor detection, metaphor type classification, metaphor frame classification. ### Languages Slovenian. ## Dataset Structure ### Data Instances A sample instance from the dataset: ``` { 'document_name': 'komet49.div.xml', 'idx': 60, 'idx_paragraph': 24, 'idx_sentence': 1, 'sentence_words': ['Morda', 'zato', ',', 'ker', 'resnice', 'nočete', 'sprejeti', ',', 'in', 'nadaljujete', 'po', 'svoje', '.'], 'met_type': [{'type': 'MRWi', 'word_indices': [10]}], 'met_frame': [{'type': 'spatial_orientation', 'word_indices': [10]}, {'type': 'adverbial_phrase', 'word_indices': [10, 11]}]} ``` The sentence comes from the document `komet49.div.xml`, is the 60th sentence in the document and is the 1st sentence inside the 24th paragraph of the document. The word "po" is annotated as an indirect metaphor-related word (`MRWi`). The phrase "po svoje" is annotated with the frame "adverbial phrase" and the word "po" is additionally annotated with the frame "spatial_orientation". ### Data Fields - `document_name`: a string containing the name of the document in which the sentence appears; - `idx`: a uint32 containing the index of the sentence inside its document; - `idx_paragraph`: a uint32 containing the index of the paragraph in which the sentence appears; - `idx_sentence`: a uint32 containing the index of the sentence inside its paragraph; - `sentence_words`: words in the sentence; - `met_type`: metaphors in the sentence, marked by their type and word indices; - `met_frame`: metaphor frames in the sentence, marked by their type (frame name) and word indices. ## Dataset Creation The texts were sampled from the Corpus of Slovene youth literature MAKS (journalistic, fiction and online texts). Initially, words whose meaning deviates from their primary meaning in the Dictionary of the standard Slovene Language were marked as metaphors. Then, their type was determined, i.e. whether they are an indirect (MRWi), direct (MRWd), borderline (WIDLI) metaphor or a metaphor flag (signal, marker; MFlag). For more information, please check out the paper (which is in Slovenian) or contact the dataset author. ## Additional Information ### Dataset Curators Špela Antloga. ### Licensing Information CC BY-NC-SA 4.0 ### Citation Information ``` @InProceedings{antloga2020komet, title = {Korpus metafor KOMET 1.0}, author={Antloga, \v{S}pela}, booktitle={Proceedings of the Conference on Language Technologies and Digital Humanities (Student abstracts)}, year={2020}, pages={167-170} } ``` ### Contributions Thanks to [@matejklemen](https://github.com/matejklemen) for adding this dataset.
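To make the annotation layout above concrete, here is a minimal sketch (plain Python, using only the fields shown in the sample instance) that recovers the annotated words and frames from an example:

```python
# Pull the annotated metaphor words and frames out of a KOMET instance,
# using only the fields shown in the sample above.
instance = {
    "sentence_words": ["Morda", "zato", ",", "ker", "resnice", "nočete",
                       "sprejeti", ",", "in", "nadaljujete", "po", "svoje", "."],
    "met_type": [{"type": "MRWi", "word_indices": [10]}],
    "met_frame": [{"type": "spatial_orientation", "word_indices": [10]},
                  {"type": "adverbial_phrase", "word_indices": [10, 11]}],
}

def spans(annotations, words):
    # One (type, surface form) pair per annotation.
    return [(a["type"], " ".join(words[i] for i in a["word_indices"])) for a in annotations]

print(spans(instance["met_type"], instance["sentence_words"]))   # [('MRWi', 'po')]
print(spans(instance["met_frame"], instance["sentence_words"]))  # [('spatial_orientation', 'po'), ('adverbial_phrase', 'po svoje')]
```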
false
# Dataset Card for "tner/ttc" (Dummy) ***WARNING***: This is a dummy dataset for `ttc` and the correct one is [`tner/ttc`](https://huggingface.co/datasets/tner/ttc), which is private since **the TTC dataset is not publicly released at this point**. We will grant you access to the `tner/ttc` dataset once you have obtained the original dataset from the authors (you need to send an inquiry to Shruti Rijhwani, `srijhwan@cs.cmu.edu`). See their repository for more details: [https://github.com/shrutirij/temporal-twitter-corpus](https://github.com/shrutirij/temporal-twitter-corpus). Once you are granted access to the original TTC dataset by the author, please request access [here](https://huggingface.co/datasets/tner/ttc_dummy/discussions/1). ## Dataset Description - **Repository:** [T-NER](https://github.com/asahi417/tner) - **Paper:** [https://aclanthology.org/2020.acl-main.680/](https://aclanthology.org/2020.acl-main.680/) - **Dataset:** Temporal Twitter Corpus - **Domain:** Twitter - **Number of Entity:** 3 ### Dataset Summary Temporal Twitter Corpus (TTC) NER dataset formatted as a part of the [TNER](https://github.com/asahi417/tner) project. - Entity Types: `LOC`, `ORG`, `PER` ## Dataset Structure ### Data Instances An example of `train` looks as follows. ``` { 'tokens': ['😝', 'lemme', 'ask', '$MENTION$', ',', 'Timb', '???', '"', '$MENTION$', ':', '$RESERVED$', '!!!', '"', '$MENTION$', ':', '$MENTION$', 'Nezzzz', '!!', 'How', "'", 'bout', 'do', 'a', 'duet', 'with', '$MENTION$', '??!', ';)', '"'], 'tags': [6, 6, 6, 6, 6, 2, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6] } ``` ### Label ID The label2id dictionary can be found [here](https://huggingface.co/datasets/tner/btc/raw/main/dataset/label.json). ```python { "B-LOC": 0, "B-ORG": 1, "B-PER": 2, "I-LOC": 3, "I-ORG": 4, "I-PER": 5, "O": 6 } ``` ### Data Splits | name |train|validation|test| |---------|----:|---------:|---:| |ttc | 9995| 500|1477| ### Citation Information ``` @inproceedings{rijhwani-preotiuc-pietro-2020-temporally, title = "Temporally-Informed Analysis of Named Entity Recognition", author = "Rijhwani, Shruti and Preotiuc-Pietro, Daniel", booktitle = "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", month = jul, year = "2020", address = "Online", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2020.acl-main.680", doi = "10.18653/v1/2020.acl-main.680", pages = "7605--7617", abstract = "Natural language processing models often have to make predictions on text data that evolves over time as a result of changes in language use or the information described in the text. However, evaluation results on existing data sets are seldom reported by taking the timestamp of the document into account. We analyze and propose methods that make better use of temporally-diverse training data, with a focus on the task of named entity recognition. To support these experiments, we introduce a novel data set of English tweets annotated with named entities. We empirically demonstrate the effect of temporal drift on performance, and how the temporal information of documents can be used to obtain better models compared to those that disregard temporal information. Our analysis gives insights into why this information is useful, in the hope of informing potential avenues of improvement for named entity recognition as well as other NLP tasks under similar experimental setups.", } ```
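The integer `tags` map back to IOB label strings through the dictionary above; a minimal sketch (the shortened example below is illustrative, not a real corpus instance):

```python
# Map integer tag ids back to IOB label strings using the label2id dictionary above.
label2id = {"B-LOC": 0, "B-ORG": 1, "B-PER": 2, "I-LOC": 3, "I-ORG": 4, "I-PER": 5, "O": 6}
id2label = {v: k for k, v in label2id.items()}

example = {"tokens": ["lemme", "ask", "Timb"], "tags": [6, 6, 2]}  # shortened, illustrative only
print([(tok, id2label[tag]) for tok, tag in zip(example["tokens"], example["tags"])])
# [('lemme', 'O'), ('ask', 'O'), ('Timb', 'B-PER')]
```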
false
# Dataset Card for [COCO] ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary [More Information Needed] ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions Thanks to [@github-scuwyh2000](https://github.com/scuwyh2000) for adding this dataset.
false
# Dataset Card for NSME-COM ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [https://huggingface.co/asaxena1990](https://huggingface.co/asaxena1990) - **Repository:** [https://huggingface.co/datasets/asaxena1990/NSME-COM](https://huggingface.co/datasets/asaxena1990/NSME-COM) - **Point of Contact:** Ayushman Dash <ayushman@neuralspace.ai>, Ankur Saxena <ankursaxena@neuralspace.ai> - **Size of downloaded dataset files:** 10.86 KB ### Dataset Summary NSME-COM, the NeuralSpace Massive E-commerce Dataset, is a collection of resources for training, evaluating, and analyzing natural language understanding systems. ### Supported Tasks and Leaderboards NSME-COM supports intent classification. It comprises the following subset: #### nsds A manually curated, domain-specific dataset built by data engineers at [NeuralSpace](https://www.neuralspace.ai/) for rare e-commerce domains such as Insurance and Retail, intended for NLU researchers and practitioners to evaluate state-of-the-art models in 100+ languages. The dataset files are available in JSON format. ### Languages The language data in NSME-COM is in English (BCP-47 `en`). ## Dataset Structure ### Data Instances - **Size of downloaded dataset files:** 10.86 KB An example of 'test' looks as follows. ``` { "text": "is it good to add roadside assistance?", "intent": "Add", "type": "Test" } ``` An example of 'train' looks as follows. ``` { "text": "how can I add my spouse as a nominee?", "intent": "Add", "type": "Train" } ``` ### Data Fields The data fields are the same among all splits. #### nsds - `text`: a `string` feature. - `intent`: a `string` feature. - `type`: a `string` feature indicating the split an example belongs to, with possible values `Train` and `Test`. ### Data Splits #### nsds | |train|test| |----|----:|---:| |nsds| 1725| 406| ### Contributions Ankur Saxena (ankursaxena@neuralspace.ai)
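A rough loading sketch, assuming the records are stored in a single JSON file named `nsds.json` (a hypothetical filename) with the fields shown above:

```python
import json

# Split the records by their "type" field; the filename is an assumption.
with open("nsds.json", encoding="utf-8") as f:
    records = json.load(f)

train = [r for r in records if r["type"] == "Train"]
test = [r for r in records if r["type"] == "Test"]
intents = sorted({r["intent"] for r in records})
print(len(train), len(test), intents)
```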
false
# naab-raw (raw version of the naab corpus) _[If you want to join our community to keep up with news, models and datasets from naab, click on [this](https://docs.google.com/forms/d/e/1FAIpQLSe8kevFl_ODCx-zapAuOIAQYr8IvkVVaVHOuhRL9Ha0RVJ6kg/viewform) link.]_ ## Table of Contents - [Dataset Card Creation Guide](#dataset-card-creation-guide) - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Changelog](#changelog) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Contribution Guideline](#contribution-guideline) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [Sharif Speech and Language Processing Lab](https://huggingface.co/SLPL) - **Paper:** [naab: A ready-to-use plug-and-play corpus for Farsi](https://arxiv.org/abs/2208.13486) - **Point of Contact:** [Sadra Sabouri](mailto:sabouri.sadra@gmail.com) ### Dataset Summary This is the raw (uncleaned) version of the [naab](https://huggingface.co/datasets/SLPL/naab) corpus. You can also customize our [preprocess script](https://github.com/Sharif-SLPL/t5-fa/tree/main/preprocess) and make your own cleaned corpus. This repository is a hub for all Farsi corpora. Feel free to add your corpus following the [contribution guidelines](#contribution-guideline). You can download the dataset by the command below: ```python from datasets import load_dataset dataset = load_dataset("SLPL/naab-raw") ``` If you want to download a specific part of the corpus you can set the config name to the specific corpus name: ```python from datasets import load_dataset dataset = load_dataset("SLPL/naab-raw", "CC-fa") ``` ### Supported Tasks and Leaderboards This corpus can be used to train language models with Masked Language Modeling (MLM) or any other self-supervised objective. - `language-modeling` - `masked-language-modeling` ### Changelog It's crucial to log changes on projects that change periodically. Please refer to the [CHANGELOG.md](https://huggingface.co/datasets/SLPL/naab-raw/blob/main/CHANGELOG.md) for more details. ## Dataset Structure Each row of the dataset looks something like the below: ```json { 'text': "این یک تست برای نمایش یک پاراگراف در پیکره متنی ناب است.", } ``` + `text`: the textual paragraph. ### Data Splits This corpus contains only one split (the `train` split). ## Dataset Creation ### Curation Rationale Here are some details about each part of this corpus. #### CC-fa The Common Crawl corpus contains petabytes of data collected since 2008. It contains raw web page data, extracted metadata, and text extractions. We use the Farsi part of it here. #### W2C W2C stands for Web to Corpus and it contains several corpora. We include the Farsi part of it in this corpus. ### Contribution Guideline In order to add your dataset, you should follow the steps below and make a pull request in order to be merged into _naab-raw_: 1. Add your dataset to `_CORPUS_URLS` in `naab-raw.py` like: ```python ... 
"DATASET_NAME": "LINK_TO_A_PUBLIC_DOWNLOADABLE_FILE.txt" ... ``` 2. Add a log of your changes to the [CHANGELOG.md](https://huggingface.co/datasets/SLPL/naab-raw/blob/main/CHANGELOG.md). 3. Add some minor descriptions to the [Curation Rationale](#curation-rationale) under a subsection with your dataset name. ### Personal and Sensitive Information Since this corpus is briefly a compilation of some former corpora we take no responsibility for personal information included in this corpus. If you detect any of these violations please let us know, we try our best to remove them from the corpus ASAP. We tried our best to provide anonymity while keeping the crucial information. We shuffled some parts of the corpus so the information passing through possible conversations wouldn't be harmful. ## Additional Information ### Dataset Curators + Sadra Sabouri (Sharif University of Technology) + Elnaz Rahmati (Sharif University of Technology) ### Licensing Information mit ### Citation Information ``` @article{sabouri2022naab, title={naab: A ready-to-use plug-and-play corpus for Farsi}, author={Sabouri, Sadra and Rahmati, Elnaz and Gooran, Soroush and Sameti, Hossein}, journal={arXiv preprint arXiv:2208.13486}, year={2022} } ``` DOI:[https://doi.org/10.48550/arXiv.2208.13486](https://doi.org/10.48550/arXiv.2208.13486). ### Contributions Thanks to [@sadrasabouri](https://github.com/sadrasabouri) and [@elnazrahmati](https://github.com/elnazrahmati) for adding this dataset. ### Keywords + Farsi + Persian + raw text + پیکره فارسی + پیکره متنی + آموزش مدل زبانی
true
# WikiCAT_ca: Catalan Text Classification dataset ## Dataset Description - **Paper:** - **Point of Contact:** carlos.rodriguez1@bsc.es - **Repository:** https://github.com/TeMU-BSC/WikiCAT ### Dataset Summary WikiCAT_ca is a Catalan corpus for thematic Text Classification tasks. It is created automatically from Wikipedia and Wikidata sources, and contains 13201 articles from the Viquipèdia classified under 13 different categories. This dataset was developed by BSC TeMU as part of the AINA project, and is intended as an evaluation of LT capabilities to generate useful synthetic corpora. ### Supported Tasks and Leaderboards Text classification, Language Model ### Languages CA - Catalan ## Dataset Structure ### Data Instances Two json files, one for each split. ### Data Fields We used a simple model with the article text and associated labels, without further metadata. #### Example: <pre> {"version": "1.1.0", "data": [ { 'sentence': ' Celsius és conegut com l\'inventor de l\'escala centesimal del termòmetre. Encara que aquest instrument és un invent molt antic, la història de la seva gradació és molt més capritxosa. Durant el segle xvi era graduat com "fred" col·locant-lo (...)', 'label': 'Ciència' }, . . . ] } </pre> #### Labels 'Ciència_i_Tecnologia', 'Dret', 'Economia', 'Enginyeria', 'Entreteniment', 'Esport', 'Filosofia', 'Història', 'Humanitats', 'Matemàtiques', 'Música', 'Política', 'Religió' ### Data Splits * dev_ca.json: 2484 label-document pairs * train_ca.json: 9907 label-document pairs ## Dataset Creation ### Methodology "Category" starting pages are chosen to represent the topics in each language. We extract, for each category, the main pages, as well as the subcategory pages, and the individual pages under this first level. For each page, the "summary" provided by Wikipedia is also extracted as the representative text. ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization The source data are thematic categories in the different Wikipedias. #### Who are the source language producers? ### Annotations #### Annotation process Automatic annotation #### Who are the annotators? [N/A] ### Personal and Sensitive Information No personal or sensitive information included. ## Considerations for Using the Data ### Social Impact of Dataset We hope this corpus contributes to the development of language models in Catalan, a low-resource language. ### Discussion of Biases We are aware that this data might contain biases. We have not applied any steps to reduce their impact. ### Other Known Limitations [N/A] ## Additional Information ### Dataset Curators Text Mining Unit (TeMU) at the Barcelona Supercomputing Center (bsc-temu@bsc.es) This work was funded by the [Departament de la Vicepresidència i de Polítiques Digitals i Territori de la Generalitat de Catalunya](https://politiquesdigitals.gencat.cat/ca/inici/index.html#googtrans(ca|en) within the framework of [Projecte AINA](https://politiquesdigitals.gencat.cat/ca/economia/catalonia-ai/aina). ### Licensing Information This work is licensed under a <a rel="license" href="https://creativecommons.org/licenses/by-sa/4.0/">Attribution-ShareAlike 4.0 International</a>. ### Contributions [N/A]
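A minimal loading sketch, assuming `train_ca.json` follows the structure shown in the example above:

```python
import json
from collections import Counter

# Read the training file and count documents per label.
with open("train_ca.json", encoding="utf-8") as f:
    corpus = json.load(f)

pairs = corpus["data"]
print(len(pairs))  # expected: 9907 label-document pairs
print(Counter(item["label"] for item in pairs).most_common(5))
```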
false
This is a copy of the [Multi-XScience](https://huggingface.co/datasets/multi_x_science_sum) dataset, except the input source documents of its `test` split have been replaced by a __sparse__ retriever. The retrieval pipeline used: - __query__: The `related_work` field of each example - __corpus__: The union of all documents in the `train`, `validation` and `test` splits - __retriever__: BM25 via [PyTerrier](https://pyterrier.readthedocs.io/en/latest/) with default settings - __top-k strategy__: `"oracle"`, i.e. the number of documents retrieved, `k`, is set as the original number of input documents for each example Retrieval results on the `train` set: | Recall@100 | Rprec | Precision@k | Recall@k | | ----------- | ----------- | ----------- | ----------- | | 0.5482 | 0.2243 | 0.2243 | 0.2243 | Retrieval results on the `validation` set: | Recall@100 | Rprec | Precision@k | Recall@k | | ----------- | ----------- | ----------- | ----------- | | 0.5476 | 0.2209 | 0.2209 | 0.2209 | Retrieval results on the `test` set: | Recall@100 | Rprec | Precision@k | Recall@k | | ----------- | ----------- | ----------- | ----------- | | 0.5480 | 0.2272 | 0.2272 | 0.2272 |
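For reference, a comparable BM25 pipeline can be sketched in PyTerrier roughly as follows; the index path, document iterator and query cleanup are illustrative assumptions, not the exact code used to build this dataset:

```python
import re
import pyterrier as pt
from datasets import load_dataset

if not pt.started():
    pt.init()

dataset = load_dataset("multi_x_science_sum")

# Index the reference abstracts from all splits (the docno/text keying is an assumption).
def docs_iter():
    doc_id = 0
    for split in ("train", "validation", "test"):
        for example in dataset[split]:
            for abstract in example["ref_abstract"]["abstract"]:
                yield {"docno": str(doc_id), "text": abstract}
                doc_id += 1

index_ref = pt.IterDictIndexer("./mxs_index").index(docs_iter())
bm25 = pt.BatchRetrieve(index_ref, wmodel="BM25")  # default BM25 settings

# Query with the related_work field, stripping punctuation the Terrier query parser rejects.
query = re.sub(r"[^\w\s]", " ", dataset["test"][0]["related_work"])
print(bm25.search(query).head())
```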
false
# Dataset Card for Inglish: Indonesian English Translation Dataset ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary The original dataset is from MSRP dataset. The translation was generated from google translate. Feel free to check the translation if you find any error and open new discussion. ### Supported Tasks and Leaderboards Machine Translation ### Languages English - Indonesian ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions Thanks to [@andreaschandra](https://github.com/andreaschandra) for adding this dataset.
false
# Dataset Card for EstCOPA ### Dataset Summary EstCOPA is an extended version of [XCOPA](https://huggingface.co/datasets/xcopa) that was created with the goal of further investigating large language models' understanding of the Estonian language. EstCOPA provides two new versions of the train, eval and test datasets in Estonian: firstly, a machine-translated (En->Et) version of the original English COPA ([Roemmele et al., 2011](http://commonsensereasoning.org/2011/papers/Roemmele.pdf)), and secondly, a manually post-edited version of the same machine-translated data. ### Supported Tasks and Leaderboards [More Information Needed] ### Languages - et ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information If you use the dataset in your work, please cite: ``` @article{kuulmets_estcopa_2022, title={Estonian Language Understanding: a Case Study on the COPA Task}, volume={10}, DOI={https://doi.org/10.22364/bjmc.2022.10.3.19}, number={3}, journal={Baltic Journal of Modern Computing}, author={Kuulmets, Hele-Andra and Tättar, Andre and Fishel, Mark}, year={2022}, pages={470–480} } ``` ### Contributions Thanks to [@helehh](https://github.com/helehh) for adding this dataset.
false
# Dataset Card for GitHub-Issues ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary [More Information Needed] ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset.
false
# Dataset Card for 20Q
false
# Dataset Card for [Dataset Name] ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary [More Information Needed] ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset.
false
# AutoTrain Dataset for project: image-classification-test-18 ## Dataset Description This dataset has been automatically processed by AutoTrain for project image-classification-test-18. ### Languages The BCP-47 code for the dataset's language is unk. ## Dataset Structure ### Data Instances A sample from this dataset looks as follows: ```json [ { "image": "<224x224 RGB PIL image>", "target": 2 }, { "image": "<224x224 RGB PIL image>", "target": 2 } ] ``` ### Dataset Fields The dataset has the following fields (also called "features"): ```json { "image": "Image(decode=True, id=None)", "target": "ClassLabel(num_classes=3, names=['ADONIS', 'AFRICAN GIANT SWALLOWTAIL', 'AMERICAN SNOOT'], id=None)" } ``` ### Dataset Splits This dataset is split into a train and validation split. The split sizes are as follows: | Split name | Num samples | | ------------ | ------------------- | | train | 269 | | valid | 69 |
false
# Dataset Card for librispeech_asr ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [LibriSpeech ASR corpus](http://www.openslr.org/12) - **Repository:** [Needs More Information] - **Paper:** [LibriSpeech: An ASR Corpus Based On Public Domain Audio Books](https://www.danielpovey.com/files/2015_icassp_librispeech.pdf) - **Leaderboard:** [The 🤗 Speech Bench](https://huggingface.co/spaces/huggingface/hf-speech-bench) - **Point of Contact:** [Daniel Povey](mailto:dpovey@gmail.com) ### Dataset Summary LibriSpeech is a corpus of approximately 1000 hours of 16kHz read English speech, prepared by Vassil Panayotov with the assistance of Daniel Povey. The data is derived from read audiobooks from the LibriVox project, and has been carefully segmented and aligned. ### Supported Tasks and Leaderboards - `automatic-speech-recognition`, `audio-speaker-identification`: The dataset can be used to train a model for Automatic Speech Recognition (ASR). The model is presented with an audio file and asked to transcribe the audio file to written text. The most common evaluation metric is the word error rate (WER). The task has an active Hugging Face leaderboard which can be found at https://huggingface.co/spaces/huggingface/hf-speech-bench. The leaderboard ranks models uploaded to the Hub based on their WER. An external leaderboard at https://paperswithcode.com/sota/speech-recognition-on-librispeech-test-clean ranks the latest models from research and academia. ### Languages The audio is in English. There are two configurations: `clean` and `other`. The speakers in the corpus were ranked according to the WER of the transcripts of a model trained on a different dataset, and were divided roughly in the middle, with the lower-WER speakers designated as "clean" and the higher WER speakers designated as "other". ## Dataset Structure ### Data Instances A typical data point comprises the path to the audio file, usually called `file` and its transcription, called `text`. Some additional information about the speaker and the passage which contains the transcription is provided. 
``` {'chapter_id': 141231, 'file': '/home/siddhant/.cache/huggingface/datasets/downloads/extracted/b7ded9969e09942ab65313e691e6fc2e12066192ee8527e21d634aca128afbe2/dev_clean/1272/141231/1272-141231-0000.flac', 'audio': {'path': '/home/siddhant/.cache/huggingface/datasets/downloads/extracted/b7ded9969e09942ab65313e691e6fc2e12066192ee8527e21d634aca128afbe2/dev_clean/1272/141231/1272-141231-0000.flac', 'array': array([-0.00048828, -0.00018311, -0.00137329, ..., 0.00079346, 0.00091553, 0.00085449], dtype=float32), 'sampling_rate': 16000}, 'id': '1272-141231-0000', 'speaker_id': 1272, 'text': 'A MAN SAID TO THE UNIVERSE SIR I EXIST'} ``` ### Data Fields - file: A path to the downloaded audio file in .flac format. - audio: A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column: `dataset[0]["audio"]` the audio file is automatically decoded and resampled to `dataset.features["audio"].sampling_rate`. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the `"audio"` column, *i.e.* `dataset[0]["audio"]` should **always** be preferred over `dataset["audio"][0]`. - text: the transcription of the audio file. - id: unique id of the data sample. - speaker_id: unique id of the speaker. The same speaker id can be found for multiple data samples. - chapter_id: id of the audiobook chapter which includes the transcription. ### Data Splits The size of the corpus makes it impractical, or at least inconvenient for some users, to distribute it as a single large archive. Thus the training portion of the corpus is split into three subsets, with approximate size 100, 360 and 500 hours respectively. A simple automatic procedure was used to select the audio in the first two sets to be, on average, of higher recording quality and with accents closer to US English. An acoustic model was trained on WSJ’s si-84 data subset and was used to recognize the audio in the corpus, using a bigram LM estimated on the text of the respective books. We computed the Word Error Rate (WER) of this automatic transcript relative to our reference transcripts obtained from the book texts. The speakers in the corpus were ranked according to the WER of the WSJ model’s transcripts, and were divided roughly in the middle, with the lower-WER speakers designated as "clean" and the higher-WER speakers designated as "other". For "clean", the data is split into train, validation, and test set. The train set is further split into train.100 and train.360 respectively accounting for 100h and 360h of the training data. For "other", the data is split into train, validation, and test set. The train set contains approximately 500h of recorded speech. | | Train.500 | Train.360 | Train.100 | Valid | Test | | ----- | ------ | ----- | ---- | ---- | ---- | | clean | - | 104014 | 28539 | 2703 | 2620| | other | 148688 | - | - | 2864 | 2939 | ## Dataset Creation ### Curation Rationale [Needs More Information] ### Source Data #### Initial Data Collection and Normalization [Needs More Information] #### Who are the source language producers? [Needs More Information] ### Annotations #### Annotation process [Needs More Information] #### Who are the annotators? [Needs More Information] ### Personal and Sensitive Information The dataset consists of people who have donated their voice online. You agree to not attempt to determine the identity of speakers in this dataset. 
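Returning to the `audio` column behaviour described under Data Fields above, a minimal usage sketch (note that loading a configuration triggers its full download):

```python
from datasets import load_dataset

librispeech = load_dataset("librispeech_asr", "clean", split="validation")

sample = librispeech[0]        # query the sample index first ...
audio = sample["audio"]        # ... so only this file is decoded and resampled
print(audio["sampling_rate"])  # 16000
print(audio["array"].shape, sample["text"])
```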
## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [Needs More Information] ## Additional Information ### Dataset Curators The dataset was initially created by Vassil Panayotov, Guoguo Chen, Daniel Povey, and Sanjeev Khudanpur. ### Licensing Information [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/) ### Citation Information ``` @inproceedings{panayotov2015librispeech, title={Librispeech: an ASR corpus based on public domain audio books}, author={Panayotov, Vassil and Chen, Guoguo and Povey, Daniel and Khudanpur, Sanjeev}, booktitle={Acoustics, Speech and Signal Processing (ICASSP), 2015 IEEE International Conference on}, pages={5206--5210}, year={2015}, organization={IEEE} } ``` ### Contributions Thanks to [@patrickvonplaten](https://github.com/patrickvonplaten) for adding this dataset.
false
This is a copy of the [Multi-XScience](https://huggingface.co/datasets/multi_x_science_sum) dataset, except the input source documents of its `test` split have been replaced by a __sparse__ retriever. The retrieval pipeline used: - __query__: The `related_work` field of each example - __corpus__: The union of all documents in the `train`, `validation` and `test` splits - __retriever__: BM25 via [PyTerrier](https://pyterrier.readthedocs.io/en/latest/) with default settings - __top-k strategy__: `"mean"`, i.e. the number of documents retrieved, `k`, is set as the mean number of documents seen across examples in this dataset, in this case `k==4` Retrieval results on the `train` set: | Recall@100 | Rprec | Precision@k | Recall@k | | ----------- | ----------- | ----------- | ----------- | | 0.5482 | 0.2243 | 0.1578 | 0.2689 | Retrieval results on the `validation` set: | Recall@100 | Rprec | Precision@k | Recall@k | | ----------- | ----------- | ----------- | ----------- | | 0.5476 | 0.2209 | 0.1592 | 0.2650 | Retrieval results on the `test` set: | Recall@100 | Rprec | Precision@k | Recall@k | | ----------- | ----------- | ----------- | ----------- | | 0.548 | 0.2272 | 0.1611 | 0.2704 |
false
This is a copy of the [Multi-XScience](https://huggingface.co/datasets/multi_x_science_sum) dataset, except the input source documents of its `test` split have been replaced by a __sparse__ retriever. The retrieval pipeline used: - __query__: The `related_work` field of each example - __corpus__: The union of all documents in the `train`, `validation` and `test` splits - __retriever__: BM25 via [PyTerrier](https://pyterrier.readthedocs.io/en/latest/) with default settings - __top-k strategy__: `"max"`, i.e. the number of documents retrieved, `k`, is set as the maximum number of documents seen across examples in this dataset, in this case `k==20` Retrieval results on the `train` set: | Recall@100 | Rprec | Precision@k | Recall@k | | ----------- | ----------- | ----------- | ----------- | | 0.5482 | 0.2243 | 0.0547 | 0.4063 | Retrieval results on the `validation` set: | Recall@100 | Rprec | Precision@k | Recall@k | | ----------- | ----------- | ----------- | ----------- | | 0.5476 | 0.2209 | 0.0553 | 0.4026 | Retrieval results on the `test` set: | Recall@100 | Rprec | Precision@k | Recall@k | | ----------- | ----------- | ----------- | ----------- | | 0.5480 | 0.2272 | 0.055 | 0.4039 |
false
# Dataset Card for [Dataset Name] ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary [More Information Needed] ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset.
true
# Dataset Card for "ArabicNLPDataset" ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Dataset Preprocessing](#dataset-preprocessing) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [https://github.com/BihterDass/ArabicTextClassificationDataset] - **Repository:** [https://github.com/BihterDass/ArabicTextClassificationDataset] - **Size of downloaded dataset files:** 23.5 MB - **Size of the generated dataset:** 23.5 MB ### Dataset Summary The dataset was compiled from user comments on e-commerce sites. It consists of 80,000 training, 10,000 validation, and 10,000 test examples. The data were classified into 3 classes: positive (pos), negative (neg), and natural (nor). The data is available on GitHub. ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] #### arabic-dataset-v1 - **Size of downloaded dataset files:** 23.5 MB - **Size of the generated dataset:** 23.5 MB ### Data Fields The data fields are the same among all splits. #### arabic-dataset-v1 - `text`: a `string` feature. - `label`: a classification label, with possible values including `positive` (2), `natural` (1), `negative` (0). ### Data Splits | |train |validation|test | |----|--------:|---------:|---------:| |Data| 80000 | 10000 | 10000 | ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions Thanks to [@PnrSvc](https://github.com/PnrSvc) for adding this dataset.
false
This is a copy of the [MS^2](https://huggingface.co/datasets/allenai/mslr2022) dataset, except the input source documents of its `validation` split have been replaced by a __sparse__ retriever. The retrieval pipeline used: - __query__: The `background` field of each example - __corpus__: The union of all documents in the `train`, `validation` and `test` splits. A document is the concatenation of the `title` and `abstract`. - __retriever__: BM25 via [PyTerrier](https://pyterrier.readthedocs.io/en/latest/) with default settings - __top-k strategy__: `"mean"`, i.e. the number of documents retrieved, `k`, is set as the mean number of documents seen across examples in this dataset, in this case `k==17` Retrieval results on the `train` set: | Recall@100 | Rprec | Precision@k | Recall@k | | ----------- | ----------- | ----------- | ----------- | | 0.4333 | 0.2163 | 0.2051 | 0.2197 | Retrieval results on the `validation` set: | Recall@100 | Rprec | Precision@k | Recall@k | | ----------- | ----------- | ----------- | ----------- | | 0.3780 | 0.1827 | 0.1815 | 0.1792 | Retrieval results on the `test` set: | Recall@100 | Rprec | Precision@k | Recall@k | | ----------- | ----------- | ----------- | ----------- | | 0.3928 | 0.1898 | 0.1951 | 0.1820 |
false
This is a copy of the [MS^2](https://huggingface.co/datasets/allenai/mslr2022) dataset, except the input source documents of its `validation` split have been replaced by a __sparse__ retriever. The retrieval pipeline used: - __query__: The `background` field of each example - __corpus__: The union of all documents in the `train`, `validation` and `test` splits. A document is the concatenation of the `title` and `abstract`. - __retriever__: BM25 via [PyTerrier](https://pyterrier.readthedocs.io/en/latest/) with default settings - __top-k strategy__: `"oracle"`, i.e. the number of documents retrieved, `k`, is set as the original number of input documents for each example Retrieval results on the `train` set: | Recall@100 | Rprec | Precision@k | Recall@k | | ----------- | ----------- | ----------- | ----------- | | 0.4333 | 0.2163 | 0.2163 | 0.2163 | Retrieval results on the `validation` set: | Recall@100 | Rprec | Precision@k | Recall@k | | ----------- | ----------- | ----------- | ----------- | | 0.3780 | 0.1827 | 0.1827 | 0.1827 | Retrieval results on the `test` set: | Recall@100 | Rprec | Precision@k | Recall@k | | ----------- | ----------- | ----------- | ----------- | | 0.3928 | 0.1898 | 0.1898 | 0.1898 |
false
30,000 256x256 mel spectrograms of 5 second samples that have been used in music, sourced from [WhoSampled](https://whosampled.com) and [YouTube](https://youtube.com). The code to convert from audio to spectrogram and vice versa can be found in https://github.com/teticio/audio-diffusion along with scripts to train and run inference using De-noising Diffusion Probabilistic Models. ``` x_res = 256 y_res = 256 sample_rate = 22050 n_fft = 2048 hop_length = 512 ```
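The linked repository contains the exact conversion code; as a rough illustration of the forward transform under the parameters listed above (the input path and the 8-bit rescaling convention are assumptions), one might write:

```python
import librosa
import numpy as np
from PIL import Image

y, sr = librosa.load("sample.mp3", sr=22050)  # placeholder input path
S = librosa.feature.melspectrogram(y=y, sr=sr, n_fft=2048, hop_length=512, n_mels=256)
S_db = librosa.power_to_db(S, ref=np.max)

# Keep the first 256 frames and rescale to an 8-bit 256x256 greyscale image.
crop = S_db[:, :256]
img = 255 * (crop - crop.min()) / (crop.max() - crop.min())
Image.fromarray(img.astype(np.uint8)).save("mel.png")
```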
true
# Dataset Card for "UnpredicTable-5k" - Dataset of Few-shot Tasks from Tables ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-instances) - [Data Splits](#data-instances) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) ## Dataset Description - **Repository:** https://github.com/AnonCodeShare/few-shot-adaptation - **Paper:** Few-shot Adaptation Works with UnpredicTable Data ### Dataset Summary The UnpredicTable dataset consists of web tables formatted as few-shot tasks for fine-tuning language models to improve their few-shot performance. There are several dataset versions available: * [UnpredicTable-full](https://huggingface.co/datasets/unpredictable/unpredictable_full): Starting from the initial WTC corpus of 50M tables, we apply our tables-to-tasks procedure to produce our resulting dataset, [UnpredicTable-full](https://huggingface.co/datasets/unpredictable/unpredictable_full), which comprises 413,299 tasks from 23,744 unique websites. * [UnpredicTable-unique](https://huggingface.co/datasets/unpredictable/unpredictable_unique): This is the same as [UnpredicTable-full](https://huggingface.co/datasets/unpredictable/unpredictable_full) but filtered to have a maximum of one task per website. [UnpredicTable-unique](https://huggingface.co/datasets/unpredictable/unpredictable_unique) contains exactly 23,744 tasks from 23,744 websites. * [UnpredicTable-5k](https://huggingface.co/datasets/unpredictable/unpredictable_5k): This dataset contains 5k random tables from the full dataset. * UnpredicTable data subsets based on the website of origin: * [UnpredicTable-support-google-com](https://huggingface.co/datasets/unpredictable/unpredictable_support-google-com) ### Supported Tasks and Leaderboards Since the tables come from the web, the distribution of tasks and topics is very broad. The shape of our dataset is very wide, i.e., we have 1000's of tasks, while each task has only a few examples, compared to most current NLP datasets which are very deep, i.e., 10s of tasks with many examples. This implies that our dataset covers a broad range of potential tasks, e.g., multiple-choice, question-answering, table-question-answering, text-classification, etc. The intended use of this dataset is to improve few-shot performance by fine-tuning/pre-training on our dataset. ### Languages English ## Dataset Structure ### Data Instances Each task is represented as a jsonline file and consists of several few-shot examples. Each example is a dictionary containing a field 'task', which identifies the task, followed by an 'input', 'options', and 'output' field. 
The 'input' field contains several column elements of the same row in the table, while the 'output' field is a target which represents an individual column of the same row. Each task contains several such examples which can be concatenated as a few-shot task. In the case of multiple choice classification, the 'options' field contains the possible classes that a model needs to choose from. There are also additional meta-data fields such as 'pageTitle', 'title', 'outputColName', 'url', 'wdcFile'. ### Data Fields 'task': task identifier 'input': column elements of a specific row in the table. 'options': for multiple choice classification, it provides the options to choose from. 'output': target column element of the same row as input. 'pageTitle': the title of the page containing the table. 'outputColName': output column name 'url': url to the website containing the table 'wdcFile': WDC Web Table Corpus file ### Data Splits The UnpredicTable datasets do not come with additional data splits. ## Dataset Creation ### Curation Rationale Few-shot training on multi-task datasets has been demonstrated to improve language models' few-shot learning (FSL) performance on new tasks, but it is unclear which training tasks lead to effective downstream task adaptation. Few-shot learning datasets are typically produced with expensive human curation, limiting the scale and diversity of the training tasks available to study. As an alternative source of few-shot data, we automatically extract 413,299 tasks from diverse internet tables. We provide this as a research resource to investigate the relationship between training data and few-shot learning. ### Source Data #### Initial Data Collection and Normalization We use internet tables from the English-language Relational Subset of the WDC Web Table Corpus 2015 (WTC). The WTC dataset tables were extracted from the July 2015 Common Crawl web corpus (http://webdatacommons.org/webtables/2015/EnglishStatistics.html). The dataset contains 50,820,165 tables from 323,160 web domains. We then convert the tables into few-shot learning tasks. Please see our publication for more details on the data collection and conversion pipeline. #### Who are the source language producers? The dataset is extracted from [WDC Web Table Corpora](http://webdatacommons.org/webtables/). ### Personal and Sensitive Information The data was extracted from [WDC Web Table Corpora](http://webdatacommons.org/webtables/), which in turn extracted tables from the [Common Crawl](https://commoncrawl.org/). We did not filter the data in any way. Thus any user identities or otherwise sensitive information (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history, etc.) might be contained in our dataset. ## Considerations for Using the Data ### Social Impact of Dataset This dataset is intended for use as a research resource to investigate the relationship between training data and few-shot learning. As such, it contains high- and low-quality data, as well as diverse content that may be untruthful or inappropriate. Without careful investigation, it should not be used for training models that will be deployed for use in decision-critical or user-facing situations. 
### Discussion of Biases Since our dataset contains tables that are scraped from the web, it will also contain many toxic, racist, sexist, and otherwise harmful biases and texts. We have not run any analysis on the biases prevalent in our datasets. Neither have we explicitly filtered the content. This implies that a model trained on our dataset may potentially reflect harmful biases and toxic text that exist in our dataset. ### Other Known Limitations No additional known limitations. ## Additional Information ### Licensing Information Apache 2.0
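As a small usage addendum, the sketch below loads the 5k subset from the Hub ID linked above and concatenates the examples of a single task into one few-shot prompt, following the 'input'/'output' format described under Data Instances. The single `train` split and the prompt template are assumptions on our part, not part of the official card.

```python
from datasets import load_dataset

# Hub ID taken from the links above; a single "train" split is assumed.
ds = load_dataset("unpredictable/unpredictable_5k", split="train")

# Collect all examples belonging to one task.
task_id = ds[0]["task"]
examples = [row for row in ds if row["task"] == task_id]

# Use all but the last example as demonstrations; leave the last one
# for the model to complete.
demos = [f"Input: {ex['input']}\nOutput: {ex['output']}" for ex in examples[:-1]]
query = f"Input: {examples[-1]['input']}\nOutput:"
print("\n\n".join(demos + [query]))
```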
true
# Dataset Card for "UnpredicTable-unique" - Dataset of Few-shot Tasks from Tables ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-instances) - [Data Splits](#data-instances) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) ## Dataset Description - **Repository:** https://github.com/AnonCodeShare/few-shot-adaptation - **Paper:** Few-shot Adaptation Works with UnpredicTable Data ### Dataset Summary The UnpredicTable dataset consists of web tables formatted as few-shot tasks for fine-tuning language models to improve their few-shot performance. There are several dataset versions available: * [UnpredicTable-full](https://huggingface.co/datasets/unpredictable/unpredictable_full): Starting from the initial WTC corpus of 50M tables, we apply our tables-to-tasks procedure to produce our resulting dataset, [UnpredicTable-full](https://huggingface.co/datasets/unpredictable/unpredictable_full), which comprises 413,299 tasks from 23,744 unique websites. * [UnpredicTable-unique](https://huggingface.co/datasets/unpredictable/unpredictable_unique): This is the same as [UnpredicTable-full](https://huggingface.co/datasets/unpredictable/unpredictable_full) but filtered to have a maximum of one task per website. [UnpredicTable-unique](https://huggingface.co/datasets/unpredictable/unpredictable_unique) contains exactly 23,744 tasks from 23,744 websites. * [UnpredicTable-5k](https://huggingface.co/datasets/unpredictable/unpredictable_5k): This dataset contains 5k random tables from the full dataset. * UnpredicTable data subsets based on the website of origin: * [UnpredicTable-support-google-com](https://huggingface.co/datasets/unpredictable/unpredictable_support-google-com) ### Supported Tasks and Leaderboards Since the tables come from the web, the distribution of tasks and topics is very broad. The shape of our dataset is very wide, i.e., we have 1000's of tasks, while each task has only a few examples, compared to most current NLP datasets which are very deep, i.e., 10s of tasks with many examples. This implies that our dataset covers a broad range of potential tasks, e.g., multiple-choice, question-answering, table-question-answering, text-classification, etc. The intended use of this dataset is to improve few-shot performance by fine-tuning/pre-training on our dataset. ### Languages English ## Dataset Structure ### Data Instances Each task is represented as a jsonline file and consists of several few-shot examples. Each example is a dictionary containing a field 'task', which identifies the task, followed by an 'input', 'options', and 'output' field. 
The 'input' field contains several column elements of the same row in the table, while the 'output' field is a target which represents an individual column of the same row. Each task contains several such examples which can be concatenated as a few-shot task. In the case of multiple choice classification, the 'options' field contains the possible classes that a model needs to choose from. There are also additional meta-data fields such as 'pageTitle', 'title', 'outputColName', 'url', 'wdcFile'. ### Data Fields 'task': task identifier 'input': column elements of a specific row in the table. 'options': for multiple choice classification, it provides the options to choose from. 'output': target column element of the same row as input. 'pageTitle': the title of the page containing the table. 'outputColName': output column name 'url': url to the website containing the table 'wdcFile': WDC Web Table Corpus file ### Data Splits The UnpredicTable datasets do not come with additional data splits. ## Dataset Creation ### Curation Rationale Few-shot training on multi-task datasets has been demonstrated to improve language models' few-shot learning (FSL) performance on new tasks, but it is unclear which training tasks lead to effective downstream task adaptation. Few-shot learning datasets are typically produced with expensive human curation, limiting the scale and diversity of the training tasks available to study. As an alternative source of few-shot data, we automatically extract 413,299 tasks from diverse internet tables. We provide this as a research resource to investigate the relationship between training data and few-shot learning. ### Source Data #### Initial Data Collection and Normalization We use internet tables from the English-language Relational Subset of the WDC Web Table Corpus 2015 (WTC). The WTC dataset tables were extracted from the July 2015 Common Crawl web corpus (http://webdatacommons.org/webtables/2015/EnglishStatistics.html). The dataset contains 50,820,165 tables from 323,160 web domains. We then convert the tables into few-shot learning tasks. Please see our publication for more details on the data collection and conversion pipeline. #### Who are the source language producers? The dataset is extracted from [WDC Web Table Corpora](http://webdatacommons.org/webtables/). ### Personal and Sensitive Information The data was extracted from [WDC Web Table Corpora](http://webdatacommons.org/webtables/), which in turn extracted tables from the [Common Crawl](https://commoncrawl.org/). We did not filter the data in any way. Thus any user identities or otherwise sensitive information (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history, etc.) might be contained in our dataset. ## Considerations for Using the Data ### Social Impact of Dataset This dataset is intended for use as a research resource to investigate the relationship between training data and few-shot learning. As such, it contains high- and low-quality data, as well as diverse content that may be untruthful or inappropriate. Without careful investigation, it should not be used for training models that will be deployed for use in decision-critical or user-facing situations. 
### Discussion of Biases Since our dataset contains tables that are scraped from the web, it will also contain many toxic, racist, sexist, and otherwise harmful biases and texts. We have not run any analysis on the biases prevalent in our datasets. Neither have we explicitly filtered the content. This implies that a model trained on our dataset may potentially reflect harmful biases and toxic text that exist in our dataset. ### Other Known Limitations No additional known limitations. ## Additional Information ### Licensing Information Apache 2.0
false
KoPI (Korpus Perayapan Indonesia) is an Indonesian general corpus for sequence language modelling.

Subsets of the KoPI corpora: KoPI-CC + KoPI-CC-NEWS + KoPI-Mc4 + KoPI-Wiki + KoPI-Leipzig + KoPI-Paper
false
# Dataset Card for ScandiQA ## Dataset Description - **Repository:** <https://github.com/alexandrainst/scandi-qa> - **Point of Contact:** [Dan Saattrup Nielsen](mailto:dan.nielsen@alexandra.dk) - **Size of downloaded dataset files:** 69 MB - **Size of the generated dataset:** 67 MB - **Total amount of disk used:** 136 MB ### Dataset Summary ScandiQA is a dataset of questions and answers in the Danish, Norwegian, and Swedish languages. All samples come from the Natural Questions (NQ) dataset, which is a large question answering dataset from Google searches. The Scandinavian questions and answers come from the MKQA dataset, where 10,000 NQ samples were manually translated into, among others, Danish, Norwegian, and Swedish. However, this did not include a translated context, hindering the training of extractive question answering models. We merged the NQ dataset with the MKQA dataset, and extracted contexts as either "long answers" from the NQ dataset, being the paragraph in which the answer was found, or otherwise we extract the context by locating the paragraphs which have the largest cosine similarity to the question, and which contains the desired answer. Further, many answers in the MKQA dataset were "language normalised": for instance, all date answers were converted to the format "YYYY-MM-DD", meaning that in most cases these answers are not appearing in any paragraphs. We solve this by extending the MKQA answers with plausible "answer candidates", being slight perturbations or translations of the answer. With the contexts extracted, we translated these to Danish, Swedish and Norwegian using the [DeepL translation service](https://www.deepl.com/pro-api?cta=header-pro-api) for Danish and Swedish, and the [Google Translation service](https://cloud.google.com/translate/docs/reference/rest/) for Norwegian. After translation we ensured that the Scandinavian answers do indeed occur in the translated contexts. As we are filtering the MKQA samples at both the "merging stage" and the "translation stage", we are not able to fully convert the 10,000 samples to the Scandinavian languages, and instead get roughly 8,000 samples per language. These have further been split into a training, validation and test split, with the latter two containing roughly 750 samples. The splits have been created in such a way that the proportion of samples without an answer is roughly the same in each split. ### Supported Tasks and Leaderboards Training machine learning models for extractive question answering is the intended task for this dataset. No leaderboard is active at this point. ### Languages The dataset is available in Danish (`da`), Swedish (`sv`) and Norwegian (`no`). ## Dataset Structure ### Data Instances - **Size of downloaded dataset files:** 69 MB - **Size of the generated dataset:** 67 MB - **Total amount of disk used:** 136 MB An example from the `train` split of the `da` subset looks as follows. ``` { 'example_id': 123, 'question': 'Er dette en test?', 'answer': 'Dette er en test', 'answer_start': 0, 'context': 'Dette er en testkontekst.', 'answer_en': 'This is a test', 'answer_start_en': 0, 'context_en': "This is a test context.", 'title_en': 'Train test' } ``` ### Data Fields The data fields are the same among all splits. - `example_id`: an `int64` feature. - `question`: a `string` feature. - `answer`: a `string` feature. - `answer_start`: an `int64` feature. - `context`: a `string` feature. - `answer_en`: a `string` feature. - `answer_start_en`: an `int64` feature. 
- `context_en`: a `string` feature.
- `title_en`: a `string` feature.

### Data Splits

| name | train | validation | test |
|----------|------:|-----------:|-----:|
| da | 6311 | 749 | 750 |
| sv | 6299 | 750 | 749 |
| no | 6314 | 749 | 750 |

## Dataset Creation

### Curation Rationale

The Scandinavian languages do not have any gold standard question answering dataset. ScandiQA is not quite a gold standard dataset either, but since both the questions and answers are manually translated, it is a solid silver standard dataset.

### Source Data

The original data was collected from the [MKQA](https://github.com/apple/ml-mkqa/) and [Natural Questions](https://ai.google.com/research/NaturalQuestions) datasets from Apple and Google, respectively.

## Additional Information

### Dataset Curators

[Dan Saattrup Nielsen](https://saattrupdan.github.io/) from [The Alexandra Institute](https://alexandra.dk/) curated this dataset.

### Licensing Information

The dataset is licensed under the [CC BY-SA 4.0 license](https://creativecommons.org/licenses/by-sa/4.0/).
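For readers who want to try the intended extractive question answering setup, a minimal sketch is shown below. The Hugging Face Hub repository ID is an assumption (this card does not state it), the `da` configuration name follows the subset naming above, and samples without an answer are assumed to carry an empty answer string.

```python
from datasets import load_dataset

# Assumed Hub ID and configuration name -- replace with the actual values.
ds = load_dataset("alexandrainst/scandiqa", "da", split="train")

def to_squad_format(example):
    """Map a ScandiQA row onto the SQuAD-style dict expected by most QA trainers."""
    has_answer = bool(example["answer"])
    return {
        "question": example["question"],
        "context": example["context"],
        "answers": {
            "text": [example["answer"]] if has_answer else [],
            "answer_start": [example["answer_start"]] if has_answer else [],
        },
    }

squad_style = ds.map(to_squad_format)
print(squad_style[0]["question"], squad_style[0]["answers"])
```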
false
# Dataset Card for alchemy ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [External Use](#external-use) - [PyGeometric](#pygeometric) - [Dataset Structure](#dataset-structure) - [Data Properties](#data-properties) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Additional Information](#additional-information) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **[Homepage](https://alchemy.tencent.com/)** - **Paper:**: (see citation) - **Leaderboard:**: [Leaderboard](https://alchemy.tencent.com/) ### Dataset Summary The `alchemy` dataset is a molecular dataset, called Alchemy, which lists 12 quantum mechanical properties of 130,000+ organic molecules comprising up to 12 heavy atoms (C, N, O, S, F and Cl), sampled from the GDBMedChem database. ### Supported Tasks and Leaderboards `alchemy` should be used for organic quantum molecular property prediction, a regression task on 12 properties. The score used is MAE. ## External Use ### PyGeometric To load in PyGeometric, do the following: ```python from datasets import load_dataset from torch_geometric.data import Data from torch_geometric.loader import DataLoader dataset_hf = load_dataset("graphs-datasets/<mydataset>") # For the train set (replace by valid or test as needed) dataset_pg_list = [Data(graph) for graph in dataset_hf["train"]] dataset_pg = DataLoader(dataset_pg_list) ``` ## Dataset Structure ### Data Properties | property | value | |---|---| | scale | big | | #graphs | 202578 | | average #nodes | 10.101387606810183 | | average #edges | 20.877326870011206 | ### Data Fields Each row of a given file is a graph, with: - `node_feat` (list: #nodes x #node-features): nodes - `edge_index` (list: 2 x #edges): pairs of nodes constituting edges - `edge_attr` (list: #edges x #edge-features): for the aforementioned edges, contains their features - `y` (list: 1 x #labels): contains the number of labels available to predict (here 1, equal to zero or one) - `num_nodes` (int): number of nodes of the graph ### Data Splits This data is not split, and should be used with cross validation. It comes from the PyGeometric version of the dataset. ## Additional Information ### Licensing Information The dataset has been released under license mit. ### Citation Information ``` @inproceedings{Morris+2020, title={TUDataset: A collection of benchmark datasets for learning with graphs}, author={Christopher Morris and Nils M. Kriege and Franka Bause and Kristian Kersting and Petra Mutzel and Marion Neumann}, booktitle={ICML 2020 Workshop on Graph Representation Learning and Beyond (GRL+ 2020)}, archivePrefix={arXiv}, eprint={2007.08663}, url={www.graphlearning.io}, year={2020} } ``` ``` @article{DBLP:journals/corr/abs-1906-09427, author = {Guangyong Chen and Pengfei Chen and Chang{-}Yu Hsieh and Chee{-}Kong Lee and Benben Liao and Renjie Liao and Weiwen Liu and Jiezhong Qiu and Qiming Sun and Jie Tang and Richard S. 
Zemel and Shengyu Zhang}, title = {Alchemy: {A} Quantum Chemistry Dataset for Benchmarking {AI} Models}, journal = {CoRR}, volume = {abs/1906.09427}, year = {2019}, url = {http://arxiv.org/abs/1906.09427}, eprinttype = {arXiv}, eprint = {1906.09427}, timestamp = {Mon, 11 Nov 2019 12:55:11 +0100}, biburl = {https://dblp.org/rec/journals/corr/abs-1906-09427.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ```
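As a supplement to the loading snippet above, the sketch below shows a more explicit conversion of one row into a PyTorch Geometric `Data` object, turning each field listed under Data Fields into a tensor. The concrete Hub ID, the split name, and the dtype choices are assumptions, not part of the official card.

```python
import torch
from datasets import load_dataset
from torch_geometric.data import Data
from torch_geometric.loader import DataLoader

# Assumed Hub ID and split name, following the generic snippet above.
dataset_hf = load_dataset("graphs-datasets/alchemy")

def to_pyg(graph):
    """Convert one row (see Data Fields) into a PyG Data object."""
    return Data(
        x=torch.tensor(graph["node_feat"], dtype=torch.float),
        edge_index=torch.tensor(graph["edge_index"], dtype=torch.long),  # shape [2, #edges]
        edge_attr=torch.tensor(graph["edge_attr"], dtype=torch.float),
        y=torch.tensor(graph["y"], dtype=torch.float),
        num_nodes=graph["num_nodes"],
    )

dataset_pg = DataLoader([to_pyg(g) for g in dataset_hf["train"]], batch_size=32)
```

The same pattern should apply to the other graph datasets in this collection (aspirin, benzene, ethanol, and so on); only the repository name and split key change.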
false
# Dataset Card for aspirin ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [External Use](#external-use) - [PyGeometric](#pygeometric) - [Dataset Structure](#dataset-structure) - [Data Properties](#data-properties) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Additional Information](#additional-information) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **[Homepage](http://www.sgdml.org/#datasets)** - **Paper:**: (see citation) ### Dataset Summary The `aspirin` dataset is a molecular dynamics (MD) dataset. The total energy and force labels for each dataset were computed using the PBE+vdW-TS electronic structure method. All geometries are in Angstrom, energies and forces are given in kcal/mol and kcal/mol/A respectively. ### Supported Tasks and Leaderboards `aspirin` should be used for organic molecular property prediction, a regression task on 1 property. The score used is Mean absolute errors (in meV) for energy prediction. ## External Use ### PyGeometric To load in PyGeometric, do the following: ```python from datasets import load_dataset from torch_geometric.data import Data from torch_geometric.loader import DataLoader dataset_hf = load_dataset("graphs-datasets/<mydataset>") # For the full set dataset_pg_list = [Data(graph) for graph in dataset_hf["full"]] dataset_pg = DataLoader(dataset_pg_list) ``` ## Dataset Structure ### Data Properties | property | value | |---|---| | scale | big | | #graphs | 111762 | | average #nodes | 21.0 | | average #edges | 303.0447106824262 | ### Data Fields Each row of a given file is a graph, with: - `node_feat` (list: #nodes x #node-features): nodes - `edge_index` (list: 2 x #edges): pairs of nodes constituting edges - `edge_attr` (list: #edges x #edge-features): for the aforementioned edges, contains their features - `y` (list: #labels): contains the number of labels available to predict - `num_nodes` (int): number of nodes of the graph ### Data Splits This data is not split, and should be used with cross validation. It comes from the PyGeometric version of the dataset. ## Additional Information ### Licensing Information The dataset has been released under license unknown. ### Citation Information ``` @inproceedings{Morris+2020, title={TUDataset: A collection of benchmark datasets for learning with graphs}, author={Christopher Morris and Nils M. Kriege and Franka Bause and Kristian Kersting and Petra Mutzel and Marion Neumann}, booktitle={ICML 2020 Workshop on Graph Representation Learning and Beyond (GRL+ 2020)}, archivePrefix={arXiv}, eprint={2007.08663}, url={www.graphlearning.io}, year={2020} } ``` ``` @article{Chmiela_2017, doi = {10.1126/sciadv.1603015}, url = {https://doi.org/10.1126%2Fsciadv.1603015}, year = 2017, month = {may}, publisher = {American Association for the Advancement of Science ({AAAS})}, volume = {3}, number = {5}, author = {Stefan Chmiela and Alexandre Tkatchenko and Huziel E. Sauceda and Igor Poltavsky and Kristof T. Schütt and Klaus-Robert Müller}, title = {Machine learning of accurate energy-conserving molecular force fields}, journal = {Science Advances} } ```
false
# Dataset Card for benzene ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [External Use](#external-use) - [PyGeometric](#pygeometric) - [Dataset Structure](#dataset-structure) - [Data Properties](#data-properties) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Additional Information](#additional-information) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **[Homepage](http://www.sgdml.org/#datasets)** - **Paper:**: (see citation) ### Dataset Summary The `benzene` dataset is molecular dynamics (MD) dataset. The total energy and force labels for each dataset were computed using the PBE+vdW-TS electronic structure method. All geometries are in Angstrom, energies and forces are given in kcal/mol and kcal/mol/A respectively. ### Supported Tasks and Leaderboards `benzene` should be used for organic molecular property prediction, a regression task on 1 property. The score used is Mean absolute errors (in meV) for energy prediction. ## External Use ### PyGeometric To load in PyGeometric, do the following: ```python from datasets import load_dataset from torch_geometric.data import Data from torch_geometric.loader import DataLoader dataset_hf = load_dataset("graphs-datasets/<mydataset>") # For the train set (replace by valid or test as needed) dataset_pg_list = [Data(graph) for graph in dataset_hf["train"]] dataset_pg = DataLoader(dataset_pg_list) ``` ## Dataset Structure ### Data Properties | property | value | |---|---| | scale | big | | #graphs | 527983 | | average #nodes | 12.0 | | average #edges | 129.8848866632322 | ### Data Fields Each row of a given file is a graph, with: - `node_feat` (list: #nodes x #node-features): nodes - `edge_index` (list: 2 x #edges): pairs of nodes constituting edges - `edge_attr` (list: #edges x #edge-features): for the aforementioned edges, contains their features - `y` (list: #labels): contains the number of labels available to predict - `num_nodes` (int): number of nodes of the graph ### Data Splits This data is not split, and should be used with cross validation. It comes from the PyGeometric version of the dataset. ## Additional Information ### Licensing Information The dataset has been released under license unknown. ### Citation Information ``` @inproceedings{Morris+2020, title={TUDataset: A collection of benchmark datasets for learning with graphs}, author={Christopher Morris and Nils M. Kriege and Franka Bause and Kristian Kersting and Petra Mutzel and Marion Neumann}, booktitle={ICML 2020 Workshop on Graph Representation Learning and Beyond (GRL+ 2020)}, archivePrefix={arXiv}, eprint={2007.08663}, url={www.graphlearning.io}, year={2020} } ``` ``` @article{Chmiela_2017, doi = {10.1126/sciadv.1603015}, url = {https://doi.org/10.1126%2Fsciadv.1603015}, year = 2017, month = {may}, publisher = {American Association for the Advancement of Science ({AAAS})}, volume = {3}, number = {5}, author = {Stefan Chmiela and Alexandre Tkatchenko and Huziel E. Sauceda and Igor Poltavsky and Kristof T. Schütt and Klaus-Robert Müller}, title = {Machine learning of accurate energy-conserving molecular force fields}, journal = {Science Advances} } ```
false
# Dataset Card for ethanol ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [External Use](#external-use) - [PyGeometric](#pygeometric) - [Dataset Structure](#dataset-structure) - [Data Properties](#data-properties) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Additional Information](#additional-information) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **[Homepage](http://www.sgdml.org/#datasets)** - **Paper:**: (see citation) ### Dataset Summary The `ethanol` dataset is a molecular dynamics (MD) dataset. The total energy and force labels for each dataset were computed using the PBE+vdW-TS electronic structure method. All geometries are in Angstrom, energies and forces are given in kcal/mol and kcal/mol/A respectively. ### Supported Tasks and Leaderboards `ethanol` should be used for organic molecular property prediction, a regression task on 1 property. The score used is Mean absolute errors (in meV) for energy prediction. ## External Use ### PyGeometric To load in PyGeometric, do the following: ```python from datasets import load_dataset from torch_geometric.data import Data from torch_geometric.loader import DataLoader dataset_hf = load_dataset("graphs-datasets/<mydataset>") # For the train set (replace by valid or test as needed) dataset_pg_list = [Data(graph) for graph in dataset_hf["train"]] dataset_pg = DataLoader(dataset_pg_list) ``` ## Dataset Structure ### Data Properties | property | value | |---|---| | scale | big | | #graphs | 455092 | | average #nodes | 9.0 | | average #edges | 72.0 | ### Data Fields Each row of a given file is a graph, with: - `node_feat` (list: #nodes x #node-features): nodes - `edge_index` (list: 2 x #edges): pairs of nodes constituting edges - `edge_attr` (list: #edges x #edge-features): for the aforementioned edges, contains their features - `y` (list: #labels): contains the number of labels available to predict - `num_nodes` (int): number of nodes of the graph ### Data Splits This data is not split, and should be used with cross validation. It comes from the PyGeometric version of the dataset. ## Additional Information ### Licensing Information The dataset has been released under license unknown. ### Citation Information Please cite both papers when using these datasets in publications. ``` @inproceedings{Morris+2020, title={TUDataset: A collection of benchmark datasets for learning with graphs}, author={Christopher Morris and Nils M. Kriege and Franka Bause and Kristian Kersting and Petra Mutzel and Marion Neumann}, booktitle={ICML 2020 Workshop on Graph Representation Learning and Beyond (GRL+ 2020)}, archivePrefix={arXiv}, eprint={2007.08663}, url={www.graphlearning.io}, year={2020} } ``` ``` @article{Chmiela_2017, doi = {10.1126/sciadv.1603015}, url = {https://doi.org/10.1126%2Fsciadv.1603015}, year = 2017, month = {may}, publisher = {American Association for the Advancement of Science ({AAAS})}, volume = {3}, number = {5}, author = {Stefan Chmiela and Alexandre Tkatchenko and Huziel E. Sauceda and Igor Poltavsky and Kristof T. Schütt and Klaus-Robert Müller}, title = {Machine learning of accurate energy-conserving molecular force fields}, journal = {Science Advances} } ```
false
# Dataset Card for malonaldehyde ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [External Use](#external-use) - [PyGeometric](#pygeometric) - [Dataset Structure](#dataset-structure) - [Data Properties](#data-properties) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Additional Information](#additional-information) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **[Homepage](http://www.sgdml.org/#datasets)** - **Paper:**: (see citation) ### Dataset Summary The `malonaldehyde` dataset is a molecular dynamics (MD) dataset. The total energy and force labels for each dataset were computed using the PBE+vdW-TS electronic structure method. All geometries are in Angstrom, energies and forces are given in kcal/mol and kcal/mol/A respectively. ### Supported Tasks and Leaderboards `malonaldehyde` should be used for organic molecular property prediction, a regression task on 1 property. The score used is Mean absolute errors (in meV) for energy prediction. ## External Use ### PyGeometric To load in PyGeometric, do the following: ```python from datasets import load_dataset from torch_geometric.data import Data from torch_geometric.loader import DataLoader dataset_hf = load_dataset("graphs-datasets/<mydataset>") # For the train set (replace by valid or test as needed) dataset_pg_list = [Data(graph) for graph in dataset_hf["train"]] dataset_pg = DataLoader(dataset_pg_list) ``` ## Dataset Structure ### Data Properties | property | value | |---|---| | scale | big | | #graphs | 893237 | | average #nodes | 9.0 | | average #edges | 71.99990148202383 | ### Data Fields Each row of a given file is a graph, with: - `node_feat` (list: #nodes x #node-features): nodes - `edge_index` (list: 2 x #edges): pairs of nodes constituting edges - `edge_attr` (list: #edges x #edge-features): for the aforementioned edges, contains their features - `y` (list: #labels): contains the number of labels available to predict - `num_nodes` (int): number of nodes of the graph ### Data Splits This data is not split, and should be used with cross validation. It comes from the PyGeometric version of the dataset. ## Additional Information ### Licensing Information The dataset has been released under license unknown. ### Citation Information ``` @inproceedings{Morris+2020, title={TUDataset: A collection of benchmark datasets for learning with graphs}, author={Christopher Morris and Nils M. Kriege and Franka Bause and Kristian Kersting and Petra Mutzel and Marion Neumann}, booktitle={ICML 2020 Workshop on Graph Representation Learning and Beyond (GRL+ 2020)}, archivePrefix={arXiv}, eprint={2007.08663}, url={www.graphlearning.io}, year={2020} } ``` ``` @article{Chmiela_2017, doi = {10.1126/sciadv.1603015}, url = {https://doi.org/10.1126%2Fsciadv.1603015}, year = 2017, month = {may}, publisher = {American Association for the Advancement of Science ({AAAS})}, volume = {3}, number = {5}, author = {Stefan Chmiela and Alexandre Tkatchenko and Huziel E. Sauceda and Igor Poltavsky and Kristof T. Schütt and Klaus-Robert Müller}, title = {Machine learning of accurate energy-conserving molecular force fields}, journal = {Science Advances} } ```
true
# Dataset Card for VaccinChatNL ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) <!-- - [Curation Rationale](#curation-rationale) --> <!-- - [Source Data](#source-data) --> - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) <!-- - [Social Impact of Dataset](#social-impact-of-dataset) --> - [Discussion of Biases](#discussion-of-biases) <!-- - [Other Known Limitations](#other-known-limitations) --> - [Additional Information](#additional-information) <!-- - [Dataset Curators](#dataset-curators) --> <!-- - [Licensing Information](#licensing-information) --> - [Citation Information](#citation-information) <!-- - [Contributions](#contributions) --> ## Dataset Description <!-- - **Homepage:** - **Repository:** - **Paper:** [To be added] - **Leaderboard:** --> - **Point of Contact:** [Jeska Buhmann](mailto:jeska.buhmann@uantwerpen.be) ### Dataset Summary VaccinChatNL is a Flemish Dutch FAQ dataset on the topic of COVID-19 vaccinations in Flanders. It consists of 12,833 user questions divided over 181 answer labels, thus providing large groups of semantically equivalent paraphrases (a many-to-one mapping of user questions to answer labels). VaccinChatNL is the first Dutch many-to-one FAQ dataset of this size. ### Supported Tasks and Leaderboards - 'text-classification': the dataset can be used to train a classification model for Dutch frequently asked questions on the topic of COVID-19 vaccination in Flanders. ### Languages Dutch (Flemish): the BCP-47 code for Dutch as generally spoken in Flanders (Belgium) is nl-BE. ## Dataset Structure ### Data Instances For each instance, there is a string for the user question and a string for the label of the annotated answer. See the [CLiPS / VaccinChatNL dataset viewer](https://huggingface.co/datasets/clips/VaccinChatNL/viewer/clips--VaccinChatNL/train). ``` {"sentence1": "Waar kan ik de bijsluiters van de vaccins vinden?", "label": "faq_ask_bijsluiter"} ``` ### Data Fields - `sentence1`: a string containing the user question - `label`: a string containing the name of the intent (the answer class) ### Data Splits The VaccinChatNL dataset has 3 splits: _train_, _valid_, and _test_. Below are the statistics for the dataset. | Dataset Split | Number of Labeled User Questions in Split | | ------------- | ------------------------------------------ | | Train | 10,542 | | Validation | 1,171 | | Test | 1,170 | ## Dataset Creation <!-- ### Curation Rationale [More Information Needed] --> <!-- ### Source Data [Perhaps a link to vaccinchat.be and some of the website that were used for information] --> <!-- #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] --> ### Annotations #### Annotation process Annotation was an iterative semi-automatic process. Starting from a very limited dataset with approximately 50 question-answer pairs (_sentence1-label_ pairs) a text classification model was trained and implemented in a publicly available chatbot. 
When the chatbot was used, the predicted labels for the new questions were checked and corrected if necessary. In addition, new answers were added to the dataset. After each round of corrections, the model was retrained on the updated dataset. This iterative approach led to the final dataset containing 12,883 user questions divided over 181 answer labels.

#### Who are the annotators?

The VaccinChatNL data were annotated by members and students of [CLiPS](https://www.uantwerpen.be/en/research-groups/clips/). All annotators have a background in Computational Linguistics.

### Personal and Sensitive Information

The data are anonymized in the sense that a user question can never be traced back to a specific individual.

## Considerations for Using the Data

<!-- ### Social Impact of Dataset [More Information Needed] -->

### Discussion of Biases

This dataset contains real user questions, including a rather large section (7%) of out-of-domain questions or remarks (_label: nlu_fallback_). This class of user questions consists of incomprehensible questions as well as jokes and insulting remarks.

<!-- ### Other Known Limitations [Perhaps some information of % of exact overlap between train and test set] -->

## Additional Information

<!-- ### Dataset Curators [More Information Needed] -->

<!-- ### Licensing Information [More Information Needed] -->

### Citation Information

```
@inproceedings{buhmann-etal-2022-domain,
    title = "Domain- and Task-Adaptation for {V}accin{C}hat{NL}, a {D}utch {COVID}-19 {FAQ} Answering Corpus and Classification Model",
    author = "Buhmann, Jeska and De Bruyn, Maxime and Lotfi, Ehsan and Daelemans, Walter",
    booktitle = "Proceedings of the 29th International Conference on Computational Linguistics",
    month = oct,
    year = "2022",
    address = "Gyeongju, Republic of Korea",
    publisher = "International Committee on Computational Linguistics",
    url = "https://aclanthology.org/2022.coling-1.312",
    pages = "3539--3549"
}
```

<!-- ### Contributions Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset. -->
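For convenience, a minimal loading sketch is given below, using the Hub ID that appears in the dataset viewer link above. It assumes that `label` is stored as a plain string, as in the example instance, and encodes it into integer ids as a typical preprocessing step for text classification.

```python
from datasets import load_dataset

# Hub ID taken from the dataset viewer link in this card.
ds = load_dataset("clips/VaccinChatNL")
print(ds["train"][0])  # {'sentence1': ..., 'label': ...}

# Map the 181 string labels onto integer ids for a classifier head.
labels = sorted({lab for split in ds.values() for lab in split["label"]})
label2id = {label: i for i, label in enumerate(labels)}
ds = ds.map(lambda ex: {"label_id": label2id[ex["label"]]})
```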
false
# Dataset Card for naphthalene ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [External Use](#external-use) - [PyGeometric](#pygeometric) - [Dataset Structure](#dataset-structure) - [Data Properties](#data-properties) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Additional Information](#additional-information) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **[Homepage](http://www.sgdml.org/#datasets)** - **Paper:**: (see citation) ### Dataset Summary The `naphthalene` dataset is a molecular dynamics (MD) dataset. The total energy and force labels for each dataset were computed using the PBE+vdW-TS electronic structure method. All geometries are in Angstrom, energies and forces are given in kcal/mol and kcal/mol/A respectively. ### Supported Tasks and Leaderboards `naphthalene` should be used for organic molecular property prediction, a regression task on 1 property. The score used is Mean absolute errors (in meV) for energy prediction. ## External Use ### PyGeometric To load in PyGeometric, do the following: ```python from datasets import load_dataset from torch_geometric.data import Data from torch_geometric.loader import DataLoader dataset_hf = load_dataset("graphs-datasets/<mydataset>") # For the train set (replace by valid or test as needed) dataset_pg_list = [Data(graph) for graph in dataset_hf["train"]] dataset_pg = DataLoader(dataset_pg_list) ``` ## Dataset Structure ### Data Properties | property | value | |---|---| | scale | big | | #graphs | 226255 | | average #nodes | 18.0 | | average #edges | 254.73246234354005 | ### Data Fields Each row of a given file is a graph, with: - `node_feat` (list: #nodes x #node-features): nodes - `edge_index` (list: 2 x #edges): pairs of nodes constituting edges - `edge_attr` (list: #edges x #edge-features): for the aforementioned edges, contains their features - `y` (list: #labels): contains the number of labels available to predict - `num_nodes` (int): number of nodes of the graph ### Data Splits This data is not split, and should be used with cross validation. It comes from the PyGeometric version of the dataset. ## Additional Information ### Licensing Information The dataset has been released under license unknown. ### Citation Information ``` @inproceedings{Morris+2020, title={TUDataset: A collection of benchmark datasets for learning with graphs}, author={Christopher Morris and Nils M. Kriege and Franka Bause and Kristian Kersting and Petra Mutzel and Marion Neumann}, booktitle={ICML 2020 Workshop on Graph Representation Learning and Beyond (GRL+ 2020)}, archivePrefix={arXiv}, eprint={2007.08663}, url={www.graphlearning.io}, year={2020} } ``` ``` @article{Chmiela_2017, doi = {10.1126/sciadv.1603015}, url = {https://doi.org/10.1126%2Fsciadv.1603015}, year = 2017, month = {may}, publisher = {American Association for the Advancement of Science ({AAAS})}, volume = {3}, number = {5}, author = {Stefan Chmiela and Alexandre Tkatchenko and Huziel E. Sauceda and Igor Poltavsky and Kristof T. Schütt and Klaus-Robert Müller}, title = {Machine learning of accurate energy-conserving molecular force fields}, journal = {Science Advances} } ```
false
# Dataset Card for salicylic_acid ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [External Use](#external-use) - [PyGeometric](#pygeometric) - [Dataset Structure](#dataset-structure) - [Data Properties](#data-properties) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Additional Information](#additional-information) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **[Homepage](http://www.sgdml.org/#datasets)** - **Paper:**: (see citation) ### Dataset Summary The `salicylic_acid` dataset is a molecular dynamics (MD) dataset. The total energy and force labels for each dataset were computed using the PBE+vdW-TS electronic structure method. All geometries are in Angstrom, energies and forces are given in kcal/mol and kcal/mol/A respectively. ### Supported Tasks and Leaderboards `salicylic_acid` should be used for organic molecular property prediction, a regression task on 1 property. The score used is Mean absolute errors (in meV) for energy prediction. ## External Use ### PyGeometric To load in PyGeometric, do the following: ```python from datasets import load_dataset from torch_geometric.data import Data from torch_geometric.loader import DataLoader dataset_hf = load_dataset("graphs-datasets/<mydataset>") # For the train set (replace by valid or test as needed) dataset_pg_list = [Data(graph) for graph in dataset_hf["train"]] dataset_pg = DataLoader(dataset_pg_list) ``` ## Dataset Structure ### Data Properties | property | value | |---|---| | scale | big | | #graphs | 220231 | | average #nodes | 16.0 | | average #edges | 208.2681717461586 | ### Data Fields Each row of a given file is a graph, with: - `node_feat` (list: #nodes x #node-features): nodes - `edge_index` (list: 2 x #edges): pairs of nodes constituting edges - `edge_attr` (list: #edges x #edge-features): for the aforementioned edges, contains their features - `y` (list: #labels): contains the number of labels available to predict - `num_nodes` (int): number of nodes of the graph ### Data Splits This data is not split, and should be used with cross validation. It comes from the PyGeometric version of the dataset. ## Additional Information ### Licensing Information The dataset has been released under license unknown. ### Citation Information ``` @inproceedings{Morris+2020, title={TUDataset: A collection of benchmark datasets for learning with graphs}, author={Christopher Morris and Nils M. Kriege and Franka Bause and Kristian Kersting and Petra Mutzel and Marion Neumann}, booktitle={ICML 2020 Workshop on Graph Representation Learning and Beyond (GRL+ 2020)}, archivePrefix={arXiv}, eprint={2007.08663}, url={www.graphlearning.io}, year={2020} } ``` ``` @article{Chmiela_2017, doi = {10.1126/sciadv.1603015}, url = {https://doi.org/10.1126%2Fsciadv.1603015}, year = 2017, month = {may}, publisher = {American Association for the Advancement of Science ({AAAS})}, volume = {3}, number = {5}, author = {Stefan Chmiela and Alexandre Tkatchenko and Huziel E. Sauceda and Igor Poltavsky and Kristof T. Schütt and Klaus-Robert Müller}, title = {Machine learning of accurate energy-conserving molecular force fields}, journal = {Science Advances} } ```
false
# Dataset Card for toluene ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [External Use](#external-use) - [PyGeometric](#pygeometric) - [Dataset Structure](#dataset-structure) - [Data Properties](#data-properties) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Additional Information](#additional-information) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **[Homepage](http://www.sgdml.org/#datasets)** - **Paper:**: (see citation) ### Dataset Summary The `toluene` dataset is a molecular dynamics (MD) dataset. The total energy and force labels for each dataset were computed using the PBE+vdW-TS electronic structure method. All geometries are in Angstrom, energies and forces are given in kcal/mol and kcal/mol/A respectively. ### Supported Tasks and Leaderboards `toluene` should be used for organic molecular property prediction, a regression task on 1 property. The score used is Mean absolute errors (in meV) for energy prediction. ## External Use ### PyGeometric To load in PyGeometric, do the following: ```python from datasets import load_dataset from torch_geometric.data import Data from torch_geometric.loader import DataLoader dataset_hf = load_dataset("graphs-datasets/<mydataset>") # For the train set (replace by valid or test as needed) dataset_pg_list = [Data(graph) for graph in dataset_hf["train"]] dataset_pg = DataLoader(dataset_pg_list) ``` ## Dataset Structure ### Data Properties | property | value | |---|---| | scale | big | | #graphs | 342790 | | average #nodes | 15.0 | | average #edges | 192.30698588936116 | ### Data Fields Each row of a given file is a graph, with: - `node_feat` (list: #nodes x #node-features): nodes - `edge_index` (list: 2 x #edges): pairs of nodes constituting edges - `edge_attr` (list: #edges x #edge-features): for the aforementioned edges, contains their features - `y` (list: #labels): contains the number of labels available to predict - `num_nodes` (int): number of nodes of the graph ### Data Splits This data is not split, and should be used with cross validation. It comes from the PyGeometric version of the dataset. ## Additional Information ### Licensing Information The dataset has been released under license unknown. ### Citation Information ``` @inproceedings{Morris+2020, title={TUDataset: A collection of benchmark datasets for learning with graphs}, author={Christopher Morris and Nils M. Kriege and Franka Bause and Kristian Kersting and Petra Mutzel and Marion Neumann}, booktitle={ICML 2020 Workshop on Graph Representation Learning and Beyond (GRL+ 2020)}, archivePrefix={arXiv}, eprint={2007.08663}, url={www.graphlearning.io}, year={2020} } ``` ``` @article{Chmiela_2017, doi = {10.1126/sciadv.1603015}, url = {https://doi.org/10.1126%2Fsciadv.1603015}, year = 2017, month = {may}, publisher = {American Association for the Advancement of Science ({AAAS})}, volume = {3}, number = {5}, author = {Stefan Chmiela and Alexandre Tkatchenko and Huziel E. Sauceda and Igor Poltavsky and Kristof T. Schütt and Klaus-Robert Müller}, title = {Machine learning of accurate energy-conserving molecular force fields}, journal = {Science Advances} } ```
false
# Dataset Card for uracil ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [External Use](#external-use) - [PyGeometric](#pygeometric) - [Dataset Structure](#dataset-structure) - [Data Properties](#data-properties) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Additional Information](#additional-information) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **[Homepage](http://www.sgdml.org/#datasets)** - **Paper:**: (see citation) ### Dataset Summary The `uracil` dataset is a molecular dynamics (MD) dataset. The total energy and force labels for each dataset were computed using the PBE+vdW-TS electronic structure method. All geometries are in Angstrom, energies and forces are given in kcal/mol and kcal/mol/A respectively. ### Supported Tasks and Leaderboards `uracil` should be used for organic molecular property prediction, a regression task on 1 property. The score used is Mean absolute errors (in meV) for energy prediction. ## External Use ### PyGeometric To load in PyGeometric, do the following: ```python from datasets import load_dataset from torch_geometric.data import Data from torch_geometric.loader import DataLoader dataset_hf = load_dataset("graphs-datasets/<mydataset>") # For the train set (replace by valid or test as needed) dataset_pg_list = [Data(graph) for graph in dataset_hf["train"]] dataset_pg = DataLoader(dataset_pg_list) ``` ## Dataset Structure ### Data Properties | property | value | |---|---| | scale | big | | #graphs | 133769 | | average #nodes | 12.0 | | average #edges | 128.88676085818943 | ### Data Fields Each row of a given file is a graph, with: - `node_feat` (list: #nodes x #node-features): nodes - `edge_index` (list: 2 x #edges): pairs of nodes constituting edges - `edge_attr` (list: #edges x #edge-features): for the aforementioned edges, contains their features - `y` (list: #labels): contains the number of labels available to predict - `num_nodes` (int): number of nodes of the graph ### Data Splits This data is not split, and should be used with cross validation. It comes from the PyGeometric version of the dataset. ## Additional Information ### Licensing Information The dataset has been released under license unknown. ### Citation Information ``` @inproceedings{Morris+2020, title={TUDataset: A collection of benchmark datasets for learning with graphs}, author={Christopher Morris and Nils M. Kriege and Franka Bause and Kristian Kersting and Petra Mutzel and Marion Neumann}, booktitle={ICML 2020 Workshop on Graph Representation Learning and Beyond (GRL+ 2020)}, archivePrefix={arXiv}, eprint={2007.08663}, url={www.graphlearning.io}, year={2020} } ``` ``` @article{Chmiela_2017, doi = {10.1126/sciadv.1603015}, url = {https://doi.org/10.1126%2Fsciadv.1603015}, year = 2017, month = {may}, publisher = {American Association for the Advancement of Science ({AAAS})}, volume = {3}, number = {5}, author = {Stefan Chmiela and Alexandre Tkatchenko and Huziel E. Sauceda and Igor Poltavsky and Kristof T. Schütt and Klaus-Robert Müller}, title = {Machine learning of accurate energy-conserving molecular force fields}, journal = {Science Advances} } ```
false
# AutoTrain Dataset for project: dog-classifiers

## Dataset Description

This dataset has been automatically processed by AutoTrain for project dog-classifiers.

### Languages

The BCP-47 code for the dataset's language is unk.

## Dataset Structure

### Data Instances

A sample from this dataset looks as follows:

```json
[
  {
    "image": "<474x592 RGB PIL image>",
    "target": 1
  },
  {
    "image": "<474x296 RGB PIL image>",
    "target": 1
  }
]
```

### Dataset Fields

The dataset has the following fields (also called "features"):

```json
{
  "image": "Image(decode=True, id=None)",
  "target": "ClassLabel(num_classes=5, names=['akita inu', 'corgi', 'leonberger', 'samoyed', 'shiba inu'], id=None)"
}
```

### Dataset Splits

This dataset is split into a train and validation split. The split sizes are as follows:

| Split name | Num samples |
| ------------ | ------------------- |
| train | 598 |
| valid | 150 |
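Because `target` is a `ClassLabel`, the integer values can be decoded back into the class names listed above. A small sketch follows; the repository ID is a placeholder, since the card does not state where the processed data is hosted.

```python
from datasets import load_dataset

# Placeholder ID -- replace with the actual AutoTrain data repository.
ds = load_dataset("<user>/autotrain-data-dog-classifiers")

sample = ds["train"][0]
class_names = ds["train"].features["target"].names  # ['akita inu', 'corgi', ...]
print(class_names[sample["target"]])                # human-readable breed label
print(sample["image"].size)                         # decoded PIL image
```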
false
### dataset description

We downloaded the ZINC dataset from [here](https://zinc15.docking.org/) and canonicalized it. We used the following function to canonicalize the data and removed the SMILES strings that cannot be read by RDKit.

```python
from rdkit import Chem

def canonicalize(mol):
    # Round-trip through RDKit to obtain the canonical (isomeric) SMILES.
    mol = Chem.MolToSmiles(Chem.MolFromSmiles(mol), True)
    return mol
```

We randomly split the preprocessed data into train and validation. The ratio is 9 : 1.
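The split itself is not published as code; one way to reproduce a split of this kind is sketched below. The fixed seed and the tiny example list are assumptions on our part, so the resulting partition will not necessarily match the released one.

```python
import random

# A few canonicalized SMILES strings as stand-ins for the full preprocessed list.
smiles_list = ["CCO", "c1ccccc1", "CC(=O)Oc1ccccc1C(=O)O", "CCN(CC)CC"]

random.seed(42)  # assumed seed; the original split may have used a different one
random.shuffle(smiles_list)

cut = int(0.9 * len(smiles_list))
train, valid = smiles_list[:cut], smiles_list[cut:]
print(len(train), len(valid))
```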
false
# Dataset Card for Yandex_Jobs ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Contributions](#contributions) ## Dataset Description ### Dataset Summary This is a dataset of more than 600 IT vacancies in Russian from parsing telegram channel https://t.me/ya_jobs. All the texts are perfectly structured, no missing values. ### Supported Tasks and Leaderboards `text-generation` with the 'Raw text column'. `summarization` as for getting from all the info the header. `multiple-choice` as for the hashtags (to choose multiple from all available in the dataset) ### Languages The text in the dataset is in only in Russian. The associated BCP-47 code is `ru`. ## Dataset Structure ### Data Instances The data is parsed from a vacancy of Russian IT company [Yandex](https://ya.ru/). An example from the set looks as follows: ``` {'Header': 'Разработчик интерфейсов в группу разработки спецпроектов', 'Emoji': '🎳', 'Description': 'Конструктор лендингов — это инструмент Яндекса, который позволяет пользователям создавать лендинги и турбо-лендинги для Яндекс.Директа. Турбо — режим ускоренной загрузки страниц для показа на мобильных. У нас современный стек, смелые планы и высокая динамика.\nМы ищем опытного и открытого новому фронтенд-разработчика.', 'Requirements': '• отлично знаете JavaScript • разрабатывали на Node.js, применяли фреймворк Express • умеете создавать веб-приложения на React + Redux • знаете HTML и CSS, особенности их отображения в браузерах', 'Tasks': '• разрабатывать интерфейсы', 'Pluses': '• писали интеграционные, модульные, функциональные или браузерные тесты • умеете разворачивать и администрировать веб-сервисы: собирать Docker-образы, настраивать мониторинги, выкладывать в облачные системы, отлаживать в продакшене • работали с реляционными БД PostgreSQL', 'Hashtags': '#фронтенд #турбо #JS', 'Link': 'https://ya.cc/t/t7E3UsmVSKs6L', 'Raw text': 'Разработчик интерфейсов в группу разработки спецпроектов🎳 Конструктор лендингов — это инструмент Яндекса, который позволяет пользователям создавать лендинги и турбо-лендинги для Яндекс.Директа. Турбо — режим ускоренной загрузки страниц для показа на мобильных. У нас современный стек, смелые планы и высокая динамика. Мы ищем опытного и открытого новому фронтенд-разработчика. 
Мы ждем, что вы: • отлично знаете JavaScript • разрабатывали на Node.js, применяли фреймворк Express • умеете создавать веб-приложения на React + Redux • знаете HTML и CSS, особенности их отображения в браузерах Что нужно делать: • разрабатывать интерфейсы Будет плюсом, если вы: • писали интеграционные, модульные, функциональные или браузерные тесты • умеете разворачивать и администрировать веб-сервисы: собирать Docker-образы, настраивать мониторинги, выкладывать в облачные системы, отлаживать в продакшене • работали с реляционными БД PostgreSQL https://ya.cc/t/t7E3UsmVSKs6L #фронтенд #турбо #JS'
}
```

### Data Fields

- `Header`: A string with a position title (str)
- `Emoji`: Emoji that is used at the end of the position title (usually associated with the position) (str)
- `Description`: Short description of the vacancy (str)
- `Requirements`: A couple of required technologies/programming languages/experience (str)
- `Tasks`: Examples of the tasks of the job position (str)
- `Pluses`: A couple of great points for the applicant to have (technologies/experience/etc.) (str)
- `Hashtags`: A list of hashtags associated with the job (usually programming languages) (str)
- `Link`: A link to the job description (there may be more information, but it is not checked) (str)
- `Raw text`: Raw text with all the formatting from the channel. Created from the other fields. (str)

### Data Splits

There are not enough examples yet to split the data into train/test/val, in my opinion.

## Dataset Creation

The data was downloaded and parsed from the Telegram channel https://t.me/ya_jobs on 03.09.2022. All the unparsed examples and the ones missing any field were deleted (from 1600 vacancies down to only 600 without any missing fields like emojis or links).

## Considerations for Using the Data

These vacancies are from only one IT company (Yandex). This means they can be pretty specific and probably cannot be generalized to arbitrary vacancies or even arbitrary IT vacancies.

## Contributions

- **Point of Contact and Author:** [Kirill Gelvan](telegram: @kirili4ik)
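As noted under Supported Tasks, the `Raw text` / `Header` pair lends itself directly to summarization, and the hashtags can serve as multiple-choice labels. A minimal sketch is given below; the Hub repository ID is a placeholder, since the card does not state it.

```python
from datasets import load_dataset

# Placeholder ID -- replace with the actual Hub repository of this dataset.
ds = load_dataset("<user>/yandex_jobs", split="train")

# (document, summary) pairs for the summarization task described above.
pairs = [{"document": row["Raw text"], "summary": row["Header"]} for row in ds]

# Hashtags are stored as a space-separated string; collect the label inventory.
all_hashtags = sorted({tag for row in ds for tag in row["Hashtags"].split()})
print(len(pairs), len(all_hashtags))
```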
false
false
# Dataset Card for MetaQA Agents' Predictions ## Dataset Description - **Repository:** [MetaQA's Repository](https://github.com/UKPLab/MetaQA) - **Paper:** [MetaQA: Combining Expert Agents for Multi-Skill Question Answering](https://arxiv.org/abs/2112.01922) - **Point of Contact:** [Haritz Puerto](mailto:puerto@ukp.informatik.tu-darmstadt.de) ## Dataset Summary This dataset contains the answer predictions of the QA agents for the [QA datasets](https://huggingface.co/datasets/haritzpuerto/MetaQA_Datasets) used in [MetaQA paper](https://arxiv.org/abs/2112.01922). In particular, it contains the following QA agents' predictions: ### Span-Extraction Agents - Agent: Span-BERT Large (Joshi et al.,2020) trained on SQuAD. Predictions for: - SQuAD - NewsQA - HotpotQA - SearchQA - Natural Questions - TriviaQA-web - QAMR - DuoRC - DROP - Agent: Span-BERT Large (Joshi et al.,2020) trained on NewsQA. Predictions for: - SQuAD - NewsQA - HotpotQA - SearchQA - Natural Questions - TriviaQA-web - QAMR - DuoRC - DROP - Agent: Span-BERT Large (Joshi et al.,2020) trained on HotpotQA. Predictions for: - SQuAD - NewsQA - HotpotQA - SearchQA - Natural Questions - TriviaQA-web - QAMR - DuoRC - DROP - Agent: Span-BERT Large (Joshi et al.,2020) trained on SearchQA. Predictions for: - SQuAD - NewsQA - HotpotQA - SearchQA - Natural Questions - TriviaQA-web - QAMR - DuoRC - DROP - Agent: Span-BERT Large (Joshi et al.,2020) trained on Natural Questions. Predictions for: - SQuAD - NewsQA - HotpotQA - SearchQA - Natural Questions - TriviaQA-web - QAMR - DuoRC - DROP - Agent: Span-BERT Large (Joshi et al.,2020) trained on TriviaQA-web. Predictions for: - SQuAD - NewsQA - HotpotQA - SearchQA - Natural Questions - TriviaQA-web - QAMR - DuoRC - DROP - Agent: Span-BERT Large (Joshi et al.,2020) trained on QAMR. Predictions for: - SQuAD - NewsQA - HotpotQA - SearchQA - Natural Questions - TriviaQA-web - QAMR - DuoRC - DROP - Agent: Span-BERT Large (Joshi et al.,2020) trained on DuoRC. Predictions for: - SQuAD - NewsQA - HotpotQA - SearchQA - Natural Questions - TriviaQA-web - QAMR - DuoRC - DROP - Agent: Span-BERT Large (Joshi et al.,2020) trained on DROP. Predictions for: - SQuAD - NewsQA - HotpotQA - SearchQA - Natural Questions - TriviaQA-web - QAMR - DuoRC - DROP ### Multiple-Choice Agents - Agent: RoBERTa Large (Liu et al., 2019) trained on RACE. Predictions for: - RACE - Commonsense QA - BoolQ - HellaSWAG - Social IQA - Agent: RoBERTa Large (Liu et al., 2019) trained on HellaSWAG. Predictions for: - RACE - Commonsense QA - BoolQ - HellaSWAG - Social IQA - Agent: AlBERT xxlarge-v2 (Lan et al., 2020) trained on Commonsense QA. Predictions for: - RACE - Commonsense QA - BoolQ - HellaSWAG - Social IQA - Agent: BERT Large-wwm (Devlin et al., 2019) trained on BoolQ. Predictions for: - BoolQ ### Abstractive Agents - Agent: TASE (Segal et al., 2020) trained on DROP. Predictions for: - DROP - Agent: BART Large with Adapters (Pfeiffer et al., 2020) trained on NarrativeQA. Predictions for: - NarrativeQA ### Multimodal Agents - Agent: Hybrider (Chen et al., 2020) trained on HybridQA. Predictions for: - HybridQA ### Languages All the QA datasets used English and thus, the Agents's predictions are also in English. ## Dataset Structure Each agent has a folder. 
Inside, there is a folder for each dataset containing the following files:

- predict_nbest_predictions.json
- predict_predictions.json / predictions.json
- predict_results.json (for span-extraction agents)

### Structure of predict_nbest_predictions.json
```
{id: [{"start_logit": ..., "end_logit": ..., "text": ..., "probability": ... }]}
```

### Structure of predict_predictions.json
```
{id: answer_text}
```

### Data Splits
All the QA datasets have 3 splits: train, validation, and test. The splits (Question-Context pairs) are provided in https://huggingface.co/datasets/haritzpuerto/MetaQA_Datasets

## Considerations for Using the Data

### Social Impact of Dataset
The purpose of this dataset is to help develop new multi-agent models and to analyze the predictions of QA models.

### Discussion of Biases
The QA models used to create these predictions may not be perfect; they may generate false answers and contain biases. The release of these predictions may help to identify such flaws in the models.

## Additional Information

### License
The MetaQA agents' predictions dataset is released under the [Apache-2.0 License](http://www.apache.org/licenses/LICENSE-2.0).

### Citation
```
@article{Puerto2021MetaQACE,
  title={MetaQA: Combining Expert Agents for Multi-Skill Question Answering},
  author={Haritz Puerto and Gözde Gül Şahin and Iryna Gurevych},
  journal={ArXiv},
  year={2021},
  volume={abs/2112.01922}
}
```
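The prediction files are plain JSON, so they can be inspected without any special tooling. A minimal sketch for reading an n-best file and keeping the highest-probability candidate per question might look like this (the folder layout in the path is a placeholder, not part of the release):

```python
import json

# Placeholder path: point this at one of the downloaded agent/dataset folders.
nbest_path = "spanbert_squad/squad/predict_nbest_predictions.json"

with open(nbest_path, encoding="utf-8") as f:
    # {question_id: [{"start_logit": ..., "end_logit": ..., "text": ..., "probability": ...}, ...]}
    nbest = json.load(f)

# Keep the highest-probability candidate for each question.
top_answers = {
    qid: max(candidates, key=lambda c: c["probability"])["text"]
    for qid, candidates in nbest.items()
}
print(len(top_answers), "questions parsed")
```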
false
# AutoTrain Dataset for project: satellite-image-classification

## Dataset Description

This dataset has been automatically processed by AutoTrain for project satellite-image-classification.

### Languages

The BCP-47 code for the dataset's language is unk.

## Dataset Structure

### Data Instances

A sample from this dataset looks as follows:

```json
[
  {
    "image": "<256x256 CMYK PIL image>",
    "target": 0
  },
  {
    "image": "<256x256 CMYK PIL image>",
    "target": 0
  }
]
```

### Dataset Fields

The dataset has the following fields (also called "features"):

```json
{
  "image": "Image(decode=True, id=None)",
  "target": "ClassLabel(num_classes=1, names=['cloudy'], id=None)"
}
```

### Dataset Splits

This dataset is split into a train and validation split. The split sizes are as follows:

| Split name   | Num samples         |
| ------------ | ------------------- |
| train        | 1200                |
| valid        | 300                 |
false
# Dataset Card for Europarl v7 (en-it split) This dataset contains only the English-Italian split of Europarl v7. We created the dataset to provide it to the [M2L 2022 Summer School](https://www.m2lschool.org/) students. For all the information on the dataset, please refer to: [https://www.statmt.org/europarl/](https://www.statmt.org/europarl/) ## Dataset Structure ### Data Fields - sent_en: English transcript - sent_it: Italian translation ### Data Splits We created three custom training/validation/testing splits. Feel free to rearrange them if needed. These ARE NOT by any means official splits. - train (1717204 pairs) - validation (190911 pairs) - test (1000 pairs) ### Citation Information If using the dataset, please cite: `Koehn, P. (2005). Europarl: A parallel corpus for statistical machine translation. In Proceedings of machine translation summit x: papers (pp. 79-86).` ### Contributions Thanks to [@g8a9](https://github.com/g8a9) for adding this dataset.
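The card above does not include a loading snippet, so here is a minimal sketch using `datasets`; `<repo_id>` is a placeholder for this dataset's actual Hub identifier:

```python
from datasets import load_dataset

# "<repo_id>" is a placeholder: replace it with this dataset's Hub identifier.
dataset = load_dataset("<repo_id>")

pair = dataset["train"][0]
print(pair["sent_en"])  # English transcript
print(pair["sent_it"])  # Italian translation
```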
false
# Battery Device QA Data Battery device records, including anode, cathode, and electrolyte. Examples of the question answering evaluation dataset: \{'question': 'What is the cathode?', 'answer': 'Al foil', 'context': 'The blended slurry was then cast onto a clean current collector (Al foil for the cathode and Cu foil for the anode) and dried at 90 °C under vacuum overnight.', 'start index': 645\} \{'question': 'What is the anode?', 'answer': 'Cu foil', 'context': 'The blended slurry was then cast onto a clean current collector (Al foil for the cathode and Cu foil for the anode) and dried at 90 °C under vacuum overnight. Finally, the obtained electrodes were cut into desired shapes on demand. It should be noted that the electrode mass ratio of cathode/anode is set to about 4, thus achieving the battery balance.', 'start index': 673\} \{'question': 'What is the cathode?', 'answer': 'SiC/RGO nanocomposite', 'context': 'In conclusion, the SiC/RGO nanocomposite, integrating the synergistic effect of SiC flakes and RGO, was synthesized by an in situ gas–solid fabrication method. Taking advantage of the enhanced photogenerated charge separation, large CO2 adsorption, and numerous exposed active sites, SiC/RGO nanocomposite served as the cathode material for the photo-assisted Li–CO2 battery.', 'start index': 284\} # Usage ``` from datasets import load_dataset dataset = load_dataset("batterydata/battery-device-data-qa") ``` # Citation ``` @article{huang2022batterybert, title={BatteryBERT: A Pretrained Language Model for Battery Database Enhancement}, author={Huang, Shu and Cole, Jacqueline M}, journal={J. Chem. Inf. Model.}, year={2022}, doi={10.1021/acs.jcim.2c00035}, url={DOI:10.1021/acs.jcim.2c00035}, pages={DOI: 10.1021/acs.jcim.2c00035}, publisher={ACS Publications} } ```
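As a quick sanity check on the fields shown in the examples above, the `start index` should point at the answer span inside the context. A small sketch, assuming a `train` split and the field names exactly as printed above:

```python
from datasets import load_dataset

dataset = load_dataset("batterydata/battery-device-data-qa")

# Assumed split and field names, taken from the examples above.
example = dataset["train"][0]
answer, start = example["answer"], example["start index"]

# The start index should locate the answer inside the context.
assert example["context"][start:start + len(answer)] == answer
print(example["question"], "->", answer)
```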
false
# Abbreviation Detection Dataset

## Original Data Source

#### PLOS

I. Zilio, H. Saadany, P. Sharma, D. Kanojia and C. Orasan, PLOD: An Abbreviation Detection Dataset for Scientific Documents, 2022, https://arxiv.org/abs/2204.12061.

#### SDU@AAAI-21

A. P. B. Veyseh, F. Dernoncourt, Q. H. Tran and T. H. Nguyen, Proceedings of the 28th International Conference on Computational Linguistics, 2020, pp. 3285–3301

## Citation

BatteryDataExtractor: battery-aware text-mining software embedded with BERT models
false
# CNER Dataset

## Original Data Source

#### CHEMDNER

M. Krallinger, O. Rabal, F. Leitner, M. Vazquez, D. Salgado, Z. Lu, R. Leaman, Y. Lu, D. Ji, D. M. Lowe et al., J. Cheminf., 2015, 7, 1–17.

#### MatScholar

I. Weston, V. Tshitoyan, J. Dagdelen, O. Kononova, A. Trewartha, K. A. Persson, G. Ceder and A. Jain, J. Chem. Inf. Model., 2019, 59, 3692–3702.

#### SOFC

A. Friedrich, H. Adel, F. Tomazic, J. Hingerl, R. Benteau, A. Maruscyk and L. Lange, The SOFC-exp corpus and neural approaches to information extraction in the materials science domain, 2020, https://arxiv.org/abs/2006.03039.

#### BioNLP

G. Crichton, S. Pyysalo, B. Chiu and A. Korhonen, BMC Bioinf., 2017, 18, 1–14.

## Citation

BatteryDataExtractor: battery-aware text-mining software embedded with BERT models
false
# School Notebooks Dataset

The images of school notebooks with handwritten notes in Russian. The dataset annotation contains end-to-end markup for training detection and OCR models, as well as an end-to-end model for reading text from pages.

## Annotation format

The annotation is in COCO format. The `annotation.json` should have the following dictionaries:

- `annotation["categories"]` - a list of dicts with category info (category names and indexes).
- `annotation["images"]` - a list of dictionaries with a description of the images; each dictionary must contain the fields:
  - `file_name` - name of the image file.
  - `id` - the image ID.
- `annotation["annotations"]` - a list of dictionaries with the markup information. Each dictionary stores a description of one polygon from the dataset and must contain the following fields:
  - `image_id` - the index of the image on which the polygon is located.
  - `category_id` - the polygon's category index.
  - `attributes` - a dict with some additional annotation information. In the `translation` subdict you can find the text translation for the line.
  - `segmentation` - the coordinates of the polygon, a list of numbers that are x and y coordinate pairs.
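A minimal sketch for grouping the annotated text lines by image, assuming `annotation.json` follows the structure described above:

```python
import json
from collections import defaultdict

# Assumes annotation.json follows the COCO-style structure described above.
with open("annotation.json", encoding="utf-8") as f:
    annotation = json.load(f)

id_to_file = {image["id"]: image["file_name"] for image in annotation["images"]}

# Group every polygon (with its text) by the image it belongs to.
lines_per_image = defaultdict(list)
for ann in annotation["annotations"]:
    lines_per_image[id_to_file[ann["image_id"]]].append(
        {
            "category_id": ann["category_id"],
            "translation": ann["attributes"].get("translation"),
            "polygon": ann["segmentation"],
        }
    )

print(f"{len(lines_per_image)} annotated images")
```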
false
# Dataset Card for LibriVox Indonesia 1.0 ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://huggingface.co/datasets/indonesian-nlp/librivox-indonesia - **Repository:** https://huggingface.co/datasets/indonesian-nlp/librivox-indonesia - **Point of Contact:** [Cahya Wirawan](mailto:cahya.wirawan@gmail.com) ### Dataset Summary The LibriVox Indonesia dataset consists of MP3 audio and a corresponding text file we generated from the public domain audiobooks [LibriVox](https://librivox.org/). We collected only languages in Indonesia for this dataset. The original LibriVox audiobooks or sound files' duration varies from a few minutes to a few hours. Each audio file in the speech dataset now lasts from a few seconds to a maximum of 20 seconds. We converted the audiobooks to speech datasets using the forced alignment software we developed. It supports multilingual, including low-resource languages, such as Acehnese, Balinese, or Minangkabau. We can also use it for other languages without additional work to train the model. The dataset currently consists of 8 hours in 7 languages from Indonesia. We will add more languages or audio files as we collect them. ### Languages ``` Acehnese, Balinese, Bugisnese, Indonesian, Minangkabau, Javanese, Sundanese ``` ## Dataset Structure ### Data Instances A typical data point comprises the `path` to the audio file and its `sentence`. Additional fields include `reader` and `language`. ```python { 'path': 'librivox-indonesia/sundanese/universal-declaration-of-human-rights/human_rights_un_sun_brc_0000.mp3', 'language': 'sun', 'reader': '3174', 'sentence': 'pernyataan umum ngeunaan hak hak asasi manusa sakabeh manusa', 'audio': { 'path': 'librivox-indonesia/sundanese/universal-declaration-of-human-rights/human_rights_un_sun_brc_0000.mp3', 'array': array([-0.00048828, -0.00018311, -0.00137329, ..., 0.00079346, 0.00091553, 0.00085449], dtype=float32), 'sampling_rate': 44100 }, } ``` ### Data Fields `path` (`string`): The path to the audio file `language` (`string`): The language of the audio file `reader` (`string`): The reader Id in LibriVox `sentence` (`string`): The sentence the user read from the book. `audio` (`dict`): A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column: `dataset[0]["audio"]` the audio file is automatically decoded and resampled to `dataset.features["audio"].sampling_rate`. 
Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the `"audio"` column, *i.e.* `dataset[0]["audio"]` should **always** be preferred over `dataset["audio"][0]`. ### Data Splits The speech material has only train split. ## Dataset Creation ### Curation Rationale [Needs More Information] ### Source Data #### Initial Data Collection and Normalization [Needs More Information] #### Who are the source language producers? [Needs More Information] ### Annotations #### Annotation process [Needs More Information] #### Who are the annotators? [Needs More Information] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information Public Domain, [CC-0](https://creativecommons.org/share-your-work/public-domain/cc0/) ### Citation Information ``` ```
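Going back to the audio column described above, a minimal sketch for loading the data and re-casting the audio to a different sampling rate (assuming the default configuration of the loader):

```python
from datasets import Audio, load_dataset

# Assumes the default configuration; pass a language configuration if the loader requires one.
dataset = load_dataset("indonesian-nlp/librivox-indonesia", split="train")

sample = dataset[0]["audio"]
print(sample["sampling_rate"], sample["array"].shape)

# If a model expects 16 kHz input, re-cast the column so decoding resamples on the fly.
dataset = dataset.cast_column("audio", Audio(sampling_rate=16_000))
print(dataset[0]["audio"]["sampling_rate"])
```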
false
# AutoTrain Dataset for project: donut-vs-croissant

## Dataset Description

This dataset has been automatically processed by AutoTrain for project donut-vs-croissant.

### Languages

The BCP-47 code for the dataset's language is unk.

## Dataset Structure

### Data Instances

A sample from this dataset looks as follows:

```json
[
  {
    "image": "<512x512 RGB PIL image>",
    "target": 0
  },
  {
    "image": "<512x512 RGB PIL image>",
    "target": 0
  }
]
```

### Dataset Fields

The dataset has the following fields (also called "features"):

```json
{
  "image": "Image(decode=True, id=None)",
  "target": "ClassLabel(num_classes=2, names=['croissant', 'donut'], id=None)"
}
```

### Dataset Splits

This dataset is split into a train and validation split. The split sizes are as follows:

| Split name   | Num samples         |
| ------------ | ------------------- |
| train        | 133                 |
| valid        | 362                 |
false
# Datastet card for Encyclopaedia Britannica Illustrated ## Table of Contents - [Dataset Card Creation Guide](#dataset-card-creation-guide) - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Initial Data Collection and Normalization](#initial-data-collection-and-normalization) - [Who are the source language producers?](#who-are-the-source-language-producers) - [Annotations](#annotations) - [Annotation process](#annotation-process) - [Who are the annotators?](#who-are-the-annotators) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [https://data.nls.uk/data/digitised-collections/encyclopaedia-britannica/](https://data.nls.uk/data/digitised-collections/encyclopaedia-britannica/) ### Dataset Summary ### Supported Tasks and Leaderboards ### Languages ## Dataset Structure ### Data Instances ### Data Fields ### Data Splits ## Dataset Creation ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information ## Considerations for Using the Data ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations ## Additional Information ### Dataset Curators ### Licensing Information ### Citation Information ### Contributions Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset.
false
# LAION-Aesthetics :: CLIP → UMAP This dataset is a CLIP (text) → UMAP embedding of the [LAION-Aesthetics dataset](https://laion.ai/blog/laion-aesthetics/) - specifically the [`improved_aesthetics_6plus` version](https://huggingface.co/datasets/ChristophSchuhmann/improved_aesthetics_6plus), which filters the full dataset to images with scores of > 6 under the "aesthetic" filtering model. Thanks LAION for this amazing corpus! --- The dataset here includes coordinates for 3x separate UMAP fits using different values for the `n_neighbors` parameter - `10`, `30`, and `60` - which are broken out as separate columns with different suffixes: - `n_neighbors=10` → (`x_nn10`, `y_nn10`) - `n_neighbors=30` → (`x_nn30`, `y_nn30`) - `n_neighbors=60` → (`x_nn60`, `y_nn60`) ### `nn10` ![nn10](https://user-images.githubusercontent.com/814168/189763846-efa9ecc9-3d57-469b-9d4e-02ddc1723265.jpg) ### `nn30` ![nn30](https://user-images.githubusercontent.com/814168/189763863-a67d4bb1-e043-48ec-8c5a-38dce960731b.jpg) ### `nn60` (The version from [Twitter](https://twitter.com/clured/status/1565399157606580224).) ![nn60](https://user-images.githubusercontent.com/814168/189763872-5847cde5-e03b-45e1-a9be-d95966bc5ded.jpg) ## Pipeline The script for producing this can be found here: https://github.com/davidmcclure/loam-viz/blob/laion/laion.py And is very simple - just using the `openai/clip-vit-base-patch32` model out-of-the-box to encode the text captions: ```python @app.command() def clip( src: str, dst: str, text_col: str = 'TEXT', limit: Optional[int] = typer.Option(None), batch_size: int = typer.Option(512), ): """Embed with CLIP.""" df = pd.read_parquet(src) if limit: df = df.head(limit) tokenizer = CLIPTokenizerFast.from_pretrained('openai/clip-vit-base-patch32') model = CLIPTextModel.from_pretrained('openai/clip-vit-base-patch32') model = model.to(device) texts = df[text_col].tolist() embeds = [] for batch in chunked_iter(tqdm(texts), batch_size): enc = tokenizer( batch, return_tensors='pt', padding=True, truncation=True, ) enc = enc.to(device) with torch.no_grad(): res = model(**enc) embeds.append(res.pooler_output.to('cpu')) embeds = torch.cat(embeds).numpy() np.save(dst, embeds) print(embeds.shape) ``` Then using `cuml.GaussianRandomProjection` to do an initial squeeze to 64d (which gets the embedding tensor small enough to fit onto a single GPU for the UMAP) - ```python @app.command() def random_projection(src: str, dst: str, dim: int = 64): """Random projection on an embedding matrix.""" rmm.reinitialize(managed_memory=True) embeds = np.load(src) rp = cuml.GaussianRandomProjection(n_components=dim) embeds = rp.fit_transform(embeds) np.save(dst, embeds) print(embeds.shape) ``` And then `cuml.UMAP` to get from 64d -> 2d - ```python @app.command() def umap( df_src: str, embeds_src: str, dst: str, n_neighbors: int = typer.Option(30), n_epochs: int = typer.Option(1000), negative_sample_rate: int = typer.Option(20), ): """UMAP to 2d.""" rmm.reinitialize(managed_memory=True) df = pd.read_parquet(df_src) embeds = np.load(embeds_src) embeds = embeds.astype('float16') print(embeds.shape) print(embeds.dtype) reducer = cuml.UMAP( n_neighbors=n_neighbors, n_epochs=n_epochs, negative_sample_rate=negative_sample_rate, verbose=True, ) x = reducer.fit_transform(embeds) df['x'] = x[:,0] df['y'] = x[:,1] df.to_parquet(dst) print(df) ```
false
# CABank Japanese Sakura Corpus - Susanne Miyata - Department of Medical Sciences - Aichi Shukotoku University - smiyata@asu.aasa.ac.jp - website: https://ca.talkbank.org/access/Sakura.html ## Important This data set is a copy from the original one located at https://ca.talkbank.org/access/Sakura.html. ## Details - Participants: 31 - Type of Study: xxx - Location: Japan - Media type: audio - DOI: doi:10.21415/T5M90R ## Citation information Some citation here. In accordance with TalkBank rules, any use of data from this corpus must be accompanied by at least one of the above references. ## Project Description This corpus of 18 conversations is the product of six graduation theses on gender differences in students' group talk. Each conversation lasted between 12 and 35 minutes (avg. 25 minutes) resulting in an overall time of 7 hours and 30 minutes. 31 Students (19 female, 12 male) participated in the study (Table 1). The participants gathered in groups of 4 students, either of the same or the opposite sex (6 conversations with a group of 4 female students, 6 with 4 male students, and 6 conversations with 2 male and 2 female students), according to age (first and third year students) and affiliation (two academic departments). In addition, the participants of each conversation came from the same small-sized class and were well acquainted. The participants were informed that their conversations may be transcribed and a video recorded for use in possible publication when recruited. Additionally, permission was asked once more after the transcription in cases where either private information had been displayed, or a misunderstanding concerning the nature and degree of the publication of the conversations became apparent during the conversation. The recordings took place in a small conference room at the university between or after lectures. The participants were given a card with a conversation topic to start with, but were free to vary (topic 1 "What do you expect from an opposite sex friend?" [isee ni motomeru koto]; topic 2 "Are you a dog lover or a cat lover?" [inuha ka nekoha ka]; topic 3 "About part-time work" [arubaito ni tsuite]). The investigator was not present during the recording. The combination of participants, the topic, and the duration of the 18 conversations are given in Table 2. The participants produced 15,449 utterances overall (female: 8,027 utterances, male: 7,422 utterances). All utterances were linked to video and transcribed in regular Japanese orthography and Latin script (Wakachi2002), and provided with morphological tags (JMOR04.1). Proper names were replaced by pseudonyms. ## Acknowledgements Additional contributors: Banno, Kyoko; Konishi, Saya; Matsui, Ayumi; Matsumoto, Shiori; Oogi, Rie; Takahashi, Akane; Muraki, Kyoko.
false
# CABank Japanese CallHome Corpus - Participants: 120 - Type of Study: phone call - Location: United States - Media type: audio - DOI: doi:10.21415/T5H59V - Web: https://ca.talkbank.org/access/CallHome/jpn.html ## Citation information Some citation here. In accordance with TalkBank rules, any use of data from this corpus must be accompanied by at least one of the above references. ## Project Description This is the Japanese portion of CallHome. Speakers were solicited by the LDC to participate in this telephone speech collection effort via the internet, publications (advertisements), and personal contacts. A total of 200 call originators were found, each of whom placed a telephone call via a toll-free robot operator maintained by the LDC. Access to the robot operator was possible via a unique Personal Identification Number (PIN) issued by the recruiting staff at the LDC when the caller enrolled in the project. The participants were made aware that their telephone call would be recorded, as were the call recipients. The call was allowed only if both parties agreed to being recorded. Each caller was allowed to talk up to 30 minutes. Upon successful completion of the call, the caller was paid $20 (in addition to making a free long-distance telephone call). Each caller was allowed to place only one telephone call. Although the goal of the call collection effort was to have unique speakers in all calls, a handful of repeat speakers are included in the corpus. In all, 200 calls were transcribed. Of these, 80 have been designated as training calls, 20 as development test calls, and 100 as evaluation test calls. For each of the training and development test calls, a contiguous 10-minute region was selected for transcription; for the evaluation test calls, a 5-minute region was transcribed. For the present publication, only 20 of the evaluation test calls are being released; the remaining 80 test calls are being held in reserve for future LVCSR benchmark tests. After a successful call was completed, a human audit of each telephone call was conducted to verify that the proper language was spoken, to check the quality of the recording, and to select and describe the region to be transcribed. The description of the transcribed region provides information about channel quality, number of speakers, their gender, and other attributes. ## Acknowledgements Andrew Yankes reformatted this corpus into accord with current versions of CHAT.
false
This is a copy of the [Cochrane](https://huggingface.co/datasets/allenai/mslr2022) dataset, except the input source documents of its `validation` split have been replaced by a __sparse__ retriever. The retrieval pipeline used: - __query__: The `target` field of each example - __corpus__: The union of all documents in the `train`, `validation` and `test` splits. A document is the concatenation of the `title` and `abstract`. - __retriever__: BM25 via [PyTerrier](https://pyterrier.readthedocs.io/en/latest/) with default settings - __top-k strategy__: `"max"`, i.e. the number of documents retrieved, `k`, is set as the maximum number of documents seen across examples in this dataset, in this case `k==25` Retrieval results on the `train` set: | Recall@100 | Rprec | Precision@k | Recall@k | | ----------- | ----------- | ----------- | ----------- | | 0.7014 | 0.3841 | 0.1698 | 0.5471 | Retrieval results on the `validation` set: | Recall@100 | Rprec | Precision@k | Recall@k | | ----------- | ----------- | ----------- | ----------- | | 0.7226 | 0.4023 | 0.1729 | 0.5676 | Retrieval results on the `test` set: N/A. Test set is blind so we do not have any queries.
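The exact retrieval code is not reproduced here, but a small PyTerrier sketch of the pipeline described above could look like the following; the two-document corpus and the query are toy placeholders, not part of the dataset:

```python
import pandas as pd
import pyterrier as pt

if not pt.started():
    pt.init()

# Toy corpus: one row per study, "text" is the concatenated title and abstract.
corpus = pd.DataFrame([
    {"docno": "study_1", "text": "Title one. Abstract one."},
    {"docno": "study_2", "text": "Title two. Abstract two."},
])

index_ref = pt.DFIndexer("./bm25_index", overwrite=True).index(corpus["text"], corpus["docno"])
bm25 = pt.BatchRetrieve(index_ref, wmodel="BM25")

# One query per review target; keep the top 25 hits per query (the "max" strategy).
queries = pd.DataFrame([{"qid": "q1", "query": "effect of the intervention on the outcome"}])
results = bm25.transform(queries).groupby("qid").head(25)
print(results[["qid", "docno", "rank", "score"]])
```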
false
This is a copy of the [Cochrane](https://huggingface.co/datasets/allenai/mslr2022) dataset, except the input source documents of its `validation` split have been replaced by a __sparse__ retriever. The retrieval pipeline used: - __query__: The `target` field of each example - __corpus__: The union of all documents in the `train`, `validation` and `test` splits. A document is the concatenation of the `title` and `abstract`. - __retriever__: BM25 via [PyTerrier](https://pyterrier.readthedocs.io/en/latest/) with default settings - __top-k strategy__: `"mean"`, i.e. the number of documents retrieved, `k`, is set as the mean number of documents seen across examples in this dataset, in this case `k==9` Retrieval results on the `train` set: | Recall@100 | Rprec | Precision@k | Recall@k | | ----------- | ----------- | ----------- | ----------- | | 0.7014 | 0.3841 | 0.2976 | 0.4157 | Retrieval results on the `validation` set: | Recall@100 | Rprec | Precision@k | Recall@k | | ----------- | ----------- | ----------- | ----------- | | 0.7226 | 0.4023 | 0.3095 | 0.4443 | Retrieval results on the `test` set: N/A. Test set is blind so we do not have any queries.
false
This is a copy of the [Cochrane](https://huggingface.co/datasets/allenai/mslr2022) dataset, except the input source documents of its `validation` split have been replaced by a __sparse__ retriever. The retrieval pipeline used: - __query__: The `target` field of each example - __corpus__: The union of all documents in the `train`, `validation` and `test` splits. A document is the concatenation of the `title` and `abstract`. - __retriever__: BM25 via [PyTerrier](https://pyterrier.readthedocs.io/en/latest/) with default settings - __top-k strategy__: `"oracle"`, i.e. the number of documents retrieved, `k`, is set as the original number of input documents for each example Retrieval results on the `train` set: | Recall@100 | Rprec | Precision@k | Recall@k | | ----------- | ----------- | ----------- | ----------- | | 0.7014 | 0.3841 | 0.3841 | 0.3841 | Retrieval results on the `validation` set: | Recall@100 | Rprec | Precision@k | Recall@k | | ----------- | ----------- | ----------- | ----------- | | 0.7226 | 0.4023 | 0.4023 | 0.4023 | Retrieval results on the `test` set: N/A. Test set is blind so we do not have any queries.
false
# Şalom Ladino articles text corpus

Text corpus compiled from 397 articles from the Judeo-Espanyol section of [Şalom newspaper](https://www.salom.com.tr/haberler/17/judeo-espanyol). Original sentences and articles belong to Şalom.

Size: 176,843 words

[Official link](https://data.sefarad.com.tr/dataset/salom-ladino-articles-text-corpus)

Paper on [ArXiv](https://arxiv.org/abs/2205.15599)

Citation:
```
Preparing an endangered language for the digital age: The Case of Judeo-Spanish. Alp Öktem, Rodolfo Zevallos, Yasmin Moslem, Güneş Öztürk, Karen Şarhon. Workshop on Resources and Technologies for Indigenous, Endangered and Lesser-resourced Languages in Eurasia (EURALI) @ LREC 2022. Marseille, France. 20 June 2022
```

This dataset is created as part of the project "Judeo-Spanish: Connecting the two ends of the Mediterranean" carried out by Col·lectivaT and the Sephardic Center of Istanbul within the framework of the “Grant Scheme for Common Cultural Heritage: Preservation and Dialogue between Turkey and the EU–II (CCH-II)” implemented by the Ministry of Culture and Tourism of the Republic of Turkey with the financial support of the European Union. The content of this website is the sole responsibility of Col·lectivaT and does not necessarily reflect the views of the European Union.
false
# Una fraza al diya

Ladino language learning sentences prepared by Karen Sarhon of the Sephardic Center of Istanbul. Each sentence has translations in Turkish, English, and Spanish, and comes with audio and an image. 307 sentences in total.

Source: https://sefarad.com.tr/judeo-espanyolladino/frazadeldia/

Images and audio: http://collectivat.cat/share/judeoespanyol_audio_image.zip

[Official link on Ladino Data Hub](https://data.sefarad.com.tr/dataset/una-fraza-al-diya-skad)

Paper on [ArXiv](https://arxiv.org/abs/2205.15599)

Citation:
```
Preparing an endangered language for the digital age: The Case of Judeo-Spanish. Alp Öktem, Rodolfo Zevallos, Yasmin Moslem, Güneş Öztürk, Karen Şarhon. Workshop on Resources and Technologies for Indigenous, Endangered and Lesser-resourced Languages in Eurasia (EURALI) @ LREC 2022. Marseille, France. 20 June 2022
```

This dataset is created as part of the project "Judeo-Spanish: Connecting the two ends of the Mediterranean" carried out by Col·lectivaT and the Sephardic Center of Istanbul within the framework of the “Grant Scheme for Common Cultural Heritage: Preservation and Dialogue between Turkey and the EU–II (CCH-II)” implemented by the Ministry of Culture and Tourism of the Republic of Turkey with the financial support of the European Union. The content of this website is the sole responsibility of Col·lectivaT and does not necessarily reflect the views of the European Union.
false
# Data card for Internet Archive historic book pages (unlabelled)

- `10,844,387` unlabelled pages from historical books held by the Internet Archive.
- Intended to be used for:
  - pre-training computer vision models in an unsupervised manner
  - using weak supervision to generate labels
true
# Dataset Card for Kelly Keywords for Language Learning for Young and adults alike ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Additional Information](#additional-information) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://spraakbanken.gu.se/en/resources/kelly - **Paper:** https://link.springer.com/article/10.1007/s10579-013-9251-2 ### Dataset Summary The Swedish Kelly list is a freely available frequency-based vocabulary list that comprises general-purpose language of modern Swedish. The list was generated from a large web-acquired corpus (SweWaC) of 114 million words dating from the 2010s. It is adapted to the needs of language learners and contains 8,425 most frequent lemmas that cover 80% of SweWaC. ### Languages Swedish (sv-SE) ## Dataset Structure ### Data Instances Here is a sample of the data: ```python { 'id': 190, 'raw_frequency': 117835.0, 'relative_frequency': 1033.61, 'cefr_level': 'A1', 'source': 'SweWaC', 'marker': 'en', 'lemma': 'dag', 'pos': 'noun-en', 'examples': 'e.g. god dag' } ``` This can be understood as: > The common noun "dag" ("day") has a rank of 190 in the list. It was used 117,835 times in SweWaC, meaning it occured 1033.61 times per million words. This word is among the most important vocabulary words for Swedish language learners and should be learned at the A1 CEFR level. An example usage of this word is the phrase "god dag" ("good day"). ### Data Fields - `id`: The row number for the data entry, starting at 1. Generally corresponds to the rank of the word. - `raw_frequency`: The raw frequency of the word. - `relative_frequency`: The relative frequency of the word measured in number of occurences per million words. - `cefr_level`: The CEFR level (A1, A2, B1, B2, C1, C2) of the word. - `source`: Whether the word came from SweWaC, translation lists (T2), or was manually added (manual). - `marker`: The grammatical marker of the word, if any, such as an article or infinitive marker. - `lemma`: The lemma of the word, sometimes provided with its spelling or stylistic variants. - `pos`: The word's part-of-speech. - `examples`: Usage examples and comments. Only available for some of the words. Manual entries were prepended to the list, giving them a higher rank than they might otherwise have had. For example, the manual entry "Göteborg ("Gothenberg") has a rank of 20, while the first non-manual entry "och" ("and") has a rank of 87. However, a conjunction and common stopword is far more likely to occur than the name of a city. ### Data Splits There is a single split, `train`. ## Dataset Creation Please refer to the article [Corpus-based approaches for the creation of a frequency based vocabulary list in the EU project KELLY – issues on reliability, validity and coverage](https://gup.ub.gu.se/publication/148533?lang=en) for information about how the original dataset was created and considerations for using the data. **The following changes have been made to the original dataset**: - Changed header names. - Normalized the large web-acquired corpus name to "SweWac" in the `source` field. 
- Set the relative frequency of manual entries to null rather than 1000000. ## Additional Information ### Licensing Information [CC BY 4.0](https://creativecommons.org/licenses/by/4.0) ### Citation Information Please cite the authors if you use this dataset in your work: ```bibtex @article{Kilgarriff2013, doi = {10.1007/s10579-013-9251-2}, url = {https://doi.org/10.1007/s10579-013-9251-2}, year = {2013}, month = sep, publisher = {Springer Science and Business Media {LLC}}, volume = {48}, number = {1}, pages = {121--163}, author = {Adam Kilgarriff and Frieda Charalabopoulou and Maria Gavrilidou and Janne Bondi Johannessen and Saussan Khalil and Sofie Johansson Kokkinakis and Robert Lew and Serge Sharoff and Ravikiran Vadlapudi and Elena Volodina}, title = {Corpus-based vocabulary lists for language learners for nine languages}, journal = {Language Resources and Evaluation} } ``` ### Contributions Thanks to [@spraakbanken](https://github.com/spraakbanken) for creating this dataset and to [@codesue](https://github.com/codesue) for adding it.
true
A Korean proverbs dataset for NLI. The 'question' field contains the meaning of a proverb together with five multiple-choice options, and the 'label' field contains the number (0-4) of the correct answer.

licence: cc-by-sa-2.0-kr (original source: Standard Korean Language Dictionary, National Institute of Korean Language)

|Model| psyche/korean_idioms |
|:------:|:---:|
|klue/bert-base|0.7646|
true
|Model| psyche/bool_sentence (10k) |
|:------:|:---:|
|klue/bert-base|0.9335|

licence: cc-by-sa-2.0-kr (original source: Standard Korean Language Dictionary, National Institute of Korean Language)
true
# AutoTrain Dataset for project: consbert ## Dataset Description This dataset has been automatically processed by AutoTrain for project consbert. ### Languages The BCP-47 code for the dataset's language is unk. ## Dataset Structure ### Data Instances A sample from this dataset looks as follows: ```json [ { "text": "DECLARATION OF PERFORMANCE fermacell Screws 1. unique identification code of the product type 2. purpose of use 3. manufacturer 5. system(s) for assessment and verification of constancy of performance 6. harmonised standard Notified body(ies) 7. Declared performance Essential feature Reaction to fire Tensile strength Length Corrosion protection (Reis oeueelt Nr. FC-0103 A FC-0103 A Drywall screws type TSN for fastening gypsum fibreboards James Hardie Europe GmbH Bennigsen- Platz 1 D-40474 Disseldorf Tel. +49 800 3864001 E-Mail fermacell jameshardie.de System 4 DIN EN 14566:2008+A1:2009 Stichting Hout Research (2590) Performance Al fulfilled <63mm Phosphated - Class 48 The performance of the above product corresponds to the declared performance(s). The manufacturer mentioned aboveis solely responsible for the preparation of the declaration of performancein accordance with Regulation (EU) No. 305/2011. Signed for the manufacturer and on behalf of the manufacturerof: Dusseldorf, 01.01.2020 2020 James Hardie Europe GmbH. and designate registered and incorporated trademarks of James Hardie Technology Limited Dr. J\u00e9rg Brinkmann (CEO) AESTUVER Seite 1/1 ", "target": 1 }, { "text": "DERBIGUM\u201d MAKING BUILDINGS SMART 9 - Performances d\u00e9clar\u00e9es selon EN 13707 : 2004 + A2: 2009 Caract\u00e9ristiques essentielles Performances Unit\u00e9s R\u00e9sistance a un feu ext\u00e9rieur (Note 1) FRoof (t3) - R\u00e9action au feu F - Etanch\u00e9it\u00e9 a l\u2019eau Conforme - Propri\u00e9t\u00e9s en traction : R\u00e9sistance en traction LxT* 900 x 700(+4 20%) N/50 mm Allongement LxT* 45 x 45 (+ 15) % R\u00e9sistance aux racines NPD** - R\u00e9sistance au poinconnementstatique (A) 20 kg R\u00e9sistance au choc (A et B) NPD** mm R\u00e9sistance a la d\u00e9chirure LxT* 200 x 200 (+ 20%) N R\u00e9sistance des jonctions: R\u00e9sistance au pelage NPD** N/50 mm R\u00e9sistance au cisaillement NPD** N/50 mm Durabilit\u00e9 : Sous UV, eau et chaleur Conforme - Pliabilit\u00e9 a froid apr\u00e9s vieillissement a la -10 (+ 5) \u00b0C chaleur Pliabilit\u00e9 a froid -18 \u00b0C Substances dangereuses (Note 2) - * L signifie la direction longitudinale, T signifie la direction transversale **NPD signifie Performance Non D\u00e9termin\u00e9e Note 1: Aucune performance ne peut \u00e9tre donn\u00e9e pourle produit seul, la performance de r\u00e9sistance a un feu ext\u00e9rieur d\u2019une toiture d\u00e9pend du syst\u00e9me complet Note 2: En l\u2019absence de norme d\u2019essai europ\u00e9enne harmonis\u00e9e, aucune performanceli\u00e9e au comportementa la lixiviation ne peut \u00e9tre d\u00e9clar\u00e9e, la d\u00e9claration doit \u00e9tre \u00e9tablie selon les dispositions nationales en vigueur. 10 - Les performances du produit identifi\u00e9 aux points 1 et 2 ci-dessus sont conformes aux performances d\u00e9clar\u00e9es indiqu\u00e9es au point 9. 
La pr\u00e9sente d\u00e9claration des performances est \u00e9tablie sous la seule responsabilit\u00e9 du fabricant identifi\u00e9 au point 4 Sign\u00e9 pourle fabricant et en son nom par: Mr Steve Geubels, Group Operations Director Perwez ,30/09/2016 Page 2 of 2 ", "target": 8 } ] ``` ### Dataset Fields The dataset has the following fields (also called "features"): ```json { "text": "Value(dtype='string', id=None)", "target": "ClassLabel(num_classes=9, names=['0', '1', '2', '3', '4', '5', '6', '7', '8'], id=None)" } ``` ### Dataset Splits This dataset is split into a train and validation split. The split sizes are as follow: | Split name | Num samples | | ------------ | ------------------- | | train | 59 | | valid | 18 |
false
# AutoTrain Dataset for project: opus-mt-en-zh_hanz ## Dataset Description This dataset has been automatically processed by AutoTrain for project opus-mt-en-zh_hanz. ### Languages The BCP-47 code for the dataset's language is en2zh. ## Dataset Structure ### Data Instances A sample from this dataset looks as follows: ```json [ { "source": "And then I hear something.", "target": "\u63a5\u7740\u542c\u5230\u4ec0\u4e48\u52a8\u9759\u3002", "feat_en_length": 26, "feat_zh_length": 9 }, { "source": "A ghostly iron whistle blows through the tunnels.", "target": "\u9b3c\u9b45\u7684\u54e8\u58f0\u5439\u8fc7\u96a7\u9053\u3002", "feat_en_length": 49, "feat_zh_length": 10 } ] ``` ### Dataset Fields The dataset has the following fields (also called "features"): ```json { "source": "Value(dtype='string', id=None)", "target": "Value(dtype='string', id=None)", "feat_en_length": "Value(dtype='int64', id=None)", "feat_zh_length": "Value(dtype='int64', id=None)" } ``` ### Dataset Splits This dataset is split into a train and validation split. The split sizes are as follow: | Split name | Num samples | | ------------ | ------------------- | | train | 16350 | | valid | 4088 |
false
# Dataset Card for **slone/myv_ru_2022** ## Dataset Description - **Repository:** https://github.com/slone-nlp/myv-nmt - **Paper:**: https://arxiv.org/abs/2209.09368 - **Point of Contact:** @cointegrated ### Dataset Summary This is a corpus of parallel Erzya-Russian words, phrases and sentences, collected in the paper [The first neural machine translation system for the Erzya language](https://arxiv.org/abs/2209.09368). Erzya (`myv`) is a language from the Uralic family. It is spoken primarily in the Republic of Mordovia and some other regions of Russia and other post-Soviet countries. We use the Cyrillic version of its script. The corpus consists of the following parts: | name | size | composition | | -----| ---- | -------| |train | 74503 | parallel words, phrases and sentences, mined from dictionaries, books and web texts | | dev | 1500 | parallel sentences mined from books and web texts | | test | 1500 | parallel sentences mined from books and web texts | | mono | 333651| Erzya sentences mined from books and web texts, translated to Russian by a neural model | The dev and test splits contain sentences from the following sources | name | size | description| | ---------------|----| -------| |wiki |600 | Aligned sentences from linked Erzya and Russian Wikipedia articles | |bible |400 | Paired verses from the Bible (https://finugorbib.com) | |games |250 | Aligned sentences from the book *"Сказовые формы мордовской литературы", И.И. Шеянова, 2017, НИИ гуманитарых наук при Правительстве Республики Мордовия, Саранск* | |tales |100 | Aligned sentences from the book *"Мордовские народные игры", В.С. Брыжинский, 2009, Мордовское книжное издательство, Саранск* | |fiction |100 | Aligned sentences from modern Erzya prose and poetry (https://rus4all.ru/myv) | |constitution | 50 | Aligned sentences from the Soviet 1938 constitution | To load the first three parts (train, validation and test), use the code: ```Python from datasets import load_dataset data = load_dataset('slone/myv_ru_2022') ``` To load all four parts (included the back-translated data), please specify the data files explicitly: ```Python from datasets import load_dataset data_extended = load_dataset( 'slone/myv_ru_2022', data_files={'train':'train.jsonl', 'validation': 'dev.jsonl', 'test': 'test.jsonl', 'mono': 'back_translated.jsonl'} ) ``` ### Supported Tasks and Leaderboards - `translation`: the dataset may be used to train `ru-myv` translation models. There are no specific leaderboards for it yet, but if you feel like discussing it, welcome to the comments! ### Languages The main part of the dataset (`train`, `dev` and `test`) consists of "natural" Erzya (Cyrillic) and Russian sentences, translated to the other language by humans. There is also a larger Erzya-only part of the corpus (`mono`), translated to Russian automatically. ## Dataset Structure ### Data Instances All data instances have three string fields: `myv`, `ru` and `src` (the last one is currently meaningful only for dev and test splits), for example: ``` {'myv': 'Сюкпря Пазонтень, кие кирвазтизе Титэнь седейс тынк кисэ секе жо бажамонть, кона палы минек седейсэяк!', 'ru': 'Благодарение Богу, вложившему в сердце Титово такое усердие к вам.', 'src': 'bible'} ``` ### Data Fields - `myv`: the Erzya text (word, phrase, or sentence) - `ru`: the corresponding Russian text - `src`: the source of data (only for dev and test splits) ### Data Splits - train: parallel sentences, words and phrases, collected from various sources. Most of them are aligned automatically. Noisy. 
- dev: 1500 parallel sentences, selected from the 6 most reliable and diverse sources.
- test: same as dev.
- mono: Erzya sentences collected from various sources, with the Russian counterpart generated by a neural machine translation model.

## Dataset Creation

### Curation Rationale

This is, as far as we know, the first publicly available parallel Russian-Erzya corpus, and the first medium-sized translation corpus for Erzya. We hope that it sets a meaningful baseline for Erzya machine translation.

### Source Data

#### Initial Data Collection and Normalization

The dataset was collected from various sources (see below). The texts were split into sentences using the [razdel]() package. For some sources, sentences were filtered by language using the [slone/fastText-LID-323](https://huggingface.co/slone/fastText-LID-323) model. For most of the sources, `myv` and `ru` sentences were aligned automatically using the [slone/LaBSE-en-ru-myv-v1](https://huggingface.co/slone/LaBSE-en-ru-myv-v1) sentence encoder and the code from [the paper repository](https://github.com/slone-nlp/myv-nmt).

#### Who are the source language producers?

The dataset comprises parallel `myv-ru` and monolingual `myv` texts from diverse sources:
- 12K parallel sentences from the Bible (http://finugorbib.com);
- 3K parallel Wikimedia sentences from OPUS;
- 42K parallel words or short phrases collected from various online dictionaries ();
- the Erzya Wikipedia and the corresponding articles from the Russian Wikipedia;
- 18 books, including 3 books with Erzya-Russian bitexts (http://lib.e-mordovia.ru);
- Soviet-time books and periodicals (https://fennougrica.kansalliskirjasto.fi);
- The Erzya part of Wikisource (https://wikisource.org/wiki/Main_Page/?oldid=895127);
- Short texts by modern Erzya authors (https://rus4all.ru/myv/);
- News articles from the Erzya Pravda website (http://erziapr.ru);
- Texts found in LiveJournal (https://www.livejournal.com) by searching with the 100 most frequent Erzya words.

### Annotations

No human annotation was involved in the data collection.

### Personal and Sensitive Information

All data was collected from public sources, so no sensitive information is expected in it. However, some of the collected sentences, for example from news articles or LiveJournal posts, may contain personal data.

## Considerations for Using the Data

### Social Impact of Dataset

Publication of this dataset may attract some attention to the endangered Erzya language.

### Discussion of Biases

Most of the dataset has been collected by automatic means, so it may contain errors and noise. Some types of these errors are systematic: for example, the words for "Erzya" and "Russian" are often aligned together, because they appear in similar positions in the corresponding Wikipedias.

### Other Known Limitations

The dataset is noisy: some texts in it may be ungrammatical, in the wrong language, or poorly aligned.

## Additional Information

### Dataset Curators

The data was collected by David Dale (https://huggingface.co/cointegrated).

### Licensing Information

The status of the dataset is not final, but after we check everything, we hope to be able to distribute it under the [CC-BY-SA license](http://creativecommons.org/licenses/by-sa/4.0/).

### Citation Information

[TBD]
false
256x256 mel spectrograms of 5 second samples of instrumental Hip Hop. The code to convert from audio to spectrogram and vice versa can be found in https://github.com/teticio/audio-diffusion along with scripts to train and run inference using De-noising Diffusion Probabilistic Models. ``` x_res = 256 y_res = 256 sample_rate = 22050 n_fft = 2048 hop_length = 512 ```
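The conversion code lives in the linked repository; purely as an illustration of the parameters above, a rough librosa sketch for turning a 5-second clip into a greyscale mel-spectrogram image might look like this (file names are placeholders, and the repository's own conversion code should be preferred):

```python
import librosa
import numpy as np
from PIL import Image

# Parameters from this dataset's description.
x_res, y_res = 256, 256
sample_rate = 22050
n_fft = 2048
hop_length = 512

# Load a 5-second clip (placeholder file name).
y, sr = librosa.load("clip.wav", sr=sample_rate, duration=5.0)

# Mel spectrogram with y_res mel bins, converted to decibels.
mel = librosa.feature.melspectrogram(y=y, sr=sr, n_fft=n_fft, hop_length=hop_length, n_mels=y_res)
mel_db = librosa.power_to_db(mel, ref=np.max)

# Map decibels to 0-255 greyscale and resize so the image is x_res frames wide.
img = (255 * (mel_db - mel_db.min()) / (mel_db.max() - mel_db.min())).astype(np.uint8)
Image.fromarray(img).resize((x_res, y_res)).save("spectrogram.png")
```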
false
A sampled version of the [CCMatrix](https://huggingface.co/datasets/yhavinga/ccmatrix) dataset for the English-Romanian pair, containing 1M train entries. Please refer to the original for more info.
false
# Mario Maker 2 level comments Part of the [Mario Maker 2 Dataset Collection](https://tgrcode.com/posts/mario_maker_2_datasets) ## Dataset Description The Mario Maker 2 level comment dataset consists of 31.9 million level comments from Nintendo's online service totaling around 20GB of data. The dataset was created using the self-hosted [Mario Maker 2 api](https://tgrcode.com/posts/mario_maker_2_api) over the course of 1 month in February 2022. ### How to use it The Mario Maker 2 level comment dataset is a very large dataset so for most use cases it is recommended to make use of the streaming API of `datasets`. You can load and iterate through the dataset with the following code: ```python from datasets import load_dataset ds = load_dataset("TheGreatRambler/mm2_level_comments", streaming=True, split="train") print(next(iter(ds))) #OUTPUT: { 'data_id': 3000006, 'comment_id': '20200430072710528979_302de3722145c7a2_2dc6c6', 'type': 2, 'pid': '3471680967096518562', 'posted': 1561652887, 'clear_required': 0, 'text': '', 'reaction_image_id': 10, 'custom_image': [some binary data], 'has_beaten': 0, 'x': 557, 'y': 64, 'reaction_face': 0, 'unk8': 0, 'unk10': 0, 'unk12': 0, 'unk14': [some binary data], 'unk17': 0 } ``` Comments can be one of three types: text, reaction image or custom image. `type` can be used with the enum below to identify different kinds of comments. Custom images are binary PNGs. You can also download the full dataset. Note that this will download ~20GB: ```python ds = load_dataset("TheGreatRambler/mm2_level_comments", split="train") ``` ## Data Structure ### Data Instances ```python { 'data_id': 3000006, 'comment_id': '20200430072710528979_302de3722145c7a2_2dc6c6', 'type': 2, 'pid': '3471680967096518562', 'posted': 1561652887, 'clear_required': 0, 'text': '', 'reaction_image_id': 10, 'custom_image': [some binary data], 'has_beaten': 0, 'x': 557, 'y': 64, 'reaction_face': 0, 'unk8': 0, 'unk10': 0, 'unk12': 0, 'unk14': [some binary data], 'unk17': 0 } ``` ### Data Fields |Field|Type|Description| |---|---|---| |data_id|int|The data ID of the level this comment appears on| |comment_id|string|Comment ID| |type|int|Type of comment, enum below| |pid|string|Player ID of the comment creator| |posted|int|UTC timestamp of when this comment was created| |clear_required|bool|Whether this comment requires a clear to view| |text|string|If the comment type is text, the text of the comment| |reaction_image_id|int|If this comment is a reaction image, the id of the reaction image, enum below| |custom_image|bytes|If this comment is a custom drawing, the custom drawing as a PNG binary| |has_beaten|int|Whether the user had beaten the level when they created the comment| |x|int|The X position of the comment in game| |y|int|The Y position of the comment in game| |reaction_face|int|The reaction face of the mii of this user, enum below| |unk8|int|Unknown| |unk10|int|Unknown| |unk12|int|Unknown| |unk14|bytes|Unknown| |unk17|int|Unknown| ### Data Splits The dataset only contains a train split. ## Enums The dataset contains some enum integer fields. This can be used to convert back to their string equivalents: ```python CommentType = { 0: "Custom Image", 1: "Text", 2: "Reaction Image" } CommentReactionImage = { 0: "Nice!", 1: "Good stuff!", 2: "So tough...", 3: "EASY", 4: "Seriously?!", 5: "Wow!", 6: "Cool idea!", 7: "SPEEDRUN!", 8: "How?!", 9: "Be careful!", 10: "So close!", 11: "Beat it!" 
}

CommentReactionFace = {
 0: "Normal",
 16: "Wink",
 1: "Happy",
 4: "Surprised",
 18: "Scared",
 3: "Confused"
}
```

<!-- TODO create detailed statistics -->

## Dataset Creation

The dataset was created over a little more than a month in February 2022 using the self-hosted [Mario Maker 2 api](https://tgrcode.com/posts/mario_maker_2_api). As requests made to Nintendo's servers require authentication, the process had to be done with the utmost care, limiting download speed so as not to overload the API and risk a ban. There are no intentions to create an updated release of this dataset.

## Considerations for Using the Data

The dataset consists of comments from many different Mario Maker 2 players globally and as such their text could contain harmful language. Harmful depictions could also be present in the custom images.
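Related to the custom-image comments above, a short sketch for decoding the first custom drawing in the stream (type `0`), relying on the fact that `custom_image` holds raw PNG bytes:

```python
import io

from datasets import load_dataset
from PIL import Image

ds = load_dataset("TheGreatRambler/mm2_level_comments", streaming=True, split="train")

# Find the first custom-image comment (type 0) and decode its PNG payload.
for row in ds:
    if row["type"] == 0 and row["custom_image"]:
        drawing = Image.open(io.BytesIO(row["custom_image"]))
        drawing.save("comment_drawing.png")
        break
```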
false
# Mario Maker 2 level plays

Part of the [Mario Maker 2 Dataset Collection](https://tgrcode.com/posts/mario_maker_2_datasets)

## Dataset Description

The Mario Maker 2 level plays dataset consists of 1 billion level plays from Nintendo's online service totaling around 20GB of data. The dataset was created using the self-hosted [Mario Maker 2 api](https://tgrcode.com/posts/mario_maker_2_api) over the course of 1 month in February 2022.

### How to use it

The Mario Maker 2 level plays dataset is a very large dataset so for most use cases it is recommended to make use of the streaming API of `datasets`. You can load and iterate through the dataset with the following code:

```python
from datasets import load_dataset

ds = load_dataset("TheGreatRambler/mm2_level_played", streaming=True, split="train")
print(next(iter(ds)))

#OUTPUT:
{
 'data_id': 3000004,
 'pid': '6382913755133534321',
 'cleared': 1,
 'liked': 0
}
```

Each row is a unique play of the level denoted by `data_id`, made by the player denoted by `pid`; `pid` is a 64-bit integer stored as a string due to database limitations. `cleared` and `liked` denote whether the player successfully cleared the level and/or liked it during that play. Every level has only one unique play per player.

You can also download the full dataset. Note that this will download ~20GB:
```python
ds = load_dataset("TheGreatRambler/mm2_level_played", split="train")
```

## Data Structure

### Data Instances

```python
{
 'data_id': 3000004,
 'pid': '6382913755133534321',
 'cleared': 1,
 'liked': 0
}
```

### Data Fields

|Field|Type|Description|
|---|---|---|
|data_id|int|The data ID of the level this play occurred in|
|pid|string|Player ID of the player|
|cleared|bool|Whether the player cleared the level during their play|
|liked|bool|Whether the player liked the level during their play|

### Data Splits

The dataset only contains a train split.

<!-- TODO create detailed statistics -->

## Dataset Creation

The dataset was created over a little more than a month in February 2022 using the self-hosted [Mario Maker 2 api](https://tgrcode.com/posts/mario_maker_2_api). As requests made to Nintendo's servers require authentication, the process had to be done with the utmost care, limiting download speed so as not to overload the API and risk a ban. There are no intentions to create an updated release of this dataset.

## Considerations for Using the Data

The dataset contains no harmful language or depictions.
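Since every row is a single play with `cleared`/`liked` flags, per-level statistics can be accumulated directly from the stream. A small sketch that estimates clear rates over a sample of the data:

```python
from collections import defaultdict

from datasets import load_dataset

ds = load_dataset("TheGreatRambler/mm2_level_played", streaming=True, split="train")

# Accumulate plays and clears per level over a sample of the stream.
plays, clears = defaultdict(int), defaultdict(int)
for i, row in enumerate(ds):
    plays[row["data_id"]] += 1
    clears[row["data_id"]] += int(row["cleared"])
    if i >= 100_000:
        break

# Ten lowest clear rates in the sample.
for data_id in sorted(plays, key=lambda d: clears[d] / plays[d])[:10]:
    print(data_id, f"{clears[data_id] / plays[data_id]:.2%} over {plays[data_id]} plays")
```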
false
# Mario Maker 2 level deaths

Part of the [Mario Maker 2 Dataset Collection](https://tgrcode.com/posts/mario_maker_2_datasets)

## Dataset Description

The Mario Maker 2 level deaths dataset consists of 564 million level deaths from Nintendo's online service totaling around 2.5GB of data. The dataset was created using the self-hosted [Mario Maker 2 api](https://tgrcode.com/posts/mario_maker_2_api) over the course of 1 month in February 2022.

### How to use it

The Mario Maker 2 level deaths dataset is a very large dataset so for most use cases it is recommended to make use of the streaming API of `datasets`. You can load and iterate through the dataset with the following code:

```python
from datasets import load_dataset

ds = load_dataset("TheGreatRambler/mm2_level_deaths", streaming=True, split="train")
print(next(iter(ds)))

#OUTPUT:
{
 'data_id': 3000382,
 'x': 696,
 'y': 0,
 'is_subworld': 0
}
```

Each row is a unique death in the level denoted by `data_id`, occurring at the provided coordinates. `is_subworld` denotes whether the death happened in the main world or the subworld.

You can also download the full dataset. Note that this will download ~2.5GB:
```python
ds = load_dataset("TheGreatRambler/mm2_level_deaths", split="train")
```

## Data Structure

### Data Instances

```python
{
 'data_id': 3000382,
 'x': 696,
 'y': 0,
 'is_subworld': 0
}
```

### Data Fields

|Field|Type|Description|
|---|---|---|
|data_id|int|The data ID of the level this death occurred in|
|x|int|X coordinate of death|
|y|int|Y coordinate of death|
|is_subworld|bool|Whether the death happened in the main world or the subworld|

### Data Splits

The dataset only contains a train split.

<!-- TODO create detailed statistics -->

## Dataset Creation

The dataset was created over a little more than a month in February 2022 using the self-hosted [Mario Maker 2 api](https://tgrcode.com/posts/mario_maker_2_api). As requests made to Nintendo's servers require authentication, the process had to be done with the utmost care, limiting download speed so as not to overload the API and risk a ban. There are no intentions to create an updated release of this dataset.

## Considerations for Using the Data

The dataset contains no harmful language or depictions.
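Because each row is a single death with in-level coordinates, the data lends itself to per-level heat maps. A small sketch that finds the deadliest spots over a sample of the stream:

```python
from collections import Counter

from datasets import load_dataset

ds = load_dataset("TheGreatRambler/mm2_level_deaths", streaming=True, split="train")

# Count deaths per (level, coordinate, world) over a sample of the stream.
hotspots = Counter()
for i, row in enumerate(ds):
    hotspots[(row["data_id"], row["x"], row["y"], row["is_subworld"])] += 1
    if i >= 100_000:
        break

for (data_id, x, y, sub), n in hotspots.most_common(5):
    where = "subworld" if sub else "main world"
    print(f"level {data_id}: {n} deaths at ({x}, {y}) in the {where}")
```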
false
# Mario Maker 2 users

Part of the [Mario Maker 2 Dataset Collection](https://tgrcode.com/posts/mario_maker_2_datasets)

## Dataset Description

The Mario Maker 2 users dataset consists of 6 million users from Nintendo's online service totaling around 1.2GB of data. The dataset was created using the self-hosted [Mario Maker 2 api](https://tgrcode.com/posts/mario_maker_2_api) over the course of 1 month in February 2022.

### How to use it

The Mario Maker 2 users dataset is a very large dataset, so for most use cases it is recommended to use the streaming API of `datasets`. You can load and iterate through the dataset with the following code:

```python
from datasets import load_dataset

ds = load_dataset("TheGreatRambler/mm2_user", streaming=True, split="train")
print(next(iter(ds)))

#OUTPUT:
{
 'pid': '14608829447232141607',
 'data_id': 1,
 'region': 0,
 'name': 'げんまい',
 'country': 'JP',
 'last_active': 1578384457,
 'mii_data': [some binary data],
 'mii_image': '000f165d6574777a7881949e9da1acc1cac7cacad3dad9e0eff2f9faf900430a151c25384258637084878e8b96a0b0',
 'pose': 0,
 'hat': 0,
 'shirt': 0,
 'pants': 0,
 'wearing_outfit': 0,
 'courses_played': 12,
 'courses_cleared': 10,
 'courses_attempted': 23,
 'courses_deaths': 13,
 'likes': 0,
 'maker_points': 0,
 'easy_highscore': 0,
 'normal_highscore': 0,
 'expert_highscore': 0,
 'super_expert_highscore': 0,
 'versus_rating': 0,
 'versus_rank': 1,
 'versus_won': 0,
 'versus_lost': 1,
 'versus_win_streak': 0,
 'versus_lose_streak': 1,
 'versus_plays': 1,
 'versus_disconnected': 0,
 'coop_clears': 1,
 'coop_plays': 1,
 'recent_performance': 1383,
 'versus_kills': 0,
 'versus_killed_by_others': 0,
 'multiplayer_unk13': 286,
 'multiplayer_unk14': 5999927,
 'first_clears': 0,
 'world_records': 0,
 'unique_super_world_clears': 0,
 'uploaded_levels': 0,
 'maximum_uploaded_levels': 100,
 'weekly_maker_points': 0,
 'last_uploaded_level': 1561555201,
 'is_nintendo_employee': 0,
 'comments_enabled': 1,
 'tags_enabled': 0,
 'super_world_id': '',
 'unk3': 0,
 'unk12': 0,
 'unk16': 0
}
```

Each row is a unique user denoted by the `pid`, an unsigned 64-bit integer stored as a string due to database limitations. `data_id` is not used by Nintendo but, like levels, it counts up sequentially and can be used to determine account age. `mii_data` is a `charinfo` type Switch Mii. `mii_image` can be used with Nintendo's online studio API to generate images:

```python
from datasets import load_dataset

ds = load_dataset("TheGreatRambler/mm2_user", streaming=True, split="train")
mii_image = next(iter(ds))["mii_image"]
print("Face: https://studio.mii.nintendo.com/miis/image.png?data=%s&type=face&width=512&instanceCount=1" % mii_image)
print("Body: https://studio.mii.nintendo.com/miis/image.png?data=%s&type=all_body&width=512&instanceCount=1" % mii_image)
print("Face (x16): https://studio.mii.nintendo.com/miis/image.png?data=%s&type=face&width=512&instanceCount=16" % mii_image)
print("Body (x16): https://studio.mii.nintendo.com/miis/image.png?data=%s&type=all_body&width=512&instanceCount=16" % mii_image)
```

`pose`, `hat`, `shirt` and `pants` have associated enums described below. `last_active` and `last_uploaded_level` are UTC timestamps.
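For example, assuming these timestamps are Unix epoch seconds (consistent with the sample value above, but an assumption not stated in the original card), they can be converted with the standard library:

```python
from datetime import datetime, timezone

from datasets import load_dataset

ds = load_dataset("TheGreatRambler/mm2_user", streaming=True, split="train")
user = next(iter(ds))

# Interpret both fields as UTC epoch seconds
last_active = datetime.fromtimestamp(user["last_active"], tz=timezone.utc)
last_upload = datetime.fromtimestamp(user["last_uploaded_level"], tz=timezone.utc)
print(last_active.isoformat(), last_upload.isoformat())
```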
`super_world_id`, if not empty, provides the ID of a super world in `TheGreatRambler/mm2_world`.

You can also download the full dataset. Note that this will download ~1.2GB:

```python
ds = load_dataset("TheGreatRambler/mm2_user", split="train")
```

## Data Structure

### Data Instances

```python
{
 'pid': '14608829447232141607',
 'data_id': 1,
 'region': 0,
 'name': 'げんまい',
 'country': 'JP',
 'last_active': 1578384457,
 'mii_data': [some binary data],
 'mii_image': '000f165d6574777a7881949e9da1acc1cac7cacad3dad9e0eff2f9faf900430a151c25384258637084878e8b96a0b0',
 'pose': 0,
 'hat': 0,
 'shirt': 0,
 'pants': 0,
 'wearing_outfit': 0,
 'courses_played': 12,
 'courses_cleared': 10,
 'courses_attempted': 23,
 'courses_deaths': 13,
 'likes': 0,
 'maker_points': 0,
 'easy_highscore': 0,
 'normal_highscore': 0,
 'expert_highscore': 0,
 'super_expert_highscore': 0,
 'versus_rating': 0,
 'versus_rank': 1,
 'versus_won': 0,
 'versus_lost': 1,
 'versus_win_streak': 0,
 'versus_lose_streak': 1,
 'versus_plays': 1,
 'versus_disconnected': 0,
 'coop_clears': 1,
 'coop_plays': 1,
 'recent_performance': 1383,
 'versus_kills': 0,
 'versus_killed_by_others': 0,
 'multiplayer_unk13': 286,
 'multiplayer_unk14': 5999927,
 'first_clears': 0,
 'world_records': 0,
 'unique_super_world_clears': 0,
 'uploaded_levels': 0,
 'maximum_uploaded_levels': 100,
 'weekly_maker_points': 0,
 'last_uploaded_level': 1561555201,
 'is_nintendo_employee': 0,
 'comments_enabled': 1,
 'tags_enabled': 0,
 'super_world_id': '',
 'unk3': 0,
 'unk12': 0,
 'unk16': 0
}
```

### Data Fields

|Field|Type|Description|
|---|---|---|
|pid|string|The player ID of this user, an unsigned 64 bit integer as a string|
|data_id|int|The data ID of this user; while not used internally, user codes are generated from it|
|region|int|User region, enum below|
|name|string|User name|
|country|string|User country as a 2 letter ALPHA-2 code|
|last_active|int|UTC timestamp of when this user was last active; it is not known what constitutes "active"|
|mii_data|bytes|The CHARINFO blob of this user's Mii|
|mii_image|string|A string that can be fed into Nintendo's studio API to generate an image|
|pose|int|Pose, enum below|
|hat|int|Hat, enum below|
|shirt|int|Shirt, enum below|
|pants|int|Pants, enum below|
|wearing_outfit|bool|Whether this user's shirt is a full-body outfit (see the `UserIsOutfit` enum below, checked against the shirt value)|
|courses_played|int|How many courses this user has played|
|courses_cleared|int|How many courses this user has cleared|
|courses_attempted|int|How many courses this user has attempted|
|courses_deaths|int|How many times this user has died|
|likes|int|How many likes this user has received|
|maker_points|int|Maker points|
|easy_highscore|int|Easy highscore|
|normal_highscore|int|Normal highscore|
|expert_highscore|int|Expert highscore|
|super_expert_highscore|int|Super expert highscore|
|versus_rating|int|Versus rating|
|versus_rank|int|Versus rank, enum below|
|versus_won|int|How many courses this user has won in versus|
|versus_lost|int|How many courses this user has lost in versus|
|versus_win_streak|int|Versus win streak|
|versus_lose_streak|int|Versus lose streak|
|versus_plays|int|Versus plays|
|versus_disconnected|int|Times this user has disconnected in versus|
|coop_clears|int|Coop clears|
|coop_plays|int|Coop plays|
|recent_performance|int|Unknown variable relating to versus performance|
|versus_kills|int|Kills in versus; it is not known what activities constitute a kill|
|versus_killed_by_others|int|Deaths in versus from other users; little is known about what activities constitute a death|
|multiplayer_unk13|int|Unknown, relating to multiplayer|
|multiplayer_unk14|int|Unknown, relating to multiplayer|
|first_clears|int|First clears|
|world_records|int|World records|
|unique_super_world_clears|int|Unique super world clears|
|uploaded_levels|int|Number of uploaded levels|
|maximum_uploaded_levels|int|Maximum number of levels this user may upload|
|weekly_maker_points|int|Weekly maker points|
|last_uploaded_level|int|UTC timestamp of when this user last uploaded a level|
|is_nintendo_employee|bool|Whether this user is an official Nintendo account|
|comments_enabled|bool|Whether this user has comments enabled on their levels|
|tags_enabled|bool|Whether this user has tags enabled on their levels|
|super_world_id|string|The ID of this user's super world, blank if they do not have one|
|unk3|int|Unknown|
|unk12|int|Unknown|
|unk16|int|Unknown|

### Data Splits

The dataset only contains a train split.

## Enums

The dataset contains some enum integer fields. These can be used to convert the values back to their string equivalents:

```python
Regions = {
    0: "Asia",
    1: "Americas",
    2: "Europe",
    3: "Other"
}

MultiplayerVersusRanks = {
    1: "D",
    2: "C",
    3: "B",
    4: "A",
    5: "S",
    6: "S+"
}

UserPose = {
    0: "Normal",
    15: "Fidgety",
    17: "Annoyed",
    18: "Buoyant",
    19: "Thrilled",
    20: "Let's go!",
    21: "Hello!",
    29: "Show-Off",
    31: "Cutesy",
    39: "Hyped!"
}

UserHat = {
    0: "None", 1: "Mario Cap", 2: "Luigi Cap", 4: "Mushroom Hairclip",
    5: "Bowser Headpiece", 8: "Princess Peach Wig", 11: "Builder Hard Hat",
    12: "Bowser Jr. Headpiece", 13: "Pipe Hat", 15: "Cat Mario Headgear",
    16: "Propeller Mario Helmet", 17: "Cheep Cheep Hat", 18: "Yoshi Hat",
    21: "Faceplant", 22: "Toad Cap", 23: "Shy Cap", 24: "Magikoopa Hat",
    25: "Fancy Top Hat", 26: "Doctor Headgear", 27: "Rocky Wrench Manhole Lid",
    28: "Super Star Barrette", 29: "Rosalina Wig", 30: "Fried-Chicken Headgear",
    31: "Royal Crown", 32: "Edamame Barrette", 33: "Superball Mario Hat",
    34: "Robot Cap", 35: "Frog Cap", 36: "Cheetah Headgear", 37: "Ninji Cap",
    38: "Super Acorn Hat", 39: "Pokey Hat", 40: "Snow Pokey Hat"
}

UserShirt = {
    0: "Nintendo Shirt", 1: "Mario Outfit", 2: "Luigi Outfit",
    3: "Super Mushroom Shirt", 5: "Blockstripe Shirt", 8: "Bowser Suit",
    12: "Builder Mario Outfit", 13: "Princess Peach Dress", 16: "Nintendo Uniform",
    17: "Fireworks Shirt", 19: "Refreshing Shirt", 21: "Reset Dress",
    22: "Thwomp Suit", 23: "Slobbery Shirt", 26: "Cat Suit",
    27: "Propeller Mario Clothes", 28: "Banzai Bill Shirt", 29: "Staredown Shirt",
    31: "Yoshi Suit", 33: "Midnight Dress", 34: "Magikoopa Robes",
    35: "Doctor Coat", 37: "Chomp-Dog Shirt", 38: "Fish Bone Shirt",
    40: "Toad Outfit", 41: "Googoo Onesie", 42: "Matrimony Dress",
    43: "Fancy Tuxedo", 44: "Koopa Troopa Suit", 45: "Laughing Shirt",
    46: "Running Shirt", 47: "Rosalina Dress", 49: "Angry Sun Shirt",
    50: "Fried-Chicken Hoodie", 51: "? Block Hoodie",
Block Hoodie", 52: "Edamame Camisole", 53: "I-Like-You Camisole", 54: "White Tanktop", 55: "Hot Hot Shirt", 56: "Royal Attire", 57: "Superball Mario Suit", 59: "Partrick Shirt", 60: "Robot Suit", 61: "Superb Suit", 62: "Yamamura Shirt", 63: "Princess Peach Tennis Outfit", 64: "1-Up Hoodie", 65: "Cheetah Tanktop", 66: "Cheetah Suit", 67: "Ninji Shirt", 68: "Ninji Garb", 69: "Dash Block Hoodie", 70: "Fire Mario Shirt", 71: "Raccoon Mario Shirt", 72: "Cape Mario Shirt", 73: "Flying Squirrel Mario Shirt", 74: "Cat Mario Shirt", 75: "World Wear", 76: "Koopaling Hawaiian Shirt", 77: "Frog Mario Raincoat", 78: "Phanto Hoodie" } UserPants = { 0: "Black Short-Shorts", 1: "Denim Jeans", 5: "Denim Skirt", 8: "Pipe Skirt", 9: "Skull Skirt", 10: "Burner Skirt", 11: "Cloudwalker", 12: "Platform Skirt", 13: "Parent-and-Child Skirt", 17: "Mario Swim Trunks", 22: "Wind-Up Shoe", 23: "Hoverclown", 24: "Big-Spender Shorts", 25: "Shorts of Doom!", 26: "Doorduroys", 27: "Antsy Corduroys", 28: "Bouncy Skirt", 29: "Stingby Skirt", 31: "Super Star Flares", 32: "Cheetah Runners", 33: "Ninji Slacks" } # Checked against user's shirt UserIsOutfit = { 0: False, 1: True, 2: True, 3: False, 5: False, 8: True, 12: True, 13: True, 16: False, 17: False, 19: False, 21: True, 22: True, 23: False, 26: True, 27: True, 28: False, 29: False, 31: True, 33: True, 34: True, 35: True, 37: False, 38: False, 40: True, 41: True, 42: True, 43: True, 44: True, 45: False, 46: False, 47: True, 49: False, 50: False, 51: False, 52: False, 53: False, 54: False, 55: False, 56: True, 57: True, 59: False, 60: True, 61: True, 62: False, 63: True, 64: False, 65: False, 66: True, 67: False, 68: True, 69: False, 70: False, 71: False, 72: False, 73: False, 74: False, 75: True, 76: False, 77: True, 78: False } ``` <!-- TODO create detailed statistics --> ## Dataset Creation The dataset was created over a little more than a month in Febuary 2022 using the self hosted [Mario Maker 2 api](https://tgrcode.com/posts/mario_maker_2_api). As requests made to Nintendo's servers require authentication the process had to be done with upmost care and limiting download speed as to not overload the API and risk a ban. There are no intentions to create an updated release of this dataset. ## Considerations for Using the Data The dataset consists of many different Mario Maker 2 players globally and as such their names could contain harmful language. Harmful depictions could also be present in their Miis, should you choose to render it.
false
# Mario Maker 2 user badges

Part of the [Mario Maker 2 Dataset Collection](https://tgrcode.com/posts/mario_maker_2_datasets)

## Dataset Description

The Mario Maker 2 user badges dataset consists of 9328 user badges (badges are capped to 10k globally) from Nintendo's online service and adds onto `TheGreatRambler/mm2_user`. The dataset was created using the self-hosted [Mario Maker 2 api](https://tgrcode.com/posts/mario_maker_2_api) over the course of 1 month in February 2022.

### How to use it

You can load and iterate through the dataset with the following code:

```python
from datasets import load_dataset

ds = load_dataset("TheGreatRambler/mm2_user_badges", split="train")
print(next(iter(ds)))

#OUTPUT:
{
 'pid': '1779763691699286988',
 'type': 4,
 'rank': 6
}
```

Each row is a badge awarded to the player denoted by `pid`. `TheGreatRambler/mm2_user` contains these players.

## Data Structure

### Data Instances

```python
{
 'pid': '1779763691699286988',
 'type': 4,
 'rank': 6
}
```

### Data Fields

|Field|Type|Description|
|---|---|---|
|pid|string|Player ID|
|type|int|The kind of badge, enum below|
|rank|int|The rank of the badge, enum below|

### Data Splits

The dataset only contains a train split.

## Enums

The dataset contains some enum integer fields. These can be used to convert the values back to their string equivalents:

```python
BadgeTypes = {
    0: "Maker Points (All-Time)",
    1: "Endless Challenge (Easy)",
    2: "Endless Challenge (Normal)",
    3: "Endless Challenge (Expert)",
    4: "Endless Challenge (Super Expert)",
    5: "Multiplayer Versus",
    6: "Number of Clears",
    7: "Number of First Clears",
    8: "Number of World Records",
    9: "Maker Points (Weekly)"
}

BadgeRanks = {
    6: "Bronze",
    5: "Silver",
    4: "Gold",
    3: "Bronze Ribbon",
    2: "Silver Ribbon",
    1: "Gold Ribbon"
}
```

<!-- TODO create detailed statistics -->

## Dataset Creation

The dataset was created over a little more than a month in February 2022 using the self-hosted [Mario Maker 2 api](https://tgrcode.com/posts/mario_maker_2_api). Because requests made to Nintendo's servers require authentication, the process had to be done with the utmost care, limiting download speed so as not to overload the API and risk a ban. There are no intentions to create an updated release of this dataset.

## Considerations for Using the Data

The dataset contains no harmful language or depictions.
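As a closing illustration (not part of the original card), the dataset is small enough to scan in full, so listing every badge held by a given player is straightforward. The target `pid` below is the one from the sample row above and is only an example; the enum dictionaries above are assumed to be in scope.

```python
from datasets import load_dataset

ds = load_dataset("TheGreatRambler/mm2_user_badges", split="train")

target_pid = "1779763691699286988"  # example player ID from the sample row above
for badge in ds:
    if badge["pid"] == target_pid:
        print(BadgeTypes[badge["type"]], "/", BadgeRanks[badge["rank"]])
```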
false
# Mario Maker 2 user plays

Part of the [Mario Maker 2 Dataset Collection](https://tgrcode.com/posts/mario_maker_2_datasets)

## Dataset Description

The Mario Maker 2 user plays dataset consists of 329.8 million user plays from Nintendo's online service totaling around 2GB of data. The dataset was created using the self-hosted [Mario Maker 2 api](https://tgrcode.com/posts/mario_maker_2_api) over the course of 1 month in February 2022.

### How to use it

The Mario Maker 2 user plays dataset is a very large dataset, so for most use cases it is recommended to use the streaming API of `datasets`. You can load and iterate through the dataset with the following code:

```python
from datasets import load_dataset

ds = load_dataset("TheGreatRambler/mm2_user_played", streaming=True, split="train")
print(next(iter(ds)))

#OUTPUT:
{
 'pid': '4920036968545706712',
 'data_id': 25548552
}
```

Each row is a unique play in the level denoted by `data_id` done by the player denoted by `pid`.

You can also download the full dataset. Note that this will download ~2GB:

```python
ds = load_dataset("TheGreatRambler/mm2_user_played", split="train")
```

## Data Structure

### Data Instances

```python
{
 'pid': '4920036968545706712',
 'data_id': 25548552
}
```

### Data Fields

|Field|Type|Description|
|---|---|---|
|pid|string|The player ID of this user, an unsigned 64 bit integer as a string|
|data_id|int|The data ID of the level this user played|

### Data Splits

The dataset only contains a train split.

<!-- TODO create detailed statistics -->

## Dataset Creation

The dataset was created over a little more than a month in February 2022 using the self-hosted [Mario Maker 2 api](https://tgrcode.com/posts/mario_maker_2_api). Because requests made to Nintendo's servers require authentication, the process had to be done with the utmost care, limiting download speed so as not to overload the API and risk a ban. There are no intentions to create an updated release of this dataset.

## Considerations for Using the Data

The dataset contains no harmful language or depictions.
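As a closing illustration (not part of the original card), the table is essentially a (`pid`, `data_id`) join table, so a common use is collecting the set of levels each user has played. The 100,000-row cutoff below is an arbitrary choice for demonstration.

```python
from collections import defaultdict

from datasets import load_dataset

ds = load_dataset("TheGreatRambler/mm2_user_played", streaming=True, split="train")

levels_played = defaultdict(set)  # pid -> set of level data_ids
for i, row in enumerate(ds):
    if i >= 100_000:  # only scan a prefix of the stream for demonstration
        break
    levels_played[row["pid"]].add(row["data_id"])

print(len(levels_played), "users seen in this sample")
```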
false
# Mario Maker 2 user likes

Part of the [Mario Maker 2 Dataset Collection](https://tgrcode.com/posts/mario_maker_2_datasets)

## Dataset Description

The Mario Maker 2 user likes dataset consists of 105.5 million user likes from Nintendo's online service totaling around 630MB of data. The dataset was created using the self-hosted [Mario Maker 2 api](https://tgrcode.com/posts/mario_maker_2_api) over the course of 1 month in February 2022.

### How to use it

The Mario Maker 2 user likes dataset is a very large dataset, so for most use cases it is recommended to use the streaming API of `datasets`. You can load and iterate through the dataset with the following code:

```python
from datasets import load_dataset

ds = load_dataset("TheGreatRambler/mm2_user_liked", streaming=True, split="train")
print(next(iter(ds)))

#OUTPUT:
{
 'pid': '14510618610706594411',
 'data_id': 25861713
}
```

Each row is a unique like on the level denoted by `data_id` given by the player denoted by `pid`.

You can also download the full dataset. Note that this will download ~630MB:

```python
ds = load_dataset("TheGreatRambler/mm2_user_liked", split="train")
```

## Data Structure

### Data Instances

```python
{
 'pid': '14510618610706594411',
 'data_id': 25861713
}
```

### Data Fields

|Field|Type|Description|
|---|---|---|
|pid|string|The player ID of this user, an unsigned 64 bit integer as a string|
|data_id|int|The data ID of the level this user liked|

### Data Splits

The dataset only contains a train split.

<!-- TODO create detailed statistics -->

## Dataset Creation

The dataset was created over a little more than a month in February 2022 using the self-hosted [Mario Maker 2 api](https://tgrcode.com/posts/mario_maker_2_api). Because requests made to Nintendo's servers require authentication, the process had to be done with the utmost care, limiting download speed so as not to overload the API and risk a ban. There are no intentions to create an updated release of this dataset.

## Considerations for Using the Data

The dataset contains no harmful language or depictions.
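As a closing illustration (not part of the original card), counting likes per level only requires grouping on `data_id`. The 100,000-row cutoff below is an arbitrary choice for demonstration.

```python
from collections import Counter

from datasets import load_dataset

ds = load_dataset("TheGreatRambler/mm2_user_liked", streaming=True, split="train")

likes_per_level = Counter()
for i, row in enumerate(ds):
    if i >= 100_000:  # only scan a prefix of the stream for demonstration
        break
    likes_per_level[row["data_id"]] += 1

# Most-liked levels within the sample
print(likes_per_level.most_common(10))
```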
false
# Mario Maker 2 user uploaded

Part of the [Mario Maker 2 Dataset Collection](https://tgrcode.com/posts/mario_maker_2_datasets)

## Dataset Description

The Mario Maker 2 user uploaded dataset consists of 26.5 million uploaded user levels from Nintendo's online service totaling around 215MB of data. The dataset was created using the self-hosted [Mario Maker 2 api](https://tgrcode.com/posts/mario_maker_2_api) over the course of 1 month in February 2022.

### How to use it

The Mario Maker 2 user uploaded dataset is a very large dataset, so for most use cases it is recommended to use the streaming API of `datasets`. You can load and iterate through the dataset with the following code:

```python
from datasets import load_dataset

ds = load_dataset("TheGreatRambler/mm2_user_posted", streaming=True, split="train")
print(next(iter(ds)))

#OUTPUT:
{
 'pid': '10491033288855085861',
 'data_id': 27359486
}
```

Each row is a unique uploaded level denoted by `data_id` uploaded by the player denoted by `pid`.

You can also download the full dataset. Note that this will download ~215MB:

```python
ds = load_dataset("TheGreatRambler/mm2_user_posted", split="train")
```

## Data Structure

### Data Instances

```python
{
 'pid': '10491033288855085861',
 'data_id': 27359486
}
```

### Data Fields

|Field|Type|Description|
|---|---|---|
|pid|string|The player ID of this user, an unsigned 64 bit integer as a string|
|data_id|int|The data ID of the level this user uploaded|

### Data Splits

The dataset only contains a train split.

<!-- TODO create detailed statistics -->

## Dataset Creation

The dataset was created over a little more than a month in February 2022 using the self-hosted [Mario Maker 2 api](https://tgrcode.com/posts/mario_maker_2_api). Because requests made to Nintendo's servers require authentication, the process had to be done with the utmost care, limiting download speed so as not to overload the API and risk a ban. There are no intentions to create an updated release of this dataset.

## Considerations for Using the Data

The dataset contains no harmful language or depictions.
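As a closing illustration (not part of the original card), grouping uploads by uploader produces a mapping that can then be joined against `TheGreatRambler/mm2_user` on `pid`. The 100,000-row cutoff below is an arbitrary choice for demonstration.

```python
from collections import defaultdict

from datasets import load_dataset

ds = load_dataset("TheGreatRambler/mm2_user_posted", streaming=True, split="train")

uploads = defaultdict(list)  # pid -> list of uploaded level data_ids
for i, row in enumerate(ds):
    if i >= 100_000:  # only scan a prefix of the stream for demonstration
        break
    uploads[row["pid"]].append(row["data_id"])

# Most prolific uploader within the sample
most_prolific = max(uploads, key=lambda pid: len(uploads[pid]))
print(most_prolific, len(uploads[most_prolific]))
```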