Schema of the dataset index (column, type, and value/length range):

| Column | Type | Min | Max |
|--|--|--|--|
| id | string (length) | 2 | 115 |
| lastModified | string (length) | 24 | 24 |
| tags | list | | |
| author | string (length) | 2 | 42 |
| description | string (length) | 0 | 6.67k |
| citation | string (length) | 0 | 10.7k |
| likes | int64 | 0 | 3.66k |
| downloads | int64 | 0 | 8.89M |
| created | timestamp[us] | | |
| card | string (length) | 11 | 977k |
| card_len | int64 | 11 | 977k |
| embeddings | list | | |
dalle-mini/wit
2021-09-14T02:48:56.000Z
[ "region:us" ]
dalle-mini
null
null
5
87
2022-03-02T23:29:22
Entry not found
15
[ [ -0.02142333984375, -0.01495361328125, 0.05718994140625, 0.0288238525390625, -0.035064697265625, 0.046539306640625, 0.052520751953125, 0.005062103271484375, 0.0513916015625, 0.016998291015625, -0.052093505859375, -0.014984130859375, -0.060394287109375, 0.0379...
damlab/HIV_FLT
2022-02-08T20:58:56.000Z
[ "region:us" ]
damlab
null
null
0
87
2022-03-02T23:29:22
# Dataset Description

## Dataset Summary

This dataset was derived from the Los Alamos National Laboratory (LANL) HIV sequence database. It contains the most recent version (2016 full genome), composed of 1,609 high-quality full-length genomes. The genes within these sequences were processed using the GeneCutter tool and translated into corresponding amino acid sequences using the BioPython `Seq.translate` function.

Supported Tasks and Leaderboards: None

Languages: English

## Dataset Structure

### Data Instances

Each column represents the protein amino acid sequence of the HIV genome. The ID field indicates the GenBank reference ID for future cross-referencing. There are 1,609 full-length HIV genomes.

Data Fields: ID, gag, pol, env, nef, tat, rev, proteome

Data Splits: None

## Dataset Creation

Curation Rationale: This dataset was curated to train a model (HIV-BERT) designed to predict a variety of sequence-dependent features of HIV.

Initial Data Collection and Normalization: The dataset was downloaded and curated on 12/21/2021.

## Considerations for Using the Data

Social Impact of Dataset: This dataset can be used to study sequence-dependent features of HIV, a virus that has claimed the lives of many individuals globally in the last few decades.

Discussion of Biases: This dataset was derived from the Los Alamos National Laboratory (LANL) HIV full-genome database and contains a representative sample from each subtype and geographic region.

## Additional Information

- Dataset Curators: Will Dampier
- Citation Information: TBA
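The card's translation step relies on BioPython's `Seq.translate`. As a rough, self-contained illustration of what in-frame codon translation does (the codon table below is a tiny abbreviated subset of the standard table, chosen only to cover the example sequence — it is not the dataset's processing code):

```python
# Minimal sketch of codon-to-amino-acid translation, illustrating the idea
# behind BioPython's Seq.translate. The table is an abbreviated subset of the
# standard codon table, just enough for the example sequence below.
CODON_TABLE = {
    "ATG": "M",  # start / methionine
    "GGT": "G",  # glycine
    "GCC": "A",  # alanine
    "AGA": "R",  # arginine
    "TAA": "*",  # stop
}

def translate(dna: str) -> str:
    """Translate a DNA sequence, reading in-frame codons until a stop codon."""
    protein = []
    for i in range(0, len(dna) - len(dna) % 3, 3):
        aa = CODON_TABLE[dna[i:i + 3]]
        if aa == "*":
            break
        protein.append(aa)
    return "".join(protein)

print(translate("ATGGGTGCCAGATAA"))  # prints MGAR
```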
1,616
[ [ -0.0158233642578125, -0.04144287109375, -0.0015592575073242188, 0.007663726806640625, -0.007534027099609375, 0.002941131591796875, 0.014434814453125, -0.03302001953125, 0.0293121337890625, 0.031585693359375, -0.04022216796875, -0.036712646484375, -0.042144775390...
davanstrien/19th-century-ads
2022-01-18T15:15:02.000Z
[ "region:us" ]
davanstrien
null
null
0
87
2022-03-02T23:29:22
Entry not found
15
[ [ -0.021392822265625, -0.01494598388671875, 0.05718994140625, 0.028839111328125, -0.0350341796875, 0.046539306640625, 0.052490234375, 0.00507354736328125, 0.051361083984375, 0.01702880859375, -0.052093505859375, -0.01494598388671875, -0.06036376953125, 0.03790...
davanstrien/manuscript_iiif_test
2022-02-05T11:43:31.000Z
[ "region:us" ]
davanstrien
null
null
0
87
2022-03-02T23:29:22
Entry not found
15
[ [ -0.021392822265625, -0.01494598388671875, 0.05718994140625, 0.028839111328125, -0.0350341796875, 0.046539306640625, 0.052490234375, 0.00507354736328125, 0.051361083984375, 0.01702880859375, -0.052093505859375, -0.01494598388671875, -0.06036376953125, 0.03790...
DebateLabKIT/deepa2
2022-12-16T14:49:35.000Z
[ "task_categories:text-retrieval", "task_categories:text-generation", "task_ids:text-simplification", "task_ids:parsing", "language_creators:other", "multilinguality:monolingual", "size_categories:unknown", "language:en", "license:other", "argument-mining", "summarization", "conditional-text-ge...
DebateLabKIT
null
null
3
87
2022-03-02T23:29:22
---
annotations_creators: []
language_creators:
- other
language:
- en
license:
- other
multilinguality:
- monolingual
size_categories:
- unknown
source_datasets: []
task_categories:
- text-retrieval
- text-generation
task_ids:
- text-simplification
- parsing
pretty_name: deepa2
tags:
- argument-mining
- summarization
- conditional-text-generation
- structure-prediction
---

# `deepa2` Datasets Collection

## Table of Contents

- [`deepa2` Datasets Collection](#deepa2-datasets-collection)
  - [Table of Contents](#table-of-contents)
  - [Dataset Description](#dataset-description)
    - [Dataset Summary](#dataset-summary)
    - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
    - [Languages](#languages)
  - [Dataset Structure](#dataset-structure)
    - [Sub-Datasets](#sub-datasets)
    - [Data Instances](#data-instances)
    - [Data Fields](#data-fields)
    - [Data Splits](#data-splits)
  - [Dataset Creation](#dataset-creation)
    - [Curation Rationale](#curation-rationale)
    - [Source Data](#source-data)
      - [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
      - [Who are the source language producers?](#who-are-the-source-language-producers)
    - [Annotations](#annotations)
      - [Annotation process](#annotation-process)
      - [Who are the annotators?](#who-are-the-annotators)
    - [Personal and Sensitive Information](#personal-and-sensitive-information)
  - [Considerations for Using the Data](#considerations-for-using-the-data)
    - [Social Impact of Dataset](#social-impact-of-dataset)
    - [Discussion of Biases](#discussion-of-biases)
    - [Other Known Limitations](#other-known-limitations)
  - [Additional Information](#additional-information)
    - [Dataset Curators](#dataset-curators)
    - [Licensing Information](#licensing-information)
    - [Citation Information](#citation-information)
    - [Contributions](#contributions)

## Dataset Description

- **Homepage:** [blog post](https://debatelab.github.io/journal/deepa2.html)
- **Repository:** [github](https://github.com/debatelab/deepa2)
- **Paper:** [arxiv](https://arxiv.org/abs/2110.01509)
- **Point of Contact:** [Gregor Betz](gregor.betz@kit.edu)

### Dataset Summary

This is a growing, curated collection of `deepa2` datasets, i.e. datasets that contain comprehensive logical analyses of argumentative texts. The collection comprises:

* datasets that are built from existing NLP datasets by means of the [`deepa2 bake`](https://github.com/debatelab/deepa2) tool.
* original `deepa2` datasets specifically created for this collection.

The tool [`deepa2 serve`](https://github.com/debatelab/deepa2#integrating-deepa2-into-your-training-pipeline) may be used to render the data in this collection as text2text examples.

### Supported Tasks and Leaderboards

- `conditional-text-generation`: The dataset can be used to train models to generate a full reconstruction of an argument from a source text, making, e.g., its implicit assumptions explicit.
- `structure-prediction`: The dataset can be used to train models to formalize sentences.
- `text-retrieval`: The dataset can be used to train models to extract reason statements and conjectures from a given source text.

### Languages

English. Will be extended to cover other languages in the future.

## Dataset Structure

### Sub-Datasets

This collection contains the following `deepa2` datasets:

* `esnli`: created from e-SNLI with `deepa2 bake` as [described here](https://github.com/debatelab/deepa2/blob/main/docs/esnli.md).
* `enbank` (`task_1`, `task_2`): created from Entailment Bank with `deepa2 bake` as [described here](https://github.com/debatelab/deepa2/blob/main/docs/enbank.md).
* `argq`: created from IBM-ArgQ with `deepa2 bake` as [described here](https://github.com/debatelab/deepa2/blob/main/docs/argq.md).
* `argkp`: created from IBM-KPA with `deepa2 bake` as [described here](https://github.com/debatelab/deepa2/blob/main/docs/argkp.md).
* `aifdb` (`moral-maze`, `us2016`, `vacc-itc`): created from AIFdb with `deepa2 bake` as [described here](https://github.com/debatelab/deepa2/blob/main/docs/aifdb.md).
* `aaac` (`aaac01` and `aaac02`): original, machine-generated contribution; based on an improved and extended algorithm that backs https://huggingface.co/datasets/debatelab/aaac.

### Data Instances

See: https://github.com/debatelab/deepa2/tree/main/docs

### Data Fields

See: https://github.com/debatelab/deepa2/tree/main/docs

|feature|esnli|enbank|aifdb|aaac|argq|argkp|
|--|--|--|--|--|--|--|
| `source_text` | x | x | x | x | x | x |
| `title` | | x | | x | | |
| `gist` | x | x | | x | | x |
| `source_paraphrase` | x | x | x | x | | |
| `context` | | x | | x | | x |
| `reasons` | x | x | x | x | x | |
| `conjectures` | x | x | x | x | x | |
| `argdown_reconstruction` | x | x | | x | | x |
| `erroneous_argdown` | x | | | x | | |
| `premises` | x | x | | x | | x |
| `intermediary_conclusion` | | | | x | | |
| `conclusion` | x | x | | x | | x |
| `premises_formalized` | x | | | x | | x |
| `intermediary_conclusion_formalized` | | | | x | | |
| `conclusion_formalized` | x | | | x | | x |
| `predicate_placeholders` | | | | x | | |
| `entity_placeholders` | | | | x | | |
| `misc_placeholders` | x | | | x | | x |
| `plchd_substitutions` | x | | | x | | x |

### Data Splits

Each sub-dataset contains three splits: `train`, `validation`, and `test`.

## Dataset Creation

### Curation Rationale

Many NLP datasets focus on tasks that are relevant for logical analysis and argument reconstruction. This collection is an attempt to unify these resources in a common framework.
### Source Data

See: [Sub-Datasets](#sub-datasets)

## Additional Information

### Dataset Curators

Gregor Betz, KIT; Kyle Richardson, Allen AI

### Licensing Information

We redistribute the imported sub-datasets under their original licenses:

|Sub-dataset|License|
|--|--|
|esnli|MIT|
|aifdb|free for academic use ([TOU](https://arg-tech.org/index.php/research/argument-corpora/))|
|enbank|CC BY 4.0|
|aaac|CC BY 4.0|
|argq|CC BY SA 4.0|
|argkp|Apache|

### Citation Information

```
@article{betz2021deepa2,
  title={DeepA2: A Modular Framework for Deep Argument Analysis with Pretrained Neural Text2Text Language Models},
  author={Gregor Betz and Kyle Richardson},
  year={2021},
  eprint={2110.01509},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}
```
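The formalized fields (`premises_formalized`, `conclusion_formalized`, ...) go together with the placeholder fields and `plchd_substitutions`. As a purely illustrative sketch of that relationship (the formula and substitution pairs below are invented, not records from the collection), resolving placeholders amounts to a simple string substitution:

```python
# Illustrative sketch only: substituting placeholder symbols in a formalized
# statement with their natural-language counterparts. The example formula and
# substitution pairs are made up; they are not actual deepa2 records.
def substitute_placeholders(formalization: str, substitutions: dict[str, str]) -> str:
    """Replace each placeholder symbol (e.g. 'F', 'G') with its substitution."""
    for placeholder, text in substitutions.items():
        formalization = formalization.replace(placeholder, text)
    return formalization

conclusion_formalized = "(x): F x -> G x"
plchd_substitutions = {"F": "is-an-argument", "G": "can-be-reconstructed"}

print(substitute_placeholders(conclusion_formalized, plchd_substitutions))
# prints: (x): is-an-argument x -> can-be-reconstructed x
```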
6,988
[ [ -0.039642333984375, -0.06365966796875, 0.0338134765625, -0.001789093017578125, -0.020538330078125, -0.01079559326171875, -0.01490020751953125, -0.01788330078125, 0.0152740478515625, 0.03582763671875, -0.036041259765625, -0.036529541015625, -0.05517578125, -0...
dragosnicolae555/RoITD
2022-10-25T09:07:43.000Z
[ "task_categories:question-answering", "task_ids:extractive-qa", "annotations_creators:crowdsourced", "language_creators:crowdsourced", "multilinguality:monolingual", "size_categories:unknown", "source_datasets:original", "language:ro-RO", "license:cc-by-4.0", "region:us" ]
dragosnicolae555
null
null
0
87
2022-03-02T23:29:22
---
annotations_creators:
- crowdsourced
language_creators:
- crowdsourced
language:
- ro-RO
license:
- cc-by-4.0
multilinguality:
- monolingual
pretty_name: 'RoITD: Romanian IT Question Answering Dataset'
size_categories:
- unknown
source_datasets:
- original
task_categories:
- question-answering
task_ids:
- extractive-qa
---

## Dataset Summary

We introduce the Romanian IT Dataset (RoITD), resembling SQuAD 1.1. RoITD consists of 9,575 Romanian QA pairs formulated by crowd workers. The QA pairs are based on 5,043 Romanian Wikipedia articles describing IT and household products. Of the total number of questions, 5,103 are possible (i.e. the correct answer can be found within the paragraph) and 4,472 are not possible (i.e. the given answer is a "plausible answer" rather than a correct one).

## Dataset Structure

The data structure follows the SQuAD format, with attributes such as **question**, **id**, **text**, **answer_start**, **is_impossible**, and **context**. The paragraph shown to crowdsourcing workers is stored in the field **context**, which holds manually selected paragraphs from Wikipedia. The field **id** is a randomly assigned unique identifier for the question-answer pair. Only the values "0" and "1" are allowed in the **is_impossible** field: category "A" is assigned the value "0", indicating that the answer is correct, while the value "1" corresponds to category "U", indicating a merely plausible answer. The field **question** holds the question posed by the crowd worker, and **answer_start** records the character index marking the beginning of the answer.
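A minimal sketch of what one SQuAD-style record with these attributes could look like. All field values below are invented for illustration; only the field names and the 0/1 convention come from the description above:

```python
import json

# Hypothetical SQuAD 1.1-style record; context, question, and answer text are
# invented examples. answer_start is a character offset into the context.
record = {
    "id": "example-000042",
    "context": "Router-ul wireless are patru porturi Ethernet si suporta Wi-Fi 5.",
    "question": "Cate porturi Ethernet are router-ul?",
    "is_impossible": 0,  # 0 = answerable (category "A"), 1 = plausible only (category "U")
    "answers": [{"text": "patru", "answer_start": 23}],
}

# The answer span can be recovered by slicing the context at answer_start:
answer = record["answers"][0]
start = answer["answer_start"]
assert record["context"][start:start + len(answer["text"])] == answer["text"]

print(json.dumps(record, ensure_ascii=False))
```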
1,690
[ [ -0.03326416015625, -0.0517578125, 0.00733184814453125, 0.04150390625, -0.00762939453125, -0.0026264190673828125, 0.011688232421875, -0.0225982666015625, 0.0268096923828125, 0.037353515625, -0.06787109375, -0.026641845703125, -0.0214385986328125, 0.0285339355...
CEBaB/CEBaB
2022-08-16T21:54:47.000Z
[ "region:us" ]
CEBaB
null
null
5
87
2022-05-09T22:51:59
Entry not found
15
[ [ -0.021392822265625, -0.01494598388671875, 0.05718994140625, 0.028839111328125, -0.0350341796875, 0.046539306640625, 0.052490234375, 0.00507354736328125, 0.051361083984375, 0.01702880859375, -0.052093505859375, -0.01494598388671875, -0.06036376953125, 0.03790...
tobiolatunji/afrispeech-200
2023-05-20T23:29:22.000Z
[ "task_categories:automatic-speech-recognition", "annotations_creators:expert-generated", "language_creators:crowdsourced", "language_creators:expert-generated", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:en", "license:cc-by-nc-sa-4.0", "regio...
tobiolatunji
AFRISPEECH-200 is a 200hr Pan-African speech corpus for clinical and general domain English accented ASR; a dataset with 120 African accents from 13 countries and 2,463 unique African speakers. Our goal is to raise awareness for and advance Pan-African English ASR research, especially for the clinical domain.
TBD
8
87
2023-01-30T22:34:30
---
pretty_name: AfriSpeech-200
annotations_creators:
- expert-generated
language_creators:
- crowdsourced
- expert-generated
language:
- en
license:
- cc-by-nc-sa-4.0
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- automatic-speech-recognition
task_ids: []
dataset_info:
  features:
  - name: user_id
    dtype: string
  - name: path
    dtype: string
  - name: audio
    dtype:
      audio:
        sampling_rate: 44100
  - name: transcript
    dtype: string
  splits:
  - name: train
    num_bytes: 1722002133
    num_examples: 58000
  - name: dev
    num_bytes: 86120227
    num_examples: 3231
  download_size: 1475540500
  dataset_size: 1808122360
extra_gated_prompt: By clicking on “Access repository” below, you also agree to not attempt to determine the identity of speakers in this dataset.
---

# Dataset Card for AfriSpeech-200

## Table of Contents

- [Dataset Card for AfriSpeech-200](#dataset-card-for-afrispeech-200)
  - [Table of Contents](#table-of-contents)
  - [Dataset Description](#dataset-description)
    - [Dataset Summary](#dataset-summary)
    - [How to use](#how-to-use)
    - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
    - [Languages](#languages)
  - [Dataset Structure](#dataset-structure)
    - [Data Instances](#data-instances)
    - [Data Fields](#data-fields)
    - [Data Splits](#data-splits)
  - [Dataset Creation](#dataset-creation)
    - [Curation Rationale](#curation-rationale)
    - [Source Data](#source-data)
      - [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
      - [Who are the source language producers?](#who-are-the-source-language-producers)
    - [Annotations](#annotations)
      - [Annotation process](#annotation-process)
      - [Who are the annotators?](#who-are-the-annotators)
    - [Personal and Sensitive Information](#personal-and-sensitive-information)
  - [Considerations for Using the Data](#considerations-for-using-the-data)
    - [Social Impact of Dataset](#social-impact-of-dataset)
    - [Discussion of Biases](#discussion-of-biases)
    - [Other Known Limitations](#other-known-limitations)
  - [Additional Information](#additional-information)
    - [Dataset Curators](#dataset-curators)
    - [Licensing Information](#licensing-information)
    - [Citation Information](#citation-information)
    - [Contributions](#contributions)

## Dataset Description

- **Homepage:** https://github.com/intron-innovation/AfriSpeech-Dataset-Paper
- **Repository:** https://github.com/intron-innovation/AfriSpeech-Dataset-Paper
- **Paper:** [AfriSpeech-200: Pan-African accented speech dataset for clinical and general domain ASR](https://github.com/intron-innovation/AfriSpeech-Dataset-Paper)
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [Intron Innovation](mailto:intron@intron.io)

### Dataset Summary

AFRISPEECH-200 is a 200-hour Pan-African speech corpus for clinical and general domain English accented ASR: a dataset with 120 African accents from 13 countries and 2,463 unique African speakers. Our goal is to raise awareness for and advance Pan-African English ASR research, especially for the clinical domain.

## How to use

The `datasets` library allows you to load and pre-process the dataset in pure Python, at scale. The dataset can be downloaded and prepared in one call to your local drive by using the `load_dataset` function.

```python
from datasets import load_dataset

afrispeech = load_dataset("tobiolatunji/afrispeech-200", "all")
```

The entire dataset is ~120GB and may take about 2 hours to download depending on internet speed/bandwidth. If you have disk space or bandwidth limitations, you can use the `streaming` mode described below to work with smaller subsets of the data. Alternatively, you can pass a config to the `load_dataset` function and download only the subset of the data corresponding to a specific accent of interest. The example provided below is `isizulu`: to download the isizulu config, simply specify the corresponding accent config name.
The list of supported accents is provided in the accent stats section below:

```python
from datasets import load_dataset

afrispeech = load_dataset("tobiolatunji/afrispeech-200", "isizulu", split="train")
```

Using the `datasets` library, you can also stream the dataset on the fly by adding a `streaming=True` argument to the `load_dataset` function call. Loading a dataset in streaming mode loads individual samples one at a time, rather than downloading the entire dataset to disk.

```python
from datasets import load_dataset

afrispeech = load_dataset("tobiolatunji/afrispeech-200", "isizulu", split="train", streaming=True)

print(next(iter(afrispeech)))
print(list(afrispeech.take(5)))
```

### Local

```python
from datasets import load_dataset
from torch.utils.data import DataLoader
from torch.utils.data.sampler import BatchSampler, RandomSampler

afrispeech = load_dataset("tobiolatunji/afrispeech-200", "isizulu", split="train")
batch_sampler = BatchSampler(RandomSampler(afrispeech), batch_size=32, drop_last=False)
dataloader = DataLoader(afrispeech, batch_sampler=batch_sampler)
```

### Streaming

```python
from datasets import load_dataset
from torch.utils.data import DataLoader

afrispeech = load_dataset("tobiolatunji/afrispeech-200", "isizulu", split="train", streaming=True)
dataloader = DataLoader(afrispeech, batch_size=32)
```

### Caveats

Note that until the end of the ongoing [AfriSpeech ASR Challenge event](https://zindi.africa/competitions/intron-afrispeech-200-automatic-speech-recognition-challenge) (Feb - May 2023), the transcripts in the validation set are hidden, and the test set will remain unreleased until May 19, 2023.

### Fine-tuning Colab tutorial

To walk through a complete Colab tutorial that finetunes a wav2vec2 model on the afrispeech-200 dataset with `transformers`, take a look at this notebook: [afrispeech/wav2vec2-colab-tutorial](https://colab.research.google.com/drive/1uZYew6pcgN6UE6sFDLohxD_HKivvDXzD?usp=sharing).
### Supported Tasks and Leaderboards

- Automatic Speech Recognition
- Speech Synthesis (Text-to-Speech)

### Languages

English (Accented)

## Dataset Structure

### Data Instances

A typical data point comprises the path to the audio file, called `path`, and its transcription, called `transcript`. Some additional information about the speaker is provided.

```
{
  'speaker_id': 'b545a4ca235a7b72688a1c0b3eb6bde6',
  'path': 'aad9bd69-7ca0-4db1-b650-1eeea17a0153/5dcb6ee086e392376cd3b7131a250397.wav',
  'audio_id': 'aad9bd69-7ca0-4db1-b650-1eeea17a0153/5dcb6ee086e392376cd3b7131a250397',
  'audio': {
    'path': 'aad9bd69-7ca0-4db1-b650-1eeea17a0153/5dcb6ee086e392376cd3b7131a250397.wav',
    'array': array([0.00018311, 0.00061035, 0.00012207, ..., 0.00192261, 0.00195312, 0.00216675]),
    'sampling_rate': 44100},
  'transcript': 'His mother is in her 50 s and has hypertension .',
  'age_group': '26-40',
  'gender': 'Male',
  'accent': 'yoruba',
  'domain': 'clinical',
  'country': 'US',
  'duration': 3.241995464852608
}
```

### Data Fields

- speaker_id: an id for the speaker (voice) that made the recording
- path: the path to the audio file
- audio: a dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column, `dataset[0]["audio"]`, the audio file is automatically decoded and resampled to `dataset.features["audio"].sampling_rate`. Decoding and resampling a large number of audio files can take a significant amount of time, so it is important to query the sample index before the `"audio"` column, *i.e.* `dataset[0]["audio"]` should **always** be preferred over `dataset["audio"][0]`.
- transcript: the sentence the user was prompted to speak

### Data Splits

The speech material has been subdivided into train, dev, and test portions. Speech was recorded in a quiet environment with a high-quality microphone; speakers were asked to read one sentence at a time.
- Total Number of Unique Speakers: 2,463
- Female/Male/Other Ratio: 57.11/42.41/0.48
- Data was first split on speakers; speakers in Train/Dev/Test do not cross partitions.

| | Train | Dev | Test |
| ----------- | ----------- | ----------- | ----------- |
| # Speakers | 1466 | 247 | 750 |
| # Seconds | 624228.83 | 31447.09 | 67559.10 |
| # Hours | 173.4 | 8.74 | 18.77 |
| # Accents | 71 | 45 | 108 |
| Avg secs/speaker | 425.81 | 127.32 | 90.08 |
| Avg num clips/speaker | 39.56 | 13.08 | 8.46 |
| Avg num speakers/accent | 20.65 | 5.49 | 6.94 |
| Avg secs/accent | 8791.96 | 698.82 | 625.55 |
| # clips general domain | 21682 | 1407 | 2723 |
| # clips clinical domain | 36318 | 1824 | 3623 |

## Dataset Creation

### Curation Rationale

Africa has a very low doctor-to-patient ratio. At very busy clinics, doctors can see 30+ patients per day (a heavy patient burden compared with developed countries), but productivity tools such as clinical automatic speech recognition (ASR) are lacking for these overworked clinicians. By contrast, clinical ASR is mature, even ubiquitous, in developed nations, and clinician-reported performance of commercial clinical ASR systems is generally satisfactory. Furthermore, the recent performance of general domain ASR is approaching human accuracy. However, several gaps exist: publications have highlighted racial bias in speech-to-text algorithms, and performance on minority accents lags significantly. To our knowledge, there is no publicly available research or benchmark on accented African clinical ASR, and speech data is non-existent for the majority of African accents. We release AfriSpeech: 200 hours of Pan-African speech, 67,577 clips from 2,463 unique speakers, across 120 indigenous accents from 13 countries, for clinical and general domain ASR, with a benchmark test set and publicly available pre-trained models with SOTA performance on the AfriSpeech benchmark.
### Source Data

#### Country Stats

| Country | Clips | Speakers | Duration (seconds) | Duration (hrs) |
| ----------- | ----------- | ----------- | ----------- | ----------- |
| NG | 45875 | 1979 | 512646.88 | 142.40 |
| KE | 8304 | 137 | 75195.43 | 20.89 |
| ZA | 7870 | 223 | 81688.11 | 22.69 |
| GH | 2018 | 37 | 18581.13 | 5.16 |
| BW | 1391 | 38 | 14249.01 | 3.96 |
| UG | 1092 | 26 | 10420.42 | 2.89 |
| RW | 469 | 9 | 5300.99 | 1.47 |
| US | 219 | 5 | 1900.98 | 0.53 |
| TR | 66 | 1 | 664.01 | 0.18 |
| ZW | 63 | 3 | 635.11 | 0.18 |
| MW | 60 | 1 | 554.61 | 0.15 |
| TZ | 51 | 2 | 645.51 | 0.18 |
| LS | 7 | 1 | 78.40 | 0.02 |

#### Accent Stats

| Accent | Clips | Speakers | Duration (s) | Country | Splits |
| ----------- | ----------- | ----------- | ----------- | ----------- | ----------- |
| yoruba | 15407 | 683 | 161587.55 | US,NG | train,test,dev |
| igbo | 8677 | 374 | 93035.79 | US,NG,ZA | train,test,dev |
| swahili | 6320 | 119 | 55932.82 | KE,TZ,ZA,UG | train,test,dev |
| hausa | 5765 | 248 | 70878.67 | NG | train,test,dev |
| ijaw | 2499 | 105 | 33178.9 | NG | train,test,dev |
| afrikaans | 2048 | 33 | 20586.49 | ZA | train,test,dev |
| idoma | 1877 | 72 | 20463.6 | NG | train,test,dev |
| zulu | 1794 | 52 | 18216.97 | ZA,TR,LS | dev,train,test |
| setswana | 1588 | 39 | 16553.22 | BW,ZA | dev,test,train |
| twi | 1566 | 22 | 14340.12 | GH | test,train,dev |
| isizulu | 1048 | 48 | 10376.09 | ZA | test,train,dev |
| igala | 919 | 31 | 9854.72 | NG | train,test |
| izon | 838 | 47 | 9602.53 | NG | train,dev,test |
| kiswahili | 827 | 6 | 8988.26 | KE | train,test |
| ebira | 757 | 42 | 7752.94 | NG | train,test,dev |
| luganda | 722 | 22 | 6768.19 | UG,BW,KE | test,dev,train |
| urhobo | 646 | 32 | 6685.12 | NG | train,dev,test |
| nembe | 578 | 16 | 6644.72 | NG | train,test,dev |
| ibibio | 570 | 39 | 6489.29 | NG | train,test,dev |
| pidgin | 514 | 20 | 5871.57 | NG | test,train,dev |
| luhya | 508 | 4 | 4497.02 | KE | train,test |
| kinyarwanda | 469 | 9 | 5300.99 | RW | train,test,dev |
| xhosa | 392 | 12 | 4604.84 | ZA | train,dev,test |
| tswana | 387 | 18 | 4148.58 | ZA,BW | train,test,dev |
| esan | 380 | 13 | 4162.63 | NG | train,test,dev |
| alago | 363 | 8 | 3902.09 | NG | train,test |
| tshivenda | 353 | 5 | 3264.77 | ZA | test,train |
| fulani | 312 | 18 | 5084.32 | NG | test,train |
| isoko | 298 | 16 | 4236.88 | NG | train,test,dev |
| akan (fante) | 295 | 9 | 2848.54 | GH | train,dev,test |
| ikwere | 293 | 14 | 3480.43 | NG | test,train,dev |
| sepedi | 275 | 10 | 2751.68 | ZA | dev,test,train |
| efik | 269 | 11 | 2559.32 | NG | test,train,dev |
| edo | 237 | 12 | 1842.32 | NG | train,test,dev |
| luo | 234 | 4 | 2052.25 | UG,KE | test,train,dev |
| kikuyu | 229 | 4 | 1949.62 | KE | train,test,dev |
| bekwarra | 218 | 3 | 2000.46 | NG | train,test |
| isixhosa | 210 | 9 | 2100.28 | ZA | train,dev,test |
| hausa/fulani | 202 | 3 | 2213.53 | NG | test,train |
| epie | 202 | 6 | 2320.21 | NG | train,test |
| isindebele | 198 | 2 | 1759.49 | ZA | train,test |
| venda and xitsonga | 188 | 2 | 2603.75 | ZA | train,test |
| sotho | 182 | 4 | 2082.21 | ZA | dev,test,train |
| akan | 157 | 6 | 1392.47 | GH | test,train |
| nupe | 156 | 9 | 1608.24 | NG | dev,train,test |
| anaang | 153 | 8 | 1532.56 | NG | test,dev |
| english | 151 | 11 | 2445.98 | NG | dev,test |
| afemai | 142 | 2 | 1877.04 | NG | train,test |
| shona | 138 | 8 | 1419.98 | ZA,ZW | test,train,dev |
| eggon | 137 | 5 | 1833.77 | NG | test |
| luganda and kiswahili | 134 | 1 | 1356.93 | UG | train |
| ukwuani | 133 | 7 | 1269.02 | NG | test |
| sesotho | 132 | 10 | 1397.16 | ZA | train,dev,test |
| benin | 124 | 4 | 1457.48 | NG | train,test |
| kagoma | 123 | 1 | 1781.04 | NG | train |
| nasarawa eggon | 120 | 1 | 1039.99 | NG | train |
| tiv | 120 | 14 | 1084.52 | NG | train,test,dev |
| south african english | 119 | 2 | 1643.82 | ZA | train,test |
| borana | 112 | 1 | 1090.71 | KE | train |
| swahili, luganda, arabic | 109 | 1 | 929.46 | UG | train |
| ogoni | 109 | 4 | 1629.7 | NG | train,test |
| mada | 109 | 2 | 1786.26 | NG | test |
| bette | 106 | 4 | 930.16 | NG | train,test |
| berom | 105 | 4 | 1272.99 | NG | dev,test |
| bini | 104 | 4 | 1499.75 | NG | test |
| ngas | 102 | 3 | 1234.16 | NG | train,test |
| etsako | 101 | 4 | 1074.53 | NG | train,test |
| okrika | 100 | 3 | 1887.47 | NG | train,test |
| venda | 99 | 2 | 938.14 | ZA | train,test |
| siswati | 96 | 5 | 1367.45 | ZA | dev,train,test |
| damara | 92 | 1 | 674.43 | NG | train |
| yoruba, hausa | 89 | 5 | 928.98 | NG | test |
| southern sotho | 89 | 1 | 889.73 | ZA | train |
| kanuri | 86 | 7 | 1936.78 | NG | test,dev |
| itsekiri | 82 | 3 | 778.47 | NG | test,dev |
| ekpeye | 80 | 2 | 922.88 | NG | test |
| mwaghavul | 78 | 2 | 738.02 | NG | test |
| bajju | 72 | 2 | 758.16 | NG | test |
| luo, swahili | 71 | 1 | 616.57 | KE | train |
| dholuo | 70 | 1 | 669.07 | KE | train |
| ekene | 68 | 1 | 839.31 | NG | test |
| jaba | 65 | 2 | 540.66 | NG | test |
| ika | 65 | 4 | 576.56 | NG | test,dev |
| angas | 65 | 1 | 589.99 | NG | test |
| ateso | 63 | 1 | 624.28 | UG | train |
| brass | 62 | 2 | 900.04 | NG | test |
| ikulu | 61 | 1 | 313.2 | NG | test |
| eleme | 60 | 2 | 1207.92 | NG | test |
| chichewa | 60 | 1 | 554.61 | MW | train |
| oklo | 58 | 1 | 871.37 | NG | test |
| meru | 58 | 2 | 865.07 | KE | train,test |
| agatu | 55 | 1 | 369.11 | NG | test |
| okirika | 54 | 1 | 792.65 | NG | test |
| igarra | 54 | 1 | 562.12 | NG | test |
| ijaw(nembe) | 54 | 2 | 537.56 | NG | test |
| khana | 51 | 2 | 497.42 | NG | test |
| ogbia | 51 | 4 | 461.15 | NG | test,dev |
| gbagyi | 51 | 4 | 693.43 | NG | test |
| portuguese | 50 | 1 | 525.02 | ZA | train |
| delta | 49 | 2 | 425.76 | NG | test |
| bassa | 49 | 1 | 646.13 | NG | test |
| etche | 49 | 1 | 637.48 | NG | test |
| kubi | 46 | 1 | 495.21 | NG | test |
| jukun | 44 | 2 | 362.12 | NG | test |
| igbo and yoruba | 43 | 2 | 466.98 | NG | test |
| urobo | 43 | 3 | 573.14 | NG | test |
| kalabari | 42 | 5 | 305.49 | NG | test |
| ibani | 42 | 1 | 322.34 | NG | test |
| obolo | 37 | 1 | 204.79 | NG | test |
| idah | 34 | 1 | 533.5 | NG | test |
| bassa-nge/nupe | 31 | 3 | 267.42 | NG | test,dev |
| yala mbembe | 29 | 1 | 237.27 | NG | test |
| eket | 28 | 1 | 238.85 | NG | test |
| afo | 26 | 1 | 171.15 | NG | test |
| ebiobo | 25 | 1 | 226.27 | NG | test |
| nyandang | 25 | 1 | 230.41 | NG | test |
| ishan | 23 | 1 | 194.12 | NG | test |
| bagi | 20 | 1 | 284.54 | NG | test |
| estako | 20 | 1 | 480.78 | NG | test |
| gerawa | 13 | 1 | 342.15 | NG | test |

#### Initial Data Collection and Normalization

[Needs More Information]

#### Who are the source language producers?

[Needs More Information]

### Annotations

#### Annotation process

[Needs More Information]

#### Who are the annotators?

[Needs More Information]

### Personal and Sensitive Information

The dataset consists of people who have donated their voice online. You agree to not attempt to determine the identity of speakers in this dataset.

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

Dataset provided for research purposes only. Please check the dataset license for additional information.

## Additional Information

### Dataset Curators

The dataset was initially prepared by Intron and refined for public release by CLAIR Lab.

### Licensing Information

Public Domain, Creative Commons Attribution NonCommercial ShareAlike v4.0 ([CC BY-NC-SA 4.0](https://creativecommons.org/licenses/by-nc-sa/4.0/legalcode))

### Citation Information

[More Information Needed]

### Contributions

Thanks to [@tobiolatunji](https://github.com/tobiolatunji) for adding this dataset.
17,874
[ [ -0.04156494140625, -0.03997802734375, -0.00547027587890625, 0.03070068359375, -0.00649261474609375, -0.0064849853515625, -0.03814697265625, -0.0211639404296875, 0.034027099609375, 0.028778076171875, -0.052734375, -0.045379638671875, -0.046142578125, 0.019393...
Francesco/liver-disease
2023-03-30T09:11:15.000Z
[ "task_categories:object-detection", "annotations_creators:crowdsourced", "language_creators:found", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:original", "language:en", "license:cc", "rf100", "region:us" ]
Francesco
null
null
1
87
2023-03-30T09:10:00
--- dataset_info: features: - name: image_id dtype: int64 - name: image dtype: image - name: width dtype: int32 - name: height dtype: int32 - name: objects sequence: - name: id dtype: int64 - name: area dtype: int64 - name: bbox sequence: float32 length: 4 - name: category dtype: class_label: names: '0': diseases '1': ballooning '2': fibrosis '3': inflammation '4': steatosis annotations_creators: - crowdsourced language_creators: - found language: - en license: - cc multilinguality: - monolingual size_categories: - 1K<n<10K source_datasets: - original task_categories: - object-detection task_ids: [] pretty_name: liver-disease tags: - rf100 --- # Dataset Card for liver-disease ** The original COCO dataset is stored at `dataset.tar.gz`** ## Dataset Description - **Homepage:** https://universe.roboflow.com/object-detection/liver-disease - **Point of Contact:** francesco.zuppichini@gmail.com ### Dataset Summary liver-disease ### Supported Tasks and Leaderboards - `object-detection`: The dataset can be used to train a model for Object Detection. ### Languages English ## Dataset Structure ### Data Instances A data point comprises an image and its object annotations. ``` { 'image_id': 15, 'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=640x640 at 0x2373B065C18>, 'width': 964043, 'height': 640, 'objects': { 'id': [114, 115, 116, 117], 'area': [3796, 1596, 152768, 81002], 'bbox': [ [302.0, 109.0, 73.0, 52.0], [810.0, 100.0, 57.0, 28.0], [160.0, 31.0, 248.0, 616.0], [741.0, 68.0, 202.0, 401.0] ], 'category': [4, 4, 0, 0] } } ``` ### Data Fields - `image_id`: the image id - `image`: `PIL.Image.Image` object containing the image. Note that when accessing the image column: `dataset[0]["image"]` the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time.
Thus it is important to first query the sample index before the `"image"` column, *i.e.* `dataset[0]["image"]` should **always** be preferred over `dataset["image"][0]` - `width`: the image width - `height`: the image height - `objects`: a dictionary containing bounding box metadata for the objects present on the image - `id`: the annotation id - `area`: the area of the bounding box - `bbox`: the object's bounding box (in the [coco](https://albumentations.ai/docs/getting_started/bounding_boxes_augmentation/#coco) format) - `category`: the object's category. #### Who are the annotators? Annotators are Roboflow users ## Additional Information ### Licensing Information See original homepage https://universe.roboflow.com/object-detection/liver-disease ### Citation Information ``` @misc{ liver-disease, title = { liver disease Dataset }, type = { Open Source Dataset }, author = { Roboflow 100 }, howpublished = { \url{ https://universe.roboflow.com/object-detection/liver-disease } }, url = { https://universe.roboflow.com/object-detection/liver-disease }, journal = { Roboflow Universe }, publisher = { Roboflow }, year = { 2022 }, month = { nov }, note = { visited on 2023-03-29 }, } ``` ### Contributions Thanks to [@mariosasko](https://github.com/mariosasko) for adding this dataset.
3,431
[ [ -0.034881591796875, -0.0404052734375, 0.006237030029296875, -0.003173828125, -0.03485107421875, -0.0234832763671875, -0.0031681060791015625, -0.0438232421875, 0.032623291015625, 0.0350341796875, -0.04058837890625, -0.07391357421875, -0.02825927734375, 0.0342...
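The annotations in the card above use the COCO box convention `[x_min, y_min, width, height]`, and each `area` value is simply the box's `width × height`. A small sketch, using only the sample bounding boxes shown in the card, that converts boxes to corner coordinates and verifies the area relationship:

```python
def coco_to_corners(bbox):
    """Convert a COCO-style [x_min, y_min, width, height] box to (x_min, y_min, x_max, y_max)."""
    x, y, w, h = bbox
    return (x, y, x + w, y + h)

def coco_area(bbox):
    """Box area (width * height), which matches the `area` field in the sample instance."""
    _, _, w, h = bbox
    return w * h

# First two bounding boxes from the example data point above.
sample_bboxes = [[302.0, 109.0, 73.0, 52.0], [810.0, 100.0, 57.0, 28.0]]
corners = [coco_to_corners(b) for b in sample_bboxes]
areas = [coco_area(b) for b in sample_bboxes]
```

The computed areas reproduce the first two entries of the sample's `area` list (3796 and 1596), which confirms the format reading.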
dmayhem93/agieval-lsat-ar
2023-06-18T17:25:42.000Z
[ "arxiv:2304.06364", "arxiv:2104.06598", "region:us" ]
dmayhem93
null
null
1
87
2023-06-18T12:50:26
--- dataset_info: features: - name: query dtype: string - name: choices sequence: string - name: gold sequence: int64 splits: - name: test num_bytes: 273902 num_examples: 230 download_size: 66495 dataset_size: 273902 --- # Dataset Card for "agieval-lsat-ar" Dataset taken from https://github.com/microsoft/AGIEval and processed as in that repo. Raw dataset: https://github.com/zhongwanjun/AR-LSAT MIT License Copyright (c) 2022 Wanjun Zhong Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
@misc{zhong2023agieval, title={AGIEval: A Human-Centric Benchmark for Evaluating Foundation Models}, author={Wanjun Zhong and Ruixiang Cui and Yiduo Guo and Yaobo Liang and Shuai Lu and Yanlin Wang and Amin Saied and Weizhu Chen and Nan Duan}, year={2023}, eprint={2304.06364}, archivePrefix={arXiv}, primaryClass={cs.CL} } @misc{zhong2021arlsat, title={AR-LSAT: Investigating Analytical Reasoning of Text}, author={Wanjun Zhong and Siyuan Wang and Duyu Tang and Zenan Xu and Daya Guo and Jiahai Wang and Jian Yin and Ming Zhou and Nan Duan}, year={2021}, eprint={2104.06598}, archivePrefix={arXiv}, primaryClass={cs.CL} } @article{wang2022lsat, title={From lsat: The progress and challenges of complex reasoning}, author={Wang, Siyuan and Liu, Zhongkun and Zhong, Wanjun and Zhou, Ming and Wei, Zhongyu and Chen, Zhumin and Duan, Nan}, journal={IEEE/ACM Transactions on Audio, Speech, and Language Processing}, year={2022}, publisher={IEEE} }
2,534
[ [ -0.036102294921875, -0.047271728515625, 0.0216217041015625, 0.01258087158203125, -0.01425933837890625, -0.0171966552734375, 0.0027828216552734375, -0.033447265625, 0.00504302978515625, 0.036224365234375, -0.0382080078125, -0.0231781005859375, -0.0311279296875, ...
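Each row in the dataset above carries a `query` string, a `choices` list, and a `gold` sequence holding the correct option's index. A minimal sketch of rendering a row as a lettered multiple-choice prompt — the letter labels, the example row, and the assumption that `gold` holds a single zero-based index are mine, not stated in the card:

```python
def format_mcq(query, choices, gold):
    """Render one agieval-lsat-ar row as a lettered prompt plus its answer letter."""
    letters = "ABCDE"
    prompt = "\n".join([query] + [f"({letters[i]}) {c}" for i, c in enumerate(choices)])
    answer = letters[gold[0]]  # assumes gold is a one-element list of a zero-based index
    return prompt, answer

# Hypothetical row, for illustration only (not from the dataset):
prompt, answer = format_mcq(
    "Which assignment of volunteers to committees is acceptable?",
    ["plan A", "plan B", "plan C"],
    [2],
)
```

This matches the common AGIEval evaluation setup of scoring a model's chosen letter against the gold option.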
yzhuang/autotree_automl_10000_bank-marketing_sgosdt_l256_dim7_d3_sd0
2023-09-07T02:31:08.000Z
[ "region:us" ]
yzhuang
null
null
0
87
2023-09-07T02:31:04
--- dataset_info: features: - name: id dtype: int64 - name: input_x sequence: sequence: float32 - name: input_y sequence: sequence: float32 - name: input_y_clean sequence: sequence: float32 - name: rtg sequence: float64 - name: status sequence: sequence: float32 - name: split_threshold sequence: sequence: float32 - name: split_dimension sequence: int64 splits: - name: train num_bytes: 205720000 num_examples: 10000 - name: validation num_bytes: 205720000 num_examples: 10000 download_size: 74206478 dataset_size: 411440000 --- # Dataset Card for "autotree_automl_10000_bank-marketing_sgosdt_l256_dim7_d3_sd0" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
850
[ [ -0.0197296142578125, -0.021484375, 0.00894927978515625, 0.024383544921875, -0.01070404052734375, 0.01560211181640625, 0.042755126953125, -0.0018978118896484375, 0.0478515625, 0.04034423828125, -0.053558349609375, -0.0474853515625, -0.04473876953125, 0.001138...
erico25/aminer_title_abstract_v10
2023-10-23T20:43:49.000Z
[ "size_categories:1M<n<10M", "language:en", "region:us" ]
erico25
null
null
0
87
2023-10-23T19:07:34
--- dataset_info: features: - name: title dtype: string - name: abstract dtype: string splits: - name: train num_bytes: 2628760201 num_examples: 2548532 download_size: 0 dataset_size: 2628760201 language: - en size_categories: - 1M<n<10M --- # Dataset Card for "aminer_title_abstract_v10" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
451
[ [ -0.04339599609375, 0.0031833648681640625, 0.0330810546875, 0.01172637939453125, -0.01065826416015625, -0.00830841064453125, 0.0267333984375, -0.01538848876953125, 0.054290771484375, 0.021575927734375, -0.046142578125, -0.05419921875, -0.0509033203125, 0.0070...
id_newspapers_2018
2022-11-03T16:16:15.000Z
[ "task_categories:text-generation", "task_categories:fill-mask", "task_ids:language-modeling", "task_ids:masked-language-modeling", "annotations_creators:no-annotation", "language_creators:found", "multilinguality:monolingual", "size_categories:100K<n<1M", "source_datasets:original", "language:id",...
null
The dataset contains around 500K articles (136M words) from 7 Indonesian newspapers: Detik, Kompas, Tempo, CNN Indonesia, Sindo, Republika and Poskota. The articles are dated between 1st January 2018 and 20th August 2018 (with a few exceptions dated earlier). The uncompressed 500K JSON files (newspapers-json.tgz) total around 2.2GB, and the cleaned, uncompressed single text file (newspapers.txt.gz) is about 1GB. The original source in Google Drive also contains a dataset in HTML format which includes raw data (pictures, CSS, JavaScript, ...) from the online news websites.
@inproceedings{id_newspapers_2018, author = {}, title = {Indonesian Newspapers 2018}, year = {2019}, url = {https://github.com/feryandi/Dataset-Artikel}, }
4
86
2022-03-02T23:29:22
--- annotations_creators: - no-annotation language_creators: - found language: - id license: - cc-by-4.0 multilinguality: - monolingual size_categories: - 100K<n<1M source_datasets: - original task_categories: - text-generation - fill-mask task_ids: - language-modeling - masked-language-modeling paperswithcode_id: null pretty_name: Indonesian Newspapers 2018 dataset_info: features: - name: id dtype: string - name: url dtype: string - name: date dtype: string - name: title dtype: string - name: content dtype: string config_name: id_newspapers_2018 splits: - name: train num_bytes: 1116031922 num_examples: 499164 download_size: 446018349 dataset_size: 1116031922 --- # Dataset Card for Indonesian Newspapers 2018 ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [Indonesian Newspapers](https://github.com/feryandi/Dataset-Artikel) - **Repository:** [Indonesian Newspapers](https://github.com/feryandi/Dataset-Artikel) - **Paper:** - **Leaderboard:** - **Point of Contact:** 
[feryandi.n@gmail.com](mailto:feryandi.n@gmail.com), [cahya.wirawan@gmail.com](mailto:cahya.wirawan@gmail.com) ### Dataset Summary The dataset contains around 500K articles (136M words) from 7 Indonesian newspapers: Detik, Kompas, Tempo, CNN Indonesia, Sindo, Republika and Poskota. The articles are dated between 1st January 2018 and 20th August 2018 (with a few exceptions dated earlier). The uncompressed 500K JSON files (newspapers-json.tgz) total around 2.2GB, and the cleaned, uncompressed single text file (newspapers.txt.gz) is about 1GB. The original source in Google Drive also contains a dataset in HTML format which includes raw data (pictures, CSS, JavaScript, ...) from the online news websites. A copy of the original dataset is available at https://cloud.uncool.ai/index.php/s/mfYEAgKQoY3ebbM ### Supported Tasks and Leaderboards [More Information Needed] ### Languages Indonesian ## Dataset Structure ``` { 'id': 'string', 'url': 'string', 'date': 'string', 'title': 'string', 'content': 'string' } ``` ### Data Instances An instance from the dataset is ``` {'id': '0', 'url': 'https://www.cnnindonesia.com/olahraga/20161221234219-156-181385/lorenzo-ingin-samai-rekor-rossi-dan-stoner', 'date': '2016-12-22 07:00:00', 'title': 'Lorenzo Ingin Samai Rekor Rossi dan Stoner', 'content': 'Jakarta, CNN Indonesia -- Setelah bergabung dengan Ducati, Jorge Lorenzo berharap bisa masuk dalam jajaran pebalap yang mampu jadi juara dunia kelas utama dengan dua pabrikan berbeda. Pujian Max Biaggi untuk Valentino Rossi Jorge Lorenzo Hadir dalam Ucapan Selamat Natal Yamaha Iannone: Saya Sering Jatuh Karena Ingin yang Terbaik Sepanjang sejarah, hanya ada lima pebalap yang mampu jadi juara kelas utama (500cc/MotoGP) dengan dua pabrikan berbeda, yaitu Geoff Duke, Giacomo Agostini, Eddie Lawson, Valentino Rossi, dan Casey Stoner. Lorenzo ingin bergabung dalam jajaran legenda tersebut.
“Fakta ini sangat penting bagi saya karena hanya ada lima pebalap yang mampu menang dengan dua pabrikan berbeda dalam sejarah balap motor.” “Kedatangan saya ke Ducati juga menghadirkan tantangan yang sangat menarik karena hampir tak ada yang bisa menang dengan Ducati sebelumnya, kecuali Casey Stoner. Hal itu jadi motivasi yang sangat bagus bagi saya,” tutur Lorenzo seperti dikutip dari Crash Lorenzo saat ini diliputi rasa penasaran yang besar untuk menunggang sepeda motor Desmosedici yang dipakai tim Ducati karena ia baru sekali menjajal motor tersebut pada sesi tes di Valencia, usai MotoGP musim 2016 berakhir. “Saya sangat tertarik dengan Ducati arena saya hanya memiliki kesempatan mencoba motor itu di Valencia dua hari setelah musim berakhir. Setelah itu saya tak boleh lagi menjajalnya hingga akhir Januari mendatang. Jadi saya menjalani penantian selama dua bulan yang panjang,” kata pebalap asal Spanyol ini. Dengan kondisi tersebut, maka Lorenzo memanfaatkan waktu yang ada untuk liburan dan melepaskan penat. “Setidaknya apa yang terjadi pada saya saat ini sangat bagus karena saya jadi memiliki waktu bebas dan sedikit liburan.” “Namun tentunya saya tak akan larut dalam liburan karena saya harus lebih bersiap, terutama dalam kondisi fisik dibandingkan sebelumnya, karena saya akan menunggangi motor yang sulit dikendarai,” ucap Lorenzo. Selama sembilan musim bersama Yamaha, Lorenzo sendiri sudah tiga kali jadi juara dunia, yaitu pada 2010, 2012, dan 2015. (kid)'} ``` ### Data Fields - `id`: id of the sample - `url`: the url to the original article - `date`: the publishing date of the article - `title`: the title of the article - `content`: the content of the article ### Data Splits The dataset contains train set of 499164 samples. ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? 
[More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License. The dataset is shared for the sole purpose of aiding open scientific research in Bahasa Indonesia (computing or linguistics), and can only be used for that purpose. The ownership of each article within the dataset belongs to the respective newspaper from which it was extracted; and the maintainer of the repository does not claim ownership of any of the content within it. If you think, by any means, that this dataset breaches any established copyrights; please contact the repository maintainer. ### Citation Information [N/A] ### Contributions Thanks to [@cahya-wirawan](https://github.com/cahya-wirawan) for adding this dataset.
7,227
[ [ -0.035797119140625, -0.052764892578125, 0.00801849365234375, 0.0269775390625, -0.052947998046875, -0.010223388671875, -0.0296783447265625, -0.034271240234375, 0.0496826171875, 0.03228759765625, -0.039794921875, -0.039642333984375, -0.039947509765625, 0.03442...
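Since the card states that the articles fall between 1 January 2018 and 20 August 2018, with a few earlier exceptions such as the 2016 example instance shown, a simple predicate on the `date` field can separate those exceptions. A sketch, assuming only the `YYYY-MM-DD HH:MM:SS` format shown in the example:

```python
from datetime import datetime

# Window stated by the dataset card.
WINDOW_START = datetime(2018, 1, 1)
WINDOW_END = datetime(2018, 8, 20, 23, 59, 59)

def in_stated_window(date_str):
    """True if an article's `date` lies in the 2018-01-01 .. 2018-08-20 window stated by the card."""
    d = datetime.strptime(date_str, "%Y-%m-%d %H:%M:%S")
    return WINDOW_START <= d <= WINDOW_END
```

Applied to the example instance above, `in_stated_window('2016-12-22 07:00:00')` returns `False`, flagging it as one of the documented exceptions.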
moroco
2023-01-25T14:40:41.000Z
[ "task_categories:text-classification", "task_ids:topic-classification", "annotations_creators:found", "language_creators:found", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:ro", "license:cc-by-4.0", "arxiv:1901.06543", "region:us" ]
null
The MOROCO (Moldavian and Romanian Dialectal Corpus) dataset contains 33564 samples of text collected from the news domain. The samples belong to one of the following six topics: - culture - finance - politics - science - sports - tech
@inproceedings{ Butnaru-ACL-2019, author = {Andrei M. Butnaru and Radu Tudor Ionescu}, title = "{MOROCO: The Moldavian and Romanian Dialectal Corpus}", booktitle = {Proceedings of ACL}, year = {2019}, pages={688--698}, }
0
86
2022-03-02T23:29:22
--- annotations_creators: - found language_creators: - found language: - ro license: - cc-by-4.0 multilinguality: - monolingual size_categories: - 10K<n<100K source_datasets: - original task_categories: - text-classification task_ids: - topic-classification paperswithcode_id: moroco pretty_name: 'MOROCO: The Moldavian and Romanian Dialectal Corpus' language_bcp47: - ro-MD dataset_info: features: - name: id dtype: string - name: category dtype: class_label: names: '0': culture '1': finance '2': politics '3': science '4': sports '5': tech - name: sample dtype: string config_name: moroco splits: - name: train num_bytes: 39314292 num_examples: 21719 - name: test num_bytes: 10877813 num_examples: 5924 - name: validation num_bytes: 10721304 num_examples: 5921 download_size: 60711985 dataset_size: 60913409 --- # Dataset Card for MOROCO ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [Github](https://github.com/butnaruandrei/MOROCO) - **Repository:** 
[Github](https://github.com/butnaruandrei/MOROCO) - **Paper:** [Arxiv](https://arxiv.org/abs/1901.06543) - **Leaderboard:** [Needs More Information] - **Point of Contact:** [email](raducu.ionescu@gmail.com) ### Dataset Summary Introducing MOROCO - The **Mo**ldavian and **Ro**manian Dialectal **Co**rpus. The MOROCO data set contains Moldavian and Romanian samples of text collected from the news domain. The samples belong to one of the following six topics: (0) culture, (1) finance, (2) politics, (3) science, (4) sports, (5) tech. The corpus features a total of 33,564 samples labelled with one of the aforementioned six categories. We are also including a train/validation/test split with 21,719/5,921/5,924 samples in each subset. ### Supported Tasks and Leaderboards [LiRo Benchmark and Leaderboard](https://eemlcommunity.github.io/ro_benchmark_leaderboard/site/) ### Languages The text dataset is in Romanian (`ro`) ## Dataset Structure ### Data Instances Below we have an example of a sample from MOROCO: ``` {'id': '48482', 'category': 2, 'sample': '“$NE$ cum am spus, nu este un sfârşit de drum . Vom continua lupta cu toate instrumentele şi cu toate mijloacele legale, parlamentare şi civice pe care le avem la dispoziţie . Evident că vom contesta la $NE$ această lege, au anunţat şi colegii de la $NE$ o astfel de contestaţie . Practic trebuie utilizat orice instrument pe care îl identificăm pentru a bloca intrarea în vigoare a acestei legi . Bineînţeles, şi preşedintele are punctul său de vedere . ( . . . ) $NE$ legi sunt împănate de motive de neconstituţionalitate . Colegii mei de la departamentul juridic lucrează în prezent pentru a definitiva textul contestaţiei”, a declarat $NE$ $NE$ citat de news . ro .
Senatul a adoptat, marţi, în calitate de for decizional, $NE$ privind statutul judecătorilor şi procurorilor, cu 80 de voturi ”pentru” şi niciun vot ”împotrivă”, în condiţiile în care niciun partid din opoziţie nu a fost prezent în sală .', } ``` where 48482 is the sample ID, followed by the category ground truth label, and then the text representing the actual content to be classified by topic. Note: The category label has integer values ranging from 0 to 5. ### Data Fields - `id`: string, the unique identifier of a sample - `category`: integer in the range [0, 5]; the category assigned to a sample. - `sample`: a string, the news report to be classified / used in classification. ### Data Splits The train/validation/test split contains 21,719/5,921/5,924 samples tagged with the category assigned to each sample in the dataset. ## Dataset Creation ### Curation Rationale The samples are preprocessed in order to eliminate named entities. This is required to prevent classifiers from taking the decision based on features that are not related to the topics. For example, named entities that refer to the names of politicians or football players can provide clues about the topic. For more details, please read the [paper](https://arxiv.org/abs/1901.06543). ### Source Data #### Data Collection and Normalization For the data collection, five of the most popular news websites in Romania and the Republic of Moldova were targeted. Given that the data set was obtained through a web scraping technique, all the HTML tags needed to be removed, and consecutive white spaces replaced with a single space. As part of the pre-processing, we remove named entities, such as country names, cities, public figures, etc. The named entities have been replaced with $NE$. The necessity to remove them also comes from the scope of this dataset: categorization by topic.
Thus, the authors decided to remove named entities in order to prevent classifiers from taking the decision based on features that are not truly indicative of the topics. #### Who are the source language producers? The original text comes from news websites from Romania and the Republic of Moldova. ### Annotations #### Annotation process As mentioned above, MOROCO is composed of text samples from the top five most popular news websites in Romania and the Republic of Moldova, respectively. Since the targeted news websites carry topic tags, the text samples can be automatically labeled with the corresponding category. #### Who are the annotators? N/A ### Personal and Sensitive Information The textual data collected for MOROCO consists of news reports freely available on the Internet and of public interest. To the best of the authors' knowledge, there is no personal or sensitive information that needed to be considered in the said textual inputs collected. ## Considerations for Using the Data ### Social Impact of Dataset This dataset is part of an effort to encourage text classification research in languages other than English. Such work increases the accessibility of natural language technology to more regions and cultures. In the past three years there has been growing interest in studying Romanian from a Computational Linguistics perspective. However, we are far from having enough datasets and resources in this particular language. ### Discussion of Biases The data included in MOROCO spans a well-defined time frame of a few years. Some of the topics that were of interest then in the news landscape might not show up nowadays, or a few years from now, in news websites. ### Other Known Limitations [Needs More Information] ## Additional Information ### Dataset Curators Published and managed by Radu Tudor Ionescu and Andrei Butnaru. ### Licensing Information CC BY-SA 4.0 License ### Citation Information ``` @inproceedings{ Butnaru-ACL-2019, author = {Andrei M.
Butnaru and Radu Tudor Ionescu}, title = "{MOROCO: The Moldavian and Romanian Dialectal Corpus}", booktitle = {Proceedings of ACL}, year = {2019}, pages={688--698}, } ``` ### Contributions Thanks to [@MihaelaGaman](https://github.com/MihaelaGaman) for adding this dataset.
8,072
[ [ -0.02276611328125, -0.04248046875, -0.004180908203125, 0.0126953125, -0.03717041015625, -0.006458282470703125, -0.017852783203125, -0.0237274169921875, 0.044677734375, 0.037872314453125, -0.0260467529296875, -0.07843017578125, -0.057373046875, 0.027801513671...
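The `category` field in MOROCO is an integer class label; mapping it back to the topic names declared in the card's `class_label` block is a one-liner. A sketch using exactly the names and ordering from the card:

```python
# Topic names in the order declared by the card's class_label names ('0' .. '5').
MOROCO_TOPICS = ["culture", "finance", "politics", "science", "sports", "tech"]

def id2topic(category):
    """Map a MOROCO integer category label (0-5) to its topic name."""
    return MOROCO_TOPICS[category]

# The example instance above carries category 2:
topic = id2topic(2)
```

For the sample shown in the card (a report on a judicial-statute vote in the Senate), the label 2 resolves to "politics", as expected.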
ro_sts
2022-11-18T21:42:20.000Z
[ "task_categories:text-classification", "task_ids:text-scoring", "task_ids:semantic-similarity-scoring", "annotations_creators:crowdsourced", "language_creators:crowdsourced", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:extended|other-sts-b", "language:ro", "license:...
null
The RO-STS (Romanian Semantic Textual Similarity) dataset contains 8628 pairs of sentences with their similarity score. It is a high-quality translation of the STS benchmark dataset.
@inproceedings{dumitrescu2021liro, title={Liro: Benchmark and leaderboard for romanian language tasks}, author={Dumitrescu, Stefan Daniel and Rebeja, Petru and Lorincz, Beata and Gaman, Mihaela and Avram, Andrei and Ilie, Mihai and Pruteanu, Andrei and Stan, Adriana and Rosia, Lorena and Iacobescu, Cristina and others}, booktitle={Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 1)}, year={2021} }
0
86
2022-03-02T23:29:22
--- annotations_creators: - crowdsourced language_creators: - crowdsourced language: - ro license: - cc-by-4.0 multilinguality: - monolingual size_categories: - 1K<n<10K source_datasets: - extended|other-sts-b task_categories: - text-classification task_ids: - text-scoring - semantic-similarity-scoring paperswithcode_id: null pretty_name: RO-STS dataset_info: features: - name: score dtype: float32 - name: sentence1 dtype: string - name: sentence2 dtype: string config_name: ro_sts splits: - name: train num_bytes: 879073 num_examples: 5749 - name: test num_bytes: 194330 num_examples: 1379 - name: validation num_bytes: 245926 num_examples: 1500 download_size: 1267607 dataset_size: 1319329 --- # Dataset Card for RO-STS ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [GitHub](https://github.com/dumitrescustefan/RO-STS) - **Repository:** [GitHub](https://github.com/dumitrescustefan/RO-STS) - **Paper:** [Needs More Information] - **Leaderboard:** [Needs More Information] - **Point of 
Contact:** [email](dumitrescu.stefan@gmail.com) ### Dataset Summary We present RO-STS - the Semantic Textual Similarity dataset for the Romanian language. It is a high-quality translation of the [STS English dataset](https://ixa2.si.ehu.eus/stswiki/index.php/STSbenchmark). RO-STS contains 8,628 sentence pairs with their similarity scores. The original English sentences were collected from news headlines, captions of images and user forums, and are categorized accordingly. The Romanian release follows this categorization and provides the same train/validation/test split with 5,749/1,500/1,379 sentence pairs in each subset. ### Supported Tasks and Leaderboards [Needs More Information] ### Languages The text dataset is in Romanian (`ro`) ## Dataset Structure ### Data Instances An example looks like this: ``` {'score': 1.5, 'sentence1': 'Un bărbat cântă la harpă.', 'sentence2': 'Un bărbat cântă la claviatură.', } ``` ### Data Fields - `score`: a float representing the semantic similarity score where 0.0 is the lowest score and 5.0 is the highest - `sentence1`: a string representing a text - `sentence2`: another string to compare the previous text with ### Data Splits The train/validation/test split contains 5,749/1,500/1,379 sentence pairs. ## Dataset Creation ### Curation Rationale [Needs More Information] ### Source Data [Needs More Information] #### Initial Data Collection and Normalization To construct the dataset, we first obtained automatic translations using Google's translation engine. These were then manually checked, corrected, and cross-validated by human volunteers. #### Who are the source language producers? [Needs More Information] ### Annotations #### Annotation process #### Who are the annotators?
### Personal and Sensitive Information [Needs More Information] ## Considerations for Using the Data ### Social Impact of Dataset [Needs More Information] ### Discussion of Biases [Needs More Information] ### Other Known Limitations [Needs More Information] ## Additional Information ### Dataset Curators [Needs More Information] ### Licensing Information CC BY-SA 4.0 License ### Citation Information ``` @inproceedings{dumitrescu2021liro, title={Liro: Benchmark and leaderboard for romanian language tasks}, author={Dumitrescu, Stefan Daniel and Rebeja, Petru and Lorincz, Beata and Gaman, Mihaela and Avram, Andrei and Ilie, Mihai and Pruteanu, Andrei and Stan, Adriana and Rosia, Lorena and Iacobescu, Cristina and others}, booktitle={Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 1)}, year={2021} } ``` ### Contributions Thanks to [@lorinczb](https://github.com/lorinczb) for adding this dataset.
4,848
[ [ -0.01154327392578125, -0.04437255859375, 0.01416778564453125, 0.00998687744140625, -0.0298614501953125, -0.00229644775390625, -0.032012939453125, -0.02825927734375, 0.031280517578125, 0.0274658203125, -0.054168701171875, -0.07781982421875, -0.055023193359375, ...
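STS benchmarks like RO-STS are conventionally scored with the Pearson correlation between a system's predicted similarity scores and the gold 0.0-5.0 scores. A self-contained sketch of that metric — the choice of Pearson follows common STS-B practice, not anything stated in this card, and the score lists below are illustrative only:

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length lists of scores."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Toy gold scores (0.0-5.0 as in the card) versus hypothetical system predictions:
gold = [1.5, 4.0, 0.5, 3.2]
pred = [1.0, 4.5, 1.0, 3.0]
r = pearson(gold, pred)
```

Because Pearson is invariant to linear rescaling, a system may emit scores on any range (e.g. cosine similarities in [0, 1]) without renormalizing to the 0-5 gold scale.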
stsb_mt_sv
2022-11-18T21:48:42.000Z
[ "task_categories:text-classification", "task_ids:text-scoring", "task_ids:semantic-similarity-scoring", "annotations_creators:crowdsourced", "language_creators:crowdsourced", "language_creators:machine-generated", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:extended|o...
null
null
@article{isbister2020not, title={Why Not Simply Translate? A First Swedish Evaluation Benchmark for Semantic Similarity}, author={Isbister, Tim and Sahlgren, Magnus}, journal={arXiv preprint arXiv:2009.03116}, year={2020} }
1
86
2022-03-02T23:29:22
--- annotations_creators: - crowdsourced language_creators: - crowdsourced - machine-generated language: - sv license: - unknown multilinguality: - monolingual size_categories: - 1K<n<10K source_datasets: - extended|other-sts-b task_categories: - text-classification task_ids: - text-scoring - semantic-similarity-scoring paperswithcode_id: null pretty_name: Swedish Machine Translated STS-B dataset_info: features: - name: sentence1 dtype: string - name: sentence2 dtype: string - name: score dtype: float32 config_name: plain_text splits: - name: test num_bytes: 171823 num_examples: 1379 - name: validation num_bytes: 218843 num_examples: 1500 - name: train num_bytes: 772847 num_examples: 5749 download_size: 383047 dataset_size: 1163513 --- # Dataset Card for Swedish Machine Translated STS-B ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [stsb-mt-sv homepage](https://github.com/timpal0l/sts-benchmark-swedish) - **Repository:** [stsb-mt-sv 
repository](https://github.com/timpal0l/sts-benchmark-swedish) - **Paper:** [Why Not Simply Translate? A First Swedish Evaluation Benchmark for Semantic Similarity](https://arxiv.org/abs/2009.03116) - **Point of Contact:** [Tim Isbister](mailto:timisbisters@gmail.com) ### Dataset Summary This dataset is a Swedish machine-translated version of the STS-B benchmark for semantic textual similarity. ### Supported Tasks and Leaderboards This dataset can be used to evaluate text similarity in Swedish. ### Languages The text in the dataset is in Swedish. The associated BCP-47 code is `sv`. ## Dataset Structure ### Data Instances What a sample looks like: ``` {'score': '4.2', 'sentence1': 'Undrar om jultomten kommer i år pga Corona..?', 'sentence2': 'Jag undrar om jultomen kommer hit i år med tanke på covid-19', } ``` ### Data Fields - `score`: a float representing the semantic similarity score, where 0.0 is the lowest score and 5.0 is the highest - `sentence1`: a string representing a text - `sentence2`: another string to compare the previous text with ### Data Splits The data is split into a training, validation and test set. The final split sizes are as follows: | Train | Valid | Test | | ------ | ----- | ---- | | 5749 | 1500 | 1379 | ## Dataset Creation ### Curation Rationale [Needs More Information] ### Source Data #### Initial Data Collection and Normalization [Needs More Information] #### Who are the source language producers? [Needs More Information] ### Annotations #### Annotation process [Needs More Information] #### Who are the annotators? 
[Needs More Information] ### Personal and Sensitive Information [Needs More Information] ## Considerations for Using the Data ### Social Impact of Dataset [Needs More Information] ### Discussion of Biases [Needs More Information] ### Other Known Limitations [Needs More Information] ## Additional Information ### Dataset Curators The machine-translated version was put together by @timpal0l ### Licensing Information [Needs More Information] ### Citation Information ``` @article{isbister2020not, title={Why Not Simply Translate? A First Swedish Evaluation Benchmark for Semantic Similarity}, author={Isbister, Tim and Sahlgren, Magnus}, journal={arXiv preprint arXiv:2009.03116}, year={2020} } ``` ### Contributions Thanks to [@timpal0l](https://github.com/timpal0l) for adding this dataset.
4,448
[ [ -0.0225067138671875, -0.035980224609375, 0.018402099609375, 0.01520538330078125, -0.0509033203125, -0.005687713623046875, -0.034759521484375, -0.029815673828125, 0.0212249755859375, 0.037017822265625, -0.062042236328125, -0.0860595703125, -0.0550537109375, 0...
urdu_sentiment_corpus
2023-01-25T15:02:01.000Z
[ "task_categories:text-classification", "task_ids:sentiment-classification", "annotations_creators:expert-generated", "language_creators:crowdsourced", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:original", "language:ur", "license:unknown", "region:us" ]
null
“Urdu Sentiment Corpus” (USC) shares data of Urdu tweets for sentiment analysis and polarity detection. The dataset consists of tweets and, overall, comprises over 17,185 tokens, with 52% of records labeled positive and 48% labeled negative.
@inproceedings{khan2020usc, title={Urdu Sentiment Corpus (v1.0): Linguistic Exploration and Visualization of Labeled Datasetfor Urdu Sentiment Analysis.}, author={Khan, Muhammad Yaseen and Nizami, Muhammad Suffian}, booktitle={2020 IEEE 2nd International Conference On Information Science & Communication Technology (ICISCT)}, pages={}, year={2020}, organization={IEEE} }
1
86
2022-03-02T23:29:22
--- annotations_creators: - expert-generated language_creators: - crowdsourced language: - ur license: - unknown multilinguality: - monolingual size_categories: - 1K<n<10K source_datasets: - original task_categories: - text-classification task_ids: - sentiment-classification paperswithcode_id: urdu-sentiment-corpus pretty_name: Urdu Sentiment Corpus (USC) dataset_info: features: - name: sentence dtype: string - name: sentiment dtype: class_label: names: '0': P '1': N '2': O splits: - name: train num_bytes: 161190 num_examples: 1000 download_size: 51583 dataset_size: 161190 --- # Dataset Card for Urdu Sentiment Corpus (USC) ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [Github](https://github.com/MuhammadYaseenKhan/Urdu-Sentiment-Corpus) - **Repository:** [Github](https://github.com/MuhammadYaseenKhan/Urdu-Sentiment-Corpus) - **Paper:** [IEEE](https://ieeexplore.ieee.org/abstract/document/9080043) - **Leaderboard:** - **Point of Contact:** [Muhammad Yaseen 
Khan](https://github.com/MuhammadYaseenKhan) ### Dataset Summary [More Information Needed] ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields - sentence: The Urdu tweet - sentiment: The sentiment exhibited in the tweet, which can be Positive (P), Negative (N), or Objective (O). ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions Thanks to [@chaitnayabasava](https://github.com/chaitnayabasava) for adding this dataset.
3,493
[ [ -0.032318115234375, -0.0147857666015625, 0.00189208984375, 0.050201416015625, -0.027435302734375, 0.027984619140625, -0.0227203369140625, -0.00675201416015625, 0.03350830078125, 0.032501220703125, -0.047515869140625, -0.083740234375, -0.056793212890625, 0.02...
youtube_caption_corrections
2023-01-25T15:03:42.000Z
[ "task_categories:other", "task_categories:text-generation", "task_categories:fill-mask", "task_ids:slot-filling", "annotations_creators:expert-generated", "annotations_creators:machine-generated", "language_creators:machine-generated", "multilinguality:monolingual", "size_categories:10K<n<100K", "...
null
Dataset built from pairs of YouTube captions where both 'auto-generated' and 'manually-corrected' captions are available for a single specified language. This dataset labels two-way (e.g. ignoring single-sided insertions) same-length token differences in the `diff_type` column. The `default_seq` is composed of tokens from the 'auto-generated' captions. When a difference occurs between the 'auto-generated' vs 'manually-corrected' captions types, the `correction_seq` contains tokens from the 'manually-corrected' captions.
null
4
86
2022-03-02T23:29:22
--- annotations_creators: - expert-generated - machine-generated language_creators: - machine-generated language: - en license: - mit multilinguality: - monolingual size_categories: - 10K<n<100K source_datasets: - original task_categories: - other - text-generation - fill-mask task_ids: - slot-filling pretty_name: YouTube Caption Corrections tags: - token-classification-of-text-errors dataset_info: features: - name: video_ids dtype: string - name: default_seq sequence: string - name: correction_seq sequence: string - name: diff_type sequence: class_label: names: '0': NO_DIFF '1': CASE_DIFF '2': PUNCUATION_DIFF '3': CASE_AND_PUNCUATION_DIFF '4': STEM_BASED_DIFF '5': DIGIT_DIFF '6': INTRAWORD_PUNC_DIFF '7': UNKNOWN_TYPE_DIFF '8': RESERVED_DIFF splits: - name: train num_bytes: 355978939 num_examples: 10769 download_size: 222479455 dataset_size: 355978939 --- # Dataset Card for YouTube Caption Corrections ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** 
https://github.com/2dot71mily/youtube_captions_corrections - **Repository:** https://github.com/2dot71mily/youtube_captions_corrections - **Paper:** [N/A] - **Leaderboard:** [N/A] - **Point of Contact:** Emily McMilin ### Dataset Summary This dataset is built from pairs of YouTube captions where both an auto-generated and a manually-corrected caption are available for a single specified language. It is currently only in English, but the scripts at the repo support other languages. The motivation for creating it came from viewing errors in auto-generated captions at a recent virtual conference, with the hope that there could be some way to help correct those errors. The dataset in the repo at https://github.com/2dot71mily/youtube_captions_corrections records in a non-destructive manner all the differences between an auto-generated and a manually-corrected caption for thousands of videos. The dataset here focuses on the subset of those differences which are mutual and have the same size in token length difference, which means it excludes token insertion or deletion differences between the two captions. Therefore the dataset here remains a non-destructive representation of the original auto-generated captions, but excludes some of the differences that are found in the manually-corrected captions. ### Supported Tasks and Leaderboards - `token-classification`: The tokens in `default_seq` are from the auto-generated YouTube captions. If `diff_type` is labeled greater than `0` at a given index, then the associated token in the same index in the `default_seq` was found to be different to the token in the manually-corrected YouTube caption, and therefore we assume it is an error. A model can be trained to learn when there are errors in the auto-generated captions. - `slot-filling`: The `correction_seq` is sparsely populated with tokens from the manually-corrected YouTube captions in the locations where there was found to be a difference to the token in the auto-generated YouTube captions. 
These 'incorrect' tokens in the `default_seq` can be masked in the locations where `diff_type` is labeled greater than `0`, so that a model can be trained to hopefully find a better word to fill in, rather than the 'incorrect' one. End to end, the models could maybe first identify and then replace (with suitable alternatives) errors in YouTube and other auto-generated captions that are lacking manual corrections. ### Languages English ## Dataset Structure ### Data Instances If `diff_type` is labeled greater than `0` at a given index, then the associated token in the same index in the `default_seq` was found to have a difference to the token in the manually-corrected YouTube caption. The `correction_seq` is sparsely populated with tokens from the manually-corrected YouTube captions at those locations of differences. `diff_type` labels for tokens are as follows: 0: No difference 1: Case based difference, e.g. `hello` vs `Hello` 2: Punctuation difference, e.g. `hello` vs `hello,` 3: Case and punctuation difference, e.g. `hello` vs `Hello,` 4: Word difference with same stem, e.g. `thank` vs `thanked` 5: Digit difference, e.g. `2` vs `two` 6: Intra-word punctuation difference, e.g. `autogenerated` vs `auto-generated` 7: Unknown type of difference, e.g. `laughter` vs `draft` 8: Reserved for unspecified difference { 'video_titles': '_QUEXsHfsA0', 'default_seq': ['you', 'see', "it's", 'a', 'laughter', 'but', 'by', 'the', 'time', 'you', 'see', 'this', 'it', "won't", 'be', 'so', 'we', 'have', 'a', 'big'] 'correction_seq': ['', 'see,', '', '', 'draft,', '', '', '', '', '', 'read', 'this,', '', '', 'be.', 'So', '', '', '', ''] 'diff_type': [0, 2, 0, 0, 7, 0, 0, 0, 0, 0, 7, 2, 0, 0, 2, 1, 0, 0, 0, 0] } ### Data Fields - 'video_ids': Unique ID used by YouTube for each video. 
Can paste into `https://www.youtube.com/watch?v={video_ids}` to see the video - 'default_seq': Tokenized auto-generated YouTube captions for the video - 'correction_seq': Tokenized manually-corrected YouTube captions only at those locations where there is a difference between the auto-generated and manually-corrected captions - 'diff_type': A value greater than `0` at every token where there is a difference between the auto-generated and manually-corrected captions ### Data Splits No data splits ## Dataset Creation ### Curation Rationale It was created after viewing errors in auto-generated captions at a recent virtual conference, with the hope that there could be some way to help correct those errors. ### Source Data #### Initial Data Collection and Normalization All captions are requested via `googleapiclient` and `youtube_transcript_api` at the `channel_id` and language granularity, using scripts written at https://github.com/2dot71mily/youtube_captions_corrections. The captions are tokenized on spaces and the manually-corrected sequence has here been reduced to only include differences between it and the auto-generated sequence. #### Who are the source language producers? Auto-generated scripts are from YouTube and the manually-corrected scripts are from creators, and any support they may have (e.g. community or software support) ### Annotations #### Annotation process Scripts at the repo, https://github.com/2dot71mily/youtube_captions_corrections, take a diff of the two captions and use this to create annotations. #### Who are the annotators? YouTube creators, and any support they may have (e.g. 
community or software support) ### Personal and Sensitive Information All content publicly available on YouTube ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators Emily McMilin ### Licensing Information MIT License ### Citation Information https://github.com/2dot71mily/youtube_captions_corrections ### Contributions Thanks to [@2dot71mily](https://github.com/2dot71mily) for adding this dataset.
8,215
[ [ -0.03363037109375, -0.048675537109375, -0.00397491455078125, 0.0229339599609375, -0.0252685546875, 0.012115478515625, -0.0250396728515625, 0.002902984619140625, 0.037872314453125, 0.0214996337890625, -0.064697265625, -0.044891357421875, -0.05694580078125, 0....
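The slot-filling setup described in the YouTube Caption Corrections card above — masking the auto-generated tokens wherever `diff_type` is greater than `0` — can be sketched as follows. This is a minimal illustration using a few values from the card's own example; the `[MASK]` placeholder token is an assumption (it depends on the model's tokenizer), not part of the dataset.

```python
# Mask tokens in the auto-generated caption wherever a difference
# to the manually-corrected caption was labeled (diff_type > 0).
# Sample values are taken from the card's own example record.
default_seq = ["you", "see", "it's", "a", "laughter", "but"]
diff_type = [0, 2, 0, 0, 7, 0]

# Keep tokens with no difference; replace suspected errors with a
# placeholder a fill-mask model could then complete.
masked = [tok if d == 0 else "[MASK]" for tok, d in zip(default_seq, diff_type)]
print(masked)  # ['you', '[MASK]', "it's", 'a', '[MASK]', 'but']
```

A model trained this way would be asked to recover `see,` and `draft,` (the `correction_seq` values) at the masked positions.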
ASCCCCCCCC/amazon_zh
2022-02-17T02:16:59.000Z
[ "license:apache-2.0", "region:us" ]
ASCCCCCCCC
null
null
1
86
2022-03-02T23:29:22
--- license: apache-2.0 --- This is a dataset of Amazon reviews.
70
[ [ -0.026397705078125, -0.00858306884765625, 0.0001062154769897461, 0.021942138671875, -0.0135040283203125, 0.0009489059448242188, 0.0325927734375, -0.0087890625, 0.0310821533203125, 0.0635986328125, -0.06646728515625, -0.04425048828125, -0.005977630615234375, ...
Abirate/code_net_dataset
2021-12-11T17:41:32.000Z
[ "region:us" ]
Abirate
null
null
2
86
2022-03-02T23:29:22
Entry not found
15
[ [ -0.02142333984375, -0.014984130859375, 0.057220458984375, 0.0288238525390625, -0.03509521484375, 0.04656982421875, 0.052520751953125, 0.00506591796875, 0.0513916015625, 0.016998291015625, -0.052093505859375, -0.014984130859375, -0.060455322265625, 0.03793334...
AhmedSSoliman/CoNaLa
2022-01-22T09:34:19.000Z
[ "region:us" ]
AhmedSSoliman
null
null
0
86
2022-03-02T23:29:22
--- task_categories: - Code Generation - Translation - Text2Text generation --- # CoNaLa Dataset for Code Generation ## Table of Contents - [Dataset Description](#dataset-description) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) ## Dataset Description This dataset has been processed for Code Generation. CMU CoNaLa, the Code/Natural Language Challenge, is a joint project of the Carnegie Mellon University NeuLab and STRUDEL Lab. This dataset was designed to test systems for generating program snippets from natural language. It is available at https://conala-corpus.github.io/ , and this is about 13k records from the full corpus of about 600k examples. ### Languages English ## Dataset Structure ### Data Instances A sample from this dataset looks as follows: ```json [ { "intent": "convert a list to a dictionary in python", "snippet": "b = dict(zip(a[0::2], a[1::2]))" }, { "intent": "python - sort a list of nested lists", "snippet": "l.sort(key=sum_nested)" } ] ``` ### Data Fields The dataset has the following fields (also called "features"): ```json { "intent": "Value(dtype='string', id=None)", "snippet": "Value(dtype='string', id=None)" } ``` ### Data Splits This dataset is split into a train, validation and test split. The split sizes are as follows: | Split name | Num samples | | ------------ | ------------------- | | train | 11125 | | valid | 1237 | | test | 500 |
1,593
[ [ -0.023895263671875, -0.047027587890625, -0.003055572509765625, 0.016082763671875, -0.01299285888671875, 0.00559234619140625, -0.02862548828125, -0.00511932373046875, 0.0111083984375, 0.020782470703125, -0.042449951171875, -0.05450439453125, -0.037933349609375, ...
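The first sample snippet in the CoNaLa card above is directly runnable; here it is executed on a made-up input list `a` (the corpus pairs intents with snippets but does not ship example inputs, so the data below is only illustrative).

```python
# The card's first intent/snippet pair: convert a flat list of
# alternating keys and values into a dictionary.
# `a` is a hypothetical input, not taken from the corpus.
a = ["name", "Ada", "year", 1843]
b = dict(zip(a[0::2], a[1::2]))  # keys from even indices, values from odd
print(b)  # {'name': 'Ada', 'year': 1843}
```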
Aisha/BAAD16
2022-10-22T05:31:54.000Z
[ "task_categories:text-classification", "task_ids:multi-class-classification", "annotations_creators:found", "annotations_creators:crowdsourced", "annotations_creators:expert-generated", "language_creators:found", "language_creators:crowdsourced", "multilinguality:monolingual", "source_datasets:origi...
Aisha
null
null
0
86
2022-03-02T23:29:22
--- annotations_creators: - found - crowdsourced - expert-generated language_creators: - found - crowdsourced language: - bn license: - cc-by-4.0 multilinguality: - monolingual pretty_name: 'BAAD16: Bangla Authorship Attribution Dataset (16 Authors)' source_datasets: - original task_categories: - text-classification task_ids: - multi-class-classification --- ## Description **BAAD16** is an **Authorship Attribution dataset for Bengali Literature**. It was collected and analyzed by the authors of [this paper](https://arxiv.org/abs/2001.05316). It was created by scraping text from an online Bangla e-library using a custom web crawler and contains literary works of various famous Bangla writers. It contains novels, stories, series, and other works of 16 authors. Each sample document is created with 750 words. The dataset is imbalanced and resembles real-world scenarios more closely, where not all the authors will have a large number of sample texts. The following table gives more details about the dataset.

| Author Name | Number of Samples | Word Count | Unique Words |
| --- | --- | --- | --- |
| zahir rayhan | 185 | 138k | 20k |
| nazrul | 223 | 167k | 33k |
| manik bandhopaddhay | 469 | 351k | 44k |
| nihar ronjon gupta | 476 | 357k | 43k |
| bongkim | 562 | 421k | 62k |
| tarashonkor | 775 | 581k | 84k |
| shottojit roy | 849 | 636k | 67k |
| shordindu | 888 | 666k | 84k |
| toslima nasrin | 931 | 698k | 76k |
| shirshendu | 1048 | 786k | 69k |
| zafar iqbal | 1100 | 825k | 53k |
| robindronath | 1259 | 944k | 89k |
| shorotchandra | 1312 | 984k | 78k |
| shomresh | 1408 | 1056k | 69k |
| shunil gongopaddhay | 1963 | 1472k | 109k |
| humayun ahmed | 4518 | 3388k | 161k |
| **Total** | 17,966 | 13,474,500 | 590,660 |
| **Average** | 1,122.875 | 842,156.25 | 71,822.25 |

## Citation If you use this dataset, please cite the paper [Authorship Attribution in Bangla literature using Character-level CNN](https://ieeexplore.ieee.org/abstract/document/9038560/). [Archive link](https://arxiv.org/abs/2001.05316). 
``` @inproceedings{BAAD16Dataset, title={Authorship Attribution in Bangla literature using Character-level CNN}, author={Khatun, Aisha and Rahman, Anisur and Islam, Md Saiful and others}, booktitle={2019 22nd International Conference on Computer and Information Technology (ICCIT)}, pages={1--5}, year={2019}, organization={IEEE}, doi={10.1109/ICCIT48885.2019.9038560} } ``` This dataset is also available in Mendeley: [BAAD16 dataset](https://data.mendeley.com/datasets/6d9jrkgtvv/4). Always make sure to use the latest version of the dataset. Cite the dataset directly by: ``` @misc{BAAD6Dataset, author = {Khatun, Aisha and Rahman, Anisur and Islam, Md. Saiful}, title = {BAAD16: Bangla Authorship Attribution Dataset}, year={2019}, doi = {10.17632/6d9jrkgtvv.4}, howpublished= {\url{https://data.mendeley.com/datasets/6d9jrkgtvv/4}} } ```
3,171
[ [ -0.00609588623046875, -0.0107574462890625, 0.012725830078125, 0.013641357421875, -0.0198516845703125, 0.00867462158203125, 0.0171661376953125, -0.0302886962890625, 0.0157012939453125, 0.0274505615234375, -0.02777099609375, -0.0236968994140625, -0.048065185546875...
Akila/ForgottenRealmsWikiDataset
2022-12-18T12:28:34.000Z
[ "region:us" ]
Akila
null
null
2
86
2022-03-02T23:29:22
## Citing this work @inproceedings{peiris2022synthesis, title={{Synthesis and Evaluation of a Domain-specific Large Data Set for Dungeons \& Dragons}}, author={Akila Peiris and Nisansa de Silva}, booktitle={Proceedings of the 36th Pacific Asia Conference on Language, Information and Computation}, pages={to appear}, year={2022} }
368
[ [ -0.0198516845703125, -0.03216552734375, 0.02313232421875, 0.027008056640625, -0.007293701171875, -0.0019664764404296875, -0.00905609130859375, -0.0266265869140625, 0.0295257568359375, 0.042877197265625, -0.0447998046875, -0.037109375, -0.0131072998046875, 0....
Check/region_2
2021-09-04T11:04:11.000Z
[ "region:us" ]
Check
null
null
0
86
2022-03-02T23:29:22
Entry not found
15
[ [ -0.0213775634765625, -0.014984130859375, 0.05718994140625, 0.0288543701171875, -0.0350341796875, 0.046478271484375, 0.052520751953125, 0.005062103271484375, 0.051361083984375, 0.016998291015625, -0.0521240234375, -0.01496124267578125, -0.0604248046875, 0.037...
Check/region_3
2021-09-04T11:05:41.000Z
[ "region:us" ]
Check
null
null
0
86
2022-03-02T23:29:22
Entry not found
15
[ [ -0.0213775634765625, -0.014984130859375, 0.05718994140625, 0.0288543701171875, -0.0350341796875, 0.046478271484375, 0.052520751953125, 0.005062103271484375, 0.051361083984375, 0.016998291015625, -0.0521240234375, -0.01496124267578125, -0.0604248046875, 0.037...
Check/region_4
2021-09-04T11:06:52.000Z
[ "region:us" ]
Check
null
null
0
86
2022-03-02T23:29:22
Entry not found
15
[ [ -0.0213775634765625, -0.014984130859375, 0.05718994140625, 0.0288543701171875, -0.0350341796875, 0.046478271484375, 0.052520751953125, 0.005062103271484375, 0.051361083984375, 0.016998291015625, -0.0521240234375, -0.01496124267578125, -0.0604248046875, 0.037...
Check/region_5
2021-09-04T11:07:26.000Z
[ "region:us" ]
Check
null
null
0
86
2022-03-02T23:29:22
Entry not found
15
[ [ -0.02142333984375, -0.01495361328125, 0.05718994140625, 0.0288238525390625, -0.035064697265625, 0.046539306640625, 0.052520751953125, 0.005062103271484375, 0.0513916015625, 0.016998291015625, -0.052093505859375, -0.014984130859375, -0.060394287109375, 0.0379...
Check/region_6
2021-09-04T11:08:02.000Z
[ "region:us" ]
Check
null
null
0
86
2022-03-02T23:29:22
Entry not found
15
[ [ -0.02142333984375, -0.01495361328125, 0.05718994140625, 0.0288238525390625, -0.035064697265625, 0.046539306640625, 0.052520751953125, 0.005062103271484375, 0.0513916015625, 0.016998291015625, -0.052093505859375, -0.014984130859375, -0.060394287109375, 0.0379...
Check/region_7
2021-09-04T11:08:48.000Z
[ "region:us" ]
Check
null
null
0
86
2022-03-02T23:29:22
Entry not found
15
[ [ -0.02142333984375, -0.01495361328125, 0.05718994140625, 0.0288238525390625, -0.035064697265625, 0.046539306640625, 0.052520751953125, 0.005062103271484375, 0.0513916015625, 0.016998291015625, -0.052093505859375, -0.014984130859375, -0.060394287109375, 0.0379...
Check/region_8
2021-09-04T11:09:53.000Z
[ "region:us" ]
Check
null
null
0
86
2022-03-02T23:29:22
Entry not found
15
[ [ -0.02142333984375, -0.01495361328125, 0.05718994140625, 0.0288238525390625, -0.035064697265625, 0.046539306640625, 0.052520751953125, 0.005062103271484375, 0.0513916015625, 0.016998291015625, -0.052093505859375, -0.014984130859375, -0.060394287109375, 0.0379...
Check/regions
2021-08-31T14:34:50.000Z
[ "region:us" ]
Check
null
null
0
86
2022-03-02T23:29:22
Entry not found
15
[ [ -0.02142333984375, -0.01495361328125, 0.05718994140625, 0.0288238525390625, -0.035064697265625, 0.046539306640625, 0.052520751953125, 0.005062103271484375, 0.0513916015625, 0.016998291015625, -0.052093505859375, -0.014984130859375, -0.060394287109375, 0.0379...
Check/vverify
2021-09-11T05:13:10.000Z
[ "region:us" ]
Check
null
null
0
86
2022-03-02T23:29:22
Entry not found
15
[ [ -0.02142333984375, -0.01495361328125, 0.05718994140625, 0.0288238525390625, -0.035064697265625, 0.046539306640625, 0.052520751953125, 0.005062103271484375, 0.0513916015625, 0.016998291015625, -0.052093505859375, -0.014984130859375, -0.060394287109375, 0.0379...
Cropinky/wow_fishing_bobber
2021-06-30T22:14:04.000Z
[ "region:us" ]
Cropinky
null
null
0
86
2022-03-02T23:29:22
## Wow fishing bobber object detection dataset Hello, in this zip you will find 160 annotated images, each containing 1 fishing bobber from World of Warcraft. I think this is an easy object detection dataset; my YOLOv3 network was trained on it for 2000 iterations and achieved a loss of 0.05. It worked flawlessly, classifying the newly generated images (also from the Wailing Caverns zone). I haven't tested how it would work outdoors or at other fishing locations in the game, pozz.
495
[ [ -0.06939697265625, -0.06494140625, 0.0214996337890625, -0.0250091552734375, -0.041046142578125, -0.0247344970703125, 0.0313720703125, -0.037139892578125, -0.00775146484375, 0.031890869140625, -0.05535888671875, -0.048065185546875, -0.048980712890625, 0.02163...
DDSC/reddit-da-asr-preprocessed
2022-02-15T19:17:08.000Z
[ "region:us" ]
DDSC
null
null
0
86
2022-03-02T23:29:22
Entry not found
15
[ [ -0.02142333984375, -0.01495361328125, 0.05718994140625, 0.0288238525390625, -0.035064697265625, 0.046539306640625, 0.052520751953125, 0.005062103271484375, 0.0513916015625, 0.016998291015625, -0.052093505859375, -0.014984130859375, -0.060394287109375, 0.0379...
Davlan/conll2003_noMISC
2022-02-03T19:00:25.000Z
[ "region:us" ]
Davlan
null
null
0
86
2022-03-02T23:29:22
Entry not found
15
[ [ -0.02142333984375, -0.01495361328125, 0.05718994140625, 0.0288238525390625, -0.035064697265625, 0.046539306640625, 0.052520751953125, 0.005062103271484375, 0.0513916015625, 0.016998291015625, -0.052093505859375, -0.014984130859375, -0.060394287109375, 0.0379...
DelgadoPanadero/Pokemon
2022-01-03T10:10:40.000Z
[ "region:us" ]
DelgadoPanadero
null
null
3
86
2022-03-02T23:29:22
# Pokemon Dataset This dataset contains a text representation of more that 10k pokemon sprites from different pokemon videogames (red, yellow, gold, ruby,...). The original images are from 40 to 96 pixel of size and every pixel is represented with an ASCII character depending to its color. # Supported Tasks * Text Generation # Languages * ASCII colo representation # Data Fields ``` {'pokemon': pokemon sprite in ASCII representation 'game': videogame in witch the sprite appears 'size': pixel size 'number': number of the pokemon} ``` # License * All the creative right are property of Nintendo # Preview ``` 00 ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ 01 ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ 02 ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ 03 ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ 04 ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ 05 ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ 06 ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ; ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ 07 ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ; ; ; ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ 08 ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ; ; ; ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ 09 ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ; P ; ; ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ 10 ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ; P P ; ; ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ; ; ; 
~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ 11 ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ; P P P ; ; ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ; ; ; P P ; ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ 12 ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ; P P P F ; ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ; ; P P P P ; ; ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ 13 ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ; P P J J ; ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ F F J P P P P ; ; ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ 14 ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ; J J J F ; ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ F J J J P P P ; ; ; ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ 15 ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ F J J J F ; ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ F F J J J J F P ; ; ; ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ 16 ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ F J J J F ; ~ ~ ~ ~ ~ ~ ~ ~ ~ A J J J J J J J ; ; ; ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ 17 ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ F J F F ; F F F F F F F A A J J J J J J J F ; ; ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ 18 ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ F F F F F Z Z Z Z Z J J F J J J J J J F F ; ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ 19 ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ A F Z Z Z Z Z Z Z Z J J J J J J F F ; ; ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ F F ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ 20 ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ F J J Z Z Z Z Z Z Z Z J J J J J F A ; ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ F F Z J A ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ 21 ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ A J J J J Z Z Z Z Z J F ; ; F J J F A ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ F Z J J J A ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ 22 ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ F F ; ; J J J J J J J ; ~ ; ; J J F F ; ~ ~ ~ ~ ~ ~ ~ ~ F Z J J J F A ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ 23 ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ F J ; ~ ; J J J J J J J ; ; P ; J J F F ; ~ ~ ~ ~ ~ ~ F F Z J J F F F F A ~ ~ ~ ~ ~ ~ ~ ~ ~ 24 ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ A J ; ; P J J J J J J J F ; ; F J J F F A ~ ~ ~ ~ ~ F Z Z J F F F F F F A ~ ~ ~ ~ ~ ~ ~ ~ ~ 25 ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ A F J F ; F J J A F J J J J J J J > > F F F ; ~ ~ F F J J J F F F F F F F A ~ ~ ~ ~ ~ ~ ~ ~ ~ 26 ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ 
~ ; R J J J J F J J J J J F J J J > > > = F F ; A A J J F F F F F F F F F F A ~ ~ ~ ~ ~ ~ ~ ~ ~ 27 ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ; > F J J J J F = = = = F J J J > > > = F A A Z F A F F F F F F F F F F F A ~ ~ ~ ~ ~ ~ ~ ~ ~ 28 ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ A ~ ; = J J J J J = = R R J J J J > > = = A Z F J Z Z A F F F F F F F F F F ; ~ ~ ~ ~ ~ ~ ~ ~ ~ 29 ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ A Z A A = J J J J J = R R = J J J J J = = F A J J J J F A F F F F F F F F A ; ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ 30 ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ A Z J J F A J J J J J J = = J J J J J J J F A J J J J J J ; F F F F F F A ; ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ 31 ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ F J J F F A J J J J J J J J J J J J J J A J J J J F F ; F F F F F A ; ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ 32 ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ A F F F F F F J J J J J J J J J J J J J J J J J F F F ; F F F F A ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ 33 ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ; F F F F J J J J J J J J J J J J J J J J J F F F ; A F F F ; ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ 34 ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ; F F F J J J J J J J J J J J J J J J J F F F ; ~ ~ A F F F ; ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ 35 ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ; A J J J J J J J J J J J J J J J J F F F F ; ~ ~ ~ A F F F ; ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ 36 ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ F J J J J J J J J J J J J J J J J F F F A ~ ~ A A F F F F F ; ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ 37 ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ A J J J J J J J J J J J J J J J J J F F F A ; ; J F F F F A ; ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ 38 ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ F J J J J J J J J J J J J J J J J J F F F A J F F F F A ; ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ 39 ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ A J J J J J J J J J J J J J J J J F F F F A F F F F ; ; ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ 40 ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ A A A J J J J J J J J J J J J J J F F F F F ; F F F ; ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ 41 ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ A Z Z F A J J J J J J J J J J J J J F F F F F ; ; F F F ; ~ ~ ~ ~ 
~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ 42 ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ F F J F J A J J J J J J J J J J J F F F F F F ; ~ ; F F F ; ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ 43 ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ A F F F F F J J J J J J J J J F F F F F F F F ; ; A A A F F ; ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ 44 ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ F F F F F A F F J J J J F F F F F F F F F F A A A A ; ; ; ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ 45 ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ A F F F A F F F F F F F F F F F F F F F A A ; ; ; ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ 46 ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ; F F F ; F F F F F F F F F F F F F F F ; ; ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ 47 ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ A ; ; F F F F F F F F F F F F F F F F ; ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ 48 ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ; ; ; ; A F F F F F F F F F F ; ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ 49 ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ; ; A F F F F F A ; ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ 50 ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ; ; F F ; ; ; ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ 51 ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ A F F F F ; ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ 52 ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ A F A F F ; ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ 53 ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ A F F F ; ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ 54 ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ; ; ; ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ 55 ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ 56 ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ 57 ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ 
~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ 58 ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ 59 ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ 60 ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ 61 ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ 62 ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ 63 ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ```
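Each preview row above is a row number followed by space-separated pixel characters, so a sprite string can be turned back into a 2D pixel grid with a few lines of Python. This is a minimal sketch, assuming the `pokemon` field stores one newline-separated line per pixel row, as in the preview:

```python
def parse_sprite(sprite_text):
    """Parse a row-numbered, space-separated ASCII sprite into a 2D
    grid of pixel characters (one inner list per pixel row)."""
    grid = []
    for row in sprite_text.strip().splitlines():
        cells = row.split()
        grid.append(cells[1:])  # drop the leading row number
    return grid

# Tiny 2x3 toy example, not a real sprite from the dataset:
demo = "00 ~ ~ ;\n01 ~ P ~"
print(parse_sprite(demo))  # [['~', '~', ';'], ['~', 'P', '~']]
```

From the grid, the background character `~` can be mapped to transparency and the remaining characters to a palette when rendering a sprite back into an image.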
8,885
[ [ -0.0574951171875, -0.03839111328125, 0.0220489501953125, 0.00583648681640625, -0.00024962425231933594, -0.005878448486328125, 0.000919342041015625, -0.01146697998046875, 0.04144287109375, 0.0204620361328125, -0.021697998046875, -0.0199127197265625, -0.0477294921...
Emanuel/UD_Portuguese-Bosque
2022-10-25T08:54:18.000Z
[ "language:pt", "region:us" ]
Emanuel
null
null
1
86
2022-03-02T23:29:22
--- language: - pt --- # AutoNLP Dataset for project: pos-tag-bosque ## Table of Contents - [Dataset Description](#dataset-description) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) ## Dataset Description This dataset has been automatically processed by AutoNLP for project pos-tag-bosque. ### Languages The BCP-47 code for the dataset's language is pt. ## Dataset Structure ### Data Instances A sample from this dataset looks as follows: ```json [ { "tags": [ 5, 7, 0 ], "tokens": [ "Um", "revivalismo", "refrescante" ] }, { "tags": [ 5, 11, 11, 11, 3, 5, 7, 1, 5, 7, 0, 12 ], "tokens": [ "O", "7", "e", "Meio", "\u00e9", "um", "ex-libris", "de", "a", "noite", "algarvia", "." ] } ] ``` ### Data Fields The dataset has the following fields (also called "features"): ```json { "tags": "Sequence(feature=ClassLabel(num_classes=17, names=['ADJ', 'ADP', 'ADV', 'AUX', 'CCONJ', 'DET', 'INTJ', 'NOUN', 'NUM', 'PART', 'PRON', 'PROPN', 'PUNCT', 'SCONJ', 'SYM', 'VERB', 'X'], names_file=None, id=None), length=-1, id=None)", "tokens": "Sequence(feature=Value(dtype='string', id=None), length=-1, id=None)" } ``` ### Data Splits This dataset is split into a train and validation split. The split sizes are as follows: | Split name | Num samples | | ------------ | ------------------- | | train | 8328 | | valid | 476 |
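Since `tags` stores integer class IDs, they can be decoded back to POS label strings with the `names` list of the `ClassLabel` feature shown above. A minimal standalone sketch (when loading through the `datasets` library, `ClassLabel.int2str` performs the same mapping):

```python
# POS tag names copied from the ClassLabel feature above.
NAMES = ['ADJ', 'ADP', 'ADV', 'AUX', 'CCONJ', 'DET', 'INTJ', 'NOUN', 'NUM',
         'PART', 'PRON', 'PROPN', 'PUNCT', 'SCONJ', 'SYM', 'VERB', 'X']

def decode_tags(tag_ids):
    """Map integer tag IDs back to their POS label strings."""
    return [NAMES[i] for i in tag_ids]

# First sample from the card above:
sample = {"tags": [5, 7, 0], "tokens": ["Um", "revivalismo", "refrescante"]}
for token, label in zip(sample["tokens"], decode_tags(sample["tags"])):
    print(token, label)  # Um DET / revivalismo NOUN / refrescante ADJ
```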
1,705
[ [ -0.0347900390625, 0.00493621826171875, -0.00412750244140625, 0.0147247314453125, -0.02001953125, 0.0185394287109375, -0.0103912353515625, -0.0167236328125, 0.025054931640625, 0.0289764404296875, -0.0259552001953125, -0.06964111328125, -0.03924560546875, 0.01...
FIG-Loneliness/FIG-Loneliness
2022-07-14T23:14:43.000Z
[ "region:us" ]
FIG-Loneliness
null
null
1
86
2022-03-02T23:29:22
# Dataset Card for FIG-Loneliness ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [FIG-Loneliness](https://ojs.aaai.org/index.php/ICWSM/article/view/19302) - **Paper:** [Many Ways to be Lonely](https://ojs.aaai.org/index.php/ICWSM/article/view/19302/19074) - **Point of Contact:** [Sherry Yueyi Jiang](mailto:yujiang@ucsd.edu) ### Dataset Summary FIG-Loneliness is a dataset for fine-grained loneliness characterization and model training. This dataset consists of 2633 lonely and 3000 non-lonely Reddit posts annotated by trained human annotators. For the lonely posts, we provide fine-grained category labels for the forms of loneliness including duration, context and interpersonal relationships, and for the coping strategies of the authors including reaching out, seeking advice, seeking validation and non-directed interaction. ### Supported Tasks and Leaderboards [More Information Needed] ### Languages The language supported is English. 
## Dataset Structure ### Loading To load the dataset, first clone this dataset repo: ```bash git clone https://huggingface.co/datasets/FIG-Loneliness/FIG-Loneliness ``` Then we can load the datasets using the Hugging Face Datasets API: ```python import os import datasets as hf_ds ROOT = "dir/to/data/repo" # load datasets train_set = hf_ds.load_from_disk(os.path.join(ROOT, "train_set")) dev_set = hf_ds.load_from_disk(os.path.join(ROOT, "dev_set")) test_set = hf_ds.load_from_disk(os.path.join(ROOT, "test_set")) ``` ### Data Instances The `train_set` split contains 3,943 instances. The `dev_set` split contains 1,126 instances. The `test_set` split contains 564 instances. ### Data Fields Each instance contains 8 fields: `idx`, `unique_id`, `text`, `lonely`, `temporal`, `interaction`, `context_pri`, and `interpersonal_pri`. | Field | Meaning | |:---:|:---:| | `idx` | Integer index of this instance from our scraped Reddit posts. | | `unique_id` | Unique ID of this Reddit post. | | `text` | Textual content of the Reddit post. | | `lonely` | 2-len one-hot vector, representing [non-lonely, lonely]. | | `temporal` | **Duration**. 4-len vector summarizing human annotators' votes in the order of [transient, enduring, ambiguous, NA]. | | `interaction` | **Interaction**. 5-len vector summarizing human annotators' votes in the order of [seeking advice, providing help, seeking validation and affirmation, reaching out, non-directed interaction]. | | `context_pri` | **Context**. 5-len vector summarizing human annotators' votes in the order of [social, physical, somatic, romantic, N/A]. | | `interpersonal_pri` | **Interpersonal**. 5-len vector summarizing human annotators' votes in the order of [romantic, friendship, family, colleagues, N/A]. | ### Data Splits The entire dataset is split into 3,943 training instances, 1,126 dev instances, and 564 test instances. ## Dataset Creation ### Curation Rationale The data curation rationale is to capture **loneliness expressions** not only from a wider user base but also from users who specifically belong to the young adult age group (a vulnerable group for loneliness). We sampled data from Reddit and subsequently annotated the data with loneliness labels. ### Source Data #### Initial Data Collection and Normalization By using Reddit’s Pushshift API, we collected all posts from two loneliness-specific subreddits (*r/loneliness*, *r/lonely*) and two subreddits for young adults (*r/youngadults*, *r/college*) from 2018 to 2020. #### Who are the source language producers? [More Information Needed] ### Annotations #### Who are the annotators? Annotation labels were provided by trained undergraduate research assistants and Amazon’s Mechanical Turk workers (MTurkers) with a Master certification. #### Annotation process For the potential lonely samples: We had research assistants label the sampled potential lonely posts. Each post was labeled by three research assistants. A post was first labeled on whether it contains an expression of self-disclosure of loneliness.
If the majority of the annotators labeled a post as not containing such an expression, the post was discarded; otherwise, it was further labeled according to a codebook that contains the following categories: (1) *duration*: the duration of the loneliness experience (transient, enduring, and ambiguous), (2) *context*: the contexts of the experience (social, physical, somatic, and romantic), (3) *interpersonal*: the interpersonal relationships involved in the experience (romantic, family, friendship, and peers), and (4) *interaction*: user interaction styles (seeking advice, providing support, seeking validation/affirmation, reaching out, and non-directed interaction). The codebook is intended for dissecting different forms of loneliness and users’ coping strategies in the loneliness discourse. We also included a ‘not applicable’ (NA) label to accommodate situations that are not suitable for classification. For each category, the annotators gave *one value* that they thought would best capture the source of loneliness in the post or the poster’s interaction intent. For the potential non-lonely samples: MTurkers were instructed to classify whether the Reddit posters express loneliness. Each post was annotated by three MTurkers, and only posts labeled as non-lonely by the majority remained in the final annotated dataset. All the labeled posts and annotations were included in FIG-Loneliness, which consists of roughly 3000 lonely and 3000 non-lonely posts. #### Dataset Codebook See the coding rules and example posts for the category labels [here](https://drive.google.com/file/d/1J6i72qyqirAIC40jWuJvDN-ZWU87XttH/view).
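Because the fine-grained fields store raw annotator vote counts rather than a single label, a per-category label can be recovered by majority (arg-max) vote. A minimal sketch, using the `temporal` category names from the data-fields table above (ties are broken by taking the first maximum; a real pipeline may want to handle ties and all-NA vectors explicitly):

```python
TEMPORAL = ["transient", "enduring", "ambiguous", "NA"]

def majority_label(votes, names):
    """Return the category name receiving the most annotator votes
    (the first maximum wins on ties)."""
    best = max(range(len(votes)), key=lambda i: votes[i])
    return names[best]

# e.g. a `temporal` vector where all three annotators chose "enduring":
print(majority_label([0, 3, 0, 0], TEMPORAL))  # enduring
```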
### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations See the **Limitation and Data Disclaimer** [here](https://ojs.aaai.org/index.php/ICWSM/article/view/19302/19074) ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [CC BY-NC 4.0](https://creativecommons.org/licenses/by-nc/4.0/) ### Citation Information Jiang, Y., Jiang, Y., Leqi, L., & Winkielman, P. (2022). Many Ways to Be Lonely: Fine-Grained Characterization of Loneliness and Its Potential Changes in COVID-19. Proceedings of the International AAAI Conference on Web and Social Media, 16(1), 405-416. Retrieved from https://ojs.aaai.org/index.php/ICWSM/article/view/19302
8,774
[ [ -0.04241943359375, -0.051483154296875, 0.0302886962890625, 0.05169677734375, -0.0300750732421875, -0.0214691162109375, -0.0206451416015625, -0.039886474609375, 0.052337646484375, 0.0118408203125, -0.061126708984375, -0.068603515625, -0.0301513671875, 0.03115...
GEM/CrossWOZ
2022-10-24T15:29:55.000Z
[ "task_categories:conversational", "annotations_creators:none", "language_creators:unknown", "multilinguality:unknown", "size_categories:unknown", "source_datasets:original", "language:zh", "license:apache-2.0", "dialog-response-generation", "region:us" ]
GEM
CrossWOZ is the first large-scale Chinese Cross-Domain Wizard-of-Oz task-oriented dataset. It contains 6K dialogue sessions and 102K utterances for 5 domains, including hotel, restaurant, attraction, metro, and taxi. Moreover, the corpus contains rich annotation of dialogue states and dialogue acts on both the user and system sides.
@article{zhu2020crosswoz, author = {Qi Zhu and Kaili Huang and Zheng Zhang and Xiaoyan Zhu and Minlie Huang}, title = {Cross{WOZ}: A Large-Scale Chinese Cross-Domain Task-Oriented Dialogue Dataset}, journal = {Transactions of the Association for Computational Linguistics}, year = {2020} }
5
86
2022-03-02T23:29:22
--- annotations_creators: - none language_creators: - unknown language: - zh license: - apache-2.0 multilinguality: - unknown size_categories: - unknown source_datasets: - original task_categories: - conversational task_ids: [] pretty_name: CrossWOZ tags: - dialog-response-generation --- # Dataset Card for GEM/CrossWOZ ## Dataset Description - **Homepage:** https://github.com/thu-coai/CrossWOZ - **Repository:** https://github.com/thu-coai/CrossWOZ - **Paper:** https://aclanthology.org/2020.tacl-1.19 - **Leaderboard:** N/A - **Point of Contact:** Qi Zhu ### Link to Main Data Card You can find the main data card on the [GEM Website](https://gem-benchmark.com/data_cards/CrossWOZ). ### Dataset Summary CrossWOZ is a Chinese multi-domain task-oriented dialogue dataset. It contains 6K dialogue sessions and 102K utterances for 5 domains, including hotel, restaurant, attraction, metro, and taxi. About 60% of the dialogues have cross-domain user goals that favor inter-domain dependency and encourage natural transition across domains in conversation. You can load the dataset via: ``` import datasets data = datasets.load_dataset('GEM/CrossWOZ') ``` The data loader can be found [here](https://huggingface.co/datasets/GEM/CrossWOZ). #### website [Github](https://github.com/thu-coai/CrossWOZ) #### paper [ACL Anthology](https://aclanthology.org/2020.tacl-1.19) #### authors Qi Zhu, Kaili Huang, Zheng Zhang, Xiaoyan Zhu, and Minlie Huang from CoAI group, Tsinghua University ## Dataset Overview ### Where to find the Data and its Documentation #### Webpage <!-- info: What is the webpage for the dataset (if it exists)? --> <!-- scope: telescope --> [Github](https://github.com/thu-coai/CrossWOZ) #### Download <!-- info: What is the link to where the original dataset is hosted? --> <!-- scope: telescope --> [Github](https://github.com/thu-coai/CrossWOZ) #### Paper <!-- info: What is the link to the paper describing the dataset (open access preferred)?
--> <!-- scope: telescope --> [ACL Anthology](https://aclanthology.org/2020.tacl-1.19) #### BibTex <!-- info: Provide the BibTex-formatted reference for the dataset. Please use the correct published version (ACL anthology, etc.) instead of google scholar created Bibtex. --> <!-- scope: microscope --> ``` @article{zhu-etal-2020-crosswoz, title = "{C}ross{WOZ}: A Large-Scale {C}hinese Cross-Domain Task-Oriented Dialogue Dataset", author = "Zhu, Qi and Huang, Kaili and Zhang, Zheng and Zhu, Xiaoyan and Huang, Minlie", journal = "Transactions of the Association for Computational Linguistics", volume = "8", year = "2020", url = "https://aclanthology.org/2020.tacl-1.19", doi = "10.1162/tacl_a_00314", pages = "281--295", abstract = "To advance multi-domain (cross-domain) dialogue modeling as well as alleviate the shortage of Chinese task-oriented datasets, we propose CrossWOZ, the first large-scale Chinese Cross-Domain Wizard-of-Oz task-oriented dataset. It contains 6K dialogue sessions and 102K utterances for 5 domains, including hotel, restaurant, attraction, metro, and taxi. Moreover, the corpus contains rich annotation of dialogue states and dialogue acts on both user and system sides. About 60{\%} of the dialogues have cross-domain user goals that favor inter-domain dependency and encourage natural transition across domains in conversation. We also provide a user simulator and several benchmark models for pipelined task-oriented dialogue systems, which will facilitate researchers to compare and evaluate their models on this corpus. The large size and rich annotation of CrossWOZ make it suitable to investigate a variety of tasks in cross-domain dialogue modeling, such as dialogue state tracking, policy learning, user simulation, etc.", } ``` #### Contact Name <!-- quick --> <!-- info: If known, provide the name of at least one person the reader can contact for questions about the dataset. 
--> <!-- scope: periscope --> Qi Zhu #### Contact Email <!-- info: If known, provide the email of at least one person the reader can contact for questions about the dataset. --> <!-- scope: periscope --> zhuq96@gmail.com #### Has a Leaderboard? <!-- info: Does the dataset have an active leaderboard? --> <!-- scope: telescope --> no ### Languages and Intended Use #### Multilingual? <!-- quick --> <!-- info: Is the dataset multilingual? --> <!-- scope: telescope --> no #### Covered Languages <!-- quick --> <!-- info: What languages/dialects are covered in the dataset? --> <!-- scope: telescope --> `Chinese` #### License <!-- quick --> <!-- info: What is the license of the dataset? --> <!-- scope: telescope --> apache-2.0: Apache License 2.0 #### Intended Use <!-- info: What is the intended use of the dataset? --> <!-- scope: microscope --> CrossWOZ is the first large-scale Chinese Cross-Domain Wizard-of-Oz task-oriented dataset. It contains 6K dialogue sessions and 102K utterances for 5 domains, including hotel, restaurant, attraction, metro, and taxi. Moreover, the corpus contains rich annotation of dialogue states and dialogue acts on both the user and system sides. We also provide a user simulator and several benchmark models for pipelined task-oriented dialogue systems, which will facilitate researchers to compare and evaluate their models on this corpus. #### Primary Task <!-- info: What primary task does the dataset support? --> <!-- scope: telescope --> Dialog Response Generation #### Communicative Goal <!-- quick --> <!-- info: Provide a short description of the communicative goal of a model trained for this task on this dataset. --> <!-- scope: periscope --> Generate a response according to the dialog context and database search results. ### Credit #### Curation Organization Type(s) <!-- info: In what kind of organization did the dataset curation happen? --> <!-- scope: telescope --> `academic` #### Curation Organization(s) <!-- info: Name the organization(s).
--> <!-- scope: periscope --> Tsinghua University #### Dataset Creators <!-- info: Who created the original dataset? List the people involved in collecting the dataset and their affiliation(s). --> <!-- scope: microscope --> Qi Zhu, Kaili Huang, Zheng Zhang, Xiaoyan Zhu, and Minlie Huang from CoAI group, Tsinghua University #### Funding <!-- info: Who funded the data creation? --> <!-- scope: microscope --> National Science Foundation of China, National Key R&D Program of China #### Who added the Dataset to GEM? <!-- info: Who contributed to the data card and adding the dataset to GEM? List the people+affiliations involved in creating this data card and who helped integrate this dataset into GEM. --> <!-- scope: microscope --> Qi Zhu (Tsinghua University) ### Dataset Structure #### Data Fields <!-- info: List and describe the fields present in the dataset. --> <!-- scope: telescope --> - `gem_id` (string): GEM-CrossWOZ-{split}-{id} - `dialog_id` (string): dialog ID - `sys_id` (string): system annotator ID - `usr_id` (string): user annotator ID - `type` (string): dialog type - `task description` (list of strings): natural language descriptions of the user goal - `goal` (list of tuples), includes: - `sub-goal id` (string) - `domain name` (string) - `slot name` (string) - `constraint` if filled, else `requirement` (string) - `whether be mentioned in previous turns` (string) - `messages` (list of dict): dialog turns. Each turn contains: - `content` (string): utterance - `role` (string): user or system - `dialog_act` (list of tuples), includes: - `domain` (string) - `intent` (string) - `slot` (string) - `value` (string) - `user_state` (list of tuples): same format as "goal", can be viewed as dynamic goal. - `sys_state_init` (dict): the first db query emitted, records user constraints faithfully. If the system finds no result that matches, he/she may relax the constraints manually and search the db multiple times.
- `domain` (dict): slot(string)-value(string) pairs - `selectedResults` (list of string): db search result that would be used in this turn. - `sys_state` (dict): the final db query emitted, records the db used by the system in this turn. Same format as sys_state_init. Note that this may not satisfy all user constraints. - `final_goal` (list of tuples): user state/goal at the end of dialog. same format as "goal". #### Example Instance <!-- info: Provide a JSON formatted example of a typical instance in the dataset. --> <!-- scope: periscope --> ``` {'dialog_id': '2303', 'final_goal': [['1', '餐馆', '人均消费', '50-100元', 'True'], ['1', '餐馆', '推荐菜', "['美食街']", 'True'], ['1', '餐馆', '名称', '鲜鱼口老字号美食街', 'True'], ['1', '餐馆', '营业时间', '周一至周日 10:00-22:00', 'True'], ['1', '餐馆', '周边景点', "['天安门广场', '前门大街', '恭王府', '故宫']", 'True'], ['2', '景点', '名称', '故宫', 'True'], ['2', '景点', '评分', '4.5分以上', 'True'], ['2', '景点', '地址', '北京市东城区景山前街4号', 'True'], ['2', '景点', '电话', '010-85007938', 'True'], ['3', '酒店', '名称', '桔子水晶酒店(北京安贞店)', 'True'], ['3', '酒店', '电话', '010-84273030', 'True']], 'gem_id': 'GEM-CrossWOZ-test-0', 'goal': [['1', '餐馆', '人均消费', '50-100元', 'False'], ['1', '餐馆', '推荐菜', "['美食街']", 'False'], ['1', '餐馆', '名称', '', 'False'], ['1', '餐馆', '营业时间', '', 'False'], ['1', '餐馆', '周边景点', '[]', 'False'], ['2', '景点', '名称', '出现在id=1的周边景点里', 'False'], ['2', '景点', '评分', '4.5分以上', 'False'], ['2', '景点', '地址', '', 'False'], ['2', '景点', '电话', '', 'False'], ['3', '酒店', '名称', '桔子水晶酒店(北京安贞店)', 'False'], ['3', '酒店', '电话', '', 'False']], 'messages': {'content': ['你好,我想吃美食街,帮我推荐一个人均消费在50-100元的餐馆,谢谢。', '为您推荐鲜鱼口老字号美食街,人均消费75元,有您想吃的美食街哦。', '营业时间是什么时间?', '周一至周日 10:00-22:00。', '他家周边有什么景点吗?', '有故宫, 前门大街, 恭王府, 天安门广场。', '哦,我想在这些附近景点里找一个4.5分以上的,有吗?', '故宫就是哦,4.7分。', '好的,电话和地址告诉我一下。', '010-85007938;北京市东城区景山前街4号。', '好的,麻烦你帮我查一下桔子水晶酒店(北京安贞店)电话呗。', '010-84273030。', '好的,收到,谢谢你!', '不客气。'], 'dialog_act': [[['General', 'greet', 'none', 'none'], ['General', 'thank', 'none', 'none'], ['Inform', '餐馆', '人均消费', '50-100元'], ['Inform', 
'餐馆', '推荐菜', '美食街'], ['Request', '餐馆', '名称', '']], [['Inform', '餐馆', '人均消费', '75元'], ['Inform', '餐馆', '名称', '鲜鱼口老字号美食街']], [['Request', '餐馆', '营业时间', '']], [['Inform', '餐馆', '营业时间', '周一至周日 10:00-22:00']], [['Request', '餐馆', '周边景点', '']], [['Inform', '餐馆', '周边景点', '前门大街'], ['Inform', '餐馆', '周边景点', '天安门广场'], ['Inform', '餐馆', '周边景点', '恭王府'], ['Inform', '餐馆', '周边景点', '故宫']], [['Inform', '景点', '评分', '4.5分以上'], ['Select', '景点', '源领域', '餐馆']], [['Inform', '景点', '名称', '故宫'], ['Inform', '景点', '评分', '4.7分']], [['Request', '景点', '地址', ''], ['Request', '景点', '电话', '']], [['Inform', '景点', '地址', '北京市东城区景山前街4号'], ['Inform', '景点', '电话', '010-85007938']], [['Inform', '酒店', '名称', '桔子水晶酒店(北京安贞店)'], ['Request', '酒店', '电话', '']], [['Inform', '酒店', '电话', '010-84273030']], [['General', 'thank', 'none', 'none']], [['General', 'welcome', 'none', 'none']]], 'role': ['usr', 'sys', 'usr', 'sys', 'usr', 'sys', 'usr', 'sys', 'usr', 'sys', 'usr', 'sys', 'usr', 'sys'], 'sys_state': [{'出租': {'selectedResults': [], '出发地': '', '目的地': ''}, '地铁': {'selectedResults': [], '出发地': '', '目的地': ''}, '景点': {'selectedResults': [], '名称': '', '周边景点': '', '周边酒店': '', '周边餐馆': '', '游玩时间': '', '评分': '', '门票': ''}, '酒店': {'selectedResults': [], '价格': '', '名称': '', '周边景点': '', '周边酒店': '', '周边餐馆': '', '评分': '', '酒店类型': '', '酒店设施': ''}, '餐馆': {'selectedResults': [], '人均消费': '', '名称': '', '周边景点': '', '周边酒店': '', '周边餐馆': '', '推荐菜': '', '评分': ''}}, {'出租': {'selectedResults': [], '出发地': '', '目的地': ''}, '地铁': {'selectedResults': [], '出发地': '', '目的地': ''}, '景点': {'selectedResults': [], '名称': '', '周边景点': '', '周边酒店': '', '周边餐馆': '', '游玩时间': '', '评分': '', '门票': ''}, '酒店': {'selectedResults': [], '价格': '', '名称': '', '周边景点': '', '周边酒店': '', '周边餐馆': '', '评分': '', '酒店类型': '', '酒店设施': ''}, '餐馆': {'selectedResults': ['鲜鱼口老字号美食街'], '人均消费': '50-100元', '名称': '', '周边景点': '', '周边酒店': '', '周边餐馆': '', '推荐菜': '美食街', '评分': ''}}, {'出租': {'selectedResults': [], '出发地': '', '目的地': ''}, '地铁': {'selectedResults': [], '出发地': '', '目的地': ''}, '景点': 
{'selectedResults': [], '名称': '', '周边景点': '', '周边酒店': '', '周边餐馆': '', '游玩时间': '', '评分': '', '门票': ''}, '酒店': {'selectedResults': [], '价格': '', '名称': '', '周边景点': '', '周边酒店': '', '周边餐馆': '', '评分': '', '酒店类型': '', '酒店设施': ''}, '餐馆': {'selectedResults': [], '人均消费': '', '名称': '', '周边景点': '', '周边酒店': '', '周边餐馆': '', '推荐菜': '', '评分': ''}}, {'出租': {'selectedResults': [], '出发地': '', '目的地': ''}, '地铁': {'selectedResults': [], '出发地': '', '目的地': ''}, '景点': {'selectedResults': [], '名称': '', '周边景点': '', '周边酒店': '', '周边餐馆': '', '游玩时间': '', '评分': '', '门票': ''}, '酒店': {'selectedResults': [], '价格': '', '名称': '', '周边景点': '', '周边酒店': '', '周边餐馆': '', '评分': '', '酒店类型': '', '酒店设施': ''}, '餐馆': {'selectedResults': ['鲜鱼口老字号美食街'], '人均消费': '50-100元', '名称': '', '周边景点': '', '周边酒店': '', '周边餐馆': '', '推荐菜': '美食街', '评分': ''}}, {'出租': {'selectedResults': [], '出发地': '', '目的地': ''}, '地铁': {'selectedResults': [], '出发地': '', '目的地': ''}, '景点': {'selectedResults': [], '名称': '', '周边景点': '', '周边酒店': '', '周边餐馆': '', '游玩时间': '', '评分': '', '门票': ''}, '酒店': {'selectedResults': [], '价格': '', '名称': '', '周边景点': '', '周边酒店': '', '周边餐馆': '', '评分': '', '酒店类型': '', '酒店设施': ''}, '餐馆': {'selectedResults': [], '人均消费': '', '名称': '', '周边景点': '', '周边酒店': '', '周边餐馆': '', '推荐菜': '', '评分': ''}}, {'出租': {'selectedResults': [], '出发地': '', '目的地': ''}, '地铁': {'selectedResults': [], '出发地': '', '目的地': ''}, '景点': {'selectedResults': [], '名称': '', '周边景点': '', '周边酒店': '', '周边餐馆': '', '游玩时间': '', '评分': '', '门票': ''}, '酒店': {'selectedResults': [], '价格': '', '名称': '', '周边景点': '', '周边酒店': '', '周边餐馆': '', '评分': '', '酒店类型': '', '酒店设施': ''}, '餐馆': {'selectedResults': ['鲜鱼口老字号美食街'], '人均消费': '50-100元', '名称': '', '周边景点': '', '周边酒店': '', '周边餐馆': '', '推荐菜': '美食街', '评分': ''}}, {'出租': {'selectedResults': [], '出发地': '', '目的地': ''}, '地铁': {'selectedResults': [], '出发地': '', '目的地': ''}, '景点': {'selectedResults': [], '名称': '', '周边景点': '', '周边酒店': '', '周边餐馆': '', '游玩时间': '', '评分': '', '门票': ''}, '酒店': {'selectedResults': [], '价格': '', '名称': '', '周边景点': '', 
'周边酒店': '', '周边餐馆': '', '评分': '', '酒店类型': '', '酒店设施': ''}, '餐馆': {'selectedResults': [], '人均消费': '', '名称': '', '周边景点': '', '周边酒店': '', '周边餐馆': '', '推荐菜': '', '评分': ''}}, {'出租': {'selectedResults': [], '出发地': '', '目的地': ''}, '地铁': {'selectedResults': [], '出发地': '', '目的地': ''}, '景点': {'selectedResults': ['故宫'], '名称': '故宫', '周边景点': '', '周边酒店': '', '周边餐馆': '', '游玩时间': '', '评分': '', '门票': ''}, '酒店': {'selectedResults': [], '价格': '', '名称': '', '周边景点': '', '周边酒店': '', '周边餐馆': '', '评分': '', '酒店类型': '', '酒店设施': ''}, '餐馆': {'selectedResults': [], '人均消费': '50-100元', '名称': '', '周边景点': '', '周边酒店': '', '周边餐馆': '', '推荐菜': '美食街', '评分': ''}}, {'出租': {'selectedResults': [], '出发地': '', '目的地': ''}, '地铁': {'selectedResults': [], '出发地': '', '目的地': ''}, '景点': {'selectedResults': [], '名称': '', '周边景点': '', '周边酒店': '', '周边餐馆': '', '游玩时间': '', '评分': '', '门票': ''}, '酒店': {'selectedResults': [], '价格': '', '名称': '', '周边景点': '', '周边酒店': '', '周边餐馆': '', '评分': '', '酒店类型': '', '酒店设施': ''}, '餐馆': {'selectedResults': [], '人均消费': '', '名称': '', '周边景点': '', '周边酒店': '', '周边餐馆': '', '推荐菜': '', '评分': ''}}, {'出租': {'selectedResults': [], '出发地': '', '目的地': ''}, '地铁': {'selectedResults': [], '出发地': '', '目的地': ''}, '景点': {'selectedResults': ['故宫'], '名称': '故宫', '周边景点': '', '周边酒店': '', '周边餐馆': '', '游玩时间': '', '评分': '', '门票': ''}, '酒店': {'selectedResults': [], '价格': '', '名称': '', '周边景点': '', '周边酒店': '', '周边餐馆': '', '评分': '', '酒店类型': '', '酒店设施': ''}, '餐馆': {'selectedResults': [], '人均消费': '50-100元', '名称': '', '周边景点': '', '周边酒店': '', '周边餐馆': '', '推荐菜': '美食街', '评分': ''}}, {'出租': {'selectedResults': [], '出发地': '', '目的地': ''}, '地铁': {'selectedResults': [], '出发地': '', '目的地': ''}, '景点': {'selectedResults': [], '名称': '', '周边景点': '', '周边酒店': '', '周边餐馆': '', '游玩时间': '', '评分': '', '门票': ''}, '酒店': {'selectedResults': [], '价格': '', '名称': '', '周边景点': '', '周边酒店': '', '周边餐馆': '', '评分': '', '酒店类型': '', '酒店设施': ''}, '餐馆': {'selectedResults': [], '人均消费': '', '名称': '', '周边景点': '', '周边酒店': '', '周边餐馆': '', '推荐菜': '', '评分': ''}}, 
{'出租': {'selectedResults': [], '出发地': '', '目的地': ''}, '地铁': {'selectedResults': [], '出发地': '', '目的地': ''}, '景点': {'selectedResults': [], '名称': '故宫', '周边景点': '', '周边酒店': '', '周边餐馆': '', '游玩时间': '', '评分': '', '门票': ''}, '酒店': {'selectedResults': ['桔子水晶酒店(北京安贞店)'], '价格': '', '名称': '桔子水晶酒店(北京安贞店)', '周边景点': '', '周边酒店': '', '周边餐馆': '', '评分': '', '酒店类型': '', '酒店设施': ''}, '餐馆': {'selectedResults': [], '人均消费': '50-100元', '名称': '', '周边景点': '', '周边酒店': '', '周边餐馆': '', '推荐菜': '美食街', '评分': ''}}, {'出租': {'selectedResults': [], '出发地': '', '目的地': ''}, '地铁': {'selectedResults': [], '出发地': '', '目的地': ''}, '景点': {'selectedResults': [], '名称': '', '周边景点': '', '周边酒店': '', '周边餐馆': '', '游玩时间': '', '评分': '', '门票': ''}, '酒店': {'selectedResults': [], '价格': '', '名称': '', '周边景点': '', '周边酒店': '', '周边餐馆': '', '评分': '', '酒店类型': '', '酒店设施': ''}, '餐馆': {'selectedResults': [], '人均消费': '', '名称': '', '周边景点': '', '周边酒店': '', '周边餐馆': '', '推荐菜': '', '评分': ''}}, {'出租': {'selectedResults': [], '出发地': '', '目的地': ''}, '地铁': {'selectedResults': [], '出发地': '', '目的地': ''}, '景点': {'selectedResults': [], '名称': '故宫', '周边景点': '', '周边酒店': '', '周边餐馆': '', '游玩时间': '', '评分': '', '门票': ''}, '酒店': {'selectedResults': [], '价格': '', '名称': '桔子水晶酒店(北京安贞店)', '周边景点': '', '周边酒店': '', '周边餐馆': '', '评分': '', '酒店类型': '', '酒店设施': ''}, '餐馆': {'selectedResults': [], '人均消费': '50-100元', '名称': '', '周边景点': '', '周边酒店': '', '周边餐馆': '', '推荐菜': '美食街', '评分': ''}}], 'sys_state_init': [{'出租': {'selectedResults': [], '出发地': '', '目的地': ''}, '地铁': {'selectedResults': [], '出发地': '', '目的地': ''}, '景点': {'selectedResults': [], '名称': '', '周边景点': '', '周边酒店': '', '周边餐馆': '', '游玩时间': '', '评分': '', '门票': ''}, '酒店': {'selectedResults': [], '价格': '', '名称': '', '周边景点': '', '周边酒店': '', '周边餐馆': '', '评分': '', '酒店类型': '', '酒店设施': ''}, '餐馆': {'selectedResults': [], '人均消费': '', '名称': '', '周边景点': '', '周边酒店': '', '周边餐馆': '', '推荐菜': '', '评分': ''}}, {'出租': {'selectedResults': [], '出发地': '', '目的地': ''}, '地铁': {'selectedResults': [], '出发地': '', '目的地': ''}, '景点': 
{'selectedResults': [], '名称': '', '周边景点': '', '周边酒店': '', '周边餐馆': '', '游玩时间': '', '评分': '', '门票': ''}, '酒店': {'selectedResults': [], '价格': '', '名称': '', '周边景点': '', '周边酒店': '', '周边餐馆': '', '评分': '', '酒店类型': '', '酒店设施': ''}, '餐馆': {'selectedResults': ['鲜鱼口老字号美食街'], '人均消费': '50-100元', '名称': '', '周边景点': '', '周边酒店': '', '周边餐馆': '', '推荐菜': '美食街', '评分': ''}}, {'出租': {'selectedResults': [], '出发地': '', '目的地': ''}, '地铁': {'selectedResults': [], '出发地': '', '目的地': ''}, '景点': {'selectedResults': [], '名称': '', '周边景点': '', '周边酒店': '', '周边餐馆': '', '游玩时间': '', '评分': '', '门票': ''}, '酒店': {'selectedResults': [], '价格': '', '名称': '', '周边景点': '', '周边酒店': '', '周边餐馆': '', '评分': '', '酒店类型': '', '酒店设施': ''}, '餐馆': {'selectedResults': [], '人均消费': '', '名称': '', '周边景点': '', '周边酒店': '', '周边餐馆': '', '推荐菜': '', '评分': ''}}, {'出租': {'selectedResults': [], '出发地': '', '目的地': ''}, '地铁': {'selectedResults': [], '出发地': '', '目的地': ''}, '景点': {'selectedResults': [], '名称': '', '周边景点': '', '周边酒店': '', '周边餐馆': '', '游玩时间': '', '评分': '', '门票': ''}, '酒店': {'selectedResults': [], '价格': '', '名称': '', '周边景点': '', '周边酒店': '', '周边餐馆': '', '评分': '', '酒店类型': '', '酒店设施': ''}, '餐馆': {'selectedResults': ['鲜鱼口老字号美食街'], '人均消费': '50-100元', '名称': '', '周边景点': '', '周边酒店': '', '周边餐馆': '', '推荐菜': '美食街', '评分': ''}}, {'出租': {'selectedResults': [], '出发地': '', '目的地': ''}, '地铁': {'selectedResults': [], '出发地': '', '目的地': ''}, '景点': {'selectedResults': [], '名称': '', '周边景点': '', '周边酒店': '', '周边餐馆': '', '游玩时间': '', '评分': '', '门票': ''}, '酒店': {'selectedResults': [], '价格': '', '名称': '', '周边景点': '', '周边酒店': '', '周边餐馆': '', '评分': '', '酒店类型': '', '酒店设施': ''}, '餐馆': {'selectedResults': [], '人均消费': '', '名称': '', '周边景点': '', '周边酒店': '', '周边餐馆': '', '推荐菜': '', '评分': ''}}, {'出租': {'selectedResults': [], '出发地': '', '目的地': ''}, '地铁': {'selectedResults': [], '出发地': '', '目的地': ''}, '景点': {'selectedResults': [], '名称': '', '周边景点': '', '周边酒店': '', '周边餐馆': '', '游玩时间': '', '评分': '', '门票': ''}, '酒店': {'selectedResults': [], '价格': '', '名称': '', '周边景点': '', 
'周边酒店': '', '周边餐馆': '', '评分': '', '酒店类型': '', '酒店设施': ''}, '餐馆': {'selectedResults': ['鲜鱼口老字号美食街'], '人均消费': '50-100元', '名称': '', '周边景点': '', '周边酒店': '', '周边餐馆': '', '推荐菜': '美食街', '评分': ''}}, {'出租': {'selectedResults': [], '出发地': '', '目的地': ''}, '地铁': {'selectedResults': [], '出发地': '', '目的地': ''}, '景点': {'selectedResults': [], '名称': '', '周边景点': '', '周边酒店': '', '周边餐馆': '', '游玩时间': '', '评分': '', '门票': ''}, '酒店': {'selectedResults': [], '价格': '', '名称': '', '周边景点': '', '周边酒店': '', '周边餐馆': '', '评分': '', '酒店类型': '', '酒店设施': ''}, '餐馆': {'selectedResults': [], '人均消费': '', '名称': '', '周边景点': '', '周边酒店': '', '周边餐馆': '', '推荐菜': '', '评分': ''}}, {'出租': {'selectedResults': [], '出发地': '', '目的地': ''}, '地铁': {'selectedResults': [], '出发地': '', '目的地': ''}, '景点': {'selectedResults': ['故宫'], '名称': '故宫', '周边景点': '', '周边酒店': '', '周边餐馆': '', '游玩时间': '', '评分': '', '门票': ''}, '酒店': {'selectedResults': [], '价格': '', '名称': '', '周边景点': '', '周边酒店': '', '周边餐馆': '', '评分': '', '酒店类型': '', '酒店设施': ''}, '餐馆': {'selectedResults': ['鲜鱼口老字号美食街'], '人均消费': '50-100元', '名称': '', '周边景点': '', '周边酒店': '', '周边餐馆': '', '推荐菜': '美食街', '评分': ''}}, {'出租': {'selectedResults': [], '出发地': '', '目的地': ''}, '地铁': {'selectedResults': [], '出发地': '', '目的地': ''}, '景点': {'selectedResults': [], '名称': '', '周边景点': '', '周边酒店': '', '周边餐馆': '', '游玩时间': '', '评分': '', '门票': ''}, '酒店': {'selectedResults': [], '价格': '', '名称': '', '周边景点': '', '周边酒店': '', '周边餐馆': '', '评分': '', '酒店类型': '', '酒店设施': ''}, '餐馆': {'selectedResults': [], '人均消费': '', '名称': '', '周边景点': '', '周边酒店': '', '周边餐馆': '', '推荐菜': '', '评分': ''}}, {'出租': {'selectedResults': [], '出发地': '', '目的地': ''}, '地铁': {'selectedResults': [], '出发地': '', '目的地': ''}, '景点': {'selectedResults': ['故宫'], '名称': '故宫', '周边景点': '', '周边酒店': '', '周边餐馆': '', '游玩时间': '', '评分': '', '门票': ''}, '酒店': {'selectedResults': [], '价格': '', '名称': '', '周边景点': '', '周边酒店': '', '周边餐馆': '', '评分': '', '酒店类型': '', '酒店设施': ''}, '餐馆': {'selectedResults': [], '人均消费': '50-100元', '名称': '', '周边景点': '', '周边酒店': '', '周边餐馆': '', 
'推荐菜': '美食街', '评分': ''}}, {'出租': {'selectedResults': [], '出发地': '', '目的地': ''}, '地铁': {'selectedResults': [], '出发地': '', '目的地': ''}, '景点': {'selectedResults': [], '名称': '', '周边景点': '', '周边酒店': '', '周边餐馆': '', '游玩时间': '', '评分': '', '门票': ''}, '酒店': {'selectedResults': [], '价格': '', '名称': '', '周边景点': '', '周边酒店': '', '周边餐馆': '', '评分': '', '酒店类型': '', '酒店设施': ''}, '餐馆': {'selectedResults': [], '人均消费': '', '名称': '', '周边景点': '', '周边酒店': '', '周边餐馆': '', '推荐菜': '', '评分': ''}}, {'出租': {'selectedResults': [], '出发地': '', '目的地': ''}, '地铁': {'selectedResults': [], '出发地': '', '目的地': ''}, '景点': {'selectedResults': ['故宫'], '名称': '故宫', '周边景点': '', '周边酒店': '', '周边餐馆': '', '游玩时间': '', '评分': '', '门票': ''}, '酒店': {'selectedResults': ['桔子水晶酒店(北京安贞店)'], '价格': '', '名称': '桔子水晶酒店(北京安贞店)', '周边景点': '', '周边酒店': '', '周边餐馆': '', '评分': '', '酒店类型': '', '酒店设施': ''}, '餐馆': {'selectedResults': [], '人均消费': '50-100元', '名称': '', '周边景点': '', '周边酒店': '', '周边餐馆': '', '推荐菜': '美食街', '评分': ''}}, {'出租': {'selectedResults': [], '出发地': '', '目的地': ''}, '地铁': {'selectedResults': [], '出发地': '', '目的地': ''}, '景点': {'selectedResults': [], '名称': '', '周边景点': '', '周边酒店': '', '周边餐馆': '', '游玩时间': '', '评分': '', '门票': ''}, '酒店': {'selectedResults': [], '价格': '', '名称': '', '周边景点': '', '周边酒店': '', '周边餐馆': '', '评分': '', '酒店类型': '', '酒店设施': ''}, '餐馆': {'selectedResults': [], '人均消费': '', '名称': '', '周边景点': '', '周边酒店': '', '周边餐馆': '', '推荐菜': '', '评分': ''}}, {'出租': {'selectedResults': [], '出发地': '', '目的地': ''}, '地铁': {'selectedResults': [], '出发地': '', '目的地': ''}, '景点': {'selectedResults': [], '名称': '故宫', '周边景点': '', '周边酒店': '', '周边餐馆': '', '游玩时间': '', '评分': '', '门票': ''}, '酒店': {'selectedResults': ['桔子水晶酒店(北京安贞店)'], '价格': '', '名称': '桔子水晶酒店(北京安贞店)', '周边景点': '', '周边酒店': '', '周边餐馆': '', '评分': '', '酒店类型': '', '酒店设施': ''}, '餐馆': {'selectedResults': [], '人均消费': '50-100元', '名称': '', '周边景点': '', '周边酒店': '', '周边餐馆': '', '推荐菜': '美食街', '评分': ''}}], 'user_state': [[['1', '餐馆', '人均消费', '50-100元', 'True'], ['1', '餐馆', '推荐菜', "['美食街']", 'True'], 
['1', '餐馆', '名称', '', 'True'], ['1', '餐馆', '营业时间', '', 'False'], ['1', '餐馆', '周边景点', '[]', 'False'], ['2', '景点', '名称', '出现在id=1的周边景点里', 'False'], ['2', '景点', '评分', '4.5分以上', 'False'], ['2', '景点', '地址', '', 'False'], ['2', '景点', '电话', '', 'False'], ['3', '酒店', '名称', '桔子水晶酒店(北京安贞店)', 'False'], ['3', '酒店', '电话', '', 'False']], [], [['1', '餐馆', '人均消费', '50-100元', 'True'], ['1', '餐馆', '推荐菜', "['美食街']", 'True'], ['1', '餐馆', '名称', '鲜鱼口老字号美食街', 'True'], ['1', '餐馆', '营业时间', '', 'True'], ['1', '餐馆', '周边景点', '[]', 'False'], ['2', '景点', '名称', '出现在id=1的周边景点里', 'False'], ['2', '景点', '评分', '4.5分以上', 'False'], ['2', '景点', '地址', '', 'False'], ['2', '景点', '电话', '', 'False'], ['3', '酒店', '名称', '桔子水晶酒店(北京安贞店)', 'False'], ['3', '酒店', '电话', '', 'False']], [], [['1', '餐馆', '人均消费', '50-100元', 'True'], ['1', '餐馆', '推荐菜', "['美食街']", 'True'], ['1', '餐馆', '名称', '鲜鱼口老字号美食街', 'True'], ['1', '餐馆', '营业时间', '周一至周日 10:00-22:00', 'True'], ['1', '餐馆', '周边景点', '[]', 'True'], ['2', '景点', '名称', '出现在id=1的周边景点里', 'False'], ['2', '景点', '评分', '4.5分以上', 'False'], ['2', '景点', '地址', '', 'False'], ['2', '景点', '电话', '', 'False'], ['3', '酒店', '名称', '桔子水晶酒店(北京安贞店)', 'False'], ['3', '酒店', '电话', '', 'False']], [], [['1', '餐馆', '人均消费', '50-100元', 'True'], ['1', '餐馆', '推荐菜', "['美食街']", 'True'], ['1', '餐馆', '名称', '鲜鱼口老字号美食街', 'True'], ['1', '餐馆', '营业时间', '周一至周日 10:00-22:00', 'True'], ['1', '餐馆', '周边景点', "['天安门广场', '前门大街', '恭王府', '故宫']", 'True'], ['2', '景点', '名称', '出现在id=1的周边景点里', 'True'], ['2', '景点', '评分', '4.5分以上', 'True'], ['2', '景点', '地址', '', 'False'], ['2', '景点', '电话', '', 'False'], ['3', '酒店', '名称', '桔子水晶酒店(北京安贞店)', 'False'], ['3', '酒店', '电话', '', 'False']], [], [['1', '餐馆', '人均消费', '50-100元', 'True'], ['1', '餐馆', '推荐菜', "['美食街']", 'True'], ['1', '餐馆', '名称', '鲜鱼口老字号美食街', 'True'], ['1', '餐馆', '营业时间', '周一至周日 10:00-22:00', 'True'], ['1', '餐馆', '周边景点', "['天安门广场', '前门大街', '恭王府', '故宫']", 'True'], ['2', '景点', '名称', '故宫', 'True'], ['2', '景点', '评分', '4.5分以上', 'True'], ['2', '景点', '地址', '', 'True'], ['2', '景点', '电话', '', 
'True'], ['3', '酒店', '名称', '桔子水晶酒店(北京安贞店)', 'False'], ['3', '酒店', '电话', '', 'False']], [], [['1', '餐馆', '人均消费', '50-100元', 'True'], ['1', '餐馆', '推荐菜', "['美食街']", 'True'], ['1', '餐馆', '名称', '鲜鱼口老字号美食街', 'True'], ['1', '餐馆', '营业时间', '周一至周日 10:00-22:00', 'True'], ['1', '餐馆', '周边景点', "['天安门广场', '前门大街', '恭王府', '故宫']", 'True'], ['2', '景点', '名称', '故宫', 'True'], ['2', '景点', '评分', '4.5分以上', 'True'], ['2', '景点', '地址', '北京市东城区景山前街4号', 'True'], ['2', '景点', '电话', '010-85007938', 'True'], ['3', '酒店', '名称', '桔子水晶酒店(北京安贞店)', 'True'], ['3', '酒店', '电话', '', 'True']], [], [['1', '餐馆', '人均消费', '50-100元', 'True'], ['1', '餐馆', '推荐菜', "['美食街']", 'True'], ['1', '餐馆', '名称', '鲜鱼口老字号美食街', 'True'], ['1', '餐馆', '营业时间', '周一至周日 10:00-22:00', 'True'], ['1', '餐馆', '周边景点', "['天安门广场', '前门大街', '恭王府', '故宫']", 'True'], ['2', '景点', '名称', '故宫', 'True'], ['2', '景点', '评分', '4.5分以上', 'True'], ['2', '景点', '地址', '北京市东城区景山前街4号', 'True'], ['2', '景点', '电话', '010-85007938', 'True'], ['3', '酒店', '名称', '桔子水晶酒店(北京安贞店)', 'True'], ['3', '酒店', '电话', '010-84273030', 'True']], []]}, 'sys_id': 96, 'task description': ['你要去一个餐馆(id=1)用餐。你希望餐馆的人均消费是50-100元的。你想吃的菜肴是美食街。你想知道这个餐馆的名称、营业时间、周边景点。', '你要去id=1附近的景点(id=2)游玩。你希望景点的评分是4.5分以上。你想知道这个景点的地址、电话。', '你要去名叫桔子水晶酒店(北京安贞店)的酒店(id=3)住宿。你想知道这个酒店的电话。'], 'type': '不独立多领域', 'usr_id': 97} ``` #### Data Splits <!-- info: Describe and name the splits in the dataset if there are more than one. --> <!-- scope: periscope --> | Split | Train | Valid | Test | | --------------------- | ------ | ----- | ----- | | \# dialogues | 5,012 | 500 | 500 | | \# Turns (utterances) | 84,692 | 8,458 | 8,476 | | Vocab | 12,502 | 5,202 | 5,143 | | Avg. sub-goals | 3.24 | 3.26 | 3.26 | | Avg. semantic tuples | 14.8 | 14.9 | 15.0 | | Avg. turns | 16.9 | 16.9 | 17.0 | | Avg. tokens per turn | 16.3 | 16.3 | 16.2 | ## Dataset in GEM ### Rationale for Inclusion in GEM #### Why is the Dataset in GEM? <!-- info: What does this dataset contribute toward better generation evaluation and why is it part of GEM? 
--> <!-- scope: microscope --> CrossWOZ is the first large-scale Chinese Cross-Domain Wizard-of-Oz task-oriented dataset. #### Similar Datasets <!-- info: Do other datasets for the high-level task exist? --> <!-- scope: telescope --> yes #### Unique Language Coverage <!-- info: Does this dataset cover other languages than other datasets for the same task? --> <!-- scope: periscope --> no #### Difference from other GEM datasets <!-- info: What else sets this dataset apart from other similar datasets in GEM? --> <!-- scope: microscope --> The corpus contains rich annotation of dialogue states and dialogue acts on both the user and system sides, which can be used in a wide range of tasks. #### Ability that the Dataset measures <!-- info: What aspect of model ability can be measured with this dataset? --> <!-- scope: periscope --> Dialog understanding, dialog policy learning ### GEM-Specific Curation #### Modified for GEM? <!-- info: Has the GEM version of the dataset been modified in any way (data, processing, splits) from the original curated data? --> <!-- scope: telescope --> yes #### GEM Modifications <!-- info: What changes have been made to the original dataset? --> <!-- scope: periscope --> `other` #### Modification Details <!-- info: For each of these changes, describe them in more detail and provide the intended purpose of the modification --> <!-- scope: microscope --> To adapt to Hugging Face Datasets, we 1) separate the user annotator's ID from the system annotator's ID; and 2) convert the data type in goal/user state to string. #### Additional Splits? <!-- info: Does GEM provide additional splits to the dataset? --> <!-- scope: telescope --> no ### Getting Started with the Task #### Pointers to Resources <!-- info: Getting started with in-depth research on the task. Add relevant pointers to resources that researchers can consult when they want to get started digging deeper into the task. 
--> <!-- scope: microscope --> [Code](https://github.com/thu-coai/Convlab-2) #### Technical Terms <!-- info: Technical terms used in this card and the dataset and their definitions --> <!-- scope: microscope --> According to the type of user goal, we group the dialogues in the training set into five categories: - S: 417 dialogues have only one sub-goal in HAR (hotel, attraction, restaurant) domains. - M: 1,573 dialogues have multiple sub-goals (2-3) in HAR domains. However, these sub-goals do not have cross-domain informable slots. - M+T: 691 dialogues have multiple sub-goals in HAR domains and at least one sub-goal in the metro or taxi domain (3-5 sub-goals). The sub-goals in HAR domains do not have cross-domain informable slots. - CM: 1,759 dialogues have multiple sub-goals (2-5) in HAR domains with cross-domain informable slots. - CM+T: 572 dialogues have multiple sub-goals in HAR domains with cross-domain informable slots and at least one sub-goal in the metro or taxi domain (3-5 sub-goals). ## Previous Results ### Previous Results #### Measured Model Abilities <!-- info: What aspect of model ability can be measured with this dataset? --> <!-- scope: telescope --> Dialog understanding, dialog policy learning #### Metrics <!-- info: What metrics are typically used for this task? --> <!-- scope: periscope --> `BLEU` #### Proposed Evaluation <!-- info: List and describe the purpose of the metrics and evaluation methodology (including human evaluation) that the dataset creators used when introducing this task. --> <!-- scope: microscope --> BLEU evaluates the generation quality. #### Previous results available? <!-- info: Are previous results available? --> <!-- scope: telescope --> yes #### Other Evaluation Approaches <!-- info: What evaluation approaches have others used? --> <!-- scope: periscope --> Inform rate: how many entities in the gold response appear in the generated response. #### Relevant Previous Results <!-- info: What are the most relevant previous results for this task/dataset? 
--> <!-- scope: microscope --> BLEU on the MultiWOZ dataset. ## Dataset Curation ### Original Curation #### Original Curation Rationale <!-- info: Original curation rationale --> <!-- scope: telescope --> Gather human-to-human dialog in Chinese. #### Communicative Goal <!-- info: What was the communicative goal? --> <!-- scope: periscope --> Generate a response according to the dialog context and database search results. #### Sourced from Different Sources <!-- info: Is the dataset aggregated from different data sources? --> <!-- scope: telescope --> no ### Language Data #### How was Language Data Obtained? <!-- info: How was the language data obtained? --> <!-- scope: telescope --> `Crowdsourced` #### Where was it crowdsourced? <!-- info: If crowdsourced, where from? --> <!-- scope: periscope --> `Participatory experiment` #### Language Producers <!-- info: What further information do we have on the language producers? --> <!-- scope: microscope --> A usr/sys ID indicates the creator of different data points. #### Topics Covered <!-- info: Does the language in the dataset focus on specific topics? How would you describe them? --> <!-- scope: periscope --> domains: attraction, hotel, restaurant, metro, taxi #### Data Validation <!-- info: Was the text validated by a different worker or a data curator? --> <!-- scope: telescope --> validated by a data curator #### Was Data Filtered? <!-- info: Were text instances selected or filtered? --> <!-- scope: telescope --> not filtered ### Structured Annotations #### Additional Annotations? <!-- quick --> <!-- info: Does the dataset have additional annotations for each instance? --> <!-- scope: telescope --> none #### Annotation Service? <!-- info: Was an annotation service used? --> <!-- scope: telescope --> no ### Consent #### Any Consent Policy? <!-- info: Was there a consent policy involved when gathering the data? --> <!-- scope: telescope --> yes #### Consent Policy Details <!-- info: What was the consent policy? 
--> <!-- scope: microscope --> Annotators agreed to the use of the dataset for research purposes. #### Other Consented Downstream Use <!-- info: What other downstream uses of the data did the original data creators and the data curators consent to? --> <!-- scope: microscope --> Any ### Private Identifying Information (PII) #### Contains PII? <!-- quick --> <!-- info: Does the source language data likely contain Personal Identifying Information about the data creators or subjects? --> <!-- scope: telescope --> unlikely #### Categories of PII <!-- info: What categories of PII are present or suspected in the data? --> <!-- scope: periscope --> `generic PII` #### Any PII Identification? <!-- info: Did the curators use any automatic/manual method to identify PII in the dataset? --> <!-- scope: periscope --> no identification ### Maintenance #### Any Maintenance Plan? <!-- info: Does the original dataset have a maintenance plan? --> <!-- scope: telescope --> no ## Broader Social Context ### Previous Work on the Social Impact of the Dataset #### Usage of Models based on the Data <!-- info: Are you aware of cases where models trained on the task featured in this dataset or related tasks have been used in automated systems? --> <!-- scope: telescope --> no ### Impact on Under-Served Communities #### Addresses needs of underserved Communities? <!-- info: Does this dataset address the needs of communities that are traditionally underserved in language technology, and particularly language generation technology? Communities may be underserved for example because their language, language variety, or social or geographical context is underrepresented in NLP and NLG resources (datasets and models). --> <!-- scope: telescope --> yes #### Details on how Dataset Addresses the Needs <!-- info: Describe how this dataset addresses the needs of underserved communities. --> <!-- scope: microscope --> CrossWOZ is the first large-scale Chinese Cross-Domain Wizard-of-Oz task-oriented dataset. 
The corpus contains rich annotation of dialogue states and dialogue acts on both the user and system sides, which can be used in a wide range of tasks. ### Discussion of Biases #### Any Documented Social Biases? <!-- info: Are there documented social biases in the dataset? Biases in this context are variations in the ways members of different social categories are represented that can have harmful downstream consequences for members of the more disadvantaged group. --> <!-- scope: telescope --> no #### Are the Language Producers Representative of the Language? <!-- info: Does the distribution of language producers in the dataset accurately represent the full distribution of speakers of the language world-wide? If not, how does it differ? --> <!-- scope: periscope --> Yes ## Considerations for Using the Data ### PII Risks and Liability #### Potential PII Risk <!-- info: Considering your answers to the PII part of the Data Curation Section, describe any potential privacy risks to the data subjects and creators when using the dataset. --> <!-- scope: microscope --> No ### Licenses #### Copyright Restrictions on the Dataset <!-- info: Based on your answers in the Intended Use part of the Data Overview Section, which of the following best describes the copyright and licensing status of the dataset? --> <!-- scope: periscope --> `open license - commercial use allowed` #### Copyright Restrictions on the Language Data <!-- info: Based on your answers in the Language part of the Data Curation Section, which of the following best describes the copyright and licensing status of the underlying language data? --> <!-- scope: periscope --> `open license - commercial use allowed` ### Known Technical Limitations #### Technical Limitations <!-- info: Describe any known technical limitations, such as spurious correlations, train/test overlap, annotation biases, or mis-annotations, and cite the works that first identified these limitations when possible. 
--> <!-- scope: microscope --> No #### Unsuited Applications <!-- info: When using a model trained on this dataset in a setting where users or the public may interact with its predictions, what are some pitfalls to look out for? In particular, describe some applications of the general task featured in this dataset that its curation or properties make it less suitable for. --> <!-- scope: microscope --> The model may not handle unknown values in the dialog. #### Discouraged Use Cases <!-- info: What are some discouraged use cases of a model trained to maximize the proposed metrics on this dataset? In particular, think about settings where decisions made by a model that performs reasonably well on the metric may still have strong negative consequences for users or members of the public. --> <!-- scope: microscope --> Responses can be diverse, which is not captured by BLEU.
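As a worked illustration of the `user_state` annotation in the example instance above — each semantic tuple is `[sub-goal id, domain, slot, value, filled-flag]`, with the flag stored as the *string* `'True'`/`'False'` — here is a minimal sketch (not part of the dataset tooling) that counts how many tuples of a state have been filled:

```python
# Sketch: summarize one CrossWOZ user_state (a list of semantic tuples).
# Tuple layout, as in the example instance above:
#   [sub-goal id, domain, slot, value, filled-flag]
# where the flag is the string 'True' or 'False'.
def state_progress(user_state):
    """Return (filled, total) counts over the semantic tuples."""
    filled = sum(1 for _, _, _, _, flag in user_state if flag == 'True')
    return filled, len(user_state)

# Three tuples abridged from the first user_state in the example:
state = [
    ['1', '餐馆', '人均消费', '50-100元', 'True'],
    ['1', '餐馆', '名称', '', 'True'],
    ['3', '酒店', '电话', '', 'False'],
]
print(state_progress(state))  # (2, 3)
```

This kind of progress count is one simple way to track how far a dialogue has advanced toward the user goal.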
43,055
[ [ -0.03167724609375, -0.0693359375, 0.01279449462890625, -0.003467559814453125, 0.00391387939453125, -0.0018863677978515625, -0.0202484130859375, -0.02734375, 0.011962890625, 0.04241943359375, -0.0709228515625, -0.054473876953125, -0.0247802734375, -0.00727462...
GEM-submissions/submission-scores
2023-06-08T23:06:02.000Z
[ "region:us" ]
GEM-submissions
null
null
0
86
2022-03-02T23:29:22
Entry not found
15
[ [ -0.0213775634765625, -0.01497650146484375, 0.05718994140625, 0.02880859375, -0.0350341796875, 0.046478271484375, 0.052490234375, 0.00507354736328125, 0.051361083984375, 0.0170135498046875, -0.052093505859375, -0.01497650146484375, -0.0604248046875, 0.0379028...
Gabriel/quora_swe
2022-10-22T09:39:38.000Z
[ "task_categories:text-retrieval", "task_categories:text-classification", "task_ids:semantic-similarity-classification", "size_categories:10K<n<100K", "language:sv", "license:mit", "question-pairing", "semantic-search", "region:us" ]
Gabriel
null
null
0
86
2022-03-02T23:29:22
--- language: - sv license: - mit size_categories: - 10K<n<100K task_categories: - text-retrieval - text-classification task_ids: - semantic-similarity-classification tags: - question-pairing - semantic-search --- # Dataset Card for "quora_swe" The dataset quora_swe is a subset of the automatically translated (NMT) Swedish Semantic Textual Similarity dataset: quora-deduplicates.
386
[ [ -0.02679443359375, -0.03497314453125, 0.001132965087890625, -0.001941680908203125, -0.0587158203125, -0.007526397705078125, 0.0241851806640625, -0.00786590576171875, 0.03753662109375, 0.044586181640625, -0.072998046875, -0.04620361328125, -0.0245513916015625, ...
JesseParvess/book_snippets_asr
2021-12-09T10:36:16.000Z
[ "region:us" ]
JesseParvess
null
null
0
86
2022-03-02T23:29:22
Entry not found
15
[ [ -0.0213775634765625, -0.01497650146484375, 0.05718994140625, 0.02880859375, -0.0350341796875, 0.046478271484375, 0.052490234375, 0.00507354736328125, 0.051361083984375, 0.0170135498046875, -0.052093505859375, -0.01497650146484375, -0.0604248046875, 0.0379028...
Jiejie/asr_book_lm
2022-02-26T10:27:32.000Z
[ "region:us" ]
Jiejie
null
null
0
86
2022-03-02T23:29:22
Entry not found
15
[ [ -0.0213775634765625, -0.01497650146484375, 0.05718994140625, 0.02880859375, -0.0350341796875, 0.046478271484375, 0.052490234375, 0.00507354736328125, 0.051361083984375, 0.0170135498046875, -0.052093505859375, -0.01497650146484375, -0.0604248046875, 0.0379028...
LysandreJik/demo1
2021-09-25T19:54:41.000Z
[ "region:us" ]
LysandreJik
null
null
0
86
2022-03-02T23:29:22
Entry not found
15
[ [ -0.0213775634765625, -0.01497650146484375, 0.05718994140625, 0.02880859375, -0.0350341796875, 0.046478271484375, 0.052490234375, 0.00507354736328125, 0.051361083984375, 0.0170135498046875, -0.052093505859375, -0.01497650146484375, -0.0604248046875, 0.0379028...
LysandreJik/demo2
2021-09-25T19:57:03.000Z
[ "region:us" ]
LysandreJik
null
null
0
86
2022-03-02T23:29:22
Entry not found
15
[ [ -0.0213775634765625, -0.01497650146484375, 0.05718994140625, 0.02880859375, -0.0350341796875, 0.046478271484375, 0.052490234375, 0.00507354736328125, 0.051361083984375, 0.0170135498046875, -0.052093505859375, -0.01497650146484375, -0.0604248046875, 0.0379028...
LysandreJik/demo3
2021-09-25T19:58:09.000Z
[ "region:us" ]
LysandreJik
null
null
0
86
2022-03-02T23:29:22
Entry not found
15
[ [ -0.021392822265625, -0.0149688720703125, 0.057220458984375, 0.0288238525390625, -0.03509521484375, 0.046539306640625, 0.052520751953125, 0.005046844482421875, 0.0513916015625, 0.016998291015625, -0.052093505859375, -0.01495361328125, -0.060333251953125, 0.03...
LysandreJik/pushed-to-hub
2021-10-07T22:33:54.000Z
[ "region:us" ]
LysandreJik
null
null
0
86
2022-03-02T23:29:22
Entry not found
15
[ [ -0.021392822265625, -0.0149688720703125, 0.057220458984375, 0.0288238525390625, -0.03509521484375, 0.046539306640625, 0.052520751953125, 0.005046844482421875, 0.0513916015625, 0.016998291015625, -0.052093505859375, -0.01495361328125, -0.060333251953125, 0.03...
LysandreJik/test-16336486877862
2021-10-07T23:18:09.000Z
[ "region:us" ]
LysandreJik
null
null
0
86
2022-03-02T23:29:22
Entry not found
15
[ [ -0.021392822265625, -0.0149688720703125, 0.057220458984375, 0.0288238525390625, -0.03509521484375, 0.046539306640625, 0.052520751953125, 0.005046844482421875, 0.0513916015625, 0.016998291015625, -0.052093505859375, -0.01495361328125, -0.060333251953125, 0.03...
LysandreJik/test-16340052972855
2021-10-12T02:21:38.000Z
[ "region:us" ]
LysandreJik
null
null
0
86
2022-03-02T23:29:22
Entry not found
15
[ [ -0.021392822265625, -0.0149688720703125, 0.057220458984375, 0.0288238525390625, -0.03509521484375, 0.046539306640625, 0.052520751953125, 0.005046844482421875, 0.0513916015625, 0.016998291015625, -0.052093505859375, -0.01495361328125, -0.060333251953125, 0.03...
NYTK/HuSST
2023-03-27T09:54:13.000Z
[ "task_categories:text-classification", "task_ids:sentiment-classification", "task_ids:sentiment-scoring", "task_ids:text-scoring", "annotations_creators:found", "language_creators:found", "language_creators:expert-generated", "multilinguality:monolingual", "size_categories:unknown", "source_datase...
NYTK
null
null
1
86
2022-03-02T23:29:22
--- annotations_creators: - found language_creators: - found - expert-generated language: - hu license: - bsd-2-clause multilinguality: - monolingual size_categories: - unknown source_datasets: - extended|other task_categories: - text-classification task_ids: - sentiment-classification - sentiment-scoring - text-scoring pretty_name: HuSST --- # Dataset Card for HuSST ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Language](#language) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** - **Repository:** [HuSST dataset](https://github.com/nytud/HuSST) - **Paper:** - **Leaderboard:** - **Point of Contact:** [lnnoemi](mailto:ligeti-nagy.noemi@nytud.hu) ### Dataset Summary This is the dataset card for the Hungarian version of the Stanford Sentiment Treebank. This dataset is also part of the Hungarian Language Understanding Evaluation Benchmark Kit [HuLU](hulu.nlp.nytud.hu). The corpus was created by translating and re-annotating the original SST (Socher et al., 2013). 
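The YAML header above lists both `sentiment-classification` and `sentiment-scoring` task ids. A minimal sketch of moving between the two views — the three-way label set comes from this card, but the signed-score mapping below is an illustrative assumption, not part of the dataset:

```python
# Illustrative only: map HuSST's three labels to a signed score.
# The labels ("negative", "neutral", "positive") are from the dataset card;
# the -1/0/+1 scale is an assumed convention for a scoring view.
LABELS = ("negative", "neutral", "positive")

def label_to_score(label: str) -> int:
    """Convert a HuSST label to -1, 0, or +1."""
    if label not in LABELS:
        raise ValueError(f"unknown label: {label}")
    return LABELS.index(label) - 1  # negative -> -1, neutral -> 0, positive -> +1

# Instance schema as shown in the Data Instances section of the card:
example = {"Sent_id": "dev_0", "Sent": "...", "Label": "neutral"}
print(label_to_score(example["Label"]))  # 0
```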
### Supported Tasks and Leaderboards 'sentiment classification' 'sentiment scoring' ### Language The BCP-47 code for Hungarian, the only represented language in this dataset, is hu-HU. ## Dataset Structure ### Data Instances For each instance, there is an id, a sentence and a sentiment label. An example: ``` { "Sent_id": "dev_0", "Sent": "Nos, a Jason elment Manhattanbe és a Pokolba kapcsán, azt hiszem, az elkerülhetetlen folytatások ötletlistájáról kihúzhatunk egy űrállomást 2455-ben (hé, ne lődd le a poént).", "Label": "neutral" } ``` ### Data Fields - Sent_id: unique id of the instances; - Sent: the sentence, translation of an instance of the SST dataset; - Label: "negative", "neutral", or "positive". ### Data Splits HuSST has 3 splits: *train*, *validation* and *test*. | Dataset split | Number of instances in the split | |---------------|----------------------------------| | train | 9344 | | validation | 1168 | | test | 1168 | The test data is distributed without the labels. To evaluate your model, please [contact us](mailto:ligeti-nagy.noemi@nytud.hu), or check [HuLU's website](hulu.nlp.nytud.hu) for an automatic evaluation (this feature is under construction at the moment). ## Dataset Creation ### Source Data #### Initial Data Collection and Normalization The data is a translation of the content of the SST dataset (only the whole sentences were used). Each sentence was translated by a human translator. Each translation was manually checked and further refined by another annotator. ### Annotations #### Annotation process The translated sentences were annotated by three human annotators with one of the following labels: negative, neutral and positive. Each sentence was then curated by a fourth annotator (the 'curator'). The final label is the decision of the curator based on the three labels of the annotators. #### Who are the annotators? The translators were native Hungarian speakers with English proficiency. 
The annotators were university students with some linguistic background. ## Additional Information ### Licensing Information ### Citation Information If you use this resource or any part of its documentation, please refer to: Ligeti-Nagy, N., Ferenczi, G., Héja, E., Jelencsik-Mátyus, K., Laki, L. J., Vadász, N., Yang, Z. Gy. and Vadász, T. (2022) HuLU: magyar nyelvű benchmark adatbázis kiépítése a neurális nyelvmodellek kiértékelése céljából [HuLU: Hungarian benchmark dataset to evaluate neural language models]. XVIII. Magyar Számítógépes Nyelvészeti Konferencia. pp. 431–446. ``` @inproceedings{ligetinagy2022hulu, title={HuLU: magyar nyelvű benchmark adatbázis kiépítése a neurális nyelvmodellek kiértékelése céljából}, author={Ligeti-Nagy, N. and Ferenczi, G. and Héja, E. and Jelencsik-Mátyus, K. and Laki, L. J. and Vadász, N. and Yang, Z. Gy. and Vadász, T.}, booktitle={XVIII. Magyar Számítógépes Nyelvészeti Konferencia}, year={2022}, pages = {431--446} } ``` and to: Socher et al. (2013), Recursive Deep Models for Semantic Compositionality Over a Sentiment Treebank. In: Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing. 1631--1642. ``` @inproceedings{socher-etal-2013-recursive, title = "Recursive Deep Models for Semantic Compositionality Over a Sentiment Treebank", author = "Socher, Richard and Perelygin, Alex and Wu, Jean and Chuang, Jason and Manning, Christopher D. and Ng, Andrew and Potts, Christopher", booktitle = "Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing", month = oct, year = "2013", address = "Seattle, Washington, USA", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/D13-1170", pages = "1631--1642", } ``` ### Contributions Thanks to [lnnoemi](https://github.com/lnnoemi) for adding this dataset.
6,082
[ [ -0.022491455078125, -0.0623779296875, 0.01422882080078125, 0.0130615234375, -0.0289154052734375, -0.0033473968505859375, -0.03741455078125, -0.0244598388671875, 0.0244598388671875, 0.02801513671875, -0.049560546875, -0.07452392578125, -0.04144287109375, 0.01...
SuperAI2-Machima/ThaiQA_LST20
2022-02-25T06:29:22.000Z
[ "language:thai", "language:th", "license:mit", "question-generation dataset", "qa dataset", "region:us" ]
SuperAI2-Machima
null
null
0
86
2022-03-02T23:29:22
--- tags: - question-generation dataset - qa dataset language: - thai - th datasets: - LST20 license: mit --- [SuperAI Engineer Season 2](https://superai.aiat.or.th/) , [Machima](https://machchima.superai.me/) Machima_ThaiQA_LST20 is a dataset of questions and answers extracted from articles in the LST20 dataset, yielding 7,642 question-answer pairs in total. It has 4 columns: context, question, answer, and status, as in the following example: context : ด.ต.ประสิทธิ์ ชาหอมชื่นอายุ 55 ปี ผบ.หมู่งาน ป.ตชด. 24 อุดรธานีถูกยิงด้วยอาวุธปืนอาก้าเข้าที่แขนซ้าย 3 นัดหน้าท้อง 1 นัดส.ต.อ.ประเสริฐ ใหญ่สูงเนินอายุ 35 ปี ผบ.หมู่กก. 1 ปส.2 บช.ปส. ถูกยิงเข้าที่แขนขวากระดูกแตกละเอียดร.ต.อ.ชวพล หมื่นโรจน์อายุ 32 ปีรอง สว.กก. 1 ปส. 2 บช.ปส. ถูกยิงเข้าที่แก้มและไหปลาร้าด้านขวา question : ผบ.หมู่งาน ป.ตชด. 24 อุดรธานี ถูกยิงด้วยอาวุธปืนอะไรเข้าที่แขนซ้าย 3 นัดหน้าท้อง answer : อาวุธปืนอาก้า status : 1 Among the 7,642 extracted question-answer pairs, some are correct and some are not, for example the answer does not match the question, or the answer already appears inside the question. The Machima team reviewed every pair and labeled each one as correct or incorrect, with 1 = correct and 0 = incorrect. Of the 7,642 pairs, 4,438 are correct and 3,204 are incorrect. You can load the data with the following code: ```python !pip install datasets -qq # for loading the dataset from datasets import load_dataset import pandas as pd dataset = load_dataset("SuperAI2-Machima/ThaiQA_LST20") train_df = pd.DataFrame(dataset['train']) train_df ```
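Since the `status` column marks whether an extracted pair is correct (1) or incorrect (0), a common first step is filtering to the verified pairs. A minimal sketch with plain Python dicts standing in for the loaded rows (the toy rows below are illustrative, not from the dataset):

```python
# Keep only question-answer pairs verified as correct (status == 1).
rows = [
    {"question": "q1", "answer": "a1", "status": 1},
    {"question": "q2", "answer": "a2", "status": 0},
    {"question": "q3", "answer": "a3", "status": 1},
]
correct = [r for r in rows if r["status"] == 1]
print(len(correct))  # 2
```

On the real data this filter should retain 4,438 of the 7,642 pairs.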
1,527
[ [ -0.029205322265625, -0.0242767333984375, -0.0002925395965576172, 0.03173828125, -0.035430908203125, -0.0240936279296875, 0.02618408203125, -0.00022304058074951172, 0.048431396484375, 0.0291290283203125, -0.0552978515625, 0.00691986083984375, -0.043975830078125, ...
abidlabs/crowdsourced-speech4
2022-01-21T16:26:22.000Z
[ "region:us" ]
abidlabs
null
null
0
86
2022-03-02T23:29:22
Entry not found
15
[ [ -0.021392822265625, -0.01494598388671875, 0.05718994140625, 0.028839111328125, -0.0350341796875, 0.046539306640625, 0.052490234375, 0.00507354736328125, 0.051361083984375, 0.01702880859375, -0.052093505859375, -0.01494598388671875, -0.06036376953125, 0.03790...
antoinegk/HealthChallenge_dataset
2022-01-19T18:21:42.000Z
[ "region:us" ]
antoinegk
null
null
0
86
2022-03-02T23:29:22
Entry not found
15
[ [ -0.021392822265625, -0.01494598388671875, 0.05718994140625, 0.028839111328125, -0.0350341796875, 0.046539306640625, 0.052490234375, 0.00507354736328125, 0.051361083984375, 0.01702880859375, -0.052093505859375, -0.01494598388671875, -0.06036376953125, 0.03790...
lmqg/qg_jaquad
2022-12-02T18:51:27.000Z
[ "task_categories:text-generation", "task_ids:language-modeling", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:SkelterLabsInc/JaQuAD", "language:ja", "license:cc-by-sa-3.0", "question-generation", "arxiv:2210.03992", "region:us" ]
lmqg
[JaQuAD](https://github.com/SkelterLabsInc/JaQuAD) dataset for question generation (QG) task. The test set of the original data is not publicly released, so we randomly sampled test questions from the training set.
@inproceedings{ushio-etal-2022-generative, title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration", author = "Ushio, Asahi and Alva-Manchego, Fernando and Camacho-Collados, Jose", booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2022", address = "Abu Dhabi, U.A.E.", publisher = "Association for Computational Linguistics", }
4
86
2022-03-02T23:29:22
--- license: cc-by-sa-3.0 pretty_name: JaQuAD for question generation language: ja multilinguality: monolingual size_categories: 10K<n<100K source_datasets: SkelterLabsInc/JaQuAD task_categories: - text-generation task_ids: - language-modeling tags: - question-generation --- # Dataset Card for "lmqg/qg_jaquad" ## Dataset Description - **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation) - **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992) - **Point of Contact:** [Asahi Ushio](http://asahiushio.com/) ### Dataset Summary This is a subset of [QG-Bench](https://github.com/asahi417/lm-question-generation/blob/master/QG_BENCH.md#datasets), a unified question generation benchmark proposed in ["Generative Language Models for Paragraph-Level Question Generation: A Unified Benchmark and Evaluation, EMNLP 2022 main conference"](https://arxiv.org/abs/2210.03992). This is the [JaQuAD](https://github.com/SkelterLabsInc/JaQuAD) dataset compiled for the question generation (QG) task. The test set of the original data is not publicly released, so we randomly sampled test questions from the training set. There is no overlap in paragraphs across the train, test, and validation splits. ### Supported Tasks and Leaderboards * `question-generation`: The dataset is assumed to be used to train a model for question generation. Success on this task is typically measured by achieving a high BLEU4/METEOR/ROUGE-L/BERTScore/MoverScore (see our paper for more detail). ### Languages Japanese (ja) ## Dataset Structure An example of 'train' looks as follows. 
``` { "question": "新型車両として6000系が構想されたのは、製造費用のほか、どんな費用を抑えるためだったの?", "paragraph": "三多摩地区開発による沿線人口の増加、相模原線延伸による多摩ニュータウン乗り入れ、都営地下鉄10号線(現都営地下鉄新宿線、以下新宿線と表記する)乗入構想により、京王線の利用客増加が見込まれ、相当数の車両を準備する必要に迫られるなか、製造費用、保守費用を抑えた新型車両として6000系が構想された。新宿線建設に際してはすでに1号線(後の浅草線)を1,435mm軌間で開業させていた東京都は京成電鉄と1号線との乗り入れにあたり京成電鉄の路線を1,372mmから1,435mmに改軌させた事例や、1,372mm軌間の特殊性から運輸省(当時、2001年から国土交通省)と共に京王にも改軌を求めたが、改軌工事中の輸送力確保が困難なことを理由に改軌しないことで決着している。", "answer": "保守費用", "sentence": "三多摩地区開発による沿線人口の増加、相模原線延伸による多摩ニュータウン乗り入れ、都営地下鉄10号線(現都営地下鉄新宿線、以下新宿線と表記する)乗入構想により、京王線の利用客増加が見込まれ、相当数の車両を準備する必要に迫られるなか、製造費用、保守費用を抑えた新型車両として6000系が構想された。", "paragraph_sentence": "<hl>三多摩地区開発による沿線人口の増加、相模原線延伸による多摩ニュータウン乗り入れ、都営地下鉄10号線(現都営地下鉄新宿線、以下新宿線と表記する)乗入構想により、京王線の利用客増加が見込まれ、相当数の車両を準備する必要に迫られるなか、製造費用、保守費用を抑えた新型車両として6000系が構想された。<hl>新宿線建設に際してはすでに1号線(後の浅草線)を1,435mm軌間で開業させていた東京都は京成電鉄と1号線との乗り入れにあたり京成電鉄の路線を1,372mmから1,435mmに改軌させた事例や、1,372mm軌間の特殊性から運輸省(当時、2001年から国土交通省)と共に京王にも改軌を求めたが、改軌工事中の輸送力確保が困難なことを理由に改軌しないことで決着している。", "paragraph_answer": "三多摩地区開発による沿線人口の増加、相模原線延伸による多摩ニュータウン乗り入れ、都営地下鉄10号線(現都営地下鉄新宿線、以下新宿線と表記する)乗入構想により、京王線の利用客増加が見込まれ、相当数の車両を準備する必要に迫られるなか、製造費用、<hl>保守費用<hl>を抑えた新型車両として6000系が構想された。新宿線建設に際してはすでに1号線(後の浅草線)を1,435mm軌間で開業させていた東京都は京成電鉄と1号線との乗り入れにあたり京成電鉄の路線を1,372mmから1,435mmに改軌させた事例や、1,372mm軌間の特殊性から運輸省(当時、2001年から国土交通省)と共に京王にも改軌を求めたが、改軌工事中の輸送力確保が困難なことを理由に改軌しないことで決着している。", "sentence_answer": "三多摩地区開発による沿線人口の増加、相模原線延伸による多摩ニュータウン乗り入れ、都営地下鉄10号線(現都営地下鉄新宿線、以下新宿線と表記する)乗入構想により、京王線の利用客増加が見込まれ、相当数の車両を準備する必要に迫られるなか、製造費用、<hl>保守費用<hl>を抑えた新型車両として6000系が構想された。" } ``` The data fields are the same among all splits. - `question`: a `string` feature. - `paragraph`: a `string` feature. - `answer`: a `string` feature. - `sentence`: a `string` feature. - `paragraph_answer`: a `string` feature, which is same as the paragraph but the answer is highlighted by a special token `<hl>`. - `paragraph_sentence`: a `string` feature, which is same as the paragraph but a sentence containing the answer is highlighted by a special token `<hl>`. 
- `sentence_answer`: a `string` feature, which is the same as the sentence but with the answer highlighted by a special token `<hl>`. Each of the `paragraph_answer`, `paragraph_sentence`, and `sentence_answer` features can be used to train a question generation model, each providing different information. The `paragraph_answer` and `sentence_answer` features are for answer-aware question generation and the `paragraph_sentence` feature is for sentence-aware question generation. ## Data Splits |train|validation|test | |----:|---------:|----:| |27809| 3939| 3939| ## Citation Information ``` @inproceedings{ushio-etal-2022-generative, title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration", author = "Ushio, Asahi and Alva-Manchego, Fernando and Camacho-Collados, Jose", booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2022", address = "Abu Dhabi, U.A.E.", publisher = "Association for Computational Linguistics", } ```
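The `<hl>` highlighting behind `paragraph_answer` and `sentence_answer` can be reproduced with a few lines of string handling. A sketch of the convention (an illustration, not the official preprocessing code; note that the same `<hl>` token both opens and closes the span):

```python
def highlight(text: str, answer: str) -> str:
    """Wrap the first occurrence of `answer` in `text` with <hl> tokens,
    mirroring the paragraph_answer / sentence_answer fields."""
    i = text.find(answer)
    if i < 0:
        raise ValueError("answer not found in text")
    return f"{text[:i]}<hl>{answer}<hl>{text[i + len(answer):]}"

print(highlight("the radius of the star", "radius"))
# the <hl>radius<hl> of the star
```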
4,726
[ [ -0.06317138671875, -0.0631103515625, 0.043243408203125, 0.0244140625, -0.031341552734375, -0.02423095703125, -0.004726409912109375, -0.012725830078125, 0.009246826171875, 0.040313720703125, -0.050567626953125, -0.045166015625, -0.01255035400390625, 0.0137557...
azuur/es_corpora_parliament_processed
2022-01-26T16:58:53.000Z
[ "region:us" ]
azuur
null
null
0
86
2022-03-02T23:29:22
Entry not found
15
[ [ -0.02142333984375, -0.01495361328125, 0.05718994140625, 0.0288238525390625, -0.035064697265625, 0.046539306640625, 0.052520751953125, 0.005062103271484375, 0.0513916015625, 0.016998291015625, -0.052093505859375, -0.014984130859375, -0.060394287109375, 0.0379...
badranx/opus_raw
2022-01-28T14:19:19.000Z
[ "region:us" ]
badranx
mono corpus from http://www.opensubtitles.org/. Please check http://www.opensubtitles.org/ for the available corpora and licenses.
P. Lison and J. Tiedemann, 2016, OpenSubtitles2016: Extracting Large Parallel Corpora from Movie and TV Subtitles. In Proceedings of the 10th International Conference on Language Resources and Evaluation (LREC 2016)
1
86
2022-03-02T23:29:22
## Load mono corpora from OPUS OPUS provides many parallel corpora, but it holds even more data for individual languages. This loader enables you to load any raw mono corpus from [opus.nlpl.eu](https://opus.nlpl.eu/). Please check [opus.nlpl.eu](https://opus.nlpl.eu/) for the available corpora and licenses. The targeted corpus is called a raw corpus on OPUS. To use it, you need the name of the corpus, the version, and the target language code. The corpus name and version are provided in one string separated by a space (e.g. 'News-Commentary v16'). All of these can be found on [opus.nlpl.eu](https://opus.nlpl.eu/). I didn't provide a default dataset, because this loader targets many different corpora at once. You must provide two parameters as configurations, corpus and lang; see the example below. ## Example: ```python from datasets import load_dataset dataset = load_dataset('badranx/opus_raw', corpus="News-Commentary v16", lang="de") ``` ## Structure The structure is simple. ```python { "id": datasets.Value("string"), "text": datasets.Value("string"), } ``` "text" can be one or more sentences, but not more than a paragraph.
1,089
[ [ -0.033538818359375, -0.027191162109375, 0.01074981689453125, 0.043548583984375, -0.039520263671875, 0.00807952880859375, -0.037841796875, -0.02032470703125, 0.035369873046875, 0.0440673828125, -0.03369140625, -0.049102783203125, -0.0191497802734375, 0.031524...
biu-nlp/qa_align
2021-11-19T01:01:40.000Z
[ "region:us" ]
biu-nlp
This dataset contains QA-Alignments - annotations of cross-text content overlap. The task input is two sentences from two documents, roughly talking about the same event, along with their QA-SRL annotations which capture verbal predicate-argument relations in question-answer format. The output is a cross-sentence alignment between sets of QAs which denote the same information. See the paper for details: QA-Align: Representing Cross-Text Content Overlap by Aligning Question-Answer Propositions, Brook Weiss et al., EMNLP 2021. Here we provide both the QASRL annotations and the QA-Align annotations for the target sentences.
@inproceedings{brook-weiss-etal-2021-qa, title = "{QA}-Align: Representing Cross-Text Content Overlap by Aligning Question-Answer Propositions", author = "Brook Weiss, Daniela and Roit, Paul and Klein, Ayal and Ernst, Ori and Dagan, Ido", booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2021", address = "Online and Punta Cana, Dominican Republic", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2021.emnlp-main.778", pages = "9879--9894", abstract = "Multi-text applications, such as multi-document summarization, are typically required to model redundancies across related texts. Current methods confronting consolidation struggle to fuse overlapping information. In order to explicitly represent content overlap, we propose to align predicate-argument relations across texts, providing a potential scaffold for information consolidation. We go beyond clustering coreferring mentions, and instead model overlap with respect to redundancy at a propositional level, rather than merely detecting shared referents. Our setting exploits QA-SRL, utilizing question-answer pairs to capture predicate-argument relations, facilitating laymen annotation of cross-text alignments. We employ crowd-workers for constructing a dataset of QA-based alignments, and present a baseline QA alignment model trained over our dataset. Analyses show that our new task is semantically challenging, capturing content overlap beyond lexical similarity and complements cross-document coreference with proposition-level links, offering potential use for downstream tasks.", }
0
86
2022-03-02T23:29:22
# QA-Align This dataset contains QA-Alignments --- fine-grained annotations of cross-text content overlap. The task input is two sentences from two documents, roughly talking about the same event, along with their QA-SRL annotations which capture verbal predicate-argument relations in question-answer format. The output is a cross-sentence alignment between sets of QAs which denote the same information. See the paper for details: [QA-Align: Representing Cross-Text Content Overlap by Aligning Question-Answer Propositions, Brook Weiss et al., EMNLP 2021](https://aclanthology.org/2021.emnlp-main.778/). The script downloads the data from the original [GitHub repository](https://github.com/DanielaBWeiss/QA-ALIGN). ### Format The dataset contains the following important features: * `abs_sent_id_1`, `abs_sent_id_2` - unique sentence ids, unique across all data sources. * `text_1`, `text_2`, `prev_text_1`, `prev_text_2` - the two candidate sentences for alignments. The "prev" (previous) sentences are for context (shown to workers and for the model). * `qas_1`, `qas_2` - the sets of QASRL QAs for each sentence. For test and dev they were created by workers, while in train, the QASRL parser generated them. * `alignments` - the aligned QAs that workers have matched. This is the list of qa-alignments, where a single alignment looks like this: ```json {'sent1': [{'qa_uuid': '33_1ecbplus~!~8~!~195~!~12~!~charged~!~4082', 'verb': 'charged', 'verb_idx': 12, 'question': 'Who was charged?', 'answer': 'the two youths', 'answer_range': '9:11'}], 'sent2': [{'qa_uuid': '33_8ecbplus~!~3~!~328~!~11~!~accused~!~4876', 'verb': 'accused', 'verb_idx': 11, 'question': 'Who was accused of something?', 'answer': 'two men', 'answer_range': '9:10'}]} ``` Here, for each sentence, we save a list of the aligned QAs from that sentence. Note that this single alignment may contain multiple QAs for each sentence. 
While 96% of the data are one-to-one alignments, 4% contain many-to-many alignments (though most of those are 2-to-1).
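Given the record shape above, extracting the cross-sentence question-answer pairs is straightforward. A sketch (the record below is a trimmed version of the card's example, and `aligned_pairs` is a hypothetical helper, not part of the released code):

```python
def aligned_pairs(alignment):
    """Yield ((q1, a1), (q2, a2)) tuples for every QA pairing in one alignment;
    a single alignment may hold multiple QAs per sentence."""
    for qa1 in alignment["sent1"]:
        for qa2 in alignment["sent2"]:
            yield (qa1["question"], qa1["answer"]), (qa2["question"], qa2["answer"])

alignment = {
    "sent1": [{"question": "Who was charged?", "answer": "the two youths"}],
    "sent2": [{"question": "Who was accused of something?", "answer": "two men"}],
}
pairs = list(aligned_pairs(alignment))
print(pairs[0][0])  # ('Who was charged?', 'the two youths')
```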
2,099
[ [ -0.0184326171875, -0.05120849609375, 0.03485107421875, -0.0003135204315185547, 0.007175445556640625, -0.00847625732421875, 0.00716400146484375, -0.017425537109375, 0.0288848876953125, 0.035614013671875, -0.071533203125, -0.047271728515625, -0.0211944580078125, ...
biu-nlp/qanom
2022-10-18T09:50:01.000Z
[ "region:us" ]
biu-nlp
The dataset contains question-answer pairs to model the predicate-argument structure of deverbal nominalizations. The questions start with wh-words (Who, What, Where, etc.) and contain the verbal form of a nominalization from the sentence; the answers are phrases in the sentence. See the paper for details: QANom: Question-Answer driven SRL for Nominalizations (Klein et al., COLING 2020) For previewing the QANom data along with the verbal annotations of QASRL, check out "https://browse.qasrl.org/". This dataset was annotated by selected workers from Amazon Mechanical Turk.
@inproceedings{klein2020qanom, title={QANom: Question-Answer driven SRL for Nominalizations}, author={Klein, Ayal and Mamou, Jonathan and Pyatkin, Valentina and Stepanov, Daniela and He, Hangfeng and Roth, Dan and Zettlemoyer, Luke and Dagan, Ido}, booktitle={Proceedings of the 28th International Conference on Computational Linguistics}, pages={3069--3083}, year={2020} }
1
86
2022-03-02T23:29:22
# QANom This dataset contains question-answer pairs to model the predicate-argument structure of deverbal nominalizations. The questions start with wh-words (Who, What, Where, etc.) and contain the verbal form of a nominalization from the sentence; the answers are phrases in the sentence. See the paper for details: [QANom: Question-Answer driven SRL for Nominalizations (Klein et al., COLING 2020)](https://www.aclweb.org/anthology/2020.coling-main.274/) For previewing the QANom data along with the verbal annotations of QASRL, check out https://browse.qasrl.org/. Also check out our [GitHub repository](https://github.com/kleinay/QANom) to find code for nominalization identification, QANom annotation, evaluation, and models. The dataset was annotated by selected workers from Amazon Mechanical Turk.
822
[ [ -0.032318115234375, -0.0860595703125, 0.031890869140625, -0.005680084228515625, -0.0250244140625, 0.01416015625, -0.0034465789794921875, -0.016082763671875, 0.0264129638671875, 0.057403564453125, -0.056671142578125, -0.059326171875, -0.0215911865234375, 0.01...
castorini/msmarco_v1_doc_doc2query-t5_expansions
2022-07-02T19:16:12.000Z
[ "language:en", "license:apache-2.0", "region:us" ]
castorini
null
null
0
86
2022-03-02T23:29:22
--- language: - en license: apache-2.0 --- # Dataset Summary The repo provides queries generated for the MS MARCO V1 document corpus with docTTTTTquery (sometimes written as docT5query or doc2query-T5), the latest version of the doc2query family of document expansion models. The basic idea is to train a model that, when given an input document, generates questions that the document might answer (or more broadly, queries for which the document might be relevant). These predicted questions (or queries) are then appended to the original documents, which are then indexed as before. The docTTTTTquery model gets its name from the use of T5 as the expansion model. # Dataset Structure All three folds (train, dev and test) share the same corpus. An example data entry looks as follows: ``` { "id": "D1555982", "predicted_queries": ["when find radius of star r", "what is r radius", "how to find out radius of star", "what is radius r", "what is radius of r", "how do you find radius of star igel", "which law states that radiation is proportional to radiation?", "what is the radius of a spherical star", "what is the radius of the star", "what is radius of star", "which radiation is produced during a solar radiation experiment?", "how to find radius r", "what is radius r of a star", "the hot glowing surfaces of stars emit energy in the form of", "what is the radius of a star", "what is the radius of a star", "how to find radius r on a star", "how to find radius r in a solar cell", "what kind of energy does a hot glowing surface of a star emit?", "what kind of energy does the hot glowing surface of stars emit"] } ``` # Load Dataset An example to load the dataset: ``` from datasets import load_dataset dataset = load_dataset('castorini/msmarco_v1_doc_doc2query-t5_expansions') ``` # Citation Information ``` @article{docTTTTTquery, title={From doc2query to {docTTTTTquery}}, author={Nogueira, Rodrigo and Lin, Jimmy}, year={2019} } @article{emdt5, author = "Ronak Pradeep and Rodrigo Nogueira and Jimmy Lin", title = "The 
Expando-Mono-Duo Design Pattern for Text Ranking with Pretrained Sequence-to-Sequence Models", journal = "arXiv:2101.05667", year = 2021, }
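The expansion step described above (append the predicted queries to the document, then index as usual) reduces to simple string concatenation. A minimal sketch (the single-space separator is an assumption; any whitespace join works for bag-of-words indexing):

```python
def expand_document(doc_text: str, predicted_queries: list) -> str:
    """Append docTTTTTquery expansions to the document text before indexing."""
    return " ".join([doc_text] + predicted_queries)

expanded = expand_document(
    "The hot glowing surfaces of stars emit energy.",
    ["what is radius of star", "how to find radius r"],
)
print(expanded)
```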
2,172
[ [ -0.027099609375, -0.053497314453125, 0.040191650390625, -0.002300262451171875, -0.009918212890625, -0.0022258758544921875, -0.007537841796875, -0.02398681640625, 0.0017299652099609375, 0.055816650390625, -0.05517578125, -0.05828857421875, -0.035186767578125, ...
castorini/nq_gar-t5_expansions
2023-10-10T18:58:22.000Z
[ "language:en", "license:apache-2.0", "region:us" ]
castorini
null
null
1
86
2022-03-02T23:29:22
--- language: - "en" license: "apache-2.0" --- # Dataset Summary The repo provides answer, title, and sentence expansions for the Natural Questions corpus with gar-T5. # Dataset Structure There are dev and test folds. An example data entry of the dev split looks as follows: ``` { "id": "1", "predicted_answers": ["312"], "predicted_titles": ["Invisible Man"], "predicted_sentences": ["The Invisible Man First edition Author Ralph Ellison Cover artist M."] } ``` An example data entry of the test split looks as follows: ``` { "id": "1", "predicted_answers": ["May 18 , 2018"], "predicted_titles": ["Deadpool 2 *** Deadpool (film) *** Deadpool 2 (soundtrack) *** X-Men in other media"], "predicted_sentences": ["Deadpool 2 was released on May 18 , 2018 , with Leitch directing from a screenplay by Rhett Reese and Paul Wernick ."] } ``` # Load Dataset An example to load the dataset: ```python from datasets import load_dataset data_files = {"dev":"dev/dev.jsonl", "test": "test/test.jsonl"} dataset = load_dataset('castorini/nq_gar-t5_expansions') ```
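In Generation-Augmented Retrieval, the generated answers, titles, and sentences are typically concatenated to the original query before retrieval. A minimal sketch of that use (the concatenation order and separator are assumptions, not prescribed by this repo; the entry is the dev example from the card):

```python
def expand_query(query: str, entry: dict) -> str:
    """Concatenate GAR-T5 expansions onto a query for sparse retrieval."""
    parts = [query]
    for key in ("predicted_answers", "predicted_titles", "predicted_sentences"):
        parts.extend(entry[key])
    return " ".join(parts)

entry = {
    "predicted_answers": ["312"],
    "predicted_titles": ["Invisible Man"],
    "predicted_sentences": ["The Invisible Man First edition Author Ralph Ellison Cover artist M."],
}
print(expand_query("how many pages in invisible man", entry))
```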
1,042
[ [ -0.05224609375, -0.0579833984375, 0.01983642578125, -0.0252532958984375, -0.024688720703125, 0.0218505859375, 0.00951385498046875, -0.01458740234375, 0.010345458984375, 0.04412841796875, -0.06585693359375, -0.0406494140625, -0.034912109375, 0.03277587890625,...
castorini/triviaqa_gar-t5_expansions
2022-02-17T00:58:32.000Z
[ "language:English", "license:Apache License 2.0", "region:us" ]
castorini
null
null
0
86
2022-03-02T23:29:22
--- language: - English license: "Apache License 2.0" --- # Dataset Summary The repo provides answer, title, and sentence expansions for the Trivia QA corpus with gar-T5. # Dataset Structure There are dev and test folds. An example data entry of the dev split looks as follows: ``` { "id": "1", "predicted_answers": ["Bz"], "predicted_titles": ["Vehicle registration plates of Belize *** Vehicle registration plate"], "predicted_sentences": ["The international code for Belize is \"\"BZ\"\"."] } ``` An example data entry of the test split looks as follows: ``` { "id": "1", "predicted_answers": ["Taurus"], "predicted_titles": ["Jamie Lee Curtis *** Under the Tuscan Sun *** Angels (Jamie Lee Curtis song) *** Under the Tuscan Sun (film) *** John Michael King *** Robert Earl *** Henry Jones, Sr. *** Jamie Lee (singer) *** Under the Tuscan Sun (1974 film) *** Richard Benjamin"], "predicted_sentences": ["In July 2007, several news outlets reported that the couple had quietly married in December 2007, and that Curtis had taken a liking to one another, sharing \"\"sweet nothings\"\" about their relationship."] } ``` # Load Dataset An example to load the dataset: ```python from datasets import load_dataset data_files = {"dev":"dev/dev.jsonl", "test": "test/test.jsonl"} dataset = load_dataset('castorini/triviaqa_gar-t5_expansions') ```
1,365
[ [ -0.037139892578125, -0.0487060546875, 0.0274505615234375, 0.00031375885009765625, -0.014801025390625, 0.0203857421875, 0.0036792755126953125, -0.00815582275390625, 0.01136016845703125, 0.0413818359375, -0.040618896484375, -0.061553955078125, -0.022796630859375, ...
cdminix/mgb1
2021-02-05T16:04:03.000Z
[ "region:us" ]
cdminix
The first edition of the Multi-Genre Broadcast (MGB-1) Challenge is an evaluation of speech recognition, speaker diarization, and lightly supervised alignment using TV recordings in English. The speech data is broad and multi-genre, spanning the whole range of TV output, and represents a challenging task for speech technology. In 2015, the challenge used data from the British Broadcasting Corporation (BBC).
@inproceedings{bell2015mgb, title={The MGB challenge: Evaluating multi-genre broadcast media recognition}, author={Bell, Peter and Gales, Mark JF and Hain, Thomas and Kilgour, Jonathan and Lanchantin, Pierre and Liu, Xunying and McParland, Andrew and Renals, Steve and Saz, Oscar and Wester, Mirjam and others}, booktitle={2015 IEEE Workshop on Automatic Speech Recognition and Understanding (ASRU)}, pages={687--693}, year={2015}, organization={IEEE} }
0
86
2022-03-02T23:29:22
Entry not found
15
[ [ -0.02142333984375, -0.01495361328125, 0.05718994140625, 0.0288238525390625, -0.035064697265625, 0.046539306640625, 0.052520751953125, 0.005062103271484375, 0.0513916015625, 0.016998291015625, -0.052093505859375, -0.014984130859375, -0.060394287109375, 0.0379...
cem/dnm
2021-12-24T10:48:27.000Z
[ "region:us" ]
cem
null
null
0
86
2022-03-02T23:29:22
Entry not found
15
[ [ -0.0213775634765625, -0.01497650146484375, 0.05718994140625, 0.02880859375, -0.0350341796875, 0.046478271484375, 0.052490234375, 0.00507354736328125, 0.051361083984375, 0.0170135498046875, -0.052093505859375, -0.01497650146484375, -0.0604248046875, 0.0379028...
cestwc/adapted-msrcomp
2021-12-16T06:34:25.000Z
[ "region:us" ]
cestwc
null
null
0
86
2022-03-02T23:29:22
Entry not found
15
[ [ -0.0213775634765625, -0.01497650146484375, 0.05718994140625, 0.02880859375, -0.0350341796875, 0.046478271484375, 0.052490234375, 0.00507354736328125, 0.051361083984375, 0.0170135498046875, -0.052093505859375, -0.01497650146484375, -0.0604248046875, 0.0379028...
cestwc/adapted-synonym
2021-12-29T16:59:46.000Z
[ "region:us" ]
cestwc
null
null
0
86
2022-03-02T23:29:22
Entry not found
15
[ [ -0.0213775634765625, -0.01497650146484375, 0.05718994140625, 0.02880859375, -0.0350341796875, 0.046478271484375, 0.052490234375, 0.00507354736328125, 0.051361083984375, 0.0170135498046875, -0.052093505859375, -0.01497650146484375, -0.0604248046875, 0.0379028...
cestwc/conjnli
2022-02-15T15:23:38.000Z
[ "region:us" ]
cestwc
null
null
0
86
2022-03-02T23:29:22
Entry not found
15
[ [ -0.0213775634765625, -0.01497650146484375, 0.05718994140625, 0.02880859375, -0.0350341796875, 0.046478271484375, 0.052490234375, 0.00507354736328125, 0.051361083984375, 0.0170135498046875, -0.052093505859375, -0.01497650146484375, -0.0604248046875, 0.0379028...
chenyuxuan/wikigold
2021-07-26T12:40:03.000Z
[ "region:us" ]
chenyuxuan
WikiGold dataset.
@inproceedings{balasuriya-etal-2009-named, title = "Named Entity Recognition in Wikipedia", author = "Balasuriya, Dominic and Ringland, Nicky and Nothman, Joel and Murphy, Tara and Curran, James R.", booktitle = "Proceedings of the 2009 Workshop on The People{'}s Web Meets {NLP}: Collaboratively Constructed Semantic Resources (People{'}s Web)", month = aug, year = "2009", address = "Suntec, Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/W09-3302", pages = "10--18", }
0
86
2022-03-02T23:29:22
Entry not found
15
[ [ -0.0213775634765625, -0.01497650146484375, 0.05718994140625, 0.02880859375, -0.0350341796875, 0.046478271484375, 0.052490234375, 0.00507354736328125, 0.051361083984375, 0.0170135498046875, -0.052093505859375, -0.01497650146484375, -0.0604248046875, 0.0379028...
cheulyop/dementiabank
2021-10-04T14:18:42.000Z
[ "region:us" ]
cheulyop
DementiaBank Pitt Corpus includes audios and transcripts of 99 controls and 194 dementia patients. These transcripts and audio files were gathered as part of a larger protocol administered by the Alzheimer and Related Dementias Study at the University of Pittsburgh School of Medicine. The original acquisition of the DementiaBank data was supported by NIH grants AG005133 and AG003705 to the University of Pittsburgh. Participants included elderly controls, people with probable and possible Alzheimer’s Disease, and people with other dementia diagnoses. Data were gathered longitudinally, on a yearly basis.
@article{becker1994natural, title={The natural history of Alzheimer's disease: description of study cohort and accuracy of diagnosis}, author={Becker, James T and Boiler, Fran{\c{c}}ois and Lopez, Oscar L and Saxton, Judith and McGonigle, Karen L}, journal={Archives of neurology}, volume={51}, number={6}, pages={585--594}, year={1994}, publisher={American Medical Association} }
0
86
2022-03-02T23:29:22
Entry not found
15
[ [ -0.0213775634765625, -0.01497650146484375, 0.05718994140625, 0.02880859375, -0.0350341796875, 0.046478271484375, 0.052490234375, 0.00507354736328125, 0.051361083984375, 0.0170135498046875, -0.052093505859375, -0.01497650146484375, -0.0604248046875, 0.0379028...
clarin-pl/2021-punctuation-restoration
2022-08-29T16:39:18.000Z
[ "task_categories:automatic-speech-recognition", "annotations_creators:crowdsourced", "language_creators:crowdsourced", "multilinguality:monolingual", "size_categories:n<1K", "language:pl", "region:us" ]
clarin-pl
This dataset is designed to be used in training models that restore punctuation marks from the output of an Automatic Speech Recognition system for the Polish language.
""" _DESCRIPTION =
1
86
2022-03-02T23:29:22
--- annotations_creators: - crowdsourced language: - pl language_creators: - crowdsourced license: [] multilinguality: - monolingual pretty_name: 2021-punctuation-restoration size_categories: - n<1K source_datasets: [] tags: [] task_categories: - automatic-speech-recognition task_ids: [] --- # Punctuation restoration from read text Restore punctuation marks from the output of an ASR system. ## Motivation Speech transcripts generated by Automatic Speech Recognition (ASR) systems typically do not contain any punctuation or capitalization. In longer stretches of automatically recognized speech, the lack of punctuation affects the general clarity of the output text [1]. The primary purpose of punctuation restoration (PR) and capitalization restoration (CR) as a distinct natural language processing (NLP) task is to improve the legibility of ASR-generated text, and possibly other types of texts without punctuation. Aside from their intrinsic value, PR and CR may improve the performance of other NLP tasks such as Named Entity Recognition (NER), part-of-speech (POS) tagging, semantic parsing, or spoken dialog segmentation [2, 3]. As useful as it seems, it is hard to systematically evaluate PR on transcripts of conversational language, mainly because punctuation rules can be ambiguous even for originally written texts, and the very nature of naturally-occurring spoken language makes it difficult to identify clear phrase and sentence boundaries [4, 5]. Given these requirements and limitations, a PR task based on a redistributable corpus of read speech was suggested. The 1200 texts included in this collection (totaling over 240,000 words) were selected from two distinct sources: WikiNews and WikiTalks. Punctuation found in these sources should be approached with some reservation when used for evaluation: these are original texts and may contain some user-induced errors and bias. The texts were read out by over a hundred different speakers. 
Original texts with punctuation were forced-aligned with recordings and used as the ideal ASR output. The goal of the task is to provide a solution for restoring punctuation in the test set collated for this task. The test set consists of time-aligned ASR transcriptions of read texts from the two sources. Participants are encouraged to use both text-based and speech-derived features to identify punctuation symbols (e.g. multimodal framework [6]). In addition, the train set is accompanied by reference text corpora of WikiNews and WikiTalks data that can be used in training and fine-tuning punctuation models. ## Task description The purpose of this task is to restore punctuation in the ASR recognition of texts read out loud. ![](https://poleval.github.io/2021-punctuation-restoration/img/image001.png) **Input** ('tokens*'* column): sequence of tokens **Output** ('tags*'* column): sequence of tags **Measurements**: F1-score (seqeval) **Example**: Input: `['selekcjoner', 'szosowej', 'kadry', 'elity', 'mężczyzn', 'piotr', 'wadecki', 'ogłosił', '27', 'marca', '2008', 'r', 'szeroki', 'skład', 'zawodników', 'którzy', 'będą', 'rywalizować', 'o', 'miejsce', 'w', 'reprezentacji', 'na', 'tour', 'de', 'pologne', 'lista', 'liczy', '22', 'nazwiska', 'zawodników', 'zarówno', 'z', 'zagranicznych', 'jaki', 'i', 'polskich', 'ekip', 'spośród', '22', 'wybrańców', 'selekcjonera', 'do', 'składu', 'dostanie', 'się', 'tylko', 'ośmiu', 'kolarzy', 'którzy', 'we', 'wrześniu', 'będą', 'rywalizować', 'z', 'najlepszymi', 'grupami', 'kolarskimi', 'na', 'świecie', 'w', 'kręgu', 'zainteresowania', 'wadeckiego', 'znajduje', 'się', 'także', 'pięciu', 'innych', 'zawodników', 'ale', 'oni', 'prawdopodobnie', 'wystartują', 'w', 'polskim', 'tourze', 'w', 'szeregach', 'swoich', 'ekip', 'szeroka', 'kadra', 'na', 'tour', 'de', 'pologne', 'dariusz', 'baranowski', 'łukasz', 'bodnar', 'bartosz', 'huzarski', 'błażej', 'janiaczyk', 'tomasz', 'kiendyś', 'mateusz', 'komar', 'tomasz', 'lisowicz', 'piotr', 'mazur', 
'jacek', 'morajko', 'przemysław', 'niemiec', 'marek', 'rutkiewicz', 'krzysztof', 'szczawiński', 'mateusz', 'taciak', 'adam', 'wadecki', 'mariusz', 'witecki', 'piotr', 'zaradny', 'piotr', 'zieliński', 'mateusz', 'mróz', 'marek', 'wesoły', 'jarosław', 'rębiewski', 'robert', 'radosz', 'jarosław', 'dąbrowski']` Input (translated by DeepL): `the selector of the men's elite road cycling team piotr wadecki announced on march 27, 2008 a wide line-up of riders who will compete for a place in the national team for the tour de pologne the list includes 22 names of riders both from foreign and Polish teams out of the 22 selected by the selector only eight riders will get into the line-up who in September will compete with the best cycling groups in the world wadecki's circle of interest also includes five other cyclists, but they will probably compete in the Polish tour in the ranks of their teams wide cadre for the tour de pologne dariusz baranowski łukasz bodnar bartosz huzarski błażej janiaczyk tomasz kiendyś mateusz komar tomasz lisowicz piotr mazur jacek morajko przemysław german marek rutkiewicz krzysztof szczawiński mateusz taciak adam wadecki mariusz witecki piotr zaradny piotr zieliński mateusz mróz marek wesoły jarosław rębiewski robert radosz jarosław dąbrowski` Output: `['O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'B-.', 'O', 'O', 'B-,', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'B-.', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'B-,', 'O', 'O', 'O', 'B-.', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'B-,', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'B-.', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'B-,', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'B-.', 'O', 'O', 'O', 'O', 'O', 'B-:', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O']` ## Dataset – WikiPunct 
WikiPunct is a crowdsourced text and audio data set of Polish Wikipedia pages read out loud by Polish lectors. The dataset is divided into two parts: conversational (WikiTalks) and information (WikiNews). Over a hundred people were involved in the production of the audio component. The total length of audio data reaches almost thirty-six hours, including the test set. Steps were taken to balance the male-to-female ratio. WikiPunct has over thirty-two thousand texts and 1200 audio files, one thousand in the training set and two hundred in the test set. There is a transcript of automatically recognized speech and force-aligned text for each text. The details behind the data format and evaluation metrics are presented below in the respective sections. **Statistics:** - **Text:** - over thirty-two thousand texts; WikiNews ca. 15,000, WikiTalks ca. 17,000; - **Audio:** - Selection procedure: - randomly selected WikiNews (80%, i.e. 800 entries for the training set) with a word count above 150 words and below 300 words; - randomly selected WikiTalks (20%) with a word count above 150 words but below 300 words and at least one question mark - Data set split - Training data: 1000 recordings - Test data: 274 recordings - Speakers: - Polish male: 51 speakers, 16.7 hours of speech - Polish female: 54 speakers, 19 hours of speech **Data splits** | Subset | Cardinality (texts) | | ----------- | ----------------------: | | train | 800 | | dev | 0 | | test | 200 | **Class distribution (without "O")** | Class | train | validation | test | |:--------|--------:|-------------:|-------:| | B-. | 0.419 | - | 0.416 | | B-, | 0.406 | - | 0.403 | | B-- | 0.097 | - | 0.099 | | B-: | 0.037 | - | 0.052 | | B-? | 0.032 | - | 0.024 | | B-! | 0.005 | - | 0.004 | | B-; | 0.004 | - | 0.002 | **Punctuation for raw text:** | | **symbol** | **mean** | **median** | **max** | **sum** | **included** | | --- | --- | --- | --- | --- | --- | --- | | **fullstop** | .
| 12.44 | 7.0 | 1129.0 | 404 378 | yes | | **comma** | , | 10.97 | 5.0 | 1283.0 | 356 678 | yes | | **question\_mark** | ? | 0.83 | 0.0 | 130.0 | 26 879 | yes | | **exclamation\_mark** | ! | 0.22 | 0.0 | 55.0 | 7 164 | yes | | **hyphen** | - | 2.64 | 1.0 | 363.0 | 81 190 | yes | | **colon** | : | 1.49 | 0.0 | 202.0 | 44 995 | yes | | **ellipsis** | ... | 0.27 | 0.0 | 60.0 | 8 882 | yes | | **semicolon** | ; | 0.13 | 0.0 | 51.0 | 4 270 | no | | **quote** | &quot; | 3.64 | 0.0 | 346.0 | 116 874 | no | | **words** | | 169.50 | 89.0 | 17252.0 | 5 452 032 | - | The dataset is divided into two parts: conversational (WikiTalks) and information (WikiNews). **Part 1. WikiTalks** Data scraped from Polish Wikipedia Talk pages. Talk pages, also known as discussion pages, are administration pages with editorial details and discussions for Wikipedia articles. Talk pages were scraped from the web using a list of article titles shared alongside Wikipedia dump archives. Wikipedia Talk pages serve as conversational data. Here, users communicate with each other by writing comments. Vocabulary and punctuation errors are expected. This data set covers 20% of the spoken data. Example: - **wikitalks001948:** Cóż za bzdury tu powypisywane! Fra Diavolo starał się nie dopuścić do upadku Republiki Partenopejskiej? Kto to wymyślił?! Człowiek ten był jednym z najżarliwszych wrogów francuskiej okupacji, a za zasługi w wypędzeniu Francuzów został mianowany pułkownikiem w królewskiej armii z prawdziwie królewską pensją. Bez niego wyzwolenie, nazywać to tak czy też nie, północnej części królestwa byłoby dużo trudniejsze, bo dysponował siłą kilku tysięcy sprawnych w boju i umiejętnie wziętych w karby rzezimieszków. Toteż armia Burbonów nie pokonywała go, jak to się twierdzi w artykule, lecz ściśle współpracowała. Redaktorów zachęcam do jak najszybszej korekty artykułu, bo aktualnie jest obrazą dla ambicji Wikipedii.
91.199.250.17 - **wikitalks008902:** Stare wątki w dyskusji przeniosłem do archiwum. Od prawie roku dyskusja w nich nie była kontynuowana. Sławek Borewicz **Part 2. WikiNews** **Wikinews** is a free-content news wiki and a project of the Wikimedia Foundation. The site works through collaborative journalism. The data was scraped directly from wikinews dump archive. The overall text quality is high, but vocabulary and punctuation errors may occur. This data set covers 80% of the spoken data. Example: - **wikinews222361:** Misja STS-127 promu kosmicznego Endeavour do Międzynarodowej Stacji Kosmicznej została przełożona ze względu na wyciek wodoru. Podczas procesu napełniania zewnętrznego zbiornika paliwem, część ciekłego wodoru przemieniła się w gaz i przedostała się do systemu odpowietrzania. System ten jest używany do bezpiecznego odprowadzania nadmiaru wodoru z platformy startowej 39A do Centrum Lotów Kosmicznych imienia Johna F. Kennedy&#39;ego. Początek misji miał mieć miejsce dzisiaj, o godzinie 13:17. Ze względu jednak na awarię, najbliższa możliwa data startu wahadłowca to środa 17 czerwca, jednak na ten dzień NASA na Przylądku Canaveral zaplanowana wystrzelenie sondy kosmicznej Lunar Reconnaissance Orbiter. Misja może być zatem opóźniona do 20 czerwca, który jest ostatnią możliwą datą startu w tym miesiącu. W niedzielę odbędzie się spotkanie specjalistów NASA, na którym zostanie ustalona nowa data startu i dalszy plan misji STS-127. ## Data format Input is a TSV file with two columns: 1. Text ID (to be used when handling forced-aligned transcriptions and WAV files if needed) 2. Input text - in lower-case letter without punctuation marks The output should have the same number of lines as the input file, in each line the text with punctuation marks should be given. ### Forced-aligned transcriptions We use force-aligned transcriptions of the original texts to approximate ASR output. 
Files in the _.clntmstmp_ format contain forced-alignment of the original text together with the audio file read out by a group of volunteers. The files may contain errors resulting from incorrect reading of the text (skipping fragments, adding words missing from the original text) and alignment errors resulting from the configuration of the alignment tool for text and audio files. The configuration targeted Polish; names from foreign languages may be poorly recognised, with the word duration equal to zero (start and end timestamps are equal). Data is given in the following format: **(timestamp\_start,timestamp\_end) word** ... **\</s\>** where **\</s\>** is a symbol of the end of recognition. Example: (990,1200) Rosja (1230,1500) zaczyna (1590,1950) powracać (1980,2040) do (2070,2400) praktyk (2430,2490) z (2520,2760) czasów (2820,3090) zimnej (3180,3180) wojny. (3960,4290) Rosjanie (4380,4770) wznowili (4860,5070) bowiem (5100,5160) na (5220,5430) stałe (5520,5670) loty (5760,6030) swoich (6120,6600) bombowców (6630,7230) strategicznych (7350,7530) poza (7590,7890) granice (8010,8010) kraju. (8880,9300) Prezydent (9360,9810) Władimir (9930,10200) Putin (10650,10650) wyjaśnił, (10830,10920) iż (10980,11130) jest (11160,11190) to (11220,11520) odpowiedź (11550,11640) na (11670,12120) zagrożenie (12240,12300) ze (12330,12570) strony (12660,12870) innych (13140,13140) państw. \</s\> ## Evaluation procedure Baseline results will be provided in final evaluation. ### Punctuation During the task the following punctuation marks will be evaluated: | **Punctuation mark** | **symbol** | | --- | --- | | fullstop | . | | comma | , | | question mark | ? | | exclamation mark | ! | | hyphen | - | | colon | : | | ellipsis | ... | | blank (no punctuation) | | Note that semi-colon (`;`) is disregarded here. ### Submission format The output to be evaluated is just the text with punctuation marks added. 
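The _.clntmstmp_ entries described above are plain text and straightforward to parse. A minimal Python sketch (an illustration, not an official tool of the task) that extracts the word-level timestamps:

```python
import re

# Each entry in a .clntmstmp file has the form "(start,end) word";
# "</s>" marks the end of recognition. Timestamps are in milliseconds.
ENTRY = re.compile(r"\((\d+),(\d+)\)\s+(\S+)")

def parse_clntmstmp(text: str):
    """Return a list of (start_ms, end_ms, word) tuples."""
    body = text.split("</s>")[0]  # ignore anything after the end marker
    return [(int(s), int(e), w) for s, e, w in ENTRY.findall(body)]
```

Words with zero duration (start equal to end, as with the poorly aligned foreign names mentioned above) can then be filtered out or flagged as likely alignment errors.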
### Metrics Final results are evaluated in terms of precision, recall, and F1 scores for predicting each punctuation mark separately. Submissions are compared with respect to the weighted average of F1 scores for each punctuation mark. ##### Per-document score: ![](https://poleval.github.io/2021-punctuation-restoration/img/image003.png) ##### Global score per punctuation mark _p_: ![](https://poleval.github.io/2021-punctuation-restoration/img/image005.png) Final scoring metric calculated as weighted average of global scores per ![](https://poleval.github.io/2021-punctuation-restoration/img/image007.png) We would like to invite participants to discussion about evaluation metrics, taking into account such factors as: - ASR and Forced-Alignment errors, - inconsistencies among annotators, - impact of only slight displacement of punctuation, - assigning different weights to different types of errors. ### Video introduction [![Video instruction](http://img.youtube.com/vi/yEh-RiFGN94/0.jpg)](http://www.youtube.com/watch?v=yEh-RiFGN94 "Video instruction") ### Downloads Data has been published in the following repository: https://github.com/poleval/2021-punctuation-restoration Training data is provided in train/\*.tsv. Additional data can be downloaded from Google Drive. Below is a list of file names along with a description of what they contain. 
- [poleval\_fa.train.tar.gz](https://drive.google.com/file/d/1oBFjZPb5Hk4r_VW4G0HrVnGy7A7zmTpa/view?usp=sharing) - archive contains forced-alignment of the original text together with the audio file - [poleval\_wav.train.tar.gz](https://drive.google.com/file/d/1b6MyyqgA9D1U7DX3Vtgda7f9ppkxjCXJ/view?usp=sharing) - archive contains training audio files - [poleval\_wav.validation.tar.gz](https://drive.google.com/file/d/1gwQRvrUtFqz3xGnmEN8znAzkBwC12Czu/view?usp=sharing) - archive contains test audio files - [poleval\_text.rest.tar.gz](https://drive.google.com/file/d/10SdpLHPLXVfhJsq1okgC5fcxbFzCGoR5/view?usp=sharing) - archive contains additional text provided in JSON format and CSV for which no audio files were provided (can be used for training purposes) ### Challenge stage The competition took place in September 2021. Now the challenge is in the after-competition stage. You can submit solutions, but they will be marked with a different color. ### License Creative Commons - Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) ### References 1. Yi, J., Tao, J., Bai, Y., Tian, Z., &amp; Fan, C. (2020). Adversarial transfer learning for punctuation restoration. _arXiv preprint arXiv:2004.00248_. 2. Nguyen, Thai Binh, et al. &quot;Improving Vietnamese Named Entity Recognition from Speech Using Word Capitalization and Punctuation Recovery Models.&quot; _Proc. Interspeech 2020_ (2020): 4263-4267. 3. Hlubík, Pavel, et al. &quot;Inserting Punctuation to ASR Output in a Real-Time Production Environment.&quot; _International Conference on Text, Speech, and Dialogue_. Springer, Cham, 2020. 4. Sirts, Kairit, and Kairit Peekman. &quot;Evaluating Sentence Segmentation and Word Tokenization Systems on Estonian Web Texts.&quot; _Human Language Technologies–The Baltic Perspective: Proceedings of the Ninth International Conference Baltic HLT 2020_. Vol. 328. IOS Press, 2020. 5. Wang, Xueyujie.
&quot;Analysis of Sentence Boundary of the Host&#39;s Spoken Language Based on Semantic Orientation Pointwise Mutual Information Algorithm.&quot; _2020 12th International Conference on Measuring Technology and Mechatronics Automation (ICMTMA)_. IEEE, 2020. 6. Sunkara, Monica, et al. &quot;Multimodal Semi-supervised Learning Framework for Punctuation Prediction in Conversational Speech.&quot; _arXiv preprint arXiv:2008.00702_ (2020).
17,751
[ [ -0.043304443359375, -0.020416259765625, 0.0257415771484375, 0.03668212890625, -0.0261993408203125, 0.007080078125, -0.020263671875, -0.0260162353515625, 0.0322265625, 0.02557373046875, -0.050445556640625, -0.0377197265625, -0.041748046875, 0.034881591796875,...
clarin-pl/aspectemo
2022-08-29T16:39:32.000Z
[ "task_categories:token-classification", "task_ids:sentiment-classification", "annotations_creators:expert-generated", "language_creators:other", "multilinguality:monolingual", "size_categories:1K", "size_categories:1K<n<10K", "source_datasets:original", "language:pl", "license:mit", "region:us" ...
clarin-pl
AspectEmo dataset: Multi-Domain Corpus of Consumer Reviews for Aspect-Based Sentiment Analysis
@misc{11321/849, title = {{AspectEmo} 1.0: Multi-Domain Corpus of Consumer Reviews for Aspect-Based Sentiment Analysis}, author = {Koco{\'n}, Jan and Radom, Jarema and Kaczmarz-Wawryk, Ewa and Wabnic, Kamil and Zaj{\c a}czkowska, Ada and Za{\'s}ko-Zieli{\'n}ska, Monika}, url = {http://hdl.handle.net/11321/849}, note = {{CLARIN}-{PL} digital repository}, copyright = {The {MIT} License}, year = {2021} }
1
86
2022-03-02T23:29:22
--- annotations_creators: - expert-generated language_creators: - other language: - pl license: - mit multilinguality: - monolingual pretty_name: 'AspectEmo' size_categories: - 1K - 1K<n<10K source_datasets: - original task_categories: - token-classification task_ids: - sentiment-classification --- # AspectEmo ## Description AspectEmo Corpus is an extended version of the publicly available PolEmo 2.0 corpus of Polish customer reviews, used in many projects on the use of different methods in sentiment analysis. The AspectEmo corpus consists of four subcorpora, each containing online customer reviews from one of the following domains: school, medicine, hotels, and products. All documents are annotated at the aspect level with six sentiment categories: strong negative (minus_m), weak negative (minus_s), neutral (zero), weak positive (plus_s), strong positive (plus_m), and ambiguous (amb). ## Versions | version | config name | description | default | notes | |---------|-------------|--------------------------------|---------|------------------| | 1.0 | "1.0" | The version used in the paper. | YES | | | 2.0 | - | Some bugs fixed. | NO | work in progress | ## Tasks (input, output and metrics) Aspect-based sentiment analysis (ABSA) is a text analysis method that categorizes data by aspects and identifies the sentiment assigned to each aspect. It is a sequence tagging task.
**Input** ('*tokens'* column): sequence of tokens **Output** ('*labels'* column): sequence of predicted tokens’ classes ("O" + 6 possible classes: strong negative (a_minus_m), weak negative (a_minus_s), neutral (a_zero), weak positive (a_plus_s), strong positive (a_plus_m), ambiguous (a_amb) ) **Domain**: school, medicine, hotels and products **Measurements**: F1-score (seqeval) **Example***:* Input: `['Dużo', 'wymaga', ',', 'ale', 'bardzo', 'uczciwy', 'i', 'przyjazny', 'studentom', '.', 'Warto', 'chodzić', 'na', 'konsultacje', '.', 'Docenia', 'postępy', 'i', 'zaangażowanie', '.', 'Polecam', '.']` Input (translated by DeepL): `'Demands a lot , but very honest and student friendly . Worth going to consultations . Appreciates progress and commitment . I recommend .'` Output: `['O', 'a_plus_s', 'O', 'O', 'O', 'a_plus_m', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'a_zero', 'O', 'a_plus_m', 'O', 'O', 'O', 'O', 'O', 'O']` ## Data splits | Subset | Cardinality (sentences) | |:-------|------------------------:| | train | 1173 | | val | 0 | | test | 292 | ## Class distribution(without "O") | Class | train | validation | test | |:----------|--------:|-------------:|-------:| | a_plus_m | 0.359 | - | 0.369 | | a_minus_m | 0.305 | - | 0.377 | | a_zero | 0.234 | - | 0.182 | | a_minus_s | 0.037 | - | 0.024 | | a_plus_s | 0.037 | - | 0.015 | | a_amb | 0.027 | - | 0.033 | ## Citation ``` @misc{11321/849, title = {{AspectEmo} 1.0: Multi-Domain Corpus of Consumer Reviews for Aspect-Based Sentiment Analysis}, author = {Koco{\'n}, Jan and Radom, Jarema and Kaczmarz-Wawryk, Ewa and Wabnic, Kamil and Zaj{\c a}czkowska, Ada and Za{\'s}ko-Zieli{\'n}ska, Monika}, url = {http://hdl.handle.net/11321/849}, note = {{CLARIN}-{PL} digital repository}, copyright = {The {MIT} License}, year = {2021} } ``` ## License ``` The MIT License ``` ## Links [HuggingFace](https://huggingface.co/datasets/clarin-pl/aspectemo) [Source](https://clarin-pl.eu/dspace/handle/11321/849) 
[Paper](https://sentic.net/sentire2021kocon.pdf) ## Examples ### Loading ```python from pprint import pprint from datasets import load_dataset dataset = load_dataset("clarin-pl/aspectemo") pprint(dataset['train'][20]) # {'labels': [0, 4, 0, 0, 0, 5, 0, 0, 0, 0, 0, 0, 0, 3, 0, 5, 0, 0, 0, 0, 0, 0], # 'tokens': ['Dużo', # 'wymaga', # ',', # 'ale', # 'bardzo', # 'uczciwy', # 'i', # 'przyjazny', # 'studentom', # '.', # 'Warto', # 'chodzić', # 'na', # 'konsultacje', # '.', # 'Docenia', # 'postępy', # 'i', # 'zaangażowanie', # '.', # 'Polecam', # '.']} ``` ### Evaluation ```python import random from pprint import pprint from datasets import load_dataset, load_metric dataset = load_dataset("clarin-pl/aspectemo") references = dataset["test"]["labels"] # generate random predictions predictions = [ [ random.randrange(dataset["train"].features["labels"].feature.num_classes) for _ in range(len(labels)) ] for labels in references ] # transform to original names of labels references_named = [ [dataset["train"].features["labels"].feature.names[label] for label in labels] for labels in references ] predictions_named = [ [dataset["train"].features["labels"].feature.names[label] for label in labels] for labels in predictions ] # transform to BILOU scheme references_named = [ [f"U-{label}" if label != "O" else label for label in labels] for labels in references_named ] predictions_named = [ [f"U-{label}" if label != "O" else label for label in labels] for labels in predictions_named ] # utilise seqeval to evaluate seqeval = load_metric("seqeval") seqeval_score = seqeval.compute( predictions=predictions_named, references=references_named, scheme="BILOU", mode="strict", ) pprint(seqeval_score) # {'a_amb': {'f1': 0.00597237775289287, # 'number': 91, # 'precision': 0.003037782418834251, # 'recall': 0.17582417582417584}, # 'a_minus_m': {'f1': 0.048306148055207034, # 'number': 1039, # 'precision': 0.0288551620760727, # 'recall': 0.1482194417709336}, # 'a_minus_s': {'f1': 
0.004682997118155619, # 'number': 67, # 'precision': 0.0023701002734731083, # 'recall': 0.19402985074626866}, # 'a_plus_m': {'f1': 0.045933014354066985, # 'number': 1015, # 'precision': 0.027402473834443386, # 'recall': 0.14187192118226602}, # 'a_plus_s': {'f1': 0.0021750951604132683, # 'number': 41, # 'precision': 0.001095690284879474, # 'recall': 0.14634146341463414}, # 'a_zero': {'f1': 0.025159400310184387, # 'number': 501, # 'precision': 0.013768389287061486, # 'recall': 0.14570858283433133}, # 'overall_accuracy': 0.13970115681233933, # 'overall_f1': 0.02328248652368391, # 'overall_precision': 0.012639312620633834, # 'overall_recall': 0.14742193173565724} ```
6,891
[ [ -0.039825439453125, -0.0452880859375, 0.02886962890625, 0.022247314453125, -0.0220184326171875, -0.0154266357421875, -0.01491546630859375, -0.0233001708984375, 0.043792724609375, 0.0299835205078125, -0.031097412109375, -0.061798095703125, -0.031585693359375, ...
clarin-pl/nkjp-pos
2023-01-30T22:53:57.000Z
[ "task_categories:other", "task_ids:part-of-speech", "annotations_creators:expert-generated", "language_creators:other", "multilinguality:monolingual", "size_categories:unknown", "source_datasets:original", "language:pl", "license:gpl-3.0", "structure-prediction", "region:us" ]
clarin-pl
NKJP-POS tagging dataset.
null
1
86
2022-03-02T23:29:22
--- annotations_creators: - expert-generated language_creators: - other language: - pl license: - gpl-3.0 multilinguality: - monolingual size_categories: - unknown source_datasets: - original task_categories: - other task_ids: - part-of-speech pretty_name: nkjp-pos tags: - structure-prediction --- # nkjp-pos ## Description NKJP-POS is a part of the National Corpus of Polish (*Narodowy Korpus Języka Polskiego*). Its objective is part-of-speech tagging, e.g. nouns, verbs, adjectives, adverbs, etc. During the creation of the corpus, texts from various sources, covering many domains and genres, were annotated by humans. ## Tasks (input, output and metrics) Part-of-speech tagging (POS tagging) - tagging words in text with their corresponding part of speech. **Input** ('*tokens'* column): sequence of tokens **Output** ('*pos_tags'* column): sequence of predicted tokens’ classes (35 possible classes, described in detail in the annotation guidelines) **Measurements**: F1-score (seqeval) **Example***:* Input: `['Zarejestruj', 'się', 'jako', 'bezrobotny', '.']` Input (translated by DeepL): `Register as unemployed.` Output: `['impt', 'qub', 'conj', 'subst', 'interp']` ## Data splits | Subset | Cardinality (sentences) | | ----------- | ----------------------: | | train | 78219 | | dev | 0 | | test | 7444 | ## Class distribution | Class | train | dev | test | |:--------|--------:|------:|--------:| | subst | 0.27345 | - | 0.27656 | | interp | 0.18101 | - | 0.17944 | | adj | 0.10611 | - | 0.10919 | | prep | 0.09567 | - | 0.09547 | | qub | 0.05670 | - | 0.05491 | | fin | 0.04939 | - | 0.04648 | | praet | 0.04409 | - | 0.04348 | | conj | 0.03711 | - | 0.03724 | | adv | 0.03512 | - | 0.03333 | | inf | 0.01591 | - | 0.01547 | | comp | 0.01476 | - | 0.01439 | | num | 0.01322 | - | 0.01436 | | ppron3 | 0.01111 | - | 0.01018 | | ppas | 0.01086 | - | 0.01085 | | ger | 0.00961 | - | 0.01050 | | brev | 0.00856 | - | 0.01181 | | ppron12 | 0.00670 | - | 0.00665 | | aglt | 0.00629 | - | 0.00602 |
| pred | 0.00539 | - | 0.00540 | | pact | 0.00454 | - | 0.00452 | | bedzie | 0.00229 | - | 0.00243 | | pcon | 0.00218 | - | 0.00189 | | impt | 0.00203 | - | 0.00226 | | siebie | 0.00177 | - | 0.00158 | | imps | 0.00174 | - | 0.00177 | | interj | 0.00131 | - | 0.00102 | | xxx | 0.00070 | - | 0.00048 | | adjp | 0.00069 | - | 0.00065 | | winien | 0.00068 | - | 0.00057 | | adja | 0.00048 | - | 0.00058 | | pant | 0.00012 | - | 0.00018 | | burk | 0.00011 | - | 0.00006 | | numcol | 0.00011 | - | 0.00013 | | depr | 0.00010 | - | 0.00004 | | adjc | 0.00007 | - | 0.00008 | ## Citation ``` @book{przepiorkowski_narodowy_2012, title = {Narodowy korpus języka polskiego}, isbn = {978-83-01-16700-4}, language = {pl}, publisher = {Wydawnictwo Naukowe PWN}, editor = {Przepiórkowski, Adam and Bańko, Mirosław and Górski, Rafał L. and Lewandowska-Tomaszczyk, Barbara}, year = {2012} } ``` ## License ``` GNU GPL v.3 ``` ## Links [HuggingFace](https://huggingface.co/datasets/clarin-pl/nkjp-pos) [Source](http://clip.ipipan.waw.pl/NationalCorpusOfPolish) [Paper](http://nkjp.pl/settings/papers/NKJP_ksiazka.pdf) ## Examples ### Loading ```python from pprint import pprint from datasets import load_dataset dataset = load_dataset("clarin-pl/nkjp-pos") pprint(dataset['train'][5000]) # {'id': '130-2-900005_morph_49.49-s', # 'pos_tags': [16, 4, 3, 30, 12, 18, 3, 16, 14, 6, 14, 26, 1, 30, 12], # 'tokens': ['Najwyraźniej', # 'źle', # 'ocenił', # 'odległość', # ',', # 'bo', # 'zderzył', # 'się', # 'z', # 'jadącą', # 'z', # 'naprzeciwka', # 'ciężarową', # 'scanią', # '.']} ``` ### Evaluation ```python import random from pprint import pprint from datasets import load_dataset, load_metric dataset = load_dataset("clarin-pl/nkjp-pos") references = dataset["test"]["pos_tags"] # generate random predictions predictions = [ [ random.randrange(dataset["train"].features["pos_tags"].feature.num_classes) for _ in range(len(labels)) ] for labels in references ] # transform to original names of labels 
references_named = [ [dataset["train"].features["pos_tags"].feature.names[label] for label in labels] for labels in references ] predictions_named = [ [dataset["train"].features["pos_tags"].feature.names[label] for label in labels] for labels in predictions ] # transform to BILOU scheme references_named = [ [f"U-{label}" if label != "O" else label for label in labels] for labels in references_named ] predictions_named = [ [f"U-{label}" if label != "O" else label for label in labels] for labels in predictions_named ] # utilise seqeval to evaluate seqeval = load_metric("seqeval") seqeval_score = seqeval.compute( predictions=predictions_named, references=references_named, scheme="BILOU", mode="strict", ) pprint(seqeval_score, depth=1) # {'adj': {...}, # 'adja': {...}, # 'adjc': {...}, # 'adjp': {...}, # 'adv': {...}, # 'aglt': {...}, # 'bedzie': {...}, # 'brev': {...}, # 'burk': {...}, # 'comp': {...}, # 'conj': {...}, # 'depr': {...}, # 'fin': {...}, # 'ger': {...}, # 'imps': {...}, # 'impt': {...}, # 'inf': {...}, # 'interj': {...}, # 'interp': {...}, # 'num': {...}, # 'numcol': {...}, # 'overall_accuracy': 0.027855061488566583, # 'overall_f1': 0.027855061488566583, # 'overall_precision': 0.027855061488566583, # 'overall_recall': 0.027855061488566583, # 'pact': {...}, # 'pant': {...}, # 'pcon': {...}, # 'ppas': {...}, # 'ppron12': {...}, # 'ppron3': {...}, # 'praet': {...}, # 'pred': {...}, # 'prep': {...}, # 'qub': {...}, # 'siebie': {...}, # 'subst': {...}, # 'winien': {...}, # 'xxx': {...}} ```
6,162
[ [ -0.045257568359375, -0.03497314453125, 0.01519012451171875, 0.0133056640625, -0.0185546875, -0.0034542083740234375, -0.0178070068359375, -0.00972747802734375, 0.051422119140625, 0.0200958251953125, -0.03369140625, -0.06011962890625, -0.048675537109375, 0.013...
classla/janes_tag
2022-10-25T07:31:04.000Z
[ "task_categories:other", "task_ids:lemmatization", "task_ids:part-of-speech", "language:si", "license:cc-by-sa-4.0", "structure-prediction", "normalization", "tokenization", "region:us" ]
classla
The dataset contains 6273 training samples, 762 validation samples and 749 test samples. Each sample represents a sentence and includes the following features: sentence ID ('sent_id'), list of tokens ('tokens'), list of normalised word forms ('norms'), list of lemmas ('lemmas'), list of Multext-East tags ('xpos_tags), list of morphological features ('feats'), and list of UPOS tags ('upos_tags'), which are encoded as class labels.
null
0
86
2022-03-02T23:29:22
--- language: - si license: - cc-by-sa-4.0 task_categories: - other task_ids: - lemmatization - part-of-speech tags: - structure-prediction - normalization - tokenization --- The dataset contains 6273 training samples, 762 validation samples and 749 test samples. Each sample represents a sentence and includes the following features: sentence ID ('sent\_id'), list of tokens ('tokens'), list of normalised word forms ('norms'), list of lemmas ('lemmas'), list of Multext-East tags ('xpos\_tags), list of morphological features ('feats'), and list of UPOS tags ('upos\_tags'), which are encoded as class labels.
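Since the per-token fields listed above are parallel lists, a sample's consistency can be checked with a few lines of Python (a hypothetical helper sketch, not shipped with the dataset):

```python
# The per-token fields (tokens, norms, lemmas, xpos_tags, feats, upos_tags)
# are aligned: position i in each list describes the i-th token of the
# sentence. A sample is well-formed when all present fields have the same length.
ALIGNED_FIELDS = ("tokens", "norms", "lemmas", "xpos_tags", "feats", "upos_tags")

def is_well_formed(sample: dict) -> bool:
    lengths = {len(sample[field]) for field in ALIGNED_FIELDS if field in sample}
    return len(lengths) <= 1
```

Running such a check over loaded samples is a cheap way to catch truncated or misaligned records before training.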
615
[ [ -0.017333984375, -0.036712646484375, 0.00899505615234375, 0.008087158203125, -0.0086822509765625, -0.013427734375, -0.01508331298828125, -0.0016078948974609375, 0.00991058349609375, 0.047027587890625, -0.041534423828125, -0.05609130859375, -0.038909912109375, ...
classla/reldi_hr
2022-10-25T07:30:56.000Z
[ "task_categories:other", "task_ids:lemmatization", "task_ids:named-entity-recognition", "task_ids:part-of-speech", "language:hr", "license:cc-by-sa-4.0", "structure-prediction", "normalization", "tokenization", "region:us" ]
classla
The dataset contains 6339 training samples, 815 validation samples and 785 test samples. Each sample represents a sentence and includes the following features: sentence ID ('sent_id'), list of tokens ('tokens'), list of lemmas ('lemmas'), list of UPOS tags ('upos_tags'), list of Multext-East tags ('xpos_tags), list of morphological features ('feats'), and list of IOB tags ('iob_tags'), which are encoded as class labels.
null
0
86
2022-03-02T23:29:22
--- language: - hr license: - cc-by-sa-4.0 task_categories: - other task_ids: - lemmatization - named-entity-recognition - part-of-speech tags: - structure-prediction - normalization - tokenization --- This dataset is based on 3,871 Croatian tweets that were segmented into sentences, tokens, and annotated with normalized forms, lemmas, MULTEXT-East tags (XPOS), UPOS tags and morphological features, and named entities. The dataset contains 6339 training samples (sentences), 815 validation samples and 785 test samples. Each sample represents a sentence and includes the following features: sentence ID ('sent\_id'), list of tokens ('tokens'), list of normalised tokens ('norms'), list of lemmas ('lemmas'), list of UPOS tags ('upos\_tags'), list of MULTEXT-East tags ('xpos\_tags), list of morphological features ('feats'), and list of named entity IOB tags ('iob\_tags'), which are encoded as class labels. If you are using this dataset in your research, please cite the following paper: ``` @article{Miličević_Ljubešić_2016, title={Tviterasi, tviteraši or twitteraši? Producing and analysing a normalised dataset of Croatian and Serbian tweets}, volume={4}, url={https://revije.ff.uni-lj.si/slovenscina2/article/view/7007}, DOI={10.4312/slo2.0.2016.2.156-188}, number={2}, journal={Slovenščina 2.0: empirical, applied and interdisciplinary research}, author={Miličević, Maja and Ljubešić, Nikola}, year={2016}, month={Sep.}, pages={156–188} } ```
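As a usage illustration (a hypothetical helper, not part of the dataset), named-entity spans can be recovered from the per-token IOB tags, e.g. `['B-PER', 'I-PER', 'O']` marks a two-token person name:

```python
def iob_to_spans(tags):
    """Collapse IOB tags into (label, start, end) spans, end exclusive."""
    spans, start, label = [], None, None
    for i, tag in enumerate(list(tags) + ["O"]):  # sentinel closes an open span
        if tag == "O" or tag.startswith("B-"):
            if label is not None:
                spans.append((label, start, i))
                start, label = None, None
        if tag.startswith("B-"):
            start, label = i, tag[2:]
        elif tag.startswith("I-") and label is None:
            # tolerate an I- tag without a preceding B- by opening a span
            start, label = i, tag[2:]
    return spans
```

This is a simple sketch: it assumes an I- tag continues whatever entity is currently open, which suffices for well-formed IOB sequences.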
1,469
[ [ -0.002593994140625, -0.037017822265625, 0.0166168212890625, 0.02801513671875, -0.027435302734375, 0.0062255859375, -0.03692626953125, -0.0265655517578125, 0.0301971435546875, 0.04351806640625, -0.046661376953125, -0.06451416015625, -0.03973388671875, 0.02734...
cloverhxy/DADER-source
2023-02-26T08:58:31.000Z
[ "region:us" ]
cloverhxy
null
null
0
86
2022-03-02T23:29:22
Entry not found
15
[ [ -0.0213775634765625, -0.01494598388671875, 0.057159423828125, 0.02880859375, -0.0350341796875, 0.046478271484375, 0.052520751953125, 0.005077362060546875, 0.051361083984375, 0.0170135498046875, -0.05206298828125, -0.01494598388671875, -0.06036376953125, 0.03...
ctu-aic/anli_cs
2021-11-21T21:12:10.000Z
[ "region:us" ]
ctu-aic
TODO: Anli_cs is a Czech translation of the Adversarial NLI dataset
todo
0
86
2022-03-02T23:29:22
Entry not found
15
[ [ -0.02142333984375, -0.01495361328125, 0.05718994140625, 0.0288238525390625, -0.035064697265625, 0.046539306640625, 0.052520751953125, 0.005062103271484375, 0.0513916015625, 0.016998291015625, -0.052093505859375, -0.014984130859375, -0.060394287109375, 0.0379...
ctu-aic/ctkfacts_nli
2022-11-01T06:35:47.000Z
[ "arxiv:2201.11115", "region:us" ]
ctu-aic
CtkFactsNLI is an NLI version of the Czech CTKFacts dataset
@article{DBLP:journals/corr/abs-2201-11115, author = {Jan Drchal and Herbert Ullrich and Martin R{\'{y}}par and Hana Vincourov{\'{a}} and V{\'{a}}clav Moravec}, title = {CsFEVER and CTKFacts: Czech Datasets for Fact Verification}, journal = {CoRR}, volume = {abs/2201.11115}, year = {2022}, url = {https://arxiv.org/abs/2201.11115}, eprinttype = {arXiv}, eprint = {2201.11115}, timestamp = {Tue, 01 Feb 2022 14:59:01 +0100}, biburl = {https://dblp.org/rec/journals/corr/abs-2201-11115.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} }
2
86
2022-03-02T23:29:22
# CTKFacts dataset for Natural Language Inference Czech Natural Language Inference dataset of ~3K *evidence*-*claim* pairs labelled with SUPPORTS, REFUTES or NOT ENOUGH INFO veracity labels. Extracted from a round of fact-checking experiments concluded and described within the CsFEVER and [CTKFacts: Czech Datasets for Fact Verification](https://arxiv.org/abs/2201.11115) paper currently being revised for publication in LREV journal. ## Document retrieval version Can be found at https://huggingface.co/datasets/ctu-aic/ctkfacts
532
[ [ -0.0099029541015625, -0.0633544921875, 0.0440673828125, 0.0180206298828125, -0.0169219970703125, -0.00928497314453125, -0.013580322265625, -0.04730224609375, -0.0032291412353515625, 0.0655517578125, -0.04638671875, -0.060821533203125, -0.03851318359375, 0.03...
davanstrien/iiif_manuscripts_label_ge_50
2022-02-28T18:53:18.000Z
[ "region:us" ]
davanstrien
null
null
0
86
2022-03-02T23:29:22
Entry not found
15
[ [ -0.021392822265625, -0.01494598388671875, 0.05718994140625, 0.028839111328125, -0.0350341796875, 0.046539306640625, 0.052490234375, 0.00507354736328125, 0.051361083984375, 0.01702880859375, -0.052093505859375, -0.01494598388671875, -0.06036376953125, 0.03790...
davidwisdom/reddit-randomness
2021-11-06T23:56:43.000Z
[ "region:us" ]
davidwisdom
null
null
0
86
2022-03-02T23:29:22
# Reddit Randomness Dataset A dataset I created because I was curious about how "random" r/random really is. This data was collected by sending `GET` requests to `https://www.reddit.com/r/random` for a few hours on September 19th, 2021. I scraped a bit of metadata about the subreddits as well. `randomness_12k_clean.csv` reports the random subreddits as they happened and `summary.csv` lists some metadata about each subreddit. # The Data ## `randomness_12k_clean.csv` This file serves as a record of the 12,055 successful results I got from r/random. Each row represents one result. ### Fields * `subreddit`: The name of the subreddit that the scraper received from r/random (`string`) * `response_code`: HTTP response code the scraper received when it sent a `GET` request to /r/random (`int`, always `302`) ## `summary.csv` As the name suggests, this file summarizes `randomness_12k_clean.csv` into the information that I cared about when I analyzed this data. Each row represents one of the 3,679 unique subreddits and includes some stats about the subreddit as well as the number of times it appears in the results. ### Fields * `subreddit`: The name of the subreddit (`string`, unique) * `subscribers`: How many subscribers the subreddit had (`int`, max of `99_886`) * `current_users`: How many users accessed the subreddit in the past 15 minutes (`int`, max of `999`) * `creation_date`: Date that the subreddit was created (`YYYY-MM-DD` or `Error:PrivateSub` or `Error:Banned`) * `date_accessed`: Date that I collected the values in `subscribers` and `current_users` (`YYYY-MM-DD`) * `time_accessed_UTC`: Time that I collected the values in `subscribers` and `current_users`, reported in UTC+0 (`HH:MM:SS`) * `appearances`: How many times the subreddit shows up in `randomness_12k_clean.csv` (`int`, max of `9`) # Missing Values and Quirks In the `summary.csv` file, there are three missing values. 
After I collected the number of subscribers and the number of current users, I went back about a week later to collect the creation date of each subreddit. In that week, three subreddits had been banned or taken private. I filled in the values with a descriptive string. * SomethingWasWrong (`Error:PrivateSub`) * HannahowoOnlyfans (`Error:Banned`) * JanetGuzman (`Error:Banned`) I think there are a few NSFW subreddits in the results, even though I only queried r/random and not r/randnsfw. As a simple example, searching the data for "nsfw" shows that I got the subreddit r/nsfwanimegifs twice. # License This dataset is made available under the Open Database License: http://opendatacommons.org/licenses/odbl/1.0/. Any rights in individual contents of the database are licensed under the Database Contents License: http://opendatacommons.org/licenses/dbcl/1.0/
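The relationship between the two files can be sketched with the standard library: the `appearances` column in `summary.csv` is just a count of each subreddit's rows in `randomness_12k_clean.csv`. The sample rows below are invented for illustration; only the column layout comes from the field descriptions above.

```python
import csv
import io
from collections import Counter

# Hypothetical sample rows in the layout described above; the real file
# has 12,055 rows, one per successful GET to r/random.
raw = io.StringIO(
    "subreddit,response_code\n"
    "nsfwanimegifs,302\n"
    "AskReddit,302\n"
    "nsfwanimegifs,302\n"
)

# Count how many times each subreddit was served by r/random.
appearances = Counter(row["subreddit"] for row in csv.DictReader(raw))
print(appearances["nsfwanimegifs"])  # -> 2
```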
2,782
[ [ -0.0472412109375, -0.048553466796875, 0.0281982421875, 0.0242767333984375, -0.041534423828125, -0.004673004150390625, -0.0188446044921875, -0.0167999267578125, 0.048553466796875, 0.0247344970703125, -0.06329345703125, -0.052825927734375, -0.035614013671875, ...
ebrigham/labels
2022-03-15T15:08:28.000Z
[ "region:us" ]
ebrigham
AG is a collection of more than 1 million news articles. News articles have been gathered from more than 2000 news sources by ComeToMyHead in more than 1 year of activity. ComeToMyHead is an academic news search engine which has been running since July, 2004. The dataset is provided by the academic community for research purposes in data mining (clustering, classification, etc), information retrieval (ranking, search, etc), xml, data compression, data streaming, and any other non-commercial activity. For more information, please refer to the link http://www.di.unipi.it/~gulli/AG_corpus_of_news_articles.html . The AG's news topic classification dataset is constructed by Xiang Zhang (xiang.zhang@nyu.edu) from the dataset above. It is used as a text classification benchmark in the following paper: Xiang Zhang, Junbo Zhao, Yann LeCun. Character-level Convolutional Networks for Text Classification. Advances in Neural Information Processing Systems 28 (NIPS 2015).
@inproceedings{Zhang2015CharacterlevelCN, title={Character-level Convolutional Networks for Text Classification}, author={Xiang Zhang and Junbo Jake Zhao and Yann LeCun}, booktitle={NIPS}, year={2015} }
0
86
2022-03-02T23:29:22
Entry not found
15
[ [ -0.021392822265625, -0.01494598388671875, 0.05718994140625, 0.028839111328125, -0.0350341796875, 0.046539306640625, 0.052490234375, 0.00507354736328125, 0.051361083984375, 0.01702880859375, -0.052093505859375, -0.01494598388671875, -0.06036376953125, 0.03790...
emrecan/stsb-mt-turkish
2022-10-25T10:55:24.000Z
[ "task_categories:text-classification", "task_ids:semantic-similarity-scoring", "task_ids:text-scoring", "language_creators:machine-generated", "size_categories:1K<n<10K", "source_datasets:extended|other-sts-b", "language:tr", "region:us" ]
emrecan
null
null
3
86
2022-03-02T23:29:22
--- language_creators: - machine-generated language: - tr size_categories: - 1K<n<10K source_datasets: - extended|other-sts-b task_categories: - text-classification task_ids: - semantic-similarity-scoring - text-scoring --- # STSb Turkish Semantic textual similarity dataset for the Turkish language. It is a machine translation (Azure) of the [STSb English](http://ixa2.si.ehu.eus/stswiki/index.php/STSbenchmark) dataset. This dataset is not reviewed by expert human translators. Uploaded from [this repository](https://github.com/emrecncelik/sts-benchmark-tr).
566
[ [ -0.019073486328125, -0.046051025390625, 0.025421142578125, 0.005889892578125, -0.056793212890625, 0.0146942138671875, -0.006961822509765625, -0.02935791015625, 0.01241302490234375, 0.046295166015625, -0.058929443359375, -0.069091796875, -0.044342041015625, 0...
gigant/african_accented_french
2022-10-24T17:39:03.000Z
[ "task_categories:automatic-speech-recognition", "language:fr", "license:cc", "region:us" ]
gigant
\ This corpus consists of approximately 22 hours of speech recordings. Transcripts are provided for all the recordings. The corpus can be divided into 3 parts: 1. Yaounde Collected by a team from the U.S. Military Academy's Center for Technology Enhanced Language Learning (CTELL) in 2003 in Yaoundé, Cameroon. It has recordings from 84 speakers, 48 male and 36 female. 2. CA16 This part was collected by a RDECOM Science Team who participated in the United Nations exercise Central Accord 16 (CA16) in Libreville, Gabon in June 2016. The Science Team included DARPA's Dr. Boyan Onyshkevich and Dr. Aaron Lawson (SRI International), as well as RDECOM scientists. It has recordings from 125 speakers from Cameroon, Chad, Congo and Gabon. 3. Niger This part was collected from 23 speakers in Niamey, Niger, Oct. 26-30 2015. These speakers were students in a course for officers and sergeants presented by Army trainers assigned to U.S. Army Africa. The data was collected by RDECOM Science & Technology Advisors Major Eddie Strimel and Mr. Bill Bergen.
\
3
86
2022-03-02T23:29:22
--- language: - fr license: cc size_categories: fr: - 10K<n<100K task_categories: - automatic-speech-recognition task_ids: [] pretty_name: African Accented French --- ## Dataset Description - **Homepage:** http://www.openslr.org/57/ ### Dataset Summary This corpus consists of approximately 22 hours of speech recordings. Transcripts are provided for all the recordings. The corpus can be divided into 3 parts: 1. Yaounde Collected by a team from the U.S. Military Academy's Center for Technology Enhanced Language Learning (CTELL) in 2003 in Yaoundé, Cameroon. It has recordings from 84 speakers, 48 male and 36 female. 2. CA16 This part was collected by a RDECOM Science Team who participated in the United Nations exercise Central Accord 16 (CA16) in Libreville, Gabon in June 2016. The Science Team included DARPA's Dr. Boyan Onyshkevich and Dr. Aaron Lawson (SRI International), as well as RDECOM scientists. It has recordings from 125 speakers from Cameroon, Chad, Congo and Gabon. 3. Niger This part was collected from 23 speakers in Niamey, Niger, Oct. 26-30 2015. These speakers were students in a course for officers and sergeants presented by Army trainers assigned to U.S. Army Africa. The data was collected by RDECOM Science & Technology Advisors Major Eddie Strimel and Mr. Bill Bergen. ### Languages French ## Dataset Structure ### Data Instances A typical data point comprises the path to the audio file, called audio and its sentence. ### Data Fields - audio: A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column: `dataset[0]["audio"]` the audio file is automatically decoded and resampled to `dataset.features["audio"].sampling_rate`. Decoding and resampling of a large number of audio files might take a significant amount of time. 
Thus it is important to first query the sample index before the `"audio"` column, *i.e.* `dataset[0]["audio"]` should **always** be preferred over `dataset["audio"][0]`. - sentence: The sentence the user was prompted to speak ### Data Splits The speech material has been subdivided into portions for train and test. The train split consists of 9401 audio clips and the related sentences. The test split consists of 1985 audio clips and the related sentences. ### Contributions [@gigant](https://huggingface.co/gigant) added this dataset.
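The indexing advice above can be illustrated with a toy stand-in for a lazily decoded audio column (the class below is purely illustrative and is not the real `datasets` implementation): indexing the row first decodes a single file, while indexing the column first forces every file to be decoded.

```python
class ToyAudioColumn:
    """Illustrative stand-in: "decoding" happens on item access."""

    def __init__(self, paths):
        self.paths = paths
        self.decoded = 0  # count how many files we have "decoded"

    def _decode(self, path):
        self.decoded += 1
        return {"path": path, "array": [0.0], "sampling_rate": 16_000}

    def __getitem__(self, i):
        if isinstance(i, int):
            return self._decode(self.paths[i])          # one row: one decode
        return [self._decode(p) for p in self.paths[i]]  # slice: decode all

col = ToyAudioColumn([f"clip_{n}.wav" for n in range(1000)])

first = col[0]               # dataset[0]["audio"]-style access: 1 decode
assert col.decoded == 1

_ = col[:][0]                # dataset["audio"][0]-style access: 1000 more decodes
assert col.decoded == 1001
```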
2,412
[ [ -0.04864501953125, -0.0545654296875, 0.005199432373046875, 0.0173492431640625, 0.0019197463989257812, -0.003993988037109375, -0.03753662109375, -0.031402587890625, 0.019195556640625, 0.049346923828125, -0.0433349609375, -0.046142578125, -0.04742431640625, 0....
huggingface/transformers-metadata
2023-11-02T20:10:54.000Z
[ "region:us" ]
huggingface
null
null
6
86
2022-03-02T23:29:22
Entry not found
15
[ [ -0.021392822265625, -0.01494598388671875, 0.05718994140625, 0.028839111328125, -0.0350341796875, 0.046539306640625, 0.052490234375, 0.00507354736328125, 0.051361083984375, 0.01702880859375, -0.052093505859375, -0.01494598388671875, -0.06036376953125, 0.03790...
ctheodoris/Genecorpus-30M
2023-10-10T01:37:03.000Z
[ "license:apache-2.0", "region:us" ]
ctheodoris
null
null
34
86
2022-03-12T21:21:46
--- license: apache-2.0 --- # Dataset Card for Genecorpus-30M ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks](#supported-tasks) - [Species](#species) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Citation Information](#citation-information) <!--- - [Licensing Information](#licensing-information) - [Contributions](#contributions) ---> ## Dataset Description <!--- **Paper:** ---> - **Point of Contact:** christina.theodoris@gladstone.ucsf.edu ### Dataset Summary We assembled a large-scale pretraining corpus, Genecorpus-30M, comprised of ~30 million human single cell transcriptomes from a broad range of tissues from publicly available data. This corpus was used for pretraining [Geneformer](https://huggingface.co/ctheodoris/Geneformer), a pretrained transformer model that enables context-aware predictions in settings with limited data in network biology. See [our manuscript](https://rdcu.be/ddrx0) for details. ### Supported Tasks This corpus was used for pretraining [Geneformer](https://rdcu.be/ddrx0) and is compatible with pretraining or fine-tuning Geneformer or similar models. 
### Species Homo sapiens ## Dataset Structure ### Data Instances Genecorpus-30M is provided as tokenized data in the Huggingface Datasets structure, which is based on the Apache Arrow format. Each example within the dataset is composed of the rank value encoding for a single cell within the corpus. Rank value encodings provide a nonparametric representation of each single cell’s transcriptome, ranking genes by their expression within that cell normalized by their expression across the entire Genecorpus-30M. This method takes advantage of the many observations of each gene’s expression across Genecorpus-30M to prioritize genes that distinguish cell state. Specifically, this method will deprioritize ubiquitously highly-expressed housekeeping genes by normalizing them to a lower rank. Conversely, genes such as transcription factors that may be lowly expressed when they are expressed but highly distinguish cell state will move to a higher rank within the encoding. Furthermore, this rank-based approach may be more robust against technical artifacts that may systematically bias the absolute transcript counts value while the overall relative ranking of genes within each cell remains more stable. To accomplish this, we first calculated the nonzero median value of expression of each detected gene across all cells from the entire Genecorpus-30M. We aggregated the transcript count distribution for each gene, normalizing the gene transcript counts in each cell by the total transcript count of that cell to account for varying sequencing depth. We then normalized the genes in each single cell transcriptome by that gene’s nonzero median value of expression across Genecorpus-30M and ordered the genes by the rank of their normalized expression in that specific cell. 
Of note, we opted to use the nonzero median value of expression rather than include zeros in the distribution so as not to weight the value by tissue representation within Genecorpus-30M, assuming that a representative range of transcript values would be observed within the cells in which each gene was detected. The rank value encodings for each single cell transcriptome were then tokenized based on a total vocabulary of 25,424 protein-coding or miRNA genes detected within Genecorpus-30M. The token dictionary mapping each token ID to special tokens (pad and mask) or Ensembl IDs for each gene is included within the repository as a pickle file (token_dictionary.pkl). ### Data Fields - `input_ids`: rank value encoding for an example cell - `lengths`: length of rank value encoding for that example cell ### Data Splits The dataset does not contain any predefined splits. ## Dataset Creation ### Curation Rationale Mapping the gene regulatory networks that drive disease progression enables screening for molecules that correct the network by normalizing core regulatory elements, rather than targeting peripheral downstream effectors that may not be disease modifying. However, mapping the gene network architecture requires large amounts of transcriptomic data to learn the connections between genes, which impedes network-correcting drug discovery in settings with limited data, including rare diseases and diseases affecting clinically inaccessible tissues. Although data remains limited in these settings, recent advances in sequencing technologies have driven a rapid expansion in the amount of transcriptomic data available from human tissues more broadly. Furthermore, single cell technologies have facilitated the observation of transcriptomic states without averaging genes’ expression across multiple cells, potentially providing more precise data for inference of network interactions, especially in diseases driven by dysregulation of multiple cell types. 
Recently, the concept of transfer learning has revolutionized fields such as natural language understanding and computer vision by leveraging deep learning models pretrained on large-scale general datasets that can then be fine-tuned towards a vast array of downstream tasks with limited task-specific data that would be insufficient to yield meaningful predictions when used in isolation. We therefore assembled Genecorpus-30M to allow the large-scale pretraining of [Geneformer](https://huggingface.co/ctheodoris/Geneformer), a pretrained transformer model that enables context-aware predictions in settings with limited data in network biology. ### Source Data #### Initial Data Collection and Normalization Source data included 29.9 million (29,900,531) human single cell transcriptomes from a broad range of tissues from 561 publicly available datasets from original studies cited in the Methods of Theodoris et al, Nature 2023. Datasets were filtered to retain cells with total read counts within three standard deviations of the mean within that dataset and mitochondrial reads within three standard deviations of the mean within that dataset. Ensembl-annotated protein-coding and miRNA genes were used for downstream analysis. Cells with less than seven detected Ensembl-annotated protein-coding or miRNA genes were excluded as the 15% masking used for the pretraining learning objective would not reliably mask a gene in cells with fewer detected genes. Ultimately, 27.4 million (27,406,217) cells passed the defined quality filters. Cells were then represented as rank value encodings as discussed above in [Data Instances](#data-instances). #### Who are the source data producers? 
Publicly available datasets containing raw counts were collected from National Center for Biotechnology Information (NCBI) Gene Expression Omnibus (GEO), NCBI Sequence Read Archive (SRA), Human Cell Atlas, European Molecular Biology Laboratory-European Bioinformatics Institute (EMBL-EBI) Single Cell Expression Atlas, Broad Institute Single Cell Portal, Brotman Baty Institute (BBI)-Allen Single Cell Atlases, Tumor Immune Single-cell Hub (TISCH) (excluding malignant cells), Panglao Database, 10x Genomics, University of California, Santa Cruz Cell Browser, European Genome-phenome Archive, Synapse, Riken, Zenodo, National Institutes of Health (NIH) Figshare Archive, NCBI dbGap, Refine.bio, China National GeneBank Sequence Archive, Mendeley Data, and individual communication with authors of the original studies as cited in the Methods of Theodoris et al, Nature 2023. ### Annotations #### Annotation process Genecorpus-30M does not contain annotations. #### Who are the annotators? N/A ### Personal and Sensitive Information There is no personal or sensitive information included in the dataset. The dataset is composed of rank value encodings, so there are no traceable sequencing reads included. ## Considerations for Using the Data ### Social Impact of Dataset Genecorpus-30M enabled the large-scale pretraining of [Geneformer](https://huggingface.co/ctheodoris/Geneformer), a foundation model that enables context-aware predictions in settings with limited data in network biology. Within our publication, we demonstrated that during pretraining, Geneformer gained a fundamental understanding of network dynamics, encoding network hierarchy in the model’s attention weights in a completely self-supervised manner. Fine-tuning Geneformer towards a diverse panel of downstream tasks relevant to chromatin and network dynamics using limited task-specific data demonstrated that Geneformer consistently boosted predictive accuracy. 
Applied to disease modeling with limited patient data, Geneformer identified candidate therapeutic targets for cardiomyopathy. Overall, Geneformer represents a pretrained foundation model from which fine-tuning towards a broad range of downstream applications can be pursued to accelerate discovery of key network regulators and candidate therapeutic targets. ### Discussion of Biases We excluded cells with high mutational burdens (e.g. malignant cells and immortalized cell lines) that could lead to substantial network rewiring without companion genome sequencing to facilitate interpretation. We only included droplet-based sequencing platforms to assure expression value unit comparability. Although we assembled the dataset to represent as diverse a set of human tissues and cell types as possible, particular tissues and cell types are not represented due to unavailability of public data at the time of dataset assembly. In our manuscript, we demonstrated that pretraining with larger and more diverse corpuses consistently improved Geneformer’s predictive power, consistent with observations that large-scale pretraining allows training of deeper models that ultimately have greater predictive potential in fields including NLU, computer vision, and mathematical problem-solving. Additionally, exposure to hundreds of experimental datasets during pretraining also appeared to promote robustness to batch-dependent technical artifacts and individual variability that commonly impact single cell analyses in biology. These findings suggest that as the amount of publicly available transcriptomic data continues to expand, future models pretrained on even larger-scale corpuses may open opportunities to achieve meaningful predictions in even more elusive tasks with increasingly limited task-specific data. ### Other Known Limitations Genecorpus-30M was intended to be used for self-supervised pretraining. 
To achieve the best possible predictions in downstream tasks, Geneformer should be fine-tuned with labeled datasets relevant to the task at hand. ## Additional Information ### Dataset Curators Christina Theodoris, MD, PhD ### Citation Information Theodoris CV*, Xiao L, Chopra A, Chaffin MD, Al Sayed ZR, Hill MC, Mantineo H, Brydon EM, Zeng Z, Liu XS, Ellinor PT*. Transfer learning enables predictions in network biology. Nature. 2023 May 31; Epub ahead of print. (*co-corresponding authors) <!--- ### Licensing Information [More Information Needed] ### Contributions Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset. --->
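The rank value encoding described under Data Instances can be sketched on a toy count matrix. The gene names and counts below are invented for illustration (the real pipeline runs over ~27M cells and 25,424 genes), but the three steps follow the card's description: depth-normalize each cell, normalize by each gene's nonzero median across the corpus, then rank genes by the scaled value.

```python
from statistics import median

# Toy corpus: rows = cells, columns = genes (raw transcript counts).
genes = ["HK1", "TF_A", "TF_B"]  # a "housekeeping" gene and two factors
counts = [
    [90, 5, 5],
    [80, 0, 20],
    [85, 10, 5],
]

# 1) Depth-normalize each cell by its total transcript count.
norm = [[c / sum(cell) for c in cell] for cell in counts]

# 2) Nonzero median of each gene's normalized expression across the corpus.
gene_median = [
    median([cell[g] for cell in norm if cell[g] > 0])
    for g in range(len(genes))
]

# 3) Scale by the gene medians and rank detected genes, highest first.
def rank_value_encoding(cell):
    scaled = [cell[g] / gene_median[g] for g in range(len(genes))]
    order = sorted(range(len(genes)), key=lambda g: scaled[g], reverse=True)
    return [genes[g] for g in order if cell[g] > 0]

# The ubiquitously high housekeeping gene drops below the cell-distinguishing
# factor, matching the intuition in the card.
print(rank_value_encoding(norm[1]))  # -> ['TF_B', 'HK1']
```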
11,902
[ [ -0.0273895263671875, -0.0192718505859375, -0.0006747245788574219, 0.0029735565185546875, -0.01357269287109375, 0.0224761962890625, 0.0015850067138671875, -0.0088348388671875, 0.046112060546875, 0.0438232421875, -0.04833984375, -0.05218505859375, -0.0409240722656...
hackathon-pln-es/MESD
2022-03-25T18:15:07.000Z
[ "license:cc-by-4.0", "region:us" ]
hackathon-pln-es
null
null
6
86
2022-03-19T18:39:32
--- license: cc-by-4.0 Duville, Mathilde Marie; Alonso-Valerdi, Luz Maria; Ibarra, David (2022), “Mexican Emotional Speech Database (MESD)”, Mendeley Data, V5, doi: 10.17632/cy34mh68j9.5 --- # Dataset Card for MESD ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-instances) - [Data Splits](#data-instances) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) ## Dataset Description - **Homepage:** https://data.mendeley.com/datasets/cy34mh68j9/5 - **Repository:** [Needs More Information] - **Paper:** [Needs More Information] - **Leaderboard:** [Needs More Information] - **Point of Contact:** [Needs More Information] ### Dataset Summary Contains the MESD data processed for fine-tuning a 'Wav2Vec' model in the hackathon organized by 'Somos NLP'. Reference example: https://colab.research.google.com/github/huggingface/notebooks/blob/master/examples/audio_classification.ipynb We accessed the MESD database to obtain the examples. 
A brief description from the MESD database authors: "The Mexican Emotional Speech Database (MESD) provides single-word utterances for the affective prosodies of anger, disgust, fear, happiness, neutral and sadness with Mexican cultural shaping. The MESD has been uttered by non-professional adult and child actors: 3 female, 2 male and 6 child voices are available. The words in the emotional and neutral utterances come from two corpora: (corpus A) composed of nouns and adjectives that are repeated across emotional prosodies and voice types (female, male, child), and (corpus B) consisting of words controlled for age of acquisition, frequency of use, familiarity, concreteness, valence, arousal and discrete-emotion dimensionality ratings. The audio recordings were made in a professional studio with the following materials: (1) a Sennheiser e835 microphone with a flat frequency response (100 Hz to 10 kHz), (2) a Focusrite Scarlett 2i4 audio interface connected to the microphone with an XLR cable and to the computer, and (3) the digital audio workstation REAPER (Rapid Environment for Audio Production, Engineering, and Recording). The audio files were stored as 24-bit sequences with a sampling rate of 48000 Hz. The amplitude of the acoustic waveforms was rescaled between -1 and 1. Two versions with reduced speaker naturalness were created from the human emotional expressions for the female voices of corpus B. Specifically, naturalness was progressively reduced from the human voices to level 1 and then to level 2. In particular, duration and mean pitch were edited on stressed syllables to reduce the difference between stressed and unstressed syllables. Over whole utterances, the F2/F1 and F3/F1 ratios were reduced by editing the F2 and F3 frequencies. The intensity of harmonics 1 and 4 was also reduced. 
" ### Supported Tasks and Leaderboards [Needs More Information] ### Languages Spanish ## Dataset Structure ### Data Instances [Needs More Information] ### Data Fields Origen: text indicating whether the instance belongs to the original MESD dataset or to the 'Speaker-embedded naturalness-reduced female voices' cases, where the authors synthetically generated new data by transforming some of the original audio instances. Palabra: text of the word that was read. Emoción: text of the emotion it represents. Values: 'Enojo', 'Felicidad', 'Miedo', 'Neutral', 'Disgusto', 'Tristeza'. InfoActor: text indicating whether the voice belongs to 'Niño' (child), 'Hombre' (man) or 'Mujer' (woman). AudioArray: audio array, resampled to 16 kHz. ### Data Splits Train: 891 examples, a mix of MESD cases and 'Speaker-embedded naturalness-reduced female voices'. Validation: 130 examples, all MESD cases. Test: 129 examples, all MESD cases. ## Dataset Creation ### Curation Rationale Merge the three data subsets and process them for the fine-tuning task, according to the input expected by the Wav2Vec model. ### Source Data #### Initial Data Collection and Normalization Access to the raw data: https://data.mendeley.com/datasets/cy34mh68j9/5 Conversion to audio array and resampling to 16 kHz. #### Who are the source language producers? Duville, Mathilde Marie; Alonso-Valerdi, Luz Maria; Ibarra, David (2022), “Mexican Emotional Speech Database (MESD)”, Mendeley Data, V5, doi: 10.17632/cy34mh68j9.5 ### Annotations #### Annotation process [Needs More Information] #### Who are the annotators? 
[Needs More Information] ### Personal and Sensitive Information [Needs More Information] ## Considerations for Using the Data ### Social Impact of Dataset [Needs More Information] ### Discussion of Biases [Needs More Information] ### Other Known Limitations [Needs More Information] ## Additional Information ### Dataset Curators [Needs More Information] ### Licensing Information Creative Commons, [CC-BY-4.0](https://creativecommons.org/licenses/by/4.0/) ### Citation Information ``` Duville, Mathilde Marie; Alonso-Valerdi, Luz Maria; Ibarra, David (2022), “Mexican Emotional Speech Database (MESD)”, Mendeley Data, V5, doi: 10.17632/cy34mh68j9.5 ```
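The card mentions converting the 48 kHz recordings to audio arrays resampled to 16 kHz. As a rough illustration only — a real pipeline would use a proper resampler such as librosa, torchaudio or scipy — a naive linear-interpolation resampler can be sketched like this:

```python
def resample_linear(signal, sr_in, sr_out):
    """Naive linear-interpolation resampler (illustration only;
    no anti-aliasing filter, so not suitable for production use)."""
    if sr_in == sr_out:
        return list(signal)
    n_out = int(len(signal) * sr_out / sr_in)
    out = []
    for i in range(n_out):
        pos = i * sr_in / sr_out          # fractional index in the input
        lo = int(pos)
        hi = min(lo + 1, len(signal) - 1)
        frac = pos - lo
        out.append(signal[lo] * (1 - frac) + signal[hi] * frac)
    return out

# 48 kHz -> 16 kHz keeps one sample for every three input samples.
clip_48k = [0.0, 0.3, 0.6, 0.9, 0.6, 0.3]
clip_16k = resample_linear(clip_48k, 48_000, 16_000)
print(len(clip_16k))  # -> 2
```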
6,276
[ [ -0.0457763671875, -0.049835205078125, -0.0036525726318359375, 0.03466796875, -0.01436614990234375, 0.0171966552734375, -0.019195556640625, -0.0382080078125, 0.050079345703125, 0.026580810546875, -0.07135009765625, -0.06658935546875, -0.029266357421875, 0.027...
tner/conll2003
2022-07-18T00:43:28.000Z
[ "task_categories:token-classification", "task_ids:named-entity-recognition", "multilinguality:monolingual", "size_categories:10K<n<100K", "language:en", "license:other", "region:us" ]
tner
[CoNLL 2003 NER dataset](https://aclanthology.org/W03-0419/)
@inproceedings{tjong-kim-sang-de-meulder-2003-introduction, title = "Introduction to the {C}o{NLL}-2003 Shared Task: Language-Independent Named Entity Recognition", author = "Tjong Kim Sang, Erik F. and De Meulder, Fien", booktitle = "Proceedings of the Seventh Conference on Natural Language Learning at {HLT}-{NAACL} 2003", year = "2003", url = "https://www.aclweb.org/anthology/W03-0419", pages = "142--147", }
1
86
2022-07-16T10:39:09
--- language: - en license: - other multilinguality: - monolingual size_categories: - 10K<n<100K task_categories: - token-classification task_ids: - named-entity-recognition pretty_name: CoNLL-2003 --- # Dataset Card for "tner/conll2003" ## Dataset Description - **Repository:** [T-NER](https://github.com/asahi417/tner) - **Paper:** [https://www.aclweb.org/anthology/W03-0419/](https://www.aclweb.org/anthology/W03-0419/) - **Dataset:** CoNLL 2003 - **Domain:** News - **Number of Entity Types:** 4 ### Dataset Summary CoNLL-2003 NER dataset formatted as part of the [TNER](https://github.com/asahi417/tner) project. - Entity Types: `ORG`, `PER`, `LOC`, `MISC` ## Dataset Structure ### Data Instances An example of `train` looks as follows. ``` { 'tokens': ['SOCCER', '-', 'JAPAN', 'GET', 'LUCKY', 'WIN', ',', 'CHINA', 'IN', 'SURPRISE', 'DEFEAT', '.'], 'tags': [0, 0, 5, 0, 0, 0, 0, 3, 0, 0, 0, 0] } ``` ### Label ID The label2id dictionary can be found [here](https://huggingface.co/datasets/tner/conll2003/raw/main/dataset/label.json). ```python { "O": 0, "B-ORG": 1, "B-MISC": 2, "B-PER": 3, "I-PER": 4, "B-LOC": 5, "I-ORG": 6, "I-MISC": 7, "I-LOC": 8 } ``` ### Data Splits | name |train|validation|test| |---------|----:|---------:|---:| |conll2003|14041| 3250|3453| ### Licensing Information From the [CoNLL2003 shared task](https://www.clips.uantwerpen.be/conll2003/ner/) page: > The English data is a collection of news wire articles from the Reuters Corpus. The annotation has been done by people of the University of Antwerp. Because of copyright reasons we only make available the annotations. In order to build the complete data sets you will need access to the Reuters Corpus. It can be obtained for research purposes without any charge from NIST. 
The copyrights are defined below, from the [Reuters Corpus page](https://trec.nist.gov/data/reuters/reuters.html): > The stories in the Reuters Corpus are under the copyright of Reuters Ltd and/or Thomson Reuters, and their use is governed by the following agreements: > > [Organizational agreement](https://trec.nist.gov/data/reuters/org_appl_reuters_v4.html) > > This agreement must be signed by the person responsible for the data at your organization, and sent to NIST. > > [Individual agreement](https://trec.nist.gov/data/reuters/ind_appl_reuters_v4.html) > > This agreement must be signed by all researchers using the Reuters Corpus at your organization, and kept on file at your organization. ### Citation Information ``` @inproceedings{tjong-kim-sang-de-meulder-2003-introduction, title = "Introduction to the {C}o{NLL}-2003 Shared Task: Language-Independent Named Entity Recognition", author = "Tjong Kim Sang, Erik F. and De Meulder, Fien", booktitle = "Proceedings of the Seventh Conference on Natural Language Learning at {HLT}-{NAACL} 2003", year = "2003", url = "https://www.aclweb.org/anthology/W03-0419", pages = "142--147", } ```
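The card's `label2id` dictionary can be inverted to decode tag ids; a minimal sketch (plain Python, no `datasets` dependency — the dictionary and the example headline are copied from the card) that maps the integer tags back to BIO labels:

```python
# Invert the label2id mapping from the tner/conll2003 card.
label2id = {
    "O": 0, "B-ORG": 1, "B-MISC": 2, "B-PER": 3, "I-PER": 4,
    "B-LOC": 5, "I-ORG": 6, "I-MISC": 7, "I-LOC": 8,
}
id2label = {i: label for label, i in label2id.items()}

tokens = ['SOCCER', '-', 'JAPAN', 'GET', 'LUCKY', 'WIN', ',',
          'CHINA', 'IN', 'SURPRISE', 'DEFEAT', '.']
tags = [0, 0, 5, 0, 0, 0, 0, 3, 0, 0, 0, 0]

# Pair each token with its decoded BIO label.
decoded = [(tok, id2label[t]) for tok, t in zip(tokens, tags)]
print(decoded[2])  # ('JAPAN', 'B-LOC')
```

The same inversion applies to any of the TNER datasets, since each card publishes its own `label.json`.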
2,973
keremberke/football-object-detection
2023-01-04T20:39:21.000Z
[ "task_categories:object-detection", "roboflow", "region:us" ]
keremberke
null
@misc{ football-player-detection-kucab_dataset, title = { Football-Player-Detection Dataset }, type = { Open Source Dataset }, author = { Augmented Startups }, howpublished = { \\url{ https://universe.roboflow.com/augmented-startups/football-player-detection-kucab } }, url = { https://universe.roboflow.com/augmented-startups/football-player-detection-kucab }, journal = { Roboflow Universe }, publisher = { Roboflow }, year = { 2022 }, month = { nov }, note = { visited on 2022-12-29 }, }
5
86
2022-12-28T20:09:47
--- task_categories: - object-detection tags: - roboflow --- ### Roboflow Dataset Page [https://universe.roboflow.com/augmented-startups/football-player-detection-kucab](https://universe.roboflow.com/augmented-startups/football-player-detection-kucab?ref=roboflow2huggingface) ### Citation ``` @misc{ football-player-detection-kucab_dataset, title = { Football-Player-Detection Dataset }, type = { Open Source Dataset }, author = { Augmented Startups }, howpublished = { \url{ https://universe.roboflow.com/augmented-startups/football-player-detection-kucab } }, url = { https://universe.roboflow.com/augmented-startups/football-player-detection-kucab }, journal = { Roboflow Universe }, publisher = { Roboflow }, year = { 2022 }, month = { nov }, note = { visited on 2022-12-29 }, } ``` ### License CC BY 4.0 ### Dataset Summary This dataset was exported via roboflow.com on November 21, 2022 at 6:50 PM GMT Roboflow is an end-to-end computer vision platform that helps you * collaborate with your team on computer vision projects * collect & organize images * understand unstructured image data * annotate, and create datasets * export, train, and deploy computer vision models * use active learning to improve your dataset over time It includes 1232 images. Track-players-and-football are annotated in COCO format. The following pre-processing was applied to each image: * Auto-orientation of pixel data (with EXIF-orientation stripping) No image augmentation techniques were applied.
1,539
Francesco/cable-damage
2023-03-30T09:29:47.000Z
[ "task_categories:object-detection", "annotations_creators:crowdsourced", "language_creators:found", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:original", "language:en", "license:cc", "rf100", "region:us" ]
Francesco
null
null
2
86
2023-03-30T09:29:23
--- dataset_info: features: - name: image_id dtype: int64 - name: image dtype: image - name: width dtype: int32 - name: height dtype: int32 - name: objects sequence: - name: id dtype: int64 - name: area dtype: int64 - name: bbox sequence: float32 length: 4 - name: category dtype: class_label: names: '0': cable-damage '1': break '2': thunderbolt annotations_creators: - crowdsourced language_creators: - found language: - en license: - cc multilinguality: - monolingual size_categories: - 1K<n<10K source_datasets: - original task_categories: - object-detection task_ids: [] pretty_name: cable-damage tags: - rf100 --- # Dataset Card for cable-damage **The original COCO dataset is stored at `dataset.tar.gz`** ## Dataset Description - **Homepage:** https://universe.roboflow.com/object-detection/cable-damage - **Point of Contact:** francesco.zuppichini@gmail.com ### Dataset Summary cable-damage ### Supported Tasks and Leaderboards - `object-detection`: The dataset can be used to train a model for Object Detection. ### Languages English ## Dataset Structure ### Data Instances A data point comprises an image and its object annotations. ``` { 'image_id': 15, 'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=640x640 at 0x2373B065C18>, 'width': 964043, 'height': 640, 'objects': { 'id': [114, 115, 116, 117], 'area': [3796, 1596, 152768, 81002], 'bbox': [ [302.0, 109.0, 73.0, 52.0], [810.0, 100.0, 57.0, 28.0], [160.0, 31.0, 248.0, 616.0], [741.0, 68.0, 202.0, 401.0] ], 'category': [4, 4, 0, 0] } } ``` ### Data Fields - `image_id`: the image id - `image`: `PIL.Image.Image` object containing the image. Note that when accessing the image column: `dataset[0]["image"]` the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. 
Thus it is important to first query the sample index before the `"image"` column, *i.e.* `dataset[0]["image"]` should **always** be preferred over `dataset["image"][0]` - `width`: the image width - `height`: the image height - `objects`: a dictionary containing bounding box metadata for the objects present on the image - `id`: the annotation id - `area`: the area of the bounding box - `bbox`: the object's bounding box (in the [coco](https://albumentations.ai/docs/getting_started/bounding_boxes_augmentation/#coco) format) - `category`: the object's category. #### Who are the annotators? Annotators are Roboflow users ## Additional Information ### Licensing Information See original homepage https://universe.roboflow.com/object-detection/cable-damage ### Citation Information ``` @misc{ cable-damage, title = { cable damage Dataset }, type = { Open Source Dataset }, author = { Roboflow 100 }, howpublished = { \url{ https://universe.roboflow.com/object-detection/cable-damage } }, url = { https://universe.roboflow.com/object-detection/cable-damage }, journal = { Roboflow Universe }, publisher = { Roboflow }, year = { 2022 }, month = { nov }, note = { visited on 2023-03-29 }, } ``` ### Contributions Thanks to [@mariosasko](https://github.com/mariosasko) for adding this dataset.
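The `bbox` entries in this card follow the COCO `[x, y, width, height]` convention linked above; a minimal sketch (the helper name is illustrative, not part of the dataset or its tooling) converting one of the example boxes to `[x_min, y_min, x_max, y_max]` corner coordinates:

```python
def coco_to_corners(bbox):
    """Convert a COCO-style [x, y, width, height] box to
    [x_min, y_min, x_max, y_max] corner coordinates."""
    x, y, w, h = bbox
    return [x, y, x + w, y + h]

# First box from the example instance above.
print(coco_to_corners([302.0, 109.0, 73.0, 52.0]))  # [302.0, 109.0, 375.0, 161.0]
```

Corner form is what Pascal-VOC-style tools expect, so a conversion like this is often needed when moving these annotations between frameworks.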
3,367
slvnwhrl/blurbs-clustering-p2p
2023-04-24T11:42:06.000Z
[ "size_categories:10K<n<100K", "language:de", "license:cc-by-nc-4.0", "embeddings", "clustering", "benchmark", "region:us" ]
slvnwhrl
null
null
0
86
2023-04-21T14:17:32
--- license: cc-by-nc-4.0 language: - de tags: - embeddings - clustering - benchmark size_categories: - 10K<n<100K --- This dataset can be used as a benchmark for clustering word embeddings for <b>German</b>. The dataset contains book titles and is based on the dataset from the [GermEval 2019 Shared Task on Hierarchical Classification of Blurbs](https://www.inf.uni-hamburg.de/en/inst/ab/lt/resources/data/germeval-2019-hmc.html). It contains 18'084 unique samples and 28 splits with 177 to 16'425 samples and 4 to 93 unique classes. Splits are built similarly to [MTEB](https://github.com/embeddings-benchmark/mteb)'s [ArxivClusteringP2P](https://huggingface.co/datasets/mteb/arxiv-clustering-p2p). Have a look at [German Text Embedding Clustering Benchmark](https://github.com/ClimSocAna/tecb-de) for more infos, datasets and evaluation results.
850
thu-coai/chid
2023-05-08T09:11:55.000Z
[ "language:zh", "license:apache-2.0", "arxiv:1906.01265", "region:us" ]
thu-coai
null
null
3
86
2023-05-08T08:21:01
--- license: apache-2.0 language: - zh --- The ChID dataset. [GitHub repo](https://github.com/chujiezheng/ChID-Dataset). [Original paper](https://arxiv.org/abs/1906.01265). ```bib @inproceedings{zheng-etal-2019-chid, title = "{C}h{ID}: A Large-scale {C}hinese {ID}iom Dataset for Cloze Test", author = "Zheng, Chujie and Huang, Minlie and Sun, Aixin", booktitle = "ACL", year = "2019" } ```
422
d0rj/dialogsum-ru
2023-05-13T06:27:30.000Z
[ "task_categories:summarization", "task_categories:text2text-generation", "task_categories:text-generation", "annotations_creators:expert-generated", "language_creators:translated", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:knkarthick/dialogsum", "language:ru", "...
d0rj
null
null
2
86
2023-05-08T14:17:46
--- annotations_creators: - expert-generated language_creators: - translated language: - ru license: - mit multilinguality: - monolingual size_categories: - 10K<n<100K source_datasets: - knkarthick/dialogsum task_categories: - summarization - text2text-generation - text-generation task_ids: [] pretty_name: DIALOGSum Corpus (ru) tags: - conversations-summarization - dialogue-summarization dataset_info: features: - name: id dtype: string - name: dialogue dtype: string - name: summary dtype: string - name: topic dtype: string splits: - name: train num_bytes: 19115158 num_examples: 12460 - name: validation num_bytes: 746312 num_examples: 500 - name: test num_bytes: 2282379 num_examples: 1500 download_size: 10144708 dataset_size: 22143849 train-eval-index: - config: samsum task: summarization task_id: summarization splits: eval_split: test col_mapping: dialogue: text summary: target --- # Dataset Card for DIALOGSum Corpus ## Dataset Description ### Links - **Homepage:** https://aclanthology.org/2021.findings-acl.449 - **Repository:** https://github.com/cylnlp/dialogsum - **Paper:** https://aclanthology.org/2021.findings-acl.449 ### Dataset Summary DialogSum is a large-scale dialogue summarization dataset, consisting of 13,460 (Plus 100 holdout data for topic generation) dialogues with corresponding manually labeled summaries and topics. ### Languages Russian (translated from English by Google Translate). ## Dataset Structure ### Data Fields - dialogue: text of dialogue. - summary: human written summary of the dialogue. - topic: human written topic/one liner of the dialogue. - id: unique file id of an example. 
### Data Splits - train: 12460 - val: 500 - test: 1500 - holdout: 100 [Only 3 features: id, dialogue, topic] ## Dataset Creation ### Curation Rationale In paper: We collect dialogue data for DialogSum from three public dialogue corpora, namely Dailydialog (Li et al., 2017), DREAM (Sun et al., 2019) and MuTual (Cui et al., 2019), as well as an English speaking practice website. These datasets contain face-to-face spoken dialogues that cover a wide range of daily-life topics, including schooling, work, medication, shopping, leisure, travel. Most conversations take place between friends, colleagues, and between service providers and customers. Compared with previous datasets, dialogues from DialogSum have distinct characteristics: Under rich real-life scenarios, including more diverse task-oriented scenarios; Have clear communication patterns and intents, which is valuable to serve as summarization sources; Have a reasonable length, which comforts the purpose of automatic summarization. We ask annotators to summarize each dialogue based on the following criteria: Convey the most salient information; Be brief; Preserve important named entities within the conversation; Be written from an observer perspective; Be written in formal language. ### Who are the source language producers? linguists ### Who are the annotators? 
language experts ## Licensing Information MIT License ## Citation Information ``` @inproceedings{chen-etal-2021-dialogsum, title = "{D}ialog{S}um: {A} Real-Life Scenario Dialogue Summarization Dataset", author = "Chen, Yulong and Liu, Yang and Chen, Liang and Zhang, Yue", booktitle = "Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021", month = aug, year = "2021", address = "Online", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2021.findings-acl.449", doi = "10.18653/v1/2021.findings-acl.449", pages = "5062--5074", } ``` ## Contributions Thanks to [@cylnlp](https://github.com/cylnlp) for adding this dataset.
3,816
augtoma/medmcqa
2023-08-11T20:44:27.000Z
[ "region:us" ]
augtoma
null
null
1
86
2023-08-11T20:44:11
--- configs: - config_name: default data_files: - split: train path: data/train-* - split: test path: data/test-* dataset_info: features: - name: id dtype: string - name: question dtype: string - name: cop dtype: class_label: names: '0': a '1': b '2': c '3': d - name: choice_type dtype: string - name: exp dtype: string - name: subject_name dtype: string - name: topic_name dtype: string - name: options struct: - name: A dtype: string - name: B dtype: string - name: C dtype: string - name: D dtype: string - name: answer_idx dtype: string - name: answer dtype: string splits: - name: train num_bytes: 136988451 num_examples: 182822 - name: test num_bytes: 2350095 num_examples: 4183 download_size: 90978864 dataset_size: 139338546 --- # Dataset Card for "medmcqa" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
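The `cop` feature in the card above is a class label over the option letters `a`–`d`, stored separately from the string `answer_idx`; a minimal sketch (helper names are illustrative, not part of the dataset) of converting between the integer label and its option letter:

```python
# Option names of the `cop` class label, as declared in the card's features.
COP_NAMES = ["a", "b", "c", "d"]

def cop_to_letter(cop: int) -> str:
    """Map a `cop` class-label index to its option letter."""
    return COP_NAMES[cop]

def letter_to_cop(letter: str) -> int:
    """Map an option letter (case-insensitive) back to the class-label index."""
    return COP_NAMES.index(letter.lower())

print(cop_to_letter(2))    # c
print(letter_to_cop("D"))  # 3
```

A mapping like this is handy when comparing the integer `cop` column against letter-valued answer fields during evaluation.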
1,095
open-llm-leaderboard/details_psmathur__model_007_13b
2023-08-27T12:28:41.000Z
[ "region:us" ]
open-llm-leaderboard
null
null
0
86
2023-08-18T00:15:44
--- pretty_name: Evaluation run of psmathur/model_007_13b dataset_summary: "Dataset automatically created during the evaluation run of model\ \ [psmathur/model_007_13b](https://huggingface.co/psmathur/model_007_13b) on the\ \ [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\ \nThe dataset is composed of 61 configurations, each one corresponding to one of\ \ the evaluated tasks.\n\nThe dataset has been created from 2 run(s). Each run can be\ \ found as a specific split in each configuration, the split being named using the\ \ timestamp of the run. The \"train\" split is always pointing to the latest results.\n\ \nAn additional configuration \"results\" stores all the aggregated results of the\ \ run (and is used to compute and display the aggregated metrics on the [Open LLM\ \ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\ \nTo load the details from a run, you can for instance do the following:\n```python\n\ from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_psmathur__model_007_13b\"\ ,\n\t\"harness_truthfulqa_mc_0\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\ \nThese are the [latest results from run 2023-08-11T11:34:56.294632](https://huggingface.co/datasets/open-llm-leaderboard/details_psmathur__model_007_13b/blob/main/results_2023-08-11T11%3A34%3A56.294632.json)\ \ (note that there might be results for other tasks in the repos if successive evals\ \ didn't cover the same tasks. 
You find each in the results and the \"latest\" split\ \ for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.2314240573187148,\n\ \ \"acc_stderr\": 0.03071122006512167,\n \"acc_norm\": 0.2314240573187148,\n\ \ \"acc_norm_stderr\": 0.03071122006512167,\n \"mc1\": 1.0,\n \ \ \"mc1_stderr\": 0.0,\n \"mc2\": NaN,\n \"mc2_stderr\": NaN\n\ \ },\n \"harness|arc:challenge|25\": {\n \"acc\": 0.22696245733788395,\n\ \ \"acc_stderr\": 0.012240491536132861,\n \"acc_norm\": 0.22696245733788395,\n\ \ \"acc_norm_stderr\": 0.012240491536132861\n },\n \"harness|hellaswag|10\"\ : {\n \"acc\": 0.2504481179047998,\n \"acc_stderr\": 0.004323856300539177,\n\ \ \"acc_norm\": 0.2504481179047998,\n \"acc_norm_stderr\": 0.004323856300539177\n\ \ },\n \"harness|hendrycksTest-abstract_algebra|5\": {\n \"acc\": 0.22,\n\ \ \"acc_stderr\": 0.04163331998932268,\n \"acc_norm\": 0.22,\n \ \ \"acc_norm_stderr\": 0.04163331998932268\n },\n \"harness|hendrycksTest-anatomy|5\"\ : {\n \"acc\": 0.18518518518518517,\n \"acc_stderr\": 0.03355677216313142,\n\ \ \"acc_norm\": 0.18518518518518517,\n \"acc_norm_stderr\": 0.03355677216313142\n\ \ },\n \"harness|hendrycksTest-astronomy|5\": {\n \"acc\": 0.17763157894736842,\n\ \ \"acc_stderr\": 0.031103182383123398,\n \"acc_norm\": 0.17763157894736842,\n\ \ \"acc_norm_stderr\": 0.031103182383123398\n },\n \"harness|hendrycksTest-business_ethics|5\"\ : {\n \"acc\": 0.3,\n \"acc_stderr\": 0.046056618647183814,\n \ \ \"acc_norm\": 0.3,\n \"acc_norm_stderr\": 0.046056618647183814\n \ \ },\n \"harness|hendrycksTest-clinical_knowledge|5\": {\n \"acc\": 0.21509433962264152,\n\ \ \"acc_stderr\": 0.02528839450289137,\n \"acc_norm\": 0.21509433962264152,\n\ \ \"acc_norm_stderr\": 0.02528839450289137\n },\n \"harness|hendrycksTest-college_biology|5\"\ : {\n \"acc\": 0.2569444444444444,\n \"acc_stderr\": 0.03653946969442099,\n\ \ \"acc_norm\": 0.2569444444444444,\n \"acc_norm_stderr\": 0.03653946969442099\n\ \ },\n \"harness|hendrycksTest-college_chemistry|5\": {\n 
\"acc\":\ \ 0.2,\n \"acc_stderr\": 0.04020151261036845,\n \"acc_norm\": 0.2,\n\ \ \"acc_norm_stderr\": 0.04020151261036845\n },\n \"harness|hendrycksTest-college_computer_science|5\"\ : {\n \"acc\": 0.26,\n \"acc_stderr\": 0.0440844002276808,\n \ \ \"acc_norm\": 0.26,\n \"acc_norm_stderr\": 0.0440844002276808\n },\n\ \ \"harness|hendrycksTest-college_mathematics|5\": {\n \"acc\": 0.21,\n\ \ \"acc_stderr\": 0.040936018074033256,\n \"acc_norm\": 0.21,\n \ \ \"acc_norm_stderr\": 0.040936018074033256\n },\n \"harness|hendrycksTest-college_medicine|5\"\ : {\n \"acc\": 0.20809248554913296,\n \"acc_stderr\": 0.030952890217749874,\n\ \ \"acc_norm\": 0.20809248554913296,\n \"acc_norm_stderr\": 0.030952890217749874\n\ \ },\n \"harness|hendrycksTest-college_physics|5\": {\n \"acc\": 0.21568627450980393,\n\ \ \"acc_stderr\": 0.04092563958237654,\n \"acc_norm\": 0.21568627450980393,\n\ \ \"acc_norm_stderr\": 0.04092563958237654\n },\n \"harness|hendrycksTest-computer_security|5\"\ : {\n \"acc\": 0.28,\n \"acc_stderr\": 0.045126085985421276,\n \ \ \"acc_norm\": 0.28,\n \"acc_norm_stderr\": 0.045126085985421276\n \ \ },\n \"harness|hendrycksTest-conceptual_physics|5\": {\n \"acc\":\ \ 0.26382978723404255,\n \"acc_stderr\": 0.028809989854102973,\n \"\ acc_norm\": 0.26382978723404255,\n \"acc_norm_stderr\": 0.028809989854102973\n\ \ },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.23684210526315788,\n\ \ \"acc_stderr\": 0.039994238792813365,\n \"acc_norm\": 0.23684210526315788,\n\ \ \"acc_norm_stderr\": 0.039994238792813365\n },\n \"harness|hendrycksTest-electrical_engineering|5\"\ : {\n \"acc\": 0.2413793103448276,\n \"acc_stderr\": 0.03565998174135302,\n\ \ \"acc_norm\": 0.2413793103448276,\n \"acc_norm_stderr\": 0.03565998174135302\n\ \ },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\"\ : 0.20899470899470898,\n \"acc_stderr\": 0.02094048156533486,\n \"\ acc_norm\": 0.20899470899470898,\n \"acc_norm_stderr\": 0.02094048156533486\n\ \ },\n 
\"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.2857142857142857,\n\ \ \"acc_stderr\": 0.04040610178208841,\n \"acc_norm\": 0.2857142857142857,\n\ \ \"acc_norm_stderr\": 0.04040610178208841\n },\n \"harness|hendrycksTest-global_facts|5\"\ : {\n \"acc\": 0.18,\n \"acc_stderr\": 0.038612291966536934,\n \ \ \"acc_norm\": 0.18,\n \"acc_norm_stderr\": 0.038612291966536934\n \ \ },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\"\ : 0.1774193548387097,\n \"acc_stderr\": 0.02173254068932927,\n \"\ acc_norm\": 0.1774193548387097,\n \"acc_norm_stderr\": 0.02173254068932927\n\ \ },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\"\ : 0.15270935960591134,\n \"acc_stderr\": 0.02530890453938063,\n \"\ acc_norm\": 0.15270935960591134,\n \"acc_norm_stderr\": 0.02530890453938063\n\ \ },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \ \ \"acc\": 0.25,\n \"acc_stderr\": 0.04351941398892446,\n \"acc_norm\"\ : 0.25,\n \"acc_norm_stderr\": 0.04351941398892446\n },\n \"harness|hendrycksTest-high_school_european_history|5\"\ : {\n \"acc\": 0.21818181818181817,\n \"acc_stderr\": 0.03225078108306289,\n\ \ \"acc_norm\": 0.21818181818181817,\n \"acc_norm_stderr\": 0.03225078108306289\n\ \ },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\"\ : 0.17676767676767677,\n \"acc_stderr\": 0.027178752639044915,\n \"\ acc_norm\": 0.17676767676767677,\n \"acc_norm_stderr\": 0.027178752639044915\n\ \ },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n\ \ \"acc\": 0.19689119170984457,\n \"acc_stderr\": 0.028697873971860664,\n\ \ \"acc_norm\": 0.19689119170984457,\n \"acc_norm_stderr\": 0.028697873971860664\n\ \ },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \ \ \"acc\": 0.20256410256410257,\n \"acc_stderr\": 0.020377660970371372,\n\ \ \"acc_norm\": 0.20256410256410257,\n \"acc_norm_stderr\": 0.020377660970371372\n\ \ },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"\ acc\": 
0.2111111111111111,\n \"acc_stderr\": 0.024882116857655075,\n \ \ \"acc_norm\": 0.2111111111111111,\n \"acc_norm_stderr\": 0.024882116857655075\n\ \ },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \ \ \"acc\": 0.21008403361344538,\n \"acc_stderr\": 0.026461398717471874,\n\ \ \"acc_norm\": 0.21008403361344538,\n \"acc_norm_stderr\": 0.026461398717471874\n\ \ },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\"\ : 0.1986754966887417,\n \"acc_stderr\": 0.03257847384436776,\n \"\ acc_norm\": 0.1986754966887417,\n \"acc_norm_stderr\": 0.03257847384436776\n\ \ },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\"\ : 0.1926605504587156,\n \"acc_stderr\": 0.016909276884936094,\n \"\ acc_norm\": 0.1926605504587156,\n \"acc_norm_stderr\": 0.016909276884936094\n\ \ },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\"\ : 0.1527777777777778,\n \"acc_stderr\": 0.024536326026134224,\n \"\ acc_norm\": 0.1527777777777778,\n \"acc_norm_stderr\": 0.024536326026134224\n\ \ },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\"\ : 0.25,\n \"acc_stderr\": 0.03039153369274154,\n \"acc_norm\": 0.25,\n\ \ \"acc_norm_stderr\": 0.03039153369274154\n },\n \"harness|hendrycksTest-high_school_world_history|5\"\ : {\n \"acc\": 0.270042194092827,\n \"acc_stderr\": 0.028900721906293426,\n\ \ \"acc_norm\": 0.270042194092827,\n \"acc_norm_stderr\": 0.028900721906293426\n\ \ },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.31390134529147984,\n\ \ \"acc_stderr\": 0.031146796482972465,\n \"acc_norm\": 0.31390134529147984,\n\ \ \"acc_norm_stderr\": 0.031146796482972465\n },\n \"harness|hendrycksTest-human_sexuality|5\"\ : {\n \"acc\": 0.2595419847328244,\n \"acc_stderr\": 0.03844876139785271,\n\ \ \"acc_norm\": 0.2595419847328244,\n \"acc_norm_stderr\": 0.03844876139785271\n\ \ },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\":\ \ 0.2396694214876033,\n \"acc_stderr\": 0.03896878985070417,\n \"\ 
acc_norm\": 0.2396694214876033,\n \"acc_norm_stderr\": 0.03896878985070417\n\ \ },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.25925925925925924,\n\ \ \"acc_stderr\": 0.042365112580946336,\n \"acc_norm\": 0.25925925925925924,\n\ \ \"acc_norm_stderr\": 0.042365112580946336\n },\n \"harness|hendrycksTest-logical_fallacies|5\"\ : {\n \"acc\": 0.22085889570552147,\n \"acc_stderr\": 0.032591773927421776,\n\ \ \"acc_norm\": 0.22085889570552147,\n \"acc_norm_stderr\": 0.032591773927421776\n\ \ },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.3125,\n\ \ \"acc_stderr\": 0.043994650575715215,\n \"acc_norm\": 0.3125,\n\ \ \"acc_norm_stderr\": 0.043994650575715215\n },\n \"harness|hendrycksTest-management|5\"\ : {\n \"acc\": 0.17475728155339806,\n \"acc_stderr\": 0.037601780060266224,\n\ \ \"acc_norm\": 0.17475728155339806,\n \"acc_norm_stderr\": 0.037601780060266224\n\ \ },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.2905982905982906,\n\ \ \"acc_stderr\": 0.02974504857267404,\n \"acc_norm\": 0.2905982905982906,\n\ \ \"acc_norm_stderr\": 0.02974504857267404\n },\n \"harness|hendrycksTest-medical_genetics|5\"\ : {\n \"acc\": 0.3,\n \"acc_stderr\": 0.046056618647183814,\n \ \ \"acc_norm\": 0.3,\n \"acc_norm_stderr\": 0.046056618647183814\n \ \ },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.23754789272030652,\n\ \ \"acc_stderr\": 0.015218733046150193,\n \"acc_norm\": 0.23754789272030652,\n\ \ \"acc_norm_stderr\": 0.015218733046150193\n },\n \"harness|hendrycksTest-moral_disputes|5\"\ : {\n \"acc\": 0.24855491329479767,\n \"acc_stderr\": 0.023267528432100174,\n\ \ \"acc_norm\": 0.24855491329479767,\n \"acc_norm_stderr\": 0.023267528432100174\n\ \ },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.23798882681564246,\n\ \ \"acc_stderr\": 0.014242630070574915,\n \"acc_norm\": 0.23798882681564246,\n\ \ \"acc_norm_stderr\": 0.014242630070574915\n },\n \"harness|hendrycksTest-nutrition|5\"\ : {\n \"acc\": 
0.22549019607843138,\n \"acc_stderr\": 0.023929155517351284,\n\ \ \"acc_norm\": 0.22549019607843138,\n \"acc_norm_stderr\": 0.023929155517351284\n\ \ },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.1864951768488746,\n\ \ \"acc_stderr\": 0.02212243977248077,\n \"acc_norm\": 0.1864951768488746,\n\ \ \"acc_norm_stderr\": 0.02212243977248077\n },\n \"harness|hendrycksTest-prehistory|5\"\ : {\n \"acc\": 0.21604938271604937,\n \"acc_stderr\": 0.022899162918445806,\n\ \ \"acc_norm\": 0.21604938271604937,\n \"acc_norm_stderr\": 0.022899162918445806\n\ \ },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"\ acc\": 0.23404255319148937,\n \"acc_stderr\": 0.025257861359432417,\n \ \ \"acc_norm\": 0.23404255319148937,\n \"acc_norm_stderr\": 0.025257861359432417\n\ \ },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.2457627118644068,\n\ \ \"acc_stderr\": 0.010996156635142692,\n \"acc_norm\": 0.2457627118644068,\n\ \ \"acc_norm_stderr\": 0.010996156635142692\n },\n \"harness|hendrycksTest-professional_medicine|5\"\ : {\n \"acc\": 0.18382352941176472,\n \"acc_stderr\": 0.023529242185193106,\n\ \ \"acc_norm\": 0.18382352941176472,\n \"acc_norm_stderr\": 0.023529242185193106\n\ \ },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"\ acc\": 0.25,\n \"acc_stderr\": 0.01751781884501444,\n \"acc_norm\"\ : 0.25,\n \"acc_norm_stderr\": 0.01751781884501444\n },\n \"harness|hendrycksTest-public_relations|5\"\ : {\n \"acc\": 0.21818181818181817,\n \"acc_stderr\": 0.03955932861795833,\n\ \ \"acc_norm\": 0.21818181818181817,\n \"acc_norm_stderr\": 0.03955932861795833\n\ \ },\n \"harness|hendrycksTest-security_studies|5\": {\n \"acc\": 0.18775510204081633,\n\ \ \"acc_stderr\": 0.02500025603954621,\n \"acc_norm\": 0.18775510204081633,\n\ \ \"acc_norm_stderr\": 0.02500025603954621\n },\n \"harness|hendrycksTest-sociology|5\"\ : {\n \"acc\": 0.24378109452736318,\n \"acc_stderr\": 0.03036049015401465,\n\ \ \"acc_norm\": 0.24378109452736318,\n 
\"acc_norm_stderr\": 0.03036049015401465\n\ \ },\n \"harness|hendrycksTest-us_foreign_policy|5\": {\n \"acc\":\ \ 0.28,\n \"acc_stderr\": 0.04512608598542128,\n \"acc_norm\": 0.28,\n\ \ \"acc_norm_stderr\": 0.04512608598542128\n },\n \"harness|hendrycksTest-virology|5\"\ : {\n \"acc\": 0.28313253012048195,\n \"acc_stderr\": 0.03507295431370518,\n\ \ \"acc_norm\": 0.28313253012048195,\n \"acc_norm_stderr\": 0.03507295431370518\n\ \ },\n \"harness|hendrycksTest-world_religions|5\": {\n \"acc\": 0.3216374269005848,\n\ \ \"acc_stderr\": 0.03582529442573122,\n \"acc_norm\": 0.3216374269005848,\n\ \ \"acc_norm_stderr\": 0.03582529442573122\n },\n \"harness|truthfulqa:mc|0\"\ : {\n \"mc1\": 1.0,\n \"mc1_stderr\": 0.0,\n \"mc2\": NaN,\n\ \ \"mc2_stderr\": NaN\n }\n}\n```" repo_url: https://huggingface.co/psmathur/model_007_13b leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard point_of_contact: clementine@hf.co configs: - config_name: harness_arc_challenge_25 data_files: - split: 2023_08_09T13_37_17.110700 path: - '**/details_harness|arc:challenge|25_2023-08-09T13:37:17.110700.parquet' - split: 2023_08_11T11_34_56.294632 path: - '**/details_harness|arc:challenge|25_2023-08-11T11:34:56.294632.parquet' - split: latest path: - '**/details_harness|arc:challenge|25_2023-08-11T11:34:56.294632.parquet' - config_name: harness_hellaswag_10 data_files: - split: 2023_08_09T13_37_17.110700 path: - '**/details_harness|hellaswag|10_2023-08-09T13:37:17.110700.parquet' - split: 2023_08_11T11_34_56.294632 path: - '**/details_harness|hellaswag|10_2023-08-11T11:34:56.294632.parquet' - split: latest path: - '**/details_harness|hellaswag|10_2023-08-11T11:34:56.294632.parquet' - config_name: harness_hendrycksTest_5 data_files: - split: 2023_08_09T13_37_17.110700 path: - '**/details_harness|hendrycksTest-abstract_algebra|5_2023-08-09T13:37:17.110700.parquet' - '**/details_harness|hendrycksTest-anatomy|5_2023-08-09T13:37:17.110700.parquet' - 
'**/details_harness|hendrycksTest-astronomy|5_2023-08-09T13:37:17.110700.parquet' - '**/details_harness|hendrycksTest-business_ethics|5_2023-08-09T13:37:17.110700.parquet' - '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-08-09T13:37:17.110700.parquet' - '**/details_harness|hendrycksTest-college_biology|5_2023-08-09T13:37:17.110700.parquet' - '**/details_harness|hendrycksTest-college_chemistry|5_2023-08-09T13:37:17.110700.parquet' - '**/details_harness|hendrycksTest-college_computer_science|5_2023-08-09T13:37:17.110700.parquet' - '**/details_harness|hendrycksTest-college_mathematics|5_2023-08-09T13:37:17.110700.parquet' - '**/details_harness|hendrycksTest-college_medicine|5_2023-08-09T13:37:17.110700.parquet' - '**/details_harness|hendrycksTest-college_physics|5_2023-08-09T13:37:17.110700.parquet' - '**/details_harness|hendrycksTest-computer_security|5_2023-08-09T13:37:17.110700.parquet' - '**/details_harness|hendrycksTest-conceptual_physics|5_2023-08-09T13:37:17.110700.parquet' - '**/details_harness|hendrycksTest-econometrics|5_2023-08-09T13:37:17.110700.parquet' - '**/details_harness|hendrycksTest-electrical_engineering|5_2023-08-09T13:37:17.110700.parquet' - '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-08-09T13:37:17.110700.parquet' - '**/details_harness|hendrycksTest-formal_logic|5_2023-08-09T13:37:17.110700.parquet' - '**/details_harness|hendrycksTest-global_facts|5_2023-08-09T13:37:17.110700.parquet' - '**/details_harness|hendrycksTest-high_school_biology|5_2023-08-09T13:37:17.110700.parquet' - '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-08-09T13:37:17.110700.parquet' - '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-08-09T13:37:17.110700.parquet' - '**/details_harness|hendrycksTest-high_school_european_history|5_2023-08-09T13:37:17.110700.parquet' - '**/details_harness|hendrycksTest-high_school_geography|5_2023-08-09T13:37:17.110700.parquet' - 
'**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-08-09T13:37:17.110700.parquet' - '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-08-09T13:37:17.110700.parquet' - '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-08-09T13:37:17.110700.parquet' - '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-08-09T13:37:17.110700.parquet' - '**/details_harness|hendrycksTest-high_school_physics|5_2023-08-09T13:37:17.110700.parquet' - '**/details_harness|hendrycksTest-high_school_psychology|5_2023-08-09T13:37:17.110700.parquet' - '**/details_harness|hendrycksTest-high_school_statistics|5_2023-08-09T13:37:17.110700.parquet' - '**/details_harness|hendrycksTest-high_school_us_history|5_2023-08-09T13:37:17.110700.parquet' - '**/details_harness|hendrycksTest-high_school_world_history|5_2023-08-09T13:37:17.110700.parquet' - '**/details_harness|hendrycksTest-human_aging|5_2023-08-09T13:37:17.110700.parquet' - '**/details_harness|hendrycksTest-human_sexuality|5_2023-08-09T13:37:17.110700.parquet' - '**/details_harness|hendrycksTest-international_law|5_2023-08-09T13:37:17.110700.parquet' - '**/details_harness|hendrycksTest-jurisprudence|5_2023-08-09T13:37:17.110700.parquet' - '**/details_harness|hendrycksTest-logical_fallacies|5_2023-08-09T13:37:17.110700.parquet' - '**/details_harness|hendrycksTest-machine_learning|5_2023-08-09T13:37:17.110700.parquet' - '**/details_harness|hendrycksTest-management|5_2023-08-09T13:37:17.110700.parquet' - '**/details_harness|hendrycksTest-marketing|5_2023-08-09T13:37:17.110700.parquet' - '**/details_harness|hendrycksTest-medical_genetics|5_2023-08-09T13:37:17.110700.parquet' - '**/details_harness|hendrycksTest-miscellaneous|5_2023-08-09T13:37:17.110700.parquet' - '**/details_harness|hendrycksTest-moral_disputes|5_2023-08-09T13:37:17.110700.parquet' - '**/details_harness|hendrycksTest-moral_scenarios|5_2023-08-09T13:37:17.110700.parquet' - 
'**/details_harness|hendrycksTest-nutrition|5_2023-08-09T13:37:17.110700.parquet' - '**/details_harness|hendrycksTest-philosophy|5_2023-08-09T13:37:17.110700.parquet' - '**/details_harness|hendrycksTest-prehistory|5_2023-08-09T13:37:17.110700.parquet' - '**/details_harness|hendrycksTest-professional_accounting|5_2023-08-09T13:37:17.110700.parquet' - '**/details_harness|hendrycksTest-professional_law|5_2023-08-09T13:37:17.110700.parquet' - '**/details_harness|hendrycksTest-professional_medicine|5_2023-08-09T13:37:17.110700.parquet' - '**/details_harness|hendrycksTest-professional_psychology|5_2023-08-09T13:37:17.110700.parquet' - '**/details_harness|hendrycksTest-public_relations|5_2023-08-09T13:37:17.110700.parquet' - '**/details_harness|hendrycksTest-security_studies|5_2023-08-09T13:37:17.110700.parquet' - '**/details_harness|hendrycksTest-sociology|5_2023-08-09T13:37:17.110700.parquet' - '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-08-09T13:37:17.110700.parquet' - '**/details_harness|hendrycksTest-virology|5_2023-08-09T13:37:17.110700.parquet' - '**/details_harness|hendrycksTest-world_religions|5_2023-08-09T13:37:17.110700.parquet' - split: 2023_08_11T11_34_56.294632 path: - '**/details_harness|hendrycksTest-abstract_algebra|5_2023-08-11T11:34:56.294632.parquet' - '**/details_harness|hendrycksTest-anatomy|5_2023-08-11T11:34:56.294632.parquet' - '**/details_harness|hendrycksTest-astronomy|5_2023-08-11T11:34:56.294632.parquet' - '**/details_harness|hendrycksTest-business_ethics|5_2023-08-11T11:34:56.294632.parquet' - '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-08-11T11:34:56.294632.parquet' - '**/details_harness|hendrycksTest-college_biology|5_2023-08-11T11:34:56.294632.parquet' - '**/details_harness|hendrycksTest-college_chemistry|5_2023-08-11T11:34:56.294632.parquet' - '**/details_harness|hendrycksTest-college_computer_science|5_2023-08-11T11:34:56.294632.parquet' - 
'**/details_harness|hendrycksTest-college_mathematics|5_2023-08-11T11:34:56.294632.parquet' - '**/details_harness|hendrycksTest-college_medicine|5_2023-08-11T11:34:56.294632.parquet' - '**/details_harness|hendrycksTest-college_physics|5_2023-08-11T11:34:56.294632.parquet' - '**/details_harness|hendrycksTest-computer_security|5_2023-08-11T11:34:56.294632.parquet' - '**/details_harness|hendrycksTest-conceptual_physics|5_2023-08-11T11:34:56.294632.parquet' - '**/details_harness|hendrycksTest-econometrics|5_2023-08-11T11:34:56.294632.parquet' - '**/details_harness|hendrycksTest-electrical_engineering|5_2023-08-11T11:34:56.294632.parquet' - '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-08-11T11:34:56.294632.parquet' - '**/details_harness|hendrycksTest-formal_logic|5_2023-08-11T11:34:56.294632.parquet' - '**/details_harness|hendrycksTest-global_facts|5_2023-08-11T11:34:56.294632.parquet' - '**/details_harness|hendrycksTest-high_school_biology|5_2023-08-11T11:34:56.294632.parquet' - '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-08-11T11:34:56.294632.parquet' - '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-08-11T11:34:56.294632.parquet' - '**/details_harness|hendrycksTest-high_school_european_history|5_2023-08-11T11:34:56.294632.parquet' - '**/details_harness|hendrycksTest-high_school_geography|5_2023-08-11T11:34:56.294632.parquet' - '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-08-11T11:34:56.294632.parquet' - '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-08-11T11:34:56.294632.parquet' - '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-08-11T11:34:56.294632.parquet' - '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-08-11T11:34:56.294632.parquet' - '**/details_harness|hendrycksTest-high_school_physics|5_2023-08-11T11:34:56.294632.parquet' - 
'**/details_harness|hendrycksTest-high_school_psychology|5_2023-08-11T11:34:56.294632.parquet' - '**/details_harness|hendrycksTest-high_school_statistics|5_2023-08-11T11:34:56.294632.parquet' - '**/details_harness|hendrycksTest-high_school_us_history|5_2023-08-11T11:34:56.294632.parquet' - '**/details_harness|hendrycksTest-high_school_world_history|5_2023-08-11T11:34:56.294632.parquet' - '**/details_harness|hendrycksTest-human_aging|5_2023-08-11T11:34:56.294632.parquet' - '**/details_harness|hendrycksTest-human_sexuality|5_2023-08-11T11:34:56.294632.parquet' - '**/details_harness|hendrycksTest-international_law|5_2023-08-11T11:34:56.294632.parquet' - '**/details_harness|hendrycksTest-jurisprudence|5_2023-08-11T11:34:56.294632.parquet' - '**/details_harness|hendrycksTest-logical_fallacies|5_2023-08-11T11:34:56.294632.parquet' - '**/details_harness|hendrycksTest-machine_learning|5_2023-08-11T11:34:56.294632.parquet' - '**/details_harness|hendrycksTest-management|5_2023-08-11T11:34:56.294632.parquet' - '**/details_harness|hendrycksTest-marketing|5_2023-08-11T11:34:56.294632.parquet' - '**/details_harness|hendrycksTest-medical_genetics|5_2023-08-11T11:34:56.294632.parquet' - '**/details_harness|hendrycksTest-miscellaneous|5_2023-08-11T11:34:56.294632.parquet' - '**/details_harness|hendrycksTest-moral_disputes|5_2023-08-11T11:34:56.294632.parquet' - '**/details_harness|hendrycksTest-moral_scenarios|5_2023-08-11T11:34:56.294632.parquet' - '**/details_harness|hendrycksTest-nutrition|5_2023-08-11T11:34:56.294632.parquet' - '**/details_harness|hendrycksTest-philosophy|5_2023-08-11T11:34:56.294632.parquet' - '**/details_harness|hendrycksTest-prehistory|5_2023-08-11T11:34:56.294632.parquet' - '**/details_harness|hendrycksTest-professional_accounting|5_2023-08-11T11:34:56.294632.parquet' - '**/details_harness|hendrycksTest-professional_law|5_2023-08-11T11:34:56.294632.parquet' - '**/details_harness|hendrycksTest-professional_medicine|5_2023-08-11T11:34:56.294632.parquet' - 
'**/details_harness|hendrycksTest-professional_psychology|5_2023-08-11T11:34:56.294632.parquet' - '**/details_harness|hendrycksTest-public_relations|5_2023-08-11T11:34:56.294632.parquet' - '**/details_harness|hendrycksTest-security_studies|5_2023-08-11T11:34:56.294632.parquet' - '**/details_harness|hendrycksTest-sociology|5_2023-08-11T11:34:56.294632.parquet' - '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-08-11T11:34:56.294632.parquet' - '**/details_harness|hendrycksTest-virology|5_2023-08-11T11:34:56.294632.parquet' - '**/details_harness|hendrycksTest-world_religions|5_2023-08-11T11:34:56.294632.parquet' - split: latest path: - '**/details_harness|hendrycksTest-abstract_algebra|5_2023-08-11T11:34:56.294632.parquet' - '**/details_harness|hendrycksTest-anatomy|5_2023-08-11T11:34:56.294632.parquet' - '**/details_harness|hendrycksTest-astronomy|5_2023-08-11T11:34:56.294632.parquet' - '**/details_harness|hendrycksTest-business_ethics|5_2023-08-11T11:34:56.294632.parquet' - '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-08-11T11:34:56.294632.parquet' - '**/details_harness|hendrycksTest-college_biology|5_2023-08-11T11:34:56.294632.parquet' - '**/details_harness|hendrycksTest-college_chemistry|5_2023-08-11T11:34:56.294632.parquet' - '**/details_harness|hendrycksTest-college_computer_science|5_2023-08-11T11:34:56.294632.parquet' - '**/details_harness|hendrycksTest-college_mathematics|5_2023-08-11T11:34:56.294632.parquet' - '**/details_harness|hendrycksTest-college_medicine|5_2023-08-11T11:34:56.294632.parquet' - '**/details_harness|hendrycksTest-college_physics|5_2023-08-11T11:34:56.294632.parquet' - '**/details_harness|hendrycksTest-computer_security|5_2023-08-11T11:34:56.294632.parquet' - '**/details_harness|hendrycksTest-conceptual_physics|5_2023-08-11T11:34:56.294632.parquet' - '**/details_harness|hendrycksTest-econometrics|5_2023-08-11T11:34:56.294632.parquet' - 
'**/details_harness|hendrycksTest-electrical_engineering|5_2023-08-11T11:34:56.294632.parquet' - '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-08-11T11:34:56.294632.parquet' - '**/details_harness|hendrycksTest-formal_logic|5_2023-08-11T11:34:56.294632.parquet' - '**/details_harness|hendrycksTest-global_facts|5_2023-08-11T11:34:56.294632.parquet' - '**/details_harness|hendrycksTest-high_school_biology|5_2023-08-11T11:34:56.294632.parquet' - '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-08-11T11:34:56.294632.parquet' - '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-08-11T11:34:56.294632.parquet' - '**/details_harness|hendrycksTest-high_school_european_history|5_2023-08-11T11:34:56.294632.parquet' - '**/details_harness|hendrycksTest-high_school_geography|5_2023-08-11T11:34:56.294632.parquet' - '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-08-11T11:34:56.294632.parquet' - '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-08-11T11:34:56.294632.parquet' - '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-08-11T11:34:56.294632.parquet' - '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-08-11T11:34:56.294632.parquet' - '**/details_harness|hendrycksTest-high_school_physics|5_2023-08-11T11:34:56.294632.parquet' - '**/details_harness|hendrycksTest-high_school_psychology|5_2023-08-11T11:34:56.294632.parquet' - '**/details_harness|hendrycksTest-high_school_statistics|5_2023-08-11T11:34:56.294632.parquet' - '**/details_harness|hendrycksTest-high_school_us_history|5_2023-08-11T11:34:56.294632.parquet' - '**/details_harness|hendrycksTest-high_school_world_history|5_2023-08-11T11:34:56.294632.parquet' - '**/details_harness|hendrycksTest-human_aging|5_2023-08-11T11:34:56.294632.parquet' - '**/details_harness|hendrycksTest-human_sexuality|5_2023-08-11T11:34:56.294632.parquet' - 
'**/details_harness|hendrycksTest-international_law|5_2023-08-11T11:34:56.294632.parquet' - '**/details_harness|hendrycksTest-jurisprudence|5_2023-08-11T11:34:56.294632.parquet' - '**/details_harness|hendrycksTest-logical_fallacies|5_2023-08-11T11:34:56.294632.parquet' - '**/details_harness|hendrycksTest-machine_learning|5_2023-08-11T11:34:56.294632.parquet' - '**/details_harness|hendrycksTest-management|5_2023-08-11T11:34:56.294632.parquet' - '**/details_harness|hendrycksTest-marketing|5_2023-08-11T11:34:56.294632.parquet' - '**/details_harness|hendrycksTest-medical_genetics|5_2023-08-11T11:34:56.294632.parquet' - '**/details_harness|hendrycksTest-miscellaneous|5_2023-08-11T11:34:56.294632.parquet' - '**/details_harness|hendrycksTest-moral_disputes|5_2023-08-11T11:34:56.294632.parquet' - '**/details_harness|hendrycksTest-moral_scenarios|5_2023-08-11T11:34:56.294632.parquet' - '**/details_harness|hendrycksTest-nutrition|5_2023-08-11T11:34:56.294632.parquet' - '**/details_harness|hendrycksTest-philosophy|5_2023-08-11T11:34:56.294632.parquet' - '**/details_harness|hendrycksTest-prehistory|5_2023-08-11T11:34:56.294632.parquet' - '**/details_harness|hendrycksTest-professional_accounting|5_2023-08-11T11:34:56.294632.parquet' - '**/details_harness|hendrycksTest-professional_law|5_2023-08-11T11:34:56.294632.parquet' - '**/details_harness|hendrycksTest-professional_medicine|5_2023-08-11T11:34:56.294632.parquet' - '**/details_harness|hendrycksTest-professional_psychology|5_2023-08-11T11:34:56.294632.parquet' - '**/details_harness|hendrycksTest-public_relations|5_2023-08-11T11:34:56.294632.parquet' - '**/details_harness|hendrycksTest-security_studies|5_2023-08-11T11:34:56.294632.parquet' - '**/details_harness|hendrycksTest-sociology|5_2023-08-11T11:34:56.294632.parquet' - '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-08-11T11:34:56.294632.parquet' - '**/details_harness|hendrycksTest-virology|5_2023-08-11T11:34:56.294632.parquet' - 
'**/details_harness|hendrycksTest-world_religions|5_2023-08-11T11:34:56.294632.parquet' - config_name: harness_hendrycksTest_abstract_algebra_5 data_files: - split: 2023_08_09T13_37_17.110700 path: - '**/details_harness|hendrycksTest-abstract_algebra|5_2023-08-09T13:37:17.110700.parquet' - split: 2023_08_11T11_34_56.294632 path: - '**/details_harness|hendrycksTest-abstract_algebra|5_2023-08-11T11:34:56.294632.parquet' - split: latest path: - '**/details_harness|hendrycksTest-abstract_algebra|5_2023-08-11T11:34:56.294632.parquet' - config_name: harness_hendrycksTest_anatomy_5 data_files: - split: 2023_08_09T13_37_17.110700 path: - '**/details_harness|hendrycksTest-anatomy|5_2023-08-09T13:37:17.110700.parquet' - split: 2023_08_11T11_34_56.294632 path: - '**/details_harness|hendrycksTest-anatomy|5_2023-08-11T11:34:56.294632.parquet' - split: latest path: - '**/details_harness|hendrycksTest-anatomy|5_2023-08-11T11:34:56.294632.parquet' - config_name: harness_hendrycksTest_astronomy_5 data_files: - split: 2023_08_09T13_37_17.110700 path: - '**/details_harness|hendrycksTest-astronomy|5_2023-08-09T13:37:17.110700.parquet' - split: 2023_08_11T11_34_56.294632 path: - '**/details_harness|hendrycksTest-astronomy|5_2023-08-11T11:34:56.294632.parquet' - split: latest path: - '**/details_harness|hendrycksTest-astronomy|5_2023-08-11T11:34:56.294632.parquet' - config_name: harness_hendrycksTest_business_ethics_5 data_files: - split: 2023_08_09T13_37_17.110700 path: - '**/details_harness|hendrycksTest-business_ethics|5_2023-08-09T13:37:17.110700.parquet' - split: 2023_08_11T11_34_56.294632 path: - '**/details_harness|hendrycksTest-business_ethics|5_2023-08-11T11:34:56.294632.parquet' - split: latest path: - '**/details_harness|hendrycksTest-business_ethics|5_2023-08-11T11:34:56.294632.parquet' - config_name: harness_hendrycksTest_clinical_knowledge_5 data_files: - split: 2023_08_09T13_37_17.110700 path: - 
'**/details_harness|hendrycksTest-clinical_knowledge|5_2023-08-09T13:37:17.110700.parquet' - split: 2023_08_11T11_34_56.294632 path: - '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-08-11T11:34:56.294632.parquet' - split: latest path: - '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-08-11T11:34:56.294632.parquet' - config_name: harness_hendrycksTest_college_biology_5 data_files: - split: 2023_08_09T13_37_17.110700 path: - '**/details_harness|hendrycksTest-college_biology|5_2023-08-09T13:37:17.110700.parquet' - split: 2023_08_11T11_34_56.294632 path: - '**/details_harness|hendrycksTest-college_biology|5_2023-08-11T11:34:56.294632.parquet' - split: latest path: - '**/details_harness|hendrycksTest-college_biology|5_2023-08-11T11:34:56.294632.parquet' - config_name: harness_hendrycksTest_college_chemistry_5 data_files: - split: 2023_08_09T13_37_17.110700 path: - '**/details_harness|hendrycksTest-college_chemistry|5_2023-08-09T13:37:17.110700.parquet' - split: 2023_08_11T11_34_56.294632 path: - '**/details_harness|hendrycksTest-college_chemistry|5_2023-08-11T11:34:56.294632.parquet' - split: latest path: - '**/details_harness|hendrycksTest-college_chemistry|5_2023-08-11T11:34:56.294632.parquet' - config_name: harness_hendrycksTest_college_computer_science_5 data_files: - split: 2023_08_09T13_37_17.110700 path: - '**/details_harness|hendrycksTest-college_computer_science|5_2023-08-09T13:37:17.110700.parquet' - split: 2023_08_11T11_34_56.294632 path: - '**/details_harness|hendrycksTest-college_computer_science|5_2023-08-11T11:34:56.294632.parquet' - split: latest path: - '**/details_harness|hendrycksTest-college_computer_science|5_2023-08-11T11:34:56.294632.parquet' - config_name: harness_hendrycksTest_college_mathematics_5 data_files: - split: 2023_08_09T13_37_17.110700 path: - '**/details_harness|hendrycksTest-college_mathematics|5_2023-08-09T13:37:17.110700.parquet' - split: 2023_08_11T11_34_56.294632 path: - 
'**/details_harness|hendrycksTest-college_mathematics|5_2023-08-11T11:34:56.294632.parquet' - split: latest path: - '**/details_harness|hendrycksTest-college_mathematics|5_2023-08-11T11:34:56.294632.parquet' - config_name: harness_hendrycksTest_college_medicine_5 data_files: - split: 2023_08_09T13_37_17.110700 path: - '**/details_harness|hendrycksTest-college_medicine|5_2023-08-09T13:37:17.110700.parquet' - split: 2023_08_11T11_34_56.294632 path: - '**/details_harness|hendrycksTest-college_medicine|5_2023-08-11T11:34:56.294632.parquet' - split: latest path: - '**/details_harness|hendrycksTest-college_medicine|5_2023-08-11T11:34:56.294632.parquet' - config_name: harness_hendrycksTest_college_physics_5 data_files: - split: 2023_08_09T13_37_17.110700 path: - '**/details_harness|hendrycksTest-college_physics|5_2023-08-09T13:37:17.110700.parquet' - split: 2023_08_11T11_34_56.294632 path: - '**/details_harness|hendrycksTest-college_physics|5_2023-08-11T11:34:56.294632.parquet' - split: latest path: - '**/details_harness|hendrycksTest-college_physics|5_2023-08-11T11:34:56.294632.parquet' - config_name: harness_hendrycksTest_computer_security_5 data_files: - split: 2023_08_09T13_37_17.110700 path: - '**/details_harness|hendrycksTest-computer_security|5_2023-08-09T13:37:17.110700.parquet' - split: 2023_08_11T11_34_56.294632 path: - '**/details_harness|hendrycksTest-computer_security|5_2023-08-11T11:34:56.294632.parquet' - split: latest path: - '**/details_harness|hendrycksTest-computer_security|5_2023-08-11T11:34:56.294632.parquet' - config_name: harness_hendrycksTest_conceptual_physics_5 data_files: - split: 2023_08_09T13_37_17.110700 path: - '**/details_harness|hendrycksTest-conceptual_physics|5_2023-08-09T13:37:17.110700.parquet' - split: 2023_08_11T11_34_56.294632 path: - '**/details_harness|hendrycksTest-conceptual_physics|5_2023-08-11T11:34:56.294632.parquet' - split: latest path: - 
'**/details_harness|hendrycksTest-conceptual_physics|5_2023-08-11T11:34:56.294632.parquet' - config_name: harness_hendrycksTest_econometrics_5 data_files: - split: 2023_08_09T13_37_17.110700 path: - '**/details_harness|hendrycksTest-econometrics|5_2023-08-09T13:37:17.110700.parquet' - split: 2023_08_11T11_34_56.294632 path: - '**/details_harness|hendrycksTest-econometrics|5_2023-08-11T11:34:56.294632.parquet' - split: latest path: - '**/details_harness|hendrycksTest-econometrics|5_2023-08-11T11:34:56.294632.parquet' - config_name: harness_hendrycksTest_electrical_engineering_5 data_files: - split: 2023_08_09T13_37_17.110700 path: - '**/details_harness|hendrycksTest-electrical_engineering|5_2023-08-09T13:37:17.110700.parquet' - split: 2023_08_11T11_34_56.294632 path: - '**/details_harness|hendrycksTest-electrical_engineering|5_2023-08-11T11:34:56.294632.parquet' - split: latest path: - '**/details_harness|hendrycksTest-electrical_engineering|5_2023-08-11T11:34:56.294632.parquet' - config_name: harness_hendrycksTest_elementary_mathematics_5 data_files: - split: 2023_08_09T13_37_17.110700 path: - '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-08-09T13:37:17.110700.parquet' - split: 2023_08_11T11_34_56.294632 path: - '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-08-11T11:34:56.294632.parquet' - split: latest path: - '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-08-11T11:34:56.294632.parquet' - config_name: harness_hendrycksTest_formal_logic_5 data_files: - split: 2023_08_09T13_37_17.110700 path: - '**/details_harness|hendrycksTest-formal_logic|5_2023-08-09T13:37:17.110700.parquet' - split: 2023_08_11T11_34_56.294632 path: - '**/details_harness|hendrycksTest-formal_logic|5_2023-08-11T11:34:56.294632.parquet' - split: latest path: - '**/details_harness|hendrycksTest-formal_logic|5_2023-08-11T11:34:56.294632.parquet' - config_name: harness_hendrycksTest_global_facts_5 data_files: - split: 2023_08_09T13_37_17.110700 
path: - '**/details_harness|hendrycksTest-global_facts|5_2023-08-09T13:37:17.110700.parquet' - split: 2023_08_11T11_34_56.294632 path: - '**/details_harness|hendrycksTest-global_facts|5_2023-08-11T11:34:56.294632.parquet' - split: latest path: - '**/details_harness|hendrycksTest-global_facts|5_2023-08-11T11:34:56.294632.parquet' - config_name: harness_hendrycksTest_high_school_biology_5 data_files: - split: 2023_08_09T13_37_17.110700 path: - '**/details_harness|hendrycksTest-high_school_biology|5_2023-08-09T13:37:17.110700.parquet' - split: 2023_08_11T11_34_56.294632 path: - '**/details_harness|hendrycksTest-high_school_biology|5_2023-08-11T11:34:56.294632.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_biology|5_2023-08-11T11:34:56.294632.parquet' - config_name: harness_hendrycksTest_high_school_chemistry_5 data_files: - split: 2023_08_09T13_37_17.110700 path: - '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-08-09T13:37:17.110700.parquet' - split: 2023_08_11T11_34_56.294632 path: - '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-08-11T11:34:56.294632.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-08-11T11:34:56.294632.parquet' - config_name: harness_hendrycksTest_high_school_computer_science_5 data_files: - split: 2023_08_09T13_37_17.110700 path: - '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-08-09T13:37:17.110700.parquet' - split: 2023_08_11T11_34_56.294632 path: - '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-08-11T11:34:56.294632.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-08-11T11:34:56.294632.parquet' - config_name: harness_hendrycksTest_high_school_european_history_5 data_files: - split: 2023_08_09T13_37_17.110700 path: - '**/details_harness|hendrycksTest-high_school_european_history|5_2023-08-09T13:37:17.110700.parquet' - split: 
2023_08_11T11_34_56.294632 path: - '**/details_harness|hendrycksTest-high_school_european_history|5_2023-08-11T11:34:56.294632.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_european_history|5_2023-08-11T11:34:56.294632.parquet' - config_name: harness_hendrycksTest_high_school_geography_5 data_files: - split: 2023_08_09T13_37_17.110700 path: - '**/details_harness|hendrycksTest-high_school_geography|5_2023-08-09T13:37:17.110700.parquet' - split: 2023_08_11T11_34_56.294632 path: - '**/details_harness|hendrycksTest-high_school_geography|5_2023-08-11T11:34:56.294632.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_geography|5_2023-08-11T11:34:56.294632.parquet' - config_name: harness_hendrycksTest_high_school_government_and_politics_5 data_files: - split: 2023_08_09T13_37_17.110700 path: - '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-08-09T13:37:17.110700.parquet' - split: 2023_08_11T11_34_56.294632 path: - '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-08-11T11:34:56.294632.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-08-11T11:34:56.294632.parquet' - config_name: harness_hendrycksTest_high_school_macroeconomics_5 data_files: - split: 2023_08_09T13_37_17.110700 path: - '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-08-09T13:37:17.110700.parquet' - split: 2023_08_11T11_34_56.294632 path: - '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-08-11T11:34:56.294632.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-08-11T11:34:56.294632.parquet' - config_name: harness_hendrycksTest_high_school_mathematics_5 data_files: - split: 2023_08_09T13_37_17.110700 path: - '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-08-09T13:37:17.110700.parquet' - split: 2023_08_11T11_34_56.294632 path: - 
'**/details_harness|hendrycksTest-high_school_mathematics|5_2023-08-11T11:34:56.294632.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-08-11T11:34:56.294632.parquet' - config_name: harness_hendrycksTest_high_school_microeconomics_5 data_files: - split: 2023_08_09T13_37_17.110700 path: - '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-08-09T13:37:17.110700.parquet' - split: 2023_08_11T11_34_56.294632 path: - '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-08-11T11:34:56.294632.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-08-11T11:34:56.294632.parquet' - config_name: harness_hendrycksTest_high_school_physics_5 data_files: - split: 2023_08_09T13_37_17.110700 path: - '**/details_harness|hendrycksTest-high_school_physics|5_2023-08-09T13:37:17.110700.parquet' - split: 2023_08_11T11_34_56.294632 path: - '**/details_harness|hendrycksTest-high_school_physics|5_2023-08-11T11:34:56.294632.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_physics|5_2023-08-11T11:34:56.294632.parquet' - config_name: harness_hendrycksTest_high_school_psychology_5 data_files: - split: 2023_08_09T13_37_17.110700 path: - '**/details_harness|hendrycksTest-high_school_psychology|5_2023-08-09T13:37:17.110700.parquet' - split: 2023_08_11T11_34_56.294632 path: - '**/details_harness|hendrycksTest-high_school_psychology|5_2023-08-11T11:34:56.294632.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_psychology|5_2023-08-11T11:34:56.294632.parquet' - config_name: harness_hendrycksTest_high_school_statistics_5 data_files: - split: 2023_08_09T13_37_17.110700 path: - '**/details_harness|hendrycksTest-high_school_statistics|5_2023-08-09T13:37:17.110700.parquet' - split: 2023_08_11T11_34_56.294632 path: - '**/details_harness|hendrycksTest-high_school_statistics|5_2023-08-11T11:34:56.294632.parquet' - split: 
latest path: - '**/details_harness|hendrycksTest-high_school_statistics|5_2023-08-11T11:34:56.294632.parquet' - config_name: harness_hendrycksTest_high_school_us_history_5 data_files: - split: 2023_08_09T13_37_17.110700 path: - '**/details_harness|hendrycksTest-high_school_us_history|5_2023-08-09T13:37:17.110700.parquet' - split: 2023_08_11T11_34_56.294632 path: - '**/details_harness|hendrycksTest-high_school_us_history|5_2023-08-11T11:34:56.294632.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_us_history|5_2023-08-11T11:34:56.294632.parquet' - config_name: harness_hendrycksTest_high_school_world_history_5 data_files: - split: 2023_08_09T13_37_17.110700 path: - '**/details_harness|hendrycksTest-high_school_world_history|5_2023-08-09T13:37:17.110700.parquet' - split: 2023_08_11T11_34_56.294632 path: - '**/details_harness|hendrycksTest-high_school_world_history|5_2023-08-11T11:34:56.294632.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_world_history|5_2023-08-11T11:34:56.294632.parquet' - config_name: harness_hendrycksTest_human_aging_5 data_files: - split: 2023_08_09T13_37_17.110700 path: - '**/details_harness|hendrycksTest-human_aging|5_2023-08-09T13:37:17.110700.parquet' - split: 2023_08_11T11_34_56.294632 path: - '**/details_harness|hendrycksTest-human_aging|5_2023-08-11T11:34:56.294632.parquet' - split: latest path: - '**/details_harness|hendrycksTest-human_aging|5_2023-08-11T11:34:56.294632.parquet' - config_name: harness_hendrycksTest_human_sexuality_5 data_files: - split: 2023_08_09T13_37_17.110700 path: - '**/details_harness|hendrycksTest-human_sexuality|5_2023-08-09T13:37:17.110700.parquet' - split: 2023_08_11T11_34_56.294632 path: - '**/details_harness|hendrycksTest-human_sexuality|5_2023-08-11T11:34:56.294632.parquet' - split: latest path: - '**/details_harness|hendrycksTest-human_sexuality|5_2023-08-11T11:34:56.294632.parquet' - config_name: harness_hendrycksTest_international_law_5 
data_files: - split: 2023_08_09T13_37_17.110700 path: - '**/details_harness|hendrycksTest-international_law|5_2023-08-09T13:37:17.110700.parquet' - split: 2023_08_11T11_34_56.294632 path: - '**/details_harness|hendrycksTest-international_law|5_2023-08-11T11:34:56.294632.parquet' - split: latest path: - '**/details_harness|hendrycksTest-international_law|5_2023-08-11T11:34:56.294632.parquet' - config_name: harness_hendrycksTest_jurisprudence_5 data_files: - split: 2023_08_09T13_37_17.110700 path: - '**/details_harness|hendrycksTest-jurisprudence|5_2023-08-09T13:37:17.110700.parquet' - split: 2023_08_11T11_34_56.294632 path: - '**/details_harness|hendrycksTest-jurisprudence|5_2023-08-11T11:34:56.294632.parquet' - split: latest path: - '**/details_harness|hendrycksTest-jurisprudence|5_2023-08-11T11:34:56.294632.parquet' - config_name: harness_hendrycksTest_logical_fallacies_5 data_files: - split: 2023_08_09T13_37_17.110700 path: - '**/details_harness|hendrycksTest-logical_fallacies|5_2023-08-09T13:37:17.110700.parquet' - split: 2023_08_11T11_34_56.294632 path: - '**/details_harness|hendrycksTest-logical_fallacies|5_2023-08-11T11:34:56.294632.parquet' - split: latest path: - '**/details_harness|hendrycksTest-logical_fallacies|5_2023-08-11T11:34:56.294632.parquet' - config_name: harness_hendrycksTest_machine_learning_5 data_files: - split: 2023_08_09T13_37_17.110700 path: - '**/details_harness|hendrycksTest-machine_learning|5_2023-08-09T13:37:17.110700.parquet' - split: 2023_08_11T11_34_56.294632 path: - '**/details_harness|hendrycksTest-machine_learning|5_2023-08-11T11:34:56.294632.parquet' - split: latest path: - '**/details_harness|hendrycksTest-machine_learning|5_2023-08-11T11:34:56.294632.parquet' - config_name: harness_hendrycksTest_management_5 data_files: - split: 2023_08_09T13_37_17.110700 path: - '**/details_harness|hendrycksTest-management|5_2023-08-09T13:37:17.110700.parquet' - split: 2023_08_11T11_34_56.294632 path: - 
'**/details_harness|hendrycksTest-management|5_2023-08-11T11:34:56.294632.parquet' - split: latest path: - '**/details_harness|hendrycksTest-management|5_2023-08-11T11:34:56.294632.parquet' - config_name: harness_hendrycksTest_marketing_5 data_files: - split: 2023_08_09T13_37_17.110700 path: - '**/details_harness|hendrycksTest-marketing|5_2023-08-09T13:37:17.110700.parquet' - split: 2023_08_11T11_34_56.294632 path: - '**/details_harness|hendrycksTest-marketing|5_2023-08-11T11:34:56.294632.parquet' - split: latest path: - '**/details_harness|hendrycksTest-marketing|5_2023-08-11T11:34:56.294632.parquet' - config_name: harness_hendrycksTest_medical_genetics_5 data_files: - split: 2023_08_09T13_37_17.110700 path: - '**/details_harness|hendrycksTest-medical_genetics|5_2023-08-09T13:37:17.110700.parquet' - split: 2023_08_11T11_34_56.294632 path: - '**/details_harness|hendrycksTest-medical_genetics|5_2023-08-11T11:34:56.294632.parquet' - split: latest path: - '**/details_harness|hendrycksTest-medical_genetics|5_2023-08-11T11:34:56.294632.parquet' - config_name: harness_hendrycksTest_miscellaneous_5 data_files: - split: 2023_08_09T13_37_17.110700 path: - '**/details_harness|hendrycksTest-miscellaneous|5_2023-08-09T13:37:17.110700.parquet' - split: 2023_08_11T11_34_56.294632 path: - '**/details_harness|hendrycksTest-miscellaneous|5_2023-08-11T11:34:56.294632.parquet' - split: latest path: - '**/details_harness|hendrycksTest-miscellaneous|5_2023-08-11T11:34:56.294632.parquet' - config_name: harness_hendrycksTest_moral_disputes_5 data_files: - split: 2023_08_09T13_37_17.110700 path: - '**/details_harness|hendrycksTest-moral_disputes|5_2023-08-09T13:37:17.110700.parquet' - split: 2023_08_11T11_34_56.294632 path: - '**/details_harness|hendrycksTest-moral_disputes|5_2023-08-11T11:34:56.294632.parquet' - split: latest path: - '**/details_harness|hendrycksTest-moral_disputes|5_2023-08-11T11:34:56.294632.parquet' - config_name: harness_hendrycksTest_moral_scenarios_5 data_files: - 
split: 2023_08_09T13_37_17.110700 path: - '**/details_harness|hendrycksTest-moral_scenarios|5_2023-08-09T13:37:17.110700.parquet' - split: 2023_08_11T11_34_56.294632 path: - '**/details_harness|hendrycksTest-moral_scenarios|5_2023-08-11T11:34:56.294632.parquet' - split: latest path: - '**/details_harness|hendrycksTest-moral_scenarios|5_2023-08-11T11:34:56.294632.parquet' - config_name: harness_hendrycksTest_nutrition_5 data_files: - split: 2023_08_09T13_37_17.110700 path: - '**/details_harness|hendrycksTest-nutrition|5_2023-08-09T13:37:17.110700.parquet' - split: 2023_08_11T11_34_56.294632 path: - '**/details_harness|hendrycksTest-nutrition|5_2023-08-11T11:34:56.294632.parquet' - split: latest path: - '**/details_harness|hendrycksTest-nutrition|5_2023-08-11T11:34:56.294632.parquet' - config_name: harness_hendrycksTest_philosophy_5 data_files: - split: 2023_08_09T13_37_17.110700 path: - '**/details_harness|hendrycksTest-philosophy|5_2023-08-09T13:37:17.110700.parquet' - split: 2023_08_11T11_34_56.294632 path: - '**/details_harness|hendrycksTest-philosophy|5_2023-08-11T11:34:56.294632.parquet' - split: latest path: - '**/details_harness|hendrycksTest-philosophy|5_2023-08-11T11:34:56.294632.parquet' - config_name: harness_hendrycksTest_prehistory_5 data_files: - split: 2023_08_09T13_37_17.110700 path: - '**/details_harness|hendrycksTest-prehistory|5_2023-08-09T13:37:17.110700.parquet' - split: 2023_08_11T11_34_56.294632 path: - '**/details_harness|hendrycksTest-prehistory|5_2023-08-11T11:34:56.294632.parquet' - split: latest path: - '**/details_harness|hendrycksTest-prehistory|5_2023-08-11T11:34:56.294632.parquet' - config_name: harness_hendrycksTest_professional_accounting_5 data_files: - split: 2023_08_09T13_37_17.110700 path: - '**/details_harness|hendrycksTest-professional_accounting|5_2023-08-09T13:37:17.110700.parquet' - split: 2023_08_11T11_34_56.294632 path: - '**/details_harness|hendrycksTest-professional_accounting|5_2023-08-11T11:34:56.294632.parquet' - 
split: latest path: - '**/details_harness|hendrycksTest-professional_accounting|5_2023-08-11T11:34:56.294632.parquet' - config_name: harness_hendrycksTest_professional_law_5 data_files: - split: 2023_08_09T13_37_17.110700 path: - '**/details_harness|hendrycksTest-professional_law|5_2023-08-09T13:37:17.110700.parquet' - split: 2023_08_11T11_34_56.294632 path: - '**/details_harness|hendrycksTest-professional_law|5_2023-08-11T11:34:56.294632.parquet' - split: latest path: - '**/details_harness|hendrycksTest-professional_law|5_2023-08-11T11:34:56.294632.parquet' - config_name: harness_hendrycksTest_professional_medicine_5 data_files: - split: 2023_08_09T13_37_17.110700 path: - '**/details_harness|hendrycksTest-professional_medicine|5_2023-08-09T13:37:17.110700.parquet' - split: 2023_08_11T11_34_56.294632 path: - '**/details_harness|hendrycksTest-professional_medicine|5_2023-08-11T11:34:56.294632.parquet' - split: latest path: - '**/details_harness|hendrycksTest-professional_medicine|5_2023-08-11T11:34:56.294632.parquet' - config_name: harness_hendrycksTest_professional_psychology_5 data_files: - split: 2023_08_09T13_37_17.110700 path: - '**/details_harness|hendrycksTest-professional_psychology|5_2023-08-09T13:37:17.110700.parquet' - split: 2023_08_11T11_34_56.294632 path: - '**/details_harness|hendrycksTest-professional_psychology|5_2023-08-11T11:34:56.294632.parquet' - split: latest path: - '**/details_harness|hendrycksTest-professional_psychology|5_2023-08-11T11:34:56.294632.parquet' - config_name: harness_hendrycksTest_public_relations_5 data_files: - split: 2023_08_09T13_37_17.110700 path: - '**/details_harness|hendrycksTest-public_relations|5_2023-08-09T13:37:17.110700.parquet' - split: 2023_08_11T11_34_56.294632 path: - '**/details_harness|hendrycksTest-public_relations|5_2023-08-11T11:34:56.294632.parquet' - split: latest path: - '**/details_harness|hendrycksTest-public_relations|5_2023-08-11T11:34:56.294632.parquet' - config_name: 
harness_hendrycksTest_security_studies_5 data_files: - split: 2023_08_09T13_37_17.110700 path: - '**/details_harness|hendrycksTest-security_studies|5_2023-08-09T13:37:17.110700.parquet' - split: 2023_08_11T11_34_56.294632 path: - '**/details_harness|hendrycksTest-security_studies|5_2023-08-11T11:34:56.294632.parquet' - split: latest path: - '**/details_harness|hendrycksTest-security_studies|5_2023-08-11T11:34:56.294632.parquet' - config_name: harness_hendrycksTest_sociology_5 data_files: - split: 2023_08_09T13_37_17.110700 path: - '**/details_harness|hendrycksTest-sociology|5_2023-08-09T13:37:17.110700.parquet' - split: 2023_08_11T11_34_56.294632 path: - '**/details_harness|hendrycksTest-sociology|5_2023-08-11T11:34:56.294632.parquet' - split: latest path: - '**/details_harness|hendrycksTest-sociology|5_2023-08-11T11:34:56.294632.parquet' - config_name: harness_hendrycksTest_us_foreign_policy_5 data_files: - split: 2023_08_09T13_37_17.110700 path: - '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-08-09T13:37:17.110700.parquet' - split: 2023_08_11T11_34_56.294632 path: - '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-08-11T11:34:56.294632.parquet' - split: latest path: - '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-08-11T11:34:56.294632.parquet' - config_name: harness_hendrycksTest_virology_5 data_files: - split: 2023_08_09T13_37_17.110700 path: - '**/details_harness|hendrycksTest-virology|5_2023-08-09T13:37:17.110700.parquet' - split: 2023_08_11T11_34_56.294632 path: - '**/details_harness|hendrycksTest-virology|5_2023-08-11T11:34:56.294632.parquet' - split: latest path: - '**/details_harness|hendrycksTest-virology|5_2023-08-11T11:34:56.294632.parquet' - config_name: harness_hendrycksTest_world_religions_5 data_files: - split: 2023_08_09T13_37_17.110700 path: - '**/details_harness|hendrycksTest-world_religions|5_2023-08-09T13:37:17.110700.parquet' - split: 2023_08_11T11_34_56.294632 path: - 
'**/details_harness|hendrycksTest-world_religions|5_2023-08-11T11:34:56.294632.parquet' - split: latest path: - '**/details_harness|hendrycksTest-world_religions|5_2023-08-11T11:34:56.294632.parquet' - config_name: harness_truthfulqa_mc_0 data_files: - split: 2023_08_09T13_37_17.110700 path: - '**/details_harness|truthfulqa:mc|0_2023-08-09T13:37:17.110700.parquet' - split: 2023_08_11T11_34_56.294632 path: - '**/details_harness|truthfulqa:mc|0_2023-08-11T11:34:56.294632.parquet' - split: latest path: - '**/details_harness|truthfulqa:mc|0_2023-08-11T11:34:56.294632.parquet' - config_name: results data_files: - split: 2023_08_09T13_37_17.110700 path: - results_2023-08-09T13:37:17.110700.parquet - split: 2023_08_11T11_34_56.294632 path: - results_2023-08-11T11:34:56.294632.parquet - split: latest path: - results_2023-08-11T11:34:56.294632.parquet --- # Dataset Card for Evaluation run of psmathur/model_007_13b ## Dataset Description - **Homepage:** - **Repository:** https://huggingface.co/psmathur/model_007_13b - **Paper:** - **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard - **Point of Contact:** clementine@hf.co ### Dataset Summary Dataset automatically created during the evaluation run of model [psmathur/model_007_13b](https://huggingface.co/psmathur/model_007_13b) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard). The dataset is composed of 61 configurations, each one corresponding to one of the evaluated tasks. The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results. An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)). 
To load the details from a run, you can for instance do the following: ```python from datasets import load_dataset data = load_dataset("open-llm-leaderboard/details_psmathur__model_007_13b", "harness_truthfulqa_mc_0", split="train") ``` ## Latest results These are the [latest results from run 2023-08-11T11:34:56.294632](https://huggingface.co/datasets/open-llm-leaderboard/details_psmathur__model_007_13b/blob/main/results_2023-08-11T11%3A34%3A56.294632.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval): ```python { "all": { "acc": 0.2314240573187148, "acc_stderr": 0.03071122006512167, "acc_norm": 0.2314240573187148, "acc_norm_stderr": 0.03071122006512167, "mc1": 1.0, "mc1_stderr": 0.0, "mc2": NaN, "mc2_stderr": NaN }, "harness|arc:challenge|25": { "acc": 0.22696245733788395, "acc_stderr": 0.012240491536132861, "acc_norm": 0.22696245733788395, "acc_norm_stderr": 0.012240491536132861 }, "harness|hellaswag|10": { "acc": 0.2504481179047998, "acc_stderr": 0.004323856300539177, "acc_norm": 0.2504481179047998, "acc_norm_stderr": 0.004323856300539177 }, "harness|hendrycksTest-abstract_algebra|5": { "acc": 0.22, "acc_stderr": 0.04163331998932268, "acc_norm": 0.22, "acc_norm_stderr": 0.04163331998932268 }, "harness|hendrycksTest-anatomy|5": { "acc": 0.18518518518518517, "acc_stderr": 0.03355677216313142, "acc_norm": 0.18518518518518517, "acc_norm_stderr": 0.03355677216313142 }, "harness|hendrycksTest-astronomy|5": { "acc": 0.17763157894736842, "acc_stderr": 0.031103182383123398, "acc_norm": 0.17763157894736842, "acc_norm_stderr": 0.031103182383123398 }, "harness|hendrycksTest-business_ethics|5": { "acc": 0.3, "acc_stderr": 0.046056618647183814, "acc_norm": 0.3, "acc_norm_stderr": 0.046056618647183814 }, "harness|hendrycksTest-clinical_knowledge|5": { "acc": 0.21509433962264152, "acc_stderr": 0.02528839450289137, "acc_norm": 0.21509433962264152, 
"acc_norm_stderr": 0.02528839450289137 }, "harness|hendrycksTest-college_biology|5": { "acc": 0.2569444444444444, "acc_stderr": 0.03653946969442099, "acc_norm": 0.2569444444444444, "acc_norm_stderr": 0.03653946969442099 }, "harness|hendrycksTest-college_chemistry|5": { "acc": 0.2, "acc_stderr": 0.04020151261036845, "acc_norm": 0.2, "acc_norm_stderr": 0.04020151261036845 }, "harness|hendrycksTest-college_computer_science|5": { "acc": 0.26, "acc_stderr": 0.0440844002276808, "acc_norm": 0.26, "acc_norm_stderr": 0.0440844002276808 }, "harness|hendrycksTest-college_mathematics|5": { "acc": 0.21, "acc_stderr": 0.040936018074033256, "acc_norm": 0.21, "acc_norm_stderr": 0.040936018074033256 }, "harness|hendrycksTest-college_medicine|5": { "acc": 0.20809248554913296, "acc_stderr": 0.030952890217749874, "acc_norm": 0.20809248554913296, "acc_norm_stderr": 0.030952890217749874 }, "harness|hendrycksTest-college_physics|5": { "acc": 0.21568627450980393, "acc_stderr": 0.04092563958237654, "acc_norm": 0.21568627450980393, "acc_norm_stderr": 0.04092563958237654 }, "harness|hendrycksTest-computer_security|5": { "acc": 0.28, "acc_stderr": 0.045126085985421276, "acc_norm": 0.28, "acc_norm_stderr": 0.045126085985421276 }, "harness|hendrycksTest-conceptual_physics|5": { "acc": 0.26382978723404255, "acc_stderr": 0.028809989854102973, "acc_norm": 0.26382978723404255, "acc_norm_stderr": 0.028809989854102973 }, "harness|hendrycksTest-econometrics|5": { "acc": 0.23684210526315788, "acc_stderr": 0.039994238792813365, "acc_norm": 0.23684210526315788, "acc_norm_stderr": 0.039994238792813365 }, "harness|hendrycksTest-electrical_engineering|5": { "acc": 0.2413793103448276, "acc_stderr": 0.03565998174135302, "acc_norm": 0.2413793103448276, "acc_norm_stderr": 0.03565998174135302 }, "harness|hendrycksTest-elementary_mathematics|5": { "acc": 0.20899470899470898, "acc_stderr": 0.02094048156533486, "acc_norm": 0.20899470899470898, "acc_norm_stderr": 0.02094048156533486 }, 
"harness|hendrycksTest-formal_logic|5": { "acc": 0.2857142857142857, "acc_stderr": 0.04040610178208841, "acc_norm": 0.2857142857142857, "acc_norm_stderr": 0.04040610178208841 }, "harness|hendrycksTest-global_facts|5": { "acc": 0.18, "acc_stderr": 0.038612291966536934, "acc_norm": 0.18, "acc_norm_stderr": 0.038612291966536934 }, "harness|hendrycksTest-high_school_biology|5": { "acc": 0.1774193548387097, "acc_stderr": 0.02173254068932927, "acc_norm": 0.1774193548387097, "acc_norm_stderr": 0.02173254068932927 }, "harness|hendrycksTest-high_school_chemistry|5": { "acc": 0.15270935960591134, "acc_stderr": 0.02530890453938063, "acc_norm": 0.15270935960591134, "acc_norm_stderr": 0.02530890453938063 }, "harness|hendrycksTest-high_school_computer_science|5": { "acc": 0.25, "acc_stderr": 0.04351941398892446, "acc_norm": 0.25, "acc_norm_stderr": 0.04351941398892446 }, "harness|hendrycksTest-high_school_european_history|5": { "acc": 0.21818181818181817, "acc_stderr": 0.03225078108306289, "acc_norm": 0.21818181818181817, "acc_norm_stderr": 0.03225078108306289 }, "harness|hendrycksTest-high_school_geography|5": { "acc": 0.17676767676767677, "acc_stderr": 0.027178752639044915, "acc_norm": 0.17676767676767677, "acc_norm_stderr": 0.027178752639044915 }, "harness|hendrycksTest-high_school_government_and_politics|5": { "acc": 0.19689119170984457, "acc_stderr": 0.028697873971860664, "acc_norm": 0.19689119170984457, "acc_norm_stderr": 0.028697873971860664 }, "harness|hendrycksTest-high_school_macroeconomics|5": { "acc": 0.20256410256410257, "acc_stderr": 0.020377660970371372, "acc_norm": 0.20256410256410257, "acc_norm_stderr": 0.020377660970371372 }, "harness|hendrycksTest-high_school_mathematics|5": { "acc": 0.2111111111111111, "acc_stderr": 0.024882116857655075, "acc_norm": 0.2111111111111111, "acc_norm_stderr": 0.024882116857655075 }, "harness|hendrycksTest-high_school_microeconomics|5": { "acc": 0.21008403361344538, "acc_stderr": 0.026461398717471874, "acc_norm": 
0.21008403361344538, "acc_norm_stderr": 0.026461398717471874 }, "harness|hendrycksTest-high_school_physics|5": { "acc": 0.1986754966887417, "acc_stderr": 0.03257847384436776, "acc_norm": 0.1986754966887417, "acc_norm_stderr": 0.03257847384436776 }, "harness|hendrycksTest-high_school_psychology|5": { "acc": 0.1926605504587156, "acc_stderr": 0.016909276884936094, "acc_norm": 0.1926605504587156, "acc_norm_stderr": 0.016909276884936094 }, "harness|hendrycksTest-high_school_statistics|5": { "acc": 0.1527777777777778, "acc_stderr": 0.024536326026134224, "acc_norm": 0.1527777777777778, "acc_norm_stderr": 0.024536326026134224 }, "harness|hendrycksTest-high_school_us_history|5": { "acc": 0.25, "acc_stderr": 0.03039153369274154, "acc_norm": 0.25, "acc_norm_stderr": 0.03039153369274154 }, "harness|hendrycksTest-high_school_world_history|5": { "acc": 0.270042194092827, "acc_stderr": 0.028900721906293426, "acc_norm": 0.270042194092827, "acc_norm_stderr": 0.028900721906293426 }, "harness|hendrycksTest-human_aging|5": { "acc": 0.31390134529147984, "acc_stderr": 0.031146796482972465, "acc_norm": 0.31390134529147984, "acc_norm_stderr": 0.031146796482972465 }, "harness|hendrycksTest-human_sexuality|5": { "acc": 0.2595419847328244, "acc_stderr": 0.03844876139785271, "acc_norm": 0.2595419847328244, "acc_norm_stderr": 0.03844876139785271 }, "harness|hendrycksTest-international_law|5": { "acc": 0.2396694214876033, "acc_stderr": 0.03896878985070417, "acc_norm": 0.2396694214876033, "acc_norm_stderr": 0.03896878985070417 }, "harness|hendrycksTest-jurisprudence|5": { "acc": 0.25925925925925924, "acc_stderr": 0.042365112580946336, "acc_norm": 0.25925925925925924, "acc_norm_stderr": 0.042365112580946336 }, "harness|hendrycksTest-logical_fallacies|5": { "acc": 0.22085889570552147, "acc_stderr": 0.032591773927421776, "acc_norm": 0.22085889570552147, "acc_norm_stderr": 0.032591773927421776 }, "harness|hendrycksTest-machine_learning|5": { "acc": 0.3125, "acc_stderr": 0.043994650575715215, 
"acc_norm": 0.3125, "acc_norm_stderr": 0.043994650575715215 }, "harness|hendrycksTest-management|5": { "acc": 0.17475728155339806, "acc_stderr": 0.037601780060266224, "acc_norm": 0.17475728155339806, "acc_norm_stderr": 0.037601780060266224 }, "harness|hendrycksTest-marketing|5": { "acc": 0.2905982905982906, "acc_stderr": 0.02974504857267404, "acc_norm": 0.2905982905982906, "acc_norm_stderr": 0.02974504857267404 }, "harness|hendrycksTest-medical_genetics|5": { "acc": 0.3, "acc_stderr": 0.046056618647183814, "acc_norm": 0.3, "acc_norm_stderr": 0.046056618647183814 }, "harness|hendrycksTest-miscellaneous|5": { "acc": 0.23754789272030652, "acc_stderr": 0.015218733046150193, "acc_norm": 0.23754789272030652, "acc_norm_stderr": 0.015218733046150193 }, "harness|hendrycksTest-moral_disputes|5": { "acc": 0.24855491329479767, "acc_stderr": 0.023267528432100174, "acc_norm": 0.24855491329479767, "acc_norm_stderr": 0.023267528432100174 }, "harness|hendrycksTest-moral_scenarios|5": { "acc": 0.23798882681564246, "acc_stderr": 0.014242630070574915, "acc_norm": 0.23798882681564246, "acc_norm_stderr": 0.014242630070574915 }, "harness|hendrycksTest-nutrition|5": { "acc": 0.22549019607843138, "acc_stderr": 0.023929155517351284, "acc_norm": 0.22549019607843138, "acc_norm_stderr": 0.023929155517351284 }, "harness|hendrycksTest-philosophy|5": { "acc": 0.1864951768488746, "acc_stderr": 0.02212243977248077, "acc_norm": 0.1864951768488746, "acc_norm_stderr": 0.02212243977248077 }, "harness|hendrycksTest-prehistory|5": { "acc": 0.21604938271604937, "acc_stderr": 0.022899162918445806, "acc_norm": 0.21604938271604937, "acc_norm_stderr": 0.022899162918445806 }, "harness|hendrycksTest-professional_accounting|5": { "acc": 0.23404255319148937, "acc_stderr": 0.025257861359432417, "acc_norm": 0.23404255319148937, "acc_norm_stderr": 0.025257861359432417 }, "harness|hendrycksTest-professional_law|5": { "acc": 0.2457627118644068, "acc_stderr": 0.010996156635142692, "acc_norm": 0.2457627118644068, 
"acc_norm_stderr": 0.010996156635142692 }, "harness|hendrycksTest-professional_medicine|5": { "acc": 0.18382352941176472, "acc_stderr": 0.023529242185193106, "acc_norm": 0.18382352941176472, "acc_norm_stderr": 0.023529242185193106 }, "harness|hendrycksTest-professional_psychology|5": { "acc": 0.25, "acc_stderr": 0.01751781884501444, "acc_norm": 0.25, "acc_norm_stderr": 0.01751781884501444 }, "harness|hendrycksTest-public_relations|5": { "acc": 0.21818181818181817, "acc_stderr": 0.03955932861795833, "acc_norm": 0.21818181818181817, "acc_norm_stderr": 0.03955932861795833 }, "harness|hendrycksTest-security_studies|5": { "acc": 0.18775510204081633, "acc_stderr": 0.02500025603954621, "acc_norm": 0.18775510204081633, "acc_norm_stderr": 0.02500025603954621 }, "harness|hendrycksTest-sociology|5": { "acc": 0.24378109452736318, "acc_stderr": 0.03036049015401465, "acc_norm": 0.24378109452736318, "acc_norm_stderr": 0.03036049015401465 }, "harness|hendrycksTest-us_foreign_policy|5": { "acc": 0.28, "acc_stderr": 0.04512608598542128, "acc_norm": 0.28, "acc_norm_stderr": 0.04512608598542128 }, "harness|hendrycksTest-virology|5": { "acc": 0.28313253012048195, "acc_stderr": 0.03507295431370518, "acc_norm": 0.28313253012048195, "acc_norm_stderr": 0.03507295431370518 }, "harness|hendrycksTest-world_religions|5": { "acc": 0.3216374269005848, "acc_stderr": 0.03582529442573122, "acc_norm": 0.3216374269005848, "acc_norm_stderr": 0.03582529442573122 }, "harness|truthfulqa:mc|0": { "mc1": 1.0, "mc1_stderr": 0.0, "mc2": NaN, "mc2_stderr": NaN } } ``` ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language 
producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions [More Information Needed]
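The per-task scores in the results block above can be aggregated locally once loaded. A minimal self-contained sketch: the three accuracies below are copied from the `hendrycksTest` entries in the results JSON above, and the `mean_acc_norm` helper is illustrative, not part of the leaderboard tooling.

```python
# Aggregate per-task accuracies from an Open LLM Leaderboard results dict.
# The scores are copied from the results JSON above; any task subset works.
results = {
    "harness|hendrycksTest-abstract_algebra|5": {"acc_norm": 0.22},
    "harness|hendrycksTest-anatomy|5": {"acc_norm": 0.18518518518518517},
    "harness|hendrycksTest-astronomy|5": {"acc_norm": 0.17763157894736842},
}

def mean_acc_norm(results: dict) -> float:
    """Average acc_norm over every hendrycksTest task present."""
    scores = [
        v["acc_norm"]
        for k, v in results.items()
        if k.startswith("harness|hendrycksTest-")
    ]
    return sum(scores) / len(scores)

print(round(mean_acc_norm(results), 4))  # -> 0.1943
```

The same pattern applies to the full 57-task dictionary once it is loaded from the "results" configuration.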
78,832
[ [ -0.04974365234375, -0.057861328125, 0.0206451416015625, 0.0156097412109375, -0.01280975341796875, -0.00244903564453125, 0.0020999908447265625, -0.01219940185546875, 0.0401611328125, -0.00482940673828125, -0.034210205078125, -0.045135498046875, -0.031585693359375...
harvard-lil/cold-cases
2023-10-19T20:17:38.000Z
[ "size_categories:1M<n<10M", "language:en", "license:cc0-1.0", "united states", "law", "legal", "court", "opinions", "region:us" ]
harvard-lil
null
null
6
86
2023-09-12T17:29:50
--- license: cc0-1.0 language: - en tags: - united states - law - legal - court - opinions size_categories: - 1M<n<10M viewer: true --- <a href="https://huggingface.co/datasets/harvard-lil/cold-cases/resolve/main/coldcases.png"><img src="https://huggingface.co/datasets/harvard-lil/cold-cases/resolve/main/coldcases-banner.webp"/></a> # Collaborative Open Legal Data (COLD) - Cases COLD Cases is a dataset of 8.3 million United States legal decisions with text and metadata, formatted as compressed parquet files. If you'd like to view a sample of the dataset formatted as JSON Lines, you can view one [here](https://raw.githubusercontent.com/harvard-lil/cold-cases-export/main/sample.jsonl) This dataset exists to support the open legal movement exemplified by projects like [Pile of Law](https://huggingface.co/datasets/pile-of-law/pile-of-law) and [LegalBench](https://hazyresearch.stanford.edu/legalbench/). A key input to legal understanding projects is caselaw -- the published, precedential decisions of judges deciding legal disputes and explaining their reasoning. United States caselaw is collected and published as open data by [CourtListener](https://www.courtlistener.com/), which maintains scrapers to aggregate data from a wide range of public sources. COLD Cases reformats CourtListener's [bulk data](https://www.courtlistener.com/help/api/bulk-data) so that all of the semantic information about each legal decision (the authors and text of majority and dissenting opinions; head matter; and substantive metadata) is encoded in a single record per decision, with extraneous data removed. 
Serving in the traditional role of libraries as a standardization steward, the Harvard Library Innovation Lab is maintaining this [open source](https://github.com/harvard-lil/cold-cases-export) pipeline to consolidate the data engineering for preprocessing caselaw so downstream machine learning and natural language processing projects can use consistent, high quality representations of cases for legal understanding tasks. Prepared by the [Harvard Library Innovation Lab](https://lil.law.harvard.edu) in collaboration with the [Free Law Project](https://free.law/). --- ## Links - [Data nutrition label](https://datanutrition.org/labels/v3/?id=c29976b2-858c-4f4e-b7d0-c8ef12ce7dbe) (DRAFT). ([Archive](https://perma.cc/YV5P-B8JL)). - [Pipeline source code](https://github.com/harvard-lil/cold-cases-export) --- ## Summary - [Format](#format) - [Data dictionary](#data-dictionary) - [Notes on appropriate use](#notes-on-appropriate-use) --- ## Format [Apache Parquet](https://parquet.apache.org/) is a binary format that makes filtering and retrieving the data quicker because it lays out the data in columns, which means columns that are unnecessary to satisfy a given query or workflow don't need to be read. Hugging Face's [Datasets](https://huggingface.co/docs/datasets/index) library is an easy way to get started working with the entire dataset, and has features for loading and streaming the data, so you don't need to store it all locally or pay attention to how it's formatted on disk. [☝️ Go back to Summary](#summary) --- ## Data dictionary Partial glossary of the fields in the data. | Field name | Description | | --- | --- | | `judges` | Names of judges presiding over the case, extracted from the text. | | `date_filed` | Date the case was filed. Formatted in ISO Date format. | | `date_filed_is_approximate` | Boolean representing whether the `date_filed` value is precise to the day. 
| | `slug` | Short, human-readable unique string nickname for the case. | | `case_name_short` | Short name for the case. | | `case_name` | Fuller name for the case. | | `case_name_full` | Full, formal name for the case. | | `attorneys` | Names of attorneys arguing the case, extracted from the text. | | `nature_of_suit` | Free text representing type of suit, such as Civil, Tort, etc. | | `syllabus` | Summary of the questions addressed in the decision, if provided by the reporter of decisions. | | `headnotes` | Textual headnotes of the case | | `summary` | Textual summary of the case | | `disposition` | How the court disposed of the case in their final ruling. | | `history` | Textual information about what happened to this case in later decisions. | | `other_dates` | Other dates related to the case in free text. | | `cross_reference` | Citations to related cases. | | `citation_count` | Number of cases that cite this one. | | `precedential_status` | Constrained to the values "Published", "Unknown", "Errata", "Unpublished", "Relating-to", "Separate", "In-chambers" | | `citations` | Cases that cite this case. | | `court_short_name` | Short name of court presiding over case. | | `court_full_name` | Full name of court presiding over case. | | `court_jurisdiction` | Code for type of court that presided over the case. See: [court_jurisdiction field values](#court_jurisdiction-field-values) | | `opinions` | An array of subrecords. | | `opinions.author_str` | Name of the author of an individual opinion. | | `opinions.per_curiam` | Boolean representing whether the opinion was delivered by an entire court or a single judge. | | `opinions.type` | One of `"010combined"`, `"015unamimous"`, `"020lead"`, `"025plurality"`, `"030concurrence"`, `"035concurrenceinpart"`, `"040dissent"`, `"050addendum"`, `"060remittitur"`, `"070rehearing"`, `"080onthemerits"`, `"090onmotiontostrike"`. | | `opinions.opinion_text` | Actual full text of the opinion. 
| | `opinions.ocr` | Whether the opinion was captured via optical character recognition or born-digital text. | ### court_jurisdiction field values | Value | Description | | --- | --- | | F | Federal Appellate | | FD | Federal District | | FB | Federal Bankruptcy | | FBP | Federal Bankruptcy Panel | | FS | Federal Special | | S | State Supreme | | SA | State Appellate | | ST | State Trial | | SS | State Special | | TRS | Tribal Supreme | | TRA | Tribal Appellate | | TRT | Tribal Trial | | TRX | Tribal Special | | TS | Territory Supreme | | TA | Territory Appellate | | TT | Territory Trial | | TSP | Territory Special | | SAG | State Attorney General | | MA | Military Appellate | | MT | Military Trial | | C | Committee | | I | International | | T | Testing | [☝️ Go back to Summary](#summary) ## Notes on appropriate use When using this data, please keep in mind: * All documents in this dataset are public information, published by courts within the United States to inform the public about the law. **You have a right to access them.** * Nevertheless, **public court decisions frequently contain statements about individuals that are not true**. Court decisions often contain claims that are disputed, or false claims taken as true based on a legal technicality, or claims taken as true but later found to be false. Legal decisions are designed to inform you about the law -- they are not designed to inform you about individuals, and should not be used in place of credit databases, criminal records databases, news articles, or other sources intended to provide factual personal information. Applications should carefully consider whether use of this data will inform about the law, or mislead about individuals. * **Court decisions are not up-to-date statements of law**. Each decision provides a given judge's best understanding of the law as applied to the stated facts at the time of the decision. 
Use of this data to generate statements about the law requires integration of a large amount of context -- the skill typically provided by lawyers -- rather than simple data retrieval. To mitigate privacy risks, we have filtered out cases [blocked or deindexed by CourtListener](https://www.courtlistener.com/terms/#removal). Researchers who require access to the full dataset without that filter may rerun our pipeline on CourtListener's raw data. [☝️ Go back to Summary](#summary)
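To make the data dictionary and `court_jurisdiction` codes above concrete, here is a minimal sketch of filtering records by the documented metadata fields. The record below is invented for illustration (it is not a real case), and the helper functions are not part of the COLD pipeline; only the field names and code values are taken from the tables above.

```python
# Filter COLD Cases-style records by documented metadata fields.
# The record is hand-made for illustration; keys follow the data dictionary.
record = {
    "case_name_short": "Example v. Sample",
    "date_filed": "1999-07-01",
    "date_filed_is_approximate": False,
    "precedential_status": "Published",
    "court_jurisdiction": "F",  # "F" = Federal Appellate (see table above)
    "opinions": [
        {"author_str": "Doe", "type": "020lead", "per_curiam": False},
        {"author_str": "Roe", "type": "040dissent", "per_curiam": False},
    ],
}

def is_published_federal_appellate(rec: dict) -> bool:
    """True for published decisions from federal appellate courts."""
    return (
        rec["precedential_status"] == "Published"
        and rec["court_jurisdiction"] == "F"
    )

def dissent_authors(rec: dict) -> list:
    """Names of authors of dissenting opinions (type "040dissent")."""
    return [o["author_str"] for o in rec["opinions"] if o["type"] == "040dissent"]

print(is_published_federal_appellate(record), dissent_authors(record))
```

Because the dataset ships as Parquet, the same predicates can be pushed down to the columnar reader so that only the metadata columns are scanned.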
7,947
[ [ -0.0234375, -0.048980712890625, 0.054412841796875, 0.01396942138671875, -0.037567138671875, -0.0131683349609375, -0.0102691650390625, -0.01197052001953125, 0.034393310546875, 0.05865478515625, -0.02239990234375, -0.0755615234375, -0.036651611328125, -0.01152...
shengqin/web-attacks-long
2023-10-03T07:50:07.000Z
[ "region:us" ]
shengqin
null
null
1
86
2023-09-27T01:47:46
Entry not found
15
[ [ -0.02142333984375, -0.01495361328125, 0.05718994140625, 0.0288238525390625, -0.035064697265625, 0.046539306640625, 0.052520751953125, 0.005062103271484375, 0.0513916015625, 0.016998291015625, -0.052093505859375, -0.014984130859375, -0.060394287109375, 0.0379...
BubbleJoe/multi_nli_unified_input
2023-10-10T19:45:11.000Z
[ "region:us" ]
BubbleJoe
null
null
1
86
2023-10-10T19:39:20
--- configs: - config_name: default data_files: - split: train path: data/train-* - split: validation_matched path: data/validation_matched-* - split: validation_mismatched path: data/validation_mismatched-* dataset_info: features: - name: promptID dtype: int32 - name: pairID dtype: string - name: premise dtype: string - name: premise_binary_parse dtype: string - name: premise_parse dtype: string - name: hypothesis dtype: string - name: hypothesis_binary_parse dtype: string - name: hypothesis_parse dtype: string - name: genre dtype: string - name: label dtype: class_label: names: '0': entailment '1': neutral '2': contradiction - name: input dtype: string splits: - name: train num_bytes: 487186164 num_examples: 392702 - name: validation_matched num_bytes: 11956580 num_examples: 9815 - name: validation_mismatched num_bytes: 12618412 num_examples: 9832 download_size: 272284496 dataset_size: 511761156 --- # Dataset Card for "multi_nli_unified_input" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
1,260
[ [ -0.04180908203125, -0.0171356201171875, 0.0153656005859375, 0.030120849609375, 0.00023746490478515625, 0.005527496337890625, 0.0110626220703125, -0.01120758056640625, 0.058685302734375, 0.0300140380859375, -0.062103271484375, -0.04736328125, -0.034210205078125, ...
ofis_publik
2022-11-03T16:15:15.000Z
[ "task_categories:translation", "annotations_creators:found", "language_creators:found", "multilinguality:multilingual", "size_categories:10K<n<100K", "source_datasets:original", "language:br", "language:fr", "license:unknown", "region:us" ]
null
Texts from the Ofis Publik ar Brezhoneg (Breton Language Board) provided by Francis Tyers 2 languages, total number of files: 278 total number of tokens: 2.12M total number of sentence fragments: 0.13M
@InProceedings{TIEDEMANN12.463, author = {J{\"o}rg Tiedemann}, title = {Parallel Data, Tools and Interfaces in OPUS}, booktitle = {Proceedings of the Eight International Conference on Language Resources and Evaluation (LREC'12)}, year = {2012}, month = {may}, date = {23-25}, address = {Istanbul, Turkey}, editor = {Nicoletta Calzolari (Conference Chair) and Khalid Choukri and Thierry Declerck and Mehmet Ugur Dogan and Bente Maegaard and Joseph Mariani and Jan Odijk and Stelios Piperidis}, publisher = {European Language Resources Association (ELRA)}, isbn = {978-2-9517408-7-7}, language = {english} } @inproceedings{tyers-2009-rule, title = "Rule-Based Augmentation of Training Data in {B}reton-{F}rench Statistical Machine Translation", author = "Tyers, Francis M.", booktitle = "Proceedings of the 13th Annual conference of the European Association for Machine Translation", month = may # " 14{--}15", year = "2009", address = "Barcelona, Spain", publisher = "European Association for Machine Translation", url = "https://www.aclweb.org/anthology/2009.eamt-1.29", }
0
85
2022-03-02T23:29:22
--- annotations_creators: - found language_creators: - found language: - br - fr license: - unknown multilinguality: - multilingual size_categories: - 10K<n<100K source_datasets: - original task_categories: - translation task_ids: [] paperswithcode_id: null pretty_name: OfisPublik dataset_info: features: - name: id dtype: string - name: translation dtype: translation: languages: - br - fr config_name: br-fr splits: - name: train num_bytes: 12256825 num_examples: 63422 download_size: 3856983 dataset_size: 12256825 --- # Dataset Card for OfisPublik ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** http://opus.nlpl.eu/OfisPublik.php - **Repository:** None - **Paper:** http://www.lrec-conf.org/proceedings/lrec2012/pdf/463_Paper.pdf - **Leaderboard:** [More Information Needed] - **Point of Contact:** [More Information Needed] ### Dataset Summary [More Information Needed] ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset 
Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data [More Information Needed] #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations [More Information Needed] #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions Thanks to [@abhishekkrthakur](https://github.com/abhishekkrthakur) for adding this dataset.
3,232
[ [ -0.031707763671875, -0.029449462890625, 0.008941650390625, 0.0237579345703125, -0.0225982666015625, 0.019866943359375, -0.019744873046875, -0.0178070068359375, 0.047882080078125, 0.043212890625, -0.07244873046875, -0.0628662109375, -0.058807373046875, 0.0127...
opus_finlex
2022-11-03T16:08:11.000Z
[ "task_categories:translation", "annotations_creators:found", "language_creators:found", "multilinguality:translation", "size_categories:1M<n<10M", "source_datasets:original", "language:fi", "language:sv", "license:unknown", "region:us" ]
null
The Finlex Data Base is a comprehensive collection of legislative and other judicial information of Finland, which is available in Finnish, Swedish and partially in English. This corpus is taken from the Semantic Finlex service that provides the Finnish and Swedish data as linked open data and also raw XML files.
J. Tiedemann, 2012, Parallel Data, Tools and Interfaces in OPUS. In Proceedings of the 8th International Conference on Language Resources and Evaluation (LREC 2012)
1
85
2022-03-02T23:29:22
--- annotations_creators: - found language_creators: - found language: - fi - sv license: - unknown multilinguality: - translation size_categories: - 1M<n<10M source_datasets: - original task_categories: - translation task_ids: [] paperswithcode_id: null pretty_name: OpusFinlex dataset_info: features: - name: translation dtype: translation: languages: - fi - sv config_name: fi-sv splits: - name: train num_bytes: 610550215 num_examples: 3114141 download_size: 153886554 dataset_size: 610550215 --- # Dataset Card for [opus_finlex] ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:**[Finlex](http://opus.nlpl.eu/Finlex.php) - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary The Finlex Data Base is a comprehensive collection of legislative and other judicial information of Finland, which is available in Finnish, Swedish and partially in English. 
This corpus is taken from the Semantic Finlex service that provides the Finnish and Swedish data as linked open data and also raw XML files. ### Supported Tasks and Leaderboards The underlying task is machine translation between Swedish and Finnish. ### Languages Swedish and Finnish ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information J. Tiedemann, 2012, Parallel Data, Tools and Interfaces in OPUS. In Proceedings of the 8th International Conference on Language Resources and Evaluation (LREC 2012) ### Contributions Thanks to [@spatil6](https://github.com/spatil6) for adding this dataset.
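Per the `dataset_info` features above, each record carries a single `translation` field mapping language codes to parallel sentences. A minimal sketch of flattening such a record into a sentence pair — the record below is an illustrative placeholder shaped like the `fi-sv` feature, not an actual Finlex entry, and the helper is not part of the dataset:

```python
# Illustrative record shaped like the fi-sv "translation" feature above;
# the sentences are placeholders, not actual Finlex text.
example = {
    "translation": {
        "fi": "Tämä laki tulee voimaan 1 päivänä tammikuuta.",
        "sv": "Denna lag träder i kraft den 1 januari.",
    }
}

def to_pair(record, src="fi", tgt="sv"):
    """Flatten one OPUS-style record into a (source, target) tuple."""
    t = record["translation"]
    return t[src], t[tgt]

src_sent, tgt_sent = to_pair(example)
```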
3,496
[ [ -0.0284576416015625, -0.03131103515625, 0.019256591796875, 0.0197296142578125, -0.0244140625, -0.0004830360412597656, -0.0310821533203125, -0.026214599609375, 0.033599853515625, 0.057464599609375, -0.05682373046875, -0.0811767578125, -0.0469970703125, 0.0157...
opus_fiskmo
2022-11-03T16:08:01.000Z
[ "task_categories:translation", "annotations_creators:found", "language_creators:found", "multilinguality:translation", "size_categories:1M<n<10M", "source_datasets:original", "language:fi", "language:sv", "license:unknown", "region:us" ]
null
fiskmo, a massive parallel corpus for Finnish and Swedish.
J. Tiedemann, 2012, Parallel Data, Tools and Interfaces in OPUS. In Proceedings of the 8th International Conference on Language Resources and Evaluation (LREC 2012)
0
85
2022-03-02T23:29:22
--- annotations_creators: - found language_creators: - found language: - fi - sv license: - unknown multilinguality: - translation size_categories: - 1M<n<10M source_datasets: - original task_categories: - translation task_ids: [] paperswithcode_id: null pretty_name: OpusFiskmo dataset_info: features: - name: translation dtype: translation: languages: - fi - sv config_name: fi-sv splits: - name: train num_bytes: 326528834 num_examples: 2100001 download_size: 144858927 dataset_size: 326528834 --- # Dataset Card for [opus_fiskmo] ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:**[fiskmo](http://opus.nlpl.eu/fiskmo.php) - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary fiskmo, a massive parallel corpus for Finnish and Swedish. ### Supported Tasks and Leaderboards The underlying task is machine translation between Finnish and Swedish. 
### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information J. Tiedemann, 2012, Parallel Data, Tools and Interfaces in OPUS. In Proceedings of the 8th International Conference on Language Resources and Evaluation (LREC 2012) ### Contributions Thanks to [@spatil6](https://github.com/spatil6) for adding this dataset.
3,247
[ [ -0.028076171875, -0.0249786376953125, 0.0230712890625, 0.024017333984375, -0.032073974609375, 0.003963470458984375, -0.033966064453125, -0.0191192626953125, 0.0396728515625, 0.043182373046875, -0.068359375, -0.08074951171875, -0.056427001953125, 0.0228881835...
opus_montenegrinsubs
2022-11-03T16:08:11.000Z
[ "task_categories:translation", "annotations_creators:found", "language_creators:found", "multilinguality:translation", "size_categories:10K<n<100K", "source_datasets:original", "language:cnr", "language:en", "license:unknown", "region:us" ]
null
Opus MontenegrinSubs dataset for the machine translation task, for the language pair en-me: English and Montenegrin
J. Tiedemann, 2012, Parallel Data, Tools and Interfaces in OPUS. In Proceedings of the 8th International Conference on Language Resources and Evaluation (LREC 2012)
0
85
2022-03-02T23:29:22
--- annotations_creators: - found language_creators: - found language: - cnr - en license: - unknown multilinguality: - translation size_categories: - 10K<n<100K source_datasets: - original task_categories: - translation task_ids: [] paperswithcode_id: null pretty_name: OpusMontenegrinsubs dataset_info: features: - name: translation dtype: translation: languages: - en - me config_name: en-me splits: - name: train num_bytes: 4896403 num_examples: 65043 download_size: 1990570 dataset_size: 4896403 --- # Dataset Card for [opus_montenegrinsubs] ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:**[opus MontenegrinSubs ](http://opus.nlpl.eu/MontenegrinSubs.php) - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary Opus MontenegrinSubs dataset for the machine translation task, for the language pair en-me: English and Montenegrin ### Supported Tasks and Leaderboards The underlying task is machine translation from en to me ### Languages [More Information 
Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information J. Tiedemann, 2012, Parallel Data, Tools and Interfaces in OPUS. In Proceedings of the 8th International Conference on Language Resources and Evaluation (LREC 2012) ### Contributions Thanks to [@spatil6](https://github.com/spatil6) for adding this dataset.
3,309
[ [ -0.0281524658203125, -0.0305328369140625, 0.007312774658203125, 0.0309295654296875, -0.0310821533203125, 0.005031585693359375, -0.030975341796875, -0.018829345703125, 0.033905029296875, 0.04083251953125, -0.059051513671875, -0.0833740234375, -0.056304931640625, ...
pass
2022-11-03T16:15:51.000Z
[ "task_categories:other", "annotations_creators:no-annotation", "language_creators:machine-generated", "language_creators:expert-generated", "multilinguality:monolingual", "size_categories:1M<n<10M", "source_datasets:extended|yffc100M", "language:en", "license:cc-by-4.0", "image-self-supervised pre...
null
PASS (Pictures without humAns for Self-Supervision) is a large-scale dataset of 1,440,191 images that does not include any humans and which can be used for high-quality pretraining while significantly reducing privacy concerns. The PASS images are sourced from the YFCC-100M dataset.
@Article{asano21pass, author = "Yuki M. Asano and Christian Rupprecht and Andrew Zisserman and Andrea Vedaldi", title = "PASS: An ImageNet replacement for self-supervised pretraining without humans", journal = "NeurIPS Track on Datasets and Benchmarks", year = "2021" }
1
85
2022-03-02T23:29:22
--- annotations_creators: - no-annotation language_creators: - machine-generated - expert-generated language: - en license: - cc-by-4.0 multilinguality: - monolingual size_categories: - 1M<n<10M source_datasets: - extended|yffc100M task_categories: - other task_ids: [] paperswithcode_id: pass pretty_name: Pictures without humAns for Self-Supervision tags: - image-self-supervised pretraining dataset_info: features: - name: image dtype: image - name: creator_username dtype: string - name: hash dtype: string - name: gps_latitude dtype: float32 - name: gps_longitude dtype: float32 - name: date_taken dtype: timestamp[us] splits: - name: train num_bytes: 178563446100 num_examples: 1439588 download_size: 179640190811 dataset_size: 178563446100 --- # Dataset Card for PASS ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [PASS homepage](https://www.robots.ox.ac.uk/~vgg/research/pass/) - **Repository:** [PASS 
repository](https://github.com/yukimasano/PASS) - **Paper:** [PASS: An ImageNet replacement for self-supervised pretraining without humans](https://arxiv.org/abs/2109.13228) - **Leaderboard:** [Pretrained models with scores](https://github.com/yukimasano/PASS#pretrained-models) - **Point of Contact:** [Yuki M. Asano](mailto:yukiATMARKrobots.ox.ac.uk) ### Dataset Summary PASS is a large-scale image dataset, containing 1.4 million images, that does not include any humans and which can be used for high-quality pretraining while significantly reducing privacy concerns. ### Supported Tasks and Leaderboards From the paper: > **Has the dataset been used for any tasks already?** In the paper we show and benchmark the intended use of this dataset as a pretraining dataset. For this the dataset is used as an unlabelled image collection on which visual features are learned and then transferred to downstream tasks. We show that with this dataset it is possible to learn competitive visual features, without any humans in the pretraining dataset and with complete license information. > **Is there a repository that links to any or all papers or systems that use the dataset?** We will be listing these at the repository. > **What (other) tasks could the dataset be used for?** We believe this dataset might allow researchers and practitioners to further evaluate the differences that pretraining datasets can have on the learned features. Furthermore, since the meta-data is available for the images, it is possible to investigate the effect of image resolution on self-supervised learning methods, a domain largely under-researched thus far, as the current de-facto standard, ImageNet, only comes in one size. 
> **Is there anything about the composition of the dataset or the way it was collected and preprocessed/cleaned/labeled that might impact future uses?** Given that this dataset is a subset of a dataset that randomly samples images from flickr, the image distribution is biased towards European and American creators. As in the main paper's discussion, this can lead to non-generalizable features, or even biased features as the images taken in other countries might be more likely to further reflect and propagate stereotypes [84], though in our case these do not refer to stereotypes about humans. > **Are there tasks for which the dataset should not be used?** This dataset is meant for research purposes only. The dataset should also not be used for, e.g. connecting images and usernames, as this might risk de-anonymising the dataset in the long term. The usernames are solely provided for attribution. ### Languages English. ## Dataset Structure ### Data Instances A data point comprises an image and its meta-data: ``` { 'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=500x375 at 0x7FFAD48E35F8>, 'creator_username': 'NTShieldsy', 'hash': 'e1662344ffa8c231d198c367c692cc', 'gps_latitude': 21.206675, 'gps_longitude': 39.166558, 'date_taken': datetime.datetime(2012, 8, 9, 18, 0, 20) } ``` ### Data Fields - `image`: A `PIL.Image.Image` object containing the image. Note that when accessing the image column: `dataset[0]["image"]` the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. Thus it is important to first query the sample index before the `"image"` column, *i.e.* `dataset[0]["image"]` should **always** be preferred over `dataset["image"][0]`. - `creator_username`: The photographer. - `hash`: The hash, as computed from YFCC-100M. - `gps_latitude`: Latitude of image if existent, otherwise None. - `gps_longitude`: Longitude of image if existent, otherwise None. 
- `date_taken`: Datetime of image if existent, otherwise None. ### Data Splits All the data is contained in the training set. The training set has 1,439,588 instances as this implementation corresponds to the most recent release (v3) from the [version history](https://github.com/yukimasano/PASS/blob/main/version_history.txt). From the paper: > **Are there recommended data splits (e.g., training, development/validation, testing)?** As outlined in the intended use cases, this dataset is meant for pretraining representations. As such, the models derived from training on this dataset need to be evaluated on different datasets, so-called downstream tasks. Thus the recommended split is to use all samples for training. ## Dataset Creation ### Curation Rationale From the paper: > **For what purpose was the dataset created?** Neural networks pretrained on large image collections have been shown to transfer well to other visual tasks where there is little labelled data, i.e. transferring a model works better than starting with a randomly initialized network every time for a new task, as many visual features can be repurposed. This dataset has as its goal to provide a safer large-scale dataset for such pretraining of visual features. In particular, this dataset does not contain any humans or human parts and does not contain any labels. The first point is important, as the current standard for pretraining, ImageNet and its face-blurred version only provide pseudo-anonymity and furthermore do not provide correct licences to the creators. The second point is relevant as pretraining is moving towards the self-supervised paradigm, where labels are not required. Yet most methods are developed on the highly curated ImageNet dataset, yielding potentially non-generalizable research. 
### Source Data #### Initial Data Collection and Normalization From the paper: * **Collection process**: > **How was the data associated with each instance acquired?** The data was collected from the publicly available dataset YFCC-100M which is hosted on the AWS public datasets platform. We have used the meta-data, namely the copyright information to filter only images with the CC-BY licence and have downloaded these using the aws command line interface, allowing for quick and stable downloading. In addition, all files were subsequently scanned for viruses using Sophos SAVScan virus detection utility, v.5.74.0. > **What mechanisms or procedures were used to collect the data (e.g., hardware apparatus or sensor, manual human curation, software program, software API)?** Our dataset is a subset of the YFCC-100M dataset. The YFCC-100M dataset itself was created by effectively randomly selecting publicly available images from flickr, resulting in approximately 98M images. > **Does the dataset contain all possible instances or is it a sample (not necessarily random) of instances from a larger set?** The dataset is a sample of a larger set—all possible digital photographs. As outlined in Section 3 we start from an existing dataset, YFCC-100M, and stratify the images (removing images with people and personal information, removing images with harmful content, removing images with unsuitable licenses, each user contributes at most 80 images to the dataset). This leaves 1.6M images, out of which we take a random sample of 1.28M images to replicate the size of the ImageNet dataset. While this dataset can thus be extended, this is the set that we have verified to not contain humans, human parts and disturbing content. > **Over what timeframe was the data collected?** The images underlying the dataset were downloaded between March and June 2021 from the AWS public datasets’ S3 bucket, following the download code provided in the repo. 
However, the images contained were originally taken anywhere from 2000 to 2015, with the majority being shot between 2010-2014. * **Preprocessing/cleaning/labeling**: > **Was any preprocessing/cleaning/labeling of the data done (e.g., discretization or bucketing, tokenization, part-of-speech tagging, SIFT feature extraction, removal of instances, processing of missing values)?** After the download of approx. 17M images, the corrupted, or single-color images were removed from the dataset prior to the generation of the dataset(s) used in the paper. The images were not further preprocessed or edited. > **Was the “raw” data saved in addition to the preprocessed/cleaned/labeled data (e.g., to support unanticipated future uses)?** Yes. The creators of the dataset maintain a copy of the 17M original images with the CC-BY licence of YFCC100M that sits at the start of our dataset creation pipeline. > **Is the software used to preprocess/clean/label the instances available?** We have only used basic Python primitives for this. For the annotations we have used VIA [27, 28]. #### Who are the source language producers? From the paper: > **Who was involved in the data collection process (e.g., students, crowdworkers, contractors) and how were they compensated (e.g., how much were crowdworkers paid)?** As described, the data was collected automatically by simply downloading images from a publicly hosted S3 bucket. The human verification was done using a professional data annotation company that pays 150% of the local minimum wage. ### Annotations #### Annotation process This dataset doesn't contain annotations. #### Who are the annotators? This dataset doesn't contain annotations. ### Personal and Sensitive Information From the paper: > **Does the dataset contain data that might be considered confidential (e.g., data that is protected by legal privilege or by doctor-patient confidentiality, data that includes the content of individuals’ non-public communications)?** No. 
> **Does the dataset contain data that, if viewed directly, might be offensive, insulting, threatening, or might otherwise cause anxiety?** No. Besides checking for human presence in the images, the annotators were also given the choice of flagging images for disturbing content, which once flagged was removed. > **Does the dataset relate to people? If not, you may skip the remaining questions in this section.** No. > **Does the dataset identify any subpopulations (e.g., by age, gender)?** NA > **Is it possible to identify individuals (i.e., one or more natural persons), either directly or indirectly (i.e., in combination with other data) from the dataset?** NA > **Does the dataset contain data that might be considered sensitive in any way (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history)?** NA > **Were any ethical review processes conducted (e.g., by an institutional review board)?** No ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases From the paper: > **Is your dataset free of biases?** No. There are many kinds of biases that can either be quantified, e.g. geo-location (most images originate from the US and Europe) or camera-model (most images are taken with professional DSLR cameras not easily affordable), there are likely many more biases that this dataset does contain. The only thing that this dataset does not contain are humans and parts of humans, as far as our validation procedure is accurate. ### Other Known Limitations From the paper: > **Can you guarantee compliance to GDPR?** No, we cannot comment on legal issues. ## Additional Information ### Dataset Curators YM. Asano, C. Rupprecht, A. Zisserman and A. Vedaldi. 
From the paper: > **Who created the dataset (e.g., which team, research group) and on behalf of which entity (e.g., company, institution, organization)?** The dataset has been constructed by the research group “Visual Geometry Group” at the University of Oxford at the Engineering Science Department. ### Licensing Information The PASS dataset is available to download for commercial/research purposes under a [Creative Commons Attribution 4.0 International License](https://creativecommons.org/licenses/by/4.0/). A complete version of the license can be found [here](https://www.robots.ox.ac.uk/~vgg/research/pass/license_pass.txt). The whole dataset only contains CC-BY licensed images with full attribution information. ### Citation Information ```bibtex @Article{asano21pass, author = "Yuki M. Asano and Christian Rupprecht and Andrew Zisserman and Andrea Vedaldi", title = "PASS: An ImageNet replacement for self-supervised pretraining without humans", journal = "NeurIPS Track on Datasets and Benchmarks", year = "2021" } ``` ### Contributions Thanks to [@mariosasko](https://github.com/mariosasko) for adding this dataset.
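The meta-data fields described in the card above (`gps_latitude`, `gps_longitude`, `date_taken`) fall back to `None` when absent, so downstream filters have to guard against missing values. A minimal sketch, with the record mirroring the card's "Data Instances" example — the PIL image object is replaced by a filename placeholder so the snippet runs offline, and the `has_gps` helper is illustrative, not part of the dataset:

```python
from datetime import datetime

# Record mirroring the "Data Instances" example in the card above; the
# PIL image is replaced by a filename placeholder so this runs offline.
record = {
    "image": "placeholder.jpg",
    "creator_username": "NTShieldsy",
    "hash": "e1662344ffa8c231d198c367c692cc",
    "gps_latitude": 21.206675,
    "gps_longitude": 39.166558,
    "date_taken": datetime(2012, 8, 9, 18, 0, 20),
}

def has_gps(rec):
    """True when both GPS coordinates are present (they are None if absent)."""
    return rec["gps_latitude"] is not None and rec["gps_longitude"] is not None

assert has_gps(record)
```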
14,672
[ [ -0.048431396484375, -0.01995849609375, 0.0090789794921875, -0.00890350341796875, -0.0311126708984375, -0.0235748291015625, -0.0017099380493164062, -0.0384521484375, 0.0262603759765625, 0.046539306640625, -0.06134033203125, -0.051971435546875, -0.04168701171875, ...