author stringlengths 2 29 ⌀ | cardData null | citation stringlengths 0 9.58k ⌀ | description stringlengths 0 5.93k ⌀ | disabled bool 1 class | downloads float64 1 1M ⌀ | gated bool 2 classes | id stringlengths 2 108 | lastModified stringlengths 24 24 | paperswithcode_id stringlengths 2 45 ⌀ | private bool 2 classes | sha stringlengths 40 40 | siblings list | tags list | readme_url stringlengths 57 163 | readme stringlengths 0 977k |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Datatang | null | null | null | false | 2 | false | Datatang/Tibetan_Colloquial_Video_Speech_Data | 2022-06-24T09:00:53.000Z | null | false | a7ce20751e76c33b8e84b63dc4889d722999125b | [] | [] | https://huggingface.co/datasets/Datatang/Tibetan_Colloquial_Video_Speech_Data/resolve/main/README.md | ---
YAML tags:
- copy-paste the tags obtained with the tagging app: https://github.com/huggingface/datasets-tagging
---
# Dataset Card for Datatang/Tibetan_Colloquial_Video_Speech_Data
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://bit.ly/3HLf53a
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
300 Hours - Tibetan Colloquial Video Speech Data, collected from real websites and covering multiple fields. Various attributes such as text content and speaker identity are annotated. This dataset can be used for training voiceprint recognition models, building corpora for machine translation, and algorithm research.
For more details, please refer to the link: https://bit.ly/3HLf53a
### Supported Tasks and Leaderboards
automatic-speech-recognition, audio-speaker-identification: The dataset can be used to train a model for Automatic Speech Recognition (ASR).
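As a hedged illustration only (the actual delivery format of this commercial dataset is not documented in this card), ASR and speaker-identification corpora of this kind are typically shipped as audio files plus an annotation manifest. The JSON-lines layout and field names below are assumptions, not the dataset's real schema:

```python
import json

# Hypothetical manifest: one JSON object per line holding the audio path,
# the transcript, and the speaker identity. The real Datatang delivery
# format may differ; treat this purely as a sketch.
sample_manifest = """
{"audio": "clips/0001.wav", "text": "...", "speaker": "spk_001"}
{"audio": "clips/0002.wav", "text": "...", "speaker": "spk_002"}
""".strip()

def parse_manifest(raw: str):
    """Parse a JSON-lines manifest into (audio, text, speaker) records."""
    records = []
    for line in raw.splitlines():
        if not line.strip():
            continue
        entry = json.loads(line)
        records.append((entry["audio"], entry["text"], entry["speaker"]))
    return records

records = parse_manifest(sample_manifest)
print(len(records))   # number of utterances
print(records[0][2])  # speaker id of the first utterance
```

Records in this shape can feed either an ASR loader (audio + text) or a speaker-identification loader (audio + speaker).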
### Languages
Tibetan
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
Commercial License: https://drive.google.com/file/d/1saDCPm74D4UWfBL17VbkTsZLGfpOQj1J/view?usp=sharing
### Citation Information
[More Information Needed]
### Contributions
|
Datatang | null | null | null | false | 2 | false | Datatang/Filipino_Speech_Data_by_Mobile_Phone | 2022-06-24T08:54:24.000Z | null | false | 4d246f90773fb8b3c6e8fe69075d2300dbcec781 | [] | [] | https://huggingface.co/datasets/Datatang/Filipino_Speech_Data_by_Mobile_Phone/resolve/main/README.md | ---
YAML tags:
- copy-paste the tags obtained with the tagging app: https://github.com/huggingface/datasets-tagging
---
# Dataset Card for Datatang/Filipino_Speech_Data_by_Mobile_Phone
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://bit.ly/3zVeZ79
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
500 Hours - Filipino Speech Data by Mobile Phone. The data were recorded by Filipino speakers with authentic Filipino accents. The text is manually proofread with high accuracy. The recordings match mainstream Android and Apple phones.
For more details, please refer to the link: https://bit.ly/3zVeZ79
### Supported Tasks and Leaderboards
automatic-speech-recognition, audio-speaker-identification: The dataset can be used to train a model for Automatic Speech Recognition (ASR).
### Languages
Filipino
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
Commercial License: https://drive.google.com/file/d/1saDCPm74D4UWfBL17VbkTsZLGfpOQj1J/view?usp=sharing
### Citation Information
[More Information Needed]
### Contributions
|
Datatang | null | null | null | false | 2 | false | Datatang/Chinese_Mandarin_Synthesis_Corpus-Female-Customer_Service | 2022-06-24T08:54:33.000Z | null | false | 39cd2e384776e13a398aeb087207a8ec3a9107a9 | [] | [] | https://huggingface.co/datasets/Datatang/Chinese_Mandarin_Synthesis_Corpus-Female-Customer_Service/resolve/main/README.md | ---
YAML tags:
- copy-paste the tags obtained with the tagging app: https://github.com/huggingface/datasets-tagging
---
# Dataset Card for Datatang/Chinese_Mandarin_Synthesis_Corpus-Female-Customer_Service
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://bit.ly/3HFGh3c
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
Chinese Mandarin Synthesis Corpus-Female, Customer Service. It is recorded by Chinese native speakers with a lively and friendly voice. The phoneme coverage is balanced. A professional phonetician participates in the annotation. It precisely matches the research and development needs of speech synthesis.
For more details, please refer to the link: https://bit.ly/3HFGh3c
### Supported Tasks and Leaderboards
tts: The dataset can be used to train a model for Text to Speech (TTS).
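For TTS training, corpora like this are usually delivered as a script file mapping utterance ids to text, plus one audio file per id. As a hedged sketch, the id scheme ("000001"), the `wav/` directory layout, and the example sentences below are all assumptions for illustration, not the dataset's documented structure:

```python
# Hypothetical utterance-id -> text script; real ids and texts would come
# from the corpus delivery, not be hard-coded like this.
script = {
    "000001": "您好，请问有什么可以帮您？",
    "000002": "感谢您的来电，再见。",
}

def build_tts_pairs(script_map, audio_dir="wav"):
    """Return (audio_path, text) pairs ready for a TTS training loader."""
    return [(f"{audio_dir}/{utt_id}.wav", text)
            for utt_id, text in sorted(script_map.items())]

pairs = build_tts_pairs(script)
for path, text in pairs:
    print(path, text)
```

Each pair then serves as one (audio, transcript) training example for a TTS model.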
### Languages
Chinese Mandarin
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
Commercial License: https://drive.google.com/file/d/1saDCPm74D4UWfBL17VbkTsZLGfpOQj1J/view?usp=sharing
### Citation Information
[More Information Needed]
### Contributions
|
Datatang | null | null | null | false | 2 | false | Datatang/Chinese_Mandarin_Average_Tone_Speech_Synthesis_Corpus_General | 2022-06-24T08:54:57.000Z | null | false | 6d862468b638d55d6034a983e643b275dcd0679d | [] | [] | https://huggingface.co/datasets/Datatang/Chinese_Mandarin_Average_Tone_Speech_Synthesis_Corpus_General/resolve/main/README.md | ---
YAML tags:
- copy-paste the tags obtained with the tagging app: https://github.com/huggingface/datasets-tagging
---
# Dataset Card for Datatang/Chinese_Mandarin_Average_Tone_Speech_Synthesis_Corpus_General
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://bit.ly/3zQaN8B
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
100 People - Chinese Mandarin Average Tone Speech Synthesis Corpus, General. It is recorded by Chinese native speakers. It covers news, dialogue, audiobooks, poetry, advertising, news broadcasting, and entertainment, and the phonemes and tones are balanced. A professional phonetician participates in the annotation. It precisely matches the research and development needs of speech synthesis.
For more details, please refer to the link: https://bit.ly/3zQaN8B
### Supported Tasks and Leaderboards
tts: The dataset can be used to train a model for Text to Speech (TTS).
### Languages
Chinese Mandarin
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
Commercial License: https://drive.google.com/file/d/1saDCPm74D4UWfBL17VbkTsZLGfpOQj1J/view?usp=sharing
### Citation Information
[More Information Needed]
### Contributions
|
Datatang | null | null | null | false | 2 | false | Datatang/Chinese_Mandarin_Synthesis_Corpus-Female_General | 2022-06-24T08:54:44.000Z | null | false | 68bf016fc61d17d52ad1396ef0212fbec5a99e51 | [] | [] | https://huggingface.co/datasets/Datatang/Chinese_Mandarin_Synthesis_Corpus-Female_General/resolve/main/README.md | ---
YAML tags:
- copy-paste the tags obtained with the tagging app: https://github.com/huggingface/datasets-tagging
---
# Dataset Card for Datatang/Chinese_Mandarin_Synthesis_Corpus-Female_General
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://bit.ly/3HGQvQG
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
Chinese Mandarin Synthesis Corpus-Female, General. It is recorded by a Chinese native speaker. It covers oral sentences, audiobooks, news, advertising, customer service, and movie commentary, and the phonemes and tones are balanced. A professional phonetician participates in the annotation. It precisely matches the research and development needs of speech synthesis.
For more details, please refer to the link: https://bit.ly/3HGQvQG
### Supported Tasks and Leaderboards
tts: The dataset can be used to train a model for Text to Speech (TTS).
### Languages
Chinese Mandarin
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
Commercial License: https://drive.google.com/file/d/1saDCPm74D4UWfBL17VbkTsZLGfpOQj1J/view?usp=sharing
### Citation Information
[More Information Needed]
### Contributions
|
Datatang | null | null | null | false | 2 | false | Datatang/Chinese_Mandarin_Synthesis_Corpus-Female_Emotional | 2022-06-24T08:55:12.000Z | null | false | 192fa80334c77ffdd0cef2fc127b744cf275bd55 | [] | [] | https://huggingface.co/datasets/Datatang/Chinese_Mandarin_Synthesis_Corpus-Female_Emotional/resolve/main/README.md | ---
YAML tags:
- copy-paste the tags obtained with the tagging app: https://github.com/huggingface/datasets-tagging
---
# Dataset Card for Datatang/Chinese_Mandarin_Synthesis_Corpus-Female_Emotional
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://bit.ly/3zYDJLB
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
13.3 Hours - Chinese Mandarin Synthesis Corpus-Female, Emotional. It is recorded by a Chinese native speaker reading emotional text, and the syllables, phonemes, and tones are balanced. A professional phonetician participates in the annotation. It precisely matches the research and development needs of speech synthesis.
For more details, please refer to the link: https://bit.ly/3zYDJLB
### Supported Tasks and Leaderboards
tts: The dataset can be used to train a model for Text to Speech (TTS).
### Languages
Chinese Mandarin
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
Commercial License: https://drive.google.com/file/d/1saDCPm74D4UWfBL17VbkTsZLGfpOQj1J/view?usp=sharing
### Citation Information
[More Information Needed]
### Contributions
|
Datatang | null | null | null | false | 2 | false | Datatang/Chinese_Average_Tone_Speech_Synthesis_Corpus-Three_Styles | 2022-06-24T08:55:23.000Z | null | false | 1f329b76712e5e7fc97c386684ebf7c0389a962e | [] | [] | https://huggingface.co/datasets/Datatang/Chinese_Average_Tone_Speech_Synthesis_Corpus-Three_Styles/resolve/main/README.md | ---
YAML tags:
- copy-paste the tags obtained with the tagging app: https://github.com/huggingface/datasets-tagging
---
# Dataset Card for Datatang/Chinese_Average_Tone_Speech_Synthesis_Corpus-Three_Styles
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://bit.ly/3tOwuSr
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
50 People - Chinese Average Tone Speech Synthesis Corpus-Three Styles. It is recorded by Chinese native speakers. The corpus includes customer service, news, and story styles. The syllables, phonemes, and tones are balanced. A professional phonetician participates in the annotation. It precisely matches the research and development needs of speech synthesis.
For more details, please refer to the link: https://bit.ly/3tOwuSr
### Supported Tasks and Leaderboards
tts: The dataset can be used to train a model for Text to Speech (TTS).
### Languages
Chinese Mandarin
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
Commercial License: https://drive.google.com/file/d/1saDCPm74D4UWfBL17VbkTsZLGfpOQj1J/view?usp=sharing
### Citation Information
[More Information Needed]
### Contributions
|
Datatang | null | null | null | false | 1 | false | Datatang/Chinese_Mandarin_Songs_in_Acapella__Female | 2022-06-24T08:56:53.000Z | null | false | 0b08d4cbd1e1331889296ae2824ffa8a275bdb97 | [] | [] | https://huggingface.co/datasets/Datatang/Chinese_Mandarin_Songs_in_Acapella__Female/resolve/main/README.md | ---
YAML tags:
- copy-paste the tags obtained with the tagging app: https://github.com/huggingface/datasets-tagging
---
# Dataset Card for Datatang/Chinese_Mandarin_Songs_in_Acapella__Female
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://bit.ly/3HKUHPi
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
103 Chinese Mandarin Songs in A Cappella - Female. It is recorded by a professional Chinese singer with a sweet voice. A professional phonetician participates in the annotation. It precisely matches the research and development needs of song synthesis.
For more details, please refer to the link: https://bit.ly/3HKUHPi
### Supported Tasks and Leaderboards
tts: The dataset can be used to train a model for Text to Speech (TTS).
### Languages
Chinese Mandarin
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
Commercial License: https://drive.google.com/file/d/1saDCPm74D4UWfBL17VbkTsZLGfpOQj1J/view?usp=sharing
### Citation Information
[More Information Needed]
### Contributions
|
Datatang | null | null | null | false | 1 | false | Datatang/American_English_Speech_Synthesis_Corpus-Male | 2022-06-24T08:55:41.000Z | null | false | d6a778b49cb053d0d1107ea9a31e24a6d0136494 | [] | [] | https://huggingface.co/datasets/Datatang/American_English_Speech_Synthesis_Corpus-Male/resolve/main/README.md | ---
YAML tags:
- copy-paste the tags obtained with the tagging app: https://github.com/huggingface/datasets-tagging
---
# Dataset Card for Datatang/American_English_Speech_Synthesis_Corpus-Male
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://bit.ly/3HPdSrp
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
Male audio data of American English. It is recorded by native speakers of American English with an authentic accent. The phoneme coverage is balanced. A professional phonetician participates in the annotation. It precisely matches the research and development needs of speech synthesis.
For more details, please refer to the link: https://bit.ly/3HPdSrp
### Supported Tasks and Leaderboards
tts: The dataset can be used to train a model for Text to Speech (TTS).
### Languages
American English
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
Commercial License: https://drive.google.com/file/d/1saDCPm74D4UWfBL17VbkTsZLGfpOQj1J/view?usp=sharing
### Citation Information
[More Information Needed]
### Contributions
|
Datatang | null | null | null | false | 2 | false | Datatang/Chinese_Mandarin_Synthesis_Corpus-Female_Customer_Service_Conversational_Speech | 2022-06-24T08:57:04.000Z | null | false | 255fce9c5ed3650351a2a54653eadf8cd28c3b87 | [] | [] | https://huggingface.co/datasets/Datatang/Chinese_Mandarin_Synthesis_Corpus-Female_Customer_Service_Conversational_Speech/resolve/main/README.md | ---
YAML tags:
- copy-paste the tags obtained with the tagging app: https://github.com/huggingface/datasets-tagging
---
# Dataset Card for Datatang/Chinese_Mandarin_Synthesis_Corpus-Female_Customer_Service_Conversational_Speech
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://bit.ly/3tRCNoi
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
20 hours of Chinese Mandarin synthesis corpus: female, customer-service, conversational speech. It is recorded by native Chinese speakers with a sweet voice, and professional phoneticians participate in the annotation. It precisely matches the research and development needs of speech synthesis.
For more details, please refer to the link: https://bit.ly/3tRCNoi
### Supported Tasks and Leaderboards
tts: The dataset can be used to train a model for Text to Speech (TTS).
### Languages
Chinese Mandarin
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
Commercial License: https://drive.google.com/file/d/1saDCPm74D4UWfBL17VbkTsZLGfpOQj1J/view?usp=sharing
### Citation Information
[More Information Needed]
### Contributions
|
GEM-submissions | null | null | null | false | 1 | false | GEM-submissions/lewtun__this-is-a-test-name__1655900658 | 2022-06-22T12:24:21.000Z | null | false | 4b37cb089a33454dcfd5c1af2902e58464a41fdb | [] | [
"benchmark:gem",
"type:prediction",
"submission_name:This is a test name",
"tags:evaluation",
"tags:benchmark"
] | https://huggingface.co/datasets/GEM-submissions/lewtun__this-is-a-test-name__1655900658/resolve/main/README.md | ---
benchmark: gem
type: prediction
submission_name: This is a test name
tags:
- evaluation
- benchmark
---
# GEM Submission
Submission name: This is a test name
|
gopalkalpande | null | null | null | false | 3 | false | gopalkalpande/bbc-news-summary | 2022-06-22T13:08:15.000Z | null | false | e529817c203e680865a51ea9940f2ee1eb85b2af | [] | [
"license:cc0-1.0"
] | https://huggingface.co/datasets/gopalkalpande/bbc-news-summary/resolve/main/README.md | ---
license: cc0-1.0
---
# About Dataset
### Context
Text summarization is a way to condense a large amount of information into a concise form by selecting important information and discarding unimportant and redundant information. With the amount of textual information present on the World Wide Web, the area of text summarization is becoming very important. Extractive summarization uses the exact sentences present in the document as summaries. It is simpler and is the general practice among automatic text summarization researchers at present. The extractive summarization process involves scoring sentences using some method and then using the sentences that achieve the highest scores as summaries. Because the exact sentences present in the document are used, the semantic factor can be ignored, which results in a less calculation-intensive summarization procedure. This kind of summary is generally completely unsupervised and language-independent too. Although it does its job in conveying the essential information, it may not necessarily be smooth or fluent: sometimes there can be almost no connection between adjacent sentences in the summary, resulting in text lacking in readability.
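The score-and-select process described above can be sketched as follows. This is a minimal, hypothetical word-frequency scorer for illustration only — it is not the method used to produce this dataset's summaries:

```python
from collections import Counter

def extractive_summary(sentences, k=2):
    """Score each sentence by its average word frequency; keep the top-k in original order."""
    freq = Counter(w.lower() for s in sentences for w in s.split())
    scores = [sum(freq[w.lower()] for w in s.split()) / len(s.split()) for s in sentences]
    # Pick the k highest-scoring sentences, then restore document order for readability.
    ranked = sorted(range(len(sentences)), key=lambda i: -scores[i])[:k]
    return [sentences[i] for i in sorted(ranked)]
```

Restoring document order at the end mitigates (but does not solve) the readability issue noted above: adjacent summary sentences may still lack any connection.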
### Content
This dataset for extractive text summarization has four hundred and seventeen political news articles of the BBC from 2004 to 2005 in the News Articles folder. For each article, five summaries are provided in the Summaries folder. The first clause of each article's text is its title.
### Acknowledgements
This dataset was created from a dataset used for document categorization that consists of 2225 documents from the BBC news website, corresponding to stories in five topical areas from 2004-2005, used in the paper of D. Greene and P. Cunningham, "Practical Solutions to the Problem of Diagonal Dominance in Kernel Document Clustering", Proc. ICML 2006. All rights, including copyright, in the content of the original articles are owned by the BBC. More at http://mlg.ucd.ie/datasets/bbc.html
**Kaggle Link:** https://www.kaggle.com/datasets/pariza/bbc-news-summary |
GEM-submissions | null | null | null | false | 1 | false | GEM-submissions/lewtun__this-is-a-test-name__1655905032 | 2022-06-22T13:37:16.000Z | null | false | a5e415dfc7d7b5c370a6f8a4d18ffb679aa61f04 | [] | [
"benchmark:gem",
"type:prediction",
"submission_name:This is a test name",
"tags:evaluation",
"tags:benchmark"
] | https://huggingface.co/datasets/GEM-submissions/lewtun__this-is-a-test-name__1655905032/resolve/main/README.md | ---
benchmark: gem
type: prediction
submission_name: This is a test name
tags:
- evaluation
- benchmark
---
# GEM Submission
Submission name: This is a test name
|
imvladikon | null | @mastersthesis{naama,
title={Hebrew Named Entity Recognition},
author={Ben-Mordecai, Naama},
advisor={Elhadad, Michael},
year={2005},
url="https://www.cs.bgu.ac.il/~elhadad/nlpproj/naama/",
institution={Department of Computer Science, Ben-Gurion University},
school={Department of Computer Science, Ben-Gurion University},
},
@misc{bareket2020neural,
title={Neural Modeling for Named Entities and Morphology (NEMO^2)},
author={Dan Bareket and Reut Tsarfaty},
year={2020},
eprint={2007.15620},
archivePrefix={arXiv},
primaryClass={cs.CL}
} | \ | false | 12 | false | imvladikon/bmc | 2022-07-01T19:21:08.000Z | null | false | 986013ac7e11240aba94c8734206dc7e94fad39a | [] | [
"arxiv:2007.15620",
"annotations_creators:crowdsourced",
"language_creators:found",
"language:he",
"license:other",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:extended|other-reuters-corpus",
"task_categories:token-classification",
"task_ids:named-entity-recogniti... | https://huggingface.co/datasets/imvladikon/bmc/resolve/main/README.md | ---
annotations_creators:
- crowdsourced
language_creators:
- found
language:
- he
license:
- other
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- extended|other-reuters-corpus
task_categories:
- token-classification
task_ids:
- named-entity-recognition
train-eval-index:
- config: bmc
task: token-classification
task_id: entity_extraction
splits:
train_split: train
eval_split: validation
test_split: test
col_mapping:
tokens: tokens
ner_tags: tags
metrics:
- type: seqeval
name: seqeval
---
# Splits for the Ben-Mordecai and Elhadad Hebrew NER Corpus (BMC)
In order to evaluate performance in accordance with the original Ben-Mordecai and Elhadad (2005) work, we provide three 75%-25% random splits.
* Only the 7 entity categories viable for evaluation were kept (DATE, LOC, MONEY, ORG, PER, PERCENT, TIME) --- all MISC entities were filtered out.
* Sequence label scheme was changed from IOB to BIOES
* The dev sets are 10% taken out of the 75%
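The IOB-to-BIOES relabeling mentioned above can be sketched as follows. This is an illustrative conversion assuming well-formed IOB2 input (`B-` opens a span, `I-` continues it), not the script actually used to build these splits:

```python
def iob2_to_bioes(tags):
    """Relabel an IOB2 tag sequence as BIOES.

    Single-token spans become S-, the last token of a multi-token span
    becomes E-, and O tags are unchanged.
    """
    bioes = []
    for i, tag in enumerate(tags):
        if tag == "O":
            bioes.append("O")
            continue
        prefix, label = tag.split("-", 1)
        nxt = tags[i + 1] if i + 1 < len(tags) else "O"
        span_continues = nxt == "I-" + label
        if prefix == "B":
            bioes.append(("B-" if span_continues else "S-") + label)
        else:  # prefix == "I"
            bioes.append(("I-" if span_continues else "E-") + label)
    return bioes
```

For example, `["B-PER", "I-PER", "O", "B-LOC"]` becomes `["B-PER", "E-PER", "O", "S-LOC"]`.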
## Citation
If you use the BMC corpus, please cite the original paper as well as our paper, which describes the splits:
* Ben-Mordecai and Elhadad (2005):
```console
@mastersthesis{naama,
title={Hebrew Named Entity Recognition},
author={Ben-Mordecai, Naama},
advisor={Elhadad, Michael},
year={2005},
url="https://www.cs.bgu.ac.il/~elhadad/nlpproj/naama/",
institution={Department of Computer Science, Ben-Gurion University},
school={Department of Computer Science, Ben-Gurion University},
}
```
* Bareket and Tsarfaty (2020)
```console
@misc{bareket2020neural,
title={Neural Modeling for Named Entities and Morphology (NEMO^2)},
author={Dan Bareket and Reut Tsarfaty},
year={2020},
eprint={2007.15620},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
osanseviero | null | null | null | false | 1 | false | osanseviero/kaggle-animal-crossing-new-horizons-nookplaza-dataset | 2022-10-25T10:32:48.000Z | null | false | c80a08ca133af5409d996361a5ae8fd57e2a3e38 | [] | [
"kaggle_id:jessicali9530/animal-crossing-new-horizons-nookplaza-dataset",
"license:cc0-1.0"
] | https://huggingface.co/datasets/osanseviero/kaggle-animal-crossing-new-horizons-nookplaza-dataset/resolve/main/README.md | ---
kaggle_id: jessicali9530/animal-crossing-new-horizons-nookplaza-dataset
license:
- cc0-1.0
---
# Dataset Card for Animal Crossing New Horizons Catalog
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://kaggle.com/datasets/jessicali9530/animal-crossing-new-horizons-nookplaza-dataset
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
### Context
This dataset comes from this [spreadsheet](https://tinyurl.com/acnh-sheet), a comprehensive Item Catalog for Animal Crossing New Horizons (ACNH). As described by [Wikipedia](https://en.wikipedia.org/wiki/Animal_Crossing:_New_Horizons),
> ACNH is a life simulation game released by Nintendo for Nintendo Switch on March 20, 2020. It is the fifth main series title in the Animal Crossing series and, with 5 million digital copies sold, has broken the record for Switch title with most digital units sold in a single month. In New Horizons, the player assumes the role of a customizable character who moves to a deserted island. Taking place in real-time, the player can explore the island in a nonlinear fashion, gathering and crafting items, catching insects and fish, and developing the island into a community of anthropomorphic animals.
### Content
There are 30 csvs each listing various items, villagers, clothing, and other collectibles from the game. The data was collected by a dedicated group of AC fans who continue to collaborate and build this [spreadsheet](https://tinyurl.com/acnh-sheet) for public use. The database contains the original data and full list of contributors and raw data. At the time of writing, the only difference between the spreadsheet and this version is that the Kaggle version omits all columns with images of the items, but is otherwise identical.
### Acknowledgements
Thanks to every contributor listed on the [spreadsheet!](https://tinyurl.com/acnh-sheet) Please attribute this spreadsheet and group for any use of the data. They also have a Discord server linked in the spreadsheet in case you want to contact them.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
This dataset was shared by [@jessicali9530](https://kaggle.com/jessicali9530)
### Licensing Information
The license for this dataset is cc0-1.0
### Citation Information
```bibtex
[More Information Needed]
```
### Contributions
[More Information Needed] |
GEM-submissions | null | null | null | false | 2 | false | GEM-submissions/lewtun__this-is-a-test-name__1655913671 | 2022-06-22T16:01:18.000Z | null | false | 934f9ae2fe4a4bacc3fa69d6a0aeefccf247377a | [] | [
"benchmark:gem",
"type:prediction",
"submission_name:This is a test name",
"tags:evaluation",
"tags:benchmark"
] | https://huggingface.co/datasets/GEM-submissions/lewtun__this-is-a-test-name__1655913671/resolve/main/README.md | ---
benchmark: gem
type: prediction
submission_name: This is a test name
tags:
- evaluation
- benchmark
---
# GEM Submission
Submission name: This is a test name
|
GEM-submissions | null | null | null | false | 2 | false | GEM-submissions/lewtun__this-is-a-test-name__1655913794 | 2022-06-22T16:03:20.000Z | null | false | 6fc1cfe32c1b2f07ddbdbbf7c800826ee2781f12 | [] | [
"benchmark:gem",
"type:prediction",
"submission_name:This is a test name",
"tags:evaluation",
"tags:benchmark"
] | https://huggingface.co/datasets/GEM-submissions/lewtun__this-is-a-test-name__1655913794/resolve/main/README.md | ---
benchmark: gem
type: prediction
submission_name: This is a test name
tags:
- evaluation
- benchmark
---
# GEM Submission
Submission name: This is a test name
|
GEM-submissions | null | null | null | false | 2 | false | GEM-submissions/lewtun__this-is-a-test-name__1655913835 | 2022-06-22T16:04:02.000Z | null | false | a7dcebed891356a4bf9963ca8519f7bef271698b | [] | [
"benchmark:gem",
"type:prediction",
"submission_name:This is a test name",
"tags:evaluation",
"tags:benchmark"
] | https://huggingface.co/datasets/GEM-submissions/lewtun__this-is-a-test-name__1655913835/resolve/main/README.md | ---
benchmark: gem
type: prediction
submission_name: This is a test name
tags:
- evaluation
- benchmark
---
# GEM Submission
Submission name: This is a test name
|
GEM-submissions | null | null | null | false | 1 | false | GEM-submissions/lewtun__this-is-a-test-name__1655913900 | 2022-06-22T16:05:06.000Z | null | false | 19ce9b32b6f7a676acee224d2b986636e583f3d3 | [] | [
"benchmark:gem",
"type:prediction",
"submission_name:This is a test name",
"tags:evaluation",
"tags:benchmark"
] | https://huggingface.co/datasets/GEM-submissions/lewtun__this-is-a-test-name__1655913900/resolve/main/README.md | ---
benchmark: gem
type: prediction
submission_name: This is a test name
tags:
- evaluation
- benchmark
---
# GEM Submission
Submission name: This is a test name
|
GEM-submissions | null | null | null | false | 1 | false | GEM-submissions/lewtun__this-is-a-test-name__1655914374 | 2022-06-22T16:13:01.000Z | null | false | 0a1065290fa91ec7b57e3d5ebea57f985b0d106f | [] | [
"benchmark:gem",
"type:prediction",
"submission_name:This is a test name",
"tags:evaluation",
"tags:benchmark"
] | https://huggingface.co/datasets/GEM-submissions/lewtun__this-is-a-test-name__1655914374/resolve/main/README.md | ---
benchmark: gem
type: prediction
submission_name: This is a test name
tags:
- evaluation
- benchmark
---
# GEM Submission
Submission name: This is a test name
|
phihung | null | null | null | false | 3 | false | phihung/titanic | 2022-06-22T16:25:32.000Z | null | false | 9753139e0b9d454ab4fd22e884290260db5fc7b6 | [] | [
"license:other"
] | https://huggingface.co/datasets/phihung/titanic/resolve/main/README.md | ---
license: other
---
The legendary Titanic dataset from [this](https://www.kaggle.com/competitions/titanic/overview) Kaggle competition.
tykimos | null | null | null | false | 2 | false | tykimos/company_rules | 2022-06-22T17:23:52.000Z | null | false | a8becac0d70a7cb499edbd1f0b480bc24f733a86 | [] | [
"license:afl-3.0"
] | https://huggingface.co/datasets/tykimos/company_rules/resolve/main/README.md | ---
license: afl-3.0
---
|
jalFaizy | null | null | The "Object Detection for Chess Pieces" dataset is a toy dataset created (as suggested by the name!) to introduce object detection in a beginner friendly way. | false | 5 | false | jalFaizy/detect_chess_pieces | 2022-10-25T10:34:41.000Z | null | false | 31cc015b8ffbafc4168ccef186e3045b181deaf8 | [] | [
"annotations_creators:machine-generated",
"language_creators:machine-generated",
"language:en",
"license:other",
"multilinguality:monolingual",
"size_categories:n<1K",
"task_categories:object-detection"
] | https://huggingface.co/datasets/jalFaizy/detect_chess_pieces/resolve/main/README.md | ---
annotations_creators:
- machine-generated
language_creators:
- machine-generated
language:
- en
license:
- other
multilinguality:
- monolingual
pretty_name: Object Detection for Chess Pieces
size_categories:
- n<1K
source_datasets: []
task_categories:
- object-detection
task_ids: []
---
# Dataset Card for Object Detection for Chess Pieces
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-instances)
- [Data Splits](#data-instances)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** https://github.com/faizankshaikh/chessDetection
- **Repository:** https://github.com/faizankshaikh/chessDetection
- **Paper:** -
- **Leaderboard:** -
- **Point of Contact:** [Faizan Shaikh](mailto:faizankshaikh@gmail.com)
### Dataset Summary
The "Object Detection for Chess Pieces" dataset is a toy dataset created (as suggested by the name!) to introduce object detection in a beginner friendly way. It is structured in a one object-one image manner, with the objects being of four classes, namely, Black King, White King, Black Queen and White Queen
### Supported Tasks and Leaderboards
- `object-detection`: The dataset can be used to train and evaluate simplistic object detection models
### Languages
The text (labels) in the dataset is in English
## Dataset Structure
### Data Instances
A data point comprises an image and the corresponding objects in bounding boxes.
```
{
'image': <PIL.PngImagePlugin.PngImageFile image mode=RGB size=224x224 at 0x23557C66160>,
'objects': { "label": [ 0 ], "bbox": [ [ 151, 151, 26, 26 ] ] }
}
```
### Data Fields
- `image`: A `PIL.Image.Image` object containing the 224x224 image.
- `label`: An integer between 0 and 3 representing the classes with the following mapping:
| Label | Description |
| --- | --- |
| 0 | blackKing |
| 1 | blackQueen |
| 2 | whiteKing |
| 3 | whiteQueen |
- `bbox`: A list of integers having sequence [x_center, y_center, width, height] for a particular bounding box
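Since `bbox` stores `[x_center, y_center, width, height]`, a small helper can convert it to corner coordinates for plotting or IoU computation. This is a hedged sketch, not part of the dataset's own loading code:

```python
def center_to_corners(bbox):
    """Convert [x_center, y_center, width, height] to [x_min, y_min, x_max, y_max]."""
    x_c, y_c, w, h = bbox
    return [x_c - w / 2, y_c - h / 2, x_c + w / 2, y_c + h / 2]
```

Applied to the sample instance above, `center_to_corners([151, 151, 26, 26])` yields `[138.0, 138.0, 164.0, 164.0]`.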
### Data Splits
The data is split into training and validation set. The training set contains 204 images and the validation set 52 images.
## Dataset Creation
### Curation Rationale
The dataset was created to be a simple benchmark for object detection
### Source Data
#### Initial Data Collection and Normalization
The data is obtained by machine-generating images with the "python-chess" library. Please refer to [this code](https://github.com/faizankshaikh/chessDetection/blob/main/code/1.1%20create_images_with_labels.ipynb) to understand the data generation pipeline.
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
The annotations were done manually.
#### Who are the annotators?
The annotations were done manually.
### Personal and Sensitive Information
None
## Considerations for Using the Data
### Social Impact of Dataset
The dataset can be considered as a beginner-friendly toy dataset for object detection. It should not be used for benchmarking state of the art object detection models, or be used for a deployed model.
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
The dataset only contains four classes for simplicity. The complexity can be increased by considering all types of chess pieces, and by making it a multi-object detection problem
## Additional Information
### Dataset Curators
The dataset was created by Faizan Shaikh
### Licensing Information
The dataset is licensed as CC-BY-SA:2.0
### Citation Information
[Needs More Information] |
nateraw | null | null | null | false | 1 | false | nateraw/parti-prompts | 2022-06-22T19:17:49.000Z | null | false | 944b156abfdad7627c3221b5ec4f6a6fb060a197 | [] | [
"license:apache-2.0"
] | https://huggingface.co/datasets/nateraw/parti-prompts/resolve/main/README.md | ---
license: apache-2.0
---
# Dataset Card for PartiPrompts (P2)
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://parti.research.google/
- **Repository:** https://github.com/google-research/parti
- **Paper:** https://gweb-research-parti.web.app/parti_paper.pdf
### Dataset Summary
PartiPrompts (P2) is a rich set of over 1600 prompts in English that we release
as part of this work. P2 can be used to measure model capabilities across
various categories and challenge aspects.

P2 prompts can be simple, allowing us to gauge the progress from scaling. They
can also be complex, such as the following 67-word description we created for
Vincent van Gogh’s *The Starry Night* (1889):
*Oil-on-canvas painting of a blue night sky with roiling energy. A fuzzy and
bright yellow crescent moon shining at the top. Below the exploding yellow stars
and radiating swirls of blue, a distant village sits quietly on the right.
Connecting earth and sky is a flame-like cypress tree with curling and swaying
branches on the left. A church spire rises as a beacon over rolling blue hills.*
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
The text descriptions are in English.
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
The license for this dataset is the apache-2.0 license.
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@nateraw](https://github.com/nateraw) for adding this dataset. |
iohadrubin | null | false | 2 | false | iohadrubin/mapped_nq | 2022-06-22T20:18:51.000Z | null | false | 5577fca208b28d0b227eb24cdb9696bae5b99bea | [] | [
"license:apache-2.0"
] | https://huggingface.co/datasets/iohadrubin/mapped_nq/resolve/main/README.md | ---
license: apache-2.0
---
| ||
GEM-submissions | null | null | null | false | 2 | false | GEM-submissions/lewtun__this-is-a-test-name__1655928558 | 2022-06-22T20:09:24.000Z | null | false | c112309ffcd4fb1a8f1567b2941be69bafd8ce24 | [] | [
"benchmark:gem",
"type:prediction",
"submission_name:This is a test name",
"tags:evaluation",
"tags:benchmark"
] | https://huggingface.co/datasets/GEM-submissions/lewtun__this-is-a-test-name__1655928558/resolve/main/README.md | ---
benchmark: gem
type: prediction
submission_name: This is a test name
tags:
- evaluation
- benchmark
---
# GEM Submission
Submission name: This is a test name
|
shouzen | null | null | null | false | 2 | false | shouzen/final_data_sale | 2022-06-23T10:53:35.000Z | null | false | c34843b2d4e69200f273ac50a9f2b8a46c35f8a8 | [] | [] | https://huggingface.co/datasets/shouzen/final_data_sale/resolve/main/README.md | |
martosinc | null | null | null | false | 2 | false | martosinc/morrowtext | 2022-06-22T23:17:49.000Z | null | false | fcc14d4bc7b2c7d4270ffe34355a62229dbb0838 | [] | [
"license:mit"
] | https://huggingface.co/datasets/martosinc/morrowtext/resolve/main/README.md | ---
license: mit
---
Contains all TES3:Morrowind dialogues and journal queries.
There are in total 4 labels: Journal, Greeting, Persuasion, Topic (Last one being the usual dialogues).
The text is already formatted and does not contain duplicates or NaNs. |
justpyschitry | null | null | null | false | 2 | false | justpyschitry/autotrain-data-Wikipeida_Article_Classifier_by_Chap | 2022-10-25T10:34:57.000Z | null | false | 3b988d737cc1358ca694149c628bdabe07275fb2 | [] | [
"language:en",
"task_categories:text-classification"
] | https://huggingface.co/datasets/justpyschitry/autotrain-data-Wikipeida_Article_Classifier_by_Chap/resolve/main/README.md | ---
language:
- en
task_categories:
- text-classification
---
# AutoTrain Dataset for project: Wikipeida_Article_Classifier_by_Chap
## Dataset Description
This dataset has been automatically processed by AutoTrain for project Wikipeida_Article_Classifier_by_Chap.
### Languages
The BCP-47 code for the dataset's language is en.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"text": "diffuse actinic keratinocyte dysplasia",
"target": 15
},
{
"text": "cholesterol atheroembolism",
"target": 8
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"text": "Value(dtype='string', id=None)",
"target": "ClassLabel(num_classes=20, names=['Certain infectious or parasitic diseases', 'Developmental anaomalies', 'Diseases of the blood or blood forming organs', 'Diseases of the genitourinary system', 'Mental behavioural or neurodevelopmental disorders', 'Neoplasms', 'certain conditions originating in the perinatal period', 'conditions related to sexual health', 'diseases of the circulatroy system', 'diseases of the digestive system', 'diseases of the ear or mastoid process', 'diseases of the immune system', 'diseases of the musculoskeletal system or connective tissue', 'diseases of the nervous system', 'diseases of the respiratory system', 'diseases of the skin', 'diseases of the visual system', 'endocrine nutritional or metabolic diseases', 'pregnanacy childbirth or the puerperium', 'sleep-wake disorders'], id=None)"
}
```
### Dataset Splits
This dataset is split into a train and validation split. The split sizes are as follows:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 9828 |
| valid | 2468 |
|
seoyeon-22 | null | null | null | false | 2 | false | seoyeon-22/test-2 | 2022-06-23T08:22:04.000Z | null | false | 1f12e77c97f01d4d16dcf22051e5abeccb0c7d18 | [] | [
"license:other"
] | https://huggingface.co/datasets/seoyeon-22/test-2/resolve/main/README.md | ---
license: other
---
|
GEM-submissions | null | null | null | false | 2 | false | GEM-submissions/lewtun__this-is-another-test-name__1655982268 | 2022-06-23T11:04:35.000Z | null | false | bcb86928d649893705003b9311a1170651e396ce | [] | [
"benchmark:gem",
"type:prediction",
"submission_name:This is another test name",
"tags:evaluation",
"tags:benchmark"
] | https://huggingface.co/datasets/GEM-submissions/lewtun__this-is-another-test-name__1655982268/resolve/main/README.md | ---
benchmark: gem
type: prediction
submission_name: This is another test name
tags:
- evaluation
- benchmark
---
# GEM Submission
Submission name: This is another test name
|
GEM-submissions | null | null | null | false | 1 | false | GEM-submissions/lewtun__this-is-another-test-name__1655983106 | 2022-06-23T11:18:33.000Z | null | false | 2e883f2ebd5e3c5178b114c1a7d65376d08f7294 | [] | [
"benchmark:gem",
"type:prediction",
"submission_name:This is another test name",
"tags:evaluation",
"tags:benchmark"
] | https://huggingface.co/datasets/GEM-submissions/lewtun__this-is-another-test-name__1655983106/resolve/main/README.md | ---
benchmark: gem
type: prediction
submission_name: This is another test name
tags:
- evaluation
- benchmark
---
# GEM Submission
Submission name: This is another test name
|
GEM-submissions | null | null | null | false | 1 | false | GEM-submissions/lewtun__this-is-another-test-name__1655983383 | 2022-06-23T11:23:10.000Z | null | false | b3d6c03c801f1ccabe8afeb5bd139904cee1b6b5 | [] | [
"benchmark:gem",
"type:prediction",
"submission_name:This is another test name",
"tags:evaluation",
"tags:benchmark"
] | https://huggingface.co/datasets/GEM-submissions/lewtun__this-is-another-test-name__1655983383/resolve/main/README.md | ---
benchmark: gem
type: prediction
submission_name: This is another test name
tags:
- evaluation
- benchmark
---
# GEM Submission
Submission name: This is another test name
|
GEM-submissions | null | null | null | false | 1 | false | GEM-submissions/lewtun__this-is-another-test-name__1655985826 | 2022-06-23T12:03:51.000Z | null | false | b2932abe00535d815b067005fe46064c5296fcb3 | [] | [
"benchmark:gem",
"type:prediction",
"submission_name:This is another test name",
"tags:evaluation",
"tags:benchmark"
] | https://huggingface.co/datasets/GEM-submissions/lewtun__this-is-another-test-name__1655985826/resolve/main/README.md | ---
benchmark: gem
type: prediction
submission_name: This is another test name
tags:
- evaluation
- benchmark
---
# GEM Submission
Submission name: This is another test name
|
scikit-learn | null | null | null | false | 2 | false | scikit-learn/tips | 2022-06-23T12:21:40.000Z | null | false | f2cb20374e200823a62809449f27dc2f0bebb289 | [] | [] | https://huggingface.co/datasets/scikit-learn/tips/resolve/main/README.md | ## A Waiter's Tips
The following description was retrieved from Kaggle page.
Food servers’ tips in restaurants may be influenced by many
factors, including the nature of the restaurant, size of the party, and table
locations in the restaurant. Restaurant managers need to know which factors
matter when they assign tables to food servers. For the sake of staff morale,
they usually want to avoid either the substance or the appearance of unfair
treatment of the servers, for whom tips (at least in restaurants in the United
States) are a major component of pay.
In one restaurant, a food server recorded the following data on all customers
they served during an interval of two and a half months in early 1990.
The restaurant, located in a suburban shopping mall, was part of a national
chain and served a varied menu. In observance of local law, the restaurant
offered seating in a non-smoking section to patrons who requested it. Each
record includes a day and time, and taken together, they show the server’s
work schedule.
**Acknowledgements**
The data was reported in a collection of case studies for business statistics.
Bryant, P. G. and Smith, M (1995) Practical Data Analysis: Case Studies in Business Statistics. Homewood, IL: Richard D. Irwin Publishing
The dataset is also available through the Python package Seaborn.
|
HekmatTaherinejad | null | null | null | false | 3 | false | HekmatTaherinejad/Transparent | 2022-06-24T08:45:10.000Z | null | false | ccb4d7c47eb82c25b865fb5052e998789b64d95f | [] | [] | https://huggingface.co/datasets/HekmatTaherinejad/Transparent/resolve/main/README.md | Transparent |
CShorten | null | null | null | false | 17 | false | CShorten/ML-ArXiv-Papers | 2022-06-27T12:15:11.000Z | null | false | c878972daa0a5ec5f0d684354b6c8018f27d1316 | [] | [
"license:afl-3.0"
] | https://huggingface.co/datasets/CShorten/ML-ArXiv-Papers/resolve/main/README.md | ---
license: afl-3.0
---
This dataset contains the subset of ArXiv papers with the "cs.LG" tag to indicate the paper is about Machine Learning.
The core dataset is filtered from the full ArXiv dataset hosted on Kaggle: https://www.kaggle.com/datasets/Cornell-University/arxiv. The original dataset contains roughly 2 million papers. This dataset contains roughly 100,000 papers following the category filtering.
The dataset is maintained with requests to the ArXiv API.
The current iteration of the dataset only contains the title and abstract of the paper.
The ArXiv dataset contains additional features that we may look to include in future releases. We have highlighted the top two features on the roadmap for integration:
<ul>
<li> <b>authors</b> </li>
<li> <b>update_date</b> </li>
<li> Submitter </li>
<li> Comments </li>
<li> Journal-ref </li>
<li> doi </li>
<li> report-no </li>
<li> categories </li>
<li> license </li>
<li> versions </li>
<li> authors_parsed </li>
</ul> |
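As a rough illustration of the category filtering described above, the sketch below keeps only records whose category string contains the `cs.LG` tag. The field names (`title`, `abstract`, `categories`) follow the Kaggle metadata dump's layout, and the inline sample records are hypothetical.

```python
import io
import json

# Hypothetical stand-in for the Kaggle arxiv-metadata JSON-lines file;
# "categories" is assumed to be a space-separated string as in the dump.
SAMPLE = "\n".join(json.dumps(d) for d in [
    {"title": "A", "abstract": "An ML paper.", "categories": "cs.LG stat.ML"},
    {"title": "B", "abstract": "A combinatorics paper.", "categories": "math.CO"},
])

def filter_ml_papers(jsonl_text):
    """Keep only papers tagged cs.LG, retaining just title and abstract."""
    kept = []
    for line in io.StringIO(jsonl_text):
        record = json.loads(line)
        if "cs.LG" in record["categories"].split():
            kept.append({"title": record["title"], "abstract": record["abstract"]})
    return kept

print(len(filter_ml_papers(SAMPLE)))  # 1
```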
PedroDKE | null | null | null | false | 1 | false | PedroDKE/LibriS2S | 2022-11-15T14:23:13.000Z | null | false | 1dad2d3d8aa031bfea7b6324a29ec5b8e9d7dca1 | [] | [
"arxiv:2204.10593",
"arxiv:1910.07924",
"language:en",
"language:de",
"license:cc-by-nc-sa-4.0",
"multilinguality:multilingual",
"size_categories:10K<n<100K",
"tags:LibriS2S",
"tags:LibrivoxDeEn",
"tags:Speech-to-Speech translation",
"tags:LREC2022",
"task_categories:text-to-speech",
"task_c... | https://huggingface.co/datasets/PedroDKE/LibriS2S/resolve/main/README.md | ---
annotations_creators: []
language:
- en
- de
language_creators: []
license:
- cc-by-nc-sa-4.0
multilinguality:
- multilingual
pretty_name: LibriS2S German-English Speech and Text pairs
size_categories:
- 10K<n<100K
source_datasets: []
tags:
- LibriS2S
- LibrivoxDeEn
- Speech-to-Speech translation
- LREC2022
task_categories:
- text-to-speech
- automatic-speech-recognition
- translation
task_ids: []
---
# LibriS2S
This repo contains scripts and alignment data to create a dataset built on top of [LibrivoxDeEn](https://www.cl.uni-heidelberg.de/statnlpgroup/librivoxdeen/) such that it contains (German audio, German transcription, English audio, English transcription) quadruplets and can be used for speech-to-speech translation research. Because of this, the alignments are released under the same [Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License](https://creativecommons.org/licenses/by-nc-sa/4.0/) <div>
These alignments were collected by downloading the English audiobooks and using [aeneas](https://github.com/readbeyond/aeneas) to align the book chapters to the transcripts. For more information, read the original [paper](https://arxiv.org/abs/2204.10593) (presented at LREC 2022).
### The data
The English and German audio are available in the EN and DE folders respectively and can be downloaded from [this OneDrive](https://1drv.ms/u/s!Aox92ivMmuTc-i1Hf4iTugnhQ0Yi?e=pvvPeH). In case there are any problems with the download, feel free to open an issue. <br/>
The repo structure is as follows:
- Alignments : Contains all the alignments for each book and chapter
- DE : Contains the German audio for each chapter per book.
- EN : Contains the English audio for each chapter per book.
- Example : Contains example files for the scraping and alignment steps that were used to build this dataset.
- LibrivoxDeEn_alignments : Contains the base alignments from the LibrivoxDeEn dataset. <br/>
In case you feel a part of the data is missing, feel free to open an issue!
The full zipfile is about 52 GB in size.
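As a minimal sketch of how a bilingual alignment table could be consumed as (German audio, German transcription, English audio, English transcription) quadruplets, the snippet below reads one row from a TSV string. The column names and the inline sample row are assumptions for illustration; the actual files in the Alignments folder may use different headers.

```python
import csv
import io

# Hypothetical four-column alignment row; real column names may differ.
SAMPLE = (
    "de_audio\tde_text\ten_audio\ten_text\n"
    "9_1_0001.wav\tDer Anfang.\t9_1_0001.wav\tThe beginning.\n"
)

def load_quadruplets(tsv_text):
    """Read (German audio, German text, English audio, English text) tuples."""
    reader = csv.DictReader(io.StringIO(tsv_text), delimiter="\t")
    return [(r["de_audio"], r["de_text"], r["en_audio"], r["en_text"])
            for r in reader]

print(load_quadruplets(SAMPLE)[0][3])  # The beginning.
```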
### Scraping a book from Librivox
To download all chapters from a librivox url the following command can be used:
```
python scrape_audio_from_librivox.py \
--url https://librivox.org/undine-by-friedrich-de-la-motte-fouque/ \
--save_dir ./examples
```
### Align a book from Librivox with the text from LibrivoxDeEn
To align the previously downloaded book with the txt files and tsv tables provided by LibrivoxDeEn, the following command, based on the example provided with this repo, can be used:
```
python align_text_and_audio.py \
--text_dir ./example/en_text/ \
--audio_path ./example/audio_chapters/ \
--aeneas_path ./example/aeneas/ \
--en_audio_export_path ./example/sentence_level_audio/ \
--total_alignment_path ./example/bi-lingual-alignment/ \
--librivoxdeen_alignment ./example/undine_data.tsv \
--aeneas_head_max 120 \
--aeneas_tail_min 5 \
```
**note:** the example folder in this repo already contains the first two chapters from [Undine](https://librivox.org/undine-by-friedrich-de-la-motte-fouque/) scraped from Librivox, their transcripts, and the TSV table retrieved from LibrivoxDeEn (modified to only contain the first 2 chapters).
Additional data to align can be scraped using the same script shown previously and combined with the data provided by LibrivoxDeEn.
Additionally, this repo provides the full alignments for the following 8 books, with these LibrivoxDeEn IDs:
[9](https://librivox.org/the-picture-of-dorian-gray-1891-version-by-oscar-wilde/), [10](https://librivox.org/pandoras-box-by-frank-wedekind/), [13](https://librivox.org/survivors-of-the-chancellor-by-jules-verne/), [18](https://librivox.org/undine-by-friedrich-de-la-motte-fouque/), [23](https://librivox.org/around-the-world-in-80-days-by-jules-verne/), [108](https://librivox.org/elective-affinities-by-johann-wolfgang-von-goethe/), [110](https://librivox.org/candide-by-voltaire-3/), [120](https://librivox.org/the-metamorphosis-by-franz-kafka/).
Other books such as [11](https://librivox.org/the-castle-of-otranto-by-horace-walpole/), [36](https://librivox.org/the-rider-on-the-white-horse-by-theodor-storm/), [67](https://librivox.org/frankenstein-or-the-modern-prometheus-1818-by-mary-wollstonecraft-shelley/) and [54](https://librivox.org/white-nights-other-stories-by-fyodor-dostoyevsky/) are also part of the LibrivoxDeEn dataset, but their chapters do not correspond in a 1:1 manner (for example, the German version of book 67 has 27 chapters while the English version has 29), so they need to be re-aligned before the alignment script in this repo will work. These alignments are therefore given, but they might differ if you scrape and re-align the books yourself.
### Metrics on the alignment given in this repo.
Using the alignments given in this repo, some metrics were collected and are displayed here. For this table and the next figure, the books that were manually aligned, although provided in the zip, were not accounted for; the full table can be found in the original paper.
| | German | English |
| :---: | :-: | :-: |
|number of files | 18868 | 18868 |
|total time (hh:mm:ss) | 39:11:08 | 40:52:31 |
|Speakers | 41 |22 |
note: the speakers were counted for each book separately, so some speakers might be counted more than once.
The number of hours for each book aligned in this repo:<br>
<img src="https://user-images.githubusercontent.com/43861296/122250648-1f5f7f80-ceca-11eb-84fd-344a2261bf47.png" width="500">
When using this work, please cite the original paper and the LibrivoxDeEn authors:
```
@misc{jeuris2022,
title = {LibriS2S: A German-English Speech-to-Speech Translation Corpus},
author = {Jeuris, Pedro and Niehues, Jan},
doi = {10.48550/ARXIV.2204.10593},
url = {https://arxiv.org/abs/2204.10593},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution Non Commercial Share Alike 4.0 International}
}
```
```
@article{beilharz19,
title = {LibriVoxDeEn: A Corpus for German-to-English Speech Translation and Speech Recognition},
author = {Beilharz, Benjamin and Sun, Xin and Karimova, Sariya and Riezler, Stefan},
journal = {Proceedings of the Language Resources and Evaluation Conference},
journal-abbrev = {LREC},
year = {2020},
city = {Marseille, France},
url = {https://arxiv.org/pdf/1910.07924.pdf}
}
```
|
fever | null | null | null | false | 3 | false | fever/feverous | 2022-10-25T05:50:36.000Z | feverous | false | 96a6c960623e1b4ad83b38f5e345c9c5632857f7 | [] | [
"arxiv:2106.05707",
"annotations_creators:crowdsourced",
"language_creators:found",
"language:en",
"license:cc-by-sa-3.0",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:extended|wikipedia",
"task_categories:text-classification",
"tags:knowledge-verification"
] | https://huggingface.co/datasets/fever/feverous/resolve/main/README.md | ---
annotations_creators:
- crowdsourced
language_creators:
- found
language:
- en
license:
- cc-by-sa-3.0
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
source_datasets:
- extended|wikipedia
task_categories:
- text-classification
task_ids: []
paperswithcode_id: feverous
pretty_name: FEVEROUS
tags:
- knowledge-verification
---
# Dataset Card for FEVEROUS
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://fever.ai/dataset/feverous.html
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [FEVEROUS: Fact Extraction and VERification Over Unstructured and Structured information](https://arxiv.org/abs/2106.05707)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Dataset Summary
With billions of individual pages on the web providing information on almost every conceivable topic, we should have
the ability to collect facts that answer almost every conceivable question. However, only a small fraction of this
information is contained in structured sources (Wikidata, Freebase, etc.) – we are therefore limited by our ability to
transform free-form text to structured knowledge. There is, however, another problem that has become the focus of a lot
of recent research and media coverage: false information coming from unreliable sources.
The FEVER workshops are a venue for work in verifiable knowledge extraction and to stimulate progress in this direction.
FEVEROUS (Fact Extraction and VERification Over Unstructured and Structured information) is a fact
verification dataset which consists of 87,026 verified claims. Each claim is annotated with evidence in the form of
sentences and/or cells from tables in Wikipedia, as well as a label indicating whether this evidence supports, refutes,
or does not provide enough information to reach a verdict. The dataset also contains annotation metadata such as
annotator actions (query keywords, clicks on page, time signatures), and the type of challenge each claim poses.
### Supported Tasks and Leaderboards
The task is verification of textual claims against textual sources.
When compared to textual entailment (TE)/natural language inference, the key difference is that in these tasks the
passage to verify each claim is given, and in recent years it typically consists of a single sentence, while in
verification systems it is retrieved from a large set of documents in order to form the evidence.
### Languages
The dataset is in English (`en`).
## Dataset Structure
### Data Instances
- **Size of downloaded dataset files:** 187.82 MB
- **Size of the generated dataset:** 123.25 MB
- **Total amount of disk used:** 311.07 MB
An example instance looks as follows:
```
{'id': 24435,
'label': 1,
'claim': 'Michael Folivi competed with ten teams from 2016 to 2021, appearing in 54 games and making seven goals in total.',
'evidence': [{'content': ['Michael Folivi_cell_1_2_0',
'Michael Folivi_cell_1_7_0',
'Michael Folivi_cell_1_8_0',
'Michael Folivi_cell_1_9_0',
'Michael Folivi_cell_1_12_0'],
'context': [['Michael Folivi_title',
'Michael Folivi_section_4',
'Michael Folivi_header_cell_1_0_0'],
['Michael Folivi_title',
'Michael Folivi_section_4',
'Michael Folivi_header_cell_1_0_0'],
['Michael Folivi_title',
'Michael Folivi_section_4',
'Michael Folivi_header_cell_1_0_0'],
['Michael Folivi_title',
'Michael Folivi_section_4',
'Michael Folivi_header_cell_1_0_0'],
['Michael Folivi_title',
'Michael Folivi_section_4',
'Michael Folivi_header_cell_1_0_0']]},
{'content': ['Michael Folivi_cell_0_13_1',
'Michael Folivi_cell_0_14_1',
'Michael Folivi_cell_0_15_1',
'Michael Folivi_cell_0_16_1',
'Michael Folivi_cell_0_18_1'],
'context': [['Michael Folivi_title',
'Michael Folivi_header_cell_0_13_0',
'Michael Folivi_header_cell_0_11_0'],
['Michael Folivi_title',
'Michael Folivi_header_cell_0_14_0',
'Michael Folivi_header_cell_0_11_0'],
['Michael Folivi_title',
'Michael Folivi_header_cell_0_15_0',
'Michael Folivi_header_cell_0_11_0'],
['Michael Folivi_title',
'Michael Folivi_header_cell_0_16_0',
'Michael Folivi_header_cell_0_11_0'],
['Michael Folivi_title',
'Michael Folivi_header_cell_0_18_0',
'Michael Folivi_header_cell_0_11_0']]}],
'annotator_operations': [{'operation': 'start',
'value': 'start',
'time': 0.0},
{'operation': 'Now on', 'value': '?search=', 'time': 0.78},
{'operation': 'search', 'value': 'Michael Folivi', 'time': 78.101},
{'operation': 'Now on', 'value': 'Michael Folivi', 'time': 78.822},
{'operation': 'Highlighting',
'value': 'Michael Folivi_cell_1_2_0',
'time': 96.202},
{'operation': 'Highlighting',
'value': 'Michael Folivi_cell_1_7_0',
'time': 96.9},
{'operation': 'Highlighting',
'value': 'Michael Folivi_cell_1_8_0',
'time': 97.429},
{'operation': 'Highlighting',
'value': 'Michael Folivi_cell_1_9_0',
'time': 97.994},
{'operation': 'Highlighting',
'value': 'Michael Folivi_cell_1_12_0',
'time': 99.02},
{'operation': 'Highlighting',
'value': 'Michael Folivi_cell_0_13_1',
'time': 106.108},
{'operation': 'Highlighting',
'value': 'Michael Folivi_cell_0_14_1',
'time': 106.702},
{'operation': 'Highlighting',
'value': 'Michael Folivi_cell_0_15_1',
'time': 107.423},
{'operation': 'Highlighting',
'value': 'Michael Folivi_cell_0_16_1',
'time': 108.186},
{'operation': 'Highlighting',
'value': 'Michael Folivi_cell_0_17_1',
'time': 108.788},
{'operation': 'Highlighting',
'value': 'Michael Folivi_header_cell_0_17_0',
'time': 108.8},
{'operation': 'Highlighting',
'value': 'Michael Folivi_cell_0_18_1',
'time': 109.469},
{'operation': 'Highlighting deleted',
'value': 'Michael Folivi_cell_0_17_1',
'time': 124.28},
{'operation': 'Highlighting deleted',
'value': 'Michael Folivi_header_cell_0_17_0',
'time': 124.293},
{'operation': 'finish', 'value': 'finish', 'time': 141.351}],
'expected_challenge': '',
'challenge': 'Numerical Reasoning'}
```
### Data Fields
The data fields are the same among all splits.
- `id` (int): ID of the sample.
- `label` (ClassLabel): Annotated label for the claim. Can be one of {"SUPPORTS", "REFUTES", "NOT ENOUGH INFO"}.
- `claim` (str): Text of the claim.
- `evidence` (list of dict): Evidence sets (at maximum three). Each set consists of dictionaries with two fields:
- `content` (list of str): List of element IDs serving as the evidence for the claim. Each element ID is in the format
`"[PAGE ID]_[EVIDENCE TYPE]_[NUMBER ID]"`, where `[EVIDENCE TYPE]` can be: `sentence`, `cell`, `header_cell`,
`table_caption`, `item`.
- `context` (list of list of str): List (for each element ID in `content`) of a list of Wikipedia elements that are
automatically associated with that element ID and serve as context. This includes an article's title, relevant
sections (the section and sub-section(s) the element is located in), and for cells the closest row and column
header (multiple row/column headers if they follow each other).
- `annotator_operations` (list of dict): List of operations an annotator used to find the evidence and reach a verdict,
given the claim. Each element in the list is a dictionary with the fields:
- `operation` (str): Operation name. Any of the following:
- `start`, `finish`: Annotation started/finished. The value is the name of the operation.
- `search`: Annotator used the Wikipedia search function. The value is the entered search term or the term selected
from the automatic suggestions. If the annotator did not select any of the suggestions but instead went into
advanced search, the term is prefixed with "contains...".
- `hyperlink`: Annotator clicked on a hyperlink in the page. The value is the anchor text of the hyperlink.
- `Now on`: The page the annotator has landed after a search or a hyperlink click. The value is the PAGE ID.
- `Page search`: Annotator search on a page. The value is the search term.
- `page-search-reset`: Annotator cleared the search box. The value is the name of the operation.
- `Highlighting`, `Highlighting deleted`: Annotator selected/unselected an element on the page. The value is
`ELEMENT ID`.
- `back-button-clicked`: Annotator pressed the back button. The value is the name of the operation.
- `value` (str): Value associated with the operation.
- `time` (float): Time in seconds from the start of the annotation.
- `expected_challenge` (str): The challenge the claim generator selected will be faced when verifying the claim, one
out of the following: `Numerical Reasoning`, `Multi-hop Reasoning`, `Entity Disambiguation`,
`Combining Tables and Text`, `Search terms not in claim`, `Other`.
- `challenge` (str): Main challenge to verify the claim, one out of the following: `Numerical Reasoning`,
`Multi-hop Reasoning`, `Entity Disambiguation`, `Combining Tables and Text`, `Search terms not in claim`, `Other`.
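The `content` element IDs above embed the page ID, evidence type, and number ID in one underscore-joined string; since page IDs may themselves contain spaces and underscores, a naive split on `_` is ambiguous. A minimal parsing sketch that anchors on the known evidence-type tokens (the helper name is ours, not part of the dataset):

```python
import re

# Evidence types from the field description; header_cell must precede cell
# in the alternation so the longer token is matched first.
EVIDENCE_TYPES = ("header_cell", "table_caption", "sentence", "cell", "item")

def parse_element_id(element_id):
    """Split '[PAGE ID]_[EVIDENCE TYPE]_[NUMBER ID]' into its three parts."""
    pattern = (
        r"^(?P<page>.+?)_(?P<etype>"
        + "|".join(EVIDENCE_TYPES)
        + r")_(?P<num>[\d_]+)$"
    )
    match = re.match(pattern, element_id)
    if match is None:
        raise ValueError(f"unrecognised element ID: {element_id!r}")
    return match.group("page"), match.group("etype"), match.group("num")

print(parse_element_id("Michael Folivi_cell_1_2_0"))
# ('Michael Folivi', 'cell', '1_2_0')
```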
### Data Splits
| | train | validation | test |
|--------------------|------:|-----------:|-----:|
| Number of examples | 71291 | 7890 | 7845 |
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
```
These data annotations incorporate material from Wikipedia, which is licensed pursuant to the Wikipedia Copyright Policy. These annotations are made available under the license terms described on the applicable Wikipedia article pages, or, where Wikipedia license terms are unavailable, under the Creative Commons Attribution-ShareAlike License (version 3.0), available at http://creativecommons.org/licenses/by-sa/3.0/ (collectively, the "License Terms"). You may not use these files except in compliance with the applicable License Terms.
```
### Citation Information
If you use this dataset, please cite:
```bibtex
@inproceedings{Aly21Feverous,
author = {Aly, Rami and Guo, Zhijiang and Schlichtkrull, Michael Sejr and Thorne, James and Vlachos, Andreas and Christodoulopoulos, Christos and Cocarascu, Oana and Mittal, Arpit},
title = {{FEVEROUS}: Fact Extraction and {VERification} Over Unstructured and Structured information},
eprint={2106.05707},
archivePrefix={arXiv},
primaryClass={cs.CL},
year = {2021}
}
```
### Contributions
Thanks to [@albertvillanova](https://github.com/albertvillanova) for adding this dataset.
|
kiran957 | null | null | null | false | 4 | false | kiran957/railway_complaints | 2022-06-23T15:40:24.000Z | null | false | b013d587c3c3d399afd83d14eb5c2e1b01f2c740 | [] | [
"license:other"
] | https://huggingface.co/datasets/kiran957/railway_complaints/resolve/main/README.md | ---
license: other
---
|
NbAiLab | null | null | null | false | 3 | false | NbAiLab/newspaperimagescompletetop | 2022-06-27T07:58:47.000Z | null | false | dd1d7ea30f8c25883a98b3683797302653bc6330 | [] | [] | https://huggingface.co/datasets/NbAiLab/newspaperimagescompletetop/resolve/main/README.md | |
simarora | null | null | null | false | 2 | false | simarora/ConcurrentQA | 2022-06-23T20:36:06.000Z | null | false | f058f77364946ea97656400f4f1592633ba71071 | [] | [
"license:mit"
] | https://huggingface.co/datasets/simarora/ConcurrentQA/resolve/main/README.md | ---
license: mit
---
ConcurrentQA is a textual multi-hop QA benchmark that requires concurrent retrieval over multiple data distributions (i.e. Wikipedia and email data). It follows the data collection process and schema of HotpotQA.
The dataset is downloadable here: https://github.com/facebookresearch/concurrentqa. The repository also contains model and result-analysis code. This benchmark can also be used to study privacy when reasoning over data distributed across multiple privacy scopes, i.e. Wikipedia in the public domain and emails in the private domain.
GEM-submissions | null | null | null | false | 2 | false | GEM-submissions/lewtun__this-is-a-test-submission__1656013291 | 2022-06-23T19:41:36.000Z | null | false | 4e3152110827c8b80f508b1b02677a043756441a | [] | [
"benchmark:gem",
"type:prediction",
"submission_name:This is a test submission",
"tags:evaluation",
"tags:benchmark"
] | https://huggingface.co/datasets/GEM-submissions/lewtun__this-is-a-test-submission__1656013291/resolve/main/README.md | ---
benchmark: gem
type: prediction
submission_name: This is a test submission
tags:
- evaluation
- benchmark
---
# GEM Submission
Submission name: This is a test submission
|
GEM-submissions | null | null | null | false | 3 | false | GEM-submissions/lewtun__this-is-a-test-submission-1__1656014763 | 2022-06-23T20:06:09.000Z | null | false | c06f6ad845a32812535e6ecef534efd6342dacfb | [] | [
"benchmark:gem",
"type:prediction",
"submission_name:This is a test submission 1",
"tags:evaluation",
"tags:benchmark"
] | https://huggingface.co/datasets/GEM-submissions/lewtun__this-is-a-test-submission-1__1656014763/resolve/main/README.md | ---
benchmark: gem
type: prediction
submission_name: This is a test submission 1
tags:
- evaluation
- benchmark
---
# GEM Submission
Submission name: This is a test submission 1
|
rjac | null | null | null | false | 3 | false | rjac/kaggle-entity-annotated-corpus-ner-dataset | 2022-10-25T10:37:24.000Z | null | false | 64179b8f08613459a2265125c29d5290e41baac1 | [] | [
"annotations_creators:Abhinav Walia (Owner)",
"language:en",
"license:odbl"
] | https://huggingface.co/datasets/rjac/kaggle-entity-annotated-corpus-ner-dataset/resolve/main/README.md | ---
annotations_creators:
- Abhinav Walia (Owner)
language:
- en
license:
- odbl
---
**Date**: 2022-07-10<br/>
**Files**: ner_dataset.csv<br/>
**Source**: [Kaggle entity annotated corpus](https://www.kaggle.com/datasets/abhinavwalia95/entity-annotated-corpus)<br/>
**notes**: The dataset only contains the tokens and ner tag labels. Labels are uppercase.
# About Dataset
[**from Kaggle Datasets**](https://www.kaggle.com/datasets/abhinavwalia95/entity-annotated-corpus)
## Context
Annotated Corpus for Named Entity Recognition using the GMB (Groningen Meaning Bank) corpus for entity classification, with enhanced and popular features from Natural Language Processing applied to the data set.
Tip: Use Pandas Dataframe to load dataset if using Python for convenience.
## Content
This is the extract from GMB corpus which is tagged, annotated and built specifically to train the classifier to predict named entities such as name, location, etc.
Number of tagged entities:
'O': 1146068, 'geo-nam': 58388, 'org-nam': 48034, 'per-nam': 23790, 'gpe-nam': 20680, 'tim-dat': 12786, 'tim-dow': 11404, 'per-tit': 9800, 'per-fam': 8152, 'tim-yoc': 5290, 'tim-moy': 4262, 'per-giv': 2413, 'tim-clo': 891, 'art-nam': 866, 'eve-nam': 602, 'nat-nam': 300, 'tim-nam': 146, 'eve-ord': 107, 'per-ini': 60, 'org-leg': 60, 'per-ord': 38, 'tim-dom': 10, 'per-mid': 1, 'art-add': 1
## Essential info about entities
* geo = Geographical Entity
* org = Organization
* per = Person
* gpe = Geopolitical Entity
* tim = Time indicator
* art = Artifact
* eve = Event
* nat = Natural Phenomenon
* Total Words Count = 1354149
* Target Data Column: "tag" (ner_tag in this repo)
Inspiration: This dataset has become more interesting because of the additional features added in its recent version. It also helps to create a broad view of feature engineering with respect to this dataset.
## Modifications
The ner_dataset.csv file was modified to have a data structure similar to the [CoNLL-2003 dataset](https://huggingface.co/datasets/conll2003).
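A minimal sketch of that reshaping, assuming the Kaggle file's `Sentence #`, `Word`, and `Tag` columns, where a non-empty `Sentence #` cell marks the start of a new sentence (the sample rows below are illustrative, not taken from the file):

```python
import csv
import io

# Illustrative excerpt in the assumed Kaggle layout.
SAMPLE = """Sentence #,Word,POS,Tag
Sentence: 1,Thousands,NNS,O
,of,IN,O
,demonstrators,NNS,O
Sentence: 2,London,NNP,B-geo
,is,VBZ,O
"""

def group_sentences(csv_text):
    """Group rows into (tokens, ner_tags) pairs, CoNLL-2003 style."""
    sentences = []
    for row in csv.DictReader(io.StringIO(csv_text)):
        if row["Sentence #"]:              # a new sentence starts here
            sentences.append(([], []))
        tokens, tags = sentences[-1]
        tokens.append(row["Word"])
        tags.append(row["Tag"].upper())    # labels are uppercase in this repo
    return sentences

print(group_sentences(SAMPLE)[1][1])  # ['B-GEO', 'O']
```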
## Licensing information
Database: Open Database, Contents: Database Contents.
|
mh53 | null | null | null | false | 2 | false | mh53/asr_radio_ru | 2022-06-24T02:20:50.000Z | null | false | b869bbefd9ccc4ef35f61c3676b5d85a787856e8 | [] | [
"license:cc"
] | https://huggingface.co/datasets/mh53/asr_radio_ru/resolve/main/README.md | ---
license: cc
---
|
smangrul | null | null | null | false | 66 | false | smangrul/MuDoConv | 2022-06-29T06:39:30.000Z | null | false | cf29923953a1580840b263b22f800a2e4cbd66d9 | [] | [
"license:cc-by-nc-4.0"
] | https://huggingface.co/datasets/smangrul/MuDoConv/resolve/main/README.md | ---
license: cc-by-nc-4.0
---
Datasets collated from 10 sources and preprocessed to have `["texts", "labels"]` columns for training/finetuning sequence-to-sequence models such as T5 or Blenderbot. Below are the 10 datasets:
1. blended_skill_talk
2. conv_ai_2
3. empathetic_dialogues
4. wizard_of_wikipedia
5. meta_woz
6. multi_woz
7. spolin
8. dailydialog
9. cornell_movie_dialogues
10. taskmaster
The data access and preprocessing code is [here](https://github.com/pacman100/accelerate-deepspeed-test/blob/main/src/data_preprocessing/DataPreprocessing.ipynb) |
SurfaceData | null | null | null | false | 3 | false | SurfaceData/translation_MorisienMT | 2022-07-06T04:28:58.000Z | null | false | e8cf147945ceb22020889cd7f14eb711fe0ea1e9 | [] | [
"task_categories:translation",
"language:en",
"language:cr",
"license:cc-by-4.0"
] | https://huggingface.co/datasets/SurfaceData/translation_MorisienMT/resolve/main/README.md | ---
task_categories:
- translation
language:
- en
- cr
license:
- cc-by-4.0
---
MorisienMT is a dataset for Mauritian Creole Machine Translation. This dataset
consists of training, development and test set splits for English--Creole as
well as French--Creole translation. The data comes from a variety of sources
and hence can be considered as belonging to the general domain.
The training set for English--Creole contains 21,810 lines.
Finally, we also provide a Creole monolingual corpus of 45,364 lines.
Note that a significant portion of the dataset is a dictionary of word
pairs/triplets; nevertheless, it is a start.
Feel free to use the dataset for your research, but don't forget to cite
our upcoming paper, which will be uploaded to arXiv shortly.
NOTE: MorisienMT was originally partly developed by Dr Aneerav Sukhoo from the
University of Mauritius in 2014 when he was a visiting researcher in IIT
Bombay. Dr Sukhoo and Raj Dabre worked on the MT experiments together, but
never publicly released the dataset back then. Furthermore, the dataset splits
and experiments were not done in a highly principled manner, which is required
in the present day. Therefore, we improve the quality of splits and officially
release the data for people to use.
To use this dataset, request access via the [Surface Catalog](https://catalog.surfacedata.org/).
|
malteos | null | null | null | false | 3 | false | malteos/wechsel_de | 2022-07-30T18:57:02.000Z | null | false | 76cd1995c3c8251656115f75187e1ceeae407448 | [] | [
"language:de",
"task_categories:text-generation",
"size_categories:100k<n<1M",
"task_ids:language-modeling",
"task_ids:masked-language-modeling"
] | https://huggingface.co/datasets/malteos/wechsel_de/resolve/main/README.md | ---
language:
- de
task_categories:
- text-generation
size_categories:
- 100k<n<1M
task_ids:
- language-modeling
- masked-language-modeling
---
German validation dataset from WECHSEL, used to evaluate LLM perplexity.
JSON-lines files (one JSON object per line):
- `valid.json.gz`: Gzipped validation set as generated by the paper (163,698 docs)
- `valid.random_1636.json.gz`: Random 1% (1636 docs) of the validation set
|
autoevaluate | null | null | null | false | 2 | false | autoevaluate/autoeval-staging-eval-project-1c7ef613-7224755 | 2022-06-24T08:41:24.000Z | null | false | 0e417a4b73fec1352fdad25aa009950f74ea943f | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:ag_news"
] | https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-1c7ef613-7224755/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- ag_news
eval_info:
task: multi_class_classification
model: mrm8488/distilroberta-finetuned-age_news-classification
dataset_name: ag_news
dataset_config: default
dataset_split: test
col_mapping:
text: text
target: label
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Multi-class Text Classification
* Model: mrm8488/distilroberta-finetuned-age_news-classification
* Dataset: ag_news
To run new evaluation jobs, visit Hugging Face's [automatic evaluation service](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@abhishek](https://huggingface.co/abhishek) for evaluating this model. |
autoevaluate | null | null | null | false | 2 | false | autoevaluate/autoeval-staging-eval-project-1c7ef613-7224756 | 2022-06-24T08:41:49.000Z | null | false | cd036c57e3d2827cbabd8009bcd2fa182c48279c | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:ag_news"
] | https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-1c7ef613-7224756/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- ag_news
eval_info:
task: multi_class_classification
model: nateraw/bert-base-uncased-ag-news
dataset_name: ag_news
dataset_config: default
dataset_split: test
col_mapping:
text: text
target: label
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Multi-class Text Classification
* Model: nateraw/bert-base-uncased-ag-news
* Dataset: ag_news
To run new evaluation jobs, visit Hugging Face's [automatic evaluation service](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@abhishek](https://huggingface.co/abhishek) for evaluating this model. |
autoevaluate | null | null | null | false | 2 | false | autoevaluate/autoeval-staging-eval-project-61110342-7234758 | 2022-06-24T08:52:40.000Z | null | false | 5d34bc138f12780d17ed89c92845e6ee6dfe0eb1 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:xtreme"
] | https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-61110342-7234758/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- xtreme
eval_info:
task: entity_extraction
model: transformersbook/xlm-roberta-base-finetuned-panx-de
dataset_name: xtreme
dataset_config: PAN-X.de
dataset_split: validation
col_mapping:
tokens: tokens
tags: ner_tags
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Token Classification
* Model: transformersbook/xlm-roberta-base-finetuned-panx-de
* Dataset: xtreme
To run new evaluation jobs, visit Hugging Face's [automatic evaluation service](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model. |
autoevaluate | null | null | null | false | 3 | false | autoevaluate/autoeval-staging-eval-project-6a6944f2-7244759 | 2022-06-24T08:58:55.000Z | null | false | 1ef0e0717148e428e134f4ceb3ebc845f917db63 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:wikiann"
] | https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-6a6944f2-7244759/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- wikiann
eval_info:
task: entity_extraction
model: transformersbook/xlm-roberta-base-finetuned-panx-all
dataset_name: wikiann
dataset_config: en
dataset_split: test
col_mapping:
tokens: tokens
tags: ner_tags
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Token Classification
* Model: transformersbook/xlm-roberta-base-finetuned-panx-all
* Dataset: wikiann
To run new evaluation jobs, visit Hugging Face's [automatic evaluation service](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model. |
autoevaluate | null | null | null | false | 2 | false | autoevaluate/autoeval-staging-eval-project-6a6944f2-7244760 | 2022-06-24T08:58:21.000Z | null | false | 209c35a42f9d52530a83550cefed4e6ee30cd7e8 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:wikiann"
] | https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-6a6944f2-7244760/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- wikiann
eval_info:
task: entity_extraction
model: philschmid/distilroberta-base-ner-wikiann
dataset_name: wikiann
dataset_config: en
dataset_split: test
col_mapping:
tokens: tokens
tags: ner_tags
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Token Classification
* Model: philschmid/distilroberta-base-ner-wikiann
* Dataset: wikiann
To run new evaluation jobs, visit Hugging Face's [automatic evaluation service](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model. |
autoevaluate | null | null | null | false | 3 | false | autoevaluate/autoeval-staging-eval-project-29af5371-7254761 | 2022-06-30T15:09:04.000Z | null | false | b3e3cb383d1d26bd35c1ac55dc18c8c572ac9a12 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:conll2003"
] | https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-29af5371-7254761/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- conll2003
eval_info:
task: entity_extraction
model: elastic/distilbert-base-cased-finetuned-conll03-english
dataset_name: conll2003
dataset_config: conll2003
dataset_split: validation
col_mapping:
tokens: tokens
tags: ner_tags
metrics: []
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Token Classification
* Model: elastic/distilbert-base-cased-finetuned-conll03-english
* Dataset: conll2003
To run new evaluation jobs, visit Hugging Face's [automatic evaluation service](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@douwekiela](https://huggingface.co/douwekiela) for evaluating this model. |
autoevaluate | null | null | null | false | 3 | false | autoevaluate/autoeval-staging-eval-project-29af5371-7254762 | 2022-06-24T09:02:06.000Z | null | false | a422adecc19262f6b1e0501423e18109664f247a | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:conll2003"
] | https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-29af5371-7254762/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- conll2003
eval_info:
task: entity_extraction
model: elastic/distilbert-base-uncased-finetuned-conll03-english
dataset_name: conll2003
dataset_config: conll2003
dataset_split: validation
col_mapping:
tokens: tokens
tags: ner_tags
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Token Classification
* Model: elastic/distilbert-base-uncased-finetuned-conll03-english
* Dataset: conll2003
To run new evaluation jobs, visit Hugging Face's [automatic evaluation service](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@douwekiela](https://huggingface.co/douwekiela) for evaluating this model. |
autoevaluate | null | null | null | false | 2 | false | autoevaluate/autoeval-staging-eval-project-29af5371-7254763 | 2022-06-24T09:02:20.000Z | null | false | c79ece872cd8e360a115f690aee73394ece734a5 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:conll2003"
] | https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-29af5371-7254763/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- conll2003
eval_info:
task: entity_extraction
model: huggingface-course/bert-finetuned-ner
dataset_name: conll2003
dataset_config: conll2003
dataset_split: validation
col_mapping:
tokens: tokens
tags: ner_tags
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Token Classification
* Model: huggingface-course/bert-finetuned-ner
* Dataset: conll2003
To run new evaluation jobs, visit Hugging Face's [automatic evaluation service](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@douwekiela](https://huggingface.co/douwekiela) for evaluating this model. |
autoevaluate | null | null | null | false | 2 | false | autoevaluate/autoeval-staging-eval-project-29af5371-7254765 | 2022-06-24T09:02:22.000Z | null | false | dc45572a60c24c4d731641aed222ef23f1e02a21 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:conll2003"
] | https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-29af5371-7254765/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- conll2003
eval_info:
task: entity_extraction
model: philschmid/distilroberta-base-ner-conll2003
dataset_name: conll2003
dataset_config: conll2003
dataset_split: validation
col_mapping:
tokens: tokens
tags: ner_tags
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Token Classification
* Model: philschmid/distilroberta-base-ner-conll2003
* Dataset: conll2003
To run new evaluation jobs, visit Hugging Face's [automatic evaluation service](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@douwekiela](https://huggingface.co/douwekiela) for evaluating this model. |
autoevaluate | null | null | null | false | 3 | false | autoevaluate/autoeval-staging-eval-project-be45ecbd-7284772 | 2022-06-24T10:01:24.000Z | null | false | 6f9f190ca006db0fc95cad396463b020b7002e61 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:cnn_dailymail"
] | https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-be45ecbd-7284772/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- cnn_dailymail
eval_info:
task: summarization
model: patrickvonplaten/bert2bert_cnn_daily_mail
dataset_name: cnn_dailymail
dataset_config: 3.0.0
dataset_split: test
col_mapping:
text: article
target: highlights
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: patrickvonplaten/bert2bert_cnn_daily_mail
* Dataset: cnn_dailymail
To run new evaluation jobs, visit Hugging Face's [automatic evaluation service](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model. |
autoevaluate | null | null | null | false | 3 | false | autoevaluate/autoeval-staging-eval-project-be45ecbd-7284773 | 2022-06-24T09:27:34.000Z | null | false | a93e51a0086f1bad502798f81b6d8821f8f1090c | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:cnn_dailymail"
] | https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-be45ecbd-7284773/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- cnn_dailymail
eval_info:
task: summarization
model: echarlaix/bart-base-cnn-r2-19.4-d35-hybrid
dataset_name: cnn_dailymail
dataset_config: 3.0.0
dataset_split: test
col_mapping:
text: article
target: highlights
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: echarlaix/bart-base-cnn-r2-19.4-d35-hybrid
* Dataset: cnn_dailymail
To run new evaluation jobs, visit Hugging Face's [automatic evaluation service](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model. |
autoevaluate | null | null | null | false | 3 | false | autoevaluate/autoeval-staging-eval-project-be45ecbd-7284774 | 2022-06-24T09:27:22.000Z | null | false | 865abe187dd995261689af51bd95b20d12fcceca | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:cnn_dailymail"
] | https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-be45ecbd-7284774/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- cnn_dailymail
eval_info:
task: summarization
model: echarlaix/bart-base-cnn-r2-18.7-d23-hybrid
dataset_name: cnn_dailymail
dataset_config: 3.0.0
dataset_split: test
col_mapping:
text: article
target: highlights
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: echarlaix/bart-base-cnn-r2-18.7-d23-hybrid
* Dataset: cnn_dailymail
To run new evaluation jobs, visit Hugging Face's [automatic evaluation service](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model. |
albertvillanova | null | null | null | false | 3 | false | albertvillanova/tmp-mention | 2022-09-22T11:26:20.000Z | null | false | ddff094ce88bfe41c0b749637146722fcc552ddf | [] | [
"arxiv:2012.03411",
"license:cc-by-4.0",
"tags:zenodo"
] | https://huggingface.co/datasets/albertvillanova/tmp-mention/resolve/main/README.md | ---
license: cc-by-4.0
tags:
- zenodo
---
# Dataset Card for MultiLingual LibriSpeech
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [MultiLingual LibriSpeech ASR corpus](http://www.openslr.org/94)
- **Repository:** [Needs More Information]
- **Paper:** [MLS: A Large-Scale Multilingual Dataset for Speech Research](https://arxiv.org/abs/2012.03411)
- **Leaderboard:** [Paperswithcode Leaderboard](https://paperswithcode.com/dataset/multilingual-librispeech)
### Dataset Summary
<div class="course-tip course-tip-orange bg-gradient-to-br dark:bg-gradient-to-r before:border-orange-500 dark:before:border-orange-800 from-orange-50 dark:from-gray-900 to-white dark:to-gray-950 border border-orange-50 text-orange-700 dark:text-gray-400"><p><b>Deprecated:</b> Not every model supports a fast tokenizer. Take a look at this <a href="index#supported-frameworks">table</a> to check if a model has fast tokenizer support.</p></div>
The Multilingual LibriSpeech (MLS) dataset is a large multilingual corpus suitable for speech research. The dataset is derived from read audiobooks from LibriVox and consists of eight languages: English, German, Dutch, Spanish, French, Italian, Portuguese, and Polish.
### Supported Tasks and Leaderboards
- `automatic-speech-recognition`, `audio-speaker-identification`: The dataset can be used to train a model for Automatic Speech Recognition (ASR). The model is presented with an audio file and asked to transcribe the audio file to written text. The most common evaluation metric is the word error rate (WER). The task has an active leaderboard which can be found at https://paperswithcode.com/dataset/multilingual-librispeech and ranks models based on their WER.
<div class="alert alert-danger d-flex align-items-center" role="alert">
<svg class="bi flex-shrink-0 me-2" width="24" height="24" role="img" aria-label="Danger:"><use xlink:href="#exclamation-triangle-fill"/></svg>
<div>
An example danger alert with an icon
</div>
</div>
<div class="alert alert-block alert-warning"> ⚠ In general, just avoid the red boxes. </div>
<div class="alert alert-block alert-danger"> In general, just avoid the red boxes. </div>
<div class="alert alert-danger" role="alert"> In general, just avoid the red boxes. </div>
<div class="alert" role="alert"> In general, just avoid the red boxes. </div>
<div class="course-tip-orange">
<strong>Error:</strong>
</div>
<div class="alert alert-danger" role="alert">
<div class="row vertical-align">
<div class="col-xs-1 text-center">
<i class="fa fa-exclamation-triangle fa-2x"></i>
</div>
<div class="col-xs-11">
<strong>Error:</strong>
</div>
</div>
</div>
>[!WARNING]
>This is a warning
_**Warning:** Be very careful here._
<Deprecated>
This is a warning
</Deprecated>
<Tip warning>
This is a warning
</Tip>
<Tip warning={true}>
This is a warning
</Tip>
> **Warning**
> This is a warning |
autoevaluate | null | null | null | false | 3 | false | autoevaluate/autoeval-staging-eval-project-38643302-7294782 | 2022-06-24T10:11:47.000Z | null | false | c46a7c127048d9a3e7464821c50286437a64360e | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:xsum"
] | https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-38643302-7294782/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- xsum
eval_info:
task: summarization
model: human-centered-summarization/financial-summarization-pegasus
dataset_name: xsum
dataset_config: default
dataset_split: test
col_mapping:
text: document
target: summary
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: human-centered-summarization/financial-summarization-pegasus
* Dataset: xsum
To run new evaluation jobs, visit Hugging Face's [automatic evaluation service](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model. |
autoevaluate | null | null | null | false | 2 | false | autoevaluate/autoeval-staging-eval-project-84760c85-7314784 | 2022-06-24T09:51:31.000Z | null | false | bfb745c7878dc97e211a4d0369d39fde72b8faef | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:samsum"
] | https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-84760c85-7314784/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- samsum
eval_info:
task: summarization
model: philschmid/bart-base-samsum
dataset_name: samsum
dataset_config: samsum
dataset_split: test
col_mapping:
text: dialogue
target: summary
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: philschmid/bart-base-samsum
* Dataset: samsum
To run new evaluation jobs, visit Hugging Face's [automatic evaluation service](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model. |
autoevaluate | null | null | null | false | 3 | false | autoevaluate/autoeval-staging-eval-project-84760c85-7314785 | 2022-06-24T09:53:26.000Z | null | false | d2f15534b513134d94a76bca71c15745fa89c28a | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:samsum"
] | https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-84760c85-7314785/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- samsum
eval_info:
task: summarization
model: philschmid/bart-large-cnn-samsum
dataset_name: samsum
dataset_config: samsum
dataset_split: test
col_mapping:
text: dialogue
target: summary
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: philschmid/bart-large-cnn-samsum
* Dataset: samsum
To run new evaluation jobs, visit Hugging Face's [automatic evaluation service](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model. |
autoevaluate | null | null | null | false | 3 | false | autoevaluate/autoeval-staging-eval-project-84760c85-7314786 | 2022-06-24T09:52:53.000Z | null | false | 990b496e4d876f019f7d2519dfdaa9a2ea633bcf | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:samsum"
] | https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-84760c85-7314786/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- samsum
eval_info:
task: summarization
model: philschmid/distilbart-cnn-12-6-samsum
dataset_name: samsum
dataset_config: samsum
dataset_split: test
col_mapping:
text: dialogue
target: summary
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: philschmid/distilbart-cnn-12-6-samsum
* Dataset: samsum
To run new evaluation jobs, visit Hugging Face's [automatic evaluation service](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model. |
hellokitty | null | null | null | false | 2 | false | hellokitty/accident | 2022-06-24T11:56:36.000Z | null | false | 860153d106a47d3551325e1549c190fce6e2f2fb | [] | [
"license:apache-2.0"
] | https://huggingface.co/datasets/hellokitty/accident/resolve/main/README.md | ---
license: apache-2.0
---
|
IsaMaks | null | null | null | false | 3 | false | IsaMaks/try_connll | 2022-06-24T13:34:49.000Z | null | false | 651a3884c54ceac558631d299d4fe8fa836fc60d | [] | [
"license:cc0-1.0"
] | https://huggingface.co/datasets/IsaMaks/try_connll/resolve/main/README.md | ---
license: cc0-1.0
---
|
hashir123 | null | null | null | false | 2 | false | hashir123/huma | 2022-06-24T13:16:32.000Z | null | false | 9ff25aa40b3d09e98c1c5494acfafb12bc0a37ad | [] | [
"license:apache-2.0"
] | https://huggingface.co/datasets/hashir123/huma/resolve/main/README.md | ---
license: apache-2.0
---
|
joelito | null | null | null | false | 7,174 | false | joelito/brazilian_court_decisions | 2022-09-22T13:43:42.000Z | null | false | e937c2db8eab109cafc4f5279a396957d38251c5 | [] | [
"arxiv:1905.10348",
"annotations_creators:found",
"language_creators:found",
"language:pt",
"license:other",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"task_categories:text-classification",
"task_ids:multi-class-classification"
] | https://huggingface.co/datasets/joelito/brazilian_court_decisions/resolve/main/README.md | ---
annotations_creators:
- found
language_creators:
- found
language:
- pt
license:
- 'other'
multilinguality:
- monolingual
pretty_name: predicting-brazilian-court-decisions
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- multi-class-classification
---
# Dataset Card for predicting-brazilian-court-decisions
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:** https://github.com/lagefreitas/predicting-brazilian-court-decisions
- **Paper:** Lage-Freitas, A., Allende-Cid, H., Santana, O., & Oliveira-Lage, L. (2022). Predicting Brazilian Court
Decisions. PeerJ. Computer Science, 8, e904–e904. https://doi.org/10.7717/peerj-cs.904
- **Leaderboard:**
- **Point of Contact:** [Joel Niklaus](mailto:joel.niklaus.2@bfh.ch)
### Dataset Summary
The dataset is a collection of 4043 *Ementa* (summary) court decisions and their metadata from
the *Tribunal de Justiça de Alagoas* (TJAL), the State Supreme Court of Alagoas, Brazil. The court decisions are labeled
according to 7 categories and whether the decisions were unanimous on the part of the judges or not. The dataset
supports the task of Legal Judgment Prediction.
### Supported Tasks and Leaderboards
Legal Judgment Prediction
### Languages
Brazilian Portuguese
## Dataset Structure
### Data Instances
The file format is jsonl and three data splits are present (train, validation and test) for each configuration.
### Data Fields
The dataset contains the following fields:
- `process_number`: A number assigned to the decision by the court
- `orgao_julgador`: Judging Body: one of '1ª Câmara Cível', '2ª Câmara Cível', '3ª Câmara Cível', 'Câmara Criminal', 'Tribunal Pleno', 'Seção Especializada Cível'
- `publish_date`: The date, when the decision has been published (14/12/2018 - 03/04/2019). At that time (in 2018-2019),
the scraping script was limited and not configurable to get data based on date range. Therefore, only the data from
the last months has been scraped.
- `judge_relator`: Judicial panel
- `ementa_text`: Summary of the court decision
- `decision_description`: **Suggested input**. Corresponds to ementa_text - judgment_text - unanimity_text. Basic
statistics (number of words): mean: 119, median: 88, min: 12, max: 1400
- `judgment_text`: The text used for determining the judgment label
- `judgment_label`: **Primary suggested label**. Labels that can be used to train a model for judgment prediction:
- `no`: The appeal was denied
- `partial`: For partially favourable decisions
- `yes`: For fully favourable decisions
- removed labels (present in the original dataset):
- `conflito-competencia`: Meta-decision. For example, a decision just to tell that Court A should rule this case
and not Court B.
- `not-cognized`: The appeal was not accepted to be judged by the court
- `prejudicada`: The case could not be judged for any impediment such as the appealer died or gave up on the
case for instance.
- `unanimity_text`: Portuguese text to describe whether the decision was unanimous or not.
- `unanimity_label`: **Secondary suggested label**. Unified labels to describe whether the decision was unanimous or
not (in some cases contains ```not_determined```); they can be used for model training as well (Lage-Freitas et al.,
2019).
### Data Splits
The data has been split randomly into 80% train (3234), 10% validation (404), 10% test (405).
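The quoted sizes are consistent with a floor-based 80/10/10 split of the 4043 decisions. A minimal sketch (the seed and shuffle order are assumptions, so this reproduces the split sizes, not the membership of the official splits):

```python
# Sketch of a random 80/10/10 split; the seed and ordering are assumptions,
# so this reproduces the split *sizes* from the card, not the official splits.
import random

def split_80_10_10(items, seed=42):
    items = list(items)
    random.Random(seed).shuffle(items)
    n_train = int(0.8 * len(items))   # floor(0.8 * 4043) = 3234
    n_valid = int(0.1 * len(items))   # floor(0.1 * 4043) = 404
    train = items[:n_train]
    valid = items[n_train:n_train + n_valid]
    test = items[n_train + n_valid:]  # remainder = 405
    return train, valid, test

train, valid, test = split_80_10_10(range(4043))
print(len(train), len(valid), len(test))  # 3234 404 405
```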
There are two tasks possible for this dataset.
#### Judgment
Label Distribution
| judgment | train | validation | test |
|:----------|---------:|-----------:|--------:|
| no | 1960 | 221 | 234 |
| partial | 677 | 96 | 93 |
| yes | 597 | 87 | 78 |
| **total** | **3234** | **404** | **405** |
#### Unanimity
In this configuration, all cases that have `not_determined` as `unanimity_label` can be removed.
Label Distribution
| unanimity_label | train | validation | test |
|:-----------------|----------:|---------------:|---------:|
| not_determined | 1519 | 193 | 201 |
| unanimity | 1681 | 205 | 200 |
| not-unanimity | 34 | 6 | 4 |
| **total** | **3234** | **404** | **405** |
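Dropping the `not_determined` rows for the unanimity task can be sketched as below; plain dicts stand in for the jsonl records, and the field name follows the field list above:

```python
# Sketch: filter out undetermined unanimity labels before training.
# Dict rows stand in for the dataset's jsonl records ("unanimity_label"
# follows the field list in this card).
from collections import Counter

def unanimity_subset(rows):
    """Keep only rows whose unanimity label was determined."""
    return [r for r in rows if r["unanimity_label"] != "not_determined"]

rows = [
    {"unanimity_label": "unanimity"},
    {"unanimity_label": "not_determined"},
    {"unanimity_label": "not-unanimity"},
]
kept = unanimity_subset(rows)
print(dict(Counter(r["unanimity_label"] for r in kept)))
# {'unanimity': 1, 'not-unanimity': 1}
```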
## Dataset Creation
### Curation Rationale
This dataset was created to further the research on developing models for predicting Brazilian court decisions that are
also able to predict whether the decision will be unanimous.
### Source Data
The data was scraped from the *Tribunal de Justiça de Alagoas* (TJAL), the State Supreme Court of Alagoas, Brazil.
#### Initial Data Collection and Normalization
*“We developed a Web scraper for collecting data from Brazilian courts. The scraper first searched for the URL that
contains the list of court cases […]. Then, the scraper extracted from these HTML files the specific case URLs and
downloaded their data […]. Next, it extracted the metadata and the contents of legal cases and stored them in a CSV file
format […].”* (Lage-Freitas et al., 2022)
#### Who are the source language producers?
The source language producers are presumably attorneys, judges, and other legal professionals.
### Annotations
#### Annotation process
The dataset was not annotated.
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
The court decisions might contain sensitive information about individuals.
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
Note that the information given in this dataset card refers to the dataset version as provided by Joel Niklaus and Veton
Matoshi. The dataset at hand is intended to be part of a bigger benchmark dataset. Creating a benchmark dataset
consisting of several other datasets from different sources requires postprocessing. Therefore, the structure of the
dataset at hand, including the folder structure, may differ considerably from the original dataset. In addition to that,
differences with regard to dataset statistics as give in the respective papers can be expected. The reader is advised to
have a look at the conversion script ```convert_to_hf_dataset.py``` in order to retrace the steps for converting the
original dataset into the present jsonl-format. For further information on the original dataset structure, we refer to
the bibliographical references and the original Github repositories and/or web pages provided in this dataset card.
## Additional Information
Lage-Freitas, A., Allende-Cid, H., Santana Jr, O., & Oliveira-Lage, L. (2019). Predicting Brazilian court decisions:
- "In Brazil [...] lower court judges decisions might be appealed to Brazilian courts (*Tribiunais de Justiça*) to be
reviewed by second instance court judges. In an appellate court, judges decide together upon a case and their
decisions are compiled in Agreement reports named *Acóordãos*."
### Dataset Curators
The names of the original dataset curators and creators can be found in references given below, in the section *Citation
Information*. Additional changes were made by Joel Niklaus ([Email](mailto:joel.niklaus.2@bfh.ch)
; [Github](https://github.com/joelniklaus)) and Veton Matoshi ([Email](mailto:veton.matoshi@bfh.ch)
; [Github](https://github.com/kapllan)).
### Licensing Information
No licensing information was provided for this dataset. However, please make sure that you use the dataset according to
Brazilian law.
### Citation Information
```
@misc{https://doi.org/10.48550/arxiv.1905.10348,
author = {Lage-Freitas, Andr{\'{e}} and Allende-Cid, H{\'{e}}ctor and Santana, Orivaldo and de Oliveira-Lage, L{\'{i}}via},
doi = {10.48550/ARXIV.1905.10348},
keywords = {Computation and Language (cs.CL),FOS: Computer and information sciences,Social and Information Networks (cs.SI)},
publisher = {arXiv},
title = {{Predicting Brazilian court decisions}},
url = {https://arxiv.org/abs/1905.10348},
year = {2019}
}
```
```
@article{Lage-Freitas2022,
author = {Lage-Freitas, Andr{\'{e}} and Allende-Cid, H{\'{e}}ctor and Santana, Orivaldo and Oliveira-Lage, L{\'{i}}via},
doi = {10.7717/peerj-cs.904},
issn = {2376-5992},
journal = {PeerJ. Computer science},
keywords = {Artificial intelligence,Jurimetrics,Law,Legal,Legal NLP,Legal informatics,Legal outcome forecast,Litigation prediction,Machine learning,NLP,Portuguese,Predictive algorithms,judgement prediction},
language = {eng},
month = {mar},
pages = {e904--e904},
publisher = {PeerJ Inc.},
title = {{Predicting Brazilian Court Decisions}},
url = {https://pubmed.ncbi.nlm.nih.gov/35494851 https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9044329/},
volume = {8},
year = {2022}
}
```
### Contributions
Thanks to [@kapllan](https://github.com/kapllan) and [@joelniklaus](https://github.com/joelniklaus) for adding this
dataset.
|
rkstgr | null | @conference {bogdanov2019mtg,
author = "Bogdanov, Dmitry and Won, Minz and Tovstogan, Philip and Porter, Alastair and Serra, Xavier",
title = "The MTG-Jamendo Dataset for Automatic Music Tagging",
booktitle = "Machine Learning for Music Discovery Workshop, International Conference on Machine Learning (ICML 2019)",
year = "2019",
address = "Long Beach, CA, United States",
url = "http://hdl.handle.net/10230/42015"
} | Repackaging of the MTG Jamendo dataset.
We present the MTG-Jamendo Dataset, a new open dataset for music auto-tagging.
It is built using music available at Jamendo under Creative Commons licenses and tags provided by content creators.
The dataset contains over 55,000 full audio tracks with 195 tags from genre, instrument, and mood/theme categories. | false | 1 | false | rkstgr/mtg-jamendo | 2022-07-22T12:56:25.000Z | null | false | 8265518f6b5127d386a85ab5c380d867ff9ae70b | [] | [
"license:apache-2.0",
"size_categories:10K<n<100K",
"source_datasets:original"
] | https://huggingface.co/datasets/rkstgr/mtg-jamendo/resolve/main/README.md | ---
license:
- apache-2.0
size_categories:
- 10K<n<100K
source_datasets:
- original
pretty_name: MTG Jamendo
---
# Dataset Card for MTG Jamendo Dataset
## Dataset Description
- **Repository:** [MTG Jamendo dataset repository](https://github.com/MTG/mtg-jamendo-dataset)
### Dataset Summary
MTG-Jamendo Dataset, a new open dataset for music auto-tagging. It is built using music available at Jamendo under Creative Commons licenses and tags provided by content uploaders. The dataset contains over 55,000 full audio tracks with 195 tags from genre, instrument, and mood/theme categories. We provide elaborated data splits for researchers and report the performance of a simple baseline approach on five different sets of tags: genre, instrument, mood/theme, top-50, and overall.
## Dataset structure
### Data Fields
- `id`: an integer containing the id of the track
- `artist_id`: an integer containing the id of the artist
- `album_id`: an integer containing the id of the album
- `duration_in_sec`: duration of the track as a float
- `genres`: list of strings, describing genres the track is assigned to
- `instruments`: list of strings for the main instruments of the track
- `moods`: list of strings, describing the moods the track is assigned to
- `audio`: audio of the track
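As an illustration of the schema above (toy records only, with the `audio` field omitted for brevity — this is not code from the dataset itself), tracks can be grouped by genre like this:

```python
from collections import defaultdict

# Toy records following the fields listed above ("audio" omitted)
tracks = [
    {"id": 1, "artist_id": 10, "album_id": 100, "duration_in_sec": 215.4,
     "genres": ["rock"], "instruments": ["guitar"], "moods": ["energetic"]},
    {"id": 2, "artist_id": 11, "album_id": 101, "duration_in_sec": 180.0,
     "genres": ["rock", "pop"], "instruments": ["piano"], "moods": ["calm"]},
]

# A track can carry several genre tags, so index it under each one
by_genre = defaultdict(list)
for t in tracks:
    for g in t["genres"]:
        by_genre[g].append(t["id"])

print(dict(by_genre))  # {'rock': [1, 2], 'pop': [2]}
```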
### Data Splits
This dataset has 2 balanced splits: _train_ (90%) and _validation_ (10%)
### Licensing Information
This dataset version 1.0.0 is released under the [Apache-2.0 License](http://www.apache.org/licenses/LICENSE-2.0).
### Citation Information
```
@conference {bogdanov2019mtg,
author = "Bogdanov, Dmitry and Won, Minz and Tovstogan, Philip and Porter, Alastair and Serra, Xavier",
title = "The MTG-Jamendo Dataset for Automatic Music Tagging",
booktitle = "Machine Learning for Music Discovery Workshop, International Conference on Machine Learning (ICML 2019)",
year = "2019",
address = "Long Beach, CA, United States",
url = "http://hdl.handle.net/10230/42015"
}
``` |
israfelsr | null | null | null | false | 12 | false | israfelsr/img-wikipedia-simple | 2022-08-26T16:13:05.000Z | null | false | 8587e5a368f814fd15928af0254ee8d2b19e4471 | [] | [
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"language:en",
"multilinguality:monolingual",
"task_categories:image-to-text"
] | https://huggingface.co/datasets/israfelsr/img-wikipedia-simple/resolve/main/README.md | ---
annotations_creators:
- crowdsourced
language_creators:
- crowdsourced
language:
- en
license: []
multilinguality:
- monolingual
pretty_name: image-wikipedia-simple
size_categories: []
source_datasets: []
task_categories:
- image-to-text
---
# Dataset Card for [Dataset Name]
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
[More Information Needed]
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed] |
autoevaluate | null | null | null | false | 3 | false | autoevaluate/autoeval-staging-eval-project-f87a1758-7384796 | 2022-06-24T14:18:39.000Z | null | false | 462a6f032ed4f919672273793be2713f2baaeff8 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:banking77"
] | https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-f87a1758-7384796/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- banking77
eval_info:
task: multi_class_classification
model: mrm8488/distilroberta-finetuned-banking77
dataset_name: banking77
dataset_config: default
dataset_split: test
col_mapping:
text: text
target: label
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Multi-class Text Classification
* Model: mrm8488/distilroberta-finetuned-banking77
* Dataset: banking77
To run new evaluation jobs, visit Hugging Face's [automatic evaluation service](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model. |
autoevaluate | null | null | null | false | 1 | false | autoevaluate/autoeval-staging-eval-project-f87a1758-7384797 | 2022-06-24T14:18:40.000Z | null | false | 3de56007c5bfa71ef9157a2dd2b89d3e45870769 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:banking77"
] | https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-f87a1758-7384797/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- banking77
eval_info:
task: multi_class_classification
model: optimum/distilbert-base-uncased-finetuned-banking77
dataset_name: banking77
dataset_config: default
dataset_split: test
col_mapping:
text: text
target: label
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Multi-class Text Classification
* Model: optimum/distilbert-base-uncased-finetuned-banking77
* Dataset: banking77
To run new evaluation jobs, visit Hugging Face's [automatic evaluation service](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model. |
autoevaluate | null | null | null | false | 2 | false | autoevaluate/autoeval-staging-eval-project-f87a1758-7384798 | 2022-06-24T14:18:48.000Z | null | false | c83252ae6274b5adcd8f46d5c8bb87df1b30b49e | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:banking77"
] | https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-f87a1758-7384798/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- banking77
eval_info:
task: multi_class_classification
model: philschmid/RoBERTa-Banking77
dataset_name: banking77
dataset_config: default
dataset_split: test
col_mapping:
text: text
target: label
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Multi-class Text Classification
* Model: philschmid/RoBERTa-Banking77
* Dataset: banking77
To run new evaluation jobs, visit Hugging Face's [automatic evaluation service](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model. |
autoevaluate | null | null | null | false | 3 | false | autoevaluate/autoeval-staging-eval-project-f87a1758-7384799 | 2022-06-24T14:18:59.000Z | null | false | 9b02d3e673661c78a8ab7da08d5403c363315754 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:banking77"
] | https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-f87a1758-7384799/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- banking77
eval_info:
task: multi_class_classification
model: philschmid/BERT-Banking77
dataset_name: banking77
dataset_config: default
dataset_split: test
col_mapping:
text: text
target: label
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Multi-class Text Classification
* Model: philschmid/BERT-Banking77
* Dataset: banking77
To run new evaluation jobs, visit Hugging Face's [automatic evaluation service](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model. |
autoevaluate | null | null | null | false | 2 | false | autoevaluate/autoeval-staging-eval-project-f87a1758-7384800 | 2022-06-24T14:18:54.000Z | null | false | 046dcc16b3100df42a0fdd1e0a6369c7be2b443c | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:banking77"
] | https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-f87a1758-7384800/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- banking77
eval_info:
task: multi_class_classification
model: philschmid/DistilBERT-Banking77
dataset_name: banking77
dataset_config: default
dataset_split: test
col_mapping:
text: text
target: label
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Multi-class Text Classification
* Model: philschmid/DistilBERT-Banking77
* Dataset: banking77
To run new evaluation jobs, visit Hugging Face's [automatic evaluation service](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model. |
flexthink | null | null | Grapheme-to-Phoneme training, validation and test sets | false | 2 | false | flexthink/librig2p-nostress-space-cmu | 2022-06-28T04:16:14.000Z | null | false | 5169b6b1d2ac64e73b7395e49993e0cca0a2b7af | [] | [] | https://huggingface.co/datasets/flexthink/librig2p-nostress-space-cmu/resolve/main/README.md | # librig2p-nostress - Grapheme-To-Phoneme Dataset
This dataset contains samples that can be used to train a Grapheme-to-Phoneme system **without** stress information.
The dataset is derived from the following pre-existing datasets:
* [LibriSpeech ASR Corpus](https://www.openslr.org/12)
* [LibriSpeech Alignments](https://github.com/CorentinJ/librispeech-alignments)
* [Wikipedia Homograph Disambiguation Data](https://github.com/google/WikipediaHomographData)
* [CMUDict](http://www.speech.cs.cmu.edu/cgi-bin/cmudict)
This version of the dataset applies a correction to LibriSpeech Alignments phoneme annotations by looking up the pronunciations of known words in CMUDict and replacing them with their CMUDict counterparts only if a perfect unique match is found. This reduces the number of discrepancies between homograph data and LibriSpeech data. |
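The correction described above can be sketched roughly as follows. This is a simplified illustration, not the original conversion script: the `lexicon` mapping and the record shape are assumptions, and "perfect unique match" is interpreted here as a word having exactly one CMUDict pronunciation.

```python
# Simplified sketch of the CMUDict correction: replace a word's aligned
# phonemes with the CMUDict pronunciation only when the word has exactly
# one known pronunciation; otherwise keep the original alignment.

def correct_phonemes(words, phoneme_groups, lexicon):
    """words: list of graphemes; phoneme_groups: aligned phoneme lists;
    lexicon: word -> list of possible CMUDict pronunciations."""
    corrected = []
    for word, phonemes in zip(words, phoneme_groups):
        pronunciations = lexicon.get(word.upper(), [])
        if len(pronunciations) == 1:  # perfect, unique match only
            corrected.append(pronunciations[0])
        else:  # unknown or ambiguous word: keep the original alignment
            corrected.append(phonemes)
    return corrected

# Toy lexicon for illustration only
lexicon = {"THE": [["DH", "AH"]], "READ": [["R", "IY", "D"], ["R", "EH", "D"]]}
result = correct_phonemes(["the", "read"], [["DH", "IY"], ["R", "IY", "D"]], lexicon)
print(result)  # [['DH', 'AH'], ['R', 'IY', 'D']]
```

The first word is replaced by its unique CMUDict entry; "read" has two pronunciations, so its original alignment is left untouched.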
chinoll | null | null | null | false | 1 | false | chinoll/animeNet | 2022-06-24T17:33:05.000Z | null | false | f8034fb9872ee3f48913e7f8f21b3e0fdd73d86b | [] | [
"license:cc-by-nc-4.0"
] | https://huggingface.co/datasets/chinoll/animeNet/resolve/main/README.md | ---
license: cc-by-nc-4.0
---
|
autoevaluate | null | null | null | false | 2 | false | autoevaluate/autoeval-staging-eval-project-72b4615f-7404801 | 2022-06-24T18:19:06.000Z | null | false | 132f1d1626d354057f3db3de7ee421ed0e8a314a | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:adversarial_qa"
] | https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-72b4615f-7404801/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- adversarial_qa
eval_info:
task: extractive_question_answering
model: osanseviero/distilbert-base-uncased-finetuned-squad-d5716d28
dataset_name: adversarial_qa
dataset_config: adversarialQA
dataset_split: train
col_mapping:
context: context
question: question
answers-text: answers.text
answers-answer_start: answers.answer_start
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Question Answering
* Model: osanseviero/distilbert-base-uncased-finetuned-squad-d5716d28
* Dataset: adversarial_qa
To run new evaluation jobs, visit Hugging Face's [automatic evaluation service](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@osanseviero](https://huggingface.co/osanseviero) for evaluating this model. |
codeparrot | null | null | null | false | 25 | false | codeparrot/codecomplex | 2022-10-25T09:30:16.000Z | null | false | aa0988c3b274ae9ec75bfbac2029ed14a3241ff2 | [] | [
"language_creators:expert-generated",
"language:code",
"license:apache-2.0",
"multilinguality:monolingual",
"size_categories:unknown",
"task_categories:text-generation",
"task_ids:language-modeling"
] | https://huggingface.co/datasets/codeparrot/codecomplex/resolve/main/README.md | ---
annotations_creators: []
language_creators:
- expert-generated
language:
- code
license:
- apache-2.0
multilinguality:
- monolingual
size_categories:
- unknown
source_datasets: []
task_categories:
- text-generation
task_ids:
- language-modeling
pretty_name: CodeComplex
---
# CodeComplex Dataset
## Dataset Description
[CodeComplex](https://github.com/yonsei-toc/CodeComple) consists of 4,200 Java codes submitted to programming competitions by human programmers and their complexity labels annotated by a group of algorithm experts.
### How to use it
You can load and iterate through the dataset with the following lines of code:
```python
from datasets import load_dataset
ds = load_dataset("codeparrot/codecomplex", split="train")
print(next(iter(ds)))
```
## Data Structure
```
DatasetDict({
train: Dataset({
features: ['src', 'complexity', 'problem', 'from'],
num_rows: 4517
})
})
```
### Data Instances
```python
{'src': 'import java.io.*;\nimport java.math.BigInteger;\nimport java.util.InputMismatchException;...',
'complexity': 'quadratic',
'problem': '1179_B. Tolik and His Uncle',
'from': 'CODEFORCES'}
```
### Data Fields
* src: a string feature, representing the source code in Java.
* complexity: a string feature, giving program complexity.
* problem: a string of the feature, representing the problem name.
* from: a string feature, representing the source of the problem.
The complexity field has 7 classes, with around 500 codes per class. The seven classes are constant, linear, quadratic, cubic, log(n), nlog(n) and NP-hard.
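The class balance can be inspected with a quick count. Toy records mimicking the schema are shown here for illustration; on the real data you would iterate over `ds` instead:

```python
from collections import Counter

# Toy records following the dataset schema shown above
records = [
    {"src": "...", "complexity": "linear", "problem": "p1", "from": "CODEFORCES"},
    {"src": "...", "complexity": "quadratic", "problem": "p2", "from": "CODEFORCES"},
    {"src": "...", "complexity": "linear", "problem": "p3", "from": "CODEFORCES"},
]

# Count how many examples fall into each complexity class
counts = Counter(r["complexity"] for r in records)
print(counts.most_common())  # [('linear', 2), ('quadratic', 1)]
```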
### Data Splits
The dataset only contains a train split.
## Dataset Creation
The authors first collected problems and their solution codes in Java from CodeForces; these were then inspected by experienced human annotators, who labelled each code with its time complexity. After the labelling, a different group of programming experts verified the class that the human annotators had assigned to each example.
## Citation Information
```
@article{JeonBHHK22,
author = {Mingi Jeon and Seung-Yeop Baik and Joonghyuk Hahn and Yo-Sub Han and Sang-Ki Ko},
title = {{Deep Learning-based Code Complexity Prediction}},
year = {2022},
}
``` |
rjac | null | null | null | false | 3 | false | rjac/kaggle-entity-annotated-corpus-ner-dataset-oversampled | 2022-06-26T01:48:24.000Z | null | false | ee7c27097d3f5b1c296f6f5d88328942beb45435 | [] | [] | https://huggingface.co/datasets/rjac/kaggle-entity-annotated-corpus-ner-dataset-oversampled/resolve/main/README.md | this dataset is the same as [rjac/kaggle-entity-annotated-corpus-ner-dataset](https://huggingface.co/datasets/rjac/kaggle-entity-annotated-corpus-ner-dataset)
with oversampled instances of 'ART', 'EVE'and 'NAT' entities (25K of all three classes).
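Oversampling of this kind can be sketched generically as follows. This is an illustration of the technique, not the exact script used to build this dataset; the predicate and target count are assumptions:

```python
import random

def oversample(examples, is_minority, target_count, seed=0):
    """Duplicate minority-class examples (chosen at random with a fixed
    seed) until the minority class reaches target_count instances."""
    rng = random.Random(seed)
    minority = [e for e in examples if is_minority(e)]
    extra = [rng.choice(minority) for _ in range(target_count - len(minority))]
    return examples + extra

# Toy usage: 5 majority examples, 2 minority 'ART' examples
examples = [{"tag": "ORG"}] * 5 + [{"tag": "ART"}] * 2
balanced = oversample(examples, lambda e: e["tag"] == "ART", target_count=5)
print(len(balanced))  # 10: the 7 originals plus 3 duplicated 'ART' examples
```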
|
jvanz | null | null | null | false | 2 | false | jvanz/querido_diario | 2022-07-06T02:29:33.000Z | null | false | 8c6732f1029b37d4a31d6354b940a192bffc5fa5 | [] | [] | https://huggingface.co/datasets/jvanz/querido_diario/resolve/main/README.md | Dataset generated from the files crawled by the [Querido Diario](https://github.com/okfn-brasil/querido-diario) project. |
LHF | null | @misc{TODO
} | Spanish dataset | false | 3 | false | LHF/escorpius | 2022-07-15T13:57:59.000Z | null | false | 2fe88697bc5c4351202b4bcc03a826967a681f1c | [] | [
"arxiv:2206.15147",
"license:cc-by-nc-nd-4.0",
"language:es",
"multilinguality:monolingual",
"size_categories:100M<n<1B",
"source_datasets:original",
"task_categories:text-generation"
] | https://huggingface.co/datasets/LHF/escorpius/resolve/main/README.md | ---
license: cc-by-nc-nd-4.0
language:
- es
multilinguality:
- monolingual
size_categories:
- 100M<n<1B
source_datasets:
- original
task_categories:
- language-modelling
- text-generation
- sequence-modelling
---
# esCorpius: A Massive Spanish Crawling Corpus
## Introduction
In recent years, Transformer-based models have led to significant advances in language modelling for natural language processing. However, they require a vast amount of data to be (pre-)trained, and there is a lack of corpora in languages other than English. Recently, several initiatives have presented multilingual datasets obtained from automatic web crawling. However, the results in Spanish present important shortcomings, as they are either too small in comparison with other languages, or present a low quality derived from sub-optimal cleaning and deduplication. In this paper, we introduce esCorpius, a Spanish crawling corpus obtained from nearly 1 PB of Common Crawl data. It is the most extensive corpus in Spanish with this level of quality in the extraction, purification and deduplication of web textual content. Our data curation process involves a novel highly parallel cleaning pipeline and encompasses a series of deduplication mechanisms that together ensure the integrity of both document and paragraph boundaries. Additionally, we maintain both the source web page URL and the WARC shard origin URL in order to comply with EU regulations. esCorpius has been released under a CC BY-NC-ND 4.0 license.
## Statistics
| **Corpus** | OSCAR<br>22.01 | mC4 | CC-100 | ParaCrawl<br>v9 | esCorpius<br>(ours) |
|-------------------------|----------------|--------------|-----------------|-----------------|-------------------------|
| **Size (ES)** | 381.9 GB | 1,600.0 GB | 53.3 GB | 24.0 GB | 322.5 GB |
| **Docs (ES)** | 51M | 416M | - | - | 104M |
| **Words (ES)** | 42,829M | 433,000M | 9,374M | 4,374M | 50,773M |
| **Lang.<br>identifier** | fastText | CLD3 | fastText | CLD2 | CLD2 + fastText |
| **Elements** | Document | Document | Document | Sentence | Document and paragraph |
| **Parsing quality** | Medium | Low | Medium | High | High |
| **Cleaning quality** | Low | No cleaning | Low | High | High |
| **Deduplication** | No | No | No | Bicleaner | dLHF |
| **Language** | Multilingual | Multilingual | Multilingual | Multilingual | Spanish |
| **License** | CC-BY-4.0 | ODC-By-v1.0 | Common<br>Crawl | CC0 | CC-BY-NC-ND |
## Citation
Link to the paper: https://arxiv.org/abs/2206.15147
Cite this work:
```
@misc{https://doi.org/10.48550/arxiv.2206.15147,
doi = {10.48550/ARXIV.2206.15147},
url = {https://arxiv.org/abs/2206.15147},
author = {Gutiérrez-Fandiño, Asier and Pérez-Fernández, David and Armengol-Estapé, Jordi and Griol, David and Callejas, Zoraida},
keywords = {Computation and Language (cs.CL), Artificial Intelligence (cs.AI), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {esCorpius: A Massive Spanish Crawling Corpus},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
## Disclaimer
We did not perform any kind of filtering and/or censorship to the corpus. We expect users to do so applying their own methods. We are not reliable for any misuse of the corpus. |
NbAiLab | null | null | null | false | 3 | false | NbAiLab/newspaperimagescomplete | 2022-06-27T06:57:15.000Z | null | false | 030af287cc0a6c6f5662ca5e41b49cb19763eefe | [] | [] | https://huggingface.co/datasets/NbAiLab/newspaperimagescomplete/resolve/main/README.md | |
bazyl | null | null | null | false | 45 | false | bazyl/GTSRB | 2022-10-25T10:39:19.000Z | null | false | a8093a9c7757b59d64702f892002542e8f3a1fb0 | [] | [
"annotations_creators:crowdsourced",
"language_creators:found",
"license:gpl-3.0",
"size_categories:10K<n<100K",
"source_datasets:original",
"task_categories:image-classification",
"task_ids:multi-label-image-classification"
] | https://huggingface.co/datasets/bazyl/GTSRB/resolve/main/README.md | ---
annotations_creators:
- crowdsourced
language_creators:
- found
language: []
license:
- gpl-3.0
multilinguality: []
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- image-classification
task_ids:
- multi-label-image-classification
pretty_name: GTSRB
---
# Dataset Card for GTSRB
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-instances)
- [Data Splits](#data-instances)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** http://www.sciencedirect.com/science/article/pii/S0893608012000457
- **Repository:** https://github.com/bazylhorsey/gtsrb/
- **Paper:** Man vs. computer: Benchmarking machine learning algorithms for traffic sign recognition
- **Leaderboard:** https://benchmark.ini.rub.de/gtsrb_results.html
- **Point of Contact:** bhorsey16@gmail.com
### Dataset Summary
The German Traffic Sign Benchmark is a multi-class, single-image classification challenge held at the International Joint Conference on Neural Networks (IJCNN) 2011. We cordially invite researchers from relevant fields to participate: The competition is designed to allow for participation without special domain knowledge. Our benchmark has the following properties:
- Single-image, multi-class classification problem
- More than 40 classes
- More than 50,000 images in total
- Large, lifelike database
### Supported Tasks and Leaderboards
[Kaggle](https://www.kaggle.com/datasets/meowmeowmeowmeowmeow/gtsrb-german-traffic-sign) \
[Original](https://benchmark.ini.rub.de/gtsrb_results.html)
## Dataset Structure
### Data Instances
```
{
"Width": 31,
"Height": 31,
"Roi.X1": 6,
"Roi.Y1": 6,
"Roi.X2": 26,
"Roi.Y2": 26,
"ClassId": 20,
"Path": "Train/20/00020_00004_00002.png",
}
```
### Data Fields
- Width: width of image
- Height: Height of image
- Roi.X1: Upper left X coordinate
- Roi.Y1: Upper left Y coordinate
- Roi.X2: Lower right X coordinate
- Roi.Y2: Lower right Y coordinate
- ClassId: Class of image
- Path: Path of image
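The `Roi.*` fields describe a bounding box around the sign inside the image. A minimal sketch of building a crop box from a record (field names taken from the schema above; actual image loading, e.g. via Pillow, is only indicated in a comment) might look like:

```python
def roi_box(record):
    """Return the (left, upper, right, lower) crop box for a record,
    built from the Roi.* fields described above."""
    return (record["Roi.X1"], record["Roi.Y1"], record["Roi.X2"], record["Roi.Y2"])

# Record taken from the data instance shown earlier in this card
record = {"Width": 31, "Height": 31, "Roi.X1": 6, "Roi.Y1": 6,
          "Roi.X2": 26, "Roi.Y2": 26, "ClassId": 20,
          "Path": "Train/20/00020_00004_00002.png"}
box = roi_box(record)
print(box)  # (6, 6, 26, 26)
# With Pillow one could then crop the sign: Image.open(record["Path"]).crop(box)
```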
### Data Splits
Categories: 42
Train: 39209
Test: 12630
## Dataset Creation
### Curation Rationale
Recognition of traffic signs is a challenging real-world problem of high industrial relevance. Although commercial systems have reached the market and several studies on this topic have been published, systematic unbiased comparisons of different approaches are missing and comprehensive benchmark datasets are not freely available.
Traffic sign recognition is a multi-class classification problem with unbalanced class frequencies. Traffic signs can provide a wide range of variations between classes in terms of color, shape, and the presence of pictograms or text. However, there exist subsets of classes (e. g., speed limit signs) that are very similar to each other.
The classifier has to cope with large variations in visual appearances due to illumination changes, partial occlusions, rotations, weather conditions, etc.
Humans are capable of recognizing the large variety of existing road signs with close to 100% correctness. This does not only apply to real-world driving, which provides both context and multiple views of a single traffic sign, but also to the recognition from single images.
<!-- ### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
[Needs More Information]
### Citation Information
[Needs More Information] -->
|
Mithil | null | null | null | false | 2 | false | Mithil/amazonFakeReview | 2022-06-25T02:12:18.000Z | null | false | 17c2878bdcf8b76fd8fc626c61644b456875ef1f | [] | [
"license:afl-3.0"
] | https://huggingface.co/datasets/Mithil/amazonFakeReview/resolve/main/README.md | ---
license: afl-3.0
---
|
pcy | null | null | null | false | 2 | false | pcy/autotrain-data-test_sum | 2022-10-23T06:18:13.000Z | null | false | b7ab718383f81b57ab16ebd780990265e234f79d | [] | [
"language:zh"
] | https://huggingface.co/datasets/pcy/autotrain-data-test_sum/resolve/main/README.md | ---
language:
- zh
task_categories:
- conditional-text-generation
---
# AutoTrain Dataset for project: test_sum
## Dataset Description
This dataset has been automatically processed by AutoTrain for project test_sum.
### Languages
The BCP-47 code for the dataset's language is zh.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"text": "7\u67086\u65e5\uff0c\u4e2d\u963f\u5408\u4f5c\u8bba\u575b\u7b2c\u4e5d\u5c4a\u90e8\u957f\u7ea7\u4f1a\u8bae\u56e0\u65b0\u51a0\u80ba\u708e\u75ab\u60c5\u4ee5\u89c6\u9891\u8fde\u7ebf\u65b9\u5f0f\u4e3e\u884c\u3002\n\u672c\u5c4a\u4f1a\u8bae\u53d6\u5f97\u4e86\u5706\u6ee1\u6210\u529f\uff0c\u53d1\u8868\u4e09\u4efd\u6210\u679c\u6587\u4ef6\uff0c\u9ad8\u5ea6\u51dd\u805a\u4e2d\u963f\u5171\u8bc6\u3002\u300a\u4e2d\u56fd\u548c\u963f\u62c9\u4f2f\u56fd\u5bb6\u56e2\u7ed3\u6297\u51fb\u65b0\u51a0\u80ba\u708e\u75ab\u60c5\u8054\u5408\u58f0\u660e\u300b\u5c55\u73b0\u4e86\u4e2d\u963f\u6218\u80dc\u75ab\u60c5[...]",
"target": "\u671b\u6d77\u697c\u52a0\u5f3a\u5408\u4f5c\u5171\u514b\u65f6\u8270\u643a\u624b\u524d\u884c"
},
{
"text": "\u4e60\u8fd1\u5e73\u603b\u4e66\u8bb0\u6307\u51fa\uff1a\u201c\u6293\u4f4f\u4e86\u521b\u65b0\uff0c\u5c31\u6293\u4f4f\u4e86\u7275\u52a8\u7ecf\u6d4e\u793e\u4f1a\u53d1\u5c55\u5168\u5c40\u7684\u2018\u725b\u9f3b\u5b50\u2019\u3002\u201d\u201c\u8c01\u5728\u521b\u65b0\u4e0a\u5148\u884c\u4e00\u6b65\uff0c\u8c01\u5c31\u80fd\u62e5\u6709\u5f15\u9886\u53d1\u5c55\u7684\u4e3b\u52a8\u6743\u3002\u201d\n\u6293\u521b\u65b0\u5c31\u662f\u6293\u53d1\u5c55\uff0c\u8c0b\u521b\u65b0\u5c31\u662f\u8c0b\u672a\u6765\u3002\u5317\u4eac\u9ad8\u6807\u51c6\u63a8\u8fdb\u201c\u4e24\u533a\u201d\u5efa\u8bbe\uff0c\u6838\u5fc3\u4efb[...]",
"target": "\u6293\u521b\u65b0\u5c31\u662f\u6293\u53d1\u5c55"
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"text": "Value(dtype='string', id=None)",
"target": "Value(dtype='string', id=None)"
}
```
### Dataset Splits
This dataset is split into a train and validation split. The split sizes are as follows:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 1343 |
| valid | 336 |
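The `text`/`target` schema above can be checked mechanically. Below is a minimal sketch (a hypothetical helper, not part of the AutoTrain tooling) that validates records against the card's two string fields without downloading the dataset:

```python
# Sketch: validate records against the schema shown on this card
# (exactly two string fields, "text" and "target").
# This is an illustrative helper only; it does not download anything.
EXPECTED_FIELDS = {"text", "target"}

def is_valid(record: dict) -> bool:
    """Return True if the record has exactly the expected string fields."""
    return (
        set(record) == EXPECTED_FIELDS
        and all(isinstance(record[key], str) for key in EXPECTED_FIELDS)
    )

sample = {"text": "抓创新就是抓发展，谋创新就是谋未来。", "target": "抓创新就是抓发展"}
print(is_valid(sample))          # → True
print(is_valid({"text": "x"}))   # → False (missing "target")
```

A check like this is useful before feeding AutoTrain exports into a separate training pipeline, since a record with extra or non-string fields would otherwise fail later at tokenization time.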
|
mustapha | null | null | null | false | 3 | false | mustapha/QuranExe | 2022-07-20T15:33:24.000Z | null | false | b7b32323718ea1811372e7dd85079d4f0be1f16c | [] | [
"annotations_creators:no-annotation",
"language_creators:expert-generated",
"language:ar",
"license:mit",
"multilinguality:multilingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"task_categories:text-generation",
"task_categories:fill-mask",
"task_categories:sentence-similarity",... | https://huggingface.co/datasets/mustapha/QuranExe/resolve/main/README.md | ---
annotations_creators:
- no-annotation
language_creators:
- expert-generated
language:
- ar
license:
- mit
multilinguality:
- multilingual
paperswithcode_id: null
pretty_name: QuranExe
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- text-generation
- fill-mask
- sentence-similarity
task_ids:
- language-modeling
- masked-language-modeling
---
## Dataset Description
- **Size of downloaded dataset files:** 126 MB
This dataset contains the exegeses/tafsirs (تفسير القرآن) of the holy Quran in Arabic by 8 exegetes.
This is a non-official dataset. It has been scraped from the `Quran.com API`.
This dataset contains `49888` records with over 14 million words: `8` records per Quranic verse.
Usage Example :
```python
from datasets import load_dataset
tafsirs = load_dataset("mustapha/QuranExe")
``` |
autoevaluate | null | null | null | false | 2 | false | autoevaluate/autoeval-staging-eval-project-5ece7d74-70d9-4701-a9b7-1777e66ed4b0-5145 | 2022-06-25T08:05:40.000Z | null | false | 9ecd0450d4ce5378973825ae2f93e15648c0da3d | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:emotion"
] | https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-5ece7d74-70d9-4701-a9b7-1777e66ed4b0-5145/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- emotion
eval_info:
task: multi_class_classification
model: autoevaluate/multi-class-classification
metrics: []
dataset_name: emotion
dataset_config: default
dataset_split: test
col_mapping:
text: text
target: label
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Multi-class Text Classification
* Model: autoevaluate/multi-class-classification
* Dataset: emotion
To run new evaluation jobs, visit Hugging Face's [automatic evaluation service](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model. |
autoevaluate | null | null | null | false | 3 | false | autoevaluate/autoeval-staging-eval-project-bba54b81-5330-48f8-b7bf-1cb797f93bcf-5246 | 2022-06-25T08:17:13.000Z | null | false | 4c8baf4b8f039e38a101b9e18ac1c7c5b3cc7a51 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:emotion"
] | https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-bba54b81-5330-48f8-b7bf-1cb797f93bcf-5246/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- emotion
eval_info:
task: multi_class_classification
model: autoevaluate/multi-class-classification
metrics: ['matthews_correlation']
dataset_name: emotion
dataset_config: default
dataset_split: test
col_mapping:
text: text
target: label
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Multi-class Text Classification
* Model: autoevaluate/multi-class-classification
* Dataset: emotion
To run new evaluation jobs, visit Hugging Face's [automatic evaluation service](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model. |
autoevaluate | null | null | null | false | 2 | false | autoevaluate/autoeval-staging-eval-project-21811dfd-a09c-4692-82b2-7e358a2520ce-5347 | 2022-06-25T08:26:38.000Z | null | false | 8466e829412dd77cd4bd6d7ff5b17176bcb68bff | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:glue"
] | https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-21811dfd-a09c-4692-82b2-7e358a2520ce-5347/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- glue
eval_info:
task: binary_classification
model: autoevaluate/binary-classification
dataset_name: glue
dataset_config: sst2
dataset_split: validation
col_mapping:
text: sentence
target: label
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Binary Text Classification
* Model: autoevaluate/binary-classification
* Dataset: glue
To run new evaluation jobs, visit Hugging Face's [automatic evaluation service](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model. |
autoevaluate | null | null | null | false | 2 | false | autoevaluate/autoeval-staging-eval-project-840224bd-ff8b-4526-8827-e12d96f6c7bf-5448 | 2022-06-25T08:34:15.000Z | null | false | b9b11cf76caa251ce544c1567b8f1af8be4dc04e | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:glue"
] | https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-840224bd-ff8b-4526-8827-e12d96f6c7bf-5448/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- glue
eval_info:
task: binary_classification
model: autoevaluate/binary-classification
metrics: ['matthews_correlation']
dataset_name: glue
dataset_config: sst2
dataset_split: validation
col_mapping:
text: sentence
target: label
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Binary Text Classification
* Model: autoevaluate/binary-classification
* Dataset: glue
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model. |
autoevaluate | null | null | null | false | 3 | false | autoevaluate/autoeval-staging-eval-project-896d78da-9e5e-4706-b736-32d4a31ff571-5549 | 2022-06-25T08:40:11.000Z | null | false | e60de7b9cf5a2e12c9321c6a1f012d929869c05f | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:autoevaluate/mnist-sample"
] | https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-896d78da-9e5e-4706-b736-32d4a31ff571-5549/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- autoevaluate/mnist-sample
eval_info:
task: image_multi_class_classification
model: autoevaluate/image-multi-class-classification
metrics: ['matthews_correlation']
dataset_name: autoevaluate/mnist-sample
dataset_config: autoevaluate--mnist-sample
dataset_split: test
col_mapping:
image: image
target: label
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Multi-class Image Classification
* Model: autoevaluate/image-multi-class-classification
* Dataset: autoevaluate/mnist-sample
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model. |
autoevaluate | null | null | null | false | 2 | false | autoevaluate/autoeval-staging-eval-project-6715a17f-ec96-4660-9a86-49fe175a04f1-5650 | 2022-06-25T08:48:52.000Z | null | false | 1cc3c98dba3490e9baf21032dbb0e22478bd021d | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:wmt16"
] | https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-6715a17f-ec96-4660-9a86-49fe175a04f1-5650/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- wmt16
eval_info:
task: translation
model: autoevaluate/translation
metrics: []
dataset_name: wmt16
dataset_config: ro-en
dataset_split: test
col_mapping:
source: translation.ro
target: translation.en
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Translation
* Model: autoevaluate/translation
* Dataset: wmt16
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model. |
autoevaluate | null | null | null | false | 3 | false | autoevaluate/autoeval-staging-eval-project-62ca8f86-389e-4833-9ccf-a97cadcf4874-5751 | 2022-06-25T08:59:10.000Z | null | false | f2c69440251afcf9073cf02763f78d5e4028c80c | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:xsum"
] | https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-62ca8f86-389e-4833-9ccf-a97cadcf4874-5751/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- xsum
eval_info:
task: summarization
model: autoevaluate/summarization
metrics: []
dataset_name: xsum
dataset_config: default
dataset_split: test
col_mapping:
text: document
target: summary
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: autoevaluate/summarization
* Dataset: xsum
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model. |
autoevaluate | null | null | null | false | 3 | false | autoevaluate/autoeval-staging-eval-project-fed20ca6-7444804 | 2022-06-25T09:25:01.000Z | null | false | dcd8aacae4514b44aae68d36afdc61a22ef98534 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:wikiann"
] | https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-fed20ca6-7444804/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- wikiann
eval_info:
task: entity_extraction
model: transformersbook/xlm-roberta-base-finetuned-panx-all
metrics: ['matthews_correlation']
dataset_name: wikiann
dataset_config: en
dataset_split: test
col_mapping:
tokens: tokens
tags: ner_tags
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Token Classification
* Model: transformersbook/xlm-roberta-base-finetuned-panx-all
* Dataset: wikiann
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model. |
autoevaluate | null | null | null | false | 3 | false | autoevaluate/autoeval-staging-eval-project-17e9fcc1-7454805 | 2022-06-25T09:34:15.000Z | null | false | b076ba7227761f3e25116ea7b40f0cb0115d946e | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:ag_news"
] | https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-17e9fcc1-7454805/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- ag_news
eval_info:
task: multi_class_classification
model: andi611/distilbert-base-uncased-ner-agnews
metrics: []
dataset_name: ag_news
dataset_config: default
dataset_split: test
col_mapping:
text: text
target: label
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Multi-class Text Classification
* Model: andi611/distilbert-base-uncased-ner-agnews
* Dataset: ag_news
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model. |
autoevaluate | null | null | null | false | 3 | false | autoevaluate/autoeval-staging-eval-project-17e9fcc1-7454810 | 2022-06-25T09:35:01.000Z | null | false | cbc9a1fccd0d5c7e84ca53b2c5744ec75e4ce334 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:ag_news"
] | https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-17e9fcc1-7454810/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- ag_news
eval_info:
task: multi_class_classification
model: mrm8488/distilroberta-finetuned-age_news-classification
metrics: []
dataset_name: ag_news
dataset_config: default
dataset_split: test
col_mapping:
text: text
target: label
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Multi-class Text Classification
* Model: mrm8488/distilroberta-finetuned-age_news-classification
* Dataset: ag_news
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model. |
smangrul | null | null | null | false | 2 | false | smangrul/taskmaster-processed | 2022-06-25T11:31:15.000Z | null | false | ff943fb71483817023c827dd7bf1f9a1edff052e | [] | [
"license:cc-by-nc-4.0"
] | https://huggingface.co/datasets/smangrul/taskmaster-processed/resolve/main/README.md | ---
license: cc-by-nc-4.0
---
|