text-classification bool 2 classes | text stringlengths 0 664k |
|---|---|
false |
# Plantations Segmentation
The dataset consists of aerial photographs of agricultural plantations growing crops such as cabbage and zucchini, with object and class segmentation masks for cabbage. It supports agricultural tasks such as plant detection and counting, health assessment, and irrigation planning.
# Get the Dataset
**This is just an example of the data.** If you need access to the entire dataset, contact us via **[sales@trainingdata.pro](mailto:sales@trainingdata.pro)** or leave a request on **[https://trainingdata.pro/data-market](https://trainingdata.pro/data-market?utm_source=huggingface)**

# Dataset structure
- **Plantations_Segmentation** - contains the original plantation images (folder **img**) and the annotation file (.xml)
- **Object_Segmentation** - includes object segmentation masks for the original images
- **Class_Segmentation** - includes class segmentation masks for the original images
# Types of segmentation
The dataset includes two types of segmentation:
- **Class Segmentation** - objects corresponding to one class are identified
- **Object Segmentation** - all objects are identified separately
# Data Format
Each image from `img` folder is accompanied by an XML-annotation in the `annotations.xml` file indicating the coordinates of the polygons. For each point, the x and y coordinates are provided.
# Example of XML file structure
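The original illustration could not be reproduced here. As a rough sketch, polygon annotations of this kind might be parsed as follows, assuming a CVAT-style `annotations.xml` in which each `<image>` holds `<polygon>` elements with a `label` and a `points="x1,y1;x2,y2;..."` attribute (the element and attribute names are assumptions):
```python
# Minimal sketch: parse polygon annotations from a CVAT-style annotations.xml.
# The tag/attribute names (image, polygon, points, label) are assumptions and
# may need to be adapted to the actual file.
import xml.etree.ElementTree as ET

def read_polygons(xml_path):
    """Return {image_name: [(label, [(x, y), ...]), ...]}."""
    root = ET.parse(xml_path).getroot()
    annotations = {}
    for image in root.iter("image"):
        polygons = []
        for poly in image.iter("polygon"):
            points = [
                tuple(float(v) for v in pair.split(","))
                for pair in poly.attrib["points"].split(";")
            ]
            polygons.append((poly.attrib.get("label"), points))
        annotations[image.attrib["name"]] = polygons
    return annotations

# Example usage:
# polygons = read_polygons("Plantations_Segmentation/annotations.xml")
```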
# Plantation segmentation can be performed in accordance with your requirements.
## **[TrainingData](https://trainingdata.pro/data-market?utm_source=huggingface)** provides high-quality data annotation tailored to your needs
More datasets in TrainingData's Kaggle account: **https://www.kaggle.com/trainingdatapro/datasets**
TrainingData's GitHub: **https://github.com/trainingdata-pro** |
false | # M2CRB
## How to get the data with a given language combination
```python
from datasets import load_dataset

def get_dataset(prog_lang, nat_lang):
    # Keep only rows matching the requested programming language /
    # docstring (natural) language combination.
    test_data = load_dataset("blindsubmissions/M2CRB")
    test_data = test_data.filter(
        lambda example: example["docstring_language"] == nat_lang
        and example["language"] == prog_lang
    )
    return test_data
```
## Licensing Information
M2CRB is a subset filtered and pre-processed from [The Stack](https://huggingface.co/datasets/bigcode/the-stack), a collection of source code from repositories with various licenses. Any use of all or part of the code gathered in M2CRB must abide by the terms of the original licenses. |
false |
## Instruction Tuning: GeoSignal
Scientific domain adaptation has two main steps during instruction tuning.
- Instruction tuning with general instruction-tuning data. Here we use Alpaca-GPT4.
- Instruction tuning with restructured domain knowledge, which we call expertise instruction tuning. For K2, we use knowledge-intensive instruction data, GeoSignal.
***The following illustrates the recipe for training a domain-specific language model:***

- **Adapter Model on [Huggingface](https://huggingface.co/): [daven3/k2_it_adapter](https://huggingface.co/daven3/k2_it_adapter)**
For the design of the GeoSignal, we collect knowledge from various data sources, like:

GeoSignal is designed for knowledge-intensive instruction tuning and used for aligning with experts.
The full version will be uploaded soon; alternatively, email [daven](mailto:davendw@sjtu.edu.cn) for potential research cooperation.
|
false |
# Benchmark: GeoBenchmark
For the objective tasks in GeoBenchmark, we collect 183 multiple-choice questions from the NPEE and 1,395 from AP Tests.
For the subjective tasks, we gather all 939 subjective NPEE questions and use 50 of them to measure the baselines with human evaluation.
|
false | |
true | |
false |
Alpaca tasks dataset translated into Greek using GPT-3.5.
Translation was done in chunks of 10K. |
false | 1111 |
false | # Google Conceptual Captions in Vietnamese
This is the Vietnamese version of the Google Conceptual Captions dataset. It has more than 3.3 million image URLs with captions and was built using the Google Translate API. The Vietnamese version has exactly the same metadata as the English one; the only difference is the caption content.
I provide both English and Vietnamese `.tsv` files. For the English one, one can go to alternative sources:
- https://huggingface.co/datasets/conceptual_captions
- https://github.com/google-research-datasets/conceptual-captions
To download the dataset, one can use the tool:
- https://github.com/rom1504/img2dataset/blob/main/dataset_examples/cc3m.md
Or just iterate over the file line by line (`caption<tab>url`), as sketched below.
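A minimal sketch of that line-by-line option (the file name is an assumption; point it at the actual `.tsv`):
```python
# Minimal sketch: iterate over a caption<TAB>url file line by line.
# The file name below is an assumption; adjust it to the actual .tsv file.
with open("gcc_vi.tsv", encoding="utf-8") as f:
    for line in f:
        caption, url = line.rstrip("\n").split("\t", 1)
        # ... download `url` or store the (caption, url) pair ...
        print(caption, url)
        break  # remove this to process every line
```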
⚠ Note:
- Some of the image URLs might die over time ([liuhaotian/LLaVA-CC3M-Pretrain-595K](https://huggingface.co/datasets/liuhaotian/LLaVA-CC3M-Pretrain-595K) reported that 15% of the original dataset is inaccessible). I'm not responsible for them. |
false |
# Outdoor Garbage Dataset
The dataset consists of images of garbage cans of various capacities and types. It is well suited to training a neural network to monitor the timely removal of garbage and to organize the logistics of garbage-collection vehicles. The dataset is useful for recommendation systems, for optimizing and automating the work of community services, and for smart-city applications.
# Get the Dataset
This is just an example of the data.
If you need access to the entire dataset, contact us via [sales@trainingdata.pro](mailto:sales@trainingdata.pro) or leave a request on **https://trainingdata.pro/data-market?utm_source=huggingface**
# Content
The dataset includes 10,000 images of trash cans:
- at different times of day
- in different weather conditions
## Types of garbage can capacity
- **is_full** - at least one of the trash cans shown in the photo is completely full. This type includes cans filled to the top and overflowing cans.
- **is_empty** - the garbage cans have free space; they could be half full or completely empty.
- **is_scattered** - this tag is added alongside is_empty or is_full. It means that garbage (bulky garbage bags or building waste, but not single items) is scattered nearby.
# Data Format
Each image from the `img` folder is accompanied by an XML annotation in the `annotations.xml` file indicating the labeled garbage can capacity types for each image in the dataset.
# Example of XML file structure
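Since the structure illustration could not be reproduced here, below is a rough sketch of reading image-level capacity tags, assuming a CVAT-style `annotations.xml` where each `<image>` element carries `<tag label="...">` children (the element and attribute names are assumptions):
```python
# Minimal sketch: collect the capacity tags (is_full / is_empty / is_scattered)
# attached to each image in a CVAT-style annotations.xml. Element names are assumptions.
import xml.etree.ElementTree as ET

def read_tags(xml_path):
    """Return {image_name: [tag_label, ...]}."""
    root = ET.parse(xml_path).getroot()
    return {
        image.attrib["name"]: [tag.attrib["label"] for tag in image.iter("tag")]
        for image in root.iter("image")
    }

# Example usage:
# tags = read_tags("annotations.xml")
```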
**[TrainingData](https://trainingdata.pro/data-market?utm_source=huggingface)** provides high-quality data annotation tailored to your needs.
More datasets in TrainingData's Kaggle account: **https://www.kaggle.com/trainingdatapro/datasets**
TrainingData's GitHub: **https://github.com/trainingdata-pro** |
false | # Docstring to code data
## Licensing Information
M2CRB is a subset filtered and pre-processed from [The Stack](https://huggingface.co/datasets/bigcode/the-stack), a collection of source code from repositories with various licenses. Any use of all or part of the code gathered in M2CRB must abide by the terms of the original licenses. |
false |
# Dataset of bald people
The dataset consists of 5,000 photos of people at the 7 stages of hair loss according to the Norwood scale. It is useful for training neural networks for recommendation systems, for optimizing the work processes of trichologists, and for applications in the medical and beauty spheres.
# Get the Dataset
This is just an example of the data.
If you need access to the entire dataset, contact us via [sales@trainingdata.pro](mailto:sales@trainingdata.pro) or leave a request on **https://trainingdata.pro/data-market?utm_source=huggingface**
# Image
Similar images are presented in the dataset:

# Hamilton–Norwood scale
- **type_1**: There is a lack of bilateral recessions along the anterior border of the hairline in the frontoparietal regions. No notable hair loss or recession of the hairline.
- **type_2**: There is a small recession of the hairline around the temples. Hair is also lost, or sparse, along the midfrontal border of the scalp, but the depth of the affected area is much less than in the frontoparietal regions. This is commonly referred to as an adult or mature hairline.
- **type_3**: The first signs of significant balding appear. There is a deep, symmetrical recession at the temples, which is only sparsely covered by hair.
- **type_4**: The hairline recession is harsher than in stage 2, and there is scattered hair or no hair on the vertex. There are deep frontotemporal recessions, usually symmetrical, and are either bare or very sparsely covered by hair.
- **type_5**: The areas of hair loss are more significant than in stage 4. They are still divided, but the band of hair between them is thinner and sparser.
- **type_6**: The connection of hair that crosses the crown is gone with only sparse hair remaining. The frontotemporal and vertex regions are joined together, and the extent of hair loss is more significant.
- **type_7**: The most drastic stage of hair loss: only a band of hair going around the sides of the head persists. This hair usually is not thick and may be fine.

# Data Format
Each image from the `img` folder is accompanied by an XML annotation in the `annotations.xml` file indicating the Hamilton–Norwood type of hair loss for each person in the dataset.
# Example of XML file structure

**[TrainingData](https://trainingdata.pro/data-market?utm_source=huggingface)** provides high-quality data annotation tailored to your needs.
More datasets in TrainingData's Kaggle account: **https://www.kaggle.com/trainingdatapro/datasets**
TrainingData's GitHub: **https://github.com/trainingdata-pro** |
false | # COCO 2017 image captions in Vietnamese
The dataset was first introduced in [dinhanhx/VisualRoBERTa](https://github.com/dinhanhx/VisualRoBERTa/tree/main). I use VinAI tools to translate the [COCO 2017 image captions](https://cocodataset.org/#download) (2017 Train/Val annotations) from English to Vietnamese. Then I merge the [UIT-ViIC](https://arxiv.org/abs/2002.00175) dataset into it. To load the dataset, one can take a look at [this code in VisualRoBERTa](https://github.com/dinhanhx/VisualRoBERTa/blob/main/src/data.py#L22-L100).
I provide both the English original and the Vietnamese version (including UIT-ViIC).
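A minimal sketch of reading one of the caption files, assuming it keeps the standard COCO caption layout (`images` and `annotations` keys, each annotation holding an `image_id` and a `caption`); the exact file path is an assumption:
```python
# Minimal sketch: read a COCO-style caption file and pair captions with image file names.
# Assumes the translated JSON keeps the standard COCO caption layout; adjust the path as needed.
import json

with open("vi/captions_train2017_trans_plus.json", encoding="utf-8") as f:
    coco = json.load(f)

id_to_file = {img["id"]: img["file_name"] for img in coco["images"]}
pairs = [(id_to_file[ann["image_id"]], ann["caption"]) for ann in coco["annotations"]]
print(pairs[0])
```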
⚠ Note:
- The UIT-ViIC splits originate from `en/captions_train2017.json`. Therefore, I combine all UIT-ViIC splits and merge them into `vi/captions_train2017_trans.json`, which yields `captions_train2017_trans_plus.json`.
- `vi/captions_train2017_trans.json` and `vi/captions_val2017_trans.json` are VinAI-translated from the ones in `en/`. |
false |
Original Dataset [JeanKaddour/minipile](https://huggingface.co/datasets/JeanKaddour/minipile)
See the [Thought Tokens Repository](https://github.com/ZelaAI/thought-tokens) for demonstration of streaming usage of this dataset and specific implementation of how this dataset was prepared.
Tokenized with the GPT-NeoX tokenizer and split into sequences of length 513, intended to yield 512 input ids and 512 target ids (the targets are the inputs shifted one position to the right), as sketched below.
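A minimal sketch of how one length-513 sequence yields the input/target pair (variable names are illustrative, not taken from the repository):
```python
# Minimal sketch: one length-513 token sequence yields 512 input ids and
# 512 target ids, where the targets are the inputs shifted by one position.
sequence = list(range(513))   # stand-in for one tokenized row of length 513
input_ids = sequence[:-1]     # first 512 tokens
target_ids = sequence[1:]     # next-token targets, also 512 tokens
assert len(input_ids) == len(target_ids) == 512
```
|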
true | |
false |
## GitHub R repositories dataset
R source files from GitHub.
This dataset has been created using the public GitHub datasets from Google BigQuery.
This is the actual query that has been used to export the data:
```
EXPORT DATA
  OPTIONS (
    uri = 'gs://your-bucket/gh-r/*.parquet',
    format = 'PARQUET') AS
(
  SELECT
    f.id, f.repo_name, f.path,
    c.content, c.size
  FROM (
    SELECT DISTINCT
      id, repo_name, path
    FROM `bigquery-public-data.github_repos.files`
    WHERE ends_with(path, ".R")
  ) AS f
  LEFT JOIN `bigquery-public-data.github_repos.contents` AS c ON f.id = c.id
);

EXPORT DATA
  OPTIONS (
    uri = 'gs://your-bucket/licenses.parquet',
    format = 'PARQUET') AS
(SELECT * FROM `bigquery-public-data.github_repos.licenses`)
```
The files were then exported and processed locally using the files in the root of this repository.
The datasets in this repository contain data from repositories with different licenses.
The data schema is:
```
id: string
repo_name: string
path: string
content: string
size: int32
license: string
```
Last updated: Jun 6th 2023
|
false |
# Dataset Card for the Qur'anic Reading Comprehension Dataset (QRCD)
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://sites.google.com/view/quran-qa-2022/home
- **Repository:** https://gitlab.com/bigirqu/quranqa/-/tree/main/
- **Paper:** https://dl.acm.org/doi/10.1145/3400396
- **Leaderboard:**
- **Point of Contact:** @piraka9011
### Dataset Summary
The QRCD (Qur'anic Reading Comprehension Dataset) is composed of 1,093 tuples of question-passage pairs that are
coupled with their extracted answers to constitute 1,337 question-passage-answer triplets.
### Supported Tasks and Leaderboards
This task is evaluated as a ranking task.
To give credit to a QA system that may retrieve an answer (not necessarily at the first rank) that does not fully
match one of the gold answers but partially matches it, we use partial Reciprocal Rank (pRR) measure.
It is a variant of the traditional Reciprocal Rank evaluation metric that considers partial matching.
pRR is the official evaluation measure of this shared task.
We will also report Exact Match (EM) and F1@1, which are evaluation metrics applied only on the top predicted answer.
The EM metric is a binary measure that rewards a system only if the top predicted answer exactly matches one of the
gold answers.
The F1@1 metric, in contrast, measures the token overlap between the top predicted answer and the best-matching gold answer.
To get an overall evaluation score, each of the above measures is averaged over all questions.
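As a rough illustration only (the shared task's official scorer defines the exact matching functions), the three measures could be sketched as follows for a single question, given a ranked list of predicted answer strings and the gold answers:
```python
# Rough sketch of EM, F1@1 and a simplified pRR for one question.
# Illustrative only; the official scorer defines the exact measures.
from collections import Counter

def f1(prediction: str, gold: str) -> float:
    """Token-overlap F1 between two strings."""
    pred_tokens, gold_tokens = prediction.split(), gold.split()
    common = Counter(pred_tokens) & Counter(gold_tokens)
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)

def exact_match(prediction: str, golds: list[str]) -> int:
    return int(any(prediction == g for g in golds))

def f1_at_1(prediction: str, golds: list[str]) -> float:
    return max(f1(prediction, g) for g in golds)

def prr(ranked_predictions: list[str], golds: list[str]) -> float:
    """Simplified partial Reciprocal Rank: partial-match score of the first
    (partially) matching answer, discounted by its rank."""
    for rank, pred in enumerate(ranked_predictions, start=1):
        score = max(f1(pred, g) for g in golds)
        if score > 0:
            return score / rank
    return 0.0
```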
### Languages
Qur'anic Arabic
## Dataset Structure
### Data Instances
To simplify the structure of the dataset, each tuple contains one passage, one question and a list that may contain
one or more answers to that question, as shown below:
```json
{
"pq_id": "38:41-44_105",
"passage": "واذكر عبدنا أيوب إذ نادى ربه أني مسني الشيطان بنصب وعذاب. اركض برجلك هذا مغتسل بارد وشراب. ووهبنا له أهله ومثلهم معهم رحمة منا وذكرى لأولي الألباب. وخذ بيدك ضغثا فاضرب به ولا تحنث إنا وجدناه صابرا نعم العبد إنه أواب.",
"surah": 38,
"verses": "41-44",
"question": "من هو النبي المعروف بالصبر؟",
"answers": [
{
"text": "أيوب",
"start_char": 12
}
]
}
```
Each Qur’anic passage in QRCD may have more than one occurrence; and each passage occurrence is paired with a different
question.
Likewise, each question in QRCD may have more than one occurrence; and each question occurrence is paired with a
different Qur’anic passage.
The source of the Qur'anic text in QRCD is the Tanzil project download page, which provides verified versions of the
Holy Qur'an in several scripting styles.
We have chosen the simple-clean text style of Tanzil version 1.0.2.
### Data Fields
* `pq_id`: Sample ID
* `passage`: Context text
* `surah`: Surah number
* `verses`: Verse range
* `question`: Question text
* `answers`: List of answers and their start character
### Data Splits
| **Dataset** | **%** | **# Question-Passage Pairs** | **# Question-Passage-Answer Triplets** |
|-------------|:-----:|:-----------------------------:|:---------------------------------------:|
| Training | 65% | 710 | 861 |
| Development | 10% | 109 | 128 |
| Test | 25% | 274 | 348 |
| All | 100% | 1,093 | 1,337 |
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
The QRCD v1.1 dataset is distributed under the CC-BY-ND 4.0 License https://creativecommons.org/licenses/by-nd/4.0/legalcode
For a human-readable summary of (and not a substitute for) the above CC-BY-ND 4.0 License, please refer to https://creativecommons.org/licenses/by-nd/4.0/
### Citation Information
```
@article{malhas2020ayatec,
author = {Malhas, Rana and Elsayed, Tamer},
title = {AyaTEC: Building a Reusable Verse-Based Test Collection for Arabic Question Answering on the Holy Qur’an},
year = {2020},
issue_date = {November 2020},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
volume = {19},
number = {6},
issn = {2375-4699},
url = {https://doi.org/10.1145/3400396},
doi = {10.1145/3400396},
journal = {ACM Trans. Asian Low-Resour. Lang. Inf. Process.},
month = {oct},
articleno = {78},
numpages = {21},
keywords = {evaluation, Classical Arabic}
}
```
### Contributions
Thanks to [@piraka9011](https://github.com/piraka9011) for adding this dataset.
|
false |
Dataset Name: Eng-Sinhala Translation Dataset
Description: This dataset contains approximately 80,000 lines of English-Sinhala translation pairs. It can be used to train models for machine translation tasks and other natural language processing applications.
Files:
1. src.txt: This file contains the source sentences in English. Each line corresponds to an English sentence.
2. tgt.txt: This file contains the target sentences in Sinhala. Each line corresponds to the Sinhala translation of the corresponding English sentence in src.txt.
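A minimal sketch of pairing the two line-aligned files into (English, Sinhala) tuples (UTF-8 encoding is assumed):
```python
# Minimal sketch: zip the line-aligned source and target files into sentence pairs.
with open("src.txt", encoding="utf-8") as src, open("tgt.txt", encoding="utf-8") as tgt:
    pairs = [(en.strip(), si.strip()) for en, si in zip(src, tgt)]
print(len(pairs), pairs[0])
```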
Data License: GPL (GNU General Public License). Please ensure that you comply with the terms and conditions of the GPL when using the dataset.
Note: Because of the dataset's size, some sentence pairs may be incorrect. Consider performing data cleaning and validation to improve the reliability of models trained on it.
The dataset files (src.txt and tgt.txt) are shared along with this dataset card to document the dataset's contents and usage. |
false |
# Dataset Card for OpenFire
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://pyronear.org/pyro-vision/datasets.html#openfire
- **Repository:** https://github.com/pyronear/pyro-vision
- **Point of Contact:** Pyronear <https://pyronear.org/en/>
### Dataset Summary
OpenFire is an image classification dataset for wildfire detection, collected
from web searches.
### Supported Tasks and Leaderboards
- `image-classification`: The dataset can be used to train a model for Image Classification.
### Languages
English
## Dataset Structure
### Data Instances
A data point comprises an image URL and its binary label.
```
{
'image_url': 'https://cdn-s-www.ledauphine.com/images/13C08274-6BA6-4577-B3A0-1E6C1B2A573C/FB1200/photo-1338240831.jpg',
'is_wildfire': true,
}
```
### Data Fields
- `image_url`: the download URL of the image.
- `is_wildfire`: a boolean value specifying whether there is an ongoing wildfire on the image.
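Since the dataset stores image URLs rather than image bytes, below is a minimal sketch of materializing images locally from `(image_url, is_wildfire)` records; the helper and file naming are illustrative only, and unreachable URLs are simply skipped:
```python
# Minimal sketch: download images from (image_url, is_wildfire) records,
# skipping URLs that are no longer reachable. Helper and file naming are illustrative.
import urllib.request
from pathlib import Path

def download(records, out_dir="openfire_images"):
    out = Path(out_dir)
    out.mkdir(exist_ok=True)
    for i, rec in enumerate(records):
        target = out / f"{i}_{int(rec['is_wildfire'])}.jpg"
        try:
            urllib.request.urlretrieve(rec["image_url"], target)
        except OSError:
            continue  # dead link: skip it

# Example usage with the data point shown above:
# download([{"image_url": "https://...", "is_wildfire": True}])
```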
### Data Splits
The data is split into training and validation sets. The training set contains 7143 images and the validation set 792 images.
## Dataset Creation
### Curation Rationale
The curators state that current wildfire classification datasets typically contain close-up shots of wildfires, with limited variation in weather conditions, luminosity and backgrounds,
making it difficult to assess real-world performance. They argue that these dataset limitations have partially contributed to the failure of some algorithms in coping
with sun flares, foggy or cloudy weather conditions, and small scales.
### Source Data
#### Initial Data Collection and Normalization
OpenFire was collected using images publicly indexed by the search engine DuckDuckGo using multiple relevant queries. The images were then manually cleaned to remove errors.
### Annotations
#### Annotation process
Each web search query was designed to yield a single label (with wildfire or without), and additional human verification was used to remove errors.
#### Who are the annotators?
François-Guillaume Fernandez
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
François-Guillaume Fernandez
### Licensing Information
[Apache License 2.0](https://www.apache.org/licenses/LICENSE-2.0).
### Citation Information
```
@software{Pyronear_PyroVision_2019,
title={Pyrovision: wildfire early detection},
author={Pyronear contributors},
year={2019},
month={October},
publisher = {GitHub},
howpublished = {\url{https://github.com/pyronear/pyro-vision}}
}
```
|
true | |
false | # Dataset Card for "TALI-small"
## Table of Contents
1. Dataset Description
1. Abstract
2. Brief Description
2. Dataset Information
1. Modalities
2. Dataset Variants
3. Dataset Statistics
4. Data Fields
5. Data Splits
3. Dataset Creation
4. Dataset Use
5. Additional Information
## Dataset Description
### Abstract
TALI is a large-scale, tetramodal dataset designed to facilitate a shift from unimodal and duomodal to tetramodal research in deep learning. It aligns text, video, images, and audio, providing a rich resource for innovative self-supervised learning tasks and multimodal research. TALI enables exploration of how different modalities and data/model scaling affect downstream performance, with the aim of inspiring diverse research ideas and enhancing understanding of model capabilities and robustness in deep learning.
### Brief Description
TALI (Temporally and semantically Aligned Audio, Language and Images) is a dataset that uses the Wikipedia Image Text (WIT) captions and article titles to search Youtube for videos that match the captions. It then downloads the video, audio, and subtitles from these videos. The result is a rich multimodal dataset that has multiple caption types related to both the WiT Images, and the Youtube videos. This enables learning to take place between either temporally or semantically aligned text, images, audio and video.
## Dataset Information
### Modalities
The TALI dataset consists of the following modalities:
1. Image:
1. Wikipedia caption image
2. Randomly sampled image from youtube video
2. Text
1. Wikipedia Caption Text
2. Wikipedia Title Text
3. Wikipedia Main Body Text
4. YouTube Subtitle Text
5. YouTube Description Text
6. YouTube Title Text
3. Audio
1. YouTube Content Audio
4. Video
1. YouTube Content Video
### Dataset Variants
The TALI dataset comes in three variants that differ in the training set size:
- TALI-small: Contains about 1.3 million 30-second video clips, aligned with 120K WiT entries.
- TALI-base: Contains about 6.5 million 30-second video clips, aligned with 120K WiT entries.
- TALI-big: Contains about 13 million 30-second video clips, aligned with 120K WiT entries.
The validation and test sets remain consistent across all three variants at about 80K videos aligned to 8K Wikipedia entries (10 subclips for each entry) each.
### Dataset Statistics
TBA
## Dataset Creation
The TALI dataset was created by starting from the WiT dataset and using either the context_page_description or page_title as a source query to search YouTube for videos that were Creative Commons opted-in and not age-restricted. The top 100 result titles were returned and compared with the source query using the text embeddings of the largest CLIP model available. The top-1 title's video based on the CLIP ranking was chosen and downloaded. The video was broken into 30-second segments, and the top-10 segments for each video were chosen based on the distance between the CLIP image embedding of the first image of each segment and the video's title text. The image, audio, and subtitle frames were extracted from these segments. At sampling time, one of these 10 segments is randomly selected, and a 10-second segment is chosen out of the 30-second clip. The result is 200 video frames (spread throughout the 10-second segment) and 160,000 audio frames (10 seconds).
## Dataset Use
TALI is designed for use in a wide range of multimodal research tasks, including but not limited to:
- Multimodal understanding and reasoning
- Self-supervised learning
- Multimodal alignment and translation
- Multimodal summarization
- Multimodal question answering
## Additional Information
- **Dataset Curators:** Antreas Antoniou
- **Citation Information:** TBA
- **Contributions:** Thanks to all contributors, including data curators, annotators, and software developers. |
false | # EVJVQA - Multilingual Visual Question Answering
## Abstract
Visual Question Answering (VQA) is a challenging task of natural language processing (NLP) and computer vision (CV), attracting significant attention from researchers. English is a resource-rich language that has witnessed various developments in datasets and models for visual question answering. Visual question answering in other languages should also be developed in terms of resources and models. In addition, there is no multilingual dataset targeting the visual content of a particular country with its own objects and cultural characteristics. To address this weakness, we provide the research community with a benchmark dataset named EVJVQA, including 33,000+ question-answer pairs over three languages: Vietnamese, English, and Japanese, on approximately 5,000 images taken in Vietnam, for evaluating multilingual VQA systems or models. EVJVQA is used as a benchmark dataset for the challenge of multilingual visual question answering at the 9th Workshop on Vietnamese Language and Speech Processing (VLSP 2022). This task attracted 62 participant teams from various universities and organizations. In this article, we present details of the organization of the challenge, an overview of the methods employed by shared-task participants, and the results. The highest performances are 0.4392 in F1-score and 0.4009 in BLEU on the private test set. The multilingual QA systems proposed by the top 2 teams use ViT for the pre-trained vision model and mT5, a powerful pre-trained language model based on the transformer architecture, for the language model. EVJVQA is a challenging dataset that motivates NLP and CV researchers to further explore multilingual models or systems for visual question answering. We released the challenge on the Codalab evaluation system for further research.
## Links
- https://arxiv.org/abs/2302.11752
- https://codalab.lisn.upsaclay.fr/competitions/12274 |
false |
# Dataset Card for Dataset Name
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] |
false |
﷽
# Dataset Card for Tarteel AI's EveryAyah Dataset
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [Tarteel AI](https://www.tarteel.ai/)
- **Repository:** [Needs More Information]
- **Point of Contact:** [Mohamed Saad Ibn Seddik](mailto:ms.ibnseddik@tarteel.ai)
### Dataset Summary
This dataset is a collection of Quranic verses recited by different reciters, together with their diacritized transcriptions.
### Supported Tasks and Leaderboards
[Needs More Information]
### Languages
The audio is in Arabic.
## Dataset Structure
### Data Instances
A typical data point comprises the audio file `audio`, and its transcription called `text`.
The `duration` is in seconds, and the author is `reciter`.
An example from the dataset is:
```
{
'audio': {
'path': None,
'array': array([ 0. , 0. , 0. , ..., -0.00057983,
-0.00085449, -0.00061035]),
'sampling_rate': 16000
},
'duration': 6.478375,
'text': 'بِسْمِ اللَّهِ الرَّحْمَنِ الرَّحِيمِ',
'reciter': 'abdulsamad'
}
```
### Length
| Split | Seconds | Minutes | Hours |
|---|---:|---:|---:|
| Training | 2,985,111.26 | 49,751.85 | 829.20 |
| Validation | 372,720.43 | 6,212.01 | 103.53 |
| Test | 375,509.97 | 6,258.50 | 104.31 |
### Data Fields
- audio: A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column: `dataset[0]["audio"]` the audio file is automatically decoded and resampled to `dataset.features["audio"].sampling_rate`. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the `"audio"` column, *i.e.* `dataset[0]["audio"]` should **always** be preferred over `dataset["audio"][0]`.
- text: The transcription of the audio file.
- duration: The duration of the audio file.
- reciter: The reciter of the verses.
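A minimal sketch of the access pattern described above (the repository id passed to `load_dataset` is an assumption):
```python
from datasets import load_dataset

# The repository id below is an assumption; replace it with this dataset's actual id.
dataset = load_dataset("tarteel-ai/everyayah", split="validation")

sample = dataset[0]        # index the row first ...
audio = sample["audio"]    # ... then touch "audio": only this one file is decoded and resampled
print(audio["sampling_rate"], sample["reciter"], sample["duration"])
```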
### Data Splits
| | Train | Test | Validation |
| ----- | ----- | ---- | ---------- |
| dataset | 187785 | 23473 | 23474 |
### reciters
- reciters_count: 36
- reciters: {'abdul_basit',
'abdullah_basfar',
'abdullah_matroud',
'abdulsamad',
'abdurrahmaan_as-sudais',
'abu_bakr_ash-shaatree',
'ahmed_ibn_ali_al_ajamy',
'ahmed_neana',
'akram_alalaqimy',
'alafasy',
'ali_hajjaj_alsuesy',
'aziz_alili',
'fares_abbad',
'ghamadi',
'hani_rifai',
'husary',
'karim_mansoori',
'khaalid_abdullaah_al-qahtaanee',
'khalefa_al_tunaiji',
'maher_al_muaiqly',
'mahmoud_ali_al_banna',
'menshawi',
'minshawi',
'mohammad_al_tablaway',
'muhammad_abdulkareem',
'muhammad_ayyoub',
'muhammad_jibreel',
'muhsin_al_qasim',
'mustafa_ismail',
'nasser_alqatami',
'parhizgar',
'sahl_yassin',
'salaah_abdulrahman_bukhatir',
'saood_ash-shuraym',
'yaser_salamah',
'yasser_ad-dussary'}
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
### Licensing Information
[CC BY 4.0](https://creativecommons.org/licenses/by/4.0/)
### Citation Information
```
```
### Contributions
This dataset was created by:
|
false | |
true | |
false | # Vision-CAIR cc_sbu_align in multilang
These are Google-translated versions of [Vision-CAIR/cc_sbu_align](https://huggingface.co/datasets/Vision-CAIR/cc_sbu_align). Please visit [2. Second finetuning stage](https://huggingface.co/datasets/Vision-CAIR/cc_sbu_align#training) to understand how the English one was created.
Here, each language folder contains its own `filter_cap.json`.
Current languages:
- en
- vi
There will be more if I have time. |
false | # Dataset Card for OKD-CL
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] |
true |
### Labels
|label|meaning|
|:---|:-----------|
|achievement_P | in favor of achievement |
|achievement_N | against achievement |
|power_dominance_P | in favor of power: dominance |
|power_dominance_N | against power: dominance |
|power_resources_P | in favor of power: resources |
|power_resources_N | against power: resources | |
true | # Dataset Card for Dataset Name
## Name
Motivación Diaria
## Dataset Description
- **Author:** Rubén Darío Jaramillo
- **Email:** rubend18@hotmail.com
- **WhatsApp:** +593 93 979 6676
### Dataset Summary
Scraped from http://www.motivaciondiaria.com/
### Languages
[Spanish] |
false |
**F**unds **R**eport **F**ront **P**age **E**ntities (FRFPE) is a dataset for document understanding and token classification.
It contains 356 titles/front pages of annual and semi-annual reports as well as extracted text and annotations for five different token categories.
FRFPE serves as an example of how to train and evaluate multimodal models such as LayoutLM using the deepdoctection framework on a custom dataset.
FRFPE contains documents in three different languages
- English: 167
- German: 149
- French: 9
as well as the token categories:
- report_date (1096 samples) - reporting date of the report
- report_type (738 samples) - annual/semi-annual report
- umbrella (912 samples) - fund issued as umbrella
- fund_name (2122 samples) - Subfund, as part of an umbrella fund or standalone fund
- other (12903 samples) - None of the above categories
The annotations have been made to the best of our knowledge and belief, but there is no claim on correctness.
Some cursory notes:
- The images were created by converting PDF files. A resolution of 300 dpi was applied during the conversion.
- The text was extracted from the PDF file using PDFPlumber. In some cases the PDF contains embedded images, which in turn contain text, such as corporate names. These are not extracted and are therefore not taken into account.
- The annotation was carried out with the annotation tool Prodigy.
- The category `report_date` is self-explanatory. `report_type` was used to indicate whether the report is an annual report, a semi-annual report, or a report in a different cycle.
- `umbrella`/`fund_name` is the classification of any token that is part of a fund name that represents either an umbrella, subfund or individual fund.
The distinction between an umbrella fund and a single fund is not always apparent from the context of the document, which makes the classification
particularly challenging. To keep the annotation correct, information from the BaFin database was used for cases that could not be clarified from the context.
To explore the dataset, we suggest using **deep**doctection. Place the unzipped folder in deepdoctection's `~/.cache/datasets` folder.
```python
import deepdoctection as dd
from pathlib import Path
@dd.object_types_registry.register("ner_first_page")
class FundsFirstPage(dd.ObjectTypes):
    report_date = "report_date"
    umbrella = "umbrella"
    report_type = "report_type"
    fund_name = "fund_name"
dd.update_all_types_dict()
path = Path("~/.cache/datasets/fund_ar_front_page/40952248ba13ae8bfdd39f56af22f7d9_0.json")
page = dd.Page.from_file(path)
page.image = dd.load_image_from_file(path.parents[0] / "image" / page.file_name.replace("pdf","png"))
page.viz(interactive=True,show_words=True) # close interactive window with q
for word in page.words:
    print(f"text: {word.characters}, token class: {word.token_class}")
``` |
false |
# Sol: Simian Operational Lexicon
The dataset
|
true | # Dataset Card for Dataset Name
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] |
false |
Dataset redistributed without change with permission from the author. If you use this dataset in your research, please cite the following paper: https://doi.org/10.3390/rs6064907 |
false | |
false |
Instructions created from the Amazon ESCI dataset in the Alpaca style; includes 20K instruction pairs. Used for *query generation*. It follows this schema:
```json
[
...,
{
"instruction": "Generate a search query from the give product description.",
"input": "FLYDAY Flying Disc with LED Lights ...",
"output": "175 gram led frisbee gram not dollar",
}
]
```
where:
- *input*: the product description (from the Amazon ESCI dataset, E&S section);
- *output*: the user query.
This dataset can be used to instruction-tune an LLM for query (keyword) generation, for example by formatting each pair into a prompt as sketched below.
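A minimal sketch of turning one record into an Alpaca-style training prompt (the template wording and the file name are assumptions, not part of the dataset):
```python
# Minimal sketch: format one instruction record into an Alpaca-style prompt.
# The template wording and the JSON file name are assumptions.
import json

ALPACA_TEMPLATE = (
    "Below is an instruction that describes a task, paired with an input.\n\n"
    "### Instruction:\n{instruction}\n\n"
    "### Input:\n{input}\n\n"
    "### Response:\n{output}"
)

with open("esci_query_generation.json", encoding="utf-8") as f:
    records = json.load(f)

print(ALPACA_TEMPLATE.format(**records[0]))
```
|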
true |
# IPCC Confidence in Climate Statements
_What do LLMs know about climate? Let's find out!_
## ICCS Dataset
We introduce the **ICCS dataset (IPCC Confidence in Climate Statements)**, a novel, curated, expert-labeled natural language dataset of 8094 statements extracted or paraphrased from the IPCC Assessment Report 6: the [Working Group I report](https://www.ipcc.ch/report/ar6/wg1/), the [Working Group II report](https://www.ipcc.ch/report/ar6/wg2/), and the [Working Group III report](https://www.ipcc.ch/report/ar6/wg3/).
Each statement is labeled with its IPCC report source, the page number in the report PDF, and its associated confidence level (`low`, `medium`, `high`, or `very high`) as assessed by IPCC climate scientists based on available evidence and agreement among their peers.
## Confidence Labels
The authors of the United Nations International Panel on Climate Change (IPCC) reports have developed a structured framework to communicate the confidence and uncertainty levels of statements regarding our knowledge of climate change ([Mastrandrea, 2010](https://link.springer.com/article/10.1007/s10584-011-0178-6)).
Our dataset leverages this distinctive and consistent approach to labelling uncertainty across topics, disciplines, and report chapters, to help NLP and climate communication researchers evaluate how well LLMs can assess human expert confidence in a set of climate science statements from the IPCC reports.

Source: [IPCC AR6 Working Group I report](https://www.ipcc.ch/report/ar6/wg1/)
## Dataset Construction
To construct the dataset, we retrieved the complete raw text from each of the three IPCC report PDFs that are available online using the open-source library [pypdf2](https://pypi.org/project/PyPDF2/). We then normalized the whitespace, tokenized the text into sentences using [NLTK](https://www.nltk.org/), and used a regex search to filter for complete sentences that include a parenthetical confidence label at the end of the statement, of the form _sentence (low|medium|high|very high confidence)_. The final ICCS dataset contains 8094 labeled sentences.
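As an illustration, a filter in the spirit of the one described above could look like this (the exact expression used by the authors may differ):
```python
# Minimal sketch: keep sentences that end with a parenthetical IPCC confidence
# label and split off that label. The exact regex used by the authors may differ.
import re

LABEL_RE = re.compile(r"\((low|medium|high|very high) confidence\)\.?\s*$")

def extract(sentences):
    for sentence in sentences:
        match = LABEL_RE.search(sentence)
        if match:
            yield LABEL_RE.sub("", sentence).strip(), match.group(1)

example = ["Global surface temperature has increased (high confidence)."]
print(list(extract(example)))
# [('Global surface temperature has increased', 'high')]
```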
From the full 8094 labeled sentences, we further selected **300 statements to form a smaller and more tractable test dataset**. We performed a random selection of sentences within each report and confidence category, with the following objectives:
- Making the test set distribution representative of the confidence class distribution in the overall train set and within each report;
- Making the breakdown between source reports representative of the number of statements from each report;
- Making sure the test set contains at least 5 sentences from each class and from each source, to ensure our results are statistically robust.
Then, we manually reviewed and cleaned each sentence in the test set to provide for a fairer assessment of model capacity.
- We removed 26 extraneous references to figures, call-outs, boxes, footnotes, or subscript typos (`CO 2`);
- We split 19 compound statements with conflicting confidence sub-labels, and removed 6 extraneous mid-sentence labels of the same category as the end-of-sentence label;
- We added light context to 23 sentences, and replaced 5 sentences by others when they were meaningless outside of a longer paragraph;
- We removed qualifiers at the beginning of 29 sentences to avoid biasing classification (e.g. 'But...', 'In summary...', 'However...').
**The remaining 7794 sentences not allocated to the test split form our train split.**
Of note: while the IPCC reports use a 5-level scale for confidence, almost no `very low confidence` statement makes it through the peer review process to the final reports, such that no statement of the form _sentence (very low confidence)_ was retrievable. Therefore, we chose to build our dataset with only statements labeled as `low`, `medium`, `high` and `very high` confidence.
## Code Download
The code to reproduce dataset collection and our LLM benchmarking experiments is [released on GitHub](https://github.com/rlacombe/Climate-LLMs).
## Paper
We use this dataset to evaluate how recent LLMs fare at classifying the scientific confidence associated with each statement in a statistically representative, carefully constructed test split of the dataset.
We show that `gpt3.5-turbo` and `gpt4` assess the correct confidence level with reasonable accuracy even in the zero-shot setting; but that, along with other language models we tested, they consistently overstate the certainty level associated with low and medium confidence labels. Models generally perform better on reports before their knowledge cutoff, and demonstrate intuitive classifications on a baseline of non-climate statements. However, we caution it is still not fully clear why these models perform well, and whether they may also pick up on linguistic cues within the climate statements and not just prior exposure to climate knowledge and/or IPCC reports.
Our results have implications for climate communications and the use of generative language models in knowledge retrieval systems. We hope the ICCS dataset provides the NLP and climate sciences communities with a valuable tool with which to evaluate and improve model performance in this critical domain of human knowledge.
Pre-print upcoming. |
true |
# Dataset Card for "super_glue"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://github.com/google-research-datasets/boolean-questions](https://github.com/google-research-datasets/boolean-questions)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 58.36 MB
- **Size of the generated dataset:** 249.57 MB
- **Total amount of disk used:** 307.94 MB
### Dataset Summary
SuperGLUE (https://super.gluebenchmark.com/) is a new benchmark styled after
GLUE with a new set of more difficult language understanding tasks, improved
resources, and a new public leaderboard.
BoolQ (Boolean Questions, Clark et al., 2019a) is a QA task where each example consists of a short
passage and a yes/no question about the passage. The questions are provided anonymously and
unsolicited by users of the Google search engine, and afterwards paired with a paragraph from a
Wikipedia article containing the answer. Following the original work, we evaluate with accuracy.
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Structure
### Data Instances
#### axb
- **Size of downloaded dataset files:** 0.03 MB
- **Size of the generated dataset:** 0.24 MB
- **Total amount of disk used:** 0.27 MB
An example of 'test' looks as follows.
```
```
#### axg
- **Size of downloaded dataset files:** 0.01 MB
- **Size of the generated dataset:** 0.05 MB
- **Total amount of disk used:** 0.06 MB
An example of 'test' looks as follows.
```
```
#### boolq
- **Size of downloaded dataset files:** 4.12 MB
- **Size of the generated dataset:** 10.40 MB
- **Total amount of disk used:** 14.52 MB
An example of 'train' looks as follows.
```
```
#### cb
- **Size of downloaded dataset files:** 0.07 MB
- **Size of the generated dataset:** 0.20 MB
- **Total amount of disk used:** 0.28 MB
An example of 'train' looks as follows.
```
```
#### copa
- **Size of downloaded dataset files:** 0.04 MB
- **Size of the generated dataset:** 0.13 MB
- **Total amount of disk used:** 0.17 MB
An example of 'train' looks as follows.
```
```
### Data Fields
The data fields are the same among all splits.
#### axb
- `sentence1`: a `string` feature.
- `sentence2`: a `string` feature.
- `idx`: a `int32` feature.
- `label`: a classification label, with possible values including `entailment` (0), `not_entailment` (1).
#### axg
- `premise`: a `string` feature.
- `hypothesis`: a `string` feature.
- `idx`: a `int32` feature.
- `label`: a classification label, with possible values including `entailment` (0), `not_entailment` (1).
#### boolq
- `question`: a `string` feature.
- `passage`: a `string` feature.
- `idx`: a `int32` feature.
- `label`: a classification label, with possible values including `False` (0), `True` (1).
#### cb
- `premise`: a `string` feature.
- `hypothesis`: a `string` feature.
- `idx`: a `int32` feature.
- `label`: a classification label, with possible values including `entailment` (0), `contradiction` (1), `neutral` (2).
#### copa
- `premise`: a `string` feature.
- `choice1`: a `string` feature.
- `choice2`: a `string` feature.
- `question`: a `string` feature.
- `idx`: a `int32` feature.
- `label`: a classification label, with possible values including `choice1` (0), `choice2` (1).
### Data Splits
#### axb
| |test|
|---|---:|
|axb|1104|
#### axg
| |test|
|---|---:|
|axg| 356|
#### boolq
| |train|validation|test|
|-----|----:|---------:|---:|
|boolq| 9427| 3270|3245|
#### cb
| |train|validation|test|
|---|----:|---------:|---:|
|cb | 250| 56| 250|
#### copa
| |train|validation|test|
|----|----:|---------:|---:|
|copa| 400| 100| 500|
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@inproceedings{clark2019boolq,
title={BoolQ: Exploring the Surprising Difficulty of Natural Yes/No Questions},
 author={Clark, Christopher and Lee, Kenton and Chang, Ming-Wei and Kwiatkowski, Tom and Collins, Michael and Toutanova, Kristina},
booktitle={NAACL},
year={2019}
}
@article{wang2019superglue,
title={SuperGLUE: A Stickier Benchmark for General-Purpose Language Understanding Systems},
author={Wang, Alex and Pruksachatkun, Yada and Nangia, Nikita and Singh, Amanpreet and Michael, Julian and Hill, Felix and Levy, Omer and Bowman, Samuel R},
journal={arXiv preprint arXiv:1905.00537},
year={2019}
}
Note that each SuperGLUE dataset has its own citation. Please see the source to
get the correct citation for each contained dataset.
```
### Contributions
Thanks to [@thomwolf](https://github.com/thomwolf), [@lewtun](https://github.com/lewtun), [@patrickvonplaten](https://github.com/patrickvonplaten) for adding this dataset.
|
false | |
false | # million-faces
Welcome to "million-faces", one of the largest facesets available to the public. Comprising a staggering one million faces, all images in this dataset are entirely AI-generated.
Due to the nature of AI-generated images, please be aware that some artifacts may be present in the dataset.
The dataset is currently being uploaded to Hugging Face, a renowned platform for hosting datasets and models for the machine learning community.
## Usage
Feel free to use this dataset for your projects and research. However, please do not hold me liable for any issues that might arise from its use. If you use this dataset and create something amazing, consider linking back to this GitHub project. Recognition of work is a pillar of the open-source community!
## Dataset Details
- **Number of faces:** 1,000,000
- **Source:** AI-generated
- **Artifacts:** Some images may contain artifacts
- **Availability:** Almost fully uploaded on Hugging Face
## About
This project is about creating and sharing one of the largest AI-generated facesets. With one million faces, it offers a significant resource for researchers and developers in AI, machine learning, and computer vision. |
false | #### Warning: Due to the nature of the source, certain images are very large.
A large number of artistic images, mostly (but by no means exclusively) sourced from Wikimedia Commons. <br>
Pull requests are allowed, and even encouraged. |
true | # AutoTrain Dataset for project: bhaav-sentiment
## Dataset Description
This dataset has been automatically processed by AutoTrain for project bhaav-sentiment.
### Languages
The BCP-47 code for the dataset's language is en.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"text": "\u0914\u0930 \u0926\u094b\u0928\u094b\u0902 \u091f\u0940\u0932\u0947 \u0915\u0947 \u0905\u0932\u0917 \u0905\u0932\u0917 \u0915\u094b\u0928\u0947 \u092e\u0947\u0902 \u091c\u093e \u092a\u0939\u0941\u0902\u091a\u0947",
"target": 3
},
{
"text": "\u0909\u0938\u0915\u0947 \u092e\u0941\u0901\u0939 \u0938\u0947 \u090f\u0915 \u091a\u0940\u0916 \u0928\u093f\u0915\u0932 \u0917\u092f\u0940",
"target": 2
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"text": "Value(dtype='string', id=None)",
"target": "ClassLabel(names=['0', '1', '2', '3', '4'], id=None)"
}
```
### Dataset Splits
This dataset is split into a train and validation split. The split sizes are as follows:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 16241 |
| valid | 4063 |
|
false | # FICLE Dataset
The dataset can be loaded and utilized through the following:
```python
from datasets import load_dataset
ficle_data = load_dataset("tathagataraha/ficle")
```
## Dataset Description
* **GitHub Repo:** https://github.com/blitzprecision/FICLE
* **Paper:**
* **Point of Contact:**
### Dataset Summary
The FICLE dataset is a derivative of the FEVER dataset, which is a collection of 185,445 claims generated by modifying sentences obtained from Wikipedia.
These claims were then verified without knowledge of the original sentences they were derived from. Each sample in the FEVER dataset consists of a claim sentence, a context sentence extracted from a Wikipedia URL as evidence, and a type label indicating whether the claim is supported, refuted, or lacks sufficient information.
### Languages
The FICLE Dataset contains only English.
## Dataset Structure
### Data Fields
* `Claim (string)`: A statement or proposition relating to the consistency or inconsistency of certain facts or information.
* `Context (string)`: The surrounding information or background against which the claim is being evaluated or compared. It provides additional details or evidence that can support or challenge the claim.
* `Source (string)`: It is the linguistic chunk containing the entity lying to the left of the main verb/relating chunk.
* `Source Indices (string)`: Source indices refer to the specific indices or positions within the source string that indicate the location of the relevant information.
* `Relation (string)`: It is the linguistic chunk containing the verb/relation at the core of the identified inconsistency.
* `Relation Indices (string)`: Relation indices indicate the specific indices or positions within the relation string that highlight the location of the relevant information.
* `Target (string)`: It is the linguistic chunk containing the entity lying to the right of the main verb/relating chunk.
* `Target Indices (string)`: Target indices represent the specific indices or positions within the target string that indicate the location of the relevant information.
* `Inconsistent Claim Component (string)`: The inconsistent claim component refers to a specific linguistic chunk within the claim that is identified as inconsistent with the context. It helps identify which part of the claim triple is problematic in terms of its alignment with the surrounding information.
* `Inconsistent Context-Span (string)`: A span or portion marked within the context sentence that is found to be inconsistent with the claim. It highlights a discrepancy or contradiction between the information in the claim and the corresponding context.
* `Inconsistent Context-Span Indices (string)`: The specific indices or location within the context sentence that indicate the inconsistent span.
* `Inconsistency Type (string)`: The category or type of inconsistency identified in the claim and context.
* `Fine-grained Inconsistent Entity-Type (string)`: The specific detailed category or type of entity causing the inconsistency within the claim or context. It provides a more granular classification of the entity associated with the inconsistency.
* `Coarse Inconsistent Entity-Type (string)`: The broader or general category or type of entity causing the inconsistency within the claim or context. It provides a higher-level classification of the entity associated with the inconsistency.
### Data Splits
The FICLE dataset comprises a total of 8,055 samples in the English language, each representing different instances of inconsistencies.
These inconsistencies are categorized into five types: Taxonomic Relations (4,842 samples), Negation (1,630 samples), Set Based (642 samples), Gradable (526 samples), and Simple (415 samples).
Within the dataset, there are six possible components that contribute to the inconsistencies found in the claim sentences.
These components are distributed as follows: Target-Head (3,960 samples), Target-Modifier (1,529 samples), Relation-Head (951 samples), Relation-Modifier (1,534 samples), Source-Head (45 samples), and Source-Modifier (36 samples).
The dataset is split into `train`, `validation`, and `test`; a small filtering sketch follows the split sizes below.
* `train`: 6.44k rows
* `validation`: 806 rows
* `test`: 806 rows
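The following is a minimal sketch for pulling out one inconsistency category with 🤗 Datasets; the column name `Inconsistency Type` and the label string `Negation` are taken from the field and split descriptions above and are assumed to match the stored values.
```python
from datasets import load_dataset

# Load FICLE and keep only the training samples labeled as Negation inconsistencies.
# The feature name "Inconsistency Type" and the value "Negation" are assumptions
# based on this card's field list and may need adjusting to the actual schema.
ficle_data = load_dataset("tathagataraha/ficle")
negation_train = ficle_data["train"].filter(
    lambda example: example["Inconsistency Type"] == "Negation"
)
print(len(negation_train))  # expected to be on the order of the 1,630 Negation samples
```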
## Dataset Creation
### Curation Rationale
We propose a linguistically enriched dataset to help detect inconsistencies and explain them.
To this end, the broad requirements are to locate where the inconsistency is present between a claim and a context and to have a classification scheme for better explainability.
### Data Collection and Preprocessing
The FICLE dataset is derived from the FEVER dataset using the following processing steps. FEVER (Fact Extraction and VERification) consists of 185,445 claims generated by altering sentences extracted from Wikipedia and subsequently verified without knowledge of the sentences they were derived from.
Every sample in the FEVER dataset contains the claim sentence, an evidence (or context) sentence from a Wikipedia URL, and a type label (‘supports’, ‘refutes’, or ‘not enough info’). Of these, we leverage only the samples with the ‘refutes’ label to build our dataset.
### Annotations
You can see the annotation guidelines [here](https://github.com/blitzprecision/FICLE/blob/main/ficle_annotation_guidelines.pdf).
In order to provide detailed explanations for inconsistencies, extensive annotations were conducted for each sample in the FICLE dataset. The annotation process involved two iterations, with each iteration focusing on different aspects of the dataset.
In the first iteration, the annotations were primarily "syntactic-oriented." These fields included identifying the inconsistent claim fact triple, marking inconsistent context spans, and categorizing the six possible inconsistent claim components.
The second iteration of annotations concentrated on "semantic-oriented" aspects. Annotators labeled semantic fields for each sample, such as the type of inconsistency, coarse inconsistent entity types, and fine-grained inconsistent entity types.
This stage aimed to capture the semantic nuances and provide a deeper understanding of the inconsistencies present in the dataset.
The annotation process was carried out by a group of four annotators, two of whom are also authors of the dataset. The annotators possess a strong command of the English language and hold Bachelor's degrees in Computer Science, specializing in computational linguistics.
Their expertise in the field ensured accurate and reliable annotations. The annotators' ages range from 20 to 22 years, indicating their familiarity with contemporary language usage and computational linguistic concepts.
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Citation Information
```
@misc{raha2023neural,
title={Neural models for Factual Inconsistency Classification with Explanations},
author={Tathagata Raha and Mukund Choudhary and Abhinav Menon and Harshit Gupta and KV Aditya Srivatsa and Manish Gupta and Vasudeva Varma},
year={2023},
eprint={2306.08872},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
### Contact
|
false | # VQAv2 in Vietnamese
This is a Google-translated version of [VQAv2](https://visualqa.org/) in Vietnamese. The Vietnamese version was built as follows:
- In the `en/` folder,
- Download `v2_OpenEnded_mscoco_train2014_questions.json` and `v2_mscoco_train2014_annotations.json` from [VQAv2](https://visualqa.org/).
- Remove the `answers` key under `annotations` from `v2_mscoco_train2014_annotations.json`; only the `multiple_choice_answer` key under `annotations` is used. Let's call the new file `v2_OpenEnded_mscoco_train2014_answers.json`.
- Using a [set data structure](https://docs.python.org/3/tutorial/datastructures.html#sets), I generate `question_list.txt` and `answer_list.txt` of unique text. There are 152050 unique questions and 22531 unique answers from 443757 image-question-answer triplets.
- In the `vi/` folder,
- By translating the two `en/` .txt files, I generate `answer_list.jsonl` and `question_list.jsonl`. In each entry of each file, the key is the original English text and the value is the translated Vietnamese text.
To load the Vietnamese version in your code, you need the original English version. Then use the English text as the key to retrieve the Vietnamese value from `answer_list.jsonl` and `question_list.jsonl`. I provide both the English and Vietnamese versions.
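Below is a minimal sketch of that lookup; it assumes each line of the `.jsonl` files is a small JSON object whose key is the original English string and whose value is the Vietnamese translation, as described above, and the example question is purely illustrative.
```python
import json

def load_translation_map(path):
    # Build a {english_text: vietnamese_text} dictionary from a .jsonl file.
    mapping = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if line:
                mapping.update(json.loads(line))
    return mapping

question_map = load_translation_map("vi/question_list.jsonl")
answer_map = load_translation_map("vi/answer_list.jsonl")

english_question = "What color is the cat?"  # hypothetical question from the English annotations
vietnamese_question = question_map.get(english_question, english_question)
print(vietnamese_question)
```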
|
false | |
false |

## General information
The overall ACDC dataset was created from real clinical exams acquired at the University Hospital of Dijon. Acquired data were fully anonymized and handled within the regulations set by the local ethical committee of the Hospital of Dijon (France). Our dataset covers several well-defined pathologies with enough cases to (1) properly train machine learning methods and (2) clearly assess the variations of the main physiological parameters obtained from cine-MRI (in particular diastolic volume and ejection fraction). The dataset is composed of 150 exams (all from different patients) divided into 5 evenly distributed subgroups (4 pathological plus 1 healthy subject groups) as described below. Furthermore, each patient comes with the following additional information: weight, height, as well as the diastolic and systolic phase instants.
## Tasks
The main task of this dataset is the semantic segmentation of the heart in cardiac magnetic resonance images, specifically the endocardium and myocardium. This task is highly relevant for the detection of cardiovascular diseases. Segmentation is a very time-consuming process, so automatically performing it with artificial intelligence algorithms can substantially reduce the time spent on manual segmentation. In this way, a significant bottleneck can be avoided and cardiovascular diseases can be detected in a timely manner.
## Reference
O. Bernard, A. Lalande, C. Zotti, F. Cervenansky, et al.
"Deep Learning Techniques for Automatic MRI Cardiac Multi-structures Segmentation and Diagnosis: Is the Problem Solved ?" in IEEE Transactions on Medical Imaging, vol. 37, no. 11, pp. 2514-2525, Nov. 2018
doi: 10.1109/TMI.2018.2837502 |
false |
# Dataset Card for Dataset Name
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] |
false |
# Helmet Detection Dataset
The dataset consists of photographs of construction workers at work. The dataset provides helmet detection using bounding boxes and addresses public safety tasks such as ensuring compliance with safety regulations, automating the identification of rule violations, and reducing accidents during construction work.
# Get the Dataset
This is just an example of the data. If you need access to the entire dataset, contact us via **[sales@trainingdata.pro](mailto:sales@trainingdata.pro)** or leave a request on **[https://trainingdata.pro/data-market](https://trainingdata.pro/data-market?utm_source=huggingface)**

# Dataset structure
- **img** - contains the original images of construction workers
- **boxes** - includes bounding box labeling for the original images
- **annotations.xml** - contains coordinates of the bounding boxes and labels (helmet, no_helmet), created for the original photo
# Data Format
Each image from `img` folder is accompanied by an XML-annotation in the `annotations.xml` file indicating the coordinates of the bounding boxes and labels for helmet detection. For each point, the x and y coordinates are provided.
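The following is a minimal parsing sketch; the `<image>`/`<box>` element names and the `xtl`/`ytl`/`xbr`/`ybr` attributes are assumptions based on common CVAT-style exports and may need adjusting to the actual `annotations.xml` layout.
```python
import xml.etree.ElementTree as ET

# Walk the annotation file and print one line per labeled box.
tree = ET.parse("annotations.xml")
for image in tree.getroot().iter("image"):
    name = image.get("name")
    for box in image.iter("box"):
        label = box.get("label")  # expected values: helmet / no_helmet
        coords = tuple(float(box.get(k)) for k in ("xtl", "ytl", "xbr", "ybr"))
        print(name, label, coords)
```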
# Example of XML file structure
.png?generation=1686295970420156&alt=media)
# Helmet detection can be performed in accordance with your requirements.
## **[TrainingData](https://trainingdata.pro/data-market?utm_source=huggingface)** provides high-quality data annotation tailored to your needs
More datasets in TrainingData's Kaggle account: **https://www.kaggle.com/trainingdatapro/datasets**
TrainingData's GitHub: **https://github.com/trainingdata-pro**
|
false | # Dataset Card for Dataset Name
## Dataset Description
- **Homepage: m2sodai.jonggyu.me**
- **Repository: temporarily private**
- **Paper: under review**
- **Point of Contact: jgjang0123 [at] gmail [dot] com**
### Dataset Summary
The M<sup>2</sup>SODAI dataset is the first multi-modal, bounding-box-labeled, and synchronized aerial dataset.
Used Sensor:
- Hyperspectral image
- RGB image
## Dataset Structure
```md
data
├── label.txt
├── train
│ ├── 1.jpg
│ ├── 1.mat
│ ├── 1.json
│ └── ...
├── val
│ ├── 0.jpg
│ ├── 0.mat
│ └── 0.json
└── test
├── 17.jpg
├── 17.mat
└── 17.json
```
### Data Instances
For object detection, we annotated the bounding boxes on the floating matters and ships in the RGB and HSI data.
We note that the floating matter contains buoys, rescue tubes, lifeboats, etc.
Since small objects are hard to recognize, we referred to the infrared visualization map of the HSI data for bounding box annotation.
### Data Splits
After the data processing, we obtained 1,257 pairs of synchronized RGB and HSI data, where the total number of instances in the dataset is 11,892.
For experiments, we randomly divided the dataset into 1,007 training data, 125 validation data, and 125 test data.
## Dataset Creation
### Source Data
Our focus is to create a public dataset consisting of synchronized maritime aerial RGB and HSI data.
To this end, we built a data collection system by leveraging a single-engine utility aircraft (Cessna Grand Caravan 208B).
An HSI sensor (AsiaFENIX, Specim, Oulu, Finland) and an RGB sensor (DMC, Z/I Imaging, Aalen, Germany) are mounted on the bottom of the aircraft, facing downward.
The raw data was acquired through 59 flight strips in 12 flight measurement campaigns, which cover a total area of 299.7 km<sup>2</sup>.
During the flight strips, the aircraft maintains a speed of 260 km/h and an altitude of 1 km.
The table below shows the detailed specifications of the sensors used in the data collection.
The HSI sensor (AsiaFENIX) scans the wavelength range from 400 nm to 1000 nm in steps of 4.5 nm, a total of 127 spectrum bands.
The wavelength range includes visible spectrum (VIS) and near-infrared (NIR) spectrum, generally used for remote sensing and machine vision tasks.
The RGB sensor (DMC) captures high-resolution RGB data in three channels: Red (590-675 nm), Green (500-650 nm), and Blue (400-580 nm).
We note that RGB and HSI data are collected simultaneously, in which the spatial resolutions of RGB and HSI sensors are approximately 0.1 m and 0.7 m, respectively.
| | HSI sensor | RGB sensor |
|---------------|------------------------------------------------|----------------------------------------------------|
| Name | AsiaFENIX (@Specim) | DMC (@Z/I Imaging) |
| Spectrum | 400-1000 nm, 127 channels (in steps of 4.5 nm) | Blue: 400-580 nm Green: 500-650 nm Red: 590-675 nm |
| Altitude | 1km | 1km |
| Field of View | 40 degree | 74 degree |
| Resolution | 0.7 m | 0.1 m |
<img src="https://s3.amazonaws.com/moonup/production/uploads/6487107c86b4bc5a09f9d62e/ihWLuijyA1j38RbSoEC7_.png" width="50%">
Illustration of the collected raw data. We collected the data at twelve spots. The first row shows the collected raw RGB data. The second and third rows show the overall HSI data and the collected raw HSI data in each flight strip. In this figure, since the sensors have different field-of-view (FoV) specifications, the raw RGB data and HSI data have different coordinates.
### Data Processing and Annotations
Since the size of the raw data is too large for object detection (HSI: 3,220<sup>2</sup> pixels, and RGB: 22,520<sup>2</sup> pixels on average), we cropped the raw data to a fixed size. We note that the RGB and HSI data are cropped to sizes of 1600 x 1600 x 3 and 224 x 224 x 127, respectively.
However, the problem is that the coordinates of the collected RGB and HSI pairs are not matched. Hence, we employ an image registration method to correct pixel offsets between RGB and HSI pairs. The figure below depicts our data processing procedure.
<img src="https://s3.amazonaws.com/moonup/production/uploads/6487107c86b4bc5a09f9d62e/f4XBxM5Q1Qij7T4UkCf34.png" width="50%">
1. We transform the raw RGB and HSI data into grayscale images.
2. We apply contrast-limited adaptive histogram equalization (CLAHE)-based contrast enhancer to the grayscale RGB data and grayscale HSI data.
3. To estimate the homography matrix between the enhanced RGB data and enhanced HSI data, we carry out the oriented FAST and rotated BRIEF (ORB) feature descriptor to both data, thereby extracting features of the data.
4. We use a Brute-force matcher to find the matched feature among the ORB features; then, the homography matrix is computed from least square optimization for synchronizing the matched features.
5. We crop the registered data to the same size and generate corresponding bounding box annotation data (a rough code sketch of steps 2-4 is given after this list).
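The sketch below illustrates steps 2-4 with OpenCV; it is not the authors' implementation, the parameter values are illustrative, and `gray_rgb`/`gray_hsi` stand for the grayscale images produced in step 1.
```python
import cv2
import numpy as np

def estimate_homography(gray_rgb, gray_hsi):
    # Step 2: CLAHE-based contrast enhancement of both grayscale images.
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    enh_rgb, enh_hsi = clahe.apply(gray_rgb), clahe.apply(gray_hsi)

    # Step 3: ORB features on both enhanced images.
    orb = cv2.ORB_create(nfeatures=5000)
    kp_rgb, des_rgb = orb.detectAndCompute(enh_rgb, None)
    kp_hsi, des_hsi = orb.detectAndCompute(enh_hsi, None)

    # Step 4: brute-force matching, then a least-squares homography fit.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_rgb, des_hsi), key=lambda m: m.distance)
    src = np.float32([kp_rgb[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp_hsi[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, 0)  # method=0: plain least-squares
    return H
```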
### Personal and Sensitive Information
There is no personal/sensitive information in our dataset
### Citation Information
[N/A]
|
false | |
false |
# ORCHESTRA-simple-1M
GitHub: [nk2028/ORCHESTRA-dataset](https://github.com/nk2028/ORCHESTRA-dataset)
**中文簡介**
ORCHESTRA (c**O**mp**R**ehensive **C**lassical c**H**in**ES**e poe**TR**y d**A**taset) 是一個全面的古典中文詩歌的數據集,數據來自[搜韻網](https://sou-yun.cn/)。本數據集由 [nk2028](https://nk2028.shn.hk/) 進行格式轉換並發佈,希望透過公開高品質的古典中文詩歌數據,促進對古典中文詩歌及古典中文自然語言處理的研究。
ORCHESTRA-simple 是 ORCHESTRA 數據集的簡化格式,僅保留 `id`, `title`, `group_index`, `type`, `dynasty`, `author`, `content` 這 7 個欄位,而去除其他欄位,以簡化使用。
本資料集可用於大型語言模型的訓練。如欲作其他用途,請向數據提供者[搜韻網](https://sou-yun.cn/)諮詢。
**English Introduction**
ORCHESTRA (c**O**mp**R**ehensive **C**lassical c**H**in**ES**e poe**TR**y d**A**taset) is a comprehensive dataset of classical Chinese poetry, with data sourced from [SouYun Website](https://sou-yun.cn/). This dataset was converted and published by [nk2028](https://nk2028.shn.hk/), with the hope that by publicly releasing high-quality classical Chinese poetry data, it can promote research in classical Chinese poetry and natural language processing of classical Chinese.
ORCHESTRA-simple is a simplified format of the ORCHESTRA dataset, retaining only 7 fields: `id`, `title`, `group_index`, `type`, `dynasty`, `author`, and `content`, while removing other fields to simplify the usage.
This dataset can be used for training large language models. If you wish to use it for other purposes, please consult with the data provider, [SouYun Website](https://sou-yun.cn/).
|
true | |
false | # TextCaps in Vietnamese
This is a Vietnamese version of the [TextCaps dataset](https://textvqa.org/textcaps/). It has 109765 image-caption pairs for training and 15830 for validation. It was built using the Google Translate API. The Vietnamese version has almost the same metadata as the English one, but it doesn't have the following keys for each data point:
- `caption_tokens`
- `reference_tokens`
- `reference_strs`
- `image_classes`
In the English version, these keys are in English. Because my main focus is `caption_str`, there are no Vietnamese versions of them; I am limited by time and disk space.
I provide both English and Vietnamese .json files. |
false | # TextVQA in Vietnamese
This is a Google-translated version of [TextVQA](https://textvqa.org/) in Vietnamese. The Vietnamese version was built as follows:
- In the en/ folder,
- Download `TextVQA_0.5.1_train.json`, `TextVQA_0.5.1_val.json`.
- Using a [set data structure](https://docs.python.org/3/tutorial/datastructures.html#sets), generate txt files of unique text: train_answer_list.txt, train_question_list.txt, val_answer_list.txt, val_question_list.txt.
- In the vi/ folder,
- By translating the 4 en/ .txt files, generate train_answer_list.jsonl, train_question_list.jsonl, val_answer_list.jsonl, val_question_list.jsonl. In each entry of each file, the key is the original English text and the value is the translated Vietnamese text.
To load the Vietnamese version in your code, you need the original English version. Then use the English text as the key to retrieve the Vietnamese value from the jsonl files. I provide both the English and Vietnamese versions.
false | # Dataset Card for "code_x_glue_tc_text_to_code"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits-sample-size)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://github.com/microsoft/CodeXGLUE/tree/main/Text-Code/text-to-code
### Dataset Summary
CodeXGLUE text-to-code dataset, available at https://github.com/microsoft/CodeXGLUE/tree/main/Text-Code/text-to-code
The dataset we use is crawled and filtered from Microsoft Documentation, whose documents are located at https://github.com/MicrosoftDocs/.
### Supported Tasks and Leaderboards
- `machine-translation`: The dataset can be used to train a model for generating Java code from an **English** natural language description.
### Languages
- Java **programming** language
## Dataset Structure
### Data Instances
An example of 'train' looks as follows.
```
{
"code": "boolean function ( ) { return isParsed ; }",
"id": 0,
"nl": "check if details are parsed . concode_field_sep Container parent concode_elem_sep boolean isParsed concode_elem_sep long offset concode_elem_sep long contentStartPosition concode_elem_sep ByteBuffer deadBytes concode_elem_sep boolean isRead concode_elem_sep long memMapSize concode_elem_sep Logger LOG concode_elem_sep byte[] userType concode_elem_sep String type concode_elem_sep ByteBuffer content concode_elem_sep FileChannel fileChannel concode_field_sep Container getParent concode_elem_sep byte[] getUserType concode_elem_sep void readContent concode_elem_sep long getOffset concode_elem_sep long getContentSize concode_elem_sep void getContent concode_elem_sep void setDeadBytes concode_elem_sep void parse concode_elem_sep void getHeader concode_elem_sep long getSize concode_elem_sep void parseDetails concode_elem_sep String getType concode_elem_sep void _parseDetails concode_elem_sep String getPath concode_elem_sep boolean verify concode_elem_sep void setParent concode_elem_sep void getBox concode_elem_sep boolean isSmallBox"
}
```
### Data Fields
In the following, each data field is explained for each config. The data fields are the same among all splits.
#### default
|field name| type | description |
|----------|------|---------------------------------------------|
|id |int32 | Index of the sample |
|nl |string| The natural language description of the task|
|code |string| The programming source code for the task |
### Data Splits
| name |train |validation|test|
|-------|-----:|---------:|---:|
|default|100000| 2000|2000|
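As a minimal usage sketch (the dataset ID below follows this card's title and may need an organization prefix depending on where it is hosted):
```python
from datasets import load_dataset

ds = load_dataset("code_x_glue_tc_text_to_code")
sample = ds["train"][0]

# "nl" packs the natural language description and the class context,
# separated by concode_field_sep / concode_elem_sep markers (see the example above).
description = sample["nl"].split("concode_field_sep")[0].strip()
print(description)
print(sample["code"])
```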
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
https://github.com/microsoft, https://github.com/madlag
### Licensing Information
Computational Use of Data Agreement (C-UDA) License.
### Citation Information
```
@article{iyer2018mapping,
title={Mapping language to code in programmatic context},
author={Iyer, Srinivasan and Konstas, Ioannis and Cheung, Alvin and Zettlemoyer, Luke},
journal={arXiv preprint arXiv:1808.09588},
year={2018}
}
```
### Contributions
Thanks to @madlag (and partly also @ncoop57) for adding this dataset. |
false | # curation-corpus-ru
## Dataset Description
- **Repository:** [https://github.com/CurationCorp/curation-corpus](https://github.com/CurationCorp/curation-corpus)
Translated version of [d0rj/curation-corpus](https://huggingface.co/datasets/d0rj/curation-corpus) into Russian. |
false | |
false | # OK-VQA in multilang
These are Google-translated versions of [OK-VQA](https://okvqa.allenai.org/index.html) in several languages. Each language version lives in its own folder.
The Vietnamese version was built as follows:
- In the `en/` folder,
- From [OK-VQA](https://okvqa.allenai.org/index.html), obtain all json files: `mscoco_train2014_annotations.json`, `mscoco_val2014_annotations.json`, `OpenEnded_mscoco_train2014_questions.json`, `OpenEnded_mscoco_val2014_questions.json`.
- Using a [set data structure](https://docs.python.org/3/tutorial/datastructures.html#sets), generate txt files of unique text: `train_answer_list.txt`, `train_question_list.txt`, `val_answer_list.txt`, `val_question_list.txt`.
- In the `vi/` folder,
- By translating the 4 `en/` .txt files, generate `train_answer_list.jsonl`, `train_question_list.jsonl`, `val_answer_list.jsonl`, `val_question_list.jsonl`. In each entry of each file, the key is the original English text and the value is the translated Vietnamese text.
To load the Vietnamese version in your code, you need the original English version. Then use the English text as the key to retrieve the Vietnamese value from the jsonl files. I provide both the English and Vietnamese versions.
Other languages (if added) shall follow the same process.
Current languages:
- en
- vi
There will be more if I have time. |
false |
# Grocery Shelves Dataset
## Facing is the process of arranging products on shelves and counters.
The dataset consists of labeled photographs of grocery store shelves.
The Grocery Shelves Dataset can be used to analyze and optimize product placement data, develop strategies for increasing product visibility, maximize the effectiveness of product placements, and increase sales.
# Get the Dataset
This is just an example of the data. If you need access to the entire dataset, contact us via **[sales@trainingdata.pro](mailto:sales@trainingdata.pro)** or leave a request on **[https://trainingdata.pro/data-market](https://trainingdata.pro/data-market?utm_source=huggingface)**

# Dataset structure
- **img** - contains the original images of grocery store shelves
- **labels** - includes polyline labeling for the original images
- **annotations.xml** - contains coordinates of the polylines and labels, created for the original photo
# Data Format
Each image from `img` folder is accompanied by an XML-annotation in the `annotations.xml` file indicating the coordinates of the polylines for product placement. For each point, the x and y coordinates are provided.
### Attributes
- **is_flipped** - the product position (*true* if the product is flipped)
- **is_facing** - the product visibility (*true* if the product's cover is turned towards us and can be clearly seen)
# Example of XML file structure
.png?generation=1686606438563238&alt=media)
# Product facing annotation can be performed in accordance with your requirements.
## **[TrainingData](https://trainingdata.pro/data-market?utm_source=huggingface)** provides high-quality data annotation tailored to your needs
More datasets in TrainingData's Kaggle account: **https://www.kaggle.com/trainingdatapro/datasets**
TrainingData's GitHub: **https://github.com/trainingdata-pro** |
false | |
false | |
false |
# Dataset Card for Common Voice Corpus 6.1
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://commonvoice.mozilla.org/en/datasets
- **Repository:** https://github.com/common-voice/common-voice
- **Paper:** https://arxiv.org/abs/1912.06670
- **Leaderboard:** https://paperswithcode.com/dataset/common-voice
- **Point of Contact:** [Anton Lozhkov](mailto:anton@huggingface.co)
### Dataset Summary
The Common Voice dataset consists of a unique MP3 and corresponding text file.
Many of the 9283 recorded hours in the dataset also include demographic metadata like age, sex, and accent
that can help improve the accuracy of speech recognition engines.
The dataset currently consists of 7335 validated hours in 60 languages, but more voices and languages are always added.
Take a look at the [Languages](https://commonvoice.mozilla.org/en/languages) page to request a language or start contributing.
### Supported Tasks and Leaderboards
The results for models trained on the Common Voice datasets are available via the
[🤗 Speech Bench](https://huggingface.co/spaces/huggingface/hf-speech-bench)
### Languages
```
Abkhaz, Arabic, Assamese, Basque, Breton, Catalan, Chinese (China), Chinese (Hong Kong), Chinese (Taiwan), Chuvash, Czech, Dhivehi, Dutch, English, Esperanto, Estonian, Finnish, French, Frisian, Georgian, German, Greek, Hakha Chin, Hindi, Hungarian, Indonesian, Interlingua, Irish, Italian, Japanese, Kabyle, Kinyarwanda, Kyrgyz, Latvian, Lithuanian, Luganda, Maltese, Mongolian, Odia, Persian, Polish, Portuguese, Punjabi, Romanian, Romansh Sursilvan, Romansh Vallader, Russian, Sakha, Slovenian, Sorbian, Upper, Spanish, Swedish, Tamil, Tatar, Thai, Turkish, Ukrainian, Vietnamese, Votic, Welsh
```
## Dataset Structure
### Data Instances
A typical data point comprises the `path` to the audio file and its `sentence`.
Additional fields include `accent`, `age`, `client_id`, `up_votes`, `down_votes`, `gender`, `locale` and `segment`.
```python
{
'client_id': 'd59478fbc1ee646a28a3c652a119379939123784d99131b865a89f8b21c81f69276c48bd574b81267d9d1a77b83b43e6d475a6cfc79c232ddbca946ae9c7afc5',
'path': 'et/clips/common_voice_et_18318995.mp3',
'audio': {
'path': 'et/clips/common_voice_et_18318995.mp3',
'array': array([-0.00048828, -0.00018311, -0.00137329, ..., 0.00079346, 0.00091553, 0.00085449], dtype=float32),
'sampling_rate': 48000
},
'sentence': 'Tasub kokku saada inimestega, keda tunned juba ammust ajast saati.',
'up_votes': 2,
'down_votes': 0,
'age': 'twenties',
'gender': 'male',
'accent': '',
'locale': 'et',
'segment': ''
}
```
### Data Fields
`client_id` (`string`): An id for which client (voice) made the recording
`path` (`string`): The path to the audio file
`audio` (`dict`): A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column: `dataset[0]["audio"]` the audio file is automatically decoded and resampled to `dataset.features["audio"].sampling_rate`. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the `"audio"` column, *i.e.* `dataset[0]["audio"]` should **always** be preferred over `dataset["audio"][0]`.
`sentence` (`string`): The sentence the user was prompted to speak
`up_votes` (`int64`): How many upvotes the audio file has received from reviewers
`down_votes` (`int64`): How many downvotes the audio file has received from reviewers
`age` (`string`): The age of the speaker (e.g. `teens`, `twenties`, `fifties`)
`gender` (`string`): The gender of the speaker
`accent` (`string`): Accent of the speaker
`locale` (`string`): The locale of the speaker
`segment` (`string`): Usually an empty field
### Data Splits
The speech material has been subdivided into portions for dev, train, test, validated, invalidated, reported and other.
The validated data is data that has been validated with reviewers and received upvotes that the data is of high quality.
The invalidated data is data that has been invalidated by reviewers
and received downvotes, indicating that the data is of low quality.
The reported data is data that has been reported, for different reasons.
The other data is data that has not yet been reviewed.
The dev, test, train are all data that has been reviewed, deemed of high quality and split into dev, test and train.
## Data Preprocessing Recommended by Hugging Face
The following are data preprocessing steps advised by the Hugging Face team. They are accompanied by an example code snippet that shows how to put them to practice.
Many examples in this dataset have trailing quotation marks, e.g _“the cat sat on the mat.“_. These trailing quotation marks do not change the actual meaning of the sentence, and it is near impossible to infer whether a sentence is a quotation or not a quotation from audio data alone. In these cases, it is advised to strip the quotation marks, leaving: _the cat sat on the mat_.
In addition, the majority of training sentences end in punctuation ( . or ? or ! ), whereas just a small proportion do not. In the dev set, **almost all** sentences end in punctuation. Thus, it is recommended to append a full-stop ( . ) to the end of the small number of training examples that do not end in punctuation.
```python
from datasets import load_dataset
ds = load_dataset("mozilla-foundation/common_voice_6_1", "en", use_auth_token=True)
def prepare_dataset(batch):
"""Function to preprocess the dataset with the .map method"""
transcription = batch["sentence"]
if transcription.startswith('"') and transcription.endswith('"'):
# we can remove trailing quotation marks as they do not affect the transcription
transcription = transcription[1:-1]
if transcription[-1] not in [".", "?", "!"]:
# append a full-stop to sentences that do not end in punctuation
transcription = transcription + "."
batch["sentence"] = transcription
return batch
ds = ds.map(prepare_dataset, desc="preprocess dataset")
```
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
The dataset consists of people who have donated their voice online. You agree to not attempt to determine the identity of speakers in the Common Voice dataset.
## Considerations for Using the Data
### Social Impact of Dataset
The dataset consists of people who have donated their voice online. You agree to not attempt to determine the identity of speakers in the Common Voice dataset.
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
Public Domain, [CC-0](https://creativecommons.org/share-your-work/public-domain/cc0/)
### Citation Information
```
@inproceedings{commonvoice:2020,
author = {Ardila, R. and Branson, M. and Davis, K. and Henretty, M. and Kohler, M. and Meyer, J. and Morais, R. and Saunders, L. and Tyers, F. M. and Weber, G.},
title = {Common Voice: A Massively-Multilingual Speech Corpus},
booktitle = {Proceedings of the 12th Conference on Language Resources and Evaluation (LREC 2020)},
pages = {4211--4215},
year = 2020
}
```
|
false | # AutoTrain Dataset for project: aniaitokenclassification
## Dataset Description
This dataset has been automatically processed by AutoTrain for project aniaitokenclassification.
### Languages
The BCP-47 code for the dataset's language is en.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"tokens": [
"I",
" booked",
"a",
" flight",
"to",
"London."
],
"tags": [
4,
2,
2,
5,
2,
1
]
},
{
"tokens": [
"Apple",
"Inc.",
"is",
"planning",
"to",
"open",
"a",
"new",
"store",
"in",
"Paris."
],
"tags": [
3,
3,
2,
2,
2,
2,
2,
2,
2,
2,
1
]
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"tokens": "Sequence(feature=Value(dtype='string', id=None), length=-1, id=None)",
"tags": "Sequence(feature=ClassLabel(names=['COMPANY', 'LOC', 'O', 'ORG', 'PER', 'THING'], id=None), length=-1, id=None)"
}
```
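The snippet below is a small sketch showing how the integer tags map back to label names via the `ClassLabel` feature described above; the token/tag pair is the first sample shown earlier.
```python
from datasets import ClassLabel

# Rebuild the tag feature from the field description to decode integer tags.
tag_feature = ClassLabel(names=["COMPANY", "LOC", "O", "ORG", "PER", "THING"])

tokens = ["I", " booked", "a", " flight", "to", "London."]
tags = [4, 2, 2, 5, 2, 1]
for token, tag in zip(tokens, tags):
    print(token.strip(), "->", tag_feature.int2str(tag))
# e.g. "I -> PER", "flight -> THING", "London. -> LOC"
```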
### Dataset Splits
This dataset is split into a train and validation split. The split sizes are as follows:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 23 |
| valid | 6 |
|
false | # RED<sup>FM</sup>: a Filtered and Multilingual Relation Extraction Dataset
This is the human-filtered dataset from the 2023 ACL paper [RED^{FM}: a Filtered and Multilingual Relation Extraction Dataset](https://arxiv.org/abs/2306.09802). If you use the dataset, please reference this work in your paper:
```
@inproceedings{huguet-cabot-et-al-2023-redfm-dataset,
  title = "RED$^{\rm FM}$: a Filtered and Multilingual Relation Extraction Dataset",
  author = "Huguet Cabot, Pere-Llu{\'\i}s and Tedeschi, Simone and Ngonga Ngomo, Axel-Cyrille and Navigli, Roberto",
  booktitle = "Proc. of the 61st Annual Meeting of the Association for Computational Linguistics: ACL 2023",
  month = jul,
  year = "2023",
  address = "Toronto, Canada",
  publisher = "Association for Computational Linguistics",
  url = "https://arxiv.org/abs/2306.09802",
}
```
## License
RED<sup>FM</sup> is licensed under the CC BY-SA 4.0 license. The text of the license can be found [here](https://creativecommons.org/licenses/by-sa/4.0/). |
false |
# Ayaka/MoeDict-cmn-hak-10k
|
false | |
true | |
true |
This is the data used in the paper [Large Language Model as Attributed Training Data Generator: A Tale of Diversity and Bias](https://github.com/yueyu1030/AttrPrompt).
- `label.txt`: the label name for each class
- `train.jsonl`: The original training set.
- `valid.jsonl`: The original validation set.
- `test.jsonl`: The original test set.
- `simprompt.jsonl`: The training data generated by the simple prompt.
- `attrprompt.jsonl`: The training data generated by the attributed prompt.
Please check our original paper for details. Moreover, we provide the following datasets generated using LLMs (a minimal loading sketch follows this list):
- `regen.jsonl`: The training data generated by [ReGen](https://github.com/yueyu1030/ReGen).
- `regen_llm_augmented.jsonl`: The training data generated by ReGen, with the subtopics generated by the LLM.
- `progen.jsonl`: The training data generated by [ProGen](https://github.com/hkunlp/progen).
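A minimal loading sketch with 🤗 Datasets, assuming the JSONL files sit in the current directory and use the standard one-JSON-object-per-line layout:
```python
from datasets import load_dataset

data_files = {
    "train": "train.jsonl",
    "validation": "valid.jsonl",
    "test": "test.jsonl",
    "attrprompt": "attrprompt.jsonl",  # LLM-generated training data
}
ds = load_dataset("json", data_files=data_files)
print(ds)
```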
Please cite the original paper if you use this dataset for your study. Thanks!
```
@inproceedings{meng2019weakly,
title={Weakly-supervised hierarchical text classification},
author={Meng, Yu and Shen, Jiaming and Zhang, Chao and Han, Jiawei},
booktitle={Proceedings of the AAAI conference on artificial intelligence},
pages={6826--6833},
year={2019}
}
``` |
true |
This is the data used in the paper [Large Language Model as Attributed Training Data Generator: A Tale of Diversity and Bias](https://github.com/yueyu1030/AttrPrompt).
- `label.txt`: the label name for each class
- `train.jsonl`: The original training set.
- `valid.jsonl`: The original validation set.
- `test.jsonl`: The original test set.
- `simprompt.jsonl`: The training data generated by the simple prompt.
- `attrprompt.jsonl`: The training data generated by the attributed prompt.
Please check our original paper for details. Moreover, we provide the following datasets generated using LLMs:
- `regen.jsonl`: The training data generated by [ReGen](https://github.com/yueyu1030/ReGen).
- `regen_llm_augmented.jsonl`: The training data generated by ReGen, with the subtopics generated by the LLM.
- `progen.jsonl`: The training data generated by [ProGen](https://github.com/hkunlp/progen).
Please cite the original paper if you use this dataset for your study. Thanks!
```
@inproceedings{blitzer2007biographies,
title={Biographies, bollywood, boom-boxes and blenders: Domain adaptation for sentiment classification},
author={Blitzer, John and Dredze, Mark and Pereira, Fernando},
booktitle={Proceedings of the 45th annual meeting of the association of computational linguistics},
pages={440--447},
year={2007}
}
``` |
true |
This is the data used in the paper [Large Language Model as Attributed Training Data Generator: A Tale of Diversity and Bias](https://github.com/yueyu1030/AttrPrompt).
- `label.txt`: the label name for each class
- `train.jsonl`: The original training set.
- `valid.jsonl`: The original validation set.
- `test.jsonl`: The original test set.
- `simprompt.jsonl`: The training data generated by the simple prompt.
- `attrprompt.jsonl`: The training data generated by the attributed prompt.
Please cite the original paper if you use this dataset for your study. Thanks!
```
@article{geigle:2021:arxiv,
author = {Gregor Geigle and
Nils Reimers and
Andreas R{\"u}ckl{\'e} and
Iryna Gurevych},
title = {TWEAC: Transformer with Extendable QA Agent Classifiers},
journal = {arXiv preprint},
volume = {abs/2104.07081},
year = {2021},
url = {http://arxiv.org/abs/2104.07081},
archivePrefix = {arXiv},
eprint = {2104.07081}
}
``` |
true | This is the data used in the paper [Large Language Model as Attributed Training Data Generator: A Tale of Diversity and Bias](https://github.com/yueyu1030/AttrPrompt).
- `label.txt`: the label name for each class
- `train.jsonl`: The original training set.
- `valid.jsonl`: The original validation set.
- `test.jsonl`: The original test set.
- `simprompt.jsonl`: The training data generated by the simple prompt.
- `attrprompt.jsonl`: The training data generated by the attributed prompt. |
true |
This is the data used in the paper [Large Language Model as Attributed Training Data Generator: A Tale of Diversity and Bias](https://github.com/yueyu1030/AttrPrompt).
- `label.txt`: the label name for each class
- `train.jsonl`: The original training set.
- `valid.jsonl`: The original validation set.
- `test.jsonl`: The original test set.
- `simprompt.jsonl`: The training data generated by the simple prompt.
- `attrprompt.jsonl`: The training data generated by the attributed prompt. |
true |
This is the data used in the paper [Large Language Model as Attributed Training Data Generator: A Tale of Diversity and Bias](https://github.com/yueyu1030/AttrPrompt).
- `label.txt`: the label name for each class
- `train.jsonl`: The original training set.
- `valid.jsonl`: The original validation set.
- `test.jsonl`: The original test set.
- `simprompt.jsonl`: The training data generated by the simple prompt.
- `attrprompt.jsonl`: The training data generated by the attributed prompt. |
true |
This is the data used in the paper [Large Language Model as Attributed Training Data Generator: A Tale of Diversity and Bias](https://github.com/yueyu1030/AttrPrompt).
- `label.txt`: the label name for each class
- `train.jsonl`: The original training set.
- `valid.jsonl`: The original validation set.
- `test.jsonl`: The original test set.
- `simprompt.jsonl`: The training data generated by the simple prompt.
- `attrprompt.jsonl`: The training data generated by the attributed prompt. |
false |
# Basketball Tracking
## Tracking is a deep learning process where the algorithm tracks the movement of an object.
The dataset consists of screenshots from videos of basketball games, with the ball labeled with a bounding box.
The dataset can be used to train a neural network in ball control recognition. The dataset is useful for automating the camera operator's work during a match, allowing the ball to be efficiently kept in frame.
# Get the Dataset
## This is just an example of the data
## Contact us via **[sales@trainingdata.pro](mailto:sales@trainingdata.pro)** or leave a request on **[https://trainingdata.pro/data-market](https://trainingdata.pro/data-market?utm_source=huggingface)** to get the dataset

# Dataset structure
- **img** - contains the original images of basketball players.
- **boxes** - includes bounding box labeling for the ball in the original images.
- **annotations.xml** - contains coordinates of the boxes and labels, created for the original photo
# Data Format
Each image from `img` folder is accompanied by an XML-annotation in the `annotations.xml` file indicating the coordinates of the bounding boxes for the ball position. For each point, the x and y coordinates are provided.
### Attributes
- **occluded** - the ball visibility (*true* if the ball is occluded by 30%)
- **basket** - the position relative to the basket (*true* if the ball is covered by the basket over any distinguishable area)
# Example of XML file structure

# Basketball tracking can be performed in accordance with your requirements.
## **[TrainingData](https://trainingdata.pro/data-market?utm_source=huggingface)** provides high-quality data annotation tailored to your needs
More datasets in TrainingData's Kaggle account: **https://www.kaggle.com/trainingdatapro/datasets**
TrainingData's GitHub: **https://github.com/trainingdata-pro** |
false | |
false |
# Dataset Card for Never Ending Language Learning (NELL)
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
http://rtw.ml.cmu.edu/rtw/
- **Repository:**
http://rtw.ml.cmu.edu/rtw/
- **Paper:**
Never-Ending Learning.
T. Mitchell, W. Cohen, E. Hruschka, P. Talukdar, J. Betteridge, A. Carlson, B. Dalvi, M. Gardner, B. Kisiel, J. Krishnamurthy, N. Lao, K. Mazaitis, T. Mohamed, N. Nakashole, E. Platanios, A. Ritter, M. Samadi, B. Settles, R. Wang, D. Wijaya, A. Gupta, X. Chen, A. Saparov, M. Greaves, J. Welling. In Proceedings of the Conference on Artificial Intelligence (AAAI), 2015
### Dataset Summary
This dataset provides version 1115 of the beliefs
extracted by CMU's Never Ending Language Learner (NELL) and version
1110 of the candidate beliefs extracted by NELL. See
http://rtw.ml.cmu.edu/rtw/overview. NELL is an open information
extraction system that attempts to read the ClueWeb09 corpus of 500 million
web pages (http://boston.lti.cs.cmu.edu/Data/clueweb09/) as well as general
web searches.
The dataset has 4 configurations: nell_belief, nell_candidate,
nell_belief_sentences, and nell_candidate_sentences. nell_belief contains
the beliefs that NELL has promoted as true, while nell_candidate contains
candidate beliefs whose certainties are lower. The two sentence configs extract the
CPL sentence patterns filled with the applicable 'best' literal string
for the entities filled into the sentence patterns, and also provide
sentences found using web searches containing the entities and
relationships.
There are roughly 21M entries for nell_belief_sentences, and 100M
sentences for nell_candidate_sentences.
From the NELL website:
- **Research Goal**
To build a never-ending machine learning system that acquires the ability to extract structured information from unstructured web pages. If successful, this will result in a knowledge base (i.e., a relational database) of structured information that mirrors the content of the Web. We call this system NELL (Never-Ending Language Learner).
- **Approach**
The inputs to NELL include (1) an initial ontology defining hundreds of categories (e.g., person, sportsTeam, fruit, emotion) and relations (e.g., playsOnTeam(athlete,sportsTeam), playsInstrument(musician,instrument)) that NELL is expected to read about, and (2) 10 to 15 seed examples of each category and relation.
Given these inputs, plus a collection of 500 million web pages and access to the remainder of the web through search engine APIs, NELL runs 24 hours per day, continuously, to perform two ongoing tasks:
Extract new instances of categories and relations. In other words, find noun phrases that represent new examples of the input categories (e.g., "Barack Obama" is a person and politician), and find pairs of noun phrases that correspond to instances of the input relations (e.g., the pair "Jason Giambi" and "Yankees" is an instance of the playsOnTeam relation). These new instances are added to the growing knowledge base of structured beliefs.
Learn to read better than yesterday. NELL uses a variety of methods to extract beliefs from the web. These are retrained, using the growing knowledge base as a self-supervised collection of training examples. The result is a semi-supervised learning method that couples the training of hundreds of different extraction methods for a wide range of categories and relations. Much of NELL’s current success is due to its algorithm for coupling the simultaneous training of many extraction methods.
For more information, see: http://rtw.ml.cmu.edu/rtw/resources
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
en, and perhaps some others
## Dataset Structure
### Data Instances
There are four configurations for the dataset: nell_belief, nell_candidate, nell_belief_sentences, nell_candidate_sentences.
nell_belief and nell_candidate define:
```
{'best_entity_literal_string': 'Aspect Medical Systems',
'best_value_literal_string': '',
'candidate_source': '%5BSEAL-Iter%3A215-2011%2F02%2F26-04%3A27%3A09-%3Ctoken%3Daspect_medical_systems%2Cbiotechcompany%3E-From%3ACategory%3Abiotechcompany-using-KB+http%3A%2F%2Fwww.unionegroup.com%2Fhealthcare%2Fmfg_info.htm+http%3A%2F%2Fwww.conventionspc.com%2Fcompanies.html%2C+CPL-Iter%3A1103-2018%2F03%2F08-15%3A32%3A34-%3Ctoken%3Daspect_medical_systems%2Cbiotechcompany%3E-grant+support+from+_%092%09research+support+from+_%094%09unrestricted+educational+grant+from+_%092%09educational+grant+from+_%092%09research+grant+support+from+_%091%09various+financial+management+positions+at+_%091%5D',
'categories_for_entity': 'concept:biotechcompany',
'categories_for_value': 'concept:company',
'entity': 'concept:biotechcompany:aspect_medical_systems',
'entity_literal_strings': '"Aspect Medical Systems" "aspect medical systems"',
'iteration_of_promotion': '1103',
'relation': 'generalizations',
'score': '0.9244426550775064',
'source': 'MBL-Iter%3A1103-2018%2F03%2F18-01%3A35%3A42-From+ErrorBasedIntegrator+%28SEAL%28aspect_medical_systems%2Cbiotechcompany%29%2C+CPL%28aspect_medical_systems%2Cbiotechcompany%29%29',
'value': 'concept:biotechcompany',
'value_literal_strings': ''}
```
nell_belief_sentences and nell_candidate_sentences define:
```
{'count': 4,
'entity': 'biotechcompany:aspect_medical_systems',
'relation': 'generalizations',
'score': '0.9244426550775064',
'sentence': 'research support from [[ Aspect Medical Systems ]]',
'sentence_type': 'CPL',
'url': '',
'value': 'biotechcompany'}
```
### Data Fields
For the nell_belief and nell_candidate configurations. From http://rtw.ml.cmu.edu/rtw/faq:
* entity: The Entity part of the (Entity, Relation, Value) triple. Note that this will be the name of a concept and is not the literal string of characters seen by NELL from some text source, nor does it indicate the category membership of that concept
* relation: The Relation part of the (Entity, Relation, Value) triple. In the case of a category instance, this will be "generalizations". In the case of a relation instance, this will be the name of the relation.
* value: The Value part of the (Entity, Relation, Value) triple. In the case of a category instance, this will be the name of the category. In the case of a relation instance, this will be another concept (like Entity).
* iteration_of_promotion: The point in NELL's life at which this category or relation instance was promoted to one that NELL believes to be true. This is a non-negative integer indicating the number of iterations of bootstrapping NELL had gone through.
* score: A confidence score for the belief. Note that NELL's scores are not actually probabilistic at this time.
* source: A summary of the provenance for the belief indicating the set of learning subcomponents (CPL, SEAL, etc.) that had submitted this belief as being potentially true.
* entity_literal_strings: The set of actual textual strings that NELL has read that it believes can refer to the concept indicated in the Entity column.
* value_literal_strings: For relations, the set of actual textual strings that NELL has read that it believes can refer to the concept indicated in the Value column. For categories, this should be empty but may contain something spurious.
* best_entity_literal_string: Of the set of strings in the entity_literal_strings column, the one string that can best be used to describe the concept.
* best_value_literal_string: Same thing, but for value_literal_strings.
* categories_for_entity: The full set of categories (which may be empty) to which NELL believes the concept indicated in the Entity column to belong.
* categories_for_value: For relations, the full set of categories (which may be empty) to which NELL believes the concept indicated in the Value column to belong. For categories, this should be empty but may contain something spurious.
* candidate_source: A free-form amalgamation of more specific provenance information describing the justification(s) NELL has for possibly believing this category or relation instance.
For the nell_belief_sentences and nell_candidate_sentences, we have extracted the underlying sentences, sentence count and URLs and provided a shortened version of the entity, relation and value field by removing the string "concept:" and "candidate:". There are two types of sentences, 'CPL' and 'OE', which are generated by two of the modules of NELL, pattern matching and open web searching, respectively. There may be duplicates. The configuration is as follows:
* entity: The Entity part of the (Entity, Relation, Value) triple. Note that this will be the name of a concept and is not the literal string of characters seen by NELL from some text source, nor does it indicate the category membership of that concept
* relation: The Relation part of the (Entity, Relation, Value) triple. In the case of a category instance, this will be "generalizations". In the case of a relation instance, this will be the name of the relation.
* value: The Value part of the (Entity, Relation, Value) triple. In the case of a category instance, this will be the name of the category. In the case of a relation instance, this will be another concept (like Entity).
* score: A confidence score for the belief. Note that NELL's scores are not actually probabilistic at this time.
* sentence: the raw sentence. For 'CPL' type sentences, "[[" and "]]" markers surround the entity and value. For 'OE' type sentences, there are no "[[" and "]]" markers.
* url: the url if there is one from which this sentence was extracted
* count: the count for this sentence
* sentence_type: either 'CPL' or 'OE'
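A minimal loading sketch for the sentence-level configurations, assuming the card's configurations are published under a Hub dataset id and loaded with the `datasets` library (the id below is a placeholder, not a confirmed repository name):
```python
from datasets import load_dataset

# Placeholder Hub id -- substitute the actual repository hosting this card.
DATASET_ID = "nell"

# Load the sentence-level configuration of promoted beliefs.
ds = load_dataset(DATASET_ID, "nell_belief_sentences")

# Keep only CPL-style sentences, whose entity and value are marked
# with "[[" and "]]" (OE sentences carry no such markers).
cpl_only = ds.filter(lambda example: example["sentence_type"] == "CPL")

# Print one marked-up sentence from the first available split.
first_split = next(iter(cpl_only.values()))
print(first_split[0]["sentence"])
```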
### Data Splits
There are no splits.
## Dataset Creation
### Curation Rationale
This dataset was gathered and created over many years of running the NELL system on web data.
### Source Data
#### Initial Data Collection and Normalization
See the research paper on NELL. NELL searches a subset of the web
(Clueweb09) and the open web using various open information extraction
algorithms, including pattern matching.
#### Who are the source language producers?
The NELL authors at Carnegie Mellon University and data from Clueweb09 and the open web.
### Annotations
#### Annotation process
The various open information extraction modules of NELL.
#### Who are the annotators?
Machine annotated.
### Personal and Sensitive Information
Unknown, but there are likely names of famous individuals.
## Considerations for Using the Data
### Social Impact of Dataset
The goal for the work is to help machines learn to read and understand the web.
### Discussion of Biases
Since the data is gathered from the web, there is likely to be biased text and relationships.
[More Information Needed]
### Other Known Limitations
The relationships and concepts gathered from NELL are not 100% accurate, and there could be errors (maybe as high as 30% error).
See https://en.wikipedia.org/wiki/Never-Ending_Language_Learning
We did not 'tag' the entity and value in the 'OE' sentences, and this might be an extension in the future.
## Additional Information
### Dataset Curators
The authors of NELL at Carnegie Mellon University
### Licensing Information
There does not appear to be a license on http://rtw.ml.cmu.edu/rtw/resources. The data is made available by CMU on the web.
### Citation Information
```
@inproceedings{mitchell2015,
added-at = {2015-01-27T15:35:24.000+0100},
author = {Mitchell, T. and Cohen, W. and Hruscha, E. and Talukdar, P. and Betteridge, J. and Carlson, A. and Dalvi, B. and Gardner, M. and Kisiel, B. and Krishnamurthy, J. and Lao, N. and Mazaitis, K. and Mohammad, T. and Nakashole, N. and Platanios, E. and Ritter, A. and Samadi, M. and Settles, B. and Wang, R. and Wijaya, D. and Gupta, A. and Chen, X. and Saparov, A. and Greaves, M. and Welling, J.},
biburl = {https://www.bibsonomy.org/bibtex/263070703e6bb812852cca56574aed093/hotho},
booktitle = {AAAI},
description = {Papers by William W. Cohen},
interhash = {52d0d71f6f5b332dabc1412f18e3a93d},
intrahash = {63070703e6bb812852cca56574aed093},
keywords = {learning nell ontology semantic toread},
note = {: Never-Ending Learning in AAAI-2015},
timestamp = {2015-01-27T15:35:24.000+0100},
title = {Never-Ending Learning},
url = {http://www.cs.cmu.edu/~wcohen/pubs.html},
year = 2015
}
```
### Contributions
Thanks to [@ontocord](https://github.com/ontocord) for adding this dataset. |
true |
This is the data used in the paper [Large Language Model as Attributed Training Data Generator: A Tale of Diversity and Bias](https://github.com/yueyu1030/AttrPrompt).
- `label.txt`: the label name for each class
- `train.jsonl`: The original training set.
- `valid.jsonl`: The original validation set.
- `test.jsonl`: The original test set.
- `simprompt.jsonl`: The training data generated by the simple prompt.
- `attrprompt.jsonl`: The training data generated by the attributed prompt.
**Note**: Unlike the other datasets, the `labels` field for the training/validation/test data is a *list* rather than a single integer, because this is a multi-label classification dataset.
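A minimal sketch for reading these files and turning the list-valued `labels` into multi-hot vectors; the one-name-per-line layout of `label.txt` and the exact contents of the label lists (names or integer ids) are assumptions here:
```python
import json

# Read the class names (assumed: one label name per line in label.txt).
with open("label.txt") as f:
    label_names = [line.strip() for line in f if line.strip()]
name_to_id = {name: i for i, name in enumerate(label_names)}

def to_multi_hot(labels):
    """Convert a list of labels (names or integer ids) into a multi-hot vector."""
    vec = [0] * len(label_names)
    for lab in labels:
        vec[lab if isinstance(lab, int) else name_to_id[lab]] = 1
    return vec

# Read the original training set and binarize its list-valued labels.
with open("train.jsonl") as f:
    train = [json.loads(line) for line in f]
multi_hot = [to_multi_hot(example["labels"]) for example in train]
```
|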
false | # AutoTrain Dataset for project: fhdd_arabic_chatbot
## Dataset Description
This dataset has been automatically processed by AutoTrain for project fhdd_arabic_chatbot.
### Languages
The BCP-47 code for the dataset's language is en2ar.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"feat_sourceLang": "ara",
"feat_targetlang": "eng",
"target": "\u064a\u0646\u0628\u063a\u064a \u0623\u0646 \u062a\u064f\u0638\u0647\u0631 \u0627\u0644\u0646\u0651\u0633\u0627\u0621 \u0648\u062c\u0648\u0647\u0647\u0646\u0651.",
"source": "Women should have their faces visible."
},
{
"feat_sourceLang": "ara",
"feat_targetlang": "eng",
"target": "\u0623\u062a\u062f\u0631\u0633 \u0627\u0644\u0625\u0646\u062c\u0644\u064a\u0632\u064a\u0629\u061f",
"source": "Do you study English?"
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"feat_sourceLang": "Value(dtype='string', id=None)",
"feat_targetlang": "Value(dtype='string', id=None)",
"target": "Value(dtype='string', id=None)",
"source": "Value(dtype='string', id=None)"
}
```
### Dataset Splits
This dataset is split into a train and validation split. The split sizes are as follow:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 15622 |
| valid | 3906 |
|
true | # Dataset Card for "pandassdcctest"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
false |
# Dataset Card for OpusBooks
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** http://opus.nlpl.eu/Books.php
- **Repository:** None
- **Paper:** http://www.lrec-conf.org/proceedings/lrec2012/pdf/463_Paper.pdf
- **Leaderboard:** [More Information Needed]
- **Point of Contact:** [More Information Needed]
### Dataset Summary
[More Information Needed]
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
Here are some examples of questions and facts:
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
[More Information Needed]
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
[More Information Needed]
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@abhishekkrthakur](https://github.com/abhishekkrthakur) for adding this dataset. |
false | # Dataset Card for Invoices (Sparrow)
This dataset contains 500 invoice documents annotated and processed to be ready for Donut ML model fine-tuning.
Annotation and data preparation task was done by [Katana ML](https://www.katanaml.io) team.
[Sparrow](https://github.com/katanaml/sparrow/tree/main) - open-source data extraction solution by Katana ML.
Original dataset [info](https://data.mendeley.com/datasets/tnj49gpmtz): Kozłowski, Marek; Weichbroth, Paweł (2021), “Samples of electronic invoices”, Mendeley Data, V2, doi: 10.17632/tnj49gpmtz.2 |
false | # Dataset Card for V-LoL
## Dataset Description
- **Homepage** https://sites.google.com/view/v-lol/home
- **Repository** https://github.com/ml-research/vlol-dataset-gen
- **Paper** https://arxiv.org/abs/2306.07743
- **Point of Contact:** lukas_henrik.helff@tu-darmstadt.de
### Dataset Summary
This diagnostic dataset is specifically designed to evaluate the visual logical learning capabilities of machine learning models.
It offers a seamless integration of visual and logical challenges, providing 2D images of complex visual trains,
where the classification is derived from rule-based logic.
The fundamental idea of V-LoL is to integrate the explicit logical learning tasks of classic symbolic AI benchmarks into visually complex scenes,
creating a unique visual input that retains the challenges and versatility of explicit logic.
In doing so, V-LoL bridges the gap between symbolic AI challenges and contemporary deep learning datasets offering various visual logical learning tasks
that pose challenges for AI models across a wide spectrum of AI research, from symbolic to neural and neuro-symbolic AI.
Moreover, we provide a flexible [dataset generator](https://github.com/ml-research/vlol-dataset-gen) that
empowers researchers to easily exchange or modify the logical rules, thereby enabling the creation of new datasets incorporating novel logical learning challenges.
By combining visual input with logical reasoning, this dataset serves as a comprehensive benchmark for assessing the ability
of machine learning models to learn and apply logical reasoning within a visual context.
### Supported Tasks and Leaderboards
We offer a diverse set of datasets that present challenging AI tasks targeting various reasoning abilities. The following provides an overview of the available datasets:
Logical complexity:
- Theory X: The train has either a short, closed car or a car with a barrel load is somewhere behind a car with a golden vase load. This rule was originally introduced as "Theory X" in the new East-West Challenge.
- Numerical rule: The train has a car where its car position equals its number of payloads which equals its number of wheel axles.
- Complex rule: Either, there is a car with a car number which is smaller than its number of wheel axles count and smaller than the number of loads, or there is a short and a long car with the same colour where the position number of the short car is smaller than the number of wheel axles of the long car, or the train has three differently coloured cars. We refer to Tab. 3 in the supp. for more insights on required reasoning properties for each rule.
Visual complexity:
- Realistic train representations.
- Block representation.
OOD Trains:
- A train carrying 2-4 cars.
- A train carrying 7 cars.
Train attribute distributions:
- Michalski attribute distribution.
- Random attribute distribution.
### Languages
English
## Dataset Structure
### Data Instances
```
{
'image': <PIL.PngImagePlugin.PngImageFile image mode=RGBA size=480x270 at 0x1351D0EE0>,
'label': 1
}
```
### Data Fields
The data instances have the following fields:
- image: a `PIL.Image.Image` object containing the image. Note that accessing the image column (e.g. `dataset[0]["image"]`) decodes the image file automatically. Decoding a large number of image files can take a significant amount of time, so always query the sample index before the "image" column, i.e. prefer `dataset[0]["image"]` over `dataset["image"][0]` (see the access sketch after the label table below).
- label: an int classification label.
Class labels mapping:
| ID | Class |
| --- | ----------- |
| 0 | Westbound |
| 1 | Eastbound |
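A minimal access sketch illustrating the preferred indexing order; the Hub dataset id below is a placeholder, not the actual repository name:
```python
from datasets import load_dataset

# Placeholder Hub id -- substitute the repository that hosts this dataset.
ds = load_dataset("<v-lol-dataset-id>", split="train")

# Preferred: index the row first, then the "image" column,
# so that only this one image file is decoded.
sample = ds[0]
image = sample["image"]   # PIL image
label = sample["label"]   # 0 = Westbound, 1 = Eastbound

# Avoid ds["image"][0]: it decodes every image in the split before indexing.
```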
### Data Splits
| | Train | Validation |
| --- | --- | ----------- |
| # of samples | 10000 | 2000 |
## Dataset Creation
### Curation Rationale
Despite the successes of recent developments in visual AI, different shortcomings still exist;
from missing exact logical reasoning, to abstract generalization abilities, to understanding complex and noisy scenes.
Unfortunately, existing benchmarks, were not designed to capture more than a few of these aspects.
Whereas deep learning datasets focus on visually complex data but simple visual reasoning tasks,
inductive logic datasets involve complex logical learning tasks, however, lack the visual component.
To address this, we propose the visual logical learning dataset, V-LoL, that seamlessly combines visual and logical challenges.
Notably, we introduce the first instantiation of V-LoL, V-LoL-Train -- a visual rendition of a classic benchmark in symbolic AI, the Michalski train problem.
By incorporating intricate visual scenes and flexible logical reasoning tasks within a versatile framework,
V-LoL-Train provides a platform for investigating a wide range of visual logical learning challenges.
To create new V-LoL challenges, we provide a comprehensive guide and resources in our [GitHub repository](https://github.com/ml-research/vlol-dataset-gen).
The repository offers a collection of tools and code that enable researchers and practitioners to easily generate new V-LoL challenges based on their specific requirements. By referring to our GitHub repository, users can access the necessary documentation, code samples, and instructions to create and customize their own V-LoL challenges.
### Source Data
#### Initial Data Collection and Normalization
The individual datasets are generated using the V-LoL-Train generator. See [GitHub repository](https://github.com/ml-research/vlol-dataset-gen).
#### Who are the source language producers?
See [GitHub repository](https://github.com/ml-research/vlol-dataset-gen).
### Annotations
#### Annotation process
The images are generated in two steps: first sampling a valid symbolic representation of a train and then visualizing it within a 3D scene.
#### Who are the annotators?
Annotations are automatically derived using a Python, Prolog, and Blender pipeline. See [GitHub repository](https://github.com/ml-research/vlol-dataset-gen).
### Personal and Sensitive Information
The dataset does not contain personal nor sensitive information.
## Considerations for Using the Data
### Social Impact of Dataset
The dataset has no social impact.
### Discussion of Biases
Please refer to our paper.
### Other Known Limitations
Please refer to our paper.
## Additional Information
### Dataset Curators
Lukas Helff
### Licensing Information
MIT License
### Citation Information
```
@misc{helff2023vlol,
title={V-LoL: A Diagnostic Dataset for Visual Logical Learning},
author={Lukas Helff and Wolfgang Stammer and Hikaru Shindo and Devendra Singh Dhami and Kristian Kersting},
journal={Dataset available from https://sites.google.com/view/v-lol},
year={2023},
eprint={2306.07743},
archivePrefix={arXiv},
primaryClass={cs.AI}
}
```
### Contributions
Lukas Helff, Wolfgang Stammer, Hikaru Shindo, Devendra Singh Dhami, Kristian Kersting |
false | |
false | # Dataset Card for "symbolic-instruction-tuning-sql"
The original component (i.e., without Flan data) of the symbolic instruction tuning dataset, provided with Flan column names.
[From Zero to Hero: Examining the Power of Symbolic Tasks in Instruction Tuning](https://arxiv.org/abs/2304.07995). The training code can be found [here](https://github.com/sail-sg/symbolic-instruction-tuning).
```
@article{liu2023zero,
title={From Zero to Hero: Examining the Power of Symbolic Tasks in Instruction Tuning},
author={Liu, Qian and Zhou, Fan and Jiang, Zhengbao and Dou, Longxu and Lin, Min},
eprint={2304.07995},
year={2023}
}
``` |
true | |
false | # Dataset Card for Dataset Name
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] |
false | |
false | Mostly unfiltered anime-style images generated by various text to image models, collected from various sources (some were submitted for inclusion by their creators).<br>
Includes a subset of [p1atdev/niji-v5](https://huggingface.co/datasets/p1atdev/niji-v5/), albeit captioned differently than the source. <br>
Contains 2224 image & caption pairs.
<br>As it is unfiltered, some adult content may be included.<br>
Captions may not be completely accurate.<br>
If you wish to submit content, do it as a pull request. |
false | |
true |
# Dataset Card for RTE3-FR
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
The RTE3-FR dataset is the French translation of the Textual Entailment English dataset used in the [RTE-3 Challenge](https://nlp.stanford.edu/RTE3-pilot/).
Like its English counterpart, the French RTE-3 dataset is composed of a development set and a test set, each containing 800 T/H pairs.
All T/H pairs were manually translated into French and proofread.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
- `id`: Index number.
- `language`: The language of the concerned pair of sentences.
- `premise`: The translated premise in the target language.
- `hypothesis`: The translated hypothesis in the target language.
- `label`: The classification label, with possible values 0 (`entailment`), 1 (`neutral`), 2 (`contradiction`).
- `label_text`: The classification label, with possible values `entailment` (0), `neutral` (1), `contradiction` (2).
- `task`: The particular NLP task that the data was drawn from (IE, IR, QA and SUM).
- `length`: The length of the text of the pair.
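A small sketch of the integer/text label correspondence described above (no loading call is shown, since the card does not state the dataset's Hub id):
```python
# Mapping between the integer `label` and the string `label_text` fields.
LABEL_TEXT = {0: "entailment", 1: "neutral", 2: "contradiction"}
TEXT_LABEL = {text: i for i, text in LABEL_TEXT.items()}

def labels_consistent(example: dict) -> bool:
    """Check that `label` and `label_text` agree for one T/H pair."""
    return LABEL_TEXT[example["label"]] == example["label_text"]
```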
### Data Splits
| name |entailment|neutral|contradiction|
|-------------|---------:|------:|------------:|
| dev | 412 | 299 | 89 |
| test | 410 | 318 | 72 |
| name |short|long|
|-------------|----:|---:|
| dev | 665 | 135|
| test | 683 | 117|
| name | IE| IR| QA|SUM|
|-------------|--:|--:|--:|--:|
| dev |200|200|200|200|
| test |200|200|200|200|
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
TBA
### Contributions
[More Information Needed] |
false | # HaVG: Hausa Visual Genome
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
The Hausa Visual Genome (HaVG) dataset contains the description of an image, or of a section within the image, in Hausa and its equivalent in English. The dataset was prepared by automatically translating the English descriptions of the images in the Hindi Visual Genome (HVG). The synthetic Hausa data was then carefully post-edited, taking the respective images into account. The data is made up of 32,923 images and their descriptions, divided into training, development, test, and challenge test sets. The Hausa Visual Genome is the first dataset of its kind and can be used for Hausa-English machine translation, multi-modal research, and image description, among various other natural language processing and generation tasks.
### Supported Tasks
- Translation
- Image-to-Text
- Text-to-Image
### Languages
- Hausa
- English
## Dataset Structure
### Data Fields
All the text files have seven columns as follows:
- Column1 - image_id
- Column2 - X
- Column3 - Y
- Column4 - Width
- Column5 - Height
- Column6 - English Text
- Column7 - Hausa Text
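A minimal parsing sketch for these files; the tab delimiter and the file name used below are assumptions, since the card does not state them:
```python
import csv

COLUMNS = ["image_id", "x", "y", "width", "height", "english_text", "hausa_text"]

def read_havg(path):
    """Yield one dict per segment, mapping the seven columns listed above."""
    with open(path, encoding="utf-8") as f:
        for row in csv.reader(f, delimiter="\t"):
            yield dict(zip(COLUMNS, row))

# Example: print the first English-Hausa pair from a (hypothetical) training file.
for record in read_havg("havg_train.txt"):
    print(record["english_text"], "->", record["hausa_text"])
    break
```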
### Data Splits
| Dataset | Segments | English Words | Hausa Words |
| -------- | ----- | ----- | ----- |
| Train | 28,930 | 143,106 | 140,981 |
| Dev | 998 | 4,922 | 4,857 |
| Test | 1,595 | 7,853 | 7,736 |
| Challenge Test | 1,400 | 8,186 | 8,752 |
| Total | 32,923 | 164,067 | 162,326 |
The word counts are approximate, prior to tokenization.
## Dataset Creation
### Source Data
The source data was obtained from the Hindi Visual Genome dataset, a subset of the Visual Genome data.
### Annotation process
The translations were obtained using a web application that was developed specifically for this task.
### Who are the annotators?
The dataset was created by professional translators at HausaNLP and Bayero University Kano.
### Personal and Sensitive Information
The dataset does not contain any personal or sensitive information.
## Considerations for Using the Data
### Social Impact of Dataset
HaVG will enable the creation of higher-quality models for natural language applications in the Hausa language.
## Additional Information
### Licensing Information
This dataset is shared under the Creative Commons [BY-NC-SA](https://creativecommons.org/licenses/by-nc-sa/4.0/) license.
### Citation Information
If you use this dataset in your work, please cite us.
```
@inproceedings{abdulmumin-etal-2022-hausa,
title = "{H}ausa Visual Genome: A Dataset for Multi-Modal {E}nglish to {H}ausa Machine Translation",
author = "Abdulmumin, Idris and Dash, Satya Ranjan and Dawud, Musa Abdullahi and Parida, Shantipriya and Muhammad, Shamsuddeen and Ahmad, Ibrahim Sa{'}id and Panda, Subhadarshi and Bojar, Ond{\v{r}}ej and Galadanci, Bashir Shehu and Bello, Bello Shehu",
booktitle = "Proceedings of the Thirteenth Language Resources and Evaluation Conference",
month = jun,
year = "2022",
address = "Marseille, France",
publisher = "European Language Resources Association",
url = "https://aclanthology.org/2022.lrec-1.694",
pages = "6471--6479"
}
```
### Contributions
[More Information Needed] |
true |
<p align="center">
  <img src="https://raw.githubusercontent.com/afrisenti-semeval/afrisent-semeval-2023/main/images/afrisenti-twitter.png" width="700" height="500">
</p>
--------------------------------------------------------------------------------
## Dataset Description
- **Homepage:** https://github.com/afrisenti-semeval/afrisent-semeval-2023
- **Repository:** [GitHub](https://github.com/afrisenti-semeval/afrisent-semeval-2023)
- **Paper:** [AfriSenti: A Twitter Sentiment Analysis Benchmark for African Languages](https://arxiv.org/pdf/2302.08956.pdf)
- **Paper:** [NaijaSenti: A Nigerian Twitter Sentiment Corpus for Multilingual Sentiment Analysis](https://arxiv.org/pdf/2201.08277.pdf)
- **Leaderboard:** N/A
- **Point of Contact:** [Shamsuddeen Muhammad](shamsuddeen2004@gmail.com)
### Dataset Summary
AfriSenti is the largest sentiment analysis dataset for under-represented African languages, covering 110,000+ annotated tweets in 14 African languages (Amharic, Algerian Arabic, Hausa, Igbo, Kinyarwanda, Moroccan Arabic, Mozambican Portuguese, Nigerian Pidgin, Oromo, Swahili, Tigrinya, Twi, Xitsonga, and Yoruba).
The datasets are used in the first Afrocentric SemEval shared task, SemEval 2023 Task 12: Sentiment analysis for African languages (AfriSenti-SemEval). AfriSenti allows the research community to build sentiment analysis systems for various African languages and enables the study of sentiment and contemporary language use in African languages.
### Supported Tasks and Leaderboards
The AfriSenti can be used for a wide range of sentiment analysis tasks in African languages, such as sentiment classification, sentiment intensity analysis, and emotion detection. This dataset is suitable for training and evaluating machine learning models for various NLP tasks related to sentiment analysis in African languages.
[SemEval 2023 Task 12 : Sentiment Analysis for African Languages](https://codalab.lisn.upsaclay.fr/competitions/7320)
### Languages
14 African languages: Amharic (amh), Algerian Arabic (arq), Hausa (hau), Igbo (ibo), Kinyarwanda (kin), Moroccan Arabic/Darija (ary), Mozambican Portuguese (por), Nigerian Pidgin (pcm), Oromo (orm), Swahili (swa), Tigrinya (tir), Twi (twi), Xitsonga (tso), and Yoruba (yor).
## Dataset Structure
### Data Instances
For each instance, there is a string for the tweet and a string for the label. See the AfriSenti [dataset viewer](https://huggingface.co/datasets/HausaNLP/AfriSenti-Twitter/viewer/amh/train) to explore more examples.
```
{
"tweet": "string",
"label": "string"
}
```
### Data Fields
The data fields are:
```
tweet: a string feature.
label: a classification label, with possible values including positive, negative and neutral.
```
### Data Splits
The AfriSenti dataset has 3 splits: train, validation, and test. Below are the statistics for Version 1.0.0 of the dataset.
| | ama | arq | hau | ibo | ary | orm | pcm | pt-MZ | kin | swa | tir | tso | twi | yo |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| train | 5,982 | 1,652 | 14,173 | 10,193 | 5,584| - | 5,122 | 3,064 | 3,303 | 1,811 | - | 805 | 3,482| 8,523 |
| dev | 1,498 | 415 | 2,678 | 1,842 | 1,216 | 397 | 1,282 | 768 | 828 | 454 | 399 | 204 | 389 | 2,091 |
| test | 2,000 | 959 | 5,304 | 3,683 | 2,962 | 2,097 | 4,155 | 3,663 | 1,027 | 749 | 2,001 | 255 | 950 | 4,516 |
| total | 9,483 | 3,062 | 22,155 | 15,718 | 9,762 | 2,494 | 10,559 | 7,495 | 5,158 | 3,014 | 2,400 | 1,264 | 4,821 | 15,130 |
### How to use it
```python
from datasets import load_dataset
# you can load specific languages (e.g., Amharic). This downloads the train, validation and test sets.
ds = load_dataset("HausaNLP/AfriSenti-Twitter", "amh")
# train set only
ds = load_dataset("HausaNLP/AfriSenti-Twitter", "amh", split = "train")
# test set only
ds = load_dataset("HausaNLP/AfriSenti-Twitter", "amh", split = "test")
# validation set only
ds = load_dataset("HausaNLP/AfriSenti-Twitter", "amh", split = "validation")
```
## Dataset Creation
### Curation Rationale
AfriSenti Version 1.0.0 was created to be used in the first Afrocentric SemEval shared task **[SemEval 2023 Task 12: Sentiment analysis for African languages (AfriSenti-SemEval)](https://afrisenti-semeval.github.io)**.
### Source Data
Twitter
### Personal and Sensitive Information
We anonymized the tweets by replacing all *@mentions* by *@user* and removed all URLs.
## Considerations for Using the Data
### Social Impact of Dataset
The AfriSenti dataset has the potential to improve sentiment analysis for African languages, which is essential for understanding and analyzing the diverse perspectives of people on the African continent. This dataset can enable researchers and developers to create sentiment analysis models that are specific to African languages, which can be used to gain insights into the social, cultural, and political views of people in African countries. Furthermore, this dataset can help address the issue of underrepresentation of African languages in natural language processing, paving the way for more equitable and inclusive AI technologies.
## Additional Information
### Dataset Curators
AfriSenti is an extension of NaijaSenti, a dataset consisting of four Nigerian languages: Hausa, Yoruba, Igbo, and Nigerian-Pidgin. This dataset has been expanded to include 10 other African languages, and was curated with the help of the following:
| Language | Dataset Curators |
|---|---|
| Algerian Arabic (arq) | Nedjma Ousidhoum, Meriem Beloucif |
| Amharic (ama) | Abinew Ali Ayele, Seid Muhie Yimam |
| Hausa (hau) | Shamsuddeen Hassan Muhammad, Idris Abdulmumin, Ibrahim Said, Bello Shehu Bello |
| Igbo (ibo) | Shamsuddeen Hassan Muhammad, Idris Abdulmumin, Ibrahim Said, Bello Shehu Bello |
| Kinyarwanda (kin)| Samuel Rutunda |
| Moroccan Arabic/Darija (ary) | Oumaima Hourrane |
| Mozambique Portuguese (pt-MZ) | Felermino Dário Mário António Ali |
| Nigerian Pidgin (pcm) | Shamsuddeen Hassan Muhammad, Idris Abdulmumin, Ibrahim Said, Bello Shehu Bello |
| Oromo (orm) | Abinew Ali Ayele, Seid Muhie Yimam, Hagos Tesfahun Gebremichael, Sisay Adugna Chala, Hailu Beshada Balcha, Wendimu Baye Messell, Tadesse Belay |
| Swahili (swa) | Davis Davis |
| Tigrinya (tir) | Abinew Ali Ayele, Seid Muhie Yimam, Hagos Tesfahun Gebremichael, Sisay Adugna Chala, Hailu Beshada Balcha, Wendimu Baye Messell, Tadesse Belay |
| Twi (twi) | Salomey Osei, Bernard Opoku, Steven Arthur |
| Xitsonga (tso) | Felermino Dário Mário António Ali |
| Yoruba (yor) | Shamsuddeen Hassan Muhammad, Idris Abdulmumin, Ibrahim Said, Bello Shehu Bello |
### Licensing Information
AfriSenti is licensed under a Creative Commons Attribution 4.0 International License.
### Citation Information
```
@inproceedings{Muhammad2023AfriSentiAT,
title={AfriSenti: A Twitter Sentiment Analysis Benchmark for African Languages},
author={Shamsuddeen Hassan Muhammad and Idris Abdulmumin and Abinew Ali Ayele and Nedjma Ousidhoum and David Ifeoluwa Adelani and Seid Muhie Yimam and Ibrahim Sa'id Ahmad and Meriem Beloucif and Saif Mohammad and Sebastian Ruder and Oumaima Hourrane and Pavel Brazdil and Felermino D'ario M'ario Ant'onio Ali and Davis Davis and Salomey Osei and Bello Shehu Bello and Falalu Ibrahim and Tajuddeen Gwadabe and Samuel Rutunda and Tadesse Belay and Wendimu Baye Messelle and Hailu Beshada Balcha and Sisay Adugna Chala and Hagos Tesfahun Gebremichael and Bernard Opoku and Steven Arthur},
year={2023}
}
```
```
@article{muhammad2023semeval,
title={SemEval-2023 Task 12: Sentiment Analysis for African Languages (AfriSenti-SemEval)},
author={Muhammad, Shamsuddeen Hassan and Abdulmumin, Idris and Yimam, Seid Muhie and Adelani, David Ifeoluwa and Ahmad, Ibrahim Sa'id and Ousidhoum, Nedjma and Ayele, Abinew and Mohammad, Saif M and Beloucif, Meriem},
journal={arXiv preprint arXiv:2304.06845},
year={2023}
}
``` |
true |
<p align="center">
  <img src="https://raw.githubusercontent.com/hausanlp/NaijaSenti/main/image/naijasenti_logo1.png" width="500">
</p>
--------------------------------------------------------------------------------
## Dataset Description
- **Homepage:** https://github.com/hausanlp/NaijaSenti
- **Repository:** [GitHub](https://github.com/hausanlp/NaijaSenti)
- **Paper:** [NaijaSenti: A Nigerian Twitter Sentiment Corpus for Multilingual Sentiment Analysis](https://aclanthology.org/2022.lrec-1.63/)
- **Leaderboard:** N/A
- **Point of Contact:** [Shamsuddeen Hassan Muhammad](shamsuddeen2004@gmail.com)
### Dataset Summary
NaijaSenti is the first large-scale human-annotated Twitter sentiment dataset for the four most widely spoken languages in Nigeria — Hausa, Igbo, Nigerian-Pidgin, and Yorùbá — consisting of around 30,000 annotated tweets per language, including a significant fraction of code-mixed tweets.
### Supported Tasks and Leaderboards
The NaijaSenti can be used for a wide range of sentiment analysis tasks in Nigerian languages, such as sentiment classification, sentiment intensity analysis, and emotion detection. This dataset is suitable for training and evaluating machine learning models for various NLP tasks related to sentiment analysis in African languages. It was part of the datasets that were used for [SemEval 2023 Task 12: Sentiment Analysis for African Languages](https://codalab.lisn.upsaclay.fr/competitions/7320).
### Languages
4 most spoken Nigerian languages
* Hausa (hau)
* Igbo (ibo)
* Nigerian Pidgin (pcm)
* Yoruba (yor)
## Dataset Structure
### Data Instances
For each instance, there is a string for the tweet and a string for the label. See the NaijaSenti [dataset viewer](https://huggingface.co/datasets/HausaNLP/NaijaSenti-Twitter/viewer/hau/train) to explore more examples.
```
{
"tweet": "string",
"label": "string"
}
```
### Data Fields
The data fields are:
```
tweet: a string feature.
label: a classification label, with possible values including positive, negative and neutral.
```
### Data Splits
The NaijaSenti dataset has 3 splits: train, validation, and test. Below are the statistics for Version 1.0.0 of the dataset.
| | hau | ibo | pcm | yor |
|---|---|---|---|---|
| train | 14,172 | 10,192 | 5,121 | 8,522 |
| dev | 2,677 | 1,841 | 1,281 | 2,090 |
| test | 5,303 | 3,682 | 4,154 | 4,515 |
| total | 22,152 | 15,715 | 10,556 | 15,127 |
### How to use it
```python
from datasets import load_dataset
# you can load specific languages (e.g., Hausa). This downloads the train, validation and test sets.
ds = load_dataset("HausaNLP/NaijaSenti-Twitter", "hau")
# train set only
ds = load_dataset("HausaNLP/NaijaSenti-Twitter", "hau", split = "train")
# test set only
ds = load_dataset("HausaNLP/NaijaSenti-Twitter", "hau", split = "test")
# validation set only
ds = load_dataset("HausaNLP/NaijaSenti-Twitter", "hau", split = "validation")
```
## Dataset Creation
### Curation Rationale
NaijaSenti Version 1.0.0 was created for sentiment analysis and other related tasks in Nigerian indigenous and creole languages - Hausa, Igbo, Nigerian Pidgin and Yoruba.
### Source Data
Twitter
### Personal and Sensitive Information
We anonymized the tweets by replacing all *@mentions* by *@user* and removed all URLs.
## Considerations for Using the Data
### Social Impact of Dataset
The NaijaSenti dataset has the potential to improve sentiment analysis for Nigerian languages, which is essential for understanding and analyzing the diverse perspectives of people in Nigeria. This dataset can enable researchers and developers to create sentiment analysis models that are specific to Nigerian languages, which can be used to gain insights into the social, cultural, and political views of people in Nigeria. Furthermore, this dataset can help address the issue of underrepresentation of Nigerian languages in natural language processing, paving the way for more equitable and inclusive AI technologies.
## Additional Information
### Dataset Curators
* Shamsuddeen Hassan Muhammad
* Idris Abdulmumin
* Ibrahim Said Ahmad
* Bello Shehu Bello
### Licensing Information
NaijaSenti is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0) License.
### Citation Information
```
@inproceedings{muhammad-etal-2022-naijasenti,
title = "{N}aija{S}enti: A {N}igerian {T}witter Sentiment Corpus for Multilingual Sentiment Analysis",
author = "Muhammad, Shamsuddeen Hassan and
Adelani, David Ifeoluwa and
Ruder, Sebastian and
Ahmad, Ibrahim Sa{'}id and
Abdulmumin, Idris and
Bello, Bello Shehu and
Choudhury, Monojit and
Emezue, Chris Chinenye and
Abdullahi, Saheed Salahudeen and
Aremu, Anuoluwapo and
Jorge, Al{\'\i}pio and
Brazdil, Pavel",
booktitle = "Proceedings of the Thirteenth Language Resources and Evaluation Conference",
month = jun,
year = "2022",
address = "Marseille, France",
publisher = "European Language Resources Association",
url = "https://aclanthology.org/2022.lrec-1.63",
pages = "590--602",
}
```
### Contributions
> This work was carried out with support from Lacuna Fund, an initiative co-founded by The Rockefeller Foundation, Google.org, and Canada’s International Development Research Centre. The views expressed herein do not necessarily represent those of Lacuna Fund, its Steering Committee, its funders, or Meridian Institute. |
false | # wikisum
## Dataset Description
- **Homepage:** https://registry.opendata.aws/wikisum/
- **Repository:** https://github.com/tensorflow/tensor2tensor/tree/master/tensor2tensor/data_generators/wikisum
- **Paper:** [Generating Wikipedia by Summarizing Long Sequences](https://arxiv.org/abs/1801.10198)
- **Leaderboard:** [More Information Needed]
- **Point of Contact:** [nachshon](mailto:nachshon@amazon.com)
|
false | |
false | |
true |
# Dataset Card for Dataset Name
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] |
false |
## Data Origins
Original dataset: https://huggingface.co/datasets/jondurbin/rosettacode-raw/
Cleaner code: https://github.com/the-crypt-keeper/rosettacode-parser
## Data Fields
|Field|Type|Description|
|---|---|---|
|title|string|problem title|
|task|string|problem description|
|language|string|solution language/variant|
|soulution|string|solution source code|
## Languages
One .jsonl file is provided per language group; the sublanguage field in the data denotes the specific language version/variant or the source language the example was ported from.
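A minimal loading sketch for one of these per-language files (the file name below is an assumption; substitute whichever .jsonl matches the language group you need):
```python
from datasets import load_dataset

# Load a single per-language .jsonl file with the generic JSON loader.
ds = load_dataset("json", data_files={"train": "python.jsonl"}, split="train")

# Each row carries the fields listed above, e.g. the problem title,
# its description, and the language/variant of the solution.
example = ds[0]
print(example["title"])
print(example["language"])
```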
```
Language Python problems 510 rows 621
Language C problems 350 rows 350
Language C++ problems 403 rows 416
Language C sharp problems 322 rows 342
Language Go problems 496 rows 503
Language JavaScript problems 269 rows 301
Language Java problems 470 rows 512
Language Lua problems 335 rows 339
Language Kotlin problems 435 rows 435
Language Ruby problems 418 rows 444
Total 4894 done 565 skip 4329 failed 0 rows 4263
``` |
false |