| text-classification (bool, 2 classes) | text (string, lengths 0–664k) |
|---|---|
true |
`yangwang825/reuters-21578` is an 8-class subset of the Reuters 21578 news dataset.
|
false |
---
## Cashew Disease Identification with Artificial Intelligence (CADI-AI) Dataset
This repository contains a comprehensive dataset of cashew images captured by drones, accompanied by meticulously annotated labels.
Each high-resolution image in the dataset has a resolution of 1600x1300 pixels, providing fine details for analysis and model training.
To facilitate efficient object detection, each image is paired with a corresponding text file in YOLO format.
The YOLO format file contains annotations, including class labels and bounding box coordinates; a parsing sketch follows the label list below.
### Dataset Labels
```
['abiotic', 'insect', 'disease']
```
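Each YOLO label file holds one object per line. A minimal parsing sketch, assuming the standard YOLO text layout; the file path below is hypothetical:
```python
# Minimal sketch of parsing a YOLO-format label file; the path is hypothetical.
CLASSES = ['abiotic', 'insect', 'disease']

def read_yolo_labels(path):
    boxes = []
    with open(path) as f:
        for line in f:
            cls, cx, cy, w, h = line.split()
            # Each line: class index, then center-x, center-y, width, height,
            # all normalized to the image dimensions.
            boxes.append((CLASSES[int(cls)], float(cx), float(cy), float(w), float(h)))
    return boxes

print(read_yolo_labels("Data/train/labels/example.txt"))
```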
### Number of Images
```json
{"train": 3788, "valid": 710, "test": 238}
```
### Number of Instances Annotated
```json
{"insect": 1618, "abiotic": 13960, "disease": 7032}
```
### Folder structure after unzipping the respective folders
```markdown
Data/
├── train/
│   ├── images
│   └── labels
├── val/
│   ├── images
│   └── labels
└── test/
    ├── images
    └── labels
```
### Dataset Information
The dataset was created by a team of data scientists from the KaraAgro AI Foundation,
with support from agricultural scientists and officers.
The creation of this dataset was made possible through funding from the
Deutsche Gesellschaft für Internationale Zusammenarbeit (GIZ) through their projects
[Market-Oriented Value Chains for Jobs & Growth in the ECOWAS Region (MOVE)](https://www.giz.de/en/worldwide/108524.html) and
[FAIR Forward - Artificial Intelligence for All](https://www.bmz-digital.global/en/overview-of-initiatives/fair-forward/), which GIZ implements on
behalf of the German Federal Ministry for Economic Cooperation and Development (BMZ).
For detailed information regarding the dataset, we invite you to explore the accompanying datasheet available [here](https://drive.google.com/file/d/1viv-PtZC_j9S_K1mPl4R1lFRKxoFlR_M/view?usp=sharing).
This comprehensive resource offers a deeper understanding of the dataset's composition, variables, data collection methodologies, and other relevant details.
|
true |
# Dataset Card for WS353-semantics-sim-and-rel with ~2K entries.
### Dataset Summary
License: Apache-2.0. Contains a CSV listing word1, word2, their `connection score`, the type of connection, and the language.
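A minimal loading sketch with pandas; the file name and exact column headers below are assumptions based on the description above:
```python
import pandas as pd

# File name and column headers are assumed, not confirmed by the card.
df = pd.read_csv("ws353-semantics-sim-and-rel.csv")
print(df.head())  # expected columns: word1, word2, connection score, type, language
```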
### Original Datasets are available here:
- https://leviants.com/multilingual-simlex999-and-wordsim353/
### Paper of original Dataset:
- https://arxiv.org/pdf/1508.00106v5.pdf |
false | # AutoTrain Dataset for project: imagetest
## Dataset Description
This dataset has been automatically processed by AutoTrain for project imagetest.
### Languages
The BCP-47 code for the dataset's language is unk.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"image": "<32x32 RGB PIL image>",
"feat_fine_label": 19,
"target": 11
},
{
"image": "<32x32 RGB PIL image>",
"feat_fine_label": 29,
"target": 15
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"image": "Image(decode=True, id=None)",
"feat_fine_label": "ClassLabel(names=['apple', 'aquarium_fish', 'baby', 'bear', 'beaver', 'bed', 'bee', 'beetle', 'bicycle', 'bottle', 'bowl', 'boy', 'bridge', 'bus', 'butterfly', 'camel', 'can', 'castle', 'caterpillar', 'cattle', 'chair', 'chimpanzee', 'clock', 'cloud', 'cockroach', 'couch', 'crab', 'crocodile', 'cup', 'dinosaur', 'dolphin', 'elephant', 'flatfish', 'forest', 'fox', 'girl', 'hamster', 'house', 'kangaroo', 'keyboard', 'lamp', 'lawn_mower', 'leopard', 'lion', 'lizard', 'lobster', 'man', 'maple_tree', 'motorcycle', 'mountain', 'mouse', 'mushroom', 'oak_tree', 'orange', 'orchid', 'otter', 'palm_tree', 'pear', 'pickup_truck', 'pine_tree', 'plain', 'plate', 'poppy', 'porcupine', 'possum', 'rabbit', 'raccoon', 'ray', 'road', 'rocket', 'rose', 'sea', 'seal', 'shark', 'shrew', 'skunk', 'skyscraper', 'snail', 'snake', 'spider', 'squirrel', 'streetcar', 'sunflower', 'sweet_pepper', 'table', 'tank', 'telephone', 'television', 'tiger', 'tractor', 'train', 'trout', 'tulip', 'turtle', 'wardrobe', 'whale', 'willow_tree', 'wolf', 'woman', 'worm'], id=None)",
"target": "ClassLabel(names=['aquatic_mammals', 'fish', 'flowers', 'food_containers', 'fruit_and_vegetables', 'household_electrical_devices', 'household_furniture', 'insects', 'large_carnivores', 'large_man-made_outdoor_things', 'large_natural_outdoor_scenes', 'large_omnivores_and_herbivores', 'medium_mammals', 'non-insect_invertebrates', 'people', 'reptiles', 'small_mammals', 'trees', 'vehicles_1', 'vehicles_2'], id=None)"
}
```
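As a sketch of how the integer labels map back to class names via the `datasets` library; the repository id below is hypothetical:
```python
from datasets import load_dataset

# Hypothetical repo id -- substitute the actual AutoTrain dataset path.
ds = load_dataset("user/autotrain-data-imagetest", split="train")
print(ds.features["target"].int2str(11))           # e.g. "large_omnivores_and_herbivores"
print(ds.features["feat_fine_label"].int2str(19))  # e.g. "cattle"
```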
### Dataset Splits
This dataset is split into a train and validation split. The split sizes are as follows:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 50000 |
| valid | 10000 |
|
false |
Audio files sampled at 48000Hz of an American male pronouncing the names of the Esperanto letters in three ways. Retroflex-r and trilled-r are included. |
false | # Dataset Card for Dataset Name
## Dataset Description
Old ChatGPT scrapes, the RAW version.
### Dataset Summary
This is a result of a colab in a virtual shed. Really old stuff, before Plus even. Everything was generated by the model itself.
I think this is from what we call "alpha" now? Might even be before alpha idfk.
### Supported Tasks and Leaderboards
See dataset for more info.
### Languages
English only iirc, might be some translations thrown in there.
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
Not much data was actually curated; it is recommended to go over the data yourself and fix some answers.
### Source Data
#### Initial Data Collection and Normalization
First, user queries were generated, then Assistant's answers.
#### Who are the source language producers?
OpenAI?
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
None. Z E R O.
### Discussion of Biases
Has some biases towards talking about OpenAI stuff and some weird-ish stuff. "NDA" stuff is missing.
### Other Known Limitations
Some of the queries contain answers, hence models trained on the data as-is will be fucked up. Raw data contains "today's date" and other stuff I didn't include in my Neo(X) finetune.
## Additional Information
### Dataset Curators
MrSteyk and old ChatGPT. RIP in pepperoni, you will be missed.
### Licensing Information
[More Information Needed]
### Citation Information
Don't
### Contributions
They know themselves, apart from OAI. |
false |
I'm too lazy to fill in the dataset card template! Think of it like r1, but after NY - the timestamp is XX-01-2023. This is not turbo at this point; it was before the 26th. This must be "alpha", I'm 99% sure.
Has the same problems, plus an additional one: greetings are missing! "NDA" stuff is missing from this as well! |
true | |
true | |
false | |
false | # Dataset Card for "prepared-yagpt"
## Short Description
This dataset is intended for training chatbots in Russian.
It consists of plenty of dialogues that allow you to train your model to answer user prompts.
## Notes
1. Special tokens
- history, speaker1, speaker2 (history can be optionally removed, i.e. substituted with an empty string)
2. Dataset is based on
- [Matreshka](https://huggingface.co/datasets/zjkarina/matreshka)
- [Yandex-Q](https://huggingface.co/datasets/its5Q/yandex-q)
- [Diasum](https://huggingface.co/datasets/bragovo/diasum)
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
false | |
true | A Persian dataset labeled with the 16 Myers-Briggs personality types, crawled from Persian-speaking Twitter users. |
false |
Dataset for anime head detection (includes the entire head, not only the face parts).
| Dataset | Train | Test | Validate | Description |
|------------------------|-------|------|----------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| ani_face_detection.v1i | 25698 | 113 | 253 | A high-quality third-party dataset (it seems to no longer be publicly available; please contact me for removal if it infringes your rights) that can be used for training directly. Although its name includes `face`, what it actually annotates are `head` regions. |
We provide an [online demo](https://huggingface.co/spaces/deepghs/anime_object_detection) here. |
false |
Dataset for anime face detection (face only, not the entire head).
| Dataset | Train | Test | Validate | Description |
|:-----------------------:|:-----:|:----:|:--------:|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| v1.4 | 12798 | 622 | 1217 | Additional images from different categories have been annotated based on the `v1` dataset. Furthermore, all automatically annotated data samples from the `v1` dataset have been manually corrected. |
| v1.4-raw | 4266 | 622 | 1217 | Same as `v1.4`, without any preprocess and data augmentation. Suitable for directly upload to Roboflow platform. |
| v1 | 5943 | 293 | 566 | Primarily consists of illustrations, auto-annotated with [hysts/anime-face-detector](https://github.com/hysts/anime-face-detector); necessary manual corrections were performed. |
| raw | 1981 | 293 | 566 | Same as `v1`, without any preprocess and data augmentation. Suitable for directly upload to Roboflow platform. |
| Anime Face CreateML.v1i | 4263 | 609 | 1210 | Third-party dataset, source: https://universe.roboflow.com/my-workspace-mph8o/anime-face-createml/dataset/1 |
The best practice is to combine the `Anime Face CreateML.v1i` dataset with the `v1.4` dataset for training. We provide an [online demo](https://huggingface.co/spaces/deepghs/anime_object_detection). |
false | |
false | # Dataset Card for "piqa-ja-mbartm2m"
## Dataset Description
This is the Japanese Translation version of [piqa](https://huggingface.co/datasets/piqa).
The translator used in it was [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt).
## License
The same as the original piqa.
|
false |
# Dataset Card for LFQA Summary
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Repository:** [Repo](https://github.com/utcsnlp/lfqa_summary)
- **Paper:** [Concise Answers to Complex Questions: Summarization of Long-Form Answers](TODO)
- **Point of Contact:** acpotluri[at]utexas.edu
### Dataset Summary
This dataset contains summarization data for long-form question answers.
### Languages
The dataset contains data in English.
## Dataset Structure
### Data Instances
Each instance is a (question, long-form answer) pair from one of the three data sources -- ELI5, WebGPT, and NQ.
### Data Fields
Each instance is in a json dictionary format with the following fields:
* `type`: The type of the annotation, all data should have `summary` as the value.
* `dataset`: The dataset this QA pair belongs to, one of [`NQ`, `ELI5`, `Web-GPT`].
* `q_id`: The question id, same as the original NQ or ELI5 dataset.
* `a_id`: The answer id, same as the original ELI5 dataset. For NQ, we populate a dummy `a_id` (1).
* `question`: The question.
* `answer_paragraph`: The answer paragraph.
* `answer_sentences`: The list of answer sentences, tokenized from the answer paragraph.
* `summary_sentences`: The list of summary sentence indices (starting from 1); see the sketch after this list.
* `is_summary_count`: For each sentence in `answer_sentences`, the count of annotators who selected it as a summary sentence.
* `is_summary_1`: List of boolean values indicating whether annotator one selected the corresponding sentence as a summary sentence.
* `is_summary_2`: List of boolean values indicating whether annotator two selected the corresponding sentence as a summary sentence.
* `is_summary_3`: List of boolean values indicating whether annotator three selected the corresponding sentence as a summary sentence.
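As a sketch of how these fields fit together, the summary text of an instance can be reconstructed from the 1-based indices in `summary_sentences`:
```python
def extract_summary(instance):
    # summary_sentences holds 1-based indices into answer_sentences.
    return " ".join(instance["answer_sentences"][i - 1]
                    for i in instance["summary_sentences"])
```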
### Data Splits
The train/dev/test splits are provided in the uploaded dataset.
## Dataset Creation
Please refer to our [paper](TODO) and datasheet for details on dataset creation, annotation process, and discussion of limitations.
## Additional Information
### Licensing Information
https://creativecommons.org/licenses/by-sa/4.0/legalcode
### Citation Information
```
@inproceedings{TODO,
title = {Concise Answers to Complex Questions: Summarization of Long-Form Answers},
author = {Potluri, Abhilash and Xu, Fangyuan and Choi, Eunsol},
year = 2023,
booktitle = {Proceedings of the Annual Meeting of the Association for Computational Linguistics},
note = {Long paper}
}
``` |
false | |
false | # Dataset Card for Dataset Name
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
just for test
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] |
false | # Facial Keypoints
The dataset is designed for computer vision and machine learning tasks involving the identification and analysis of key points on a human face. It consists of images of human faces, each accompanied by key point annotations in XML format.
# Get the Dataset
This is just an example of the data. If you need access to the entire dataset, contact us via **[sales@trainingdata.pro](mailto:sales@trainingdata.pro)** or leave a request on **[https://trainingdata.pro/data-market](https://trainingdata.pro/data-market?utm_source=huggingface)**

# Data Format
Each image from `FKP` folder is accompanied by an XML-annotation in the `annotations.xml` file indicating the coordinates of the key points. For each point, the x and y coordinates are provided, and there is a `Presumed_Location` attribute, indicating whether the point is presumed or accurately defined.
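A minimal sketch of reading the annotations with the standard library; the element and attribute names below are assumptions based on the description above, so check `annotations.xml` for the exact schema:
```python
import xml.etree.ElementTree as ET

# Element/attribute names ("image", "points", "label") are assumed here.
tree = ET.parse("annotations.xml")
for image in tree.getroot().iter("image"):
    for point in image.iter("points"):
        print(image.get("name"), point.get("label"),
              point.get("points"), point.get("Presumed_Location"))
```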
# Example of XML file structure

# Labeled Keypoints
**1.** Left eye, the closest point to the nose
**2.** Left eye, pupil's center
**3.** Left eye, the closest point to the left ear
**4.** Right eye, the closest point to the nose
**5.** Right eye, pupil's center
**6.** Right eye, the closest point to the right ear
**7.** Left eyebrow, the closest point to the nose
**8.** Left eyebrow, the closest point to the left ear
**9.** Right eyebrow, the closest point to the nose
**10.** Right eyebrow, the closest point to the right ear
**11.** Nose, center
**12.** Mouth, left corner point
**13.** Mouth, right corner point
**14.** Mouth, the highest point in the middle
**15.** Mouth, the lowest point in the middle
# Keypoint annotation is made in accordance with your requirements.
## **[TrainingData](https://trainingdata.pro/data-market?utm_source=huggingface)** provides high-quality data annotation tailored to your needs
More datasets in TrainingData's Kaggle account: **https://www.kaggle.com/trainingdatapro/datasets**
TrainingData's GitHub: **https://github.com/trainingdata-pro** |
false | # Pose Estimation
The dataset is primarily intended to identify and predict the positions of major joints of a human body in an image. It consists of photographs of people with body parts labeled with keypoints.
# Get the Dataset
This is just an example of the data. If you need access to the entire dataset, contact us via **[sales@trainingdata.pro](mailto:sales@trainingdata.pro)** or leave a request on **[https://trainingdata.pro/data-market](https://trainingdata.pro/data-market?utm_source=huggingface)**

# Data Format
Each image from `EP` folder is accompanied by an XML-annotation in the `annotations.xml` file indicating the coordinates of the key points. For each point, the x and y coordinates are provided, and there is a `Presumed_Location` attribute, indicating whether the point is presumed or accurately defined.
# Example of XML file structure
.png?generation=1684358333663868&alt=media)
# Labeled body parts
Each keypoint is ordered and corresponds to the concrete part of the body:
0. **Nose**
1. **Neck**
2. **Right shoulder**
3. **Right elbow**
4. **Right wrist**
5. **Left shoulder**
6. **Left elbow**
7. **Left wrist**
8. **Right hip**
9. **Right knee**
10. **Right foot**
11. **Left hip**
12. **Left knee**
13. **Left foot**
14. **Right eye**
15. **Left eye**
16. **Right ear**
17. **Left ear**
# Keypoint annotation is made in accordance with your requirements.
## **[TrainingData](https://trainingdata.pro/data-market?utm_source=huggingface)** provides high-quality data annotation tailored to your needs
More datasets in TrainingData's Kaggle account: **https://www.kaggle.com/trainingdatapro/datasets**
TrainingData's GitHub: **https://github.com/trainingdata-pro** |
false |
# Dataset Card for duorc
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [DuoRC](https://duorc.github.io/)
- **Repository:** [GitHub](https://github.com/duorc/duorc)
- **Paper:** [arXiv](https://arxiv.org/abs/1804.07927)
- **Leaderboard:** [DuoRC Leaderboard](https://duorc.github.io/#leaderboard)
- **Point of Contact:** [Needs More Information]
### Dataset Summary
The DuoRC dataset is an English-language dataset of questions and answers gathered from crowdsourced AMT workers on Wikipedia and IMDb movie plots. The workers were given the freedom to pick answers from the plots or synthesize their own answers. It contains two sub-datasets - SelfRC and ParaphraseRC. The SelfRC dataset is built solely on Wikipedia movie plots. ParaphraseRC has questions written from Wikipedia movie plots and the answers are given based on corresponding IMDb movie plots.
### Supported Tasks and Leaderboards
- `abstractive-qa` : The dataset can be used to train a model for Abstractive Question Answering. An abstractive question answering model is presented with a passage and a question and is expected to generate a multi-word answer. The model performance is measured by exact-match and F1 score, similar to [SQuAD V1.1](https://huggingface.co/metrics/squad) or [SQuAD V2](https://huggingface.co/metrics/squad_v2). A [BART-based model](https://huggingface.co/yjernite/bart_eli5) with a [dense retriever](https://huggingface.co/yjernite/retribert-base-uncased) may be used for this task.
- `extractive-qa`: The dataset can be used to train a model for Extractive Question Answering. An extractive question answering model is presented with a passage and a question and is expected to predict the start and end of the answer span in the passage. The model performance is measured by exact-match and F1 score, similar to [SQuAD V1.1](https://huggingface.co/metrics/squad) or [SQuAD V2](https://huggingface.co/metrics/squad_v2). [BertForQuestionAnswering](https://huggingface.co/transformers/model_doc/bert.html#bertforquestionanswering) or any other similar model may be used for this task.
### Languages
The text in the dataset is in English, as spoken by Wikipedia writers for movie plots. The associated BCP-47 code is `en`.
## Dataset Structure
### Data Instances
```
{'answers': ['They arrived by train.'], 'no_answer': False, 'plot': "200 years in the future, Mars has been colonized by a high-tech company.\nMelanie Ballard (Natasha Henstridge) arrives by train to a Mars mining camp which has cut all communication links with the company headquarters. She's not alone, as she is with a group of fellow police officers. They find the mining camp deserted except for a person in the prison, Desolation Williams (Ice Cube), who seems to laugh about them because they are all going to die. They were supposed to take Desolation to headquarters, but decide to explore first to find out what happened.They find a man inside an encapsulated mining car, who tells them not to open it. However, they do and he tries to kill them. One of the cops witnesses strange men with deep scarred and heavily tattooed faces killing the remaining survivors. The cops realise they need to leave the place fast.Desolation explains that the miners opened a kind of Martian construction in the soil which unleashed red dust. Those who breathed that dust became violent psychopaths who started to build weapons and kill the uninfected. They changed genetically, becoming distorted but much stronger.The cops and Desolation leave the prison with difficulty, and devise a plan to kill all the genetically modified ex-miners on the way out. However, the plan goes awry, and only Melanie and Desolation reach headquarters alive. Melanie realises that her bosses won't ever believe her. However, the red dust eventually arrives to headquarters, and Melanie and Desolation need to fight once again.", 'plot_id': '/m/03vyhn', 'question': 'How did the police arrive at the Mars mining camp?', 'question_id': 'b440de7d-9c3f-841c-eaec-a14bdff950d1', 'title': 'Ghosts of Mars'}
```
### Data Fields
- `plot_id`: a `string` feature containing the movie plot ID.
- `plot`: a `string` feature containing the movie plot text.
- `title`: a `string` feature containing the movie title.
- `question_id`: a `string` feature containing the question ID.
- `question`: a `string` feature containing the question text.
- `answers`: a `list` of `string` features containing list of answers.
- `no_answer`: a `bool` feature informing whether the question has no answer or not.
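A minimal loading sketch via the Hugging Face `datasets` library, using the two sub-datasets named above as configurations:
```python
from datasets import load_dataset

selfrc = load_dataset("duorc", "SelfRC", split="train")  # or "ParaphraseRC"
sample = selfrc[0]
print(sample["title"], "-", sample["question"])
print(sample["answers"] if not sample["no_answer"] else "<no answer>")
```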
### Data Splits
The data is split into a training, dev and test set in such a way that the resulting sets contain 70%, 15%, and 15% of the total QA pairs and no QA pairs for any movie seen in train are included in the test set. The final split sizes are as follows:
| Name | Train | Dev | Test |
|---|---|---|---|
| SelfRC | 60721 | 12961 | 12599 |
| ParaphraseRC | 69524 | 15591 | 15857 |
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
Wikipedia and IMDb movie plots
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
For SelfRC, the annotators were allowed to mark an answer span in the plot or synthesize their own answers after reading Wikipedia movie plots.
For ParaphraseRC, questions from the Wikipedia movie plots from SelfRC were used and the annotators were asked to answer based on IMDb movie plots.
#### Who are the annotators?
Amazon Mechanical Turk Workers
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
The dataset was intially created by Amrita Saha, Rahul Aralikatte, Mitesh M. Khapra, and Karthik Sankaranarayanan in a collaborated work between IIT Madras and IBM Research.
### Licensing Information
[MIT License](https://github.com/duorc/duorc/blob/master/LICENSE)
### Citation Information
```
@inproceedings{DuoRC,
author = { Amrita Saha and Rahul Aralikatte and Mitesh M. Khapra and Karthik Sankaranarayanan},
title = {{DuoRC: Towards Complex Language Understanding with Paraphrased Reading Comprehension}},
booktitle = {Meeting of the Association for Computational Linguistics (ACL)},
year = {2018}
}
```
### Contributions
Thanks to [@gchhablani](https://github.com/gchhablani) for adding this dataset. |
false |
# Dataset Card for MNIST
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** http://yann.lecun.com/exdb/mnist/
- **Repository:**
- **Paper:** MNIST handwritten digit database by Yann LeCun, Corinna Cortes, and CJ Burges
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
The MNIST dataset consists of 70,000 28x28 black-and-white images of handwritten digits extracted from two NIST databases. There are 60,000 images in the training dataset and 10,000 images in the validation dataset, one class per digit so a total of 10 classes, with 7,000 images (6,000 train images and 1,000 test images) per class.
Half of the images were drawn by Census Bureau employees and the other half by high school students (this split is evenly distributed in the training and testing sets).
### Supported Tasks and Leaderboards
- `image-classification`: The goal of this task is to classify a given image of a handwritten digit into one of 10 classes representing integer values from 0 to 9, inclusively. The leaderboard is available [here](https://paperswithcode.com/sota/image-classification-on-mnist).
### Languages
English
## Dataset Structure
### Data Instances
A data point comprises an image and its label:
```
{
'image': <PIL.PngImagePlugin.PngImageFile image mode=L size=28x28 at 0x276021F6DD8>,
'label': 5
}
```
### Data Fields
- `image`: A `PIL.Image.Image` object containing the 28x28 image. Note that when accessing the image column (`dataset[0]["image"]`), the image file is automatically decoded. Decoding a large number of image files can take a significant amount of time, so it is important to query the sample index before the `"image"` column: `dataset[0]["image"]` should **always** be preferred over `dataset["image"][0]` (see the sketch below).
- `label`: an integer between 0 and 9 representing the digit.
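A short loading sketch illustrating the access pattern described above:
```python
from datasets import load_dataset

mnist = load_dataset("mnist", split="train")
sample = mnist[0]        # query the sample index first, then the column
print(sample["label"])   # e.g. 5
sample["image"].show()   # decoded into a 28x28 PIL image on access
```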
### Data Splits
The data is split into training and test set. All the images in the test set were drawn by different individuals than the images in the training set. The training set contains 60,000 images and the test set 10,000 images.
## Dataset Creation
### Curation Rationale
The MNIST database was created to provide a testbed for people wanting to try pattern recognition methods or machine learning algorithms while spending minimal efforts on preprocessing and formatting. Images of the original dataset (NIST) were in two groups, one consisting of images drawn by Census Bureau employees and one consisting of images drawn by high school students. In NIST, the training set was built by grouping all the images of the Census Bureau employees, and the test set was built by grouping the images from the high school students.
The goal in building MNIST was to have a training and test set following the same distributions, so the training set contains 30,000 images drawn by Census Bureau employees and 30,000 images drawn by high school students, and the test set contains 5,000 images of each group. The curators took care to make sure all the images in the test set were drawn by different individuals than the images in the training set.
### Source Data
#### Initial Data Collection and Normalization
The original images from NIST were size normalized to fit a 20x20 pixel box while preserving their aspect ratio. The resulting images contain grey levels (i.e., pixels don't simply have a value of black and white, but a level of greyness from 0 to 255) as a result of the anti-aliasing technique used by the normalization algorithm. The images were then centered in a 28x28 image by computing the center of mass of the pixels, and translating the image so as to position this point at the center of the 28x28 field.
#### Who are the source language producers?
Half of the source images were drawn by Census Bureau employees, half by high school students. According to the dataset curator, the images from the first group are more easily recognizable.
### Annotations
#### Annotation process
The images were not annotated after their creation: the image creators annotated their images with the corresponding label after drawing them.
#### Who are the annotators?
Same as the source data creators.
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
Chris Burges, Corinna Cortes and Yann LeCun
### Licensing Information
MIT Licence
### Citation Information
```
@article{lecun2010mnist,
title={MNIST handwritten digit database},
author={LeCun, Yann and Cortes, Corinna and Burges, CJ},
journal={ATT Labs [Online]. Available: http://yann.lecun.com/exdb/mnist},
volume={2},
year={2010}
}
```
### Contributions
Thanks to [@sgugger](https://github.com/sgugger) for adding this dataset. |
false | |
false | |
true |
# Dataset Card for semantics-ws-qna-oa with ~2K entries.
### Dataset Summary
License: Apache-2.0. Contains a parquet file with INSTRUCTION, RESPONSE, SOURCE, and METADATA columns.
### Original Datasets are available here:
- https://leviants.com/multilingual-simlex999-and-wordsim353/
### Paper of original Dataset:
- https://arxiv.org/pdf/1508.00106v5.pdf |
false |
Embeddings of the [english Wikipedia](https://huggingface.co/datasets/wikipedia) [paragraphs](https://huggingface.co/datasets/olmer/wiki_paragraphs) using [all-mpnet-base-v2](https://huggingface.co/sentence-transformers/all-mpnet-base-v2) sentence transformers encoder.
The dataset contains 43 911 155 paragraphs from 6 458 670 Wikipedia articles.
The size of each paragraph varies from 20 to 2000 characters.
For each paragraph there is an embedding of size 768.
Embeddings are stored in numpy files, 1 000 000 embeddings per file.
For each embedding file, there is an ids file that contains the list of ids of the corresponding paragraphs.
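A minimal loading sketch with numpy; the file names below are hypothetical, so check the repository listing for the actual naming scheme of the embedding/ids file pairs:
```python
import numpy as np

# Hypothetical file names -- one embeddings file plus its aligned ids file.
embeddings = np.load("embeddings_0.npy")  # expected shape: (1_000_000, 768)
ids = np.load("ids_0.npy")                # paragraph ids, aligned row by row
print(embeddings.shape, ids[:5])
```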
__Be careful: the dataset size is 151 GB__. |
false |
# Public Ground-Truth Dataset for Handwritten Circuit Diagrams (GTDB-HD)
This repository contains images of hand-drawn electrical circuit diagrams along with accompanying bounding box annotations for object detection and segmentation ground-truth files. This dataset is intended for training (e.g. neural network) models for the purpose of extracting electrical graphs from raster graphics.
## Structure
The folder structure is made up as follows:
```
gtdh-hd
│ README.md # This File
│ classes.json # Classes List
│ classes_color.json # Classes to Color Map
│ classes_discontinuous.json # Classes Morphology Info
│ classes_ports.json # Electrical Port Descriptions for Classes
│ consistency.py # Dataset Statistics and Consistency Check
| loader.py # Simple Dataset Loader and Storage Functions
│ segmentation.py # Multiclass Segmentation Generation
│ utils.py # Helper Functions
└───drafter_D
│ └───annotations # Bounding Box Annotations
│ │ │ CX_DY_PZ.xml
│ │ │ ...
│ │
│ └───images # Raw Images
│ │ │ CX_DY_PZ.jpg
│ │ │ ...
│ │
│ └───instances # Instance Segmentation Polygons
│ │ │ CX_DY_PZ.json
│ │ │ ...
│ │
│ └───segmentation # Binary Segmentation Maps (Strokes vs. Background)
│ │ │ CX_DY_PZ.jpg
│ │ │ ...
...
```
Where:
- `D` is the (globally) running number of a drafter
- `X` is the (globally) running number of the circuit (12 Circuits per Drafter)
- `Y` is the Local Number of the Circuit's Drawings (2 Drawings per Circuit)
- `Z` is the Local Number of the Drawing's Image (4 Pictures per Drawing)
### Image Files
Every image is RGB-colored and either stored as `jpg`, `jpeg` or `png` (both uppercase and lowercase suffixes exist).
### Bounding Box Annotations
A complete list of class labels, including a suggested mapping table to integer numbers for training and prediction purposes, can be found in `classes.json`. The annotations contain **BB**s (Bounding Boxes) of **RoI**s (Regions of Interest), like electrical symbols or texts, within the raw images and are stored in the [PASCAL VOC](http://host.robots.ox.ac.uk/pascal/VOC/) format.
Please note: *For every Raw image in the dataset, there is an accompanying bounding box annotation file.*
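A minimal sketch of reading one of these PASCAL VOC files with Python's standard library (the file name follows the `CX_DY_PZ` convention described above):
```python
import xml.etree.ElementTree as ET

# Standard PASCAL VOC layout: <object> elements with <name> and <bndbox> children.
tree = ET.parse("drafter_1/annotations/C1_D1_P1.xml")
for obj in tree.getroot().iter("object"):
    box = obj.find("bndbox")
    xmin, ymin, xmax, ymax = (int(box.find(t).text)
                              for t in ("xmin", "ymin", "xmax", "ymax"))
    print(obj.find("name").text, xmin, ymin, xmax, ymax)
```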
#### Known Labeled Issues
- C25_D1_P4 cuts off a text
- C27 cuts off some texts
- C29_D1_P1 has one additional text
- C31_D2_P4 is missing a text
- C33_D1_P4 is missing a text
- C46_D2_P2 cuts off a text
### Instance Segmentation
For every binary segmentation map, there is an accompanying polygonal annotation file for instance segmentation purposes, which is stored in the [labelme](https://github.com/wkentaro/labelme) format. Note that the contained polygons are quite coarse, intended to be used in conjunction with the binary segmentation maps for connection extraction and to tell individual instances with overlapping BBs apart.
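A short sketch of reading one of the labelme polygon files (file name again per the `CX_DY_PZ` convention):
```python
import json

# labelme files store polygons as a list of shapes with a label and points.
with open("drafter_1/instances/C1_D1_P1.json") as f:
    annotation = json.load(f)
for shape in annotation["shapes"]:
    print(shape["label"], len(shape["points"]), "polygon points")
```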
### Segmentation Maps
Binary segmentation images are available for some samples and have the same resolution as the respective image files. They contain only black and white pixels, indicating areas of drawing strokes and background respectively.
### Netlists
For some images, there are also netlist files available, which are stored in the [ASC](http://ltwiki.org/LTspiceHelp/LTspiceHelp/Spice_Netlist.htm) format.
### Consistency and Statistics
This repository comes with a stand-alone script to:
- Obtain Statistics on
- Class Distribution
- BB Sizes
- Check the BB Consistency
- Classes with Regards to the `classes.json`
- Counts between Pictures of the same Drawing
- Ensure a uniform writing style of the Annotation Files (indent)
The respective script is called without arguments to operate on the **entire** dataset:
```
$ python3 consistency.py
```
Note that due to a complete re-write of the annotation data, the script takes several seconds to finish. A drafter can be specified as CLI argument to restrict the evaluation (for example drafter 15):
```
$ python3 consistency.py 15
```
### Multi-Class (Instance) Segmentation Processing
This dataset comes with a script to process both new and existing (instance) segmentation files. It is invoked as follows:
```
$ python3 segmentation.py <command> <drafter_id> <target> <source>
```
Where:
- `<command>` has to be one of:
- `transform`
- Converts existing BB Annotations to Polygon Annotations
- Default target folder: `instances`
- Existing polygon files will not be overridden in the default settings, hence this command will have no effect in a completely populated dataset.
- Intended to be invoked after adding new binary segmentation maps
- **This step has to be performed before all other commands**
- `wire`
- Generates Wire Describing Polygons
- Default target folder: `wires`
- `keypoint`
- Generates Keypoints for Component Terminals
- Default target folder: `keypoints`
- `create`
- Generates Multi-Class segmentation Maps
- Default target folder: `segmentation_multi_class`
- `refine`
- Refines Coarse Polygon Annotations to precisely match the annotated objects
- Default target folder: `instances_refined`
- For instance segmentation purposes
- `pipeline`
- executes `wire`,`keypoint` and `refine` stacked, with one common `source` and `target` folder
- Default target folder: `instances_refined`
- `assign`
- Connector Point to Port Type Assignment by Geometric Transformation Matching
- `<drafter_id>` **optionally** restricts the process to one of the drafters
- `<target>` **optionally** specifies a divergent target folder for results to be placed in
- `<source>` **optionally** specifies a divergent source folder to read from
Please note that source and target folders are **always** subfolders inside the individual drafter folders. Specifying source and target folders allows you to stack the results of individual processing steps. For example, to perform the entire pipeline for drafter 20 manually, use:
```
python3 segmentation.py wire 20 instances_processed instances
python3 segmentation.py keypoint 20 instances_processed instances_processed
python3 segmentation.py refine 20 instances_processed instances_processed
```
### Dataset Loader
This dataset is also shipped with a set of loader and writer functions, which are internally used by the segmentation and consistency scripts and can be used for training. The dataset loader is simple, framework-agnostic and has been prepared to be callable from any location in the file system. Basic usage:
```
from loader import read_dataset, read_images, read_snippets
db_bb = read_dataset() # Read all BB Annotations
db_seg = read_dataset(segmentation=True) # Read all Polygon Annotations
db_bb_val = read_dataset(drafter=12) # Read Drafter 12 BB Annotations
len(db_bb) # Get The Amount of Samples
db_bb[5] # Get an Arbitrary Sample
db = read_images(drafter=12) # Returns a list of (Image, Annotation) pairs
db = read_snippets(drafter=12) # Returns a list of (Image, Annotation) pairs
```
## Citation
If you use this dataset for scientific publications, please consider citing us as follows:
```
@inproceedings{thoma2021public,
title={A Public Ground-Truth Dataset for Handwritten Circuit Diagram Images},
author={Thoma, Felix and Bayer, Johannes and Li, Yakun and Dengel, Andreas},
booktitle={International Conference on Document Analysis and Recognition},
pages={20--27},
year={2021},
organization={Springer}
}
```
## How to Contribute
If you want to contribute to the dataset as a drafter or in case of any further questions, please send an email to: <johannes.bayer@dfki.de> (corresponding author), <yakun.li@dfki.de>, <andreas.dengel@dfki.de>
## Guidelines
These guidelines are used throughout the generation of the dataset. They can be used as an instruction for participants and data providers.
### Drafter Guidelines
- 12 Circuits should be drawn, each of them twice (24 drawings in total)
- Most important: The drawing should be as natural to the drafter as possible
- Free-hand sketches are preferred; using rulers and drawing template stencils should be avoided unless it appears unnatural to the drafter
- Different types of pens/pencils should be used for different drawings
- Different kinds of (colored, structured, ruled, lined) paper should be used
- One symbol set (European/American) should be used throughout one drawing (consistency)
- It is recommended to use the symbol set that the drafter is most familiar with
- It is **strongly** recommended to share the first one or two circuits for review by the dataset organizers before drawing the rest to avoid problems (complete redrawing in worst case)
### Image Capturing Guidelines
- For each drawing, 4 images should be taken (96 images in total per drafter)
- Angle should vary
- Lighting should vary
- Moderate (e.g. motion) blur is allowed
- All circuit-related aspects of the drawing must be _human-recognizable_
- The drawing should be the main part of the image, but _naturally_ occurring objects from the environment are welcomed
- The first image should be _clean_, i.e. ideal capturing conditions
- Kinks and buckling can be applied to the drawing between individual image captures
- Try to use the file name convention (`CX_DY_PZ.jpg`) as early as possible
- The circuit range `X` will be given to you
- `Y` should be `1` or `2` for the drawing
- `Z` should be `1`,`2`,`3` or `4` for the picture
### Object Annotation Guidelines
- General Placement
- A **RoI** must be **completely** surrounded by its **BB**
- A **BB** should be as tight as possible to the **RoI**
- In case of connecting lines not completely touching the symbol, the BB should be extended (only by a small margin) to enclose those gaps (especially considering junctions)
- Characters that are part of the **essential symbol definition** should be included in the BB (e.g. the `+` of a polarized capacitor should be included in its BB)
- **Junction** annotations
- Used for actual junction points (Connection of three or more wire segments with a small solid circle)
- Used for connections of three or more straight wire line segments where a physical connection can be inferred by context (i.e. can be distinguished from **crossover**)
- Used for wire line corners
- Redundant Junction Points should **not** be annotated (small solid circle in the middle of a straight line segment)
- Should not be used for corners or junctions that are part of the symbol definition (e.g. Transistors)
- **Crossover** Annotations
- If dashed/dotted line: BB should cover the two next dots/dashes
- **Text** annotations
- Individual Text Lines should be annotated Individually
- Text Blocks should only be annotated If Related to Circuit or Circuit's Components
- Semantically meaningful chunks of information should be annotated Individually
- component characteristics enclosed in a single annotation (e.g. __100Ohms__, __10%__ tolerance, __5V__ max voltage)
- Component Names and Types (e.g. __C1__, __R5__, __ATTINY2313__)
- Custom Component Terminal Labels (i.e. __Integrated Circuit__ Pins)
- Circuit Descriptor (e.g. "Radio Amplifier")
- Texts not related to the Circuit should be ignored
- e.g. Brief paper, Company Logos
- Drafters auxiliary markings for internal organization like "D12"
- Texts on Surrounding or Background Papers
- Characters which are part of the essential symbol definition should __not__ be annotated as dedicated Text annotations
- e.g. Schmitt Trigger __S__, AND gate __&__, motor __M__, polarized capacitor __+__
- Only add terminal text annotation if the terminal is not part of the essential symbol definition
- **Table** cells should be annotated independently
- **Operation Amplifiers**
- Both the triangular US symbols and the European IC-like symbols for OpAmps should be labeled `operational_amplifier`
- The `+` and `-` signs at the OpAmp's input terminals are considered essential and should therefore not be annotated as texts
- **Complex Components**
- Both the entire Component and its sub-Components and internal connections should be annotated:
| Complex Component | Annotation |
| ----------------- | ------------------------------------------------------ |
| Optocoupler | 0. `optocoupler` as Overall Annotation |
| | 1. `diode.light_emitting` |
| | 2. `transistor.photo` (or `resistor.photo`) |
| | 3. `optical` if LED and Photo-Sensor arrows are shared |
| | then the arrow area should be included in all |
| Relay | 0. `relay` as Overall Annotation |
| (also for | 1. `inductor` |
| coupled switches) | 2. `switch` |
| | 3. `mechanical` for the dashed line between them |
| Transformer | 0. `transformer` as Overall Annotation |
| | 1. `inductor` or `inductor.coupled` (watch the dot) |
| | 3. `magnetic` for the core |
#### Rotation Annotations
The rotation (an integer, in degrees) should capture the overall rotation of the symbol shape. However, the position of the terminals should also be taken into consideration. Under idealized circumstances (no perspective distortion and symbols drawn accurately according to the symbol library), these two requirements equal each other. For pathological cases however, in which the shape and the set of terminals (or even individual terminals) are conflicting, the rotation should compromise between all factors.
Rotation annotations are currently work in progress. They should be provided for at least the following classes:
- "voltage.dc"
- "resistor"
- "capacitor.unpolarized"
- "diode"
- "transistor.bjt"
#### Text Annotations
- The Character Sequence in the Text Label Annotations should describe the actual Characters depicted in the respective Bounding Box as Precisely as Possible
- Bounding Box Annotations of class `text`
- Bear an additional `<text>` tag in which their content is given as string
- The `Omega` and `Mikro` Symbols are escaped respectively
- Currently Work in Progress
- The utils script allows for migrating text annotations from one annotation file to another: `python3 utils.py source target`
### Segmentation Map Guidelines
- Areas of __intended__ drawing strokes (ink and pencil abrasion respectively) should be marked black; all other pixels (background) should be white
- Shining through the paper (from the rear side or other sheets) should be considered background
### Polygon Annotation Guidelines
0. Before starting, make sure the respective files exist for the image sample to be polygon-annotated:
- BB Annotations (Pascal VOC XML File)
- (Binary) Segmentation Map
1. Transform the BB annotations into raw polygons
- Use: `python3 segmentation.py transform`
2. Refine the Polygons
- **To Avoid Embedding Image Data into the resulting JSON**, use: `labelme --nodata`
- Just make sure there are no overlaps between instances
- Especially take care about overlaps with structural elements like junctions and crossovers
3. Generate Multi-Class Segmentation Maps from the refined polygons
- Use: `python3 segmentation.py create`
- Use the generated images for a visual inspection
- After spotting problems, continue with Step 2
### Terminal Annotation Guidelines
```
labelme --labels "connector" --config "{shift_auto_shape_color: 1}" --nodata
```
|
true |
The dataset covers Ukrainian reviews in three different domains:
1) Hotels.
2) Restaurants.
3) Products.
The dataset is comprised of several .csv files, which one may find useful:
1) processed_data.csv - the processed dataset itself.
2) train_val_test_indices.csv - a csv file with train/val/test indices. The split was stratified w.r.t. dataset name (hotels, restaurants, products) and rating.
3) bad_ids.csv - a csv file with ids of bad samples marked using a model-filtering approach; only ids of those samples for which the difference between the actual and predicted rating is bigger than 2 points are kept in this file.
The data is scraped from Tripadvisor (https://www.tripadvisor.com/) and Rozetka (https://rozetka.com.ua/).
The dataset was initially used for the extraction of key phrases relevant to one of the rating categories, based on a trained machine learning model (future article link will be here).
The dataset is processed to include two additional columns: one with lemmatized tokens and another with POS tags. Both lemmatization and POS tagging are done using the pymorphy2 (https://pymorphy2.readthedocs.io/en/stable/) library.
The words are tokenized using a specific regex tokenizer to account for the usage of apostrophes.
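An illustrative sketch of this preprocessing; the exact regex used for the dataset is not published here, so the pattern below (which merely keeps apostrophes inside tokens) and the pymorphy2 setup are assumptions:
```python
import re
import pymorphy2

# Assumed pattern: keep the apostrophe inside word tokens.
tokenize = re.compile(r"[\w']+").findall
# Ukrainian support requires the pymorphy2-dicts-uk package.
morph = pymorphy2.MorphAnalyzer(lang="uk")

for token in tokenize("м'яке крісло"):
    parsed = morph.parse(token)[0]
    print(token, parsed.normal_form, parsed.tag.POS)
```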
Those reviews which weren't in Ukrainian were translated into it using Microsoft Translator and re-checked manually afterwards.
|
false |
# Dataset card for dominoes
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset description](#dataset-description)
- [Dataset categories](#dataset-categories)
## Dataset description
- **Homepage:** https://segments.ai/ant/dominoes
This dataset was created using [Segments.ai](https://segments.ai). It can be found [here](https://segments.ai/ant/dominoes).
## Dataset categories
| Id | Name | Description |
| --- | ---- | ----------- |
| 1 | domino | - |
|
false | # Dataset Card for Dataset Name
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] |
false | |
false |
[seahorse](https://github.com/google-research-datasets/seahorse)
```
@misc{clark2023seahorse,
title={SEAHORSE: A Multilingual, Multifaceted Dataset for Summarization Evaluation},
author={Elizabeth Clark and Shruti Rijhwani and Sebastian Gehrmann and Joshua Maynez and Roee Aharoni and Vitaly Nikolaev and Thibault Sellam and Aditya Siddhant and Dipanjan Das and Ankur P. Parikh},
year={2023},
eprint={2305.13194},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
false |
# Dataset Card for Dataset Name
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
A copied data set from CIFAR10 as a demonstration
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] |
true |
This is a dataset for testing. |
true | # AutoTrain Dataset for project: analytics-intent-reasoning
## Dataset Description
This dataset has been automatically processed by AutoTrain for project analytics-intent-reasoning.
### Languages
The BCP-47 code for the dataset's language is zh.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"text": "\u9500\u552e\u91d1\u989d\u7684\u540c\u6bd4",
"target": 1
},
{
"text": "\u676d\u5dde\u54ea\u4e2a\u533a\u7684\u9500\u552e\u91d1\u989d\u6700\u9ad8",
"target": 1
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"text": "Value(dtype='string', id=None)",
"target": "ClassLabel(names=['\u62a5\u8868\u6784\u5efa', '\u67e5\u8be2\u7c7b', '\u67e5\u8be2\u7c7b\u67e5\u8be2\u7c7b'], id=None)"
}
```
### Dataset Splits
This dataset is split into a train and validation split. The split sizes are as follows:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 72 |
| valid | 20 |
|
false |
## Loading the dataset with a specific configuration
There are 3 different OCR versions to choose from, in their original format or the standardized DUE format, as well as the option to load the documents as filepaths or as binaries (PDF).
To load a specific configuration, pass a config from one of the following:
```python
#{bin_}{Amazon,Azure,Tesseract}_{original,due}
['Amazon_due', 'Amazon_original', 'Azure_due', 'Azure_original', 'Tesseract_due', 'Tesseract_original',
'bin_Amazon_due', 'bin_Amazon_original', 'bin_Azure_due', 'bin_Azure_original', 'bin_Tesseract_due', 'bin_Tesseract_original']
```
```python
from datasets import load_dataset
ds = load_dataset("DUDE2023/DUDE", 'Amazon_original')
```
|
false | |
false |
# VoxCeleb 1
VoxCeleb1 contains over 100,000 utterances for 1,251 celebrities, extracted from videos uploaded to YouTube.
## Verification Split
| | train | validation | test |
| :---: | :---: | :---: | :---: |
| # of speakers | 1211 | 1211 | 40 |
| # of samples | 133777 | 14865 | 4874 |
## References
- https://www.robots.ox.ac.uk/~vgg/data/voxceleb/vox1.html |
false |
## "Say It Again, Kid!" Speech Data Collection
## Training data for pronunciation quality classifiers for children learning English
Used in papers ...
Train set and test set in FLAC format.
File id key, for example: train001fifi05_609_t10892805_living-room.flac
- Speaker key indicates train or test set, plus a running number: "train001"
- Native language: "fifi" for Finnish, "enuk" for UK English, "othr" for other
- Age of speaker in years (if known): "05"
- Sample number: "609" (some kids really enjoyed contributing!)
- Seconds from first sample given: "t10892805"
- Target utterance text with spaces etc. replaced by dashes: "living-room"
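A sketch of decoding this key; the fixed field widths (3-digit speaker number, 4-letter language code, 2-digit age) are inferred from the example above and may need adjusting:
```python
import re

# Field widths inferred from the example file name above.
KEY = re.compile(
    r"(?P<split>train|test)(?P<speaker>\d{3})"
    r"(?P<lang>[a-z]{4})(?P<age>\d{2})"
    r"_(?P<sample>\d+)_t(?P<seconds>\d+)_(?P<target>.+)\.flac"
)
print(KEY.match("train001fifi05_609_t10892805_living-room.flac").groupdict())
```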
License: cc-by-nd-4.0
We emphasize that by "no derivatives" we mean that you cannot use the audio samples as part of any work that is not directly related to describing the dataset in a speech technology or scientific language-learning context. You may include them in a scientific presentation when the context is clearly to present the original data and not to use the data in another fashion.
Commercial use of speech samples for building and evaluation of speech technology models is not prohibited. |
false |
# VoxCeleb 1
VoxCeleb1 contains over 100,000 utterances for 1,251 celebrities, extracted from videos uploaded to YouTube.
## Identification Split
| | train | validation | test |
| :---: | :---: | :---: | :---: |
| # of speakers | 1251 | 1251 | 1251 |
| # of samples | 306208 | 14479 | 4874 |
## References
- https://www.robots.ox.ac.uk/~vgg/data/voxceleb/vox1.html |
false | # AutoTrain Dataset for project: hhhh
## Dataset Description
This dataset has been automatically processed by AutoTrain for project hhhh.
### Languages
The BCP-47 code for the dataset's language is unk.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"image": "<256x256 RGBA PIL image>",
"target": 0
},
{
"image": "<256x256 RGBA PIL image>",
"target": 0
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"image": "Image(decode=True, id=None)",
"target": "ClassLabel(names=['lion', 'tiger'], id=None)"
}
```
### Dataset Splits
This dataset is split into a train and validation split. The split sizes are as follows:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 360 |
| valid | 40 |
|
false | # ES2Bash
This dataset contains a collection of natural language requests (in Spanish) and their corresponding bash commands. The purpose of this dataset is to provide examples of requests and their associated bash commands to facilitate machine learning and the development of natural language processing systems related to command-line operations.
# Features
The dataset consists of two main features:
* Natural Language Request (ES): This feature contains natural language requests written in Spanish. The requests represent tasks or actions to be performed using command-line commands.
* Bash Command: This feature contains the bash commands associated with each natural language request. The bash commands represent the way to execute the requested task or action using the command line.
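For illustration, a single pair might look like the following sketch; both the field names (`request_es`, `bash_command`) and the values are placeholders, since the card only describes the two features informally:
```python
# A hypothetical sample; field names and values are illustrative,
# not taken from the dataset itself.
example = {
    "request_es": "Muestra el contenido del archivo notas.txt",  # NL request in Spanish
    "bash_command": "cat notas.txt",                             # associated bash command
}
```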
# Initial Commands
The dataset initially contains requests related to the following commands:
* cat: Requests involving reading text files.
* ls: Requests related to obtaining information about files and directories at a specific location.
* cd: Requests to change the current directory.
# Dataset Expansion
In addition to the initial commands mentioned above, there are plans to expand this dataset to include more common command-line commands. The expansion will cover a broader range of tasks and actions that can be performed using command-line operations.
Efforts will also be made to improve the existing examples and ensure that they are clear, accurate, and representative of typical requests that users may have when working with command lines.
# Request Statistics
In the future, statistical data will be provided on the requests present in this dataset. This data may include information about the distribution of requests in different categories, the frequency of use of different commands, and any other relevant analysis to better understand the usage and needs of command-line users.
# Request Collection Process
This dataset is the result of a combination of requests generated by language models and manually added requests. The requests generated by language models were based on existing examples and prior knowledge related to the usage of command lines. A manual review was then conducted to ensure the quality and relevance of the requests. |
false |
# VoxCeleb 2
VoxCeleb2 contains over 1 million utterances for 6,112 celebrities, extracted from videos uploaded to YouTube.
## Verification Split
| | train | validation | test |
| :---: | :---: | :---: | :---: |
| # of speakers | 5,994 | 5,994 | 118 |
| # of samples | 982,808 | 109,201 | 36,237 |
## Data Fields
- ID (string): The ID of the sample with format `<spk_id--utt_id_start_stop>`.
- duration (float64): The duration of the segment in seconds.
- wav (string): The filepath of the waveform.
- start (int64): The start index of the segment, which is (start seconds) × (sample rate).
- stop (int64): The stop index of the segment, which is (stop seconds) × (sample rate).
- spk_id (string): The ID of the speaker.
Example:
```
{
'ID': 'id09056--00112_0_89088',
'duration': 5.568,
'wav': 'id09056/U2mRgZ1tW04/00112.wav',
'start': 0,
'stop': 89088,
'spk_id': 'id09056'
}
```
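As a quick sanity check on the index fields, the start/stop values above are consistent with the duration, assuming the 16 kHz sample rate commonly used for VoxCeleb audio (an assumption; adjust if your copy differs):
```python
# duration should equal (stop - start) / sample_rate
sample_rate = 16_000  # assumed; VoxCeleb audio is commonly distributed at 16 kHz
start, stop = 0, 89088
print((stop - start) / sample_rate)  # 5.568, matching the 'duration' field above
```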
## References
- https://www.robots.ox.ac.uk/~vgg/data/voxceleb/vox2.html |
false |
A simple classification task for generic anime images, including the following 4 classes:
| Class | Images | Description |
|:------------:|:------:|---------------------------------------------------------------|
| comic | 5746 | comic images in color or greyscale |
| illustration | 6064 | illustration images |
| bangumi | 4914 | video screenshots or key visual images in bangumi |
| 3d | 4649 | 3d works including koikatsu, mikumikudance and other 3d types |
|
true | # Dataset Card for "github-issues"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
false |
# Image Captioning Dataset
## Overview
This dataset is designed for image captioning tasks and consists of a collection of images paired with corresponding captions. The dataset aims to facilitate research and development in the field of image captioning and can be used for training and evaluating image captioning models.
## Dataset Details
- Number of Images: 9228
- Image Source: Flickr30K
- Caption Language: Arabic
|
false |
# VoxCeleb 2
VoxCeleb2 contains over 1 million utterances for 6,112 celebrities, extracted from videos uploaded to YouTube.
## Verification Split
| | train | validation | test |
| :---: | :---: | :---: | :---: |
| # of speakers | 5,994 | 5,994 | 118 |
| # of samples | 982,808 | 109,201 | 36,237 |
## Data Fields
- ID (string): The ID of the sample with format `<spk_id--utt_id_start_stop>`.
- duration (float64): The duration of the segment in seconds.
- wav (string): The filepath of the waveform.
- start (int64): The start index of the segment, which is (start seconds) × (sample rate).
- stop (int64): The stop index of the segment, which is (stop seconds) × (sample rate).
- spk_id (string): The ID of the speaker.
Example:
```
{
'ID': 'id09056--00112_0_89088',
'duration': 5.568,
'wav': 'id09056/U2mRgZ1tW04/00112.wav',
'start': 0,
'stop': 89088,
'spk_id': 'id09056'
}
```
## References
- https://www.robots.ox.ac.uk/~vgg/data/voxceleb/vox2.html |
false |
# Dataset Card for Piano Sound Quality Database
## Requirements
```
python 3.8-3.10
soundfile
librosa
```
## Usage
```
from datasets import load_dataset
data = load_dataset("ccmusic-database/piano_sound_quality", split="5_Kawai")
labels = data.features['label'].names
for item in data:
    print('audio info: ', item['audio'])
    print('label name: ' + labels[item['label']])
```
## Maintenance
```
git clone git@hf.co:datasets/ccmusic-database/piano_sound_quality
```
## Dataset Description
- **Homepage:** <https://ccmusic-database.github.io>
- **Repository:** <https://huggingface.co/datasets/CCMUSIC/piano_sound_quality>
- **Paper:** <https://doi.org/10.5281/zenodo.5676893>
- **Leaderboard:** <https://ccmusic-database.github.io/team.html>
- **Point of Contact:** N/A
### Dataset Summary
This database contains 12 full-range audio files (.wav/.mp3/.m4a format) of 7 models of piano (KAWAI upright piano, KAWAI grand piano, Yingchang upright piano, Xinghai upright piano, Grand Theatre Steinway piano, Steinway grand piano, Pearl River upright piano) and 1320 split monophonic audio files (.wav/.mp3/.m4a format), for a total of 1332 files.
A score sheet (.xls format) of the piano sound quality rated by 29 people who participated in the subjective evaluation test is also included.
### Supported Tasks and Leaderboards
Piano Sound Classification
### Languages
English
## Dataset Structure
### Data Instances
.wav
### Data Fields
```
1_PearlRiver
2_YoungChang
3_Steinway-T
4_Hsinghai
5_Kawai
6_Steinway
7_Kawai-G
8_Yamaha
```
### Data Splits
train set, validation set, test set
## Dataset Creation
### Curation Rationale
Lack of a dataset for piano sound quality
### Source Data
#### Initial Data Collection and Normalization
Zhaorui Liu, Shaohua Ji, Monan Zhou
#### Who are the source language producers?
Students from CCMUSIC
### Annotations
#### Annotation process
This database contains 12 full-range audio files (.wav/.mp3/.m4a format) of 7 models of piano (KAWAI upright piano, KAWAI grand piano, Yingchang upright piano, Xinghai upright piano, Grand Theatre Steinway piano, Steinway grand piano, Pearl River upright piano)
#### Who are the annotators?
Students from CCMUSIC
### Personal and Sensitive Information
None
## Considerations for Using the Data
### Social Impact of Dataset
Help developing piano sound quality rating apps
### Discussion of Biases
Only for pianos
### Other Known Limitations
No black key in Steinway
## Additional Information
### Dataset Curators
Zijin Li
### Licensing Information
```
MIT License
Copyright (c) 2023 CCMUSIC
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
```
### Citation Information
```
@dataset{zhaorui_liu_2021_5676893,
author = {Zhaorui Liu and Monan Zhou and Shenyang Xu and Zijin Li},
title = {{Music Data Sharing Platform for Computational Musicology Research (CCMUSIC DATASET)}},
month = nov,
year = 2021,
publisher = {Zenodo},
version = {1.1},
doi = {10.5281/zenodo.5676893},
url = {https://doi.org/10.5281/zenodo.5676893}
}
```
### Contributions
Provide a dataset for piano sound quality |
true |
---
dataset_info:
  features:
  - name: intent
    dtype: string
  - name: user_utterance
    dtype: string
  - name: origin
    dtype: string
---
# Dataset Card for "clinic150-SUR"
### Dataset Summary
The Clinic150-SUR dataset is a novel and augmented dataset designed to simulate natural human behavior during interactions with customer service-like centers.
Extending the [Clinic150 dataset](https://aclanthology.org/D19-1131/), it incorporates augmentation with IBM's [LAMBADA](https://arxiv.org/abs/1911.03118) and [Parrot](https://github.com/PrithivirajDamodaran/Parrot_Paraphraser) models, along with carefully curated duplicated utterances.
This dataset aims to provide a more comprehensive and realistic representation of customer service interactions,
facilitating the development and evaluation of robust and efficient dialogue systems.
Key Features:
- Augmentation with IBM's [LAMBADA Model](https://arxiv.org/abs/1911.03118): The Clinic150-SUR dataset leverages IBM's LAMBADA model, a language generation model trained on a large corpus of text, to augment the original dataset. This augmentation process enhances the diversity and complexity of the dialogue data, allowing for a broader range of interactions.
- Integration of [Parrot](https://github.com/PrithivirajDamodaran/Parrot_Paraphraser) Model: In addition to the LAMBADA model, the Clinic150-SUR dataset also incorporates the Parrot model, providing a variety of paraphrases. By integrating Parrot, the dataset achieves more variations of existing utterances.
- Duplicated Utterances: The dataset includes carefully curated duplicated utterances to mimic real-world scenarios where users rephrase or repeat commonly asked queries. This feature adds variability to the data, reflecting the natural tendencies of human interactions, and enables dialogue systems to handle such instances better.
- [Clinic150](https://aclanthology.org/D19-1131/) as the Foundation: The Clinic150-SUR dataset is built upon the Clinic150 dataset, which originally consisted of 150 in-domain intent classes and 150 human utterances for each intent. By utilizing this foundation, the augmented dataset retains the in-domain expertise while better reflecting the nature of user requests towards a dialog system.
### Data Instances
#### clinic150-SUR
- **Size of downloaded dataset file:** 29 MB
### Data Fields
#### clinic150-SUR
- `intent`: a `string` feature.
- `user_utterance`: a `string` feature.
- `origin`: a `string` feature ('original', 'lambada', 'parrot').
### Citation Information
```
@inproceedings{rabinovich2022reliable,
title={Reliable and Interpretable Drift Detection in Streams of Short Texts},
author={Rabinovich, Ella and Vetzler, Matan and Ackerman, Samuel and Anaby-Tavor, Ateret},
booktitle = "Proceedings of the 61th Annual Meeting of the Association for Computational Linguistics (industry track)",
publisher = "Association for Computational Linguistics",
year={2023},
url={https://arxiv.org/abs/2305.17750}
}
```
### Contributions
Thanks to [Matan Vetzler](https://www.linkedin.com/in/matanvetzler/), [Ella Rabinovich](https://www.linkedin.com/in/ella-rabinovich-7b9a06/) for adding this dataset. |
false |
# HNC_Mini
Contains 306,084 samples collected from the following datasets.
- QQP_triplets
- HC3
- sentence-compression
|
false |
# Summary
This is a 🇹🇭 Thai-translated (GCP) dataset based on the 4.5K codegen instruction dataset [GPTeacher](https://github.com/teknium1/GPTeacher)
Supported Tasks:
- Training LLMs
- Synthetic Data Generation
- Data Augmentation
Languages: Thai
Version: 1.0
--- |
true | # Dataset Card for Dataset Name
## Dataset Description
- **Author:** Rubén Darío Jaramillo
- **Email:** rubend18@hotmail.com
- **WhatsApp:** +593 93 979 6676
### Dataset Summary
CIE10 is the 10th revision of the International Statistical Classification of Diseases and Related Health Problems (ICD), a medical classification list by the World Health Organization (WHO). It contains codes for diseases, signs and symptoms, abnormal findings, complaints, social circumstances, and external causes of injury or diseases. Work on ICD-10 began in 1983, became endorsed by the Forty-third World Health Assembly in 1990, and was first used by member states in 1994. It was replaced by ICD-11 on January 1, 2022.
While WHO manages and publishes the base version of the ICD, several member states have modified it to better suit their needs. In the base classification, the code set allows for more than 14,000 different codes and permits the tracking of many new diagnoses compared to the preceding ICD-9. Through the use of optional sub-classifications, ICD-10 allows for specificity regarding the cause, manifestation, location, severity, and type of injury or disease. The adapted versions may differ in a number of ways, and some national editions have expanded the code set even further; with some going so far as to add procedure codes. ICD-10-CM, for example, has over 70,000 codes.
The WHO provides detailed information regarding the ICD via its website – including an ICD-10 online browser and ICD training materials. The online training includes a support forum, a self-learning tool and user guide.
https://en.wikipedia.org/wiki/ICD-10 |
false |
# Revisiting Sentence Union Generation as a Testbed for Text Consolidation
[Eran Hirsch](https://scholar.google.com/citations?user=GPsTrDEAAAAJ)<sup>1</sup>,
[Valentina Pyatkin](https://valentinapy.github.io/)<sup>1</sup>,
Ruben Wolhandler<sup>1</sup>,
[Avi Caciularu](https://aviclu.github.io/)<sup>1</sup>,
Asi Shefer<sup>2</sup>,
[Ido Dagan](https://u.cs.biu.ac.il/~dagani/)<sup>1</sup>
<br>
<sup>1</sup>Bar-Ilan University, <sup>2</sup>One AI
This is the official dataset of the paper "Revisiting Sentence Union Generation as a Testbed for Text Consolidation".
* [Paper 📄](https://arxiv.org/abs/2305.15605) (Findings of ACL 2023)
* [Code 💻](https://github.com/eranhirs/sentence_union_generation)
## Abstract
Tasks involving text generation based on multiple input texts, such as multi-document summarization, long-form question answering and contemporary dialogue applications, challenge models for their ability to properly consolidate partly-overlapping multi-text information.
However, these tasks entangle the consolidation phase with the often subjective and ill-defined content selection requirement, impeding proper assessment of models' consolidation capabilities.
In this paper, we suggest revisiting the sentence union generation task as an effective well-defined testbed for assessing text consolidation capabilities, decoupling the consolidation challenge from subjective content selection.
To support research on this task, we present refined annotation methodology and tools for crowdsourcing sentence union, create the largest union dataset to date and provide an analysis of its rich coverage of various consolidation aspects.
We then propose a comprehensive evaluation protocol for union generation, including both human and automatic evaluation.
Finally, as baselines, we evaluate state-of-the-art language models on the task, along with a detailed analysis of their capacity to address multi-text consolidation challenges and their limitations. |
false | |
false |
# REALSumm: Re-evaluating EvALuation in Summarization
Dataset assembled from https://github.com/neulab/REALSumm with the conversion script:
```python
idx = [1017, 10586, 11343, 1521, 2736, 3789, 5025, 5272, 5576, 6564, 7174, 7770, 8334, 9325, 9781, 10231, 10595, 11351, 1573, 2748, 3906, 5075, 5334, 5626, 6714, 7397, 7823, 8565, 9393, 9825, 10325, 10680, 11355, 1890, 307, 4043, 5099, 5357, 5635, 6731, 7535, 7910, 8613, 9502, 10368, 10721, 1153, 19, 3152, 4303, 5231, 5420, 5912, 6774, 7547, 8001, 8815, 9555, 10537, 10824, 1173, 1944, 3172, 4315, 5243, 5476, 6048, 6784, 7584, 8054, 8997, 9590, 10542, 11049, 1273, 2065, 3583, 4637, 5244, 5524, 6094, 6976, 7626, 8306, 9086, 9605, 10563, 11264, 1492, 2292, 3621, 4725, 5257, 5558, 6329, 7058, 7670, 8312, 9221, 9709]
link = "https://github.com/neulab/REALSumm/raw/master/scores_dicts/abs.pkl"
x = requests.get(link)
data = pickle.loads(x.content)
with open("/home/manuel/Downloads/summeval/src.txt", "r") as f:
    src = f.readlines()
src_cleaned = [src[i] for i in idx]
del src
models = list(data[0]["system_summaries"].keys())
tot_df = pd.DataFrame()
ref_sums = [data[x]["ref_summ"] for x in range(100)]
for model in models:
    df = pd.DataFrame([data[x]["system_summaries"][model]["scores"] for x in range(100)])
    sums = [data[x]["system_summaries"][model]["system_summary"] for x in range(100)]
    df["model"] = model
    df["model_summary"] = sums
    df["ref_summary"] = ref_sums
    df["source"] = src_cleaned
    tot_df = pd.concat([tot_df, df])
tot_df = tot_df.reset_index()
tot_df = tot_df.rename(columns={"index": "doc_id"})
tot_df.index.name = "index"
```
## Dataset Structure
```
DatasetDict({
train: Dataset({
features: ['index', 'doc_id', 'rouge_1_f_score', 'rouge_2_recall', 'rouge_l_recall', 'rouge_2_precision', 'rouge_2_f_score', 'rouge_1_precision', 'rouge_1_recall', 'rouge_l_precision', 'rouge_l_f_score', 'js-2', 'mover_score', 'bert_recall_score', 'bert_precision_score', 'bert_f_score', 'litepyramid_recall', 'model', 'model_summary', 'ref_summary', 'source'],
num_rows: 1400
})
})
```
```
@inproceedings{Bhandari-2020-reevaluating,
title = "Re-evaluating Evaluation in Text Summarization",
author = "Bhandari, Manik and Narayan Gour, Pranav and Ashfaq, Atabak and Liu, Pengfei and Neubig, Graham ",
booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
year = "2020"
}
``` |
true | |
true |
# Dataset Card for CaWikiTC
## Dataset Description
- **Point of Contact:** [Irene Baucells de la Peña](irene.baucells@bsc.es)
### Dataset Summary
CaWikiTC (Catalan Wikipedia Text Classification) is a text classification dataset automatically created by scraping Catalan Wikipedia article summaries and their associated thematic category. It contains 21002 texts (19952 and 1050 in the train and dev partitions, respectively) classified under 67 exclusive categories.
For the dataset creation, we selected all the Catalan Wikipedia article summaries from a previously fixed variety of subcategories, most of which are professional disciplines and social sciences-related fields. The texts that were originally associated with more than one category were discarded to avoid class overlaps.
This dataset was created as part of the experiments from [reference]. Its original purpose was to serve as a task transfer source to train an entailment model, which was then used to perform a different text classification task.
### Supported Tasks and Leaderboards
Text classification, Language Model
### Languages
The dataset is in Catalan (`ca-CA`).
## Dataset Structure
### Data Instances
Two json files (train and development splits).
### Data Fields
Each example contains the following fields:
* text: Catalan Wikipedia article summary (string)
* label: topic
#### Example:
<pre>
[
{
'text': "Novum Organum és el títol de l'obra més important de Francis Bacon, publicada el 1620. Rep el seu nom perquè pretén ser una superació del tractat sobre lògica d'Aristòtil, anomenat Organon. Es basa a trobar la causa de tot fenomen per inducció, observant quan passa i quan no i extrapolant aleshores les condicions que fan que es doni. Aquest raonament va influir decisivament en la formació del mètode científic, especialment en la fase d'elaboració d'hipòtesis. També indica que el prejudici és l'enemic de la ciència, perquè impideix generar noves idees. Els prejudicis més comuns s'expliquen amb la metàfora de l'ídol o allò que és falsament adorat. Existeixen ídols de la tribu (comuns a tots els éssers humans per la seva naturalesa), de la caverna (procedents de l'educació), del fòrum (causats per un ús incorrecte del llenguatge) i del teatre (basats en idees anteriors errònies, notablement en filosofia).",
'label': 'Filosofia',
},
...
]
</pre>
#### Labels
* 'Administració', 'Aeronàutica', 'Agricultura', 'Antropologia', 'Arqueologia', 'Arquitectura', 'Art', 'Astronomia', 'Astronàutica', 'Biblioteconomia', 'Biotecnologia', 'Catàstrofes', 'Circ', 'Ciència militar', 'Ciència-ficció', 'Ciències ambientals', 'Ciències de la salut', 'Ciències polítiques', 'Conflictes', 'Cronometria', 'Cultura popular', 'Dansa', 'Dret', 'Ecologia', 'Enginyeria', 'Epidèmies', 'Esoterisme', 'Estris', 'Festivals', 'Filologia', 'Filosofia', 'Fiscalitat', 'Física', 'Geografia', 'Geologia', 'Gestió', 'Heràldica', 'Història', 'Humor', 'Indumentària', 'Informàtica', 'Jaciments paleontològics', 'Jocs', 'Lingüística', 'Llengües', 'Llocs ficticis', 'Matemàtiques', 'Metodologia', 'Mitologia', 'Multimèdia', 'Museologia', 'Nàutica', 'Objectes astronòmics', 'Pedagogia', 'Periodisme', 'Protestes', 'Pseudociència', 'Psicologia', 'Química', 'Robòtica', 'Ràdio', 'Seguretat laboral', 'Sociologia', 'Telecomunicacions', 'Televisió', 'Teologia', 'Ètica'
### Data Splits
Train and development splits were created in a stratified fashion, following a 95% and 5% proportion, respectively. The sizes of each split are the following:
* train.json: 19952 examples
* dev.json: 1050 examples
### Annotations
#### Annotation process
The crawled data contained the categories' annotations, which were then used to create this dataset with the mentioned criteria.
### Personal and Sensitive Information
No personal or sensitive information included.
## Considerations for Using the Data
### Social Impact of Dataset
We hope this dataset contributes to the development of language models in Catalan, a low-resource language.
### Discussion of Biases
[N/A]
### Other Known Limitations
[N/A]
## Additional Information
### Dataset Curators
Irene Baucells (irene.baucells@bsc.es)
This work was funded by the [Departament de la Vicepresidència i de Polítiques Digitals i Territori de la Generalitat de Catalunya](https://politiquesdigitals.gencat.cat/ca/inici/index.html#googtrans(ca|en)) within the framework of [Projecte AINA](https://politiquesdigitals.gencat.cat/ca/economia/catalonia-ai/aina).
### Licensing Information
This work is licensed under a <a rel="license" href="https://creativecommons.org/licenses/by-sa/4.0/">Attribution-ShareAlike 4.0 International</a>.
### Citation Information
|
true | |
false | # Dataset Card for "product_ads_c"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
false |
This dataset contains the vi subset (first 191 examples) and en-to-vi auto-translated subsets (the rest, 38346 examples) from [OASST1](https://huggingface.co/datasets/OpenAssistant/oasst1). All auto-translated examples are generated using [VietAI envit5-translation](https://huggingface.co/VietAI/envit5-translation).
The vi subsets have the same features as the original dataset. Meanwhile, the auto-translation subsets introduce two new features:
- `"text_chunks"` is a list that contains chunked text split from `"text"`, each chunk has no more than 300 tokens. The sent_tokenizer and word_tokenzier used are from spacy en_core_web_sm model.
- `"text_translation"` contains merged of all translated chunks. Due to the auto-translation model, all new-line symbols (`\n`) are removed.
The translation script can be found at `translate_en_to_vi.py` |
false |
<p align="center"><h1> Legal case retrieval with Korean Precedents (powered by https://law.go.kr/)</h1></p>
This dataset repository maintains files required for legal case retrieval using Korean Precedents acquired from https://law.go.kr/
For codes and more information, refer to **[GitHub page](https://github.com/jaeminSon/law.go.kr-cases/tree/main)**
|
false |
# zhwiki-mnbvc
Sub-project: crawls and processes the [Chinese Wikipedia](https://zh.wikipedia.org/wiki/Wikipedia:%E9%A6%96%E9%A1%B5) corpus
Data period: 2023-02 to 2023-05 (continuously updated)
Parent project: MNBVC (Massive Never-ending BT Vast Chinese corpus), an ultra-large-scale Chinese corpus collection: https://github.com/esbatmop/MNBVC
The cleaning pipeline of this project mainly follows: https://kexue.fm/archives/4176/comment-page-1
Data formatting uses the [deduplication tool](https://github.com/aplmikex/deduplication_mnbvc) developed by a team member.
Total rows (samples): 10,754,146
An example:
```json
{
"文件名": "cleaned/zhwiki-20230420/folder_0/723712.txt",
"是否待查文件": false,
"是否重复文件": false,
"文件大小": 558,
"simhash": 14363740497821204542,
"最长段落长度": 142,
"段落数": 6,
"去重段落数": 6,
"低质量段落数": 0,
"段落": [
{
"行号": 0,
"是否重复": false,
"是否跨文件重复": false,
"md5": "39a3b4c7a4785d88c7c7d774364ea17e",
"内容": "【龙州 (唐朝)】"
},
{
"行号": 1,
"是否重复": false,
"是否跨文件重复": false,
"md5": "856bdf443999603f349625a56a5e92d6",
"内容": "龙州,中国古代的州,今龙州县的前身。"
},
{
"行号": 2,
"是否重复": false,
"是否跨文件重复": false,
"md5": "45fd3b9dc612d6235b5653d1a5b40688",
"内容": "唐朝武德四年(621年)设置的州,治所在龙城县(今广西壮族自治区龙州县北),辖两县:龙城县、柳岭县。贞观七年(633年),柳岭县并入龙城县,撤销龙州,龙城县归南昆州管辖。元朝设万户府,移治今龙州。明朝洪武初年,复为龙州。清朝雍正三年(1725年)废为龙州县,今属广西壮族自治区崇左市。"
},
{
"行号": 4,
"是否重复": false,
"是否跨文件重复": false,
"md5": "8756367c3ee308f3875ed8e942a6e377",
"内容": "== 参考文献 =="
},
{
"行号": 5,
"是否重复": false,
"是否跨文件重复": false,
"md5": "6db73b5b7c22fb1bcf7829fbe585043f",
"内容": "* 《旧唐书·地理志》"
},
{
"行号": 6,
"是否重复": false,
"是否跨文件重复": false,
"md5": "38b370ac9f61b116d4f6c98873ffc4bd",
"内容": "* 《明史·地理志》"
}
],
"文件日期": "2023-04-20"
}
```
|
false | # SeCoDa [![CC BY-NC-SA 4.0][cc-by-nc-sa-shield]][cc-by-nc-sa]
Repository for the Sense Complexity Dataset (SeCoDa)
# Paper
For more information on the SeCoDa, see the [paper](http://www.lrec-conf.org/proceedings/lrec2020/pdf/2020.lrec-1.730.pdf).
Publications using this dataset must include a reference to the following publication:
<pre>
SeCoDa: Sense Complexity Dataset. David Strohmaier, Sian Gooding, Shiva Taslimipoor, Ekaterina Kochmar. Proceedings of the 12th Conference on Language Resources and Evaluation (LREC 2020), pages 5964–5969, Marseille, 11–16 May 2020
</pre>
The dataset is based on the earlier CWIG3G2 dataset, see the [paper](https://aclanthology.org/I17-2068.pdf) and [website](https://www.inf.uni-hamburg.de/en/inst/ab/lt/resources/data/complex-word-identification-dataset.html). The relevant citation is
<pre>
Seid Muhie Yimam, Sanja Štajner, Martin Riedl, and Chris Biemann (2017): CWIG3G2 - Complex Word Identification Task across Three Text Genres and Two User Groups. In Proceedings of The 8th International Joint Conference on Natural Language Processing (IJCNLP 2017). Taipei, Taiwan
</pre>
The complexity data can be found in the CWIG3G2 dataset and combined with the senses provided by SeCoDa.
# Repository Content
Main data are found in SeCoDa.tsv. The columns are structured as follows.
1. Token to be disambiguated.
2. Offset start for token in context
3. Offset end for token in context
4. Context (sentence in which token occurs)
5. Selected sense
6. Comments (also contains MWE information)
Example:
| target | offset_start | offset_end | context | sense | comments |
| ------- |:------------:| ----------:| ------------------:| ----------------:| --------:|
| abroad | 39 | 45 | As we emerge... | OTHER COUNTRY... | - |
| abroad | 39 | 45 | As we emerge... | OTHER COUNTRY... | - |
| abroad | 73 | 79 | #1-8 The speech... | OTHER COUNTRY... | - |
The senses are drawn from the [Cambridge Advanced Learner's Dictionary](https://dictionary.cambridge.org).
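A minimal sketch for loading the main file with pandas; that SeCoDa.tsv is tab-separated and carries a header row matching the columns above is an assumption, so adjust `header`/`names` if the raw file differs:
```python
import pandas as pd

# Load the main SeCoDa file (assumed tab-separated with a header row).
df = pd.read_csv("SeCoDa.tsv", sep="\t")
print(df.columns.tolist())
print(df.head())
```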
*UPDATE*: Two missing entries have been added and typos in comments have been corrected.
*UPDATE*: Added further information to readme.
This work is licensed under a [Creative Commons Attribution-NonCommercial-ShareAlike 4.0
International License][cc-by-nc-sa].
[![CC BY-NC-SA 4.0][cc-by-nc-sa-image]][cc-by-nc-sa]
[cc-by-nc-sa]: http://creativecommons.org/licenses/by-nc-sa/4.0/
[cc-by-nc-sa-image]: https://licensebuttons.net/l/by-nc-sa/4.0/88x31.png
[cc-by-nc-sa-shield]: https://img.shields.io/badge/License-CC%20BY--NC--SA%204.0-lightgrey.svg
|
false |
# Dataset Card for MedMNIST v2
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://medmnist.com/
- **Repository:** https://github.com/MedMNIST/MedMNIST
- **Paper:** [MedMNIST v2 -- A large-scale lightweight benchmark for 2D and 3D biomedical image classification](https://arxiv.org/abs/2110.14795)
- **Leaderboard:**
- **Point of Contact:** [Bingbing Ni](mailto:nibingbing@sjtu.edu.cn)
### Dataset Summary
We introduce MedMNIST v2, a large-scale MNIST-like collection of standardized biomedical images, including 12 datasets for 2D and 6 datasets for 3D. All images are pre-processed into 28 x 28 (2D) or 28 x 28 x 28 (3D) with the corresponding classification labels, so that no background knowledge is required for users. Covering primary data modalities in biomedical images, MedMNIST v2 is designed to perform classification on lightweight 2D and 3D images with various data scales (from 100 to 100,000) and diverse tasks (binary/multi-class, ordinal regression and multi-label). The resulting dataset, consisting of 708,069 2D images and 9,998 3D images in total, could support numerous research / educational purposes in biomedical image analysis, computer vision and machine learning. We benchmark several baseline methods on MedMNIST v2, including 2D / 3D neural networks and open-source / commercial AutoML tools.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
English (`en`).
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
The dataset is licensed under [Creative Commons Attribution 4.0 International](https://creativecommons.org/licenses/by/4.0/) (CC BY 4.0).
Each subset keeps the same license as that of the source dataset. Please also cite the corresponding paper of source data if you use any subset of MedMNIST.
### Citation Information
If you find this project useful, please cite both v1 and v2 papers:
```
@article{medmnistv2,
title={MedMNIST v2-A large-scale lightweight benchmark for 2D and 3D biomedical image classification},
author={Yang, Jiancheng and Shi, Rui and Wei, Donglai and Liu, Zequan and Zhao, Lin and Ke, Bilian and Pfister, Hanspeter and Ni, Bingbing},
journal={Scientific Data},
volume={10},
number={1},
pages={41},
year={2023},
publisher={Nature Publishing Group UK London}
}
@inproceedings{medmnistv1,
title={MedMNIST Classification Decathlon: A Lightweight AutoML Benchmark for Medical Image Analysis},
author={Yang, Jiancheng and Shi, Rui and Ni, Bingbing},
booktitle={IEEE 18th International Symposium on Biomedical Imaging (ISBI)},
pages={191--195},
year={2021}
}
```
Please also cite the corresponding paper(s) of source data if you use any subset of MedMNIST as per the description on the [project website](https://medmnist.com/).
### Contributions
Thanks to [@albertvillanova](https://huggingface.co/albertvillanova) for adding this dataset.
|
false | |
false |
# Contextual Semantic Labels (Small) Benchmark Dataset
Please see [https://github.com/docugami/DFM-benchmarks](https://github.com/docugami/DFM-benchmarks) for more details, eval code, and current scores for different models.
# Using Dataset
Please refer to standard huggingface documentation to use this dataset: [https://huggingface.co/docs/datasets/index](https://huggingface.co/docs/datasets/index)
The [explore.ipynb](./explore.ipynb) notebook has some reference code. |
false |
# Contextual Semantic Labels (Large) Benchmark Dataset
Please see [https://github.com/docugami/DFM-benchmarks](https://github.com/docugami/DFM-benchmarks) for more details, eval code, and current scores for different models.
# Using Dataset
Please refer to standard huggingface documentation to use this dataset: [https://huggingface.co/docs/datasets/index](https://huggingface.co/docs/datasets/index)
The [explore.ipynb](./explore.ipynb) notebook has some reference code. |
false |
# Low Quality Live Attacks
The dataset includes live-recorded Anti-Spoofing videos from around the world, captured via **low-quality** webcams with resolutions like QVGA, QQVGA and QCIF.
# Get the Dataset
This is just an example of the data. If you need access to the entire dataset, contact us via **[sales@trainingdata.pro](mailto:sales@trainingdata.pro)** or leave a request on [https://trainingdata.pro/data-market](https://trainingdata.pro/data-market?utm_source=huggingface)

# Webcam Resolution
The collection includes several low video resolutions, such as:
- QVGA (320 × 240),
- QQVGA (160 × 120),
- QCIF (176 × 144) and others.
# Metadata
Each attack instance is accompanied by the following details:
- Unique attack identifier
- Identifier of the user recording the attack
- User's age
- User's gender
- User's country of origin
- Attack resolution
Additionally, the model of the webcam is also specified.
Metadata is represented in the `file_info.csv`.
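A minimal sketch for inspecting that metadata; the card lists the fields but not their exact column names, so the snippet simply prints whatever header `file_info.csv` carries:
```python
import pandas as pd

# Inspect the per-attack metadata shipped with the dataset.
meta = pd.read_csv("file_info.csv")
print(meta.columns.tolist())
print(meta.head())
```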
## **[TrainingData](https://trainingdata.pro/data-market?utm_source=huggingface)** provides high-quality data annotation tailored to your needs
More datasets in TrainingData's Kaggle account: https://www.kaggle.com/trainingdatapro/datasets
TrainingData's GitHub: https://github.com/trainingdata-pro |
false | # High Definition Live Attacks
The dataset includes live-recorded Anti-Spoofing videos from around the world, captured via **high-quality** webcams with Full HD resolution and above.
# Get the Dataset
This is just an example of the data. If you need access to the entire dataset, contact us via **[sales@trainingdata.pro](mailto:sales@trainingdata.pro)** or leave a request on [https://trainingdata.pro/data-market](https://trainingdata.pro/data-market?utm_source=huggingface)
.png?generation=1684702390091084&alt=media)
# Webcam Resolution
The collection provides several video resolutions from Full HD (1080p) up to 4K (2160p), including intermediate resolutions like QHD (1440p).

# Metadata
Each attack instance is accompanied by the following details:
- Unique attack identifier
- Identifier of the user recording the attack
- User's age
- User's gender
- User's country of origin
- Attack resolution
Additionally, the model of the webcam is also specified.
Metadata is represented in the `file_info.csv`.
## **[TrainingData](https://trainingdata.pro/data-market?utm_source=huggingface)** provides high-quality data annotation tailored to your needs
More datasets in TrainingData's Kaggle account: **https://www.kaggle.com/trainingdatapro/datasets**
TrainingData's GitHub: **https://github.com/trainingdata-pro** |
true | https://github.com/yunx-z/MOVER
```
@inproceedings{zhang-wan-2022-mover,
title = "{MOVER}: Mask, Over-generate and Rank for Hyperbole Generation",
author = "Zhang, Yunxiang and
Wan, Xiaojun",
booktitle = "Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
month = jul,
year = "2022",
address = "Seattle, United States",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.naacl-main.440",
doi = "10.18653/v1/2022.naacl-main.440",
pages = "6018--6030",
abstract = "Despite being a common figure of speech, hyperbole is under-researched in Figurative Language Processing. In this paper, we tackle the challenging task of hyperbole generation to transfer a literal sentence into its hyperbolic paraphrase. To address the lack of available hyperbolic sentences, we construct HYPO-XL, the first large-scale English hyperbole corpus containing 17,862 hyperbolic sentences in a non-trivial way. Based on our corpus, we propose an unsupervised method for hyperbole generation that does not require parallel literal-hyperbole pairs. During training, we fine-tune BART to infill masked hyperbolic spans of sentences from HYPO-XL. During inference, we mask part of an input literal sentence and over-generate multiple possible hyperbolic versions. Then a BERT-based ranker selects the best candidate by hyperbolicity and paraphrase quality. Automatic and human evaluation results show that our model is effective at generating hyperbolic paraphrase sentences and outperforms several baseline systems.",
}
``` |
false | # Dataset Card for DUQA
## Table of Contents
- [Dataset Description](#dataset-description)
* [Abstract](#abstract)
* [Languages](#languages)
- [Dataset Structure](#dataset-structure)
* [Data Instances](#data-instances)
* [Data Fields](#data-fields)
- [Data Statistics](#data-statistics)
- [Dataset Creation](#dataset-creation)
* [Curation Rationale](#curation-rationale)
* [Source Data](#source-data)
* [Annotations](#annotations)
- [Considerations for Using the Data](#considerations-for-using-the-data)
* [Discussion of Social Impact and Biases](#discussion-of-social-impact-and-biases)
* [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
* [Dataset Curators](#dataset-curators)
* [Licensing Information](#licensing-information)
* [Citation Information](#citation-information)
## Dataset Description
### Abstract
DUQA is a dataset for single-step unit conversion questions. It comes in three sizes, "DUQA10k", "DUQA100k" and "DUQA1M", with 10,000, 100,000 and 1,000,000 entries respectively. Each size contains a mixture of basic and complex conversion questions, including simple conversion, multiple answer, max/min, argmax/argmin, and noisy/q-noisy questions. The complexity level varies based on the amount of information present in the sentence and the number of reasoning steps required to calculate a correct answer.
### Languages
The text in the dataset is in English.
## Dataset Structure
### Data Instances
A single instance in the dataset consists of a question related to a single-step unit conversion problem, along with its corresponding correct answer.
### Data Fields
The dataset contains fields for the question, the answer, and additional context about the question, along with multiple-choice answers.
## Data Statistics
The dataset comes in three sizes, with 10,000, 100,000 and 1,000,000 entries respectively.
## Dataset Creation
### Curation Rationale
The dataset is curated to help machine learning models understand and perform single-step unit conversions. This ability is essential for many real-world applications, including but not limited to physical sciences, engineering, and data analysis tasks.
### Source Data
The source data for the dataset is generated using a Python library provided with the dataset, which can create new datasets from a list of templates.
### Annotations
The dataset does not contain any annotations.
## Considerations for Using the Data
### Discussion of Social Impact and Biases
The dataset is neutral and does not contain any explicit biases or social implications as it deals primarily with mathematical conversion problems.
### Other Known Limitations
The complexity of the questions is limited to single-step unit conversions. It does not cover multi-step or more complex unit conversion problems.
## Additional Information
### Dataset Curators
The dataset was created by a team of researchers. More information might be needed to provide specific names or organizations.
### Licensing Information
The licensing information for this dataset is not provided. Please consult the dataset provider for more details.
### Citation Information
The citation information for this dataset is not provided. Please consult the dataset provider for more details.
|
true | Dataset card for the dataset used in:
## Towards a Robust Detection of Language Model-Generated Text: Is ChatGPT that easy to detect?
Paper: https://gitlab.inria.fr/wantoun/robust-chatgpt-detection/-/raw/main/towards_chatgpt_detection.pdf
Source Code: https://gitlab.inria.fr/wantoun/robust-chatgpt-detection
## Dataset Summary
#### Overview:
This dataset is made of two parts:
- First, an extension of the [Human ChatGPT Comparison Corpus (HC3) dataset](https://huggingface.co/datasets/Hello-SimpleAI/HC3) with French data automatically translated from the English source.
- Second, out-of-domain and adversarial French datasets have been gathered (human adversarial, BingGPT, and native French ChatGPT responses).
#### Details:
- We first format the data into three subsets: `sentence`, `question` and `full` following the original paper.
- We then extend the data by translating the English questions and answers to French.
- We provide native French ChatGPT responses to a sample of the translated questions.
- We added a subset with QA pairs from BingGPT
- We included an adversarial subset with human-written answers in the style of conversational LLMs like Bing/ChatGPT.
## Available Subsets
### Out-of-domain:
- `hc3_fr_qa_chatgpt`: Translated French questions and native French ChatGPT answers pairs from HC3. This is the `ChatGPT-Native` subset from the paper.
- Features: `id`, `question`, `answer`, `chatgpt_answer`, `label`, `source`
- Size:
- test: `113` examples, `25592` words
- `qa_fr_binggpt`: French questions and BingGPT answers pairs. This is the `BingGPT` subset from the paper.
- Features: `id`, `question`, `answer`, `label`, `deleted_clues`, `deleted_sources`, `remarks`
- Size:
- test: `106` examples, `26291` words
- `qa_fr_binglikehuman`: French questions and human written BingGPT-like answers pairs. This is the `Adversarial` subset from the paper.
- Features: `id`, `question`, `answer`, `label`, `source`
- Size:
- test: `61` examples, `17328` words
- `faq_fr_gouv`: French FAQ questions and answers pairs from domains ending with `.gouv`, from the MQA dataset (subset 'fr-faq-page'): https://huggingface.co/datasets/clips/mqa. This is the `FAQ-Gouv` subset from the paper.
- Features: `id`, `page_id`, `question_id`, `answer_id`, `bucket`, `domain`, `question`, `answer`, `label`
- Size:
- test: `235` examples, `22336` words
- `faq_fr_random`: French FAQ questions and answers pairs from random domains, from the MQA dataset (subset 'fr-faq-page'): https://huggingface.co/datasets/clips/mqa. This is the `FAQ-Rand` subset from the paper.
- Features: `id`, `page_id`, `question_id`, `answer_id`, `bucket`, `domain`, `question`, `answer`, `label`
- Size:
- test: `4454` examples, `271823` words
### In-domain:
- `hc3_en_qa`: English questions and answers pairs from HC3.
- Features: `id`, `question`, `answer`, `label`, `source`
- Size:
- train: `68335` examples, `12306363` words
- validation: `17114` examples, `3089634` words
- test: `710` examples, `117001` words
- `hc3_en_sentence`: English answers split into sentences from HC3.
- Features: `id`, `text`, `label`, `source`
- Size:
- train: `455320` examples, `9983784` words
- validation: `113830` examples, `2510290` words
- test: `4366` examples, `99965` words
- `hc3_en_full`: English questions and answers pairs concatenated from HC3.
- Features: `id`, `text`, `label`, `source`
- Size:
- train: `68335` examples, `9982863` words
- validation: `17114` examples, `2510058` words
- test: `710` examples, `99926` words
- `hc3_fr_qa`: Translated French questions and answers pairs from HC3.
- Features: `id`, `question`, `answer`, `label`, `source`
- Size:
- train: `68283` examples, `12660717` words
- validation: `17107` examples, `3179128` words
- test: `710` examples, `127193` words
- `hc3_fr_sentence`: Translated French answers split into sentences from HC3.
- Features: `id`, `text`, `label`, `source`
- Size:
- train: `464885` examples, `10189606` words
- validation: `116524` examples, `2563258` words
- test: `4366` examples, `108374` words
- `hc3_fr_full`: Translated French questions and answers pairs concatenated from HC3.
- Features: `id`, `text`, `label`, `source`
- Size:
- train: `68283` examples, `10188669` words
- validation: `17107` examples, `2563037` words
- test: `710` examples, `108352` words
## How to load
```python
from datasets import load_dataset
dataset = load_dataset("almanach/hc3_multi", "hc3_fr_qa")
```
## Dataset Copyright
If a source dataset used in this corpus has a specific license which is stricter than CC-BY-SA, our products follow the same.
If not, they follow the CC-BY-SA license.
| English Split | Source | Source License | Note |
|----------|-------------|--------|-------------|
| reddit_eli5 | [ELI5](https://github.com/facebookresearch/ELI5) | BSD License | |
| open_qa | [WikiQA](https://www.microsoft.com/en-us/download/details.aspx?id=52419) | [PWC Custom](https://paperswithcode.com/datasets/license) | |
| wiki_csai | Wikipedia | CC-BY-SA | [Wiki FAQ](https://en.wikipedia.org/wiki/Wikipedia:FAQ/Copyright) |
| medicine | [Medical Dialog](https://github.com/UCSD-AI4H/Medical-Dialogue-System) | Unknown| [Asking](https://github.com/UCSD-AI4H/Medical-Dialogue-System/issues/10)|
| finance | [FiQA](https://paperswithcode.com/dataset/fiqa-1) | Unknown | Asking by 📧 |
| FAQ | [MQA]( https://huggingface.co/datasets/clips/mqa) | CC0 1.0| |
| ChatGPT/BingGPT | | Unknown | This is ChatGPT/BingGPT generated data. |
| Human | | CC-BY-SA | |
## Citation
```bibtex
@proceedings{towards-a-robust-2023-antoun,
title = "Towards a Robust Detection of Language Model-Generated Text: Is ChatGPT that easy to detect?",
editor = "Antoun, Wissam and
Mouilleron, Virginie and
Sagot, Benoit and
Seddah, Djam{\'e}",
month = "6",
year = "2023",
address = "Paris, France",
publisher = "ATALA",
url = "https://gitlab.inria.fr/wantoun/robust-chatgpt-detection/-/raw/main/towards_chatgpt_detection.pdf",
}
```
```bibtex
@article{guo-etal-2023-hc3,
title = "How Close is ChatGPT to Human Experts? Comparison Corpus, Evaluation, and Detection",
author = "Guo, Biyang and
Zhang, Xin and
Wang, Ziyuan and
Jiang, Minqi and
Nie, Jinran and
Ding, Yuxuan and
Yue, Jianwei and
Wu, Yupeng",
journal={arXiv preprint arxiv:2301.07597}
year = "2023",
url ="https://arxiv.org/abs/2301.07597"
}
``` |
false | # Dataset Card for Algorithmic Reasoning (seed)
**Note: This dataset is WIP and most questions' answer sections are empty or incomplete! See also "Other Known Limitations" section**
**Warning: If you somehow do use this dataset, remember to NOT do any eval after training on the questions in this dataset!**
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:** lemontea.Tom@gmail.com or https://github.com/lemonteaa
### Dataset Summary
Dataset to help LLMs learn how to reason about code, especially on algorithmic tasks, by seeing human demonstration.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
- Question title
- Question
- Thought - Internal thought process that reason step by step/in an organized manner
- Answer presented to user (proof or code) - with explanation if necessary
### Data Splits
No split as of now - all are in the training section.
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
Questions are those I personally remember in my career, selected based on:
- interesting
- involving CS, math, or similar knowledge
- target specific known weaknesses of existing open-source/source-available LLMs (e.g. index notation handling)
- practical/likely to appear in production work settings
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
Manually created by me entirely, written at a level of detail exceeding what usually appears on the internet (bootcamp/FAANG interview prep/leetcode-style training websites, etc.) to help AI/LLMs access knowledge that may be too obvious for humans to write down.
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
None as they are general, objective knowledge.
## Considerations for Using the Data
### Social Impact of Dataset
Although it is doubtful this dataset can actually work, in the event it does, it may enhance the coding capability of LLMs (which is intended); this may in turn create downstream effects simply due to LLM capability enhancement.
### Discussion of Biases
As questions are selected partly based on my taste, areas in CS that I am not interested in may be underrepresented.
### Other Known Limitations
- While I try to cover various mainstream programming languages, each problem targets only one specific language.
- It is currently in a free-style markdown file. A script could convert it to a more structured format.
- Questions are asked in a conversational tone instead of leetcode style with strict I/O specifications, hence they may be more suitable for human evaluation than automated evaluation (e.g. automatically extracting and running code output in a sandbox against test cases).
- As the dataset is completely manually created by a single human, the dataset size is extremely small.
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
MIT
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] |
false |
Sample conversations with vicuna-13b-fp16-v1.0, focusing on coding ability tests. (Note that vicuna-13b has a v1.1 release that improved on some coding tasks.)
Probably not going to make more like this, as creating the JSON file by hand (copy-pasting from my note app) is exceedingly slow.
|
true |
This dataset is curated by the [GIZ Data Service Center](https://www.giz.de/expertise/html/63018.html) for **Multi-Label Sector classification** of a given text. The source dataset comes from [Climatewatchdata](https://www.climatewatchdata.org/data-explorer/historical-emissions?historical-emissions-data-sources=climate-watch&historical-emissions-gases=all-ghg&historical-emissions-regions=All%20Selected&historical-emissions-sectors=total-including-lucf%2Ctotal-including-lucf&page=1)
and Tracs (GIZ).
Specifications
- Dataset size: ~10k
- Average text length: 50 words
- Language: English
Sectors Included:
<pre><b>Agriculture,Buildings, Coastal Zone, Disaster Risk Management (DRM), Economy-wide, Energy, Environment, Health, Industries, LULUCF/Forestry, Social Development, Transport, Urban, Waste, Water</b> </pre>
Due to the imbalanced representation of sectors (True category), additional columns are added to group sectors by frequency:
- set0: [Agriculture,Energy,LULUCF/Forestry,Water,Environment] `count > 2000`
- set1:[Social Development,Transport,Urban,Economy-wide,Disaster Risk Management (DRM)] `2000 >count > 1000`
- set2:[Coastal Zone,Buildings,Health,Waste,Industries] `count < 1000` |
true | |
false | |
false |
# 2D Printed Masks Attacks
The dataset includes 3 different types of files of real people: original selfies, original videos, and videos of 2D printed mask attacks. The dataset addresses anti-spoofing tasks and is useful for business and safety systems.
# Get the Dataset
To order a dataset tailored to your needs, please write to us at the address: Andrew, [sales@trainingdata.pro](mailto:sales@trainingdata.pro) or leave a request on **https://trainingdata.pro/data-market?utm_source=huggingface**
# Content
### The dataset consists of three folders:
- **live_selfie** contains the original selfies of people
- **live_video** includes original videos of people
- **2d_masks** contains videos of attacks with 2D printed masks made using the original images from the "live_selfie" folder
### File with the extension .csv
includes the following information for each media file:
- **live_selfie**: the link to access the original selfie
- **live_video**: the link to access the original video
- **phone_model**: model of the phone with which the selfie and video were shot
- **2d_masks**: the link to access the video of the attack with the 2D printed mask
## **[TrainingData](https://trainingdata.pro/data-market?utm_source=huggingface)** provides high-quality data annotation tailored to your needs
More datasets in TrainingData's Kaggle account: **https://www.kaggle.com/trainingdatapro/datasets**
TrainingData's GitHub: **https://github.com/trainingdata-pro** |
false |
# Dataset Card for ConvMix
## Dataset Description
- **Homepage:** [ConvMix Website](https://convinse.mpi-inf.mpg.de/)
- **Paper:** [Conversational Question Answering on Heterogeneous Sources](https://dl.acm.org/doi/10.1145/3477495.3531815)
- **Leaderboard:** [ConvMix Leaderboard](https://convinse.mpi-inf.mpg.de/)
- **Point of Contact:** [Philipp Christmann](mailto:pchristm@mpi-inf.mpg.de)
### Dataset Summary
We construct and release the first benchmark, ConvMix, for conversational question answering (ConvQA) over heterogeneous sources, comprising 3000 real-user conversations with 16000 questions, along with entity annotations, completed question utterances, and question paraphrases.
The dataset naturally requires information from multiple sources for answering the individual questions in the conversations.
### Dataset Creation
The ConvMix benchmark was created by real humans. We tried to ensure that the collected data is as natural as possible. Master crowdworkers on Amazon Mechanical Turk (AMT) selected an entity of interest in a specific domain, and then started issuing conversational questions on this entity, potentially drifting to other topics of interest throughout the course of the conversation. By letting users choose the entities themselves, we aimed to ensure that they are more interested in the topics the conversations are based on. After writing a question, users were asked to find the answer in either Wikidata, Wikipedia text, a Wikipedia table or a Wikipedia infobox, whichever they found more natural for the specific question at hand. Since Wikidata requires some basic understanding of knowledge bases, we provided video guidelines that illustrated how Wikidata can be used for detecting answers, following an example conversation. For each conversational question, which might be incomplete, the crowdworker provides a completed question that is intent-explicit and can be answered without the conversational context. These questions constitute the CompMix dataset. We also provide the answer source in which the user found the answer, as well as the question entities. |
false |
# ControlNet training
This dataset is a subset of the **fill_50k** dataset, created just to test the fine-tuning logic.
> *TODO*:
- [ ] add text data
|
false |
# MALLS NL-FOL Pairs 34K
## Dataset details
MALLS (large language **M**odel gener**A**ted natural-**L**anguage-to-first-order-**L**ogic pair**S**)
consists of 34K pairs of real-world natural language (NL) statements and the corresponding first-order logic (FOL) rule annotations.
All pairs are generated by prompting GPT-4 and processed to ensure the validity of the FOL rules.
Note that we did not conduct a rigorous alignment check on the pairs, so an FOL rule may not accurately reflect the meaning of its NL statement.
Accordingly, we recommend treating the dataset as "silver" labels for training and using another dataset with "gold" labels for evaluation.
## Dataset Structure
The file `MALLS-v0.json` consists of the 34K pairs of the MALLS dataset; we also provide `folio_parsed.json`, which consists of 2K pairs collected
and processed from the FOLIO dataset. Each entry in the file is a dictionary object of the following format:
```
{
'NL': <the NL statement>,
'FOL': <the FOL rule>
}
```
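A minimal reading sketch in Python, assuming the file is a top-level JSON array of these objects:
```python
import json

with open("MALLS-v0.json", "r", encoding="utf-8") as f:
    pairs = json.load(f)  # assumed: a list of {'NL': ..., 'FOL': ...} dicts

print(len(pairs))       # ~34K entries
print(pairs[0]["NL"])   # the natural-language statement
print(pairs[0]["FOL"])  # the corresponding FOL rule
```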
**License:**
Attribution-NonCommercial 4.0 International.
Since the data were collected from GPT-4, the dataset also abides by OpenAI's terms of use: https://openai.com/policies/terms-of-use
## Using the Dataset
We used MALLS to fine-tune a LLaMA-7B model for NL-FOL translation, namely LogicLLaMA, which achieves GPT-4-level performance.
**Project Page**
https://github.com/gblackout/LogicLLaMA
## Intended use
**Primary intended uses:**
MALLS is intended to be used for research.
## Citation
```
@article{yang2023harnessing,
title={Harnessing the Power of Large Language Models for Natural Language to First-Order Logic Translation},
author={Yuan Yang and Siheng Xiong and Ali Payani and Ehsan Shareghi and Faramarz Fekri},
journal={arXiv preprint arXiv:2305.15541},
year={2023}
}
``` |
false | ## Introduction ##
Character LoRA models for characters from the galgame studio rukuru (work: 纸上魔法使). Currently, only models for 游行寺夜子 and 四条妃 are available.



 |
false | |
false | # AutoTrain Dataset for project: image-attribute-prediction
## Dataset Description
This dataset has been automatically processed by AutoTrain for project image-attribute-prediction.
### Languages
The BCP-47 code for the dataset's language is unk.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"image": "<261x300 RGB PIL image>",
"target": 0
},
{
"image": "<300x300 RGB PIL image>",
"target": 0
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"image": "Image(decode=True, id=None)",
"target": "ClassLabel(names=['Paisley_Floral_A-Line_Dress', 'Paisley_Maxi_Cami_Dress'], id=None)"
}
```
### Dataset Splits
This dataset is split into a train and validation split. The split sizes are as follow:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 59 |
| valid | 16 |
|
true | |
true |
Preprocessed from https://huggingface.co/datasets/lorenzoscottb/PLANE-ood/
```python
import pandas as pd
from datasets import Dataset, DatasetDict

# The JSON stores each split as a column of per-feature lists; unpack to rows
df = pd.read_json('https://huggingface.co/datasets/lorenzoscottb/PLANE-ood/resolve/main/PLANE_trntst-OoV_inftype-all.json')
f = lambda df: pd.DataFrame(list(zip(*[df[c] for c in df.index])), columns=df.index)

ds = DatasetDict()
for split in ['train', 'test']:
    dfs = pd.concat([f(df[c]) for c in df.columns if split in c.lower()]).reset_index(drop=True)
    dfs['label'] = dfs['label'].map(lambda x: {1: 'entailment', 0: 'not-entailment'}[x])
    ds[split] = Dataset.from_pandas(dfs, preserve_index=False)
ds.push_to_hub('tasksource/PLANE-ood')
```
# PLANE Out-of-Distribution Sets
PLANE (phrase-level adjective-noun entailment) is a benchmark to test models on fine-grained compositional inference.
The current dataset contains five sampled splits, used in the supervised experiments of [Bertolini et al., 22](https://aclanthology.org/2022.coling-1.359/).
### Features
Each entry has 6 features: `seq, label, Adj_Class, Adj, Nn, Hy`
- `seq`: the test sequence
- `label`: the ground truth (1: entailment, 0: not-entailment)
- `Adj_Class`: the class of the sequence's adjective (I: intersective, S: subsective, O: intensional)
- `Adj`: the adjective of the sequence
- `Nn`: the noun
- `Hy`: the noun's hypernym
Each sample in `seq` can take one of three forms (or inference types, in the paper):
- An *Adjective-Noun* is a *Noun* (e.g. A red car is a car)
- An *Adjective-Noun* is a *Hypernym(Noun)* (e.g. A red car is a vehicle)
- An *Adjective-Noun* is an *Adjective-Hypernym(Noun)* (e.g. A red car is a red vehicle)
Please note that, as specified in the paper, the ground truth is automatically assigned based on the linguistic rule that governs the interaction between each adjective class and inference type – see the paper for more detail.
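To make the three forms concrete, here is a sketch that builds each inference type from an (`Adj`, `Nn`, `Hy`) triple; the exact surface form in the released files may differ slightly:
```python
def build_sequences(adj: str, noun: str, hypernym: str) -> list:
    """Return the three PLANE inference-type sequences for one triple."""
    return [
        f"A {adj} {noun} is a {noun}",            # Adj-Noun is a Noun
        f"A {adj} {noun} is a {hypernym}",        # Adj-Noun is a Hypernym(Noun)
        f"A {adj} {noun} is a {adj} {hypernym}",  # Adj-Noun is an Adj-Hypernym(Noun)
    ]

print(build_sequences("red", "car", "vehicle"))
# ['A red car is a car', 'A red car is a vehicle', 'A red car is a red vehicle']
```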
### Cite
If you use PLANE for your work, please cite the main COLING 2022 paper.
```
@inproceedings{bertolini-etal-2022-testing,
title = "Testing Large Language Models on Compositionality and Inference with Phrase-Level Adjective-Noun Entailment",
author = "Bertolini, Lorenzo and
Weeds, Julie and
Weir, David",
booktitle = "Proceedings of the 29th International Conference on Computational Linguistics",
month = oct,
year = "2022",
address = "Gyeongju, Republic of Korea",
publisher = "International Committee on Computational Linguistics",
url = "https://aclanthology.org/2022.coling-1.359",
pages = "4084--4100",
}
``` |
false | |
false |
Completely uncurated collection of IRC logs from the Ubuntu IRC channels |
true |
# Dataset Card for Dataset Name
## Dataset Description
- **Repository:** [https://github.com/SJTU-LIT/SynCSE/](https://github.com/SJTU-LIT/SynCSE/)
- **Paper:** [Contrastive Learning of Sentence Embeddings from Scratch](https://arxiv.org/abs/2305.15077)
### Dataset Summary
SynCSE-scratch-NLI is a Natural Language Inference (NLI) dataset generated by GPT-3.5-Turbo. You can use it to learn better sentence representations with contrastive learning. More details can be found in the [paper](https://arxiv.org/abs/2305.15077) and [code](https://github.com/SJTU-LIT/SynCSE/).
### Supported Tasks and Leaderboards
Natural Language Inference
Contrastive Learning of Sentence Embeddings
### Languages
English
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
### Data Splits
We only provide the training set. Specifically, you can use this dataset to train your model with contrastive learning and evaluate it on a variety of downstream sentence embedding tasks.
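A minimal loading sketch (the repository id and column names here are assumptions and should be verified against the actual repository):
```python
from datasets import load_dataset

# Hypothetical repo id; check the project page above for the exact path
ds = load_dataset("SJTU-LIT/SynCSE-scratch-NLI", split="train")

# SimCSE-style NLI data typically comes as (anchor, entailment, hard-negative)
# triplets; inspect the real column names before wiring up a training loop
print(ds.column_names)
print(ds[0])
```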
## Dataset Creation
The dataset was generated by GPT-3.5-Turbo.
### Curation Rationale
[More Information Needed]
## Citation
```
@article{zhang2023contrastive,
title={Contrastive Learning of Sentence Embeddings from Scratch},
author={Zhang, Junlei and Lan, Zhenzhong and He, Junxian},
journal={arXiv preprint arXiv:2305.15077},
year={2023}
}
``` |
true | # Physical Interaction: Question Answering (PIQA)
- Source: https://huggingface.co/datasets/piqa
- Num examples:
- 16,113 (train)
- 1,838 (validation)
- Language: English
```python
from datasets import load_dataset
load_dataset("vietgpt/piqa_en")
```
- Format for GPT-3
```python
def preprocess_gpt3(sample):
goal = sample['goal']
sol1 = sample['sol1']
sol2 = sample['sol2']
label = sample['label']
if label == 0:
output = f'\n<|correct|> {sol1}\n<|incorrect|> {sol2}'
elif label == 1:
output = f'\n<|correct|> {sol2}\n<|incorrect|> {sol1}'
return {'text': f'<|startoftext|><|context|> {goal} <|answer|> {output} <|endoftext|>'}
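# Usage sketch (not part of the original card): apply over a split with .map
# from datasets import load_dataset
# dataset = load_dataset("vietgpt/piqa_en", split="train").map(preprocess_gpt3)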
"""
<|startoftext|><|context|> When boiling butter, when it's ready, you can <|answer|>
<|correct|> Pour it into a jar
<|incorrect|> Pour it onto a plate <|endoftext|>
"""
``` |
true | # HellaSwag
- Source: https://huggingface.co/datasets/hellaswag
- Num examples:
- 39,905 (train)
- 10,042 (validation)
- Language: English
```python
from datasets import load_dataset
load_dataset("vietgpt/hellaswag_en")
```
- Format for GPT-3
```python
def preprocess_gpt3(sample):
ctx = sample['ctx']
endings = sample['endings']
label = sample['label']
if label == '0':
output = f'\n<|correct|> {endings[0]}\n<|incorrect|> {endings[1]}\n<|incorrect|> {endings[2]}\n<|incorrect|> {endings[3]}'
elif label == '1':
output = f'\n<|correct|> {endings[1]}\n<|incorrect|> {endings[0]}\n<|incorrect|> {endings[2]}\n<|incorrect|> {endings[3]}'
elif label == '2':
output = f'\n<|correct|> {endings[2]}\n<|incorrect|> {endings[0]}\n<|incorrect|> {endings[1]}\n<|incorrect|> {endings[3]}'
else:
output = f'\n<|correct|> {endings[3]}\n<|incorrect|> {endings[0]}\n<|incorrect|> {endings[1]}\n<|incorrect|> {endings[2]}'
return {'text': f'<|startoftext|><|context|> {ctx} <|answer|> {output} <|endoftext|>'}
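# Usage sketch (not part of the original card): apply over a split with .map
# from datasets import load_dataset
# dataset = load_dataset("vietgpt/hellaswag_en", split="train").map(preprocess_gpt3)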
"""
<|startoftext|><|context|> Then, the man writes over the snow covering the window of a car, and a woman wearing winter clothes smiles. then <|answer|>
<|correct|> , the man continues removing the snow on his car.
<|incorrect|> , the man adds wax to the windshield and cuts it.
<|incorrect|> , a person board a ski lift, while two men supporting the head of the person wearing winter clothes snow as the we girls sled.
<|incorrect|> , the man puts on a christmas coat, knitted with netting. <|endoftext|>
"""
``` |
false | <h1 style="text-align: center;"><em><strong><a href="https://expertscans.com/get-keto-drive-acv-gummies"><span style="color: #ff0000;">👉👉Best Price "OFFER" Shop Now</span></a></strong></em></h1>
<a href="https://expertscans.com/get-keto-drive-acv-gummies"><img class="aligncenter wp-image-464" src="https://nutrasciencelabs.store/wp-content/uploads/2023/06/1-1.png" alt="" width="721" height="470" /></a>
<strong><span style="color: #ff9900;">Product Name</span> <span style="color: #ff9900;">-</span> <span style="color: #00ccff;"><a style="color: #00ccff;" href="https://expertscans.com/get-keto-drive-acv-gummies">Keto Drive ACV Gummies</a></span></strong>
<strong><span style="color: #008080;">Benefits -</span> <span style="color: #ff99cc;"><span style="color: #ff99cc;">“Eliminates Unwanted Fat, Increases metabolic Rate, Regulating Appetite"</span></span></strong>
<strong><span style="color: #3366ff;">Category -</span><span style="color: #226b2d;"> Weight Loss</span></strong>
<strong><span style="color: #333399;">Availability –</span> <span style="color: #ff0000;"><a style="color: #ff0000;" href="https://expertscans.com/get-keto-drive-acv-gummies">Online</a></span></strong>
<strong><span style="color: #33cccc;">Rating: -</span> <span style="color: #e36488;">5.0/5.0</span> ⭐⭐⭐⭐⭐</strong>
<h2 style="text-align: center;"><a href="https://expertscans.com/get-keto-drive-acv-gummies"><span style="color: #ff0000;"><strong>✅Click Here To Visit – “OFFICIAL WEBSITE”✅</strong></span></a></h2>
<h2 style="text-align: center;"><a href="https://expertscans.com/get-keto-drive-acv-gummies"><span style="color: #ff0000;"><strong>✅Click Here To Visit – “OFFICIAL WEBSITE”✅</strong></span></a></h2>
<h2 style="text-align: center;"><a href="https://expertscans.com/get-keto-drive-acv-gummies"><span style="color: #ff0000;"><strong>✅Click Here To Visit – “OFFICIAL WEBSITE”✅</strong></span></a></h2>
<a href="https://groups.google.com/g/keto-drive-acv-gummies-diet"><strong>Keto Drive ACV Gummies</strong></a> is an excellent approach to increase your energy and decrease weight. These delightful little treats contain a variety of vitamins, folate, iodine, and apple cider vinegar, which assist your body enter a state known as ketosis, in which fat is burned for energy rather than carbohydrates. After a few weeks of consistent use, this natural weight loss supplement may help you attain ketosis. <a href="https://ketodriveacvgummy.blogspot.com"><strong>Keto Drive ACV Gummies</strong> </a>also deliver a sustained energy boost to help you stay focused throughout the day.
<h3 style="text-align: center;"><a href="https://expertscans.com/get-keto-drive-acv-gummies"><em><strong><span style="color: #ff0000;"><span style="color: #008080;">>>> </span>Click Here To Visit Keto Drive ACV Gummies – “OFFICIAL WEBSITE" BUY NOW</span></strong></em></a></h3>
<a href="https://expertscans.com/get-keto-drive-acv-gummies"><img class="aligncenter wp-image-469" src="https://nutrasciencelabs.store/wp-content/uploads/2023/06/diet.png" alt="" width="721" height="523" /></a>
This article contains all of the pertinent information on the ostensibly popular <a href="https://sites.google.com/view/keto-drive-acv-gummy-diet/home"><strong>Keto Drive ACV Gummies</strong></a>.
<h2 style="text-align: center;"><strong>How Do Keto Drive ACV Gummies Function?</strong></h2>
One of the primary causes of weight gain is an excess of fat in your body. This fat can lead to a variety of health issues, including high cholesterol and high blood pressure. It can also cause heart disease and is difficult to eliminate without our help.
ACV Keto Drive Gummies are intended to help you burn fat rather than carbohydrates. They function by inducing a state known as ketosis. When glucose levels are low, a natural metabolic process called ketosis occurs. Fats are transformed into fatty acids, which are then turned into acetyl coenzyme A and ketone molecules during this process. Ketone bodies are subsequently converted into fuel for the body's cells and organs. Ketosis, then, is a metabolic state in which your body burns fat for energy rather than carbohydrates. This causes the fat to be broken down and converted into energy. According to the manufacturer, consuming <a href="https://maptia.com/ketodriveacvgummies"><strong>Keto Drive ACV Gummies</strong></a> results in feeling more energised owing to the ketosis state, without having to drastically alter your lifestyle.
<h2 style="text-align: center;"><strong>The Science of ACV</strong></h2>
Compounds containing vinegar have been utilised for thousands of years for their alleged therapeutic effects. It was used to boost vigour, for "detoxification," as an antibiotic, and even as a scurvy remedy. However, new research indicates that acetic acid can really prevent fat deposits from accumulating, lower hunger, burn fat, and significantly increase metabolism.
<a href="https://expertscans.com/get-keto-drive-acv-gummies"><img class="aligncenter wp-image-465" src="https://nutrasciencelabs.store/wp-content/uploads/2023/06/2-1.png" alt="" width="721" height="364" /></a>
<h3 style="text-align: center;"><a href="https://expertscans.com/get-keto-drive-acv-gummies"><em><strong><span style="color: #ff0000;"><span style="color: #008080;">>>> </span>Click Here To Visit Keto Drive ACV Gummies – “OFFICIAL WEBSITE" BUY NOW</span></strong></em></a></h3>
The most generally cited human study is a 2009 trial of 175 people who consumed apple cider vinegar on a daily basis. After the study, those who had drunk the apple cider vinegar on a daily basis observed significant weight loss, lower triglyceride levels, improved skin look, and an overall sense of health. Those who did not take the apple cider vinegar saw no difference.
ACV has the same pectin content as apples (1.5 grammes). Because pectin makes you feel fuller and more content, having ACV in your diet might decrease your appetite, preventing you from consuming excessive amounts of food. So, why does Apple Cider Vinegar promote weight loss more than apples? According to research conducted in the United Kingdom, its high quantities of acetic acid keep blood sugar levels evenly controlled, limiting the typical appetite for sugar, sweets, and other junk food.
<h2 style="text-align: center;"><strong>What Advantages Does the Keto Drive ACV Gummies Manufacturer Promise?</strong></h2>
We researched <a href="https://www.scoop.it/u/keto-drive-acv-gummy-diet"><strong>Keto Drive ACV Gummies</strong></a> extensively and have summarised their benefits for you here:
<h3 style="text-align: center;"><a href="https://expertscans.com/get-keto-drive-acv-gummies"><span style="color: #ff0000;"><strong>(Special offer Today) Visit Official Website to Buy Keto Drive ACV Gummies</strong></span></a></h3>
<a href="https://expertscans.com/get-keto-drive-acv-gummies"><img class="aligncenter wp-image-466" src="https://nutrasciencelabs.store/wp-content/uploads/2023/06/3.png" alt="" width="721" height="534" /></a>
<ul>
<li><strong>Mood and energy enhancement</strong></li>
<li><strong>Improve mobility and concentration through increasing concentration.</strong></li>
<li><strong>Cravings and appetite suppression</strong></li>
<li><strong>Up to 5 pounds lost in the first week, and up to 20 pounds lost in the first month of use</strong></li>
<li><strong>Include health-promoting components like vitamins, iodine, and apple cider vinegar.</strong></li>
<li><strong>Available in a variety of flavours</strong></li>
<li><strong>There are no known adverse effects from using 100% natural substances.</strong></li>
<li><strong>Helps with ketosis</strong></li>
</ul>
<h2 style="text-align: center;"><strong>The Components Used In Keto Drive ACV Gummies</strong></h2>
For optimal efficiency, <a href="https://keto-drive-acv-gummy-diet.jimdosite.com/"><strong>Keto Drive ACV Gummies</strong></a>® are prepared with just the purest natural components. GMP Certified and FDA Approved Laboratory in the United States.
<ul>
<li><strong>Apple Cider Vinegar:</strong> Each ACV Keto Drive Gummy includes the recommended dose of 100% Pure Advanced Apple Cider Vinegar to help you burn fat quickly and enhance your overall health.</li>
<li><strong>Pomegranate Powder:</strong> In addition to being one of the most effective antioxidants, Pomegranate Powder has been linked to improved heart health, weight control, and a lower risk of a variety of other health issues.</li>
<li><strong>Beet Root Powder: </strong>Benefits of Beet Root Powder are extremely expansive ranging from heart health, endurance, brain health, blood pressure, inflammation, digestive health and much, much more.</li>
</ul>
<h2 style="text-align: center;"><strong>Keto Drive ACV Gummies: Are They Worth Purchasing?</strong></h2>
Perhaps you've been struggling with your weight for a long time and have tried numerous diets. The benefit of Keto Drive ACV Gummies is that it speeds up the ketosis process, allowing them to lose weight faster. Numerous studies have been conducted to demonstrate the effectiveness of a ketogenic diet in weight loss. According to study and evaluations, dieting with <a href="https://ketodriveacvgummy.contently.com/"><strong>Keto Drive ACV Gummies</strong></a> is successful in many circumstances. Users remark that it assisted them in achieving their objectives and that they are pleased with the results.
<a href="https://expertscans.com/get-keto-drive-acv-gummies"><img class="aligncenter wp-image-468" src="https://nutrasciencelabs.store/wp-content/uploads/2023/06/5.png" alt="" width="718" height="455" /></a>
<h3 style="text-align: center;"><a href="https://expertscans.com/get-keto-drive-acv-gummies"><span style="color: #ff0000;">Don’t wait.</span> <span style="color: #008080;">Get Keto Drive ACV Gummies Today!</span></a></h3>
The components in <a href="https://medium.com/@ketodriveacvgummies"><strong>Keto Drive ACV Gummies</strong></a> are natural, and the maker claims that ingesting them offers several health benefits aside from weight loss, including as improved well-being, concentration, and performance.
<h2 style="text-align: center;"><strong>Keto Drive ACV Gummies: Potential Side Effects</strong></h2>
So yet, no negative effects have been reported. Nonetheless, <a href="https://issuu.com/ketodriveacvgummies/docs/keto_drive_acv_gummies_c198375f36e163"><strong>Keto Drive ACV Gummies</strong></a>, like other nutritious items, can be harmful to your health. Among them are diarrhoea, headaches, drowsiness, and stomach discomfort. Although many people do not, certain populations may have severe reactions, such as rashes or breathing difficulties. Because these are significant adverse reactions, you should discontinue use of <a href="https://www.pinterest.com/pin/955326139687437092"><strong>Keto Drive ACV Gummies</strong></a> and seek medical attention.
<h2 style="text-align: center;"><strong>Conclusion</strong></h2>
Many people who are overweight or obese for a variety of reasons may wish to live a healthier lifestyle. <a href="https://www.pinterest.com/ketodriveacvgummies/"><strong>Keto Drive ACV Gummies</strong></a> are a sweet treat that can help with medical conditions, particularly fat and overweight. It contains only organic compounds that have been clinically shown to help your body enter ketosis and lose weight. It has had no negative consequences. Place your purchase for these novel candy-like snacks as soon as possible.
<h3 style="text-align: center;"><a href="https://expertscans.com/get-keto-drive-acv-gummies"><em><span style="color: #ff0000;"><strong><u><span style="color: #99cc00;">➧➧</span></u></strong></span></em><em><span style="color: #ff0000;"><strong><u> Click Here To Order Keto Drive ACV Gummies From The Official Website & Get The Lowest Price Online</u></strong></span></em></a></h3>
<a href="https://expertscans.com/get-keto-drive-acv-gummies"><img class="aligncenter wp-image-467" src="https://nutrasciencelabs.store/wp-content/uploads/2023/06/4.png" alt="" width="722" height="479" /></a>
|
false | **Datasets URL**: [https://drive.google.com/drive/folders/13r-l_OEUt63A8K-ol6jQiaKNuGdseZ7j?usp=sharing](https://drive.google.com/drive/folders/13r-l_OEUt63A8K-ol6jQiaKNuGdseZ7j?usp=sharing)
**Datasets Paper**: Chen Y, Tang Y, Hao H, et al. AMFF-YOLOX: Towards an Attention Mechanism and Multiple Feature Fusion Based on YOLOX for Industrial Defect Detection[J]. *Electronics*, 2023, 12(7): 1662.
Dataset Original Repository: [MCnet](https://github.com/zdfcvsn/MCnet)
Dataset Original Paper: Zhang D, Song K, Xu J, et al. MCnet: Multiple context information segmentation network of no-service rail surface defects[J]. *IEEE Transactions on Instrumentation and Measurement*, 2020, 70: 1-9.
If you want to cite this dataset, use:
```
@Article{electronics12071662,
author = {Chen, Yu and Tang, Yongwei and Hao, Huijuan and Zhou, Jun and Yuan, Huimiao and Zhang, Yu and Zhao, Yuanyuan},
title = {AMFF-YOLOX: Towards an Attention Mechanism and Multiple Feature Fusion Based on YOLOX for Industrial Defect Detection},
journal = {Electronics},
volume = {12},
year = {2023},
number = {7},
article-number = {1662},
url = {https://www.mdpi.com/2079-9292/12/7/1662},
issn = {2079-9292},
doi = {10.3390/electronics12071662}
}
```
and
```
@Article{9285332,
author = {Zhang, Defu and Song, Kechen and Xu, Jing and He, Yu and Niu, Menghui and Yan, Yunhui},
journal = {IEEE Transactions on Instrumentation and Measurement},
title = {MCnet: Multiple Context Information Segmentation Network of No-Service Rail Surface Defects},
year = {2021},
volume = {70},
number = {},
pages = {1-9},
doi = {10.1109/TIM.2020.3040890}}
``` |
false | All of the data together is around 61GB. It's the last hidden states of 131,072 samples from refinedweb padded/truncated to 512 tokens on the left, fed through [google/flan-t5-base](https://hf.co/google/flan-t5-base).
Structure:
```
{
"encoding": List, shaped (512, 768) aka (tokens, d_model),
"text": String, the original text that was encoded,
"attention_mask": List, binary mask to pass to your model with encoding to not attend to pad tokens
}
``` |
false | All of the data together is around 81.3GB. It's the last hidden states of 131,072 samples from refinedweb padded/truncated to 512 tokens on the left, fed through [google/flan-t5-large](https://hf.co/google/flan-t5-large).
Structure:
```
{
"encoding": List, shaped (512, 1024) aka (tokens, d_model),
"text": String, the original text that was encoded,
"attention_mask": List, binary mask to pass to your model with encoding to not attend to pad tokens
}
``` |
true |
# Dataset Card for Dataset Name
## Dataset Description
- **Homepage:** https://github.com/kaistAI/CoT-Collection
- **Repository:** https://github.com/kaistAI/CoT-Collection
- **Paper:** https://arxiv.org/abs/2305.14045
- **Point of Contact:** sejune@lklab.io
### Dataset Summary
The CoT Collection is a dataset of chain-of-thought-augmented training examples (1,837,928 in total; see the split table below), built to improve zero-shot and few-shot learning of language models via chain-of-thought fine-tuning, as described in the linked paper.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
English
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
| name | train |
|-------------------|------:|
|CoT-Collection|1837928|
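A minimal loading sketch (the repo id below is an assumption based on the GitHub organization linked above; verify it before use):
```python
from datasets import load_dataset

# Hypothetical repo id; see the repository link above for the exact path
ds = load_dataset("kaist-ai/CoT-Collection", split="train")
print(len(ds))  # expected 1,837,928 examples per the split table above
```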
## Additional Information
### Citation Information
```
@article{kim2023cot,
title={The CoT Collection: Improving Zero-shot and Few-shot Learning of Language Models via Chain-of-Thought Fine-Tuning},
author={Kim, Seungone and Joo, Se June and Kim, Doyoung and Jang, Joel and Ye, Seonghyeon and Shin, Jamin and Seo, Minjoon},
journal={arXiv preprint arXiv:2305.14045},
year={2023}
}
``` |
true | https://github.com/asaparov/prontoqa/
```
@article{saparov2022language,
title={Language models are greedy reasoners: A systematic formal analysis of chain-of-thought},
author={Saparov, Abulhair and He, He},
journal={arXiv preprint arXiv:2210.01240},
year={2022}
}
``` |
false | # Dataset Card for Timbre and Range Database
## Dataset Description
- **Homepage:** <https://ccmusic-database.github.io>
- **Repository:** <https://huggingface.co/datasets/ccmusic-database/timbre_score>
- **Paper:** <https://doi.org/10.5281/zenodo.5676893>
- **Leaderboard:** <https://ccmusic-database.github.io/team.html>
- **Point of Contact:** N/A
### Dataset Summary
The timbre database contains a cappella singing audio from 9 singers, as well as segmented single-note audio, totaling 775 clips (.wav format).
### Supported Tasks and Leaderboards
Audio classification
### Languages
Chinese, English
## Dataset Structure
### Data Instances
.wav and .txt files
### Data Fields
```
song1-32
```
### Data Splits
Train, Valid, Test
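As a minimal sketch, one clip can be inspected with librosa (the local path here is hypothetical, not part of the original card):
```python
import librosa

# Hypothetical path to one of the 775 .wav clips after downloading the dataset
y, sr = librosa.load("timbre_score/train/song1/note_001.wav", sr=None)

# A mel spectrogram is a common input feature for audio classification
mel = librosa.feature.melspectrogram(y=y, sr=sr)
print(y.shape, sr, mel.shape)
```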
## Dataset Creation
### Curation Rationale
Promoting the development of the music AI industry
### Source Data
#### Initial Data Collection and Normalization
Zijin Li, Zhaorui Liu, Monan Zhou
#### Who are the source language producers?
Composers of the songs in the dataset
### Annotations
#### Annotation process
Students from CCMUSIC collected a cappella singing audio from 9 singers, as well as segmented single-note audio, totaling 775 clips
#### Who are the annotators?
Students from CCMUSIC
### Personal and Sensitive Information
Due to copyright issues with the original music, only a cappella singing audio is provided in the dataset
## Considerations for Using the Data
### Social Impact of Dataset
Promoting the development of AI in the music industry
### Discussion of Biases
Most are Chinese songs
### Other Known Limitations
Samples are not balanced enough
## Additional Information
### Dataset Curators
Zijin Li
### Licensing Information
```
MIT License
Copyright (c) 2023 CCMUSIC
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
```
### Citation Information
```
@dataset{zhaorui_liu_2021_5676893,
author = {Zhaorui Liu and Monan Zhou and Shenyang Xu and Zijin Li},
title = {{Music Data Sharing Platform for Computational Musicology Research (CCMUSIC DATASET)}},
month = nov,
year = 2021,
publisher = {Zenodo},
version = {1.1},
doi = {10.5281/zenodo.5676893},
url = {https://doi.org/10.5281/zenodo.5676893}
}
```
### Contributions
Provides a dataset for music timbre and range |