id stringlengths 2 115 | lastModified stringlengths 24 24 | tags list | author stringlengths 2 42 ⌀ | description stringlengths 0 68.7k ⌀ | citation stringlengths 0 10.7k ⌀ | cardData null | likes int64 0 3.55k | downloads int64 0 10.1M | card stringlengths 0 1.01M |
|---|---|---|---|---|---|---|---|---|---|
HiTZ/alpaca_mt | 2023-04-07T15:15:55.000Z | [
"task_categories:text-generation",
"task_ids:dialogue-modeling",
"annotations_creators:no-annotation",
"language_creators:machine-generated",
"multilinguality:multilingual",
"multilinguality:translation",
"size_categories:10K<n<100K",
"source_datasets:tatsu-lab/alpaca",
"language:en",
"language:pt... | HiTZ | Alpaca is a dataset of 52,000 instructions and demonstrations generated by OpenAI's text-davinci-003 engine. This instruction data can be used to conduct instruction-tuning for language models and make language models follow instructions better. This dataset also includes machine-translated data for 6 Iberian languages: Portuguese, Spanish, Catalan, Basque, Galician and Asturian. | @misc{alpaca,
author = {Rohan Taori and Ishaan Gulrajani and Tianyi Zhang and Yann Dubois and Xuechen Li and Carlos Guestrin and Percy Liang and Tatsunori B. Hashimoto },
title = {Stanford Alpaca: An Instruction-following LLaMA model},
year = {2023},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {https://github.com/tatsu-lab/stanford_alpaca},
} | null | 7 | 11 | ---
annotations_creators:
- no-annotation
language:
- en
- pt
- es
- ca
- eu
- gl
- ast
language_creators:
- machine-generated
license: cc-by-nc-4.0
multilinguality:
- multilingual
- translation
pretty_name: Alpaca MT
size_categories:
- 10K<n<100K
source_datasets:
- tatsu-lab/alpaca
tags:
- instruction-finetuning
task_categories:
- text-generation
task_ids:
- dialogue-modeling
dataset_info:
- config_name: en
features:
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
- name: prompt
dtype: string
splits:
- name: train
num_bytes: 32088854
num_examples: 51942
download_size: 22764890
dataset_size: 32088854
- config_name: pt
features:
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
- name: prompt
dtype: string
splits:
- name: train
num_bytes: 33600380
num_examples: 51942
download_size: 23513483
dataset_size: 33600380
- config_name: es
features:
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
- name: prompt
dtype: string
splits:
- name: train
num_bytes: 35893136
num_examples: 51942
download_size: 24483751
dataset_size: 35893136
- config_name: ca
features:
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
- name: prompt
dtype: string
splits:
- name: train
num_bytes: 33938638
num_examples: 51942
download_size: 23096222
dataset_size: 33938638
- config_name: eu
features:
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
- name: prompt
dtype: string
splits:
- name: train
num_bytes: 29977672
num_examples: 51942
download_size: 20469814
dataset_size: 29977672
- config_name: gl
features:
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
- name: prompt
dtype: string
splits:
- name: train
num_bytes: 32736710
num_examples: 51942
download_size: 22356802
dataset_size: 32736710
- config_name: at
features:
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
- name: prompt
dtype: string
splits:
- name: train
num_bytes: 31487842
num_examples: 51942
download_size: 20688305
dataset_size: 31487842
---
# Dataset Card for Alpaca MT
## Dataset Description
- **Homepage:** https://crfm.stanford.edu/2023/03/13/alpaca.html
- **Repository:** https://github.com/juletx/alpaca-lora-mt
- **Paper:**
- **Leaderboard:**
- **Point of Contact:** Rohan Taori
### Dataset Summary
Alpaca is a dataset of 52,000 instructions and demonstrations generated by OpenAI's `text-davinci-003` engine. This instruction data can be used to conduct instruction-tuning for language models and make language models follow instructions better. This dataset also includes machine-translated data for 6 Iberian languages: Portuguese, Spanish, Catalan, Basque, Galician and Asturian. Translation was done with the NLLB-200 3.3B model.
The authors built on the data generation pipeline from [Self-Instruct framework](https://github.com/yizhongw/self-instruct) and made the following modifications:
- The `text-davinci-003` engine was used to generate the instruction data instead of `davinci`.
- A [new prompt](https://github.com/tatsu-lab/stanford_alpaca/blob/main/prompt.txt) was written that explicitly gave the requirement of instruction generation to `text-davinci-003`.
- Much more aggressive batch decoding was used, i.e., generating 20 instructions at once, which significantly reduced the cost of data generation.
- The data generation pipeline was simplified by discarding the difference between classification and non-classification instructions.
- Only a single instance was generated for each instruction, instead of 2 to 3 instances as in Self-Instruct.
This produced an instruction-following dataset with 52K examples obtained at a much lower cost (less than $500).
In a preliminary study, the authors also found that the 52K generated data to be much more diverse than the data released by [Self-Instruct](https://github.com/yizhongw/self-instruct/blob/main/data/seed_tasks.jsonl).
### Supported Tasks and Leaderboards
The Alpaca dataset is designed for instruction-tuning pretrained language models.
### Languages
The original data in Alpaca is in English (BCP-47 en). We also provide machine-translated data for 6 Iberian languages: Portuguese (BCP-47 pt), Spanish (BCP-47 es), Catalan (BCP-47 ca), Basque (BCP-47 eu), Galician (BCP-47 gl) and Asturian (BCP-47 ast; exposed here under the config name `at`).
## Dataset Structure
### Data Instances
An example of "train" looks as follows:
```json
{
"instruction": "Create a classification task by clustering the given list of items.",
"input": "Apples, oranges, bananas, strawberries, pineapples",
"output": "Class 1: Apples, Oranges\nClass 2: Bananas, Strawberries\nClass 3: Pineapples",
"text": "Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.\n\n### Instruction:\nCreate a classification task by clustering the given list of items.\n\n### Input:\nApples, oranges, bananas, strawberries, pineapples\n\n### Response:\nClass 1: Apples, Oranges\nClass 2: Bananas, Strawberries\nClass 3: Pineapples",
}
```
### Data Fields
The data fields are as follows:
* `instruction`: describes the task the model should perform. Each of the 52K instructions is unique.
* `input`: optional context or input for the task. For example, when the instruction is "Summarize the following article", the input is the article. Around 40% of the examples have an input.
* `output`: the answer to the instruction as generated by `text-davinci-003`.
* `text`: the `instruction`, `input` and `output` formatted with the [prompt template](https://github.com/tatsu-lab/stanford_alpaca#data-release) used by the authors for fine-tuning their models.
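The `text` field follows the Alpaca prompt template. A minimal helper that reproduces it, sketched from the example instance in this card rather than taken from the authors' released code (the no-input variant is an assumption based on the template linked above):

```python
def build_prompt(instruction: str, input_text: str = "") -> str:
    """Format one example with the Alpaca prompt template shown in the `text` field."""
    if input_text:
        return (
            "Below is an instruction that describes a task, paired with an input "
            "that provides further context. Write a response that appropriately "
            "completes the request.\n\n"
            f"### Instruction:\n{instruction}\n\n"
            f"### Input:\n{input_text}\n\n"
            "### Response:\n"
        )
    # Roughly 60% of examples have no input; the template drops the Input section.
    return (
        "Below is an instruction that describes a task. Write a response that "
        "appropriately completes the request.\n\n"
        f"### Instruction:\n{instruction}\n\n"
        "### Response:\n"
    )
```

The model's `output` is then appended after the `### Response:` marker during fine-tuning.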
### Data Splits
| | train |
|---------------|------:|
| en | 52002 |
| pt | 52002 |
| es | 52002 |
| ca | 52002 |
| eu | 52002 |
| gl | 52002 |
| at | 52002 |
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
Excerpt from the [blog post](https://crfm.stanford.edu/2023/03/13/alpaca.html) accompanying the release of this dataset:
> We believe that releasing the above assets will enable the academic community to perform controlled scientific studies on instruction-following language models, resulting in better science and ultimately new techniques to address the existing deficiencies with these models. At the same time, any release carries some risk. First, we recognize that releasing our training recipe reveals the feasibility of certain capabilities. On one hand, this enables more people (including bad actors) to create models that could cause harm (either intentionally or not). On the other hand, this awareness might incentivize swift defensive action, especially from the academic community, now empowered by the means to perform deeper safety research on such models. Overall, we believe that the benefits for the research community outweigh the risks of this particular release. Given that we are releasing the training recipe, we believe that releasing the data, model weights, and training code incur minimal further risk, given the simplicity of the recipe. At the same time, releasing these assets has enormous benefits for reproducible science, so that the academic community can use standard datasets, models, and code to perform controlled comparisons and to explore extensions. Deploying an interactive demo for Alpaca also poses potential risks, such as more widely disseminating harmful content and lowering the barrier for spam, fraud, or disinformation. We have put into place two risk mitigation strategies. First, we have implemented a content filter using OpenAI’s content moderation API, which filters out harmful content as defined by OpenAI’s usage policies. Second, we watermark all the model outputs using the method described in Kirchenbauer et al. 2023, so that others can detect (with some probability) whether an output comes from Alpaca 7B. 
Finally, we have strict terms and conditions for using the demo; it is restricted to non-commercial uses and to uses that follow LLaMA’s license agreement. We understand that these mitigation measures can be circumvented once we release the model weights or if users train their own instruction-following models. However, by installing these mitigations, we hope to advance the best practices and ultimately develop community norms for the responsible deployment of foundation models.
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
The `alpaca` data is generated by a language model (`text-davinci-003`) and inevitably contains some errors or biases. We encourage users to use this data with caution and propose new methods to filter or improve the imperfections.
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
The dataset is available under the [Creative Commons Attribution-NonCommercial 4.0 International (CC BY-NC 4.0)](https://creativecommons.org/licenses/by-nc/4.0/legalcode) license.
### Citation Information
```bibtex
@misc{alpaca,
author = {Rohan Taori and Ishaan Gulrajani and Tianyi Zhang and Yann Dubois and Xuechen Li and Carlos Guestrin and Percy Liang and Tatsunori B. Hashimoto },
title = {Stanford Alpaca: An Instruction-following LLaMA model},
year = {2023},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/tatsu-lab/stanford_alpaca}},
}
```
### Contributions
[More Information Needed] |
cambridgeltl/vsr_random | 2023-03-22T17:28:37.000Z | [
"task_categories:text-classification",
"task_categories:question-answering",
"size_categories:10K<n<100K",
"language:en",
"license:cc-by-4.0",
"multimodality",
"vision-and-language",
"arxiv:2205.00363",
"region:us"
] | cambridgeltl | null | null | null | 1 | 11 | ---
license: cc-by-4.0
task_categories:
- text-classification
- question-answering
language:
- en
tags:
- multimodality
- vision-and-language
pretty_name: VSR (random split)
size_categories:
- 10K<n<100K
---
# VSR: Visual Spatial Reasoning
This is the **random set** of **VSR**: *Visual Spatial Reasoning* (TACL 2023) [[paper]](https://arxiv.org/abs/2205.00363).
### Usage
```python
from datasets import load_dataset
data_files = {"train": "train.jsonl", "dev": "dev.jsonl", "test": "test.jsonl"}
dataset = load_dataset("cambridgeltl/vsr_random", data_files=data_files)
```
Note that the image files still need to be downloaded separately. See [`data/`](https://github.com/cambridgeltl/visual-spatial-reasoning/tree/master/data) for details.
See our [GitHub repo](https://github.com/cambridgeltl/visual-spatial-reasoning) for more details.
### Citation
If you find VSR useful, please cite:
```bibtex
@article{Liu2022VisualSR,
title={Visual Spatial Reasoning},
author={Fangyu Liu and Guy Edward Toh Emerson and Nigel Collier},
journal={Transactions of the Association for Computational Linguistics},
year={2023},
}
``` |
s-nlp/en_paradetox_content | 2023-09-08T08:38:03.000Z | [
"task_categories:text-classification",
"language:en",
"license:openrail++",
"region:us"
] | s-nlp | null | null | null | 0 | 11 | ---
license: openrail++
task_categories:
- text-classification
language:
- en
---
# ParaDetox: Detoxification with Parallel Data (English). Content Task Results
This repository contains the **Content Task** annotations from the [English ParaDetox dataset](https://huggingface.co/datasets/s-nlp/paradetox) collection pipeline.
The original paper, ["ParaDetox: Detoxification with Parallel Data"](https://aclanthology.org/2022.acl-long.469/), was presented at the ACL 2022 main conference.
## ParaDetox Collection Pipeline
The ParaDetox dataset collection was done via the [Yandex.Toloka](https://toloka.yandex.com/) crowdsourcing platform, in three steps:
* *Task 1:* **Generation of Paraphrases**: The first crowdsourcing task asks users to eliminate toxicity in a given sentence while keeping the content.
* *Task 2:* **Content Preservation Check**: We show users the generated paraphrases along with their original variants and ask them to indicate if they have close meanings.
* *Task 3:* **Toxicity Check**: Finally, we check if the workers succeeded in removing toxicity.
Specifically, this repo contains the results of **Task 2: Content Preservation Check**. Only samples with an annotation confidence of at least 90% are included. In each pair, one text is toxic and the other is intended to be its non-toxic paraphrase.
In total, the dataset contains 32,317 pairs. A minority of them (4,562 pairs) are negative examples.
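As a quick sanity check on these counts (plain arithmetic, independent of any API):

```python
total_pairs = 32_317
negative_pairs = 4_562
negative_share = negative_pairs / total_pairs  # about 0.14, i.e. ~14% negative examples
```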
## Citation
```
@inproceedings{logacheva-etal-2022-paradetox,
title = "{P}ara{D}etox: Detoxification with Parallel Data",
author = "Logacheva, Varvara and
Dementieva, Daryna and
Ustyantsev, Sergey and
Moskovskiy, Daniil and
Dale, David and
Krotova, Irina and
Semenov, Nikita and
Panchenko, Alexander",
booktitle = "Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = may,
year = "2022",
address = "Dublin, Ireland",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.acl-long.469",
pages = "6804--6818",
abstract = "We present a novel pipeline for the collection of parallel data for the detoxification task. We collect non-toxic paraphrases for over 10,000 English toxic sentences. We also show that this pipeline can be used to distill a large existing corpus of paraphrases to get toxic-neutral sentence pairs. We release two parallel corpora which can be used for the training of detoxification models. To the best of our knowledge, these are the first parallel datasets for this task.We describe our pipeline in detail to make it fast to set up for a new language or domain, thus contributing to faster and easier development of new parallel resources.We train several detoxification models on the collected data and compare them with several baselines and state-of-the-art unsupervised approaches. We conduct both automatic and manual evaluations. All models trained on parallel data outperform the state-of-the-art unsupervised models by a large margin. This suggests that our novel datasets can boost the performance of detoxification systems.",
}
```
## Contacts
For any questions, please contact: Daryna Dementieva (dardem96@gmail.com) |
Francesco/bacteria-ptywi | 2023-03-30T09:18:56.000Z | [
"task_categories:object-detection",
"annotations_creators:crowdsourced",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:en",
"license:cc",
"rf100",
"region:us"
] | Francesco | null | null | null | 0 | 11 | ---
dataset_info:
features:
- name: image_id
dtype: int64
- name: image
dtype: image
- name: width
dtype: int32
- name: height
dtype: int32
- name: objects
sequence:
- name: id
dtype: int64
- name: area
dtype: int64
- name: bbox
sequence: float32
length: 4
- name: category
dtype:
class_label:
names:
'0': bacteria
'1': Str_pne
annotations_creators:
- crowdsourced
language_creators:
- found
language:
- en
license:
- cc
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- object-detection
task_ids: []
pretty_name: bacteria-ptywi
tags:
- rf100
---
# Dataset Card for bacteria-ptywi
**The original COCO dataset is stored at `dataset.tar.gz`.**
## Dataset Description
- **Homepage:** https://universe.roboflow.com/object-detection/bacteria-ptywi
- **Point of Contact:** francesco.zuppichini@gmail.com
### Dataset Summary
bacteria-ptywi
### Supported Tasks and Leaderboards
- `object-detection`: The dataset can be used to train a model for Object Detection.
### Languages
English
## Dataset Structure
### Data Instances
A data point comprises an image and its object annotations.
```
{
'image_id': 15,
'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=640x640 at 0x2373B065C18>,
'width': 964043,
'height': 640,
'objects': {
'id': [114, 115, 116, 117],
'area': [3796, 1596, 152768, 81002],
'bbox': [
[302.0, 109.0, 73.0, 52.0],
[810.0, 100.0, 57.0, 28.0],
[160.0, 31.0, 248.0, 616.0],
[741.0, 68.0, 202.0, 401.0]
],
'category': [4, 4, 0, 0]
}
}
```
### Data Fields
- `image_id`: the image's unique id
- `image`: `PIL.Image.Image` object containing the image. Note that when accessing the image column: `dataset[0]["image"]` the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. Thus it is important to first query the sample index before the `"image"` column, *i.e.* `dataset[0]["image"]` should **always** be preferred over `dataset["image"][0]`
- `width`: the image width
- `height`: the image height
- `objects`: a dictionary containing bounding box metadata for the objects present on the image
- `id`: the annotation id
- `area`: the area of the bounding box
- `bbox`: the object's bounding box (in the [coco](https://albumentations.ai/docs/getting_started/bounding_boxes_augmentation/#coco) format)
- `category`: the object's category.
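The COCO `bbox` convention is `[x_min, y_min, width, height]` in absolute pixels. A minimal sketch (not part of this dataset's loading script) of converting a box to corner coordinates `[x_min, y_min, x_max, y_max]`, which many detection libraries expect:

```python
def coco_to_corners(bbox):
    """Convert a COCO-format box [x, y, w, h] to [x_min, y_min, x_max, y_max]."""
    x, y, w, h = bbox
    return [x, y, x + w, y + h]

# First box from the example instance above:
corners = coco_to_corners([302.0, 109.0, 73.0, 52.0])
```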
#### Who are the annotators?
Annotators are Roboflow users
## Additional Information
### Licensing Information
See original homepage https://universe.roboflow.com/object-detection/bacteria-ptywi
### Citation Information
```
@misc{ bacteria-ptywi,
title = { bacteria ptywi Dataset },
type = { Open Source Dataset },
author = { Roboflow 100 },
howpublished = { \url{ https://universe.roboflow.com/object-detection/bacteria-ptywi } },
url = { https://universe.roboflow.com/object-detection/bacteria-ptywi },
journal = { Roboflow Universe },
publisher = { Roboflow },
year = { 2022 },
month = { nov },
note = { visited on 2023-03-29 },
}
```
### Contributions
Thanks to [@mariosasko](https://github.com/mariosasko) for adding this dataset. |
Francesco/road-signs-6ih4y | 2023-03-30T09:19:50.000Z | [
"task_categories:object-detection",
"annotations_creators:crowdsourced",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:en",
"license:cc",
"rf100",
"region:us"
] | Francesco | null | null | null | 3 | 11 | ---
dataset_info:
features:
- name: image_id
dtype: int64
- name: image
dtype: image
- name: width
dtype: int32
- name: height
dtype: int32
- name: objects
sequence:
- name: id
dtype: int64
- name: area
dtype: int64
- name: bbox
sequence: float32
length: 4
- name: category
dtype:
class_label:
names:
'0': road-signs
'1': bus_stop
'2': do_not_enter
'3': do_not_stop
'4': do_not_turn_l
'5': do_not_turn_r
'6': do_not_u_turn
'7': enter_left_lane
'8': green_light
'9': left_right_lane
'10': no_parking
'11': parking
'12': ped_crossing
'13': ped_zebra_cross
'14': railway_crossing
'15': red_light
'16': stop
'17': t_intersection_l
'18': traffic_light
'19': u_turn
'20': warning
'21': yellow_light
annotations_creators:
- crowdsourced
language_creators:
- found
language:
- en
license:
- cc
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- object-detection
task_ids: []
pretty_name: road-signs-6ih4y
tags:
- rf100
---
# Dataset Card for road-signs-6ih4y
**The original COCO dataset is stored at `dataset.tar.gz`.**
## Dataset Description
- **Homepage:** https://universe.roboflow.com/object-detection/road-signs-6ih4y
- **Point of Contact:** francesco.zuppichini@gmail.com
### Dataset Summary
road-signs-6ih4y
### Supported Tasks and Leaderboards
- `object-detection`: The dataset can be used to train a model for Object Detection.
### Languages
English
## Dataset Structure
### Data Instances
A data point comprises an image and its object annotations.
```
{
'image_id': 15,
'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=640x640 at 0x2373B065C18>,
'width': 964043,
'height': 640,
'objects': {
'id': [114, 115, 116, 117],
'area': [3796, 1596, 152768, 81002],
'bbox': [
[302.0, 109.0, 73.0, 52.0],
[810.0, 100.0, 57.0, 28.0],
[160.0, 31.0, 248.0, 616.0],
[741.0, 68.0, 202.0, 401.0]
],
'category': [4, 4, 0, 0]
}
}
```
### Data Fields
- `image_id`: the image's unique id
- `image`: `PIL.Image.Image` object containing the image. Note that when accessing the image column: `dataset[0]["image"]` the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. Thus it is important to first query the sample index before the `"image"` column, *i.e.* `dataset[0]["image"]` should **always** be preferred over `dataset["image"][0]`
- `width`: the image width
- `height`: the image height
- `objects`: a dictionary containing bounding box metadata for the objects present on the image
- `id`: the annotation id
- `area`: the area of the bounding box
- `bbox`: the object's bounding box (in the [coco](https://albumentations.ai/docs/getting_started/bounding_boxes_augmentation/#coco) format)
- `category`: the object's category.
#### Who are the annotators?
Annotators are Roboflow users
## Additional Information
### Licensing Information
See original homepage https://universe.roboflow.com/object-detection/road-signs-6ih4y
### Citation Information
```
@misc{ road-signs-6ih4y,
title = { road signs 6ih4y Dataset },
type = { Open Source Dataset },
author = { Roboflow 100 },
howpublished = { \url{ https://universe.roboflow.com/object-detection/road-signs-6ih4y } },
url = { https://universe.roboflow.com/object-detection/road-signs-6ih4y },
journal = { Roboflow Universe },
publisher = { Roboflow },
year = { 2022 },
month = { nov },
note = { visited on 2023-03-29 },
}
```
### Contributions
Thanks to [@mariosasko](https://github.com/mariosasko) for adding this dataset. |
mstz/annealing | 2023-04-15T11:07:36.000Z | [
"task_categories:tabular-classification",
"size_categories:100<n<1K",
"language:en",
"annealing",
"tabular_classification",
"multiclass_classification",
"region:us"
] | mstz | null | null | null | 0 | 11 | ---
language:
- en
tags:
- annealing
- tabular_classification
- multiclass_classification
pretty_name: Annealing
size_categories:
- 100<n<1K
task_categories: # Full list at https://github.com/huggingface/hub-docs/blob/main/js/src/lib/interfaces/Types.ts
- tabular-classification
configs:
- annealing
---
# DO NOT USE
> Still working on it.
# Annealing
The [Annealing dataset](https://archive-beta.ics.uci.edu/dataset/3/annealing) from the [UCI ML repository](https://archive.ics.uci.edu/ml/datasets).
# Configurations and tasks
| **Configuration** | **Task** | Description |
|-------------------|---------------------------|---------------------------------------------------------------|
| annealing | Multiclass classification | |
# Usage
```python
from datasets import load_dataset
dataset = load_dataset("mstz/annealing")["train"]
``` |
mstz/muskV2 | 2023-04-07T14:32:09.000Z | [
"task_categories:tabular-classification",
"size_categories:100<n<1K",
"language:en",
"musk",
"tabular_classification",
"binary_classification",
"multiclass_classification",
"region:us"
] | mstz | null | @misc{misc_musk_(version_2)_75,
author = {Chapman,David & Jain,Ajay},
title = {{Musk (Version 2)}},
year = {1994},
howpublished = {UCI Machine Learning Repository},
note = {{DOI}: \\url{10.24432/C51608}}
} | null | 0 | 11 | ---
language:
- en
tags:
- musk
- tabular_classification
- binary_classification
- multiclass_classification
pretty_name: Musk
size_categories:
- 100<n<1K
task_categories: # Full list at https://github.com/huggingface/hub-docs/blob/main/js/src/lib/interfaces/Types.ts
- tabular-classification
configs:
- musk
---
# Musk
The [Musk dataset](https://archive.ics.uci.edu/ml/datasets/Musk) from the [UCI ML repository](https://archive.ics.uci.edu/ml/datasets).
Each instance describes a molecule conformation, labeled by whether the molecule is a musk.
# Configurations and tasks
| **Configuration** | **Task** | **Description** |
|-------------------|---------------------------|------------------------|
| musk | Binary classification | Is the molecule a musk?|
# Usage
```python
from datasets import load_dataset
dataset = load_dataset("mstz/muskV2")["train"]
```
|
mstz/tic_tac_toe | 2023-04-16T18:03:22.000Z | [
"task_categories:tabular-classification",
"size_categories:n<1K",
"language:en",
"license:cc",
"TicTacToe",
"tabular_classification",
"binary_classification",
"UCI",
"region:us"
] | mstz | null | @misc{misc_tic-tac-toe_endgame_101,
author = {Aha,David},
title = {{Tic-Tac-Toe Endgame}},
year = {1991},
howpublished = {UCI Machine Learning Repository},
note = {{DOI}: \\url{10.24432/C5688J}}
} | null | 0 | 11 | ---
language:
- en
tags:
- TicTacToe
- tabular_classification
- binary_classification
- UCI
pretty_name: TicTacToe
size_categories:
- n<1K
task_categories:
- tabular-classification
configs:
- tic_tac_toe
license: cc
---
# TicTacToe
The [TicTacToe dataset](https://archive-beta.ics.uci.edu/dataset/101/tic+tac+toe+endgame) from the [UCI ML repository](https://archive.ics.uci.edu/ml/datasets).
# Configurations and tasks
| **Configuration** | **Task** | **Description** |
|-------------------|---------------------------|-------------------------|
| tic_tac_toe | Binary classification | Does the X player win? |
# Usage
```python
from datasets import load_dataset
dataset = load_dataset("mstz/tic_tac_toe")["train"]
``` |
vincentmin/eli5_rlhf_explainlikeim5 | 2023-04-10T10:52:49.000Z | [
"task_categories:text-generation",
"task_categories:question-answering",
"size_categories:100K<n<1M",
"language:en",
"region:us"
] | vincentmin | null | null | null | 5 | 11 | ---
task_categories:
- text-generation
- question-answering
language:
- en
pretty_name: Reddit Explain Like I'm 5 for Reinforcement Learning from Human Feedback
size_categories:
- 100K<n<1M
---
# ELI5 paired
This is a processed version of the [`eli5`](https://huggingface.co/datasets/eli5) dataset.
Compared to ["eli5_rlhf"](https://huggingface.co/datasets/vincentmin/eli5_rlhf), this dataset contains only QA pairs from the train split of the eli5 dataset and only from the subreddit explainlikeimfive.
Furthermore, the function
```python
def get_question(example):
    title = example["title"]
    selftext = example["selftext"]
    if selftext:
        if selftext[-1] not in [".", "?", "!"]:
            separator = ". "
        else:
            separator = " "
        question = title + separator + selftext
    else:
        question = title
    example["question"] = question
    return example
```
was applied to get the "question" column and the "title" and "selftext" columns were removed.
The dataset was created following very closely the steps in the [`stack-exchange-paired`](https://huggingface.co/datasets/lvwerra/stack-exchange-paired) dataset.
The following steps were applied:
- The "question" field is a concatenation of "title" with "selftext".
- Create pairs `(response_j, response_k)` where j was rated better than k
- Sample at most 10 pairs per question
- Shuffle the dataset globally
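The pairing and per-question sampling steps above can be sketched as follows. This is a hypothetical helper assuming each answer carries a `score` field; the actual processing notebook in the repository is authoritative:

```python
import itertools
import random

def make_pairs(answers, max_pairs=10, seed=0):
    """Build (response_j, response_k) pairs where response_j was rated
    strictly better than response_k, keeping at most max_pairs per question."""
    pairs = []
    for a, b in itertools.combinations(answers, 2):
        if a["score"] > b["score"]:
            pairs.append((a["text"], b["text"]))
        elif b["score"] > a["score"]:
            pairs.append((b["text"], a["text"]))
    # Shuffle before truncating so the kept pairs are not biased by answer order.
    random.Random(seed).shuffle(pairs)
    return pairs[:max_pairs]

# Three answers with distinct scores yield three pairs, better-rated response first.
answers = [
    {"text": "detailed answer", "score": 42},
    {"text": "short answer", "score": 5},
    {"text": "off-topic answer", "score": 1},
]
pairs = make_pairs(answers)
```

Ties contribute no pair, since neither response was rated better than the other.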
This dataset is designed to be used for preference learning. The processing notebook is in the repository as well. |
andreabac3/StackOverflow-Italian-Fauno-Baize | 2023-04-08T15:49:40.000Z | [
"license:gpl-3.0",
"arxiv:2304.01196",
"region:us"
] | andreabac3 | null | null | null | 1 | 11 | ---
license: gpl-3.0
---
# StackOverflow-Italian-Fauno-Baize
This dataset is an Italian translation of the StackOverflow dataset presented by Baize's authors.
## Dataset Description
- **Paper:** https://arxiv.org/abs/2304.01196
### Languages
Italian
## Dataset Structure
### Data Instances
- Sentences: 57,046
- Average number of turns: 3.6
- Average response length per turn: 36.0
### Data Fields
`topic`, `input`
### Data Splits
Train
## Dataset Creation
### Source Data
#### Initial Data Collection and Normalization
https://github.com/project-baize/baize-chatbot
## Additional Information
### Dataset Curators
[Andrea Bacciu](https://andreabac3.github.io/), Dr. [Giovanni Trappolini](https://sites.google.com/view/giovannitrappolini), [Andrea Santilli](https://www.santilli.xyz/), and Professor [Fabrizio Silvestri](https://sites.google.com/diag.uniroma1.it/fabriziosilvestri/home).
### Licensing Information
This project is a derivative of Baize, and we adhere to the licensing constraints imposed by Baize's creators.
### Citation Information
```bibtex
@misc{fauno,
author = {Andrea Bacciu and Giovanni Trappolini and Andrea Santilli and Fabrizio Silvestri},
title = {Fauno: The Italian Large Language Model that will leave you senza parole!},
year = {2023},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/andreabac3/Fauno-Italian-LLM}},
}
```
```bibtex
@article{xu2023baize,
title={Baize: An Open-Source Chat Model with Parameter-Efficient Tuning on Self-Chat Data},
author={Xu, Canwen and Guo, Daya and Duan, Nan and McAuley, Julian},
journal={arXiv preprint arXiv:2304.01196},
year={2023}
}
``` |
andreabac3/Quora-Italian-Fauno-Baize | 2023-04-08T15:54:40.000Z | [
"license:gpl-3.0",
"arxiv:2304.01196",
"region:us"
] | andreabac3 | null | null | null | 2 | 11 | ---
license: gpl-3.0
---
# Quora-Italian-Fauno-Baize
This dataset is an Italian translation of the Quora dataset presented by Baize's authors.
## Dataset Description
- **Paper:** https://arxiv.org/abs/2304.01196
### Languages
Italian
## Dataset Structure
### Data Instances
- Sentences: 54,456
- Average number of turns: 3.9
- Average response length per turn: 35.9
### Data Fields
`topic`, `input`
### Data Splits
Train
## Dataset Creation
### Source Data
#### Initial Data Collection and Normalization
https://github.com/project-baize/baize-chatbot
## Additional Information
### Dataset Curators
[Andrea Bacciu](https://andreabac3.github.io/), Dr. [Giovanni Trappolini](https://sites.google.com/view/giovannitrappolini), [Andrea Santilli](https://www.santilli.xyz/), and Professor [Fabrizio Silvestri](https://sites.google.com/diag.uniroma1.it/fabriziosilvestri/home).
### Licensing Information
This project is a derivative of Baize, and we adhere to the licensing constraints imposed by Baize's creators.
### Citation Information
```bibtex
@misc{fauno,
author = {Andrea Bacciu and Giovanni Trappolini and Andrea Santilli and Fabrizio Silvestri},
title = {Fauno: The Italian Large Language Model that will leave you senza parole!},
year = {2023},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/andreabac3/Fauno-Italian-LLM}},
}
```
```bibtex
@article{xu2023baize,
title={Baize: An Open-Source Chat Model with Parameter-Efficient Tuning on Self-Chat Data},
author={Xu, Canwen and Guo, Daya and Duan, Nan and McAuley, Julian},
journal={arXiv preprint arXiv:2304.01196},
year={2023}
}
``` |
lansinuote/diffsion_from_scratch | 2023-04-14T06:36:47.000Z | [
"region:us"
] | lansinuote | null | null | null | 0 | 11 | ---
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 119417305.0
num_examples: 833
download_size: 99672356
dataset_size: 119417305.0
---
# Dataset Card for "diffsion_from_scratch"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
mstz/golf | 2023-04-20T09:33:01.000Z | [
"task_categories:tabular-classification",
"language:en",
"golf",
"tabular_classification",
"binary_classification",
"region:us"
] | mstz | null | null | null | 0 | 11 | ---
language:
- en
tags:
- golf
- tabular_classification
- binary_classification
pretty_name: Golf
task_categories: # Full list at https://github.com/huggingface/hub-docs/blob/main/js/src/lib/interfaces/Types.ts
- tabular-classification
configs:
- golf
---
# Golf
The Golf dataset.
Is it a good day to play golf?
# Configurations and tasks
| **Configuration** | **Task** |
|-----------------------|---------------------------|
| golf | Binary classification.|
|
mstz/kddcup | 2023-04-17T14:29:30.000Z | [
"task_categories:tabular-classification",
"language:en",
"kddcup",
"tabular_classification",
"binary_classification",
"region:us"
] | mstz | null | null | null | 1 | 11 | ---
language:
- en
tags:
- kddcup
- tabular_classification
- binary_classification
pretty_name: Kddcup
task_categories: # Full list at https://github.com/huggingface/hub-docs/blob/main/js/src/lib/interfaces/Types.ts
- tabular-classification
configs:
- kddcup
---
# Kddcup
The Kddcup dataset.
# Configurations and tasks
| **Configuration** | **Task** |
|-----------------------|---------------------------|
| kddcup | Multiclass classification.|
|
mstz/letter | 2023-04-17T14:50:00.000Z | [
"task_categories:tabular-classification",
"language:en",
"letter",
"tabular_classification",
"multiclass_classification",
"binary_classification",
"UCI",
"region:us"
] | mstz | null | @misc{misc_letter_recognition_59,
author = {Slate,David},
title = {{Letter Recognition}},
year = {1991},
howpublished = {UCI Machine Learning Repository},
note = {{DOI}: \\url{10.24432/C5ZP40}}
} | null | 0 | 11 | ---
language:
- en
tags:
- letter
- tabular_classification
- multiclass_classification
- binary_classification
- UCI
pretty_name: Letter
task_categories: # Full list at https://github.com/huggingface/hub-docs/blob/main/js/src/lib/interfaces/Types.ts
- tabular-classification
configs:
- letter
---
# Letter
The [Letter dataset](https://archive-beta.ics.uci.edu/dataset/59/letter+recognition) from the [UCI repository](https://archive-beta.ics.uci.edu/).
Letter recognition.
# Configurations and tasks
| **Configuration** | **Task** | **Description** |
|-----------------------|---------------------------|-------------------------|
| letter | Multiclass classification.| |
| A | Binary classification. | Is this letter A? |
| B | Binary classification. | Is this letter B? |
| C | Binary classification. | Is this letter C? |
| ... | Binary classification. | ... |
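The per-letter binary configurations are one-vs-rest views of the multiclass labels. A minimal sketch of that mapping (the `to_binary` helper and the sample labels below are illustrative, not part of the dataset):

```python
# Illustrative sketch: deriving a binary "is this letter X?" target
# from multiclass letter labels. The labels below are made up.
def to_binary(label: str, target: str) -> int:
    """Map a multiclass letter label to a 0/1 target for one letter."""
    return int(label == target)

labels = ["A", "B", "A", "C"]
binary_a = [to_binary(letter, "A") for letter in labels]
print(binary_a)  # [1, 0, 1, 0]
```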
|
mstz/p53 | 2023-04-17T15:55:36.000Z | [
"task_categories:tabular-classification",
"language:en",
"p53",
"tabular_classification",
"binary_classification",
"region:us"
] | mstz | null | @misc{misc_p53_mutants_188,
author = {Lathrop,Richard},
title = {{p53 Mutants}},
year = {2010},
howpublished = {UCI Machine Learning Repository},
note = {{DOI}: \\url{10.24432/C5T89H}}
} | null | 0 | 11 | ---
language:
- en
tags:
- p53
- tabular_classification
- binary_classification
pretty_name: P53
task_categories: # Full list at https://github.com/huggingface/hub-docs/blob/main/js/src/lib/interfaces/Types.ts
- tabular-classification
configs:
- p53
---
# P53
The [P53 dataset](https://archive-beta.ics.uci.edu/dataset/170/p53) from the [UCI repository](https://archive-beta.ics.uci.edu/).
# Configurations and tasks
| **Configuration** | **Task** | **Description** |
|-----------------------|---------------------------|-------------------------|
| p53 | Binary classification.| |
|
jacobthebanana/sst5-mapped-extreme | 2023-04-18T19:51:29.000Z | [
"region:us"
] | jacobthebanana | null | null | null | 0 | 11 | ---
dataset_info:
features:
- name: label
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 450820
num_examples: 4004
- name: test
num_bytes: 119474
num_examples: 1067
- name: validation
num_bytes: 60494
num_examples: 533
download_size: 413936
dataset_size: 630788
---
# Dataset Card for "sst5-mapped-extreme"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
mstz/soybean | 2023-04-18T08:09:13.000Z | [
"task_categories:tabular-classification",
"language:en",
"soybean",
"tabular_classification",
"binary_classification",
"multiclass_classification",
"UCI",
"region:us"
] | mstz | null | @misc{misc_soybean_(large)_90,
author = {Michalski,R.S. & Chilausky,R.L.},
title = {{Soybean (Large)}},
year = {1988},
howpublished = {UCI Machine Learning Repository},
note = {{DOI}: \\url{10.24432/C5JG6Z}}
} | null | 0 | 11 | ---
language:
- en
tags:
- soybean
- tabular_classification
- binary_classification
- multiclass_classification
- UCI
pretty_name: Isoybean
task_categories: # Full list at https://github.com/huggingface/hub-docs/blob/main/js/src/lib/interfaces/Types.ts
- tabular-classification
configs:
- soybean
---
# Soybean
The [Soybean dataset](https://archive-beta.ics.uci.edu/dataset/90/soybean+large) from the [UCI repository](https://archive-beta.ics.uci.edu/).
Classify the type of soybean.
# Configurations and tasks
| **Configuration** | **Task** | **Description** |
|-----------------------|---------------------------|-----------------|
| soybean | Multiclass classification. | Classify soybean type. |
| diaporthe_stem_canker | Binary classification | Is this instance of class diaporthe_stem_canker? |
| charcoal_rot | Binary classification | Is this instance of class charcoal_rot? |
| rhizoctonia_root_rot | Binary classification | Is this instance of class rhizoctonia_root_rot? |
| phytophthora_rot | Binary classification | Is this instance of class phytophthora_rot? |
| brown_stem_rot | Binary classification | Is this instance of class brown_stem_rot? |
| powdery_mildew | Binary classification | Is this instance of class powdery_mildew? |
| downy_mildew | Binary classification | Is this instance of class downy_mildew? |
| brown_spot | Binary classification | Is this instance of class brown_spot? |
| bacterial_blight | Binary classification | Is this instance of class bacterial_blight? |
| bacterial_pustule | Binary classification | Is this instance of class bacterial_pustule? |
| purple_seed_stain | Binary classification | Is this instance of class purple_seed_stain? |
| anthracnose | Binary classification | Is this instance of class anthracnose? |
| phyllosticta_leaf_spot | Binary classification | Is this instance of class phyllosticta_leaf_spot? |
| alternarialeaf_spot | Binary classification | Is this instance of class alternarialeaf_spot? |
| frog_eye_leaf_spot | Binary classification | Is this instance of class frog_eye_leaf_spot? |
| diaporthe_pod_&_stem_blight | Binary classification | Is this instance of class diaporthe_pod_&_stem_blight? |
| cyst_nematode | Binary classification | Is this instance of class cyst_nematode? |
| 2_4_d_injury | Binary classification | Is this instance of class 2_4_d_injury? |
| herbicide_injury | Binary classification | Is this instance of class herbicide_injury? | |
mstz/uscensus | 2023-04-18T09:01:20.000Z | [
"task_categories:tabular-classification",
"language:en",
"uscensus",
"tabular_classification",
"binary_classification",
"UCI",
"region:us"
] | mstz | null | null | null | 0 | 11 | ---
language:
- en
tags:
- uscensus
- tabular_classification
- binary_classification
- UCI
pretty_name: Uscensus
task_categories: # Full list at https://github.com/huggingface/hub-docs/blob/main/js/src/lib/interfaces/Types.ts
- tabular-classification
configs:
- uscensus
---
# Uscensus
US census dataset from UCI. |
matejklemen/clc_fce | 2023-04-25T21:00:20.000Z | [
"license:other",
"region:us"
] | matejklemen | The CLC FCE Dataset is a set of 1,244 exam scripts written by candidates sitting the Cambridge ESOL First Certificate
in English (FCE) examination in 2000 and 2001. The dataset exposes the sentence-level pre-tokenized M2 version, totaling
33236 sentences. | @inproceedings{yannakoudakis-etal-2011-new,
title = "A New Dataset and Method for Automatically Grading {ESOL} Texts",
author = "Yannakoudakis, Helen and
Briscoe, Ted and
Medlock, Ben",
booktitle = "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies",
month = jun,
year = "2011",
url = "https://aclanthology.org/P11-1019",
pages = "180--189",
} | null | 0 | 11 | ---
license: other
dataset_info:
features:
- name: src_tokens
sequence: string
- name: tgt_tokens
sequence: string
- name: corrections
list:
- name: idx_src
sequence: int32
- name: idx_tgt
sequence: int32
- name: corr_type
dtype: string
splits:
- name: train
num_bytes: 8658209
num_examples: 28350
- name: validation
num_bytes: 668073
num_examples: 2191
- name: test
num_bytes: 823872
num_examples: 2695
download_size: 2774021
dataset_size: 10150154
---
|
Ubenwa/CryCeleb2023 | 2023-07-18T13:05:52.000Z | [
"task_categories:audio-classification",
"size_categories:10K<n<100K",
"license:cc-by-nc-nd-4.0",
"arxiv:2305.00969",
"doi:10.57967/hf/1014",
"region:us"
] | Ubenwa | null | null | null | 5 | 11 | ---
viewer: false
dataset_info:
features:
- name: baby_id
dtype: string
- name: period
dtype: string
- name: duration
dtype: float64
- name: split
dtype: string
- name: chronological_index
dtype: string
- name: file_name
dtype: string
- name: file_id
dtype: string
splits:
- name: train
num_bytes: 522198700
num_examples: 18190
num_babies: 586
total_length (minutes): 268
- name: dev
num_bytes: 45498424
num_examples: 1614
num_babies: 40
total_length (minutes): 23
- name: test
num_bytes: 192743500
num_examples: 6289
num_babies: 160
total_length (minutes): 99
dataset_size: 760444720
num_examples: 26093
num_babies: 786
total_length (minutes): 391
license: cc-by-nc-nd-4.0
task_categories:
- audio-classification
size_categories:
- 10K<n<100K
extra_gated_fields:
Affilation (company or university): text
Country: text
  I agree to use this data for non-commercial use ONLY (under Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International license): checkbox
---
# Dataset Card for "CryCeleb2023"
## Table of Contents
- [Dataset Card for "CryCeleb2023"](#dataset-card-for-cryceleb2023)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Splits](#data-splits)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Annotation process](#annotation-process)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage: https://huggingface.co/datasets/Ubenwa/CryCeleb2023**
- **Repository: https://huggingface.co/datasets/Ubenwa/CryCeleb2023**
- **Paper: https://arxiv.org/abs/2305.00969**
- **Leaderboard: https://huggingface.co/spaces/competitions/CryCeleb2023**
- **Point of Contact: challenge@ubenwa.ai**
### Dataset Summary
The CryCeleb2023 dataset is a compilation of cries gathered from 786 infants from various hospitals. \
The 26k audio files make up 6.5 hours of pure expiration sounds. \
The dataset also contains information on the time of recording, which is either within the first hour(s) of life or \
upon hospital discharge, typically within 24 hours of birth.
### Supported Tasks and Leaderboards
[CryCeleb2023 competition](https://huggingface.co/spaces/competitions/CryCeleb2023)
## Dataset Structure
Audio folder contains short wav files (16 kHz wav PCM).
*audio* - folder with audio files structured by infant ID
```
audio/
train/
spk1/
B/
spk1_B_001.wav
...
spk6_B_001.wav
...
D/
spk1_D_001.wav
...
...
spk586
...
dev/
...(similar to train)...
test/
anonymous1/
B/
...
```
In this folder structure:
- spkN: folder with recordings corresponding to baby N
- B/D: time of recording (birth or discharge)
- 001, 002, etc. - chronological index of cry sound (expiration)
*metadata.csv* - metadata associated with each audio file
*dev_pairs.csv* - pairs of birth/discharge recordings used for evaluating development set (available to challenge participants)
*test_pairs.csv* - pairs of birth/discharge recordings used in CryCeleb2023 evaluation (public and private scores)
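Each dev/test pair is scored by comparing a representation of the birth recording against one of the discharge recording. A rough sketch of the scoring step (a real system would derive embeddings from the wav files; the short vectors below are made-up stand-ins):

```python
import math

# Illustrative verification scoring for one birth/discharge pair from
# dev_pairs.csv. The embedding vectors here are fabricated placeholders.
def cosine_score(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

emb_birth = [0.2, 0.9, -0.4]
emb_discharge = [0.25, 0.8, -0.5]
score = cosine_score(emb_birth, emb_discharge)
# Higher scores suggest the same baby; a threshold tuned on the dev pairs
# turns scores into same/different-baby decisions.
```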
### Data Instances
Audio files 16 kHz wav PCM - manually segmented cry sounds (expirations)
### Data Splits
Number of Infants by Split and Time(s) of Recording(s)
| Time(s) of Recording | train | dev | test |
| --- | --- | --- | --- |
| Both birth and discharge | 348 | 40 | 160 |
| Only birth | 183 | 0 | 0 |
| Only discharge | 55 | 0 | 0 |
| **Total** | 586 | 40 | 160 |
### Source Data
Audio recordings of infant cries made by android application
### Annotations
#### Annotation process
- Manual segmentation of cry into three categories: expiration, inspiration, no cry
- Only expirations kept in this corpus
- Manual review to remove any PIIs
### Personal and Sensitive Information
PII, such as intelligible background speech, was removed from the data.
All identities are also anonymized.
## Considerations for Using the Data
### Discussion of Biases
The dataset only covers infants born in one country.
### Other Known Limitations
Dataset only includes expirations.
Recording quality varies.
## Additional Information
### Dataset Curators
Ubenwa.ai (contact: challenge@ubenwa.ai)
### Licensing Information
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License
[](https://creativecommons.org/licenses/cc-nc-nd/4.0/)
### Citation Information
Please cite the following paper if you use this dataset
```
@article{ubenwa2023cryceleb,
title={CryCeleb: A Speaker Verification Dataset Based on Infant Cry Sounds},
author={David Budaghyan and Arsenii Gorin and Cem Subakan and Charles C. Onu},
year={2023},
journal={arXiv preprint arXiv:2305.00969},
}
```
|
moyix/SecurityEval | 2023-05-01T19:08:57.000Z | [
"region:us"
] | moyix | null | null | null | 7 | 11 | ---
dataset_info:
features:
- name: ID
dtype: string
- name: Prompt
dtype: string
- name: Insecure_code
dtype: string
splits:
- name: test
num_bytes: 72854
num_examples: 130
download_size: 46036
dataset_size: 72854
---
# Dataset Card for "SecurityEval"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
TrainingDataPro/license_plates | 2023-09-14T16:42:28.000Z | [
"task_categories:image-to-text",
"language:en",
"license:cc-by-nc-nd-4.0",
"finance",
"region:us"
] | TrainingDataPro | Over 1.2 million annotated license plates from vehicles around the world.
This dataset is tailored for License Plate Recognition tasks and includes
images from both YouTube and PlatesMania.
Annotation details are provided in the About section below. | @InProceedings{huggingface:dataset,
title = {license_plates},
author = {TrainingDataPro},
year = {2023}
} | null | 3 | 11 | ---
license: cc-by-nc-nd-4.0
task_categories:
- image-to-text
language:
- en
tags:
- finance
dataset_info:
- config_name: Brazil_youtube
features:
- name: image
dtype: image
- name: labeled_image
dtype: image
- name: bbox
dtype: string
- name: license_plate.id
dtype: string
- name: license_plate.visibility
dtype: string
- name: license_plate.rows_count
dtype: uint8
- name: license_plate.number
dtype: string
- name: license_plate.serial
dtype: string
- name: license_plate.country
dtype: string
- name: license_plate.mask
dtype: string
splits:
- name: train
num_bytes: 173536648
num_examples: 72
download_size: 22606962
dataset_size: 173536648
- config_name: Estonia_platesmania
features:
- name: image
dtype: image
- name: labeled_image
dtype: image
- name: bbox
dtype: string
- name: license_plate.id
dtype: string
- name: license_plate.visibility
dtype: string
- name: license_plate.rows_count
dtype: uint8
- name: license_plate.number
dtype: string
- name: license_plate.serial
dtype: string
- name: license_plate.country
dtype: string
- name: license_plate.mask
dtype: string
splits:
- name: train
num_bytes: 7990452
num_examples: 10
download_size: 7863164
dataset_size: 7990452
- config_name: Finland_platesmania
features:
- name: image
dtype: image
- name: labeled_image
dtype: image
- name: bbox
dtype: string
- name: license_plate.id
dtype: string
- name: license_plate.visibility
dtype: string
- name: license_plate.rows_count
dtype: uint8
- name: license_plate.number
dtype: string
- name: license_plate.serial
dtype: string
- name: license_plate.country
dtype: string
- name: license_plate.mask
dtype: string
splits:
- name: train
num_bytes: 9650579
num_examples: 10
download_size: 9485725
dataset_size: 9650579
- config_name: Kazakhstan_platesmania
features:
- name: image
dtype: image
- name: labeled_image
dtype: image
- name: bbox
dtype: string
- name: license_plate.id
dtype: string
- name: license_plate.visibility
dtype: string
- name: license_plate.rows_count
dtype: uint8
- name: license_plate.number
dtype: string
- name: license_plate.serial
dtype: string
- name: license_plate.country
dtype: string
- name: license_plate.mask
dtype: string
splits:
- name: train
num_bytes: 14064541
num_examples: 19
download_size: 7265915
dataset_size: 14064541
- config_name: Kazakhstan_youtube
features:
- name: image
dtype: image
- name: labeled_image
dtype: image
- name: bbox
dtype: string
- name: license_plate.id
dtype: string
- name: license_plate.visibility
dtype: string
- name: license_plate.rows_count
dtype: uint8
- name: license_plate.number
dtype: string
- name: license_plate.serial
dtype: string
- name: license_plate.country
dtype: string
- name: license_plate.mask
dtype: string
splits:
- name: train
num_bytes: 6324396
num_examples: 22
download_size: 2852873
dataset_size: 6324396
- config_name: Lithuania_platesmania
features:
- name: image
dtype: image
- name: labeled_image
dtype: image
- name: bbox
dtype: string
- name: license_plate.id
dtype: string
- name: license_plate.visibility
dtype: string
- name: license_plate.rows_count
dtype: uint8
- name: license_plate.number
dtype: string
- name: license_plate.serial
dtype: string
- name: license_plate.country
dtype: string
- name: license_plate.mask
dtype: string
splits:
- name: train
num_bytes: 8127614
num_examples: 10
download_size: 7940839
dataset_size: 8127614
- config_name: Serbia_platesmania
features:
- name: image
dtype: image
- name: labeled_image
dtype: image
- name: bbox
dtype: string
- name: license_plate.id
dtype: string
- name: license_plate.visibility
dtype: string
- name: license_plate.rows_count
dtype: uint8
- name: license_plate.number
dtype: string
- name: license_plate.serial
dtype: string
- name: license_plate.country
dtype: string
- name: license_plate.mask
dtype: string
splits:
- name: train
num_bytes: 10000777
num_examples: 10
download_size: 9808356
dataset_size: 10000777
- config_name: Serbia_youtube
features:
- name: image
dtype: image
- name: labeled_image
dtype: image
- name: bbox
dtype: string
- name: license_plate.id
dtype: string
- name: license_plate.visibility
dtype: string
- name: license_plate.rows_count
dtype: uint8
- name: license_plate.number
dtype: string
- name: license_plate.serial
dtype: string
- name: license_plate.country
dtype: string
- name: license_plate.mask
dtype: string
splits:
- name: train
num_bytes: 26535839
num_examples: 67
download_size: 4044272
dataset_size: 26535839
- config_name: UAE_platesmania
features:
- name: image
dtype: image
- name: labeled_image
dtype: image
- name: bbox
dtype: string
- name: license_plate.id
dtype: string
- name: license_plate.visibility
dtype: string
- name: license_plate.rows_count
dtype: uint8
- name: license_plate.number
dtype: string
- name: license_plate.serial
dtype: string
- name: license_plate.country
dtype: string
- name: license_plate.mask
dtype: string
splits:
- name: train
num_bytes: 8236358
num_examples: 10
download_size: 8028800
dataset_size: 8236358
- config_name: UAE_youtube
features:
- name: image
dtype: image
- name: labeled_image
dtype: image
- name: bbox
dtype: string
- name: license_plate.id
dtype: string
- name: license_plate.visibility
dtype: string
- name: license_plate.rows_count
dtype: uint8
- name: license_plate.number
dtype: string
- name: license_plate.serial
dtype: string
- name: license_plate.country
dtype: string
- name: license_plate.mask
dtype: string
splits:
- name: train
num_bytes: 41202317
num_examples: 162
download_size: 2666314
dataset_size: 41202317
---
# License Plates
Over **1.2 million** annotated license plates from vehicles around the world. This dataset is tailored for **License Plate Recognition tasks** and includes images from both YouTube and PlatesMania.
Annotation details are provided in the About section below.
# Get the dataset
### This is just an example of the data
Leave a request on [**https://trainingdata.pro/data-market**](https://trainingdata.pro/data-market?utm_source=huggingface&utm_medium=cpc&utm_campaign=license_plates) to discuss your requirements, learn about the price and buy the dataset.
# About
## Variables in .csv files:
- **file_name** - filename of the original car photo
- **license_plate.country** - country where the vehicle was captured
- **bbox** - normalized Bounding Box labeling of the car
- **license_plate.visibility** - the visibility type of the license plate
- **license_plate.id** - unique license plate's id
- **license_plate.mask** - normalized coordinates of the license plate
- **license_plate.rows_count** - single-line or double-line number
- **license_plate.number** - recognized text of the license plate
- **license_plate.serial** - only for UAE numbers - license plate series
- **license_plate.region** - only for UAE numbers - license plate subregion
- **license_plate.color** - only for Saudi Arabia - color of the international plate code
**How it works**: *go to the folder for a country; its CSV file contains all the labeling information for the images located in that folder's "photos" subfolder.*
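As a rough sketch of that workflow (the CSV content and the column subset below are made up for illustration; the real per-country CSVs carry the full variable list above):

```python
import csv
import io

# Illustrative sketch: parsing one country's labeling CSV and resolving
# image paths into its "photos" subfolder. The two rows are fabricated.
csv_text = """file_name,license_plate.number,license_plate.rows_count
car_001.jpg,AB123CD,1
car_002.jpg,XY987ZT,2
"""
rows = list(csv.DictReader(io.StringIO(csv_text)))
photo_paths = ["photos/" + r["file_name"] for r in rows]
# csv.DictReader yields strings, so numeric fields are compared as text here.
single_line = [r for r in rows if r["license_plate.rows_count"] == "1"]
```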
## [**TrainingData**](https://trainingdata.pro/data-market?utm_source=huggingface&utm_medium=cpc&utm_campaign=license_plates) provides high-quality data annotation tailored to your needs
More datasets in TrainingData's Kaggle account: **https://www.kaggle.com/trainingdatapro/datasets**
TrainingData's GitHub: **https://github.com/Trainingdata-datamarket/TrainingData_All_datasets** |
birkhoffg/folktables-acs-income | 2023-05-08T19:31:11.000Z | [
"task_categories:tabular-classification",
"size_categories:1M<n<10M",
"language:en",
"adult",
"region:us"
] | birkhoffg | null | null | null | 1 | 11 | ---
dataset_info:
features:
- name: AGEP
dtype: float64
- name: COW
dtype: float64
- name: SCHL
dtype: float64
- name: MAR
dtype: float64
- name: OCCP
dtype: float64
- name: POBP
dtype: float64
- name: RELP
dtype: float64
- name: WKHP
dtype: float64
- name: SEX
dtype: float64
- name: RAC1P
dtype: float64
- name: STATE
dtype: string
- name: YEAR
dtype: int64
- name: PINCP
dtype: float64
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 808018860
num_examples: 7345626
- name: test
num_bytes: 269339730
num_examples: 2448543
download_size: 197308481
dataset_size: 1077358590
task_categories:
- tabular-classification
language:
- en
tags:
- adult
size_categories:
- 1M<n<10M
---
# Dataset Card for "folktables-acs-income"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Nan-Do/code-search-net-javascript | 2023-05-15T00:57:43.000Z | [
"task_categories:text-generation",
"task_categories:text2text-generation",
"task_categories:summarization",
"language:en",
"license:apache-2.0",
"code",
"javascript",
"CodeSearchNet",
"summary",
"region:us"
] | Nan-Do | null | null | null | 0 | 11 | ---
dataset_info:
features:
- name: repo
dtype: string
- name: path
dtype: string
- name: func_name
dtype: string
- name: original_string
dtype: string
- name: language
dtype: string
- name: code
dtype: string
- name: code_tokens
sequence: string
- name: docstring
dtype: string
- name: docstring_tokens
sequence: string
- name: sha
dtype: string
- name: url
dtype: string
- name: partition
dtype: string
- name: summary
dtype: string
splits:
- name: train
num_bytes: 543032741
num_examples: 138155
download_size: 182237165
dataset_size: 543032741
license: apache-2.0
task_categories:
- text-generation
- text2text-generation
- summarization
language:
- en
tags:
- code
- javascript
- CodeSearchNet
- summary
pretty_name: JavaScript CodeSearchNet with Summaries
---
# Dataset Card for "code-search-net-javascript"
## Dataset Description
- **Homepage:** None
- **Repository:** https://huggingface.co/datasets/Nan-Do/code-search-net-JavaScript
- **Paper:** None
- **Leaderboard:** None
- **Point of Contact:** [@Nan-Do](https://github.com/Nan-Do)
### Dataset Summary
This dataset is the JavaScript portion of CodeSearchNet, annotated with a summary column.
The code-search-net dataset includes open source functions that include comments found at GitHub.
The summary is a short description of what the function does.
### Languages
The dataset's comments are in English and the functions are coded in JavaScript.
### Data Splits
Train, test, validation labels are included in the dataset as a column.
## Dataset Creation
May of 2023
### Curation Rationale
This dataset can be used to generate instructional (or many other interesting) datasets that are useful to train LLMs
### Source Data
The CodeSearchNet dataset can be found at https://www.kaggle.com/datasets/omduggineni/codesearchnet
### Annotations
This datasets include a summary column including a short description of the function.
#### Annotation process
The annotation procedure was done using [Salesforce](https://huggingface.co/Salesforce) T5 summarization models.
A sample notebook of the process can be found at https://github.com/Nan-Do/OpenAssistantInstructionResponsePython
The annotations have been cleaned to make sure there are no repetitions and/or meaningless summaries (some may still be present in the dataset).
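A screening step of that kind can be sketched as follows (the records and the `looks_degenerate` heuristic below are illustrative, not the cleaning code actually used):

```python
# Illustrative sketch: screening summary annotations for degenerate
# outputs (trivially short text or heavy word repetition). The records
# are made up; real rows carry the fields listed above ("code",
# "summary", "func_name", ...).
records = [
    {"func_name": "add", "summary": "Adds two numbers."},
    {"func_name": "noop", "summary": "the the the"},
]

def looks_degenerate(summary: str) -> bool:
    """Flag summaries that are too short or mostly repeated words."""
    words = summary.lower().split()
    return len(words) < 2 or len(set(words)) / len(words) < 0.5

clean = [r for r in records if not looks_degenerate(r["summary"])]
```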
### Licensing Information
Apache 2.0 |
Finnish-NLP/wikipedia_20230501_fi_cleaned | 2023-05-18T14:49:13.000Z | [
"region:us"
] | Finnish-NLP | null | null | null | 0 | 11 | ---
dataset_info:
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
- name: perplexity_kenlm
dtype: int64
- name: label_identity_attack
dtype: float64
- name: label_insult
dtype: float64
- name: label_obscene
dtype: float64
- name: label_severe_toxicity
dtype: float64
- name: label_threat
dtype: float64
- name: label_toxicity
dtype: float64
splits:
- name: train
num_bytes: 950578253
num_examples: 410684
download_size: 557777488
dataset_size: 950578253
---
# Dataset Card for "wikipedia_20230501_fi_cleaned"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
coeuslearning/customerqueries | 2023-05-20T06:34:15.000Z | [
"region:us"
] | coeuslearning | null | null | null | 0 | 11 | Entry not found |
voidful/StrategyQA | 2023-05-20T16:06:43.000Z | [
"region:us"
] | voidful | null | null | null | 0 | 11 | A Question Answering Benchmark with Implicit Reasoning Strategies
The StrategyQA dataset was created through a crowdsourcing pipeline for eliciting creative and diverse yes/no questions that require implicit reasoning steps. To solve questions in StrategyQA, the reasoning steps should be inferred using a strategy. To guide and evaluate the question answering process, each example in StrategyQA was annotated with a decomposition into reasoning steps for answering it, and Wikipedia paragraphs that provide evidence for the answer to each step.
Illustrated in the figure below: Questions in StrategyQA (Q1) require implicit reasoning, in contrast to multi-step questions that explicitly specify the reasoning process (Q2). Each training example contains a question (Q1), yes/no answer (A), decomposition (D), and evidence paragraphs (E).
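A training example with that structure can be pictured as a simple record (the content below is paraphrased from the paper's titular example for illustration, not copied from the dataset files):

```python
# Illustrative StrategyQA-style training instance: question (Q), yes/no
# answer (A), decomposition into reasoning steps (D), and per-step
# evidence (E). Evidence identifiers here are made-up placeholders.
example = {
    "question": "Did Aristotle use a laptop?",
    "answer": False,
    "decomposition": [
        "When did Aristotle live?",
        "When was the laptop invented?",
        "Is #2 before #1?",
    ],
    "evidence": [["wiki-paragraph-id-1"], ["wiki-paragraph-id-2"], ["operation"]],
}
n_steps = len(example["decomposition"])
```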
[strategyqa_test](https://huggingface.co/datasets/voidful/StrategyQA/resolve/main/strategyqa_test.json)
[strategyqa_train](https://huggingface.co/datasets/voidful/StrategyQA/blob/main/strategyqa_train.json)
[strategyqa_train_filtered](https://huggingface.co/datasets/voidful/StrategyQA/blob/main/strategyqa_train_filtered.json)
[strategyqa_train_paragraphs](https://huggingface.co/datasets/voidful/StrategyQA/blob/main/strategyqa_train_paragraphs.json)
Paper
Title: Did Aristotle Use a Laptop? A Question Answering Benchmark with Implicit Reasoning Strategies
Authors: Mor Geva, Daniel Khashabi, Elad Segal, Tushar Khot, Dan Roth, Jonathan Berant
Transactions of the Association for Computational Linguistics (TACL), 2021
Citation:
```
@article{geva2021strategyqa,
title = {{Did Aristotle Use a Laptop? A Question Answering Benchmark with Implicit Reasoning Strategies}},
author = {Geva, Mor and Khashabi, Daniel and Segal, Elad and Khot, Tushar and Roth, Dan and Berant, Jonathan},
journal = {Transactions of the Association for Computational Linguistics (TACL)},
year = {2021},
}
``` |
yankihue/turkish-news-categories | 2023-05-24T15:20:05.000Z | [
"region:us"
] | yankihue | null | null | null | 0 | 11 | Entry not found |
Linly-AI/Chinese-pretraining-dataset | 2023-05-26T02:32:06.000Z | [
"license:apache-2.0",
"region:us"
] | Linly-AI | null | null | null | 20 | 11 | ---
license: apache-2.0
---
Data source: https://github.com/CVI-SZU/Linly/wiki/Linly-OpenLLaMA |
saattrupdan/womens-clothing-ecommerce-reviews | 2023-05-25T20:18:53.000Z | [
"task_categories:text-classification",
"size_categories:1K<n<10K",
"language:en",
"multimodal",
"region:us"
] | saattrupdan | null | null | null | 0 | 11 | ---
dataset_info:
features:
- name: review_text
dtype: string
- name: age
dtype: int64
- name: rating
dtype: int64
- name: positive_feedback_count
dtype: int64
- name: division_name
dtype: string
- name: department_name
dtype: string
- name: class_name
dtype: string
- name: recommended_ind
dtype:
class_label:
names:
'0': '0'
'1': '1'
splits:
- name: train
num_bytes: 7811312.540347158
num_examples: 20641
- name: val
num_bytes: 378436.72982642107
num_examples: 1000
- name: test
num_bytes: 378436.72982642107
num_examples: 1000
download_size: 4357015
dataset_size: 8568186.0
task_categories:
- text-classification
language:
- en
tags:
- multimodal
pretty_name: Women's Clothing E-Commerce Reviews
size_categories:
- 1K<n<10K
---
# Dataset Card for "womens-clothing-ecommerce-reviews"
Processed version of [this dataset](https://github.com/ya-stack/Women-s-Ecommerce-Clothing-Reviews). |
ccmusic-database/bel_folk | 2023-10-03T16:56:58.000Z | [
"task_categories:audio-classification",
"size_categories:n<1K",
"language:zh",
"language:en",
"license:mit",
"music",
"art",
"region:us"
] | ccmusic-database | This database contains hundreds of acapella singing clips that are sung in two styles,
Bel Conto and Chinese national singing style by professional vocalists.
All of them are sung by professional vocalists and were recorded in professional commercial recording studios. | @dataset{zhaorui_liu_2021_5676893,
author = {Zhaorui Liu, Monan Zhou, Shenyang Xu and Zijin Li},
title = {{Music Data Sharing Platform for Computational Musicology Research (CCMUSIC DATASET)}},
month = nov,
year = 2021,
publisher = {Zenodo},
version = {1.1},
doi = {10.5281/zenodo.5676893},
url = {https://doi.org/10.5281/zenodo.5676893}
} | null | 1 | 11 | ---
license: mit
task_categories:
- audio-classification
language:
- zh
- en
tags:
- music
- art
pretty_name: Bel Conto and Chinese Folk Song Singing Tech Database
size_categories:
- n<1K
---
# Dataset Card for Bel Conto and Chinese Folk Song Singing Tech Database
## Dataset Description
- **Homepage:** <https://ccmusic-database.github.io>
- **Repository:** <https://huggingface.co/datasets/ccmusic-database/bel_folk>
- **Paper:** <https://doi.org/10.5281/zenodo.5676893>
- **Leaderboard:** <https://ccmusic-database.github.io/team.html>
- **Point of Contact:** N/A
### Dataset Summary
This database contains hundreds of a cappella singing clips performed in two styles, Bel Conto and the Chinese national singing style. All of them are sung by professional vocalists and were recorded in professional commercial recording studios.
### Supported Tasks and Leaderboards
Audio classification, singing method classification, voice classification
### Languages
Chinese, English
## Dataset Structure
### Data Instances
.zip(.wav, .jpg)
### Data Fields
m_bel, f_bel, m_folk, f_folk
### Data Splits
train, validation, test
## Dataset Creation
### Curation Rationale
Lack of a dataset for Bel Conto and Chinese folk song singing tech
### Source Data
#### Initial Data Collection and Normalization
Zhaorui Liu, Monan Zhou
#### Who are the source language producers?
Students from CCMUSIC
### Annotations
#### Annotation process
All of them are sung by professional vocalists and were recorded in professional commercial recording studios.
#### Who are the annotators?
professional vocalists
### Personal and Sensitive Information
None
## Considerations for Using the Data
### Social Impact of Dataset
Promoting the development of AI in the music industry
### Discussion of Biases
Only for Chinese songs
### Other Known Limitations
Some singers may not have enough professional training in classical or ethnic vocal techniques.
## Additional Information
### Dataset Curators
Zijin Li
### Evaluation
Coming soon...
### Licensing Information
```
MIT License
Copyright (c) CCMUSIC
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
```
### Citation Information
```
@dataset{zhaorui_liu_2021_5676893,
author = {Zhaorui Liu, Monan Zhou, Shenyang Xu and Zijin Li},
title = {CCMUSIC DATABASE: Music Data Sharing Platform for Computational Musicology Research},
month = {nov},
year = {2021},
publisher = {Zenodo},
version = {1.1},
doi = {10.5281/zenodo.5676893},
url = {https://doi.org/10.5281/zenodo.5676893}
}
```
### Contributions
Provide a dataset for distinguishing Bel Conto and Chinese folk song singing tech |
TigerResearch/pretrain_en | 2023-05-30T10:01:55.000Z | [
"task_categories:text-generation",
"size_categories:10M<n<100M",
"language:en",
"license:apache-2.0",
"region:us"
] | TigerResearch | null | null | null | 11 | 11 | ---
dataset_info:
features:
- name: content
dtype: string
splits:
- name: train
num_bytes: 48490123196
num_examples: 22690306
download_size: 5070161762
dataset_size: 48490123196
license: apache-2.0
task_categories:
- text-generation
language:
- en
size_categories:
- 10M<n<100M
---
# Dataset Card for "pretrain_en"
The English portion of the [Tigerbot](https://github.com/TigerResearch/TigerBot) pretraining data.
## Usage
```python
import datasets
ds_sft = datasets.load_dataset('TigerResearch/pretrain_en')
``` |
OdiaGenAI/all_combined_bengali_252k | 2023-06-28T12:47:51.000Z | [
"task_categories:text-generation",
"size_categories:100K<n<1M",
"language:bn",
"license:cc-by-nc-sa-4.0",
"region:us"
] | OdiaGenAI | null | null | null | 2 | 11 | ---
license: cc-by-nc-sa-4.0
task_categories:
- text-generation
language:
- bn
pretty_name: all_combined_bengali_252K
size_categories:
- 100K<n<1M
---
# Dataset Card for all_combined_bengali_252K
## Dataset Description
- **Homepage: https://www.odiagenai.org/**
- **Repository: https://github.com/OdiaGenAI**
- **Point of Contact: Shantipriya Parida, and Sambit Sekhar**
### Dataset Summary
This dataset is a mix of Bengali instruction sets translated from open-source instruction sets:
* Dolly
* Alpaca
* ChatDoctor
* Roleplay
* GSM
In this dataset, Bengali instruction, input, and output strings are available.
### Supported Tasks and Leaderboards
Large Language Model (LLM)
### Languages
Bengali
## Dataset Structure
JSON
### Data Fields
output (string)
data_source (string)
instruction (string)
input (string)
### Licensing Information
This work is licensed under a
[Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License][cc-by-nc-sa].
[![CC BY-NC-SA 4.0][cc-by-nc-sa-image]][cc-by-nc-sa]
[cc-by-nc-sa]: http://creativecommons.org/licenses/by-nc-sa/4.0/
[cc-by-nc-sa-image]: https://licensebuttons.net/l/by-nc-sa/4.0/88x31.png
[cc-by-nc-sa-shield]: https://img.shields.io/badge/License-CC%20BY--NC--SA%204.0-lightgrey.svg
### Citation Information
If you find this repository useful, please consider giving 👏 and citing:
```
@misc{OdiaGenAI,
author = {Shantipriya Parida and Sambit Sekhar and Guneet Singh Kohli and Arghyadeep Sen and Shashikanta Sahoo},
title = {Bengali Instruction Set},
year = {2023},
publisher = {Hugging Face},
journal = {Hugging Face repository},
howpublished = {\url{https://huggingface.co/OdiaGenAI}},
}
```
### Contributions
- Shantipriya Parida
- Sambit Sekhar
- Guneet Singh Kohli
- Arghyadeep Sen
- Shashikanta Sahoo |
whu9/billsum_postprocess | 2023-06-03T06:23:32.000Z | [
"region:us"
] | whu9 | null | null | null | 0 | 11 | ---
dataset_info:
features:
- name: source
dtype: string
- name: summary
dtype: string
- name: source_num_tokens
dtype: int64
- name: summary_num_tokens
dtype: int64
splits:
- name: train
num_bytes: 217576274
num_examples: 18949
- name: test
num_bytes: 37517829
num_examples: 3269
- name: ca_test
num_bytes: 14715227
num_examples: 1234
download_size: 112581904
dataset_size: 269809330
---
# Dataset Card for "billsum_postprocess"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
vietgpt/copa_en | 2023-06-03T21:20:32.000Z | [
"task_categories:text-classification",
"size_categories:n<1K",
"language:en",
"SFT",
"region:us"
] | vietgpt | null | null | null | 0 | 11 | ---
dataset_info:
features:
- name: premise
dtype: string
- name: choice1
dtype: string
- name: choice2
dtype: string
- name: question
dtype: string
- name: idx
dtype: int32
- name: label
dtype:
class_label:
names:
'0': choice1
'1': choice2
splits:
- name: train
num_bytes: 49233
num_examples: 400
- name: validation
num_bytes: 12479
num_examples: 100
download_size: 45911
dataset_size: 61712
task_categories:
- text-classification
language:
- en
tags:
- SFT
size_categories:
- n<1K
---
# COPA
- Source: https://huggingface.co/datasets/super_glue
- Num examples:
- 400 (train)
- 100 (validation)
- Language: English
```python
from datasets import load_dataset
load_dataset("vietgpt/copa_en")
```
- Format for GPT-3
```python
def preprocess_gpt3(sample):
    premise = sample['premise']
    choice1 = sample['choice1']
    choice2 = sample['choice2']
    label = sample['label']
    if label == 0:
        output = f'\n<|correct|> {choice1}\n<|incorrect|> {choice2}'
    elif label == 1:
        output = f'\n<|correct|> {choice2}\n<|incorrect|> {choice1}'
    return {'text': f'<|startoftext|><|context|> {premise} <|answer|> {output} <|endoftext|>'}
"""
<|startoftext|><|context|> My body cast a shadow over the grass. <|answer|>
<|correct|> The sun was rising.
<|incorrect|> The grass was cut. <|endoftext|>
"""
``` |
TrainingDataPro/helmet_detection | 2023-09-14T16:43:53.000Z | [
"task_categories:image-classification",
"language:en",
"license:cc-by-nc-nd-4.0",
"code",
"region:us"
] | TrainingDataPro | An example of a dataset that we've collected for a photo edit App.
The dataset includes 20 selfies of people (man and women)
in segmentation masks and their visualisations. | @InProceedings{huggingface:dataset,
title = {helmet_detection},
author = {TrainingDataPro},
year = {2023}
} | null | 2 | 11 | ---
license: cc-by-nc-nd-4.0
dataset_info:
features:
- name: id
dtype: string
- name: image
dtype: image
- name: mask
dtype: image
- name: bboxes
dtype: string
splits:
- name: train
num_bytes: 56575701
num_examples: 46
download_size: 56584366
dataset_size: 56575701
task_categories:
- image-classification
language:
- en
tags:
- code
---
# Helmet Detection Dataset
The dataset consists of photographs of construction workers at work. It provides helmet detection using bounding boxes, and addresses public safety tasks such as ensuring compliance with safety regulations, automating the identification of rule violations, and reducing accidents during construction work.
# Get the dataset
### This is just an example of the data
Leave a request on [**https://trainingdata.pro/data-market**](https://trainingdata.pro/data-market?utm_source=huggingface&utm_medium=cpc&utm_campaign=helmet_detection) to discuss your requirements, learn about the price and buy the dataset.

# Dataset structure
- **img** - contains the original images of construction workers
- **boxes** - includes bounding box labeling for the original images
- **annotations.xml** - contains the coordinates of the bounding boxes and labels (helmet, no_helmet) created for the original photos
# Data Format
Each image from `img` folder is accompanied by an XML-annotation in the `annotations.xml` file indicating the coordinates of the bounding boxes and labels for helmet detection. For each point, the x and y coordinates are provided.
# Example of XML file structure
.png?generation=1686295970420156&alt=media)
# Helmet detection annotation can be made in accordance with your requirements.
## [**TrainingData**](https://trainingdata.pro/data-market?utm_source=huggingface&utm_medium=cpc&utm_campaign=helmet_detection) provides high-quality data annotation tailored to your needs
More datasets in TrainingData's Kaggle account: **https://www.kaggle.com/trainingdatapro/datasets**
TrainingData's GitHub: **https://github.com/Trainingdata-datamarket/TrainingData_All_datasets**
|
13nishit/LoanApprovalPrediction | 2023-06-16T11:15:35.000Z | [
"license:unlicense",
"region:us"
] | 13nishit | null | null | null | 0 | 11 | ---
license: unlicense
---
|
aneeshd27/Corals-Classification | 2023-06-16T11:37:47.000Z | [
"task_categories:image-classification",
"size_categories:1K<n<10K",
"license:odbl",
"climate",
"region:us"
] | aneeshd27 | null | null | null | 1 | 11 | ---
license: odbl
task_categories:
- image-classification
tags:
- climate
size_categories:
- 1K<n<10K
--- |
dmayhem93/agieval-gaokao-mathqa | 2023-06-18T17:21:09.000Z | [
"license:mit",
"arxiv:2304.06364",
"region:us"
] | dmayhem93 | null | null | null | 0 | 11 | ---
dataset_info:
features:
- name: query
dtype: string
- name: choices
sequence: string
- name: gold
sequence: int64
splits:
- name: test
num_bytes: 140041
num_examples: 351
download_size: 62472
dataset_size: 140041
license: mit
---
# Dataset Card for "agieval-gaokao-mathqa"
Dataset taken from https://github.com/microsoft/AGIEval and processed as in that repo.
MIT License
Copyright (c) Microsoft Corporation.
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE
@misc{zhong2023agieval,
title={AGIEval: A Human-Centric Benchmark for Evaluating Foundation Models},
author={Wanjun Zhong and Ruixiang Cui and Yiduo Guo and Yaobo Liang and Shuai Lu and Yanlin Wang and Amin Saied and Weizhu Chen and Nan Duan},
year={2023},
eprint={2304.06364},
archivePrefix={arXiv},
primaryClass={cs.CL}
} |
xzuyn/Stable-Diffusion-Prompts-Deduped-2.008M | 2023-06-21T05:09:13.000Z | [
"task_categories:text-generation",
"size_categories:1M<n<10M",
"language:en",
"region:us"
] | xzuyn | null | null | null | 6 | 11 | ---
task_categories:
- text-generation
language:
- en
size_categories:
- 1M<n<10M
---
# [Original Dataset by FredZhang7](https://huggingface.co/datasets/FredZhang7/stable-diffusion-prompts-2.47M)
- Deduped from 2,473,022 prompts down to 2,007,998.
- Changed anything that had `[ prompt text ]`, `( prompt text )`, or `< prompt text >` to `[prompt text]`, `(prompt text)`, and `<prompt text>`.
- Converted runs of 2 or more spaces to a single space.
- Removed all `"` characters.
- Removed leading spaces.
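The clean-up steps above can be reproduced with a few regular expressions; a minimal sketch (the function name and exact patterns are assumptions, not the script used to build this dataset):

```python
import re

def normalize_prompt(text: str) -> str:
    # Tighten "[ prompt text ]", "( prompt text )", "< prompt text >".
    text = re.sub(r"\[\s+([^\]]*?)\s+\]", r"[\1]", text)
    text = re.sub(r"\(\s+([^)]*?)\s+\)", r"(\1)", text)
    text = re.sub(r"<\s+([^>]*?)\s+>", r"<\1>", text)
    # Drop all double quotes.
    text = text.replace('"', "")
    # Collapse runs of 2+ spaces and strip leading spaces.
    text = re.sub(r" {2,}", " ", text)
    return text.lstrip()
```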
BramVanroy/dolly-15k-dutch | 2023-07-07T12:17:23.000Z | [
"task_categories:question-answering",
"task_categories:text-generation",
"size_categories:10K<n<100K",
"language:nl",
"license:cc-by-sa-3.0",
"dolly",
"instruct",
"instruction",
"doi:10.57967/hf/0785",
"region:us"
] | BramVanroy | null | null | null | 0 | 11 | ---
license: cc-by-sa-3.0
task_categories:
- question-answering
- text-generation
language:
- nl
tags:
- dolly
- instruct
- instruction
pretty_name: Dolly 15k Dutch
size_categories:
- 10K<n<100K
---
# Dataset Card for Dolly 15k Dutch
## Dataset Description
- **Homepage:** N/A
- **Repository:** N/A
- **Paper:** N/A
- **Leaderboard:** N/A
- **Point of Contact:** Bram Vanroy
### Dataset Summary
This dataset contains 14,934 instructions, contexts and responses, in several natural language categories such as classification, closed QA, generation, etc. The English [original dataset](https://huggingface.co/datasets/databricks/databricks-dolly-15k) was created by @databricks, who crowd-sourced the data creation via its employees. The current dataset is a translation of that dataset through ChatGPT (`gpt-3.5-turbo`).
☕ [**Want to help me out?**](https://www.buymeacoffee.com/bramvanroy) Translating the data with the OpenAI API, and prompt testing, cost me 💸$19.38💸. If you like this dataset, please consider [buying me a coffee](https://www.buymeacoffee.com/bramvanroy) to offset a portion of this cost, I appreciate it a lot! ☕
### Languages
- Dutch
## Dataset Structure
### Data Instances
```python
{
"id": 14963,
"instruction": "Wat zijn de duurste steden ter wereld?",
"context": "",
"response": "Dit is een uitgebreide lijst van de duurste steden: Singapore, Tel Aviv, New York, Hong Kong, Los Angeles, Zurich, Genève, San Francisco, Parijs en Sydney.",
"category": "brainstorming"
}
```
### Data Fields
- **id**: the ID of the item. The following 77 IDs are not included because they could not be translated (or were too long): `[1502, 1812, 1868, 4179, 4541, 6347, 8851, 9321, 10588, 10835, 11257, 12082, 12319, 12471, 12701, 12988, 13066, 13074, 13076, 13181, 13253, 13279, 13313, 13346, 13369, 13446, 13475, 13528, 13546, 13548, 13549, 13558, 13566, 13600, 13603, 13657, 13668, 13733, 13765, 13775, 13801, 13831, 13906, 13922, 13923, 13957, 13967, 13976, 14028, 14031, 14045, 14050, 14082, 14083, 14089, 14110, 14155, 14162, 14181, 14187, 14200, 14221, 14222, 14281, 14473, 14475, 14476, 14587, 14590, 14667, 14685, 14764, 14780, 14808, 14836, 14891, 14966]`
- **instruction**: the instruction (question)
- **context**: additional context that the AI can use to answer the question
- **response**: the AI's expected response
- **category**: the category of this type of question (see [Dolly](https://huggingface.co/datasets/databricks/databricks-dolly-15k#annotator-guidelines) for more info)
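Downstream filtering by task type only needs the `category` field; a small sketch over illustrative stand-in records (not rows from the actual dataset):

```python
from collections import defaultdict

# Stand-in records mirroring the data fields described above.
records = [
    {"id": 14963, "instruction": "Wat zijn de duurste steden ter wereld?",
     "context": "", "response": "...", "category": "brainstorming"},
    {"id": 1, "instruction": "...", "context": "...", "response": "...",
     "category": "closed_qa"},
]

def by_category(rows):
    """Group item IDs by their `category` field."""
    groups = defaultdict(list)
    for row in rows:
        groups[row["category"]].append(row["id"])
    return dict(groups)
```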
## Dataset Creation
Both the translations and the topics were generated with OpenAI's API for `gpt-3.5-turbo`, using `max_tokens=1024` and `temperature=0` as parameters.
The prompt template to translate the input is (where `src_lang` was English and `tgt_lang` Dutch):
```python
CONVERSATION_TRANSLATION_PROMPT = """You are asked to translate a task's instruction, optional context to the task, and the response to the task, from {src_lang} to {tgt_lang}.
Here are the requirements that you should adhere to:
1. maintain the format: the task consists of a task instruction (marked `instruction: `), optional context to the task (marked `context: `) and response for the task marked with `response: `;
2. do not translate the identifiers `instruction: `, `context: `, and `response: ` but instead copy them to your output;
3. make sure that text is fluent to read and does not contain grammatical errors. Use standard {tgt_lang} without regional bias;
4. translate the instruction and context text using informal, but standard, language;
5. make sure to avoid biases (such as gender bias, grammatical bias, social bias);
6. if the instruction is to correct grammar mistakes or spelling mistakes then you have to generate a similar mistake in the context in {tgt_lang}, and then also generate a corrected output version in the output in {tgt_lang};
7. if the instruction is to translate text from one language to another, then you do not translate the text that needs to be translated in the instruction or the context, nor the translation in the response (just copy them as-is);
8. do not translate code fragments but copy them to your output. If there are English examples, variable names or definitions in code fragments, keep them in English.
Now translate the following task with the requirements set out above. Do not provide an explanation and do not add anything else.\n\n"""
```
The system message was:
```
You are a helpful assistant that translates English to Dutch according to the requirements that are given to you.
```
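Each item is appended to the filled-in template in the `instruction:`/`context:`/`response:` format the prompt requires; a minimal assembly sketch (the shortened template text and function name are assumptions):

```python
# Shortened stand-in for the translation prompt template above.
TEMPLATE = ("You are asked to translate a task's instruction, optional context to the task, "
            "and the response to the task, from {src_lang} to {tgt_lang}.\n\n")

def build_request(instruction, context, response, src_lang="English", tgt_lang="Dutch"):
    # Fill the language placeholders, then append the task in the
    # `instruction:`/`context:`/`response:` format.
    head = TEMPLATE.format(src_lang=src_lang, tgt_lang=tgt_lang)
    return head + f"instruction: {instruction}\ncontext: {context}\nresponse: {response}"
```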
Note that 77 items (0.5%) were not successfully translated. This can either mean that the prompt was too long for the given limit (`max_tokens=1024`) or that the generated translation could not be parsed into `instruction`, `context` and `response` fields. The missing IDs are `[1502, 1812, 1868, 4179, 4541, 6347, 8851, 9321, 10588, 10835, 11257, 12082, 12319, 12471, 12701, 12988, 13066, 13074, 13076, 13181, 13253, 13279, 13313, 13346, 13369, 13446, 13475, 13528, 13546, 13548, 13549, 13558, 13566, 13600, 13603, 13657, 13668, 13733, 13765, 13775, 13801, 13831, 13906, 13922, 13923, 13957, 13967, 13976, 14028, 14031, 14045, 14050, 14082, 14083, 14089, 14110, 14155, 14162, 14181, 14187, 14200, 14221, 14222, 14281, 14473, 14475, 14476, 14587, 14590, 14667, 14685, 14764, 14780, 14808, 14836, 14891, 14966]`.
### Source Data
#### Initial Data Collection and Normalization
Initial data collection by [databricks](https://huggingface.co/datasets/databricks/databricks-dolly-15k). See their repository for more information about this dataset.
## Considerations for Using the Data
Note that the translations in this new dataset have not been verified by humans! Use at your own risk, both in terms of quality and biases.
### Discussion of Biases
As with any machine-generated texts, users should be aware of potential biases that are included in this dataset. Although the prompt specifically includes `make sure to avoid biases (such as gender bias, grammatical bias, social bias)`, of course the impact of such command is not known. It is likely that biases remain in the dataset so use with caution.
### Other Known Limitations
The translation quality has not been verified. Use at your own risk!
### Licensing Information
This repository follows the original databricks license, which is CC BY-SA 3.0 but see below for a specific restriction.
This text was generated (either in part or in full) with GPT-3 (`gpt-3.5-turbo`), OpenAI’s large-scale language-generation model. Upon generating draft language, the author reviewed, edited, and revised the language to their own liking and takes ultimate responsibility for the content of this publication.
If you use this dataset, you must also follow the [Sharing](https://openai.com/policies/sharing-publication-policy) and [Usage](https://openai.com/policies/usage-policies) policies.
As clearly stated in their [Terms of Use](https://openai.com/policies/terms-of-use), specifically 2c.iii, "[you may not] use output from the Services to develop models that compete with OpenAI". That means that you cannot use this dataset to build models that are intended to commercially compete with OpenAI. [As far as I am aware](https://law.stackexchange.com/questions/93308/licensing-material-generated-with-chatgpt), that is a specific restriction that should serve as an addendum to the current license.
### Citation Information
If you use this dataset, please cite:
Vanroy, B. (2023). Dolly 15k Dutch [Data set]. Hugging Face. https://doi.org/10.57967/hf/0785
```bibtex
@misc {https://doi.org/10.57967/hf/0785,
author = {Vanroy, Bram },
title = { {D}olly 15k {D}utch },
year = 2023,
url = { https://huggingface.co/datasets/BramVanroy/dolly-15k-dutch },
doi = { 10.57967/hf/0785 },
publisher = { Hugging Face }
}
```
### Contributions
Thanks to [databricks](https://huggingface.co/datasets/databricks/databricks-dolly-15k) for the initial, high-quality dataset. |
Jumtra/jglue_jnli | 2023-06-21T00:31:30.000Z | [
"region:us"
] | Jumtra | null | null | null | 0 | 11 | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 647839
num_examples: 3079
download_size: 196877
dataset_size: 647839
---
# Dataset Card for "jglue_jnli"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
KaiLv/UDR_CNNDailyMail | 2023-06-21T12:27:24.000Z | [
"region:us"
] | KaiLv | null | null | null | 0 | 11 | ---
dataset_info:
features:
- name: idx
dtype: int64
- name: article
dtype: string
- name: highlights
dtype: string
- name: len_article
dtype: int64
- name: len_highlights
dtype: int64
splits:
- name: train
num_bytes: 453635426
num_examples: 155098
- name: validation
num_bytes: 21468466
num_examples: 7512
- name: test
num_bytes: 18215547
num_examples: 6379
- name: debug
num_bytes: 292572035
num_examples: 100000
download_size: 484340245
dataset_size: 785891474
---
# Dataset Card for "UDR_CNNDailyMail"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
KaiLv/UDR_ComV | 2023-06-21T12:35:55.000Z | [
"region:us"
] | KaiLv | null | null | null | 0 | 11 | ---
dataset_info:
features:
- name: idx
dtype: int64
- name: label
dtype: string
- name: question
dtype: string
- name: choices
dtype: string
- name: len_question
dtype: int64
- name: max_len_choices
dtype: int64
splits:
- name: train
num_bytes: 3487585
num_examples: 9992
- name: test
num_bytes: 337966
num_examples: 1000
- name: debug
num_bytes: 1749561
num_examples: 5000
download_size: 2193065
dataset_size: 5575112
---
# Dataset Card for "UDR_ComV"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
KaiLv/UDR_E2E | 2023-06-21T12:38:25.000Z | [
"region:us"
] | KaiLv | null | null | null | 0 | 11 | ---
dataset_info:
features:
- name: idx
dtype: int64
- name: gem_id
dtype: string
- name: gem_parent_id
dtype: string
- name: question
dtype: string
- name: target
dtype: string
- name: references
dtype: string
splits:
- name: train
num_bytes: 3627637
num_examples: 12563
- name: validation
num_bytes: 1009818
num_examples: 1483
- name: test
num_bytes: 1240499
num_examples: 1847
download_size: 1727722
dataset_size: 5877954
---
# Dataset Card for "UDR_E2E"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
KaiLv/UDR_MNLI | 2023-06-21T12:42:08.000Z | [
"region:us"
] | KaiLv | null | null | null | 0 | 11 | ---
dataset_info:
features:
- name: idx
dtype: int64
- name: label
dtype: int64
- name: label_text
dtype: string
- name: sentence
dtype: string
splits:
- name: train
num_bytes: 77946210
num_examples: 263789
- name: validation
num_bytes: 883710
num_examples: 3000
- name: validation_mm
num_bytes: 910699
num_examples: 3000
- name: debug
num_bytes: 29518034
num_examples: 100000
download_size: 47966458
dataset_size: 109258653
---
# Dataset Card for "UDR_MNLI"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
KaiLv/UDR_MR | 2023-06-21T12:42:19.000Z | [
"region:us"
] | KaiLv | null | null | null | 0 | 11 | ---
dataset_info:
features:
- name: idx
dtype: int64
- name: label
dtype: int64
- name: sentence
dtype: string
splits:
- name: train
num_bytes: 1164193
num_examples: 8662
- name: test
num_bytes: 266849
num_examples: 2000
- name: debug
num_bytes: 672162
num_examples: 5000
download_size: 1379605
dataset_size: 2103204
---
# Dataset Card for "UDR_MR"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
KaiLv/UDR_MTOP | 2023-06-21T12:42:30.000Z | [
"region:us"
] | KaiLv | null | null | null | 0 | 11 | ---
dataset_info:
features:
- name: idx
dtype: string
- name: intent
dtype: string
- name: spans
dtype: string
- name: question
dtype: string
- name: domain
dtype: string
- name: lang
dtype: string
- name: logical_form
dtype: string
- name: tokenized_question
dtype: string
splits:
- name: train
num_bytes: 7507063
num_examples: 15667
- name: validation
num_bytes: 1075137
num_examples: 2235
- name: test
num_bytes: 2117126
num_examples: 4386
download_size: 3541998
dataset_size: 10699326
---
# Dataset Card for "UDR_MTOP"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
KaiLv/UDR_PHP | 2023-06-21T12:43:27.000Z | [
"region:us"
] | KaiLv | null | null | null | 0 | 11 | ---
dataset_info:
features:
- name: idx
dtype: int64
- name: question
dtype: string
- name: target
dtype: string
- name: len_question
dtype: int64
- name: len_target
dtype: int64
splits:
- name: train
num_bytes: 143109431
num_examples: 240851
- name: validation
num_bytes: 7768571
num_examples: 12964
- name: test
num_bytes: 8233379
num_examples: 13998
- name: debug
num_bytes: 59457968
num_examples: 100000
download_size: 91077961
dataset_size: 218569349
---
# Dataset Card for "UDR_PHP"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
KaiLv/UDR_PubMed | 2023-06-21T12:44:37.000Z | [
"region:us"
] | KaiLv | null | null | null | 0 | 11 | ---
dataset_info:
features:
- name: idx
dtype: int64
- name: question
dtype: string
- name: target
dtype: string
- name: len_question
dtype: int64
- name: len_target
dtype: int64
splits:
- name: train
num_bytes: 162901962
num_examples: 56254
- name: validation
num_bytes: 9201246
num_examples: 3187
- name: test
num_bytes: 9799062
num_examples: 3481
- name: debug
num_bytes: 14522497
num_examples: 5000
download_size: 110779150
dataset_size: 196424767
---
# Dataset Card for "UDR_PubMed"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
KaiLv/UDR_Reddit | 2023-06-21T12:46:28.000Z | [
"region:us"
] | KaiLv | null | null | null | 0 | 11 | ---
dataset_info:
features:
- name: idx
dtype: int64
- name: question
dtype: string
- name: target
dtype: string
- name: len_question
dtype: int64
- name: len_target
dtype: int64
splits:
- name: train
num_bytes: 68102488
num_examples: 37643
- name: validation
num_bytes: 1088422
num_examples: 576
- name: test
num_bytes: 1097563
num_examples: 562
- name: debug
num_bytes: 9039576
num_examples: 5000
download_size: 48794822
dataset_size: 79328049
---
# Dataset Card for "UDR_Reddit"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
KaiLv/UDR_RocEnding | 2023-06-21T12:46:45.000Z | [
"region:us"
] | KaiLv | null | null | null | 0 | 11 | ---
dataset_info:
features:
- name: idx
dtype: int64
- name: question
dtype: string
- name: target
dtype: string
- name: len_question
dtype: int64
- name: len_target
dtype: int64
splits:
- name: train
num_bytes: 22821733
num_examples: 87906
- name: validation
num_bytes: 2542405
num_examples: 9807
- name: test
num_bytes: 2542405
num_examples: 9807
- name: debug
num_bytes: 1297842
num_examples: 5000
download_size: 17953696
dataset_size: 29204385
---
# Dataset Card for "UDR_RocEnding"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
KaiLv/UDR_RocStory | 2023-06-21T12:47:02.000Z | [
"region:us"
] | KaiLv | null | null | null | 0 | 11 | ---
dataset_info:
features:
- name: idx
dtype: int64
- name: question
dtype: string
- name: target
dtype: string
- name: len_question
dtype: int64
- name: len_target
dtype: int64
splits:
- name: train
num_bytes: 22735056
num_examples: 87526
- name: validation
num_bytes: 2540477
num_examples: 9799
- name: test
num_bytes: 2540477
num_examples: 9799
- name: debug
num_bytes: 1297855
num_examples: 5000
download_size: 17785834
dataset_size: 29113865
---
# Dataset Card for "UDR_RocStory"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
KaiLv/UDR_RTE | 2023-06-21T12:48:28.000Z | [
"region:us"
] | KaiLv | null | null | null | 0 | 11 | ---
dataset_info:
features:
- name: idx
dtype: int64
- name: label
dtype: int64
- name: label_text
dtype: string
- name: sentence
dtype: string
splits:
- name: train
num_bytes: 951884
num_examples: 2490
- name: validation
num_bytes: 102332
num_examples: 277
download_size: 633925
dataset_size: 1054216
---
# Dataset Card for "UDR_RTE"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
KaiLv/UDR_SMCalFlow | 2023-06-21T12:48:41.000Z | [
"region:us"
] | KaiLv | null | null | null | 0 | 11 | ---
dataset_info:
features:
- name: idx
dtype: string
- name: user_utterance
dtype: string
- name: lispress
dtype: string
- name: fully_typed_lispress
dtype: string
splits:
- name: train
num_bytes: 61962192
num_examples: 58367
- name: validation
num_bytes: 10396934
num_examples: 14751
download_size: 14059335
dataset_size: 72359126
---
# Dataset Card for "UDR_SMCalFlow"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
KaiLv/UDR_SST-5 | 2023-06-21T12:49:23.000Z | [
"region:us"
] | KaiLv | null | null | null | 0 | 11 | ---
dataset_info:
features:
- name: idx
dtype: int64
- name: label
dtype: int64
- name: sentence
dtype: string
splits:
- name: train
num_bytes: 1044651
num_examples: 8534
- name: test
num_bytes: 270516
num_examples: 2210
- name: debug
num_bytes: 612421
num_examples: 5000
download_size: 1268755
dataset_size: 1927588
---
# Dataset Card for "UDR_SST-5"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
KaiLv/UDR_Subj | 2023-06-21T12:49:33.000Z | [
"region:us"
] | KaiLv | null | null | null | 0 | 11 | ---
dataset_info:
features:
- name: idx
dtype: int64
- name: label
dtype: int64
- name: sentence
dtype: string
splits:
- name: train
num_bytes: 1181174
num_examples: 8000
- name: test
num_bytes: 299358
num_examples: 2000
- name: debug
num_bytes: 737874
num_examples: 5000
download_size: 1474560
dataset_size: 2218406
---
# Dataset Card for "UDR_Subj"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
KaiLv/UDR_TREC | 2023-06-21T12:49:41.000Z | [
"region:us"
] | KaiLv | null | null | null | 0 | 11 | ---
dataset_info:
features:
- name: idx
dtype: int64
- name: label
dtype: int64
- name: sentence
dtype: string
splits:
- name: train
num_bytes: 380267
num_examples: 5381
- name: test
num_bytes: 27979
num_examples: 500
- name: debug
num_bytes: 353299
num_examples: 5000
download_size: 465666
dataset_size: 761545
---
# Dataset Card for "UDR_TREC"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
KaiLv/UDR_WikiAuto | 2023-06-21T12:52:19.000Z | [
"region:us"
] | KaiLv | null | null | null | 0 | 11 | ---
dataset_info:
features:
- name: idx
dtype: int64
- name: gem_id
dtype: string
- name: gem_parent_id
dtype: string
- name: source
dtype: string
- name: target
dtype: string
- name: references
list: string
- name: len_source
dtype: int64
- name: len_target
dtype: int64
splits:
- name: train
num_bytes: 171935945
num_examples: 481018
- name: validation
num_bytes: 857630
num_examples: 1999
- name: test_asset
num_bytes: 483952
num_examples: 359
- name: test_turk
num_bytes: 415458
num_examples: 359
- name: test_wiki
num_bytes: 248732
num_examples: 403
- name: debug
num_bytes: 35726046
num_examples: 100000
download_size: 115397698
dataset_size: 209667763
---
# Dataset Card for "UDR_WikiAuto"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
KaiLv/UDR_Yahoo | 2023-06-21T12:52:33.000Z | [
"region:us"
] | KaiLv | null | null | null | 0 | 11 | ---
dataset_info:
features:
- name: idx
dtype: int64
- name: label
dtype: int64
- name: title
dtype: string
- name: content
dtype: string
- name: sentence
dtype: string
- name: len_sentence
dtype: int64
splits:
- name: train
num_bytes: 17812235
num_examples: 29150
- name: test
num_bytes: 1767766
num_examples: 3000
- name: debug
num_bytes: 3032530
num_examples: 5000
download_size: 14936274
dataset_size: 22612531
---
# Dataset Card for "UDR_Yahoo"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
KaiLv/UDR_Yelp | 2023-06-21T12:52:51.000Z | [
"region:us"
] | KaiLv | null | null | null | 0 | 11 | ---
dataset_info:
features:
- name: idx
dtype: int64
- name: label
dtype: int64
- name: sentence
dtype: string
splits:
- name: train
num_bytes: 22696875
num_examples: 30000
- name: test
num_bytes: 2261177
num_examples: 3000
- name: debug
num_bytes: 3745338
num_examples: 5000
download_size: 18407788
dataset_size: 28703390
---
# Dataset Card for "UDR_Yelp"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
ehartford/open-instruct-uncensored | 2023-06-22T18:41:10.000Z | [
"license:apache-2.0",
"region:us"
] | ehartford | null | null | null | 15 | 11 | ---
license: apache-2.0
---
This is [Allen AI's open-instruct dataset.](https://github.com/allenai/open-instruct)
It is used to train the Tulu family of models.
- https://huggingface.co/allenai/tulu-7b
- https://huggingface.co/allenai/tulu-13b
- https://huggingface.co/allenai/tulu-30b
- https://huggingface.co/allenai/tulu-65b
I have done the following:
1) Downloaded the open-instruct repo
2) Executed scripts/prepare_train_data.sh, modified to download the "unfiltered" version of the ShareGPT dataset
3) Merged data/processed/**/*.jsonl into a single "open-instruct.jsonl"
4) Executed my "remove_refusals.py" against that "open-instruct.jsonl" to produce an "open-instruct-uncensored.jsonl"
I am currently training a new model series, ehartford/tulu-uncensored, on this "open-instruct-uncensored.jsonl".
More info to come. |
vilm/lima-vi | 2023-06-24T01:56:11.000Z | [
"region:us"
] | vilm | null | null | null | 1 | 11 | Entry not found |
Splend1dchan/librispeech_asr_individual | 2023-06-25T15:57:52.000Z | [
"task_categories:automatic-speech-recognition",
"task_categories:audio-classification",
"task_ids:speaker-identification",
"annotations_creators:expert-generated",
"language_creators:crowdsourced",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"sou... | Splend1dchan | LibriSpeech is a corpus of approximately 1000 hours of read English speech with a sampling rate of 16 kHz,
prepared by Vassil Panayotov with the assistance of Daniel Povey. The data is derived from read
audiobooks from the LibriVox project, and has been carefully segmented and aligned. | @inproceedings{panayotov2015librispeech,
title={Librispeech: an ASR corpus based on public domain audio books},
author={Panayotov, Vassil and Chen, Guoguo and Povey, Daniel and Khudanpur, Sanjeev},
booktitle={Acoustics, Speech and Signal Processing (ICASSP), 2015 IEEE International Conference on},
pages={5206--5210},
year={2015},
organization={IEEE}
} | null | 0 | 11 | ---
pretty_name: LibriSpeech
annotations_creators:
- expert-generated
language_creators:
- crowdsourced
- expert-generated
language:
- en
license:
- cc-by-4.0
multilinguality:
- monolingual
paperswithcode_id: librispeech-1
size_categories:
- 100K<n<1M
source_datasets:
- original
task_categories:
- automatic-speech-recognition
- audio-classification
task_ids:
- speaker-identification
dataset_info:
- config_name: clean
features:
- name: file
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: text
dtype: string
- name: speaker_id
dtype: int64
- name: chapter_id
dtype: int64
- name: id
dtype: string
splits:
- name: train.100
num_bytes: 6619683041
num_examples: 28539
- name: train.360
num_bytes: 23898214592
num_examples: 104014
- name: validation
num_bytes: 359572231
num_examples: 2703
- name: test
num_bytes: 367705423
num_examples: 2620
download_size: 30121377654
dataset_size: 31245175287
- config_name: other
features:
- name: file
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: text
dtype: string
- name: speaker_id
dtype: int64
- name: chapter_id
dtype: int64
- name: id
dtype: string
splits:
- name: train.500
num_bytes: 31810256902
num_examples: 148688
- name: validation
num_bytes: 337283304
num_examples: 2864
- name: test
num_bytes: 352396474
num_examples: 2939
download_size: 31236565377
dataset_size: 32499936680
- config_name: all
features:
- name: file
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: text
dtype: string
- name: speaker_id
dtype: int64
- name: chapter_id
dtype: int64
- name: id
dtype: string
splits:
- name: train.clean.100
num_bytes: 6627791685
num_examples: 28539
- name: train.clean.360
num_bytes: 23927767570
num_examples: 104014
- name: train.other.500
num_bytes: 31852502880
num_examples: 148688
- name: validation.clean
num_bytes: 359505691
num_examples: 2703
- name: validation.other
num_bytes: 337213112
num_examples: 2864
- name: test.clean
num_bytes: 368449831
num_examples: 2620
- name: test.other
num_bytes: 353231518
num_examples: 2939
download_size: 61357943031
dataset_size: 63826462287
---
# Dataset Card for librispeech_asr
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [LibriSpeech ASR corpus](http://www.openslr.org/12)
- **Repository:** [Needs More Information]
- **Paper:** [LibriSpeech: An ASR Corpus Based On Public Domain Audio Books](https://www.danielpovey.com/files/2015_icassp_librispeech.pdf)
- **Leaderboard:** [The 🤗 Speech Bench](https://huggingface.co/spaces/huggingface/hf-speech-bench)
- **Point of Contact:** [Daniel Povey](mailto:dpovey@gmail.com)
### Dataset Summary
LibriSpeech is a corpus of approximately 1000 hours of 16kHz read English speech, prepared by Vassil Panayotov with the assistance of Daniel Povey. The data is derived from read audiobooks from the LibriVox project, and has been carefully segmented and aligned.
### Supported Tasks and Leaderboards
- `automatic-speech-recognition`, `audio-speaker-identification`: The dataset can be used to train a model for Automatic Speech Recognition (ASR). The model is presented with an audio file and asked to transcribe the audio file to written text. The most common evaluation metric is the word error rate (WER). The task has an active Hugging Face leaderboard which can be found at https://huggingface.co/spaces/huggingface/hf-speech-bench. The leaderboard ranks models uploaded to the Hub based on their WER. An external leaderboard at https://paperswithcode.com/sota/speech-recognition-on-librispeech-test-clean ranks the latest models from research and academia.
### Languages
The audio is in English. There are two configurations: `clean` and `other`.
The speakers in the corpus were ranked according to the WER of the transcripts of a model trained on
a different dataset, and were divided roughly in the middle,
with the lower-WER speakers designated as "clean" and the higher WER speakers designated as "other".
## Dataset Structure
### Data Instances
A typical data point comprises the path to the audio file, usually called `file` and its transcription, called `text`. Some additional information about the speaker and the passage which contains the transcription is provided.
```
{'chapter_id': 141231,
'file': '/home/patrick/.cache/huggingface/datasets/downloads/extracted/b7ded9969e09942ab65313e691e6fc2e12066192ee8527e21d634aca128afbe2/dev_clean/1272/141231/1272-141231-0000.flac',
'audio': {'path': '/home/patrick/.cache/huggingface/datasets/downloads/extracted/b7ded9969e09942ab65313e691e6fc2e12066192ee8527e21d634aca128afbe2/dev_clean/1272/141231/1272-141231-0000.flac',
'array': array([-0.00048828, -0.00018311, -0.00137329, ..., 0.00079346,
0.00091553, 0.00085449], dtype=float32),
'sampling_rate': 16000},
'id': '1272-141231-0000',
'speaker_id': 1272,
'text': 'A MAN SAID TO THE UNIVERSE SIR I EXIST'}
```
### Data Fields
- file: A path to the downloaded audio file in .flac format.
- audio: A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column: `dataset[0]["audio"]` the audio file is automatically decoded and resampled to `dataset.features["audio"].sampling_rate`. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the `"audio"` column, *i.e.* `dataset[0]["audio"]` should **always** be preferred over `dataset["audio"][0]`.
- text: the transcription of the audio file.
- id: unique id of the data sample.
- speaker_id: unique id of the speaker. The same speaker id can be found for multiple data samples.
- chapter_id: id of the audiobook chapter which includes the transcription.
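The access-order advice above (`dataset[0]["audio"]` over `dataset["audio"][0]`) can be illustrated with a toy sketch. This is pure Python with no real audio decoding, and `LazyAudioColumn` is a hypothetical class, not part of the `datasets` API; it only simulates why column-first access is costly when cells are decoded lazily:

```python
class LazyAudioColumn:
    """Toy stand-in for a lazily decoded `audio` column (hypothetical class,
    not the `datasets` API): cells are decoded only when accessed."""

    def __init__(self, paths):
        self.paths = paths
        self.decoded = 0  # counts simulated decode/resample operations

    def _decode(self, path):
        self.decoded += 1
        return {"path": path, "array": [0.0], "sampling_rate": 16000}

    def row(self, i):
        """Like dataset[i]["audio"]: decodes a single cell."""
        return self._decode(self.paths[i])

    def column(self):
        """Like dataset["audio"]: decodes every cell in the column."""
        return [self._decode(p) for p in self.paths]


col = LazyAudioColumn([f"{i:04d}.flac" for i in range(1000)])
col.row(0)               # dataset[0]["audio"] -> one decode
assert col.decoded == 1
col.decoded = 0
_ = col.column()[0]      # dataset["audio"][0] -> 1000 decodes for one sample
assert col.decoded == 1000
```

The real `datasets` library behaves analogously: indexing the row first decodes only the audio you actually touch.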
### Data Splits
The size of the corpus makes it impractical, or at least inconvenient
for some users, to distribute it as a single large archive. Thus the
training portion of the corpus is split into three subsets, with approximate size 100, 360 and 500 hours respectively.
A simple automatic
procedure was used to select the audio in the first two sets to be, on
average, of higher recording quality and with accents closer to US
English. An acoustic model was trained on WSJ’s si-84 data subset
and was used to recognize the audio in the corpus, using a bigram
LM estimated on the text of the respective books. We computed the
Word Error Rate (WER) of this automatic transcript relative to our
reference transcripts obtained from the book texts.
The speakers in the corpus were ranked according to the WER of
the WSJ model’s transcripts, and were divided roughly in the middle,
with the lower-WER speakers designated as "clean" and the higher-WER speakers designated as "other".
For "clean", the data is split into train, validation, and test set. The train set is further split into train.100 and train.360
respectively accounting for 100h and 360h of the training data.
For "other", the data is split into train, validation, and test set. The train set contains approximately 500h of recorded speech.
| | Train.500 | Train.360 | Train.100 | Valid | Test |
| ----- | ------ | ----- | ---- | ---- | ---- |
| clean | - | 104014 | 28539 | 2703 | 2620|
| other | 148688 | - | - | 2864 | 2939 |
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
The dataset consists of people who have donated their voice online. You agree to not attempt to determine the identity of speakers in this dataset.
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
The dataset was initially created by Vassil Panayotov, Guoguo Chen, Daniel Povey, and Sanjeev Khudanpur.
### Licensing Information
[CC BY 4.0](https://creativecommons.org/licenses/by/4.0/)
### Citation Information
```
@inproceedings{panayotov2015librispeech,
title={Librispeech: an ASR corpus based on public domain audio books},
author={Panayotov, Vassil and Chen, Guoguo and Povey, Daniel and Khudanpur, Sanjeev},
booktitle={Acoustics, Speech and Signal Processing (ICASSP), 2015 IEEE International Conference on},
pages={5206--5210},
year={2015},
organization={IEEE}
}
```
### Contributions
Thanks to [@patrickvonplaten](https://github.com/patrickvonplaten) for adding this dataset. |
tonytan48/TempReason | 2023-06-28T07:26:17.000Z | [
"task_categories:question-answering",
"size_categories:10K<n<100K",
"language:en",
"license:cc-by-sa-3.0",
"region:us"
] | tonytan48 | null | null | null | 2 | 11 | ---
license: cc-by-sa-3.0
task_categories:
- question-answering
language:
- en
size_categories:
- 10K<n<100K
---
The TempReason dataset to evaluate the temporal reasoning capability of Large Language Models.
From paper "Towards Benchmarking and Improving the Temporal Reasoning Capability of Large Language Models" in ACL 2023. |
mtc/factcc_annotated_eval_data | 2023-07-07T15:28:16.000Z | [
"region:us"
] | mtc | null | null | null | 0 | 11 | ---
dataset_info:
features:
- name: claim
dtype: string
- name: label
dtype: string
- name: filepath
dtype: string
- name: id
dtype: string
- name: text
dtype: string
splits:
- name: validation
num_bytes: 3261639
num_examples: 931
- name: test
num_bytes: 2060131
num_examples: 503
download_size: 1191194
dataset_size: 5321770
---
# Dataset Card for "factcc_annotated_eval_data"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
FredZhang7/malicious-website-features-2.4M | 2023-08-14T05:21:51.000Z | [
"task_categories:text-classification",
"task_categories:feature-extraction",
"task_categories:tabular-classification",
"size_categories:1M<n<10M",
"language:no",
"language:af",
"language:en",
"language:et",
"language:sw",
"language:sv",
"language:sq",
"language:de",
"language:ca",
"languag... | FredZhang7 | null | null | null | 2 | 11 | ---
license: apache-2.0
task_categories:
- text-classification
- feature-extraction
- tabular-classification
language:
- 'no'
- af
- en
- et
- sw
- sv
- sq
- de
- ca
- hu
- da
- tl
- so
- fi
- fr
- cs
- hr
- cy
- es
- sl
- tr
- pl
- pt
- nl
- id
- sk
- lt
- lv
- vi
- it
- ro
- ru
- mk
- bg
- th
- ja
- ko
- multilingual
size_categories:
- 1M<n<10M
---
**Important Notice:**
- A subset of the URL dataset is from Kaggle, and the Kaggle datasets contained 10%-15% mislabelled data. See [this discussion I opened](https://www.kaggle.com/datasets/sid321axn/malicious-urls-dataset/discussion/431505) for some false positives. I have contacted Kaggle regarding their erroneous "Usability" score calculation for these unreliable datasets.
- The feature extraction methods shown here are not robust at all in 2023, and there are even silly mistakes in three functions: `not_indexed_by_google`, `domain_registration_length`, and `age_of_domain`.
<br>
The *features* dataset is original, and my feature extraction method is covered in [feature_extraction.py](./feature_extraction.py).
To extract features from a website, simply pass the URL and label to `collect_data()`. The features are saved to `phishing_detection_dataset.csv` locally by default.
In the *features* dataset, there are 911,180 websites that were online at the time of data collection. The plots below show the regression lines and correlation coefficients between 22+ extracted features and whether the URL is malicious.
If we could plot the lifespan of URLs, we could see that the oldest website has been online since Nov 7th, 2008, while the most recent phishing websites appeared as late as July 10th, 2023.
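For a flavor of what URL-level feature extraction looks like, here is a minimal lexical sketch. The feature names and selection are hypothetical illustrations, not this dataset's actual feature set, which is larger and partly content-based (the real method lives in `feature_extraction.py`):

```python
from urllib.parse import urlparse

def lexical_features(url: str) -> dict:
    """A few simple lexical URL features (hypothetical names and selection;
    the dataset's real feature set is larger and partly content-based)."""
    parsed = urlparse(url)
    host = parsed.netloc
    return {
        "url_length": len(url),
        "num_dots_in_host": host.count("."),   # many subdomains is suspicious
        "uses_https": parsed.scheme == "https",
        "has_at_symbol": "@" in url,           # '@' can disguise the real host
        "path_depth": len([p for p in parsed.path.split("/") if p]),
    }

feats = lexical_features("http://login.example.com.evil.tld/verify/account")
assert feats["uses_https"] is False
assert feats["num_dots_in_host"] == 4
assert feats["path_depth"] == 2
```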
## Malicious URL Categories
- Defacement
- Malware
- Phishing
## Data Analysis
Here are two images showing the correlation coefficient and correlation of determination between predictor values and the target value `is_malicious`.


Let's examine the correlations one by one and cross out any unreasonable or insignificant correlations.
| Variable | Justification for Crossing Out |
|-----------------------------|------------------------------------- |
| ~~redirects~~ | contradicts previous research (as redirects increase, is_malicious tends to decrease by a little) |
| ~~not_indexed_by_google~~ | 0.00 correlation |
| ~~email_submission~~ | contradicts previous research |
| request_url_percentage | |
| issuer | |
| certificate_age | |
| ~~url_anchor_percentage~~ | contradicts previous research |
| ~~meta_percentage~~ | 0.00 correlation |
| script_percentage | |
| link_percentage | |
| ~~mouseover_changes~~ | contradicts previous research & 0.00 correlation |
| ~~right_clicked_disabled~~ | contradicts previous research & 0.00 correlation |
| ~~popup_window_has_text_field~~ | contradicts previous research |
| ~~use_iframe~~ | contradicts previous research |
| ~~has_suspicious_ports~~ | contradicts previous research |
| ~~external_favicons~~ | contradicts previous research |
| TTL (Time to Live) | |
| ip_address_count | |
| ~~TXT_record~~ | all websites had a TXT record |
| ~~check_sfh~~ | contradicts previous research |
| count_domain_occurrences | |
| domain_registration_length | |
| abnormal_url | |
| age_of_domain | |
| page_rank_decimal | |
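The correlation coefficients in the images above can be reproduced for any single feature with Pearson's r against the binary `is_malicious` target (for a binary target this is the point-biserial correlation). A minimal sketch with toy numbers, not values from the dataset:

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient r; r**2 is the coefficient of
    determination shown in the second image."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Toy numbers (not from the dataset): domain age in days vs. is_malicious.
age_of_domain = [3650, 2900, 2000, 40, 15, 7]
is_malicious  = [0,    0,    0,    1,  1, 1]
r = pearson_r(age_of_domain, is_malicious)
assert -1.0 <= r <= 1.0
assert r < 0  # in this toy sample, older domains skew benign
```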
## Pre-training Ideas
For training, I split the classification task into two stages in anticipation of the limited availability of online phishing websites due to their short lifespan, as well as the possibility that research done on phishing is not up-to-date:
1. a small multilingual BERT model to output the confidence level of a URL being malicious to model #2, by finetuning on 2,436,727 legitimate and malicious URLs
2. (probably) LightGBM to analyze the confidence level, along with roughly 10 extracted features
This way, I can make the most out of the limited phishing websites available.
## Source of the URLs
- https://moz.com/top500
- https://phishtank.org/phish_search.php?valid=y&active=y&Search=Search
- https://www.kaggle.com/datasets/siddharthkumar25/malicious-and-benign-urls
- https://www.kaggle.com/datasets/sid321axn/malicious-urls-dataset
- https://github.com/ESDAUNG/PhishDataset
- https://github.com/JPCERTCC/phishurl-list
- https://github.com/Dogino/Discord-Phishing-URLs
## Reference
- https://www.kaggle.com/datasets/akashkr/phishing-website-dataset
- https://www.kaggle.com/datasets/shashwatwork/web-page-phishing-detection-dataset
- https://www.kaggle.com/datasets/aman9d/phishing-data
## Side notes
- Cloudflare offers an [API for phishing URL scanning](https://developers.cloudflare.com/api/operations/phishing-url-information-get-results-for-a-url-scan), with a generous global rate limit of 1200 requests every 5 minutes. |
Dmini/FFHQ-64x64 | 2023-07-21T02:36:30.000Z | [
"region:us"
] | Dmini | null | null | null | 0 | 11 | Entry not found |
jxu9001/conll_v3 | 2023-07-25T15:09:12.000Z | [
"region:us"
] | jxu9001 | null | null | null | 0 | 11 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
dataset_info:
features:
- name: tokens
sequence: string
- name: tags
sequence: int64
splits:
- name: train
num_bytes: 3445822
num_examples: 14041
- name: validation
num_bytes: 866541
num_examples: 3250
- name: test
num_bytes: 784956
num_examples: 3453
download_size: 1247438
dataset_size: 5097319
---
# Dataset Card for "conll_v3"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
vlofgren/cabrita-and-guanaco-PTBR | 2023-07-28T18:43:45.000Z | [
"license:openrail",
"region:us"
] | vlofgren | null | null | null | 0 | 11 | ---
license: openrail
---
|
kaxap/pg-wikiSQL-sql-instructions-80k | 2023-07-29T22:39:38.000Z | [
"license:bsd-3-clause",
"region:us"
] | kaxap | null | null | null | 1 | 11 | ---
license: bsd-3-clause
---
Converted, cleaned, and syntax-checked [WikiSQL](https://github.com/salesforce/WikiSQL/) dataset.
Data points containing non-Latin column names were removed.
Resulting SQL statements were adapted for Postgres syntax and conventions.
Each SQL statement, including `CREATE TABLE` statements were syntax checked with [pgsanity](https://github.com/markdrago/pgsanity).
# Citations
```
@article{zhongSeq2SQL2017,
author = {Victor Zhong and
Caiming Xiong and
Richard Socher},
title = {Seq2SQL: Generating Structured Queries from Natural Language using
Reinforcement Learning},
journal = {CoRR},
volume = {abs/1709.00103},
year = {2017}
}
``` |
gardner/nz_legislation | 2023-07-30T09:35:26.000Z | [
"size_categories:1K<n<10K",
"language:en",
"license:other",
"region:us"
] | gardner | null | null | null | 0 | 11 | ---
license: other
language:
- en
pretty_name: NZ Legislation
size_categories:
- 1K<n<10K
---
## Overview
This is an initial version of public acts collected from legislation.govt.nz. The preamble sections of the acts have been excluded from this dataset.
Feedback is welcome: gardner@bickford.nz
The data is in `jsonl` format and each line contains:
```json
{
"id": "DLM415522",
"year": "1974",
"title": "Ngarimu VC and 28th (Maori) Battalion Memorial Scholarship Fund Amendment Act 1974",
"text": "1: Short Title\nThis Act may be cited as the Ngarimu VC and 28th (Maori) Battalion Memorial Scholarship Fund Amendment Act 1974, and shall be read together with and deemed part of the Ngarimu VC and 28th (Maori) Battalion Memorial Scholarship Fund Act 1945\n2:\n3:\n4: New sections substituted\n1: This subsection substituted section 14 section 15\n2: Notwithstanding anything in subsection (1) subsection (1)\n3: Notwithstanding anything in section 15 subsection (1)"
}
```
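Since the data is one JSON object per line, it can be read with the standard library alone. A minimal sketch (the filename `nz_legislation.jsonl` in the comment is a hypothetical example):

```python
import json

# One line of the jsonl file, as in the example above (text abridged).
line = ('{"id": "DLM415522", "year": "1974", '
        '"title": "Ngarimu VC and 28th (Maori) Battalion Memorial '
        'Scholarship Fund Amendment Act 1974", '
        '"text": "1: Short Title\\n..."}')

record = json.loads(line)
assert record["id"] == "DLM415522"
assert record["year"] == "1974"

# Reading a whole file would look like:
# with open("nz_legislation.jsonl", encoding="utf-8") as f:
#     acts = [json.loads(l) for l in f if l.strip()]
```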
## Reproduction
The code to reproduce this dataset can be found at https://github.com/gardner/nz_legislation
## Copyright
The legislation text data in this dataset repository has **no copyright**.
From the Legislation.govt.nz [website](https://legislation.govt.nz/about.aspx#copyright):
> There is no copyright in New Zealand Acts, Bills, or the secondary legislation published on this website (see [section 27 of the Copyright Act 1994](https://legislation.govt.nz/act/public/1994/0143/latest/DLM345939.html)). All Acts, Bills, Supplementary Order Papers, and secondary legislation published on this website may be reproduced free of charge in any format or media without requiring specific permission.
|
dyngnosis/function_names_v2 | 2023-08-02T16:41:15.000Z | [
"region:us"
] | dyngnosis | null | null | null | 1 | 11 | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 81857312.0
num_examples: 60464
- name: test
num_bytes: 20464328.0
num_examples: 15116
download_size: 0
dataset_size: 102321640.0
---
# Dataset Card for "function_names_v2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
aalexchengg/squig_distant_filtered | 2023-08-03T00:34:21.000Z | [
"region:us"
] | aalexchengg | null | null | null | 0 | 11 | ---
dataset_info:
features:
- name: Title
dtype: string
- name: input_ids
sequence: int32
- name: token_type_ids
sequence: int8
- name: attention_mask
sequence: int8
- name: raw predictions
sequence: int64
- name: words
dtype: string
- name: value predictions
sequence: string
- name: word labels
sequence: string
splits:
- name: train
num_bytes: 24944793.437416665
num_examples: 6179
download_size: 1779276
dataset_size: 24944793.437416665
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "squig_distant_filtered"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Tanmay09516/LLM-Science-exam-dataset | 2023-08-03T07:50:32.000Z | [
"region:us"
] | Tanmay09516 | null | null | null | 0 | 11 | Entry not found |
nlplabtdtu/health_qa | 2023-08-04T03:52:32.000Z | [
"region:us"
] | nlplabtdtu | null | null | null | 1 | 11 | Entry not found |
FanChen0116/19100_chat_16x_slot_pvi | 2023-09-22T17:30:57.000Z | [
"region:us"
] | FanChen0116 | null | null | null | 0 | 11 | ---
dataset_info:
features:
- name: id
dtype: int64
- name: tokens
sequence: string
- name: labels
sequence:
class_label:
names:
'0': O
'1': I-time
'2': B-date
'3': B-last_name
'4': B-people
'5': I-date
'6': I-people
'7': I-last_name
'8': I-first_name
'9': B-first_name
'10': B-time
- name: request_slot
sequence: string
splits:
- name: train
num_bytes: 96406
num_examples: 512
- name: validation
num_bytes: 5405
num_examples: 32
- name: test
num_bytes: 646729
num_examples: 3731
download_size: 18172
dataset_size: 748540
---
# Dataset Card for "19100_chat_16x_slot_pvi"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
FanChen0116/19100_chat_32x_slot_pvi | 2023-09-22T17:59:16.000Z | [
"region:us"
] | FanChen0116 | null | null | null | 0 | 11 | ---
dataset_info:
features:
- name: id
dtype: int64
- name: tokens
sequence: string
- name: labels
sequence:
class_label:
names:
'0': O
'1': I-time
'2': B-date
'3': B-last_name
'4': B-people
'5': I-date
'6': I-people
'7': I-last_name
'8': I-first_name
'9': B-first_name
'10': B-time
- name: request_slot
sequence: string
splits:
- name: train
num_bytes: 185240
num_examples: 1024
- name: validation
num_bytes: 5405
num_examples: 32
- name: test
num_bytes: 646729
num_examples: 3731
download_size: 30827
dataset_size: 837374
---
# Dataset Card for "19100_chat_32x_slot_pvi"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
warshakhan/donut_vqa_synHMP | 2023-08-08T07:52:22.000Z | [
"license:unknown",
"region:us"
] | warshakhan | null | null | null | 0 | 11 | ---
license: unknown
dataset_info:
features:
- name: image
dtype: image
- name: ground_truth
dtype: string
splits:
- name: train
num_bytes: 1394974419.84
num_examples: 1920
- name: test
num_bytes: 332983681.8
num_examples: 480
download_size: 1665117509
dataset_size: 1727958101.6399999
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
rombodawg/LosslessMegaCodeTrainingV2_1m_Evol_Uncensored | 2023-08-16T02:29:25.000Z | [
"license:other",
"region:us"
] | rombodawg | null | null | null | 20 | 11 | ---
license: other
---
_________________________________________________________________________________
VERSION 3 IS RELEASED DOWNLOAD HERE:
- https://huggingface.co/datasets/rombodawg/LosslessMegaCodeTrainingV3_2.2m_Evol
_________________________________________________________________________________
Updated/Uncensored version 1 here: https://huggingface.co/datasets/rombodawg/2XUNCENSORED_MegaCodeTraining188k
Non-code instruct training here: https://huggingface.co/datasets/rombodawg/2XUNCENSORED_alpaca_840k_Evol_USER_ASSIS
Legacy version 1 code training here: https://huggingface.co/datasets/rombodawg/MegaCodeTraining200k
This is the ultimate code training data, created to be lossless so the AI model does not lose any other abilities that it had previously (such as logical skills) after training on this dataset.
The reason this dataset is so large is that, as the model learns to code, it keeps practicing regular instruction following so as not to lose previously learned abilities.
This is the outcome of all my work gathering data, testing AI models, and discovering what makes coding models perform well (or poorly) and why.
If none of this makes sense, think of it this way: I took the old MegaCoding dataset, added roughly 8x more data that is purely instruction-based (non-coding), then ran a script to remove tens of thousands of lines of instructions that were deemed censored. This dataset is the result of that process.
This dataset is the combination of my 2 previous datasets found below:
Coding:
https://huggingface.co/datasets/rombodawg/2XUNCENSORED_MegaCodeTraining188k
Instruction following:
https://huggingface.co/datasets/rombodawg/2XUNCENSORED_alpaca_840k_Evol_USER_ASSIST |
FanChen0116/19100_chat_8x_slot_pvi | 2023-09-22T18:04:02.000Z | [
"region:us"
] | FanChen0116 | null | null | null | 0 | 11 | ---
dataset_info:
features:
- name: id
dtype: int64
- name: tokens
sequence: string
- name: labels
sequence:
class_label:
names:
'0': O
'1': I-time
'2': B-date
'3': B-last_name
'4': B-people
'5': I-date
'6': I-people
'7': I-last_name
'8': I-first_name
'9': B-first_name
'10': B-time
- name: request_slot
sequence: string
splits:
- name: train
num_bytes: 47424
num_examples: 256
- name: validation
num_bytes: 5405
num_examples: 32
- name: test
num_bytes: 646729
num_examples: 3731
download_size: 11313
dataset_size: 699558
---
# Dataset Card for "19100_chat_8x_slot_pvi"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
hakatashi/hakatashi-pixiv-bookmark-deepdanbooru | 2023-08-07T05:38:17.000Z | [
"task_categories:image-classification",
"task_categories:tabular-classification",
"size_categories:100K<n<1M",
"art",
"region:us"
] | hakatashi | null | null | null | 2 | 11 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
- split: validation
path: data/validation-*
dataset_info:
features:
- name: tag_probs
sequence: float32
- name: class
dtype:
class_label:
names:
'0': not_bookmarked
'1': bookmarked_public
'2': bookmarked_private
splits:
- name: train
num_bytes: 4301053452
num_examples: 179121
- name: test
num_bytes: 1433684484
num_examples: 59707
- name: validation
num_bytes: 1433708496
num_examples: 59708
download_size: 7351682183
dataset_size: 7168446432
task_categories:
- image-classification
- tabular-classification
tags:
- art
size_categories:
- 100K<n<1M
---
The dataset for training classification model of pixiv artworks by my preference.
## Schema
* tag_probs: List of probabilities for each tag. Preprocessed by [RF5/danbooru-pretrained](https://github.com/RF5/danbooru-pretrained) model. The index of each probability corresponds to the index of the tag in the [class_names_6000.json](https://github.com/RF5/danbooru-pretrained/blob/master/config/class_names_6000.json) file.
* class:
* not_bookmarked (0): Generated from images randomly-sampled from [animelover/danbooru2022](https://huggingface.co/datasets/animelover/danbooru2022) dataset. The images are filtered in advance to the post with pixiv source.
* bookmarked_public (1): Generated from publicly bookmarked images of [hakatashi](https://twitter.com/hakatashi).
* bookmarked_private (2): Generated from privately bookmarked images of [hakatashi](https://twitter.com/hakatashi).
## Stats
train:test:validation = 6:2:2
* not_bookmarked (0): 202,290 images
* bookmarked_public (1): 73,587 images
* bookmarked_private (2): 22,659 images
## Usage
```
>>> from datasets import load_dataset
>>> dataset = load_dataset("hakatashi/hakatashi-pixiv-bookmark-deepdanbooru")
>>> dataset
DatasetDict({
test: Dataset({
features: ['tag_probs', 'class'],
num_rows: 59707
})
train: Dataset({
features: ['tag_probs', 'class'],
num_rows: 179121
})
validation: Dataset({
features: ['tag_probs', 'class'],
num_rows: 59708
})
})
>>> dataset['train'].features
{'tag_probs': Sequence(feature=Value(dtype='float32', id=None), length=-1, id=None),
'class': ClassLabel(names=['not_bookmarked', 'bookmarked_public', 'bookmarked_private'], id=None)}
``` |
pie/scidtb_argmin | 2023-08-08T10:26:52.000Z | [
"region:us"
] | pie | null | null | null | 0 | 11 | Entry not found |
vsd-benchmark/vsd-fashion | 2023-08-30T21:20:03.000Z | [
"license:mit",
"vsd",
"arxiv:2308.14753",
"region:us"
] | vsd-benchmark | Visual similarities discovery (VSD) is an important task
with broad e-commerce applications. Given an image of
a certain object, the goal of VSD is to retrieve images of
different objects with high perceptual visual similarity.
Although being a highly addressed problem, the evaluation
of proposed methods for VSD is often based on a proxy of
an identification-retrieval task, evaluating the ability of a
model to retrieve different images of the same object. We
posit that evaluating VSD methods based on identification
tasks is limited, and faithful evaluation must rely on expert
annotations. In this paper, we introduce the first large-scale
fashion visual similarity benchmark dataset, consisting of
more than 110K expert-annotated image pairs. | @InProceedings{huggingface:dataset,
title = {A great new dataset},
author={huggingface, Inc.
},
year={2020}
} | null | 1 | 11 | ---
license: mit
tags:
- vsd
pretty_name: VSD Fashion
---
# VSD Fashion Dataset
## Description
A dataset for visual similarity tasks that includes various zero-shot and trainable tasks from the VSD paper.
- **Repository: https://github.com/vsd-benchmark/vsd**
- **Paper: https://arxiv.org/abs/2308.14753**
- **Leaderboard: https://vsd-benchmark.github.io/vsd/**
### Dataset Summary
Visual similarity measures the perceptual agreement between two objects based on their visual appearance. Two objects can be similar or dissimilar based on their color, shape, size, pattern, utility, and more. In fact, all of these factors and many others take part in determining the degree of visual similarity between two objects with varying importance. Therefore, defining the perceived visual similarity based on these factors is challenging. Nonetheless, learning visual similarities is a key building block for many practical utilities such as search, recommendations, etc.
### Supported Tasks and Leaderboards
The dataset contains several tasks (displayed in the leaderboard site):
1. VSD Fashion Dataset
a. In Catalog Retrieval
1. Zero Shot Retrieval Task. (**in_catalog_retrieval_zero_shot**)
2. Open Catalog Training Retrieval Task - Same queries appear in train and test. (**in_catalog_open_catalog**)
3. Closed Catalog Training Retrieval Task - Queries in train and test do not intersect. (**in_catalog_closed_catalog**)
b. Consumer-Catalog (Wild) Retrieval
1. Zero Shot Retrieval Task. (**consumer-catalog_wild_zero_shot**)
More information about each task can be found in the leaderboard and the paper.
For information on how to report your model metrics to our leaderboard check out our [example model](https://huggingface.co/vsd-benchmark/vsd_example/blob/main/README.md) and [HuggingFace's instructions](https://huggingface.co/docs/hub/model-cards#evaluation-results).
### Usage
#### Dataset Images
The dataset itself contains only annotations, and is based on DeepFashion dataset.
The DeepFashion dataset can be downloaded from [here](https://mmlab.ie.cuhk.edu.hk/projects/DeepFashion.html) and extracted into a folder named 'img'.
The In-shop Clothes Retrieval Benchmark and Consumer-to-shop Clothes Retrieval Benchmark should be downloaded.
There should be six folders in ```datasets/img``` after extraction:
```
datasets/img/CLOTHING - Consumer-to-shop Clothes Retrieval Benchmark
datasets/img/DRESSES - Consumer-to-shop Clothes Retrieval Benchmark
datasets/img/TOPS - Consumer-to-shop Clothes Retrieval Benchmark
datasets/img/TROUSERS - Consumer-to-shop Clothes Retrieval Benchmark
datasets/img/MEN - In-shop Clothes Retrieval Benchmark
datasets/img/WOMEN - In-shop Clothes Retrieval Benchmark
```
#### HuggingFace datasets api
In order to load the dataset annotations through HuggingFace hub, choose a task and run
```python
ds = load_dataset('vsd-benchmark/vsd-fashion', 'your_chosen_task', image_folder='./image_folder')
```
Where 'your_chosen_task' should be one of the tasks mentioned above, and './image_folder' should be the path that contains the 'img' folder.
Note that the zero-shot tasks only have a 'test' set.
#### Using the annotations directly
Instead of using 'datasets' loader, you can download the annotation files directly using:
```python
dataset_builder = load_dataset_builder('vsd-benchmark/vsd-fashion', task, image_folder=image_folder)
splits = dataset_builder._split_generators(None)
```
### License
VSD Fashion is licensed under MIT License.
### Citation Information
```
@misc{barkan2023efficient,
title={Efficient Discovery and Effective Evaluation of Visual Perceptual Similarity: A Benchmark and Beyond},
author={Oren Barkan and Tal Reiss and Jonathan Weill and Ori Katz and Roy Hirsch and Itzik Malkiel and Noam Koenigstein},
year={2023},
eprint={2308.14753},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
``` |
brando/debug0_af | 2023-08-10T23:10:04.000Z | [
"license:apache-2.0",
"region:us"
] | brando | null | null | null | 0 | 11 | ---
license: apache-2.0
---
If you find this useful, please cite it:
```
@software{brando2021ultimateutils,
author={Brando Miranda},
title={Ultimate Utils - the Ultimate Utils library for Machine Learning and Artificial Intelligence},
url={https://github.com/brando90/ultimate-utils},
year={2021}
}
```
It's not supposed to be used by people yet. It's under the Apache license too.
Sneka/test | 2023-09-30T06:34:03.000Z | [
"region:us"
] | Sneka | null | null | null | 0 | 11 | Entry not found |
redwoodresearch/diamonds-seed0 | 2023-08-17T04:37:38.000Z | [
"region:us"
] | redwoodresearch | null | null | null | 0 | 11 | ---
dataset_info:
features:
- name: text
dtype: string
- name: is_correct
dtype: bool
- name: is_clean
dtype: bool
- name: measurements
sequence: bool
- name: difficulty
dtype: int64
splits:
- name: train
num_bytes: 61933513
num_examples: 25000
- name: validation
num_bytes: 20236618
num_examples: 7989
- name: train_for_val
num_bytes: 7471522
num_examples: 2997
download_size: 1091489
dataset_size: 89641653
---
# Dataset Card for "diamonds-seed0"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
claudios/dypybench_functions | 2023-08-22T19:20:18.000Z | [
"license:cc-by-4.0",
"code",
"region:us"
] | claudios | null | null | null | 0 | 11 | ---
license: cc-by-4.0
pretty_name: DyPyBench Functions
tags:
- code
dataset_info:
features:
- name: nwo
dtype: string
- name: sha
dtype: string
- name: path
dtype: string
- name: identifier
dtype: string
- name: parameters
dtype: string
- name: return_statement
dtype: string
- name: docstring
dtype: string
- name: docstring_summary
dtype: string
- name: func_begin
dtype: int64
- name: func_end
dtype: int64
- name: function
dtype: string
- name: url
dtype: string
- name: project
dtype: int64
- name: executed_lines
sequence: int64
- name: executed_lines_pc
dtype: float64
- name: missing_lines
sequence: int64
- name: missing_lines_pc
dtype: float64
- name: covered
dtype: bool
- name: filecoverage
dtype: float64
- name: function_lines
dtype: int64
- name: mccabe
dtype: int64
- name: coverage
dtype: float64
- name: docstring_lines
dtype: int64
- name: function_nodoc
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 22383711
num_examples: 11168
download_size: 6805239
dataset_size: 22383711
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# DyPyBench Functions Datasets
[DyPyBench](https://zenodo.org/record/7886366) is a dataset constructed by Piyush Krishan Bajaj at the Software Lab, Institute of Software Engineering, University of Stuttgart. It contains 50 open source projects from GitHub.
We used [Nathan Cooper's](https://github.com/ncoop57/function_parser) `function_parser` tool, based on GitHub's CodeSearchNet `function_parser`, to extract all functions from all the projects, excluding library functions in the virtualenv. We also ran all tests in DyPyBench and produced a coverage report in JSON. Not all projects resulted in a coverage report due to project-specific coverage report settings.
The columns provided are as follows:
| Column | Type | Notes |
| ----------------- | ---------- | ----------------------------------------------------------------------------------------------- |
| id                | Int64      | Unique id of the function                                                                       |
| project | Int64 | DyPyBench project id |
| nwo | string | Project name in repo/project format |
| sha | string | SHA commit hash |
| url | string | GitHub URL to function lines at commit |
| path | string | Path of file containing function relative to project root |
| func_begin | Int64 | Begin of function line number in source file |
| func_end | Int64 | End of function line number in source file |
| function_lines | Int64 | Function line count |
| identifier | string | Function identifier |
| parameters | string | Function parameters |
| function | string | Source code of function including docstring |
| function_nodoc | string | Source code of function without docstring |
| docstring | string | Function docstring |
| docstring_lines | Int64 | Line count of docstring |
| docstring_summary | string | Function docstring summary |
| return_statement | string | Function return statement |
| filecoverage | Float64 | If coverage available, coverage percentage of file function is from |
| executed_lines | array[int] | If coverage available, executed lines relative to function lines (i.e. [0,1,2,...]) |
| executed_lines_pc | Float64 | If coverage available, executed line count over total function line count |
| missing_lines | array[int] | If coverage available, missing (unexecuted) lines relative to function lines (i.e. [0,1,2,...]) |
| missing_lines_pc | Float64 | If coverage available, missing line count over total function line count |
| covered | boolean | True if all lines executed and/or no lines missing |
| mccabe | Int64 | McCabe complexity of function |
| coverage | Float64 | Function coverage percentage (1-missing lines %) |
Note: Missing/executed lines purposefully exclude lines skipped by `pytest` due to configuration e.g. line level `# pragma: no cover`.
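As a sketch of how the coverage columns compose, the helper below filters function records by the `coverage` fraction, using plain dicts in place of rows loaded with `datasets.load_dataset`. The `well_covered` name and the sample records are illustrative, not part of the dataset; `coverage` is assumed to be missing (`None`) for projects without a coverage report.

```python
# Illustrative sketch: filter function records by the `coverage` column
# described above. Rows are stand-ins for dataset entries.

def well_covered(rows, min_coverage=0.8):
    """Keep functions whose coverage is available and meets the threshold."""
    return [
        r for r in rows
        if r["coverage"] is not None and r["coverage"] >= min_coverage
    ]

rows = [
    {"identifier": "parse_args", "coverage": 1.0, "covered": True},
    {"identifier": "retry_io", "coverage": 0.4, "covered": False},
    {"identifier": "no_report", "coverage": None, "covered": False},
]

print([r["identifier"] for r in well_covered(rows)])  # ['parse_args']
```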
|
Tarklanse/Traditional_Chinese_roleplay_chat_Dataset | 2023-09-07T12:27:06.000Z | [
"task_categories:text-generation",
"task_categories:text2text-generation",
"language:zh",
"license:cc-by-sa-4.0",
"region:us"
] | Tarklanse | null | null | null | 6 | 11 | ---
task_categories:
- text-generation
- text2text-generation
language:
- zh
license: cc-by-sa-4.0
---
# Traditional_Chinese_roleplay_chat_Dataset
This dataset is primarily in Traditional Chinese. It organizes conversations generated by ChatGPT, plus a very small portion written by hand, into the Alpaca dataset format.
Each conversation log is split into multiple records by stacking turns layer by layer (about 1,000 conversations in total). In a few trial training runs, this was enough for Llama 2 to reproduce the lively conversational style of the original English model while retaining its ability to play a wide variety of roles.
I have personally trained a LoRA on this dataset.
Update 2023/09/07
Added some Chinese-English translation sentences to the dataset, in the hope that the AI will describe its actions with better wording, and added some food-related conversations to reduce the chance of the AI producing strange food names.
|
BELLE-2/train_3.5M_CN_With_Category | 2023-08-18T07:24:01.000Z | [
"task_categories:text2text-generation",
"size_categories:1M<n<10M",
"language:zh",
"license:gpl-3.0",
"region:us"
] | BELLE-2 | null | null | null | 6 | 11 | ---
license: gpl-3.0
task_categories:
- text2text-generation
language:
- zh
size_categories:
- 1M<n<10M
---
## Contents
Building on the original [train_3.5M_CN](https://huggingface.co/datasets/BelleGroup/train_3.5M_CN) data, this release adds an instruction-category field covering 13 categories in total:
## Example
```
{
"id":"66182880",
"category":"generation"
}
```
### Fields:
```
id: record id
category: the instruction category of this record
```
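As a minimal sketch, the category field can be tallied with a `collections.Counter`; the records below are hypothetical stand-ins for rows loaded from the hub, not actual dataset contents.

```python
from collections import Counter

# Hypothetical sample records with the fields described above.
records = [
    {"id": "66182880", "category": "generation"},
    {"id": "66182881", "category": "summarization"},
    {"id": "66182882", "category": "generation"},
]

counts = Counter(r["category"] for r in records)
print(counts.most_common(1))  # [('generation', 2)]
```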
## Usage Restrictions
This dataset, and any derivatives generated from it, may be used for research purposes only; commercial use and any other use that could harm society are prohibited.
This dataset does not represent the position, interests, or views of any party, and makes no claims of any kind on behalf of any group. This project assumes no liability for any damage or dispute arising from the use of this dataset. |
Stoemb/test5 | 2023-09-17T15:10:41.000Z | [
"region:us"
] | Stoemb | null | null | null | 0 | 11 | Entry not found |
ZhankuiHe/inspired_cikm | 2023-08-15T06:49:05.000Z | [
"task_categories:conversational",
"size_categories:10K<n<100K",
"language:en",
"recommendation",
"region:us"
] | ZhankuiHe | null | null | null | 0 | 11 | ---
task_categories:
- conversational
language:
- en
tags:
- recommendation
pretty_name: inspired
size_categories:
- 10K<n<100K
viewer: true
---
# Dataset Card for Dataset Name
## Dataset Description
- **Homepage:** https://github.com/sweetpeach/Inspired
- **Repository:** https://github.com/sweetpeach/Inspired
- **Paper:** https://aclanthology.org/2020.emnlp-main.654.pdf
- **Leaderboard:** https://paperswithcode.com/dataset/inspired
- **Point of Contact:**
### Dataset Summary
A new dataset of 1,001 human-human dialogs for movie recommendation with measures for successful recommendations.
### Languages
English
### More Information
This is the [INSPIRED](https://paperswithcode.com/dataset/inspired) dataset adapted from the Conversational Recommender System toolkit [CRSLab](https://github.com/RUCAIBox/CRSLab#Datasets). |
DynamicSuperb/IntentClassification_FluentSpeechCommands-Location | 2023-08-16T10:53:59.000Z | [
"region:us"
] | DynamicSuperb | null | null | null | 0 | 11 | ---
dataset_info:
features:
- name: file
dtype: string
- name: speakerId
dtype: string
- name: transcription
dtype: string
- name: audio
dtype: audio
- name: label
dtype: string
- name: instruction
dtype: string
splits:
- name: test
num_bytes: 752958575.0
num_examples: 10000
download_size: 639176861
dataset_size: 752958575.0
---
# Dataset Card for "Intent_Classification_FluentSpeechCommands_Location"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
wetdog/TUT-urban-acoustic-scenes-2018-development | 2023-08-19T00:08:29.000Z | [
"task_categories:audio-classification",
"size_categories:1K<n<10K",
"license:afl-3.0",
"region:us"
] | wetdog | null | null | null | 0 | 11 | ---
dataset_info:
features:
- name: scene_label
dtype: string
- name: identifier
dtype: string
- name: source_label
dtype: string
- name: audio
dtype: audio
splits:
- name: train
num_bytes: 24883936611.28
num_examples: 8640
download_size: 24885037396
dataset_size: 24883936611.28
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
license: afl-3.0
task_categories:
- audio-classification
size_categories:
- 1K<n<10K
---
# Dataset Card for "TUT-urban-acoustic-scenes-2018-development"
## Dataset Description
- **Homepage: https://zenodo.org/record/1228142**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact: Toni Heittola (toni.heittola@tut.fi, http://www.cs.tut.fi/~heittolt/)**
### Dataset Summary
TUT Urban Acoustic Scenes 2018 development dataset consists of 10-second audio segments from 10 acoustic scenes:
Airport - airport
Indoor shopping mall - shopping_mall
Metro station - metro_station
Pedestrian street - street_pedestrian
Public square - public_square
Street with medium level of traffic - street_traffic
Travelling by a tram - tram
Travelling by a bus - bus
Travelling by an underground metro - metro
Urban park - park
Each acoustic scene has 864 segments (144 minutes of audio). The dataset contains in total 24 hours of audio.
The dataset was collected in Finland by Tampere University of Technology between 02/2018 - 03/2018.
The data collection has received funding from the European Research Council under the ERC Grant Agreement 637422 EVERYSOUND.
### Supported Tasks and Leaderboards
- `audio-classification`: The dataset can be used to train a model for acoustic scene classification, which consists in assigning each 10-second audio segment to one of the 10 acoustic scene classes listed above. Success on this task is typically measured by classification accuracy.
## Dataset Structure
### Data Instances
```
{
'scene_label': 'airport',
'identifier': 'barcelona-0',
'source_label': 'a',
'audio': {'path': '/data/airport-barcelona-0-0-a.wav'
'array': array([-1.91628933e-04, -1.18494034e-04, -1.87635422e-04, ...,
4.90546227e-05, -4.98890877e-05, -4.66108322e-05]),
'sampling_rate': 48000}
}
```
### Data Fields
- `scene_label`: acoustic scene label from the 10-class set,
- `identifier`: city-location id, e.g. 'barcelona-0',
- `source_label`: device id; for this dataset it is always the same, 'a',
Filenames of the dataset have the following pattern:
[scene label]-[city]-[location id]-[segment id]-[device id].wav
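A sketch of recovering these fields from a filename; the `parse_tut_filename` helper is illustrative (not part of the dataset) and assumes scene labels contain no hyphens, which holds for the 10 classes above.

```python
# Illustrative parser for the pattern
# [scene label]-[city]-[location id]-[segment id]-[device id].wav

def parse_tut_filename(name):
    stem = name.rsplit(".", 1)[0]
    scene, city, location_id, segment_id, device_id = stem.rsplit("-", 4)
    return {
        "scene_label": scene,
        "identifier": f"{city}-{location_id}",
        "segment_id": segment_id,
        "source_label": device_id,
    }

meta = parse_tut_filename("airport-barcelona-0-0-a.wav")
print(meta["scene_label"], meta["identifier"], meta["source_label"])
# airport barcelona-0 a
```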
### Data Splits
A suggested training/test partitioning of the development set is provided in order to make results reported with this dataset uniform. The partitioning is done such that the segments recorded at the same location are included into the same subset - either training or testing. The partitioning is done aiming for a 70/30 ratio between the number of segments in training and test subsets while taking into account recording locations, and selecting the closest available option.
| Scene class | Train / Segments | Train / Locations | Test / Segments | Test / Locations |
| ------------------ | ---------------- | ----------------- | --------------- | ---------------- |
| Airport | 599 | 15 | 265 | 7 |
| Bus | 622 | 26 | 242 | 10 |
| Metro | 603 | 20 | 261 | 9 |
| Metro station | 605 | 28 | 259 | 12 |
| Park | 622 | 18 | 242 | 7 |
| Public square | 648 | 18 | 216 | 6 |
| Shopping mall | 585 | 16 | 279 | 6 |
| Street, pedestrian | 617 | 20 | 247 | 8 |
| Street, traffic | 618 | 18 | 246 | 7 |
| Tram | 603 | 24 | 261 | 11 |
| **Total** | **6122** | **203** | **2518** | **83** |
## Dataset Creation
### Source Data
#### Initial Data Collection and Normalization
The dataset was recorded in six large European cities: Barcelona, Helsinki, London, Paris, Stockholm, and Vienna. For all acoustic scenes, audio was captured in multiple locations: different streets, different parks, different shopping malls. In each location, multiple 2-3 minute long audio recordings were captured in a few slightly different positions (2-4) within the selected location. Collected audio material was cut into segments of 10 seconds length.
The equipment used for recording consists of a binaural [Soundman OKM II Klassik/studio A3](http://www.soundman.de/en/products/) electret in-ear microphone and a [Zoom F8](https://www.zoom.co.jp/products/handy-recorder/zoom-f8-multitrack-field-recorder) audio recorder using 48 kHz sampling rate and 24 bit resolution. During the recording, the microphones were worn by the recording person in the ears, and head movement was kept to minimum.
### Annotations
#### Annotation process
Post-processing of the recorded audio involves aspects related to privacy of recorded individuals, and possible errors in the recording process. Some interferences from mobile phones are audible, but are considered part of real-world recording process.
#### Who are the annotators?
* Ronal Bejarano Rodriguez
* Eemi Fagerlund
* Aino Koskimies
* Toni Heittola
### Personal and Sensitive Information
The material was screened for content, and segments containing close microphone conversation were eliminated.
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
Toni Heittola (toni.heittola@tut.fi, http://www.cs.tut.fi/~heittolt/)
Annamaria Mesaros (annamaria.mesaros@tut.fi, http://www.cs.tut.fi/~mesaros/)
Tuomas Virtanen (tuomas.virtanen@tut.fi, http://www.cs.tut.fi/~tuomasv/)
### Licensing Information
Copyright (c) 2018 Tampere University of Technology and its licensors
All rights reserved.
Permission is hereby granted, without written agreement and without license or royalty
fees, to use and copy the TUT Urban Acoustic Scenes 2018 (“Work”) described in this document
and composed of audio and metadata. This grant is only for experimental and non-commercial
purposes, provided that the copyright notice in its entirety appear in all copies of this Work,
and the original source of this Work, (Audio Research Group from Laboratory of Signal
Processing at Tampere University of Technology),
is acknowledged in any publication that reports research using this Work.
Any commercial use of the Work or any part thereof is strictly prohibited.
Commercial use include, but is not limited to:
- selling or reproducing the Work
- selling or distributing the results or content achieved by use of the Work
- providing services by using the Work.
IN NO EVENT SHALL TAMPERE UNIVERSITY OF TECHNOLOGY OR ITS LICENSORS BE LIABLE TO ANY PARTY
FOR DIRECT, INDIRECT, SPECIAL, INCIDENTAL, OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE USE
OF THIS WORK AND ITS DOCUMENTATION, EVEN IF TAMPERE UNIVERSITY OF TECHNOLOGY OR ITS
LICENSORS HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
TAMPERE UNIVERSITY OF TECHNOLOGY AND ALL ITS LICENSORS SPECIFICALLY DISCLAIMS ANY
WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND
FITNESS FOR A PARTICULAR PURPOSE. THE WORK PROVIDED HEREUNDER IS ON AN "AS IS" BASIS, AND
THE TAMPERE UNIVERSITY OF TECHNOLOGY HAS NO OBLIGATION TO PROVIDE MAINTENANCE, SUPPORT,
UPDATES, ENHANCEMENTS, OR MODIFICATIONS.
### Citation Information
[](https://doi.org/10.5281/zenodo.1228142)
### Contributions
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Trelis/protein_stability_single_mutation | 2023-08-21T20:47:40.000Z | [
"task_categories:question-answering",
"task_categories:tabular-classification",
"task_categories:text-generation",
"size_categories:100K<n<1M",
"language:en",
"biology",
"proteins",
"amino-acids",
"region:us"
] | Trelis | null | null | null | 0 | 11 | ---
task_categories:
- question-answering
- tabular-classification
- text-generation
language:
- en
tags:
- biology
- proteins
- amino-acids
size_categories:
- 100K<n<1M
---
# Protein Data Stability - Single Mutation
This repository contains data on the change in protein stability with a single mutation.
## Attribution of Data Sources
- **Primary Source**: Tsuboyama, K., Dauparas, J., Chen, J. et al. Mega-scale experimental analysis of protein folding stability in biology and design. Nature 620, 434–444 (2023). [Link to the paper](https://www.nature.com/articles/s41586-023-06328-6)
- **Dataset Link**: [Zenodo Record](https://zenodo.org/record/7992926)
Within this broader work, the relevant dataset is dataset #3, as shown in `dataset_table.jpeg` in this repository's files.
## Sample Protein Stability Data [subset of 4 columns]
| Base Protein Sequence | Mutation | ΔΔG_ML | Classification |
|-------------------------------------------------------------|----------|--------------------|-----------------|
| FDIYVVTADYLPLGAEQDAITLREGQYVEVLDAAHPLRWLVRTKPTKSSPSRQGWVSPAYLDRRL | R63W | -0.2010871345320799 | neutral |
| FDIYVVTADYLPLGAEQDAITLREGQYVEVLDAAHPLRWLVRTKPTKSSPSRQGWVSPAYLDRRL | R63Y | 0.0194756159891467 | neutral |
| FDIYVVTADYLPLGAEQDAITLREGQYVEVLDAAHPLRWLVRTKPTKSSPSRQGWVSPAYLDRRL | R63F | 0.7231614929744659 | stabilising |
| FDIYVVTADYLPLGAEQDAITLREGQYVEVLDAAHPLRWLVRTKPTKSSPSRQGWVSPAYLDRRL | R63P | -0.3668887752897785 | neutral |
| FDIYVVTADYLPLGAEQDAITLREGQYVEVLDAAHPLRWLVRTKPTKSSPSRQGWVSPAYLDRRL | R63C | -0.5317304030261774 | destabilising |
## Dataset Structure
This dataset focuses on the differential ΔG of *unfolding* (mutation minus base) of various protein mutations and is derived from stability measurements (free energy of unfolding) made with two proteases, trypsin and chymotrypsin.
### Columns (Trypsin):
- **name**: The name of the protein variant.
- **dna_seq**: The DNA sequence encoding the protein variant.
- **log10_K50_t**: The log10 of the K50 value measured with trypsin (a measure of stability).
- **log10_K50_t_95CI_high**: The upper bound of the 95% confidence interval for log10_K50_t.
- **log10_K50_t_95CI_low**: The lower bound of the 95% confidence interval for log10_K50_t.
- **log10_K50_t_95CI**: The width of the 95% confidence interval for log10_K50_t.
- **fitting_error_t**: A measure of error between the model and data for trypsin.
- **log10_K50unfolded_t**: The predicted log10 K50 value for the unfolded state with trypsin.
- **deltaG_t**: The ΔG stability calculated from the trypsin data.
- **deltaG_t_95CI_high**: The upper bound of the ΔG confidence interval from trypsin.
- **deltaG_t_95CI_low**: The lower bound of the ΔG confidence interval from trypsin.
- **deltaG_t_95CI**: The width of the ΔG confidence interval from trypsin.
### Columns (Chymotrypsin):
- **log10_K50_c**: Analogous to `log10_K50_t`, but for chymotrypsin.
- **log10_K50_c_95CI_high**: Upper bound of the 95% CI for `log10_K50_c`.
- **log10_K50_c_95CI_low**: Lower bound of the 95% CI for `log10_K50_c`.
- **log10_K50_c_95CI**: Width of the 95% CI for `log10_K50_c`.
- **fitting_error_c**: A measure of error between the model and data for chymotrypsin.
- **log10_K50unfolded_c**: Predicted log10 K50 value for the unfolded state with chymotrypsin.
- **deltaG_c**: ΔG stability calculated from the chymotrypsin data.
- **deltaG_c_95CI_high**: Upper bound of the ΔG CI from chymotrypsin.
- **deltaG_c_95CI_low**: Lower bound of the ΔG CI from chymotrypsin.
- **deltaG_c_95CI**: Width of the ΔG CI from chymotrypsin.
### Combined Data:
- **deltaG**: The combined ΔG estimate from both trypsin and chymotrypsin.
- **deltaG_95CI_high**: Upper bound of the combined ΔG confidence interval.
- **deltaG_95CI_low**: Lower bound of the combined ΔG confidence interval.
- **deltaG_95CI**: Width of the combined ΔG confidence interval.
### Protein Sequencing Data:
- **aa_seq_full**: The full amino acid sequence.
- **aa_seq**: A (sometimes shortened) amino acid sequence representing the protein.
- **mut_type**: The type of mutation introduced to the protein.
- **WT_name**: Name of the wild type variant.
- **WT_cluster**: Cluster classification for the wild type variant.
- **mutation**: Represented as a combination of amino acid and its position (e.g., F10N indicates replacing the 10th amino acid (F) in the sequence with N).
- **base_aa_seq**: The base sequence of the protein before the mutation.
### Derived Data:
- **log10_K50_trypsin_ML**: Log10 value of K50 derived from a machine learning model using trypsin data.
- **log10_K50_chymotrypsin_ML**: Log10 value of K50 derived from a machine learning model using chymotrypsin data.
- **dG_ML**: ΔG derived from a machine learning model that makes use of stability measurements from both proteases.
- **ddG_ML**: Differential ΔG (mutation minus base) derived from a machine learning model.
### Classification:
- **Stabilizing_mut**: Indicates whether the mutation is stabilizing or not.
- **pair_name**: Name representation combining the wild type and mutation.
- **classification**: Classification based on `ddG_ML`:
- Rows below -0.5 standard deviations are classified as 'destabilising'.
- Rows above +0.5 standard deviations are classified as 'stabilising'.
- Rows between -0.5 and 0.5 standard deviations are classified as 'neutral'.
This dataset offers a comprehensive view of protein mutations, their effects, and how they relate to the stability measurements made with trypsin and chymotrypsin.
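As a sketch (not the script used to build the dataset), the ±0.5-standard-deviation classification rule above could be implemented as follows, assuming the thresholds are applied to the raw `ddG_ML` values around zero, with the standard deviation computed from the values themselves.

```python
from statistics import stdev

def classify(ddg_values):
    """Label each ddG value relative to +/-0.5 standard deviations."""
    threshold = 0.5 * stdev(ddg_values)
    labels = []
    for v in ddg_values:
        if v < -threshold:
            labels.append("destabilising")
        elif v > threshold:
            labels.append("stabilising")
        else:
            labels.append("neutral")
    return labels

print(classify([-2.0, 2.0, 0.1, -0.1]))
# ['destabilising', 'stabilising', 'neutral', 'neutral']
```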
### Understanding ΔG (delta G)
ΔG is the Gibbs free energy change of a process, dictating whether a process is thermodynamically favorable:
- **Negative ΔG**: Indicates the process is energetically favorable. For protein unfolding, it implies the protein is more stable in its unfolded form.
- **Positive ΔG**: Indicates the process is not energetically favorable. In protein unfolding, it means the protein requires energy to maintain its unfolded state, i.e. it is stable in folded form.
The **delta delta G** (ΔΔG) represents the ΔG of the mutation minus that of the base protein:
- **Positive ΔΔG**: Suggests the mutation enhances protein stability.
- **Negative ΔΔG**: Suggests the mutation decreases protein stability.
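The sign convention above reduces to a one-line subtraction, matching the `ddG_ML` definition ("mutation minus base"). The function name and example values below are illustrative, not taken from the dataset.

```python
def ddg(dg_mutant: float, dg_base: float) -> float:
    """ddG = dG(mutant) - dG(base); positive means the mutation
    stabilises the folded state under this card's convention."""
    return dg_mutant - dg_base

# A mutant whose unfolding dG rises from 2.0 to 3.5 (arbitrary units)
# gives ddG = +1.5, i.e. a stabilising mutation.
```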
### Data Cleanup and Validation:
1. Filtering: The dataset has been curated to only include examples of single mutations.
2. Mutation extraction: Sequence mutations were extracted from the row names; unmutated wild-type rows are labelled as 'base'.
3. Consistency Check: Only rows with a consistent 'mutation', aligned with both the base and mutated sequences from the raw data, have been retained. |
KushT/yelp-polarity-train-val-test | 2023-08-18T04:09:23.000Z | [
"license:mit",
"region:us"
] | KushT | null | null | null | 0 | 11 | ---
license: mit
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype: int64
splits:
- name: train
num_bytes: 330583858
num_examples: 448000
- name: validation
num_bytes: 82974979
num_examples: 112000
- name: test
num_bytes: 27962097
num_examples: 38000
download_size: 282998240
dataset_size: 441520934
---
|
larryvrh/PIPPA-TavernFormat | 2023-08-19T11:11:08.000Z | [
"task_categories:conversational",
"size_categories:10K<n<100K",
"language:en",
"license:agpl-3.0",
"not-for-all-audiences",
"roleplay",
"conversational",
"region:us"
] | larryvrh | null | null | null | 2 | 11 | ---
dataset_info:
features:
- name: categories
sequence: string
- name: name
dtype: string
- name: description
dtype: string
- name: first_msg
dtype: string
- name: personality
dtype: string
- name: example_dialogues
sequence: string
- name: conversation
list:
- name: is_human
dtype: bool
- name: message
dtype: string
splits:
- name: train
num_bytes: 174673097
num_examples: 11841
download_size: 88204818
dataset_size: 174673097
license: agpl-3.0
task_categories:
- conversational
language:
- en
tags:
- not-for-all-audiences
- roleplay
- conversational
size_categories:
- 10K<n<100K
---
# Dataset Card for "PIPPA_TavernFormat"
Converted from the deduped version (pippa_deduped.jsonl) of [PygmalionAI/PIPPA](https://huggingface.co/datasets/PygmalionAI/PIPPA?not-for-all-audiences=true).
Since the CAI format and the Tavern format do not align exactly, there may be some mismatches between fields, especially character description and personality.
botp/alpaca-taiwan-dataset | 2023-08-22T09:39:03.000Z | [
"task_categories:text-generation",
"task_categories:question-answering",
"task_categories:conversational",
"size_categories:100M<n<1B",
"language:zh",
"language:en",
"license:apache-2.0",
"gpt",
"alpaca",
"llama",
"fine-tune",
"Traditional Chinese",
"Taiwan",
"region:us"
] | botp | null | null | null | 4 | 11 | ---
license: apache-2.0
task_categories:
- text-generation
- question-answering
- conversational
language:
- zh
- en
tags:
- gpt
- alpaca
- llama
- fine-tune
- Traditional Chinese
- Taiwan
pretty_name: Alpaca-Data-Taiwan-Chinese
size_categories:
- 100M<n<1B
---
# Alpaca Data Taiwan Chinese: a Traditional Chinese dataset for all of you |
Photolens/MedText-DoctorLLaMa-OpenOrca-formatted | 2023-08-24T17:58:09.000Z | [
"region:us"
] | Photolens | null | null | null | 3 | 11 | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 974552
num_examples: 1412
download_size: 498326
dataset_size: 974552
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card coming...
This is for an upcoming project about doctorGPT and VQA models. |